[
{
"msg_contents": "\nSorry to bother you folks but...\n\nI wrote the following to message to pgsql-interfaces. I have received one\nwork around of putting an extra locked field on the table and setting and\nunsetting that but that would cause other problems. I have thought of\ncreating a locking daemon of my own for the application to request locks\nfrom but that would be very hard to figure out as I've never tried ipc. I\nhave read through lock.c again and I think that user locks may be what I\nwant, as there is only one table that needs to be protected this way in my\napplication. I just don't know how to proceed.\n\nThanks\n\nTrevor\n\n>I am using Postgres 6.5 on Linux through the libpq interface\n>\n>I have a situation where an existing record can be opened by one user,\n>updated or rolled back. While the first user has the record I need to make\n>sure no other user can open it. I use\n>\n>SELECT .... FOR UPDATE\n>\n> This works but the second process blocks (looking to the user like a hung\n> program), until the first process releases the record. Since updating a\n> record can take some time the blocking could be there for some time.\n> \n> I would like the second SELECT .... FOR UPDATE to fail with an error I can\n> catch telling me of a conflict (and hopefully which process has the lock),\n> allowing me to backout gracefully with a consolatory message to the user.\nI\n> would like normal SELECTs (without FOR UPDATE) to work an normal.\n> \n> I think that this is an option in pg_options but I'm not sure. I have\nlooked\n> in the source (lock.c and lmgr.c) to try and figure out what to do. I\nfound\n> something called \"user locks\" which looks promising but I'm still not\nsure.\n**********************************************************************\nThis message (including any attachments) is confidential and may be \nlegally privileged. If you are not the intended recipient, you should \nnot disclose, copy or use any part of it - please delete all copies \nimmediately and notify the Hays Group Email Helpdesk on \n+44 (0) 01908 256 050.\n\nAny information, statements or opinions contained in this message\n(including any attachments) are given by the author. They are not \ngiven on behalf of Hays unless subsequently confirmed by an individual\nother than the author who is duly authorised to represent Hays.\n \nA member of the Hays plc group of companies.\nHays plc is registered in England and Wales number 2150950.\nRegistered Office Hays House Millmead Guildford Surrey GU2 5HJ.\n**********************************************************************\n",
"msg_date": "Thu, 15 Jul 1999 17:40:22 +0100",
"msg_from": "\"Burgess, Trevor - HMS\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Locking"
}
]
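A minimal sketch of the "locked flag" workaround mentioned above, for anyone landing on this thread. It assumes a hypothetical table "records" with a "locked_by" column; all names are illustrative, not from the original application.

    BEGIN;
    UPDATE records SET locked_by = 'user1'
     WHERE id = 42 AND locked_by IS NULL;
    -- If the UPDATE touched 0 rows, another user already holds the record:
    -- back out with a message instead of blocking.
    COMMIT;
    -- Later, on save or cancel, release the flag:
    UPDATE records SET locked_by = NULL WHERE id = 42;

In libpq the affected-row count comes back via PQcmdTuples() on the UPDATE's result, so the conflict can be detected without ever blocking on SELECT ... FOR UPDATE.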
[
{
"msg_contents": "I have asked this question on all of the appropriate pgsql mailing lists\nand no one has been able to help. Please, if you could, help me out with\nthis problem. Thank you.\n--\n\nI have been playing around with this for some time now to no avail. I\nhave a table info with a two-dimensional text type array action. Is\nthere any way to select the corresponding value of one of the elements\nwithout knowing the order of the elements?\n\nE.g.\n\nCREATE TABLE info (action text[][]);\n\nINSERT INTO info VALUES ('{{\"VAR\",\"VAL\"},{\"VAR2\",\"VAL2\"}}');\n\nNow what SELECT query will search for \"VAR\" within text[][] (in this\ncase it is the first element, but it may not always be) and print out\n\"VAL.\"\n\n\nAny information would be greatly appreciated.\n\nThank you very much.\n\nEvan\n",
"msg_date": "Thu, 15 Jul 1999 10:33:16 -0700",
"msg_from": "Evan Klinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "SELECT using arrays"
}
]
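For what it's worth, a sketch of one workaround, assuming the key/value pairs can be normalized into rows rather than kept in a two-dimensional array, which sidesteps searching the array entirely:

    CREATE TABLE info_kv (var text, val text);
    INSERT INTO info_kv VALUES ('VAR', 'VAL');
    INSERT INTO info_kv VALUES ('VAR2', 'VAL2');
    -- Finds the value no matter where the pair would have sat in the array:
    SELECT val FROM info_kv WHERE var = 'VAR';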
[
{
"msg_contents": "Hi.\n\nIve recently looked into porting an internal tool our company uses to\nversion 6.5 of postgresql (from 6.4.2). Unfortunately, the original author\nof this tool used items like:\n\nPGresult *res;\n...\nmyconn = res->conn;\n\nin a few spots (usually to be used to query pg_type to get string typename\nfor come columns of the result set). Looking through the libpq headers, it\ndoes appear that the PGconn member of the struct is still there, but the\nstruct definition (struct pg_result) has been hidden from applications via\nmoving the struct definitions to a nother file.\n\nI realize that using code like the above is a BadThing(tm), and if I were\nwriting the application, I would not have done it that way. However, if I\nam going to port this application to v6.5, it will require some workaround.\nMy question is this: If the PGresult struct contains a PGconn member,\nshould there be an accessor function for it? Or is this member considered\nto be private? If so, I guess I will have to rewrite a large section of\nthis application from scratch, but I thought I would check on the reasoning\nfor the move of the conn member here first.\n\nThanks for all the hard work guys. \n\nRegards,\nMike\n\n",
"msg_date": "Thu, 15 Jul 1999 12:40:42 -0500 (CDT)",
"msg_from": "Michael J Schout <[email protected]>",
"msg_from_op": true,
"msg_subject": "migration to v6.5"
},
{
"msg_contents": "Michael J Schout <[email protected]> writes:\n> My question is this: If the PGresult struct contains a PGconn member,\n> should there be an accessor function for it? Or is this member considered\n> to be private? If so, I guess I will have to rewrite a large section of\n> this application from scratch, but I thought I would check on the reasoning\n> for the move of the conn member here first.\n\nI had intended to remove that member entirely, but desisted in order to\ngrant some breathing room to people in your situation ;-). For the\nmoment you can access it if you include libpq-int.h in your application.\n\nThe reasoning for removing it is that a PGresult could outlive the\nPGconn it was produced from, leaving you with a dangling pointer.\n\nI would like to remove it eventually, but probably won't do so for\nanother version or two.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Jul 1999 15:09:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] migration to v6.5 "
}
]
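A short C sketch of the two options Tom describes (names here are illustrative): the legacy access path keeps compiling under 6.5 only because libpq-int.h is included, while the safer shape carries the PGconn alongside the PGresult, since a PGresult can outlive the connection it came from.

    #include "libpq-int.h"   /* internal header: exposes struct pg_result */

    /* Legacy-style access, as in the old tool: */
    static PGconn *conn_of(PGresult *res)
    {
        return res->conn;    /* may dangle if the PGconn is freed first */
    }

    /* Safer shape: pass the connection explicitly wherever the result is used. */
    struct query_ctx
    {
        PGconn   *conn;
        PGresult *res;
    };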
[
{
"msg_contents": "I have finished removing unused #include's from *.c files. I used the\nscripts I wrote in tools/pginclude. I can't imagine anyone doing this\nmanually, though I know there was a lot of cleanup done around 6.0 in\nthis area.\n\nThe one remaining step is to make system includes use <> and pgsql\nincludes use \"\".\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Jul 1999 15:15:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "#include removal"
}
]
[
{
"msg_contents": "I am done. I have attempted not to disturbe any of the config.h #if's. \nThe only one that may have been a little messed up is multi-byte, but I\nchecked every #if, so I think we will be OK.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Jul 1999 01:27:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "final #include cleanup"
},
{
"msg_contents": "Hust did acvs update and didn't see any changes.\nI use \nexport CVSROOT=:pserver:[email protected]:/usr/local/cvsroot\ncvs -z9 update -rREL6_5 pgsql\n\nIs't right ?\n\n\tRegards,\n\t\tOleg\n\nOn Fri, 16 Jul 1999, Bruce Momjian wrote:\n\n> Date: Fri, 16 Jul 1999 01:27:46 -0400 (EDT)\n> From: Bruce Momjian <[email protected]>\n> To: PostgreSQL-development <[email protected]>\n> Subject: [HACKERS] final #include cleanup\n> \n> I am done. I have attempted not to disturbe any of the config.h #if's. \n> The only one that may have been a little messed up is multi-byte, but I\n> checked every #if, so I think we will be OK.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 16 Jul 1999 10:19:38 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] final #include cleanup"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> Hust did acvs update and didn't see any changes.\n> I use \n> export CVSROOT=:pserver:[email protected]:/usr/local/cvsroot\n> cvs -z9 update -rREL6_5 pgsql\n\n> Is't right ?\n\nNo --- those commits were to the tree tip. REL6_5 is a frozen tag;\nyou won't *ever* see any more changes if you pull with that tag.\n(REL6_5_PATCHES is the branch to pull if you want to track 6.5.*\npatches...)\n\nBTW: thanks, Bruce! The messy #includes have bothered me for some\ntime, particularly the failure to distinguish our includes from\nsystem headers via \"\" versus <>. Now I can finally get reasonable\ndependency lists from gcc -MM.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Jul 1999 09:25:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] final #include cleanup "
},
{
"msg_contents": "> Hust did acvs update and didn't see any changes.\n> I use \n> export CVSROOT=:pserver:[email protected]:/usr/local/cvsroot\n> cvs -z9 update -rREL6_5 pgsql\n> \n> Is't right ?\n> \n> \tRegards,\n> \t\tOleg\n\nThese #include changes are not in 6.5REL tree.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Jul 1999 11:59:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] final #include cleanup"
},
{
"msg_contents": "On Fri, 16 Jul 1999, Bruce Momjian wrote:\n\n> Date: Fri, 16 Jul 1999 11:59:50 -0400 (EDT)\n> From: Bruce Momjian <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: PostgreSQL-development <[email protected]>\n> Subject: Re: [HACKERS] final #include cleanup\n> \n> > Hust did acvs update and didn't see any changes.\n> > I use \n> > export CVSROOT=:pserver:[email protected]:/usr/local/cvsroot\n> > cvs -z9 update -rREL6_5 pgsql\n> > \n> > Is't right ?\n> > \n> > \tRegards,\n> > \t\tOleg\n> \n> These #include changes are not in 6.5REL tree.\n\nDoes 6.5REL tree is a place for 6.5.1 ?\n\n\tRegards,\n\t\tOleg\n\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 16 Jul 1999 21:08:25 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] final #include cleanup"
},
{
"msg_contents": "> > These #include changes are not in 6.5REL tree.\n> \n> Does 6.5REL tree is a place for 6.5.1 ?\n> \n\nThis has confused me too. Seems it is called not REL6_5, but\nREL6_5PATCHES. I have a copy of REL6_5 here myself. Good thing I\nhaven't patched anything in there.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Jul 1999 13:25:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] final #include cleanup"
}
]
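Putting Tom's answer together with Oleg's commands, the corrected pull for tracking 6.5.* patches would be (the pserver address is redacted in the archive and left as-is here):

    export CVSROOT=:pserver:[email protected]:/usr/local/cvsroot
    cvs -z9 update -rREL6_5_PATCHES pgsql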
[
{
"msg_contents": "Thanks, Tom.\n\n>> Right, the real sequence when you are changing disk layout details is\n>> \tpg_dumpall with old pg_dump and backend.\n>> \tstop postmaster\n>> \trm -rf installation\n>> \tmake install\n>> \tinitdb\n>> \tstart postmaster\n>> \tpsql <pgdumpscript.\n>> \n>> You may want to do your development work in a \"playpen\" installation\n>> instead of risking breaking your \"production\" installation with these\netc., etc.\n\n",
"msg_date": "Fri, 16 Jul 1999 09:38:24 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] MAX Query length "
}
]
[
{
"msg_contents": "I think the point is that you wouldn't, but the most important part is to\nget it off the wire. Let someone do that first, and then worry about what\nthe administrator can see. One would hope that your administrator is more\ntrustworthy than joe hacker out on the network.\n\n\n>> Why would you want to make it visible to anyone? \n>> \n>> Vince.\n\nAs a user, I would be extremely concerned if I knew that my password was\nfairly transparent on the network, but less so if I knew that the wire was\nsafe, although my admin could see it. First prize would, of course, be\ntotal secrecy.\n\n\nMikeA\n",
"msg_date": "Fri, 16 Jul 1999 09:44:50 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Security WAS RE: [HACKERS] Updated TODO list"
},
{
"msg_contents": "From: Ansley, Michael <[email protected]>\n> I think the point is that you wouldn't, but the most important part is to\n> get it off the wire. Let someone do that first, and then worry about what\n> the administrator can see. One would hope that your administrator is more\n> trustworthy than joe hacker out on the network.\n> >> Why would you want to make it visible to anyone?\n> >>\n> >> Vince.\n>\n> As a user, I would be extremely concerned if I knew that my password was\n> fairly transparent on the network, but less so if I knew that the wire was\n> safe, although my admin could see it. First prize would, of course, be\n> total secrecy.\n\nI have no idea where this misconception came from, but it's just plain\nincorrect. You can do both - store hashes instead of plaintext passwords and\nsend logins securely over the network. Yes, the current authentication\nscheme does not allow for it. But it just means that the scheme is outdated.\nThere are plenty of good secure solutions. It's just a matter of choosing\none.\n\nGene Sokolov.\n\n",
"msg_date": "Fri, 16 Jul 1999 12:18:36 +0400",
"msg_from": "\"Gene Sokolov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Security WAS RE: [HACKERS] Updated TODO list"
},
{
"msg_contents": "\nOn 16-Jul-99 Ansley, Michael wrote:\n> I think the point is that you wouldn't, but the most important part is to\n> get it off the wire. Let someone do that first, and then worry about what\n> the administrator can see. One would hope that your administrator is more\n> trustworthy than joe hacker out on the network.\n> \n> \n>>> Why would you want to make it visible to anyone? \n>>> \n>>> Vince.\n> \n> As a user, I would be extremely concerned if I knew that my password was\n> fairly transparent on the network, but less so if I knew that the wire was\n> safe, although my admin could see it. First prize would, of course, be\n> total secrecy.\n\nBut you can use something like ssh to take care of the wire. It's alot\nbetter than the method used by browsers for login and password.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Fri, 16 Jul 1999 10:24:26 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Security WAS RE: [HACKERS] Updated TODO list"
}
]
[
{
"msg_contents": "I know that you can do both. It seemed from previous postings, however,\nthat there was an issue about the urgency of each, if they are actually\nseparate issues. I would have thought that the two are linked, and would be\nsolved as such.\n\nMikeA\n\n\n>> I have no idea where this misconception came from, but it's \n>> just plain\n>> incorrect. You can do both - store hashes instead of \n>> plaintext passwords and\n>> send logins securely over the network. Yes, the current \n>> authentication\n>> scheme does not allow for it. But it just means that the \n>> scheme is outdated.\n>> There are plenty of good secure solutions. It's just a \n>> matter of choosing\n>> one.\n>> \n>> Gene Sokolov.\n>> \n",
"msg_date": "Fri, 16 Jul 1999 10:26:45 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Security WAS RE: [HACKERS] Updated TODO list"
}
]
[
{
"msg_contents": "\n> Vadim Mikheev wrote:\n> > \n> > > The \"restore of a server\" is a main problem here, but I suggest the\n> > > following\n> > > additional backup tool, that could be used for a \"restore of a server\"\n> > > which could then be used for a rollforward and would also be a lot\n> faster\n> > > than a pg_dump:\n> > >\n> > > 1. place a vacuum lock on db (we don't want vacuum during backup)\n> > > 2. backup pg_log using direct file access (something like dd bs=32k)\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > > 3. backup the rest in any order (same as pg_log)\n> > > 4. release vacuum lock\n> > \n> > It looks like log archiving, not backup.\n> > I believe that _full_ backup will do near the same\n> > things as pg_dump now, but _incremental_ backup will\n> > fetch info about what changed after last _full_ backup\n> > from log.\n> \n> Sorry, I was wrong. pg_dump is what's known as Export utility\n> in Oracle and backup is quite different thing. But I have\n> corrections for full backup described above:\n> \n> 1. no vacuum lock is needed: all vacuum ops will be logged\n> in normal way to rollback changes in failures;\nYes.\n> 2. all datafiles have to be backed up _before_ log backup\n> due to WAL logic: changes must be written to log before\n> they'll be written to on-disk data pages.\n> \nWhen I was talking about pg_log, I meant pg_log as it is now.\nAs I understand it, it only stores commit/rollback info for each used xtid\nand no other info.\n\nThis would be all we need, for a rollback of all transactions that were not \ncommitted at the time the backup began, as long as no vacuum removes\nthe old rows (and these are not reused). The xtid's that are higher than the\n\nlargest xtid in pg_log need also be rolled back. I am not sure though\nwhether \nwe have enough info after the commit is flushed to the new row. \nThis flush would have to be undone at restore time.\n\nI like this approach more than always needing a transaction log at restore\ntime.\nIt makes it possible to configure a db to not write a transaction log,\nas postgresql behaves now. After all a lot of installations only need to be\nable \nto restore the database to the state it was at the last full backup.\n\nThe main issue is IMHO a very fast consistent online backup,\nand a fast foolproof restore of same. The transaction log, and\nrollforward comes after that.\n\nAndreas\n\nPS: for rollback you need the before image of rows, I would keep this in a\nseparate place like Oracle (rollback segment) and Informix (physical log)\nsince this info does not need to go to the rollforward tape.\n\nAlthough if we did keep this info in the WAL, then postgresql could also do\na \"rollback in time\" by walking this log in the opposite direction.\nMight be worth discussion.\n",
"msg_date": "Fri, 16 Jul 1999 10:28:14 +0200",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] RE: [GENERAL] Transaction logging"
},
{
"msg_contents": "Zeugswetter Andreas IZ5 wrote:\n> \n> > 2. all datafiles have to be backed up _before_ log backup\n> > due to WAL logic: changes must be written to log before\n> > they'll be written to on-disk data pages.\n> >\n> When I was talking about pg_log, I meant pg_log as it is now.\n> As I understand it, it only stores commit/rollback info for each used xtid\n> and no other info.\n\nActually, I would like to implement WAL as it's done in other systems.\nThere would be no more pg_log with xact statuses as now. But for\nthe first implementation it's easy to leave pg_log as is (UNDO\nis hard to implement). In any case WAL will be _single_ source\nabout everything - what's changes and what transactions were\ncommited/aborted. From this point of view pg_log will be just\none of datafiles: on recovery changes (commit/abort statuses)\nwill be applied to pg_log just like to other datafiles.\n\n> PS: for rollback you need the before image of rows, I would keep \n> this in a separate place like Oracle (rollback segment) and Informix \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nOracle places rollback segments in memory to speedup abort/MVCC.\nBefore images are in WAL and used to restore rollback segments\non recovery.\n\n> (physical log) since this info does not need to go to the \n> rollforward tape.\n\nVadim\n",
"msg_date": "Mon, 19 Jul 1999 11:40:46 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] RE: [GENERAL] Transaction logging"
}
]
[
{
"msg_contents": "Why don't you adjust the ids of your system folders, such that they are\nordered properly? You should have a fixed number of system folders, so you\ncan guarantee the ids that they will receive. So make the Inbox -4. Then\nyou just order by folder id, ascending. -4 comes first, with the user\nfolders always coming after the system folders.\n\nAlternatively, you can sort by an expression, something like:\n\nORDER BY (if(folderid < 0) then return(abs(folderid)) else\nreturn(folderid+max(abs(MIN(folderid)))))\n\nWhat this does is shift all the ids up to ensure that they all fall into the\npositive range, while inverting the order of the negative ids, which seems\nlike it's what you want. Of course, this isn't legal SQL. You would\nprobably have to write a function to implement this. This will work no\nmatter what folders you add, system or user, and will always give you the\noldest folders (i.e.: those with the lowest absolute id) first, for each\ngroup.\nThe MAX will make it slow though, except, of course, that in a function, you\ncan store the value, instead of repeatedly looking it up.\n\nSo:\n\nSELECT \tfolderid,\n\t\tfoldername,\n\t\tcount(*) as \"messgaes\",\n\t\tsum(bool2int(flagnew)) as \"newmessages\",\n\t\tsum(contentlength) as \"size\" \nFROM \t\tusermail,folders \nWHERE \tusermail.loginid='michael' AND\n\t\tfolders.loginid=usermail.loginid AND\n\t\tusermail.folder = folders.folderid \nGROUP BY \tfolderid,foldername \n\nUNION ALL\n\nSELECT \tfolderid,\n\t\tfoldername,\n\t\t0,\n\t\t0,\n\t\t0 \nFROM \t\tfolders \nWHERE \tloginid='michael' AND\n\t\tNOT EXISTS (SELECT folder \n\t\t\t\tFROM usermail \n\t\t\t\tWHERE loginid='michael' AND\n\t\t\t\t\tfolder=folderid\n\t\t\t\t) \n\nORDER BY get_effective_order(folderid);\n\nAnd then define the function get_effective_order using pgsql to return the\nvalue described above.\n\n\nHowever, I don't think that you are going to get away from the UNION ALL.\n\nBTW\nIf you are going to do this:\n>> fastmail=> select 1 as \"test\" order by (test<9);\nthen why not just do this:\nselect 1 as \"test\" order by (1<9);\nIf you actually have a field, then you would be able to put it in. If you\nhave an expression like this:\nselect x+y\\z as \"some_number\" from test order by (somenumber>9);\nthen you could just as easily do this:\nselect x+y\\z as \"some_number\" from test order by (x+y\\z>9);\nThat's why the expression will not evaluate properly, I think.\n\nMikeA\n\n>> \n>> My folder numbers are: negative numbers are system folders \n>> such as New\n>> mail, trash, drafts and sentmail. I wanted to order the \n>> tuples so that the\n>> folderids were sorted from -1 to -4, then 1 to x. This way the system\n>> folders would always appear first in the list.\n\n<big snip>\n\n>> Using a column name within an expression in the order by \n>> does not seem to\n>> work...\n>> Or a much simpler example to illustrate the bug:\n>> fastmail=> select 1 as \"test\" order by (test<9);\n>> ERROR: attribute 'test' not found\n>> \n>> fastmail=> select 1 as \"test\" order by test;\n>> test\n>> ----\n>> 1\n>> (1 row)\n>> \n>> \n\n<not so big snip>\n\n>> \n>> Do I need outer joins to make this work instead of the \n>> screwed up union\n>> method I'm trying here, or is it just a series of bugs?\n>> \n>> -Michael\n>> \n>> \n",
"msg_date": "Fri, 16 Jul 1999 11:18:40 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Counting bool flags in a complex query "
}
]
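A sketch of the get_effective_order function described above, written in PL/pgSQL against the folders table from this thread (untested; 6.5-era syntax assumed):

    CREATE FUNCTION get_effective_order(int4) RETURNS int4 AS '
    DECLARE
        min_id int4;
    BEGIN
        -- System folders (negative ids) sort first, by absolute value.
        IF $1 < 0 THEN
            RETURN abs($1);
        END IF;
        -- Shift user folders above the system range.
        SELECT min(folderid) INTO min_id FROM folders;
        RETURN $1 + abs(min_id);
    END;
    ' LANGUAGE 'plpgsql';

As noted above, the min() lookup is what makes this slow; a constant offset larger than any possible system-folder id would avoid the extra scan.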
[
{
"msg_contents": "Sorry guys, this line:\n>> ORDER BY (if(folderid < 0) then return(abs(folderid)) else\n>> return(folderid+max(abs(MIN(folderid)))))\n\nShould have read:\nORDER BY (if(folderid < 0) then return(abs(folderid)) else\nreturn(folderid+abs(MIN(folderid))))\n\nThe max was an error.\n\nMikeA\n\n>> \n>> Why don't you adjust the ids of your system folders, such \n>> that they are\n>> ordered properly? You should have a fixed number of system \n>> folders, so you\n>> can guarantee the ids that they will receive. So make the \n>> Inbox -4. Then\n>> you just order by folder id, ascending. -4 comes first, \n>> with the user\n>> folders always coming after the system folders.\n>> \n>> Alternatively, you can sort by an expression, something like:\n>> \n>> ORDER BY (if(folderid < 0) then return(abs(folderid)) else\n>> return(folderid+max(abs(MIN(folderid)))))\n>> \n>> What this does is shift all the ids up to ensure that they \n>> all fall into the\n>> positive range, while inverting the order of the negative \n>> ids, which seems\n",
"msg_date": "Fri, 16 Jul 1999 12:01:26 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Counting bool flags in a complex query "
}
]
[
{
"msg_contents": "Wayne Piekarski <[email protected]> writes:\n> the other day I did a pg_dump of our 6.4.2 database and tried to load it\n> back into 6.5 - it failed with the error message:\n\n> FATAL 1: btree: failed to add item to the page\n\nIIRC this just means the tuple is too long ... btrees want to be able to\nfit at least two tuples per disk page, so indexed fields can't exceed\n4k bytes in a stock installation. Sometimes you'll get away with more,\nbut not if two such keys end up on the same btree page.\n\nIt's not real clear to me *why* we are keeping an index on the prosrc\nfield of pg_proc, but we evidently are, so plpgsql source code can't\nsafely exceed 4k per proc as things stand.\n\nIn short, it was only by chance that you were able to put this set of\nprocs into 6.4 in the first place :-(\n\nCan any hackers comment on whether pg_proc_prosrc_index is really\nnecessary?? Just dropping it would allow plpgsql sources to approach 8k,\nand I can't think of any scenario where it's needed...\n\nBTW, Jan has been muttering about compressing plpgsql source, which\nwould provide some more breathing room for big procs, but not before 6.6.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Jul 1999 10:49:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Oversize proc sources (was Re: [BUGS] Backend dies creating plpgsql\n\tprocedures (with reproducible example!))"
},
{
"msg_contents": "> It's not real clear to me *why* we are keeping an index on the prosrc\n> field of pg_proc, but we evidently are, so plpgsql source code can't\n> safely exceed 4k per proc as things stand.\n> \n> In short, it was only by chance that you were able to put this set of\n> procs into 6.4 in the first place :-(\n> \n> Can any hackers comment on whether pg_proc_prosrc_index is really\n> necessary?? Just dropping it would allow plpgsql sources to approach 8k,\n> and I can't think of any scenario where it's needed...\n> \n> BTW, Jan has been muttering about compressing plpgsql source, which\n> would provide some more breathing room for big procs, but not before 6.6.\n\nGood question.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Jul 1999 12:22:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oversize proc sources (was Re: [BUGS] Backend dies creating\n\tplpgsql procedures (with reproducible example!))"
},
{
"msg_contents": "> Wayne Piekarski <[email protected]> writes:\n> > the other day I did a pg_dump of our 6.4.2 database and tried to load it\n> > back into 6.5 - it failed with the error message:\n> \n> > FATAL 1: btree: failed to add item to the page\n> \n> IIRC this just means the tuple is too long ... btrees want to be able to\n> fit at least two tuples per disk page, so indexed fields can't exceed\n> 4k bytes in a stock installation. Sometimes you'll get away with more,\n> but not if two such keys end up on the same btree page.\n\nOk, well this is quite interesting actually. The test example I sent had\nvery large procedures, but my actual real life case contains functions\nwith length(prosrc) = 2082, 2059, 18888, 1841, 1525 ... etc bytes long. So\nI am nowhere near 4096 bytes, but I have crossed the 2048 byte boundary.\n\nThe error message is the same for both my test case and the real life\npg_dump so I'm not sure what this indicates. Is the problem actually at\n2048 bytes?\n\n> It's not real clear to me *why* we are keeping an index on the prosrc\n> field of pg_proc, but we evidently are, so plpgsql source code can't\n> safely exceed 4k per proc as things stand.\n> \n> In short, it was only by chance that you were able to put this set of\n> procs into 6.4 in the first place :-(\n\nYeah, this makes sense now. When we used to reload our procedures, I\nalways did a vacuum before hand which seemed to make it more reliable, and\nthen we would only replace one function at a time (ie, never a bulk reload\nof all our functions).\n\nEvery so often we'd have a problem when playing with test databases, but\nwe were always careful with our real one so managed to avoid it. \n\n> > Can any hackers comment on whether pg_proc_prosrc_index is really\n> necessary?? Just dropping it would allow plpgsql sources to approach 8k,\n> and I can't think of any scenario where it's needed...\n\nEeeep! I went and tried this and got some really bizarre behaviour:\n\npsql>UPDATE pg_class SET relname = 'dog' WHERE relname ='pg_proc_prosrc_index';\npostgres> mv pg_proc_prosrc_index dog\npsql> DROP INDEX pg_proc_prosrc_index;\n\nThen, whenever I try to insert a function into pg_proc:\n\ncreate function \"test\" (int4, text) RETURNS int4 AS \n'/home/postgres/functions.so' LANGUAGE 'c';\n\nThe backend dies, but the errlog contains no error message at all.\n/var/log/messages says the backend died with a segmentation fault. Eeep!\n\n\nSo I don't know why this is dying, is the way I dropped the index ok? I\ncouldn't think of any other way to do this because the backend won't let\nme drop or work on any pg_* tables.\n\n> BTW, Jan has been muttering about compressing plpgsql source, which\n> would provide some more breathing room for big procs, but not before 6.6.\n\nI would be happy to drop the pg_proc_prosrc_index - now that I know the\nlimits of plpgsql functions I can rewrite them to call other functions or\nsomething like that to make sure they fit within 4k, but mine are dying at\n2k as well, which is bad :(\n\nI personally would think the prosrc index could go because what kind of\nquery could possibly use this index?\n\n\nthanks for your help,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n",
"msg_date": "Sat, 17 Jul 1999 12:53:22 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oversize proc sources (was Re: [BUGS] Backend dies creating\n\tplpgsql procedures (with reproducible example!))"
}
]
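A quick check (plain SQL) for procedures near the btree key-size limits discussed in this thread; the 2048-byte threshold is the one Wayne reports trouble at:

    SELECT proname, length(prosrc) AS srclen
      FROM pg_proc
     WHERE length(prosrc) > 2048
     ORDER BY srclen DESC;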
[
{
"msg_contents": "Why are our shared libs called e.g. ecpg.so and not libecpg.so ?\nThis seems strange to me, and it also complicates the link, \nsince now I cannot say -lecpg and decide whether to link static or shared\nwith the corresponding linker flag (-bnso or -brtl on AIX)\n\nAndreas \n",
"msg_date": "Fri, 16 Jul 1999 17:19:17 +0200",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "shared lib names"
},
{
"msg_contents": "> Why are our shared libs called e.g. ecpg.so and not libecpg.so ?\n\nUh, they aren't on at least some platforms. Here is the lib directory\non my linux box:\n\nglobal1.bki.source libpq++.so.3.0\nglobal1.description libpq.a\nlibecpg.a libpq.so\nlibecpg.so libpq.so.2\nlibecpg.so.3 libpq.so.2.0\nlibecpg.so.3.0.0 libpsqlodbc.a\nlibpgtcl.a libpsqlodbc.so\nlibpgtcl.so libpsqlodbc.so.0\nlibpgtcl.so.2 libpsqlodbc.so.0.25\nlibpgtcl.so.2.0 local1_template1.bki.source\nlibpq++.a local1_template1.description\nlibpq++.so pg_geqo.sample\nlibpq++.so.2 pg_hba.conf.sample\nlibpq++.so.2.0 plpgsql.so\nlibpq++.so.3 pltcl.so\n\n??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 16 Jul 1999 15:34:09 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] shared lib names"
}
]
[
{
"msg_contents": "I finally found some time to make the shared library Makefile adjustments \nfor the AIX port.\n\n <<aix.patch>> \nBruce, please apply this to the REL6_5_PATCHES (and current), \nsince it would be cool for 6.5.1.\n\nirix5 is broken, cause it has an extra _ in there. Fix is in the Patch.\n\nAndreas",
"msg_date": "Fri, 16 Jul 1999 17:39:14 +0200",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Makefile.shlib bug and AIX patch"
}
]
[
{
"msg_contents": "Thomas wrote:\n> > Why are our shared libs called e.g. ecpg.so and not libecpg.so ?\n> \n> Uh, they aren't on at least some platforms. Here is the lib directory\n> on my linux box:\n> \nSorry Bruce, please can you make that change to my patch:\n\n- shlib\t\t\t\t:= $(NAME)$(DLSUFFIX)\n\n+ shlib\n:= lib$(NAME)$(DLSUFFIX)\n\nAndreas\n",
"msg_date": "Fri, 16 Jul 1999 17:49:51 +0200",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] shared lib names"
}
]
[
{
"msg_contents": "\n> > Why are our shared libs called e.g. ecpg.so and not libecpg.so ?\n> \n> Uh, they aren't on at least some platforms. Here is the lib directory\n> on my linux box:\n> \nOk, sorry I see the difference now. Those that are for linking are named\nlib*.so\nand those that are for dyn loading into postgres don't have the lib.\nGood, thanks. \n\nAndreas\n",
"msg_date": "Fri, 16 Jul 1999 17:55:26 +0200",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] shared lib names"
},
{
"msg_contents": "Zeugswetter Andreas IZ5 <[email protected]> writes:\n> Ok, sorry I see the difference now. Those that are for linking are named\n> lib*.so\n> and those that are for dyn loading into postgres don't have the lib.\n\nWhat? They should all have the \"lib\" AFAIK --- dynamic loading takes\nthe same kind of shared lib as a regular link does on every platform\nI've heard about.\n\nI think you were just fooled by having misspelled the \"shlib\" variable\nthe first time around... or, perhaps, our dynlink support for aix\nis confused too?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Jul 1999 13:14:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] shared lib names "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Zeugswetter Andreas IZ5 <[email protected]> writes:\n> > Ok, sorry I see the difference now. Those that are for linking are named\n> > lib*.so\n> > and those that are for dyn loading into postgres don't have the lib.\n> \n> What? They should all have the \"lib\" AFAIK --- dynamic loading takes\n> the same kind of shared lib as a regular link does on every platform\n> I've heard about.\n\nI don't know about 'should' but they definitely are not required\nto have it.\n\nAs Andreas said, the reason for the 'lib' prefix is to allow the -l\nflag to the linker to work. What you pass to dlopen is the path to the\nfile, so\nit can have any name you want. Generally (may be some platform this\nisn't true)\nyou don't even have to have a magic suffix.\n\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Fri, 16 Jul 1999 13:48:48 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] shared lib names"
}
]
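A minimal C illustration of Mark's point: dlopen() takes an explicit file path, so the "lib" prefix matters only for the linker's -l resolution (the path below is illustrative):

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Any filename works for dynamic loading; no "lib" prefix needed. */
        void *handle = dlopen("/usr/local/pgsql/lib/plpgsql.so", RTLD_LAZY);

        if (handle == NULL)
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
        else
            dlclose(handle);
        return 0;
    }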
[
{
"msg_contents": "What is the ideal setup to have when contributing to PG development? I can\nalways just download the latest CVS tree, and then presumably run diff when\nI want to send something in.\nHowever, my understanding is that using CVSup allows me to replicate the cvs\ntree into my own repository, which I then check out/update/commit from/to.\nThen, when I wish to send a patch in, I get cvs to produce a diff on the\npgsql module. And CVSup allows me to manage changes from the main PG\nrepository into my own repository, right?\n\n\nIs this right?\n\nMikeA\n\n",
"msg_date": "Fri, 16 Jul 1999 18:10:38 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Contributing"
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> What is the ideal setup to have when contributing to PG development? I can\n> always just download the latest CVS tree, and then presumably run diff when\n> I want to send something in.\n> However, my understanding is that using CVSup allows me to replicate the cvs\n> tree into my own repository, which I then check out/update/commit from/to.\n> Then, when I wish to send a patch in, I get cvs to produce a diff on the\n> pgsql module. And CVSup allows me to manage changes from the main PG\n> repository into my own repository, right?\n\nYes, CVS can easily produced the diff for you.\n\nWe only allow a few people cvs commmit privs. Others send patches to\nthe patches list, and we apply them, usually in a few days.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Jul 1999 12:35:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Contributing"
},
{
"msg_contents": "I think it depends on the level of changes you intend to implement\n(and your 'net access speed). If you just want to tweak some code,\npulling the latest cvs, hacking it up, doing a 'cvs update' and resolve\nany conflicts, then doing 'cvs diff' will give you a nice patch to send\nin. (Any recomendations on parameters for cvs diff?)\n\nIf your planning some major development, where you want to be able to \nhack and slash and not worry about losing your own changes, a local CVSup\nmirror may be preferable.\n\nNote that if you're behind a slow link, even the first scenario can be slow\n(the cvs diff requires 'net access to the repository)\n\nRoss\n\nOn Fri, Jul 16, 1999 at 06:10:38PM +0200, Ansley, Michael wrote:\n> What is the ideal setup to have when contributing to PG development? I can\n> always just download the latest CVS tree, and then presumably run diff when\n> I want to send something in.\n> However, my understanding is that using CVSup allows me to replicate the cvs\n> tree into my own repository, which I then check out/update/commit from/to.\n> Then, when I wish to send a patch in, I get cvs to produce a diff on the\n> pgsql module. And CVSup allows me to manage changes from the main PG\n> repository into my own repository, right?\n> \n> \n> Is this right?\n> \n> MikeA\n> \n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Fri, 16 Jul 1999 11:53:19 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Contributing"
},
{
"msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> What is the ideal setup to have when contributing to PG development? I can\n> always just download the latest CVS tree, and then presumably run diff when\n> I want to send something in.\n> However, my understanding is that using CVSup allows me to replicate the cvs\n> tree into my own repository, which I then check out/update/commit from/to.\n\nAFAIK, the main advantage of CVSup is that you have a complete copy of\nthe CVS archive on your own machine, which means you can examine cvs\ncommit log messages, pull old versions, and so forth without having\nto contact hub.org. If you just use \"cvs update\" periodically then\nyou only have the current sources, and have to use remote cvs to do\nthings like checking log messages.\n\nIf you've got the disk space to spare for the full archives, and have\na fairly slow link to hub.org, then a local archive is worthwhile.\n\nI am not sure of the implications of trying to commit into your own\ncopy of the archive when you are using CVSup. I would think that\nthe commits might get lost at next CVSup run ... can anyone who uses\nCVSup clarify?\n\nPersonally I use the \"cvs update\" method because I don't have a lot\nof disk space to spare for Postgres work, and I don't mind using\nremote cvs operations to get at the logs...\n\ncvs update is pretty good about merging changes from the repository\ninto files that you have changed locally. Dunno how well that works\nwith CVSup. Probably you have to do a local \"cvs update\" into your\nworking files after each CVSup run, and the net result on the work\nfiles is just the same.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Jul 1999 13:21:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Contributing "
},
{
"msg_contents": "> I think it depends on the level of changes you intend to implement\n> (and your 'net access speed). If you just want to tweak some code,\n> pulling the latest cvs, hacking it up, doing a 'cvs update' and resolve\n> any conflicts, then doing 'cvs diff' will give you a nice patch to send\n> in. (Any recomendations on parameters for cvs diff?)\n\n\tpgcvs diff -c\n\n> \n> If your planning some major development, where you want to be able to \n> hack and slash and not worry about losing your own changes, a local CVSup\n> mirror may be preferable.\n\nYou can use our src/tools/make_diff tools to help.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Jul 1999 13:22:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Contributing"
},
{
"msg_contents": "> > What is the ideal setup to have when contributing to PG development?\n> AFAIK, the main advantage of CVSup is that you have a complete copy of\n> the CVS archive on your own machine, which means you can examine cvs\n> commit log messages, pull old versions, and so forth without having\n> to contact hub.org. If you just use \"cvs update\" periodically then\n> you only have the current sources, and have to use remote cvs to do\n> things like checking log messages.\n\nThe other principle advantage to CVSup is its efficiency in bringing\nover updates. It is very fast and really minimizes the bandwidth.\n\n> If you've got the disk space to spare for the full archives, and have\n> a fairly slow link to hub.org, then a local archive is worthwhile.\n\nOr find that hub.org disappears occasionally, or...\n\n> I am not sure of the implications of trying to commit into your own\n> copy of the archive when you are using CVSup. I would think that\n> the commits might get lost at next CVSup run ... can anyone who uses\n> CVSup clarify?\n\nCVSup guarantees that the parts of your cvs tree which are in common\nwith the server are the same. So it is probably not such a good idea\nto use it to replicate a checked-out tree if you plan on making any\nchanges, because it will wipe them out on the next update. It does\nallow you to make branches in your local repository, but I don't use\nthis feature.\n\n> cvs update is pretty good about merging changes from the repository\n> into files that you have changed locally. Dunno how well that works\n> with CVSup. Probably you have to do a local \"cvs update\" into your\n> working files after each CVSup run, and the net result on the work\n> files is just the same.\n\nYes. As you can tell, I'm a big fan of CVSup...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 17 Jul 1999 06:06:02 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Contributing"
},
{
"msg_contents": "On Sat, 17 Jul 1999, Thomas Lockhart wrote:\n\n> > > What is the ideal setup to have when contributing to PG development?\n> > AFAIK, the main advantage of CVSup is that you have a complete copy of\n> > the CVS archive on your own machine, which means you can examine cvs\n> > commit log messages, pull old versions, and so forth without having\n> > to contact hub.org. If you just use \"cvs update\" periodically then\n> > you only have the current sources, and have to use remote cvs to do\n> > things like checking log messages.\n> \n> The other principle advantage to CVSup is its efficiency in bringing\n> over updates. It is very fast and really minimizes the bandwidth.\n\nIs this less then when using the -z option for CVS?\n\n> > If you've got the disk space to spare for the full archives, and have\n> > a fairly slow link to hub.org, then a local archive is worthwhile.\n> \n> Or find that hub.org disappears occasionally, or...\n\nWe think we just licked that problem...FreeBSD 3.x and earlier has\na problem with VM fragmentation when you start swapping heavily, so we\njust upgraded Hub to 768Meg of RAM from the 384Meg it was...where it used\nto die once every 24hrs, its been up ~5days now...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 20 Jul 1999 09:06:14 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Contributing"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> On Sat, 17 Jul 1999, Thomas Lockhart wrote:\n> \n> > > > What is the ideal setup to have when contributing to PG development?\n> > > AFAIK, the main advantage of CVSup is that you have a complete copy of\n> > > the CVS archive on your own machine, which means you can examine cvs\n> > > commit log messages, pull old versions, and so forth without having\n> > > to contact hub.org. If you just use \"cvs update\" periodically then\n> > > you only have the current sources, and have to use remote cvs to do\n> > > things like checking log messages.\n> >\n> > The other principle advantage to CVSup is its efficiency in bringing\n> > over updates. It is very fast and really minimizes the bandwidth.\n> \n> Is this less then when using the -z option for CVS?\n\nI believe so. I'm just guessing at CVS's behavior, but I *know* that\nCVSup only sends compressed diffs of the changes to update a cvs\nrepository or a checked-out tree. afaik CVS sends the entire file,\ncompressing it for transmission much as does CVSup.\n\n> > Or find that hub.org disappears occasionally, or...\n> We think we just licked that problem...\n\nNot entirely, unless you can guarantee uptime on Internet routing. I\nsee outages on occasion which I don't think are local to hub.org.\nThat's no news to anyone, but it does seem relevant when discussing\nthe merits of local vs remote repositories.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 20 Jul 1999 15:12:06 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Contributing"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> The Hermit Hacker wrote:\n>> Is this less then when using the -z option for CVS?\n\n> I believe so. I'm just guessing at CVS's behavior, but I *know* that\n> CVSup only sends compressed diffs of the changes to update a cvs\n> repository or a checked-out tree. afaik CVS sends the entire file,\n> compressing it for transmission much as does CVSup.\n\nNo, CVS will send either a whole file or a diff (the U or P code in its\nprintout tells you which way it updated the file, ie, Update whole thing\nor Patch). Either way, it's compressed if you've specified -z.\n\nIt looks like cvs has some semi-intelligent algorithm for choosing\nwhich to do ... probably, it produces the diff and then looks to see\nif the diff is bigger than the file.\n\nI would expect CVSup to require more total net traffic just because\nit has to transfer more info --- log entries, intermediate versions\n(with CVS, if foo.c has been checked in three times since your last\nupdate, you are sent one diff covering all the changes), etc.\nI have not tried to time it however. In any case, I suspect other\nconsiderations are going to drive each hacker's choice of which way\nto run.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Jul 1999 11:42:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Contributing "
}
]
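In summary, the light-weight "cvs update" workflow described in this thread looks like this (pserver address redacted as in the archive):

    export CVSROOT=:pserver:[email protected]:/usr/local/cvsroot
    cvs checkout pgsql        # one-time pull of the development tip
    cvs -z9 update pgsql      # merge upstream changes into your working files
    cvs diff -c > my.patch    # context diff to send to the patches list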
[
{
"msg_contents": "Forwarding this to the hackers list, since I can confirm the problem\nstill exists in 6.5.0. Looks like the functional extension of the\nearlier problem reported with default constant values being longer\nthan the field. Note trhat it is causing corruption, Leon's username \nwas just too short to make it visible in his example. The constant bug\nis squashed, BTW: Here's my example:\n\ntest=> create table hh (dd char(2) default user, ff int4);\nCREATE\ntest=> insert into hh (ff) values (5);\nINSERT 259723 1\ntest=> select * from hh;\ndd | ff\n--------+----------\nreedstrm|1836217459\n(1 row)\n\ntest=> drop table hh;\nDROP\ntest=> create table hh (dd char(2) default 'fred', ff int4);\nCREATE\ntest=> insert into hh (ff) values (5);\nINSERT 259735 1\ntest=> select * from hh;\ndd|ff\n--+--\nfr| 5\n(1 row)\n\ntest=> select version();\nversion \n--------------------------------------------------------------\nPostgreSQL 6.5.0 on i686-pc-linux-gnu, compiled by gcc 2.7.2.3\n(1 row)\n\n\n\n----- Forwarded message from Leon <[email protected]> -----\n\nX-From_: [email protected] Fri Jul 16 04:56:38 1999\nDate: Fri, 16 Jul 1999 14:45:13 +0500\nFrom: Leon <[email protected]>\nOrganization: Midnight greppers corp.\nX-Mailer: Mozilla 4.08 [en] (X11; I; Linux 2.2.3-5 i686)\nTo: \"'[email protected]'\" <[email protected]>\nSubject: [GENERAL] Weird behavior of 'default user'\nPrecedence: bulk\n\nHello!\n\nLook at this:\n\n------------------\nadb=> create table hh (dd char(2) default user, ff int4);\nCREATE\nadb=> insert into hh (ff) values (5);\nINSERT 572034 1\nadb=> select * from hh;\ndd |ff\n----+--\nleon| 5\n(1 row)\n------------------\n\nHow can I understand that? Column dd is of type char(2), whereas\n'leon' is four characters! Even more, look here:\n\n------------------\nadb=> insert into hh values (user, 7);\nINSERT 572045 1\nadb=> select * from hh;\ndd |ff\n----+--\nleon| 5\nle | 7\n(2 rows)\n------------------\n\nThis absolutely beyond my mind. This means that user, being\ninserted explicitly, is correctly truncated. If it is inserted\nby default, it is wider than column! Seems something very strange\nis going on here.\n\n-- \nLeon.\n\n----- End forwarded message -----\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Fri, 16 Jul 1999 11:32:01 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[[email protected]: [GENERAL] Weird behavior of 'default user']"
},
{
"msg_contents": "> Forwarding this to the hackers list, since I can confirm the problem\n> still exists in 6.5.0. Looks like the functional extension of the\n> earlier problem reported with default constant values being longer\n> than the field. Note trhat it is causing corruption, Leon's username \n> was just too short to make it visible in his example. The constant bug\n> is squashed, BTW: Here's my example:\n\nYes, I have added this to the TODO list. It was discovered in late 6.5,\nand while the constant case was quickly fixed, the non-matching-types\ncase was quickly pointed out by Tom Lane, and it not fixed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Jul 1999 13:21:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [[email protected]: [GENERAL] Weird behavior of 'default\n\tuser']"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <[email protected]> forwards:\n> This absolutely beyond my mind. This means that user, being\n> inserted explicitly, is correctly truncated. If it is inserted\n> by default, it is wider than column! Seems something very strange\n> is going on here.\n\nYes, I thought that some cases of default insertion were still broken,\nand I was right. The patch Bruce put in before only fixes the case\nof a wrong-length string constant being given as the default for a\nchar(n) field. When the default is not a constant, there needs to be\na run-time length coercion to char(n), and there isn't.\n\nI have some work to do in that part of the parser anyway; will take care\nof it (but not in time for 6.5.1, I fear).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Jul 1999 13:25:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [[email protected]: [GENERAL] Weird behavior of 'default\n\tuser']"
}
]
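Until that run-time coercion exists, the explicit-insert path that Leon's second example shows working correctly can serve as an interim workaround: supply the value in the INSERT rather than relying on the default.

    -- Explicit values are coerced/truncated correctly, per the thread:
    INSERT INTO hh VALUES (user, 7);
    -- whereas the broken path is the one that fires the default:
    INSERT INTO hh (ff) VALUES (7);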
[
{
"msg_contents": ">Sorry, I think it is too late be adding port-specific feature to 6.5.1. \n>I will put it in 6.6. 6.5.1 release is due the 19th.\n\n\nWell, the current Makefile.shlib REL6_5_PATCH has been checked \nin today, and IS BROKEN for irix5, as I said.\nSo I would apply. The only difference it actually makes, is \nto avoid the manual link of plpgsql.so on AIX.\n\nAndreas\nPS: I won't feel bad if you don't though :-)\n\n\n",
"msg_date": "Fri, 16 Jul 1999 19:54:16 +0200",
"msg_from": "\"Zeugswetter Andreas\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: new: Makefile.shlib bug and AIX patch"
},
{
"msg_contents": "> >Sorry, I think it is too late be adding port-specific feature to 6.5.1. \n> >I will put it in 6.6. 6.5.1 release is due the 19th.\n> \n> \n> Well, the current Makefile.shlib REL6_5_PATCH has been checked \n> in today, and IS BROKEN for irix5, as I said.\n> So I would apply. The only difference it actually makes, is \n> to avoid the manual link of plpgsql.so on AIX.\n> \n> Andreas\n> PS: I won't feel bad if you don't though :-)\n> \n> \n> \n\nI did not check it in. The person who did can check it if they feel it\nis appropriate. I don't understand what has been happening in this area\nwell enough.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Jul 1999 15:08:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: new: Makefile.shlib bug and AIX patch"
}
] |
[
{
"msg_contents": ">> Ok, sorry I see the difference now. Those that are for linking are named\n>> lib*.so\n>> and those that are for dyn loading into postgres don't have the lib.\n>\n>What? They should all have the \"lib\" AFAIK --- dynamic loading takes\n>the same kind of shared lib as a regular link does on every platform\n>I've heard about.\n\n\nWe are only talking about the naming convention here.\ne.g. libpq.so but plpgsql.so\n\n>I think you were just fooled by having misspelled the \"shlib\" variable\n>the first time around... or, perhaps, our dynlink support for aix\n>is confused too?\n\nThe dynlink support is broken for plpgsql.so on aix,\nunless Bruce applies my patch, which I guess he did for current,\nbut not for the 6.5.1 branch.\n\nAndreas\n\n",
"msg_date": "Fri, 16 Jul 1999 20:00:00 +0200",
"msg_from": "\"Zeugswetter Andreas\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: [HACKERS] shared lib names "
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> >> Ok, sorry I see the difference now. Those that are for linking are named\n> >> lib*.so\n> >> and those that are for dyn loading into postgres don't have the lib.\n> >\n> >What? They should all have the \"lib\" AFAIK --- dynamic loading takes\n> >the same kind of shared lib as a regular link does on every platform\n> >I've heard about.\n> \n> \n> We are only talking about the naming convention here.\n> e.g. libpq.so but plpgsql.so\n> \n> >I think you were just fooled by having misspelled the \"shlib\" variable\n> >the first time around... or, perhaps, our dynlink support for aix\n> >is confused too?\n> \n> The dynlink support is broken for plpgsql.so on aix,\n> unless Bruce applies my patch, which I guess he did for current,\n> but not for the 6.5.1 branch.\n\nI have not applied either yet. I want someone who understands this to\nmake the decision.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Jul 1999 15:12:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] shared lib names"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> The dynlink support is broken for plpgsql.so on aix,\n>> unless Bruce applies my patch, which I guess he did for current,\n>> but not for the 6.5.1 branch.\n\n> I have not applied either yet. I want someone who understands this to\n> make the decision.\n\nOK, I'll take responsibility for it ...\n\nThe double underscore in 'SO__MINOR_VERSION' in the irix5 code is clearly\na typo and should be fixed, so I committed that into 6.5.1. However I\nam leery of committing the AIX code into 6.5 with so little time left\nbefore 6.5.1 release --- too much risk that there will be problems.\nSeems better to leave shlibs unimplemented on AIX for 6.5.\n\nI have committed both fixes into the main branch, however. Plenty of\ntime to find out if the AIX support has any problems before 6.6.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Jul 1999 19:01:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] shared lib names "
},
{
"msg_contents": "\"Zeugswetter Andreas\" <[email protected]> writes:\n>> What? They should all have the \"lib\" AFAIK --- dynamic loading takes\n>> the same kind of shared lib as a regular link does on every platform\n>> I've heard about.\n\n> We are only talking about the naming convention here.\n> e.g. libpq.so but plpgsql.so\n\nOh, right, I see your point. plpgsql.so isn't intended to ever be\nstatically linked so it needn't obey the \"lib\" naming convention.\n\nNever mind ... ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Jul 1999 11:32:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] shared lib names "
}
] |
[
{
"msg_contents": "It took a little work to recompile after the include-file cleanups :-(.\nYou got overenthusiastic about removing #includes, apparently.\nI have checked in changes for the ones that caused compile failures\nor warnings here, but there may be more.\n\nOne thing that particularly disturbs me is that \"config.h\" was removed\nfrom some of the files in src/backend/port/. I had to put this back\nin the ones used on my platform. I didn't touch anything I didn't get\na warning from, but I would strongly counsel making sure that config.h\nstill gets included *everywhere*, even if it doesn't seem necessary.\n\nFailing to do so may cause subtle problems due to #define symbols not\nbeing defined where they should be or prototypes not being visible that\nshould be. This is especially dangerous in platform-specific code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Jul 1999 19:29:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "include-file cleanup"
},
{
"msg_contents": "> It took a little work to recompile after the include-file cleanups :-(.\n> You got overenthusiastic about removing #includes, apparently.\n> I have checked in changes for the ones that caused compile failures\n> or warnings here, but there may be more.\n\nI have reviewed and replaced config.h in all files it appeared in in\n6.5, where postgres.h or c.h were not already included. I have also\nremoved config.h from the cleaning script, just as postgres.h was never\nremoved.\n\nI imagine running this only every year or two.\n\n> One thing that particularly disturbs me is that \"config.h\" was removed\n> from some of the files in src/backend/port/. I had to put this back\n> in the ones used on my platform. I didn't touch anything I didn't get\n> a warning from, but I would strongly counsel making sure that config.h\n> still gets included *everywhere*, even if it doesn't seem necessary.\n\nI put them all back in the port/ directory. This is also the reason I\ndid not change any of the system includes, because different OS's need\ndifferent includes.\n\n> Failing to do so may cause subtle problems due to #define symbols not\n> being defined where they should be or prototypes not being visible that\n> should be. This is especially dangerous in platform-specific code.\n\nYes, I know I fixed a number of places where tests were made before\npostgres.h/c.h/config.h were included.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 17 Jul 1999 00:14:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: include-file cleanup"
},
{
"msg_contents": "> I have reviewed and replaced config.h in all files it appeared in in\n> 6.5, where postgres.h or c.h were not already included. I have also\n> removed config.h from the cleaning script, just as postgres.h was never\n> removed.\n\nOK, that sounds good.\n\nThe thing that bothers me is why config.h got removed from these\nport files in the first place. The compiler warning I got (because\nI use gcc -Wmissing-prototypes) was that \"random\" and \"srandom\"\nwere defined without having been declared in any include file.\nNow config.h provides prototypes for those functions --- inside\n#ifdefs of course, but they are there. Your script should have\nnoticed that the name \"random\" mentioned in config.h was also\nmentioned in port/random.c, and therefore not removed the include\nof config.h from random.c. Why did it not make the connection?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Jul 1999 00:51:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: include-file cleanup "
},
{
"msg_contents": "> > I have reviewed and replaced config.h in all files it appeared in in\n> > 6.5, where postgres.h or c.h were not already included. I have also\n> > removed config.h from the cleaning script, just as postgres.h was never\n> > removed.\n> \n> OK, that sounds good.\n> \n> The thing that bothers me is why config.h got removed from these\n> port files in the first place. The compiler warning I got (because\n> I use gcc -Wmissing-prototypes) was that \"random\" and \"srandom\"\n> were defined without having been declared in any include file.\n> Now config.h provides prototypes for those functions --- inside\n> #ifdefs of course, but they are there. Your script should have\n> noticed that the name \"random\" mentioned in config.h was also\n> mentioned in port/random.c, and therefore not removed the include\n> of config.h from random.c. Why did it not make the connection?\n\nBecause the random prototype is in stdlib.h in BSD/OS, and that file was\nalready #included. Seems it must be in another file in your OS.\n\nstdlib.h:168: long random __P((void));\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 17 Jul 1999 01:09:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: include-file cleanup"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Your script should have\n>> noticed that the name \"random\" mentioned in config.h was also\n>> mentioned in port/random.c, and therefore not removed the include\n>> of config.h from random.c. Why did it not make the connection?\n\n> Because the random prototype is in stdlib.h in BSD/OS, and that file was\n> already #included. Seems it must be in another file in your OS.\n> stdlib.h:168: long random __P((void));\n\nAh, well there is the problem: you are (in effect) assuming that\nif stdlib.h defines random() on your platform, then it does so on\neveryone's platform. If that were true, we'd not need port/random.c...\n\nI think your script ought to be set up to ignore system headers\ncompletely, and only look at our own headers to determine which\nsymbols are defined where.\n\nIn theory, only config.h and c.h should contain any substitutes for\nsystem-header symbols, so if you explicitly exclude those two from\nremoval then there should be no portability issue. But I'd be\nhappier if you changed the script so that its decisions are not\naffected by the contents of system headers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Jul 1999 11:21:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: include-file cleanup "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Your script should have\n> >> noticed that the name \"random\" mentioned in config.h was also\n> >> mentioned in port/random.c, and therefore not removed the include\n> >> of config.h from random.c. Why did it not make the connection?\n> \n> > Because the random prototype is in stdlib.h in BSD/OS, and that file was\n> > already #included. Seems it must be in another file in your OS.\n> > stdlib.h:168: long random __P((void));\n> \n> Ah, well there is the problem: you are (in effect) assuming that\n> if stdlib.h defines random() on your platform, then it does so on\n> everyone's platform. If that were true, we'd not need port/random.c...\n>\n> I think your script ought to be set up to ignore system headers\n> completely, and only look at our own headers to determine which\n> symbols are defined where.\n\nWell, the script just does the compile with and without the #include. \nReally no way to test if the existance of system tables causes any\ndifference, and if I remove the system files completely, the code will\nnot compile.\n\n> In theory, only config.h and c.h should contain any substitutes for\n> system-header symbols, so if you explicitly exclude those two from\n> removal then there should be no portability issue. But I'd be\n> happier if you changed the script so that its decisions are not\n> affected by the contents of system headers.\n\nI have added c.h to the list of files I skip in pgnoinclude. I can't\nthink of any way to prevent system headers from causing such problems,\nthough.\n\nI specifically don't remove any system headers for this very reason,\nthat different OS's may need system files that I don't.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 17 Jul 1999 11:38:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: include-file cleanup"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Well, the script just does the compile with and without the #include. \n\nOh ... I was assuming you had built something that actually went through\nand gathered up a list of the symbols mentioned in each file.\n\nI fear we are going to be putting back missing includes for a while to\ncome; in particular, I'll bet that MULTIBYTE and possibly USE_LOCALE are\nnow broken, unless you ran the script with those features enabled.\nThere are going to be a few more problems with platform-specific code\nlike the one I found in pqcomm.c, too.\n\nWhat you did is a good hack as a one-shot cleanup, but I can't see\nwanting to repeat it in future, not even as seldom as every year or two,\nunless we build a much more reliable tool for the job.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Jul 1999 12:17:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: include-file cleanup "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Well, the script just does the compile with and without the #include. \n> \n> Oh ... I was assuming you had built something that actually went through\n> and gathered up a list of the symbols mentioned in each file.\n> \n> I fear we are going to be putting back missing includes for a while to\n> come; in particular, I'll bet that MULTIBYTE and possibly USE_LOCALE are\n> now broken, unless you ran the script with those features enabled.\n> There are going to be a few more problems with platform-specific code\n> like the one I found in pqcomm.c, too.\n\nI have added the needed files for MULTIBYTE and LOCALE.\n\n> \n> What you did is a good hack as a one-shot cleanup, but I can't see\n> wanting to repeat it in future, not even as seldom as every year or two,\n> unless we build a much more reliable tool for the job.\n\nOK, I will not run it for another three years. Some of the tools, like\nthe one that changes <> to \"\" as approproate may be good for more\nfrequent use. The tool that makes sure every #include has proper\nincludes may be OK too. As you suggested, if we run the include removal\nscript, we can just have it report what it suggests for removal, and\nmanually review each one.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 17 Jul 1999 12:49:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: include-file cleanup"
},
{
"msg_contents": "A further thought on the include-file cleanup: although you are right to\nbe wary of removing includes of system files, it would be a good idea to\nremove *redundant* includes of system files.\n\nIn particular, since c.h includes <stdlib.h>, as well as <stddef.h>\nand <stdarg.h> if they exist, it should not be necessary for any file\nthat includes c.h (either directly or via postgres.h) to pull these\nin for itself. Removing the \"retail\" inclusions of these files that\nare found in many source files would make life much easier for anyone\ntrying to port to a platform where they don't exist...\n\nAlso, I think some places include c.h without having included\npostgres.h. These should be checked to ensure that config.h has\nbeen included first --- c.h depends on configuration symbols from\nconfig.h to work properly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Jul 1999 13:56:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: include-file cleanup "
},
{
"msg_contents": "> A further thought on the include-file cleanup: although you are right to\n> be wary of removing includes of system files, it would be a good idea to\n> remove *redundant* includes of system files.\n> \n> In particular, since c.h includes <stdlib.h>, as well as <stddef.h>\n> and <stdarg.h> if they exist, it should not be necessary for any file\n> that includes c.h (either directly or via postgres.h) to pull these\n> in for itself. Removing the \"retail\" inclusions of these files that\n> are found in many source files would make life much easier for anyone\n> trying to port to a platform where they don't exist...\n\nThe problem is that we include system includes first. Are there any\nsystem includes that require stdlib to be included first?\n\n\nOK, now you got me started again. And I was going to do some real\npaying work today. :-)\n\nI have removed the duplicate system headers when postgres.h is included,\nand have added string.h and stdio.h to c.h, and have removed those from\nthe files. Now, many C files have _no_ system includes, because they\ncome from postgres.h including c.h.\n\nYou know, at the time, this seems like a real pain, just like pgindent\nwas a pain to get working properly. But in a year, you look back and\nsay, \"Hey, it was worth it. Look at how much easier things are now.\"\n\nI will commit the changes now.\n\n> Also, I think some places include c.h without having included\n> postgres.h. These should be checked to ensure that config.h has\n> been included first --- c.h depends on configuration symbols from\n> config.h to work properly.\n\npostgres.h include c.h, and config.h _now_ includes c.h.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 17 Jul 1999 15:13:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: include-file cleanup"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> The problem is that we include system includes first. Are there any\n> system includes that require stdlib to be included first?\n\nIf so, they are supposed to include it for themselves.\n\nNote: you can't really include ALL sys headers first, since some of them\nneed to be included conditionally, and the condition symbols are coming\nfrom config.h...\n\n> I have removed the duplicate system headers when postgres.h is included,\n> and have added string.h and stdio.h to c.h, and have removed those from\n> the files. Now, many C files have _no_ system includes, because they\n> come from postgres.h including c.h.\n\nSounds pretty good.\n\n>> Also, I think some places include c.h without having included\n>> postgres.h. These should be checked to ensure that config.h has\n>> been included first --- c.h depends on configuration symbols from\n>> config.h to work properly.\n>\n> postgres.h include c.h, and config.h _now_ includes c.h.\n\nOK, so then no .c files should be including c.h directly anymore?\nEverything should include either postgres.h, or config.h if it's\nnot tightly tied to the system?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Jul 1999 17:51:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: include-file cleanup "
},
{
"msg_contents": "> > postgres.h include c.h, and config.h _now_ includes c.h.\n> \n> OK, so then no .c files should be including c.h directly anymore?\n> Everything should include either postgres.h, or config.h if it's\n> not tightly tied to the system?\n\nSome port stuff includes just c.h. Not sure why.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 17 Jul 1999 22:56:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: include-file cleanup"
},
{
"msg_contents": "I wrote:\n>> OK, so then no .c files should be including c.h directly anymore?\n>> Everything should include either postgres.h, or config.h if it's\n>> not tightly tied to the system?\n\nI misread your prior mail --- you have fixed c.h to include\nconfig.h, so it's safe to include c.h directly if not including\npostgres.h. The real story now is: either c.h or postgres.h should be\nthe first file included by all .c files in Postgres, to ensure that the\nautoconfigure symbols from config.h are picked up.\n\nBruce Momjian <[email protected]> writes:\n> Some port stuff includes just c.h. Not sure why.\n\nThat makes sense to me for files that are just duplicating missing\nsystem functions, and don't really need to be aware that they are\ninside Postgres.\n\nLooks like things are in good shape now. I will run a compile here\nmomentarily...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Jul 1999 23:25:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: include-file cleanup "
}
] |
[
{
"msg_contents": "Hi,\n\nA few weeks ago I sent an email out about getting BTP_CHAIN faults when\ntrying to perform operations with tables. My colleague Matt Altus was\ntrawling the mailing lists looking for information about this, and he\nfound some articles previously discussing problems with Btree indices and\nhow they sometimes can have problems handling tables with massive\nduplicate entries in them, as the tree becomes unbalanced, and mentioned\nother things like leaf nodes and so on. The postings talked about how\nfixing up the problem was tricky and was still there, and Oracle solved it\nby including the tid in with the index to make it more unique.\n\nWell, we thought about this, and had a look at every table and index we'd\never had BTP_CHAIN problems with, and all had massive duplication of\nvalues in the particular columns. Ie, one table has 1.5 million rows, and\none of the columns with an index on it (snum) has only 20000 unique values\n- this particular table was very troublesome, whereas others weren't so\nbad because they were a lot smaller. Each table we looked at were all the\nsame problem, and we thought wow, this is really neat because all our\nproblem tables were explained by these postings. None of our other indexes\ncaused problems, because they were more unique.\n\nEach one of our tables has a column called id which is very similar to an\noid except we generate it ourselves, and so we put in a reference to the\nid column after all the other columns in our indexes. ie,\n\ncreate index sessions_snum_index on sessions using btree (snum);\n\nbecame:\n\ncreate index sessions_snum_index on sessions using btree (snum, id);\n\nThe indexes grew a little bit, but now we have not had *ANY* BTP_CHAIN\nfaults at all, and to test it we really thrashed the machine to see if we\ncould cause it to die. It worked perfectly and we were all really happy\nbecause BTP_CHAIN was very annoying to fix up. It was occuring a lot when\nthe machine was under high load.\n\nSo I can definitely recommend this to anyone who has problems like this,\nor tables with lots of rows but not many unique values. The problem does\nnot occur under simple circumstances, only under cases where many backends\nare all running and the system is under a high load.\n\nWould a solution to the problem be to automatically include the row OID\nwhen creating an index? This would fix the problem for everyone\nautomatically without having to do the hack manually. Is it ok to include\nthe OID in an index? I wasn't sure about this which is why I included my\nown ID value instead so someone might want to comment on this.\n\nJust thought I'd share this with everyone so we can all benefit from it.\nThis is a problem which really caused us to doubt the ability of Postgres\nto be used in a high load environment and so I think it should be\nmentioned somewhere. Maybe in the docs?\n\n\nBTW, since getting around BTP_CHAIN our only remaining problem is the\nbackends waiting thing, and we are upgrading to 6.5 tomorrow which we hope\nwill fix this up forever. We did some testing of 6.5 and it runs a *lot*\nfaster, is more reliable, and the load of the machine is very much lower\nthan it normally is with 6.4.2 with our thrash testing program. I assume\nthat 6.4 style code will work unchanged in 6.5? Ie, we've used a lot of\nLOCK TABLE xxx; code everywhere, which we hope will work untouched in 6.5. 
\n\nWe'll report back after our upgrade once we know that everything works\nreally well.\n\n \nRegards,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n",
"msg_date": "Sat, 17 Jul 1999 13:10:04 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix up for BTP_CHAIN problems"
},
{
"msg_contents": "Wayne Piekarski <[email protected]> writes:\n> Well, we thought about this, and had a look at every table and index we'd\n> ever had BTP_CHAIN problems with, and all had massive duplication of\n> values in the particular columns. Ie, one table has 1.5 million rows, and\n> one of the columns with an index on it (snum) has only 20000 unique values\n> - this particular table was very troublesome, whereas others weren't so\n> bad because they were a lot smaller.\n\nThat's real useful info --- thanks! So the BTP_CHAIN problem is getting\ncaused by some kind of error in btree's handling of equal keys.\n\n> Would a solution to the problem be to automatically include the row OID\n> when creating an index?\n\nVadim had muttered about doing something like that as a substitute for\nfixing the equal-keys logic, but it seems like a kluge to me, especially\nif it makes the index bigger. (OTOH I think he was envisioning using\nsome already-existing field of index tuple headers as the tiebreaker,\nso maybe it wouldn't cost any space.)\n\nVadim, I just committed a change I'd been sitting on for a couple of\nmonths: it eliminates bt_firsteq() by making bt_binsrch()'s binary search\nlogic deal with equal keys. It might be worth your time to look it\nover. I did not change the code's behavior, but I think I did improve\nthe clarity and I certainly added a bunch of documentation. The old\ncode had a bunch of strange behavior at boundary conditions, all of\nwhich I replicated and documented, but I can't help wondering whether\nit was all correct...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Jul 1999 11:15:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fix up for BTP_CHAIN problems "
},
{
"msg_contents": "Wayne Piekarski wrote:\n> \n> The indexes grew a little bit, but now we have not had *ANY* BTP_CHAIN\n> faults at all, and to test it we really thrashed the machine to see if we\n> could cause it to die. It worked perfectly and we were all really happy\n> because BTP_CHAIN was very annoying to fix up. It was occuring a lot when\n> the machine was under high load.\n ^^^^^^^^^^^^^^^\nHiroshi made patch for this case. This patch is in 6.5.\nI should post it to general list and put on ftp... sorry.\nI'll do it today.\n\nVadim\n",
"msg_date": "Mon, 19 Jul 1999 09:56:24 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fix up for BTP_CHAIN problems"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> > Would a solution to the problem be to automatically include the row OID\n> > when creating an index?\n> \n> Vadim had muttered about doing something like that as a substitute for\n> fixing the equal-keys logic, but it seems like a kluge to me, especially\n> if it makes the index bigger. (OTOH I think he was envisioning using\n> some already-existing field of index tuple headers as the tiebreaker,\n> so maybe it wouldn't cost any space.)\n\nIt will increase size of index tuples on inner pages, but not\non leaf ones.\n\nVadim\n",
"msg_date": "Mon, 19 Jul 1999 10:06:46 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fix up for BTP_CHAIN problems"
}
] |
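For reference, Wayne's workaround and the oid-based variant he asks about, spelled out in SQL (the sessions table is his example; whether the system oid column is a safe tiebreaker is exactly the open question in this thread, so treat the last statement as a sketch rather than a recommendation):

drop index sessions_snum_index;  -- remove the duplicate-heavy index that was triggering BTP_CHAIN errors
create index sessions_snum_index on sessions using btree (snum, id);       -- deployed fix: hand-managed id as tiebreaker
create index sessions_snum_oid_index on sessions using btree (snum, oid);  -- hypothetical: system oid as tiebreaker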
[
{
"msg_contents": "I noticed that PostGress haven't any mechanism for\nsplit tables over a lot of device (Oracle-like).\n \nI wont write it. Can you authorize me?\n \nTIA\n---\nRoberto Colmegna\[email protected] [email protected]\n\n\n\n\n\n\n\n\nI noticed that PostGress haven't any mechanism forsplit tables over a \nlot of device (Oracle-like). I wont write it. Can you authorize \nme? TIA---Roberto [email protected] [email protected]",
"msg_date": "Sat, 17 Jul 1999 07:27:26 +0200",
"msg_from": "\"Roberto Colmegna\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostGress Table Split"
}
] |
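There is no per-table storage mapping at this point, but whole databases can already be placed on another device; a hedged sketch of that existing facility, assuming PGDATA2 has been configured as an alternate-location environment variable (check the initlocation and CREATE DATABASE documentation for your release before relying on this):

-- shell step, run once by the postgres superuser:  initlocation PGDATA2
create database splitdb with location = 'PGDATA2';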
[
{
"msg_contents": "Well, this is what I have discovered so far:\n\nI left BLCKSZ as it was, but adjusted MAX_QUERY_SIZE to 65535, and ran make.\nI tried to run a 50k query, and it worked; it took a while, but the results\nwere fine. So, as a temporary solution to this problem, you should be able\nto set MAX_QUERY_SIZE up to what you require (don't go overboard, and please\nlet me know of any problems). However, make sure that you test it properly\nfirst, because I didn't do extensive testing, just enough to make sure that\nit didn't break immediately. Also, please remember that long queries\ndefinitely impact the query processor, so long queries are not a great idea\nfor online sub-systems. They're not great for batch either, but at least\nthere you have a window to play with.\n\nMore news to come.....\n\nMikeA\n\n\n\n>> > \n>> > Troy wrote:\n>> > >> Does Postgres have any limitations on \n>> > >> the length of queries?\n>> > >> \n>> > >> E.g. is \"select one,two,three,...thousand from \n>> one,two,three,...thousand\n>> > where one = x and two is >> x and three is x and ... \n>> thousand = x\" legal?\n>> > >> \n>> > Yes, there is. It is set to BLCKSZ * 2, at least in 6.5. \n>> BLCKSZ is\n>> > normally 8192 bytes, so your query size will be 16k. \n>> However, I'm busy\n>> > working on it at the moment, to make it unlimited (i.e.: \n>> limited by memory\n>> > available).\n>> > \n>> > MikeA\n>> > \n>> > \n>> \n",
"msg_date": "Sat, 17 Jul 1999 12:17:08 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: query length limits"
}
] |
[
{
"msg_contents": "We've now got a viable mechanism for generating man pages from sgml\nsources. So, I'm starting to go through the old man pages (those in\nsrc/man/) to verify that all information in them is available\nsomewhere in the new docs.\n\n>From here on, there is no need to update the src/man/ man pages when\nupdating docs. Please do all updates in doc/src/sgml/{.,/ref}/*.sgml.\nI'll be removing the old man pages from the cvs tree, but not until\nI've got the new man page generating mechanism installed at\npostgresql.org. This should all be completed well in advance of a v6.6\nrelease.\n\nTIA\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 17 Jul 1999 16:31:51 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Merging old man pages"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> We've now got a viable mechanism for generating man pages from sgml\n> sources.\n\nExcellent!\n\n> From here on, there is no need to update the src/man/ man pages when\n> updating docs. Please do all updates in doc/src/sgml/{.,/ref}/*.sgml.\n> I'll be removing the old man pages from the cvs tree,\n\nOK, let me get this straight: man pages will no longer be in the CVS\ntree because they will no longer be original files, but they will be\npart of the standard distribution as derived files, right?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Jul 1999 13:22:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Merging old man pages "
},
{
"msg_contents": "> OK, let me get this straight: man pages will no longer be in the CVS\n> tree because they will no longer be original files, but they will be\n> part of the standard distribution as derived files, right?\n\nYes. Well, at least, maybe, sort of...\n\nAs is the case with the other (html) docs, I'm planning on putting a\nman tarball into the distribution. Up to now, the easiest way to do\nthat is to put the tarball into cvs, but I'm open to other\nsuggestions.\n\nDo I guess correctly that we currently generate our production\nreleases by actually doing a cvs checkout and then a \"mini-build\" of\nthe system to generate the yacc/bison derived files? If so, we could\nconsider doing the same sort of thing for the html and man products,\nbut it (probably) makes this packaging process more fragile.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 17 Jul 1999 20:54:25 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Merging old man pages"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Do I guess correctly that we currently generate our production\n> releases by actually doing a cvs checkout and then a \"mini-build\" of\n> the system to generate the yacc/bison derived files?\n\nRight.\n\n> If so, we could\n> consider doing the same sort of thing for the html and man products,\n> but it (probably) makes this packaging process more fragile.\n\nLess fragile than doing it by hand ;-). I'd say that's exactly the\nway to proceed.\n\nThe shell script src/tools/release_prep contains the commands that\nare executed (at hub.org) to prepare derived files for release.\nAdd whatever is needed to build the derived doc files, and we should\nbe set.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Jul 1999 17:47:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Merging old man pages "
}
] |
[
{
"msg_contents": "Can I have votes on what people want the next version number to be?\n\nWe have to brand the release when we start development(PG_VERSION file).\n6.5 probably should have been called 7.0, but we had already committed\nto 6.5.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 17 Jul 1999 12:51:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "New version number 6.6 or 7.0"
},
{
"msg_contents": "> Can I have votes on what people want the next version number to be?\n> We have to brand the release when we start development(PG_VERSION \n> file). 6.5 probably should have been called 7.0, but we had already \n> committed to 6.5.\n\nWe've been making pretty steady progress over the last few releases.\nI'd suggest that a bump to 7.0 should happen when we've accumulated\nmost of the fixes/improvements from the \"hot list\". We've worked\nthrough most of those; here are the ones I'd like to see at or before\na 7.0 release:\n\no implement outer joins\no merge date/time types and deprecate the old 4-byte ones\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 17 Jul 1999 21:03:46 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New version number 6.6 or 7.0"
},
{
"msg_contents": "> \n> Can I have votes on what people want the next version number to be?\n> \n> We have to brand the release when we start development(PG_VERSION file).\n> 6.5 probably should have been called 7.0, but we had already committed\n> to 6.5.\n\n 6.6.6 - the number of the databeast :-)\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n",
"msg_date": "Sun, 18 Jul 1999 11:56:03 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New version number 6.6 or 7.0"
},
{
"msg_contents": ">\n> Can I have votes on what people want the next version number to be?\n>\n> We have to brand the release when we start development(PG_VERSION file).\n> 6.5 probably should have been called 7.0, but we had already committed\n> to 6.5.\n\nNow seriously:\n\n Naming it 7.0 IMHO requires transaction log, tuple split over\n blocks, foreign keys, outer joins and rules of arbitrary\n size. I don't expect ALL of them for the next release, so let\n it be 6.6.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 18 Jul 1999 12:00:55 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New version number 6.6 or 7.0"
},
{
"msg_contents": "> Naming it 7.0 IMHO requires transaction log, tuple split over\n> blocks, foreign keys, outer joins and rules of arbitrary\n> size. I don't expect ALL of them for the next release, so let\n> it be 6.6.\n\nI like Jan's more complete list...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sun, 18 Jul 1999 16:34:16 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New version number 6.6 or 7.0"
},
{
"msg_contents": "> > Naming it 7.0 IMHO requires transaction log, tuple split over\n> > blocks, foreign keys, outer joins and rules of arbitrary\n> > size. I don't expect ALL of them for the next release, so let\n> > it be 6.6.\n> \n> I like Jan's more complete list...\n\nOK, 6.6 is it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Jul 1999 13:53:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New version number 6.6 or 7.0"
},
{
"msg_contents": "On Sat, 17 Jul 1999, Thomas Lockhart wrote:\n\n> > Can I have votes on what people want the next version number to be?\n> > We have to brand the release when we start development(PG_VERSION \n> > file). 6.5 probably should have been called 7.0, but we had already \n> > committed to 6.5.\n> \n> We've been making pretty steady progress over the last few releases.\n> I'd suggest that a bump to 7.0 should happen when we've accumulated\n> most of the fixes/improvements from the \"hot list\". We've worked\n> through most of those; here are the ones I'd like to see at or before\n> a 7.0 release:\n> \n> o implement outer joins\n> o merge date/time types and deprecate the old 4-byte ones\n\nMy opinion is that MVCC should have jump'd us to 7.0 in the first\nplace...\n\nIMHO, release for October should be v7.0 ... if the above two get done,\ngreat, if not, no probs...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 19 Jul 1999 11:05:29 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New version number 6.6 or 7.0"
},
{
"msg_contents": "> > We've been making pretty steady progress over the last few releases.\n> > I'd suggest that a bump to 7.0 should happen when we've accumulated\n> > most of the fixes/improvements from the \"hot list\". We've worked\n> > through most of those; here are the ones I'd like to see at or before\n> > a 7.0 release:\n> > \n> > o implement outer joins\n> > o merge date/time types and deprecate the old 4-byte ones\n> \n> My opinion is that MVCC should have jump'd us to 7.0 in the first\n> place...\n> \n> IMHO, release for October should be v7.0 ... if the above two get done,\n> great, if not, no probs...\n\nDue to overwhelming agreement, it is 6.6. I personally vote for 7.0,\nand so do you, but we are outnumbered. We can revisit this as the\nrelease gets closer, but to change it then, I am going to have to change\nPG_VERSION, and that will require initdb for everyone. Perhaps just\nbefore we enter beta, we can discuss it, knowing then what our features\nwill be.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Jul 1999 10:35:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New version number 6.6 or 7.0"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> We can revisit this as the\n> release gets closer, but to change it then, I am going to have to change\n> PG_VERSION, and that will require initdb for everyone. Perhaps just\n> before we enter beta, we can discuss it, knowing then what our features\n> will be.\n\nWe usually cause enough initdb's during a development cycle that another\none doesn't seem like a big problem. Let's leave it at 6.6 for now, and\nwait to see what the feature list looks like when it's time to start\nbeta.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Jul 1999 11:30:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New version number 6.6 or 7.0 "
}
] |
[
{
"msg_contents": "I have marked the version as 6.6. Initdb everyone. You may also want\nto remove config.cache, as I have removed some old stuff.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Jul 1999 14:08:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "6.6 branding"
}
] |
[
{
"msg_contents": "I think I am going to get the award for most maligned piece of code that\nno one can figure out how to improve.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Jul 1999 18:13:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "LIKE indexing in gram.y"
}
] |
[
{
"msg_contents": "Per Leon's recent gripe in the bugs list, I have confirmed that\n\ncreate table t1 (f1 int4, f2 timestamp default text 'now');\n\ndoes *not* work as expected --- the stored constraint expression\nhas been pre-reduced to a timestamp constant, even though you\nget the desired behavior with\n\ncreate table t1 (f1 int4, f2 datetime default text 'now');\n\nI have tracked down the cause of this, and it's a mess.\n\nFirst off, there is a pg_proc entry for converting a text value\nto datetime (proc text_datetime, OID 1351) but there is no similar\nfunction for timestamp. Therefore, the parser's can_coerce_type\nfunction returns \"true\" if asked whether it can coerce text to\ndatetime, but \"false\" for text to timestamp.\n\nSecond, the actual storage of the constraint expression is being\ndone through an incredibly klugy series of hacks. After the grammar\nparses the constraint expression, the expression is converted back\nto text form (!) instead of being output as a parsetree. Then,\nStoreAttrDefault does a parse_and_plan on that text to generate a\nparsetree, which it will use as the executable form of the constraint.\nThis code makes me ill to look at ... for one thing the reverse-\nconversion code is not nearly as smart as it needs to be:\n\ncreate table quotedefault (f1 int4, f2 text default 'abc\\'def');\nERROR: parser: parse error at or near \"def\"\n\nbecause the deparsed text handed to StoreAttrDefault just looks like\n\t'abc'def'\n\nBut the immediate problem is that StoreAttrDefault tries to coerce the\ncompiled expression to the target data type. If it can't get there\nthrough can_coerce_type, it does a forced coercion by reparsing the\nconstraint text with \":: targettype\" added on. (Still another bug:\nsince it doesn't put parentheses around the constraint expression text\nwhen it does that, the typecast will actually be parsed as applied to\nthe last component of the expression, not the whole thing... which could\nlead to the wrong type coming out.) Of course \"text 'now' :: timestamp\"\nwill be reduced to a timestamp constant, and at that point we've lost.\n\nI am not sure what should be done to clean this up. A brute-force\nsolution would be to make sure that there is a text-to-whatever\nconversion function in pg_proc for any type where the type input\nfunction is not necessarily a constant --- but I'm not sure which\ntypes besides timestamp might meet that description. In any case\nI do not care for the code in StoreAttrDefault at all.\n\nI am about to commit parser fixes that ensure a DEFAULT value is\ncorrectly coerced to the column type when it is used (that is,\ntransformInsertStatement now does a coerce_type rather than just\nassuming what is in pg_attrdef is the right type). So, one possible\napproach is to remove the coercion code from StoreAttrDefault\naltogether. That would mean that\n\t\tfield1 datetime 'now'\nwould start acting the same as\n\t\tfield1 datetime text 'now'\ncurrently does: both of them would be coerced to datetime at runtime,\nnot when the constraint expression is created. Given the frequency\nwith which newbies complain about the current behavior, I think that\nthat might be a Good Thing. But it would be a change in behavior,\nand I suppose there are scenarios where you'd like to be able to get\nthe old behavior.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Jul 1999 18:19:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why DEFAULT text 'now' does not work for TIMESTAMP columns"
},
{
"msg_contents": "On Sun, Jul 18, 1999 at 06:19:41PM -0400, Tom Lane wrote:\n<excellent problem analysis snipped>\n> \n> I am about to commit parser fixes that ensure a DEFAULT value is\n> correctly coerced to the column type when it is used (that is,\n> transformInsertStatement now does a coerce_type rather than just\n> assuming what is in pg_attrdef is the right type). So, one possible\n> approach is to remove the coercion code from StoreAttrDefault\n> altogether. That would mean that\n> \t\tfield1 datetime 'now'\n> would start acting the same as\n> \t\tfield1 datetime text 'now'\n> currently does: both of them would be coerced to datetime at runtime,\n> not when the constraint expression is created. Given the frequency\n> with which newbies complain about the current behavior, I think that\n> that might be a Good Thing. But it would be a change in behavior,\n> and I suppose there are scenarios where you'd like to be able to get\n> the old behavior.\n> \n> Comments?\n\nMy only comment: it seems to me that after your fix, one could still get the\nold behavior via something like:\n field1 datetime now()\nor perhaps:\n field1 datetime 'now'::datetime\ncorrect? And the default behavior will now be what most naive users\nexpect. As long as a workaround exists for the cases where someone cares\nabout the table definition time, I wouldn't worry about staying 'bug'\n(or misfeature?) compatible, unless it's an official, committee backed\nSQL standard misfeature, of course. ;-) Sounds good to me.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Sun, 18 Jul 1999 20:48:08 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Why DEFAULT text 'now' does not work for TIMESTAMP\n\tcolumns"
},
{
"msg_contents": "> Per Leon's recent gripe in the bugs list, I have confirmed that\n> create table t1 (f1 int4, f2 timestamp default text 'now');\n> does *not* work as expected --- the stored constraint expression\n> has been pre-reduced to a timestamp constant, even though you\n> get the desired behavior with\n> create table t1 (f1 int4, f2 datetime default text 'now');\n> I have tracked down the cause of this, and it's a mess.\n\nYes. We've had lots of small \"improvements\" in the code (there is\nblood on my hands :) which danced around a fundamental problem: it\nwould be nice to pre-evaluate functions on constants, but there are a\nfew functions/constants (e.g. \"random\" or 'now') which shouldn't be\ndone this way. Functions and types really should have an \"is cachable\"\nattribute so that they can be pre-evaluated when possible.\n\n> First off, there is a pg_proc entry for converting a text value\n> to datetime (proc text_datetime, OID 1351) but there is no similar\n> function for timestamp. Therefore, the parser's can_coerce_type\n> function returns \"true\" if asked whether it can coerce text to\n> datetime, but \"false\" for text to timestamp.\n\nAt the moment, timestamp serves the useful purpose of illustrating how\nannoying a partially implemented and poorly supported feature can be.\nI've been putting off relabeling \"datetime\" and \"timespan\" as\n\"timestamp\" and \"interval\", thinking that it should wait for the major\nrev bump to 7.0. But it really shouldn't wait. This would align the\nbest types in the date/time code with SQL-standard names. The original\ntimestamp and interval code would be killed. That doesn't fix the\nunderlying problems handling defaults, but would stop most complaints\nabout timestamp...\n\n> Second, the actual storage of the constraint expression is being\n> done through an incredibly klugy series of hacks. After the grammar\n> parses the constraint expression, the expression is converted back\n> to text form (!) instead of being output as a parsetree.\n\nYeah, well, originally it was just passed through as a string, but the\nparser couldn't validate the syntax under those circumstances. So I\nhad the parser tokenize it, and then reassemble the string. But\napparently I didn't try very hard to reassemble it correctly.\n\n> This code makes me ill to look at ... \n\nYou should know by now that Postgres internals aren't for a weak\nstomach ;)\n\n> I am not sure what should be done to clean this up. A brute-force\n> solution would be to make sure that there is a text-to-whatever\n> conversion function in pg_proc for any type where the type input\n> function is not necessarily a constant --- but I'm not sure which\n> types besides timestamp might meet that description. In any case\n> I do not care for the code in StoreAttrDefault at all.\n> \n> I am about to commit parser fixes that ensure a DEFAULT value is\n> correctly coerced to the column type when it is used (that is,\n> transformInsertStatement now does a coerce_type rather than just\n> assuming what is in pg_attrdef is the right type). So, one possible\n> approach is to remove the coercion code from StoreAttrDefault\n> altogether. That would mean that\n> field1 datetime 'now'\n> would start acting the same as\n> field1 datetime text 'now'\n> currently does: both of them would be coerced to datetime at runtime,\n> not when the constraint expression is created. Given the frequency\n> with which newbies complain about the current behavior, I think that\n> that might be a Good Thing. 
But it would be a change in behavior,\n> and I suppose there are scenarios where you'd like to be able to get\n> the old behavior.\n\nSorry, how does that change behavior for the worse? I can see it\ntaking a performance hit, but under which circumstances would runtime\nevaluation be counter-intuitive or wrong?\n\nAnd while you are being annoyed by code, how about looking at problems\nwith trying to use indices on constants and on functions calls? I've\nassumed that it could benefit from a judicious application of\ncoerce_type...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 19 Jul 1999 06:53:39 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Why DEFAULT text 'now' does not work for TIMESTAMP\n\tcolumns"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Yes. We've had lots of small \"improvements\" in the code (there is\n> blood on my hands :) which danced around a fundamental problem: it\n> would be nice to pre-evaluate functions on constants, but there are a\n> few functions/constants (e.g. \"random\" or 'now') which shouldn't be\n> done this way. Functions and types really should have an \"is cachable\"\n> attribute so that they can be pre-evaluated when possible.\n\nYes. There is a proiscachable column in pg_proc, but it doesn't look\nlike it currently contains useful data, nor do I find any code looking\nat it. We need to put trustworthy data into that column and then we\ncan start using it to tell whether functions are safe to pre-evaluate\non constants. In the case at hand, timestamp_in() would have to be\nmarked unsafe. I'd like to add a generalized constant-subexpression-\ncollapser to the optimizer, and it would need something like that to\ntell it whether to collapse functions whose inputs are constants.\n(Once we did that, the parser would no longer need to worry about\npre-evaluating type conversion functions on constants, which is\neffectively what it does now in selected cases...)\n\n> I've been putting off relabeling \"datetime\" and \"timespan\" as\n> \"timestamp\" and \"interval\", thinking that it should wait for the major\n> rev bump to 7.0. But it really shouldn't wait.\n\nAs long as both sets of names are accepted, I think it probably wouldn't\nmatter if the implementation of one of them changes. I wouldn't like to\nhave my tables containing \"datetime\" suddenly stop working though...\n\n> Yeah, well, originally it was just passed through as a string, but the\n> parser couldn't validate the syntax under those circumstances. So I\n> had the parser tokenize it, and then reassemble the string. But\n> apparently I didn't try very hard to reassemble it correctly.\n\nI was thinking about letting the parser output the same parsetree as\nit does for everything else (thereby saving a chunk of grammar code)\nand then building a little subroutine that could deparse a parsetree\nto text. It wouldn't be that much bigger than the part of the grammar\nthat's doing the task ... for that matter, I think Jan may already have\nsuch a thing somewhere in the rules support.\n\n>> So, one possible\n>> approach is to remove the coercion code from StoreAttrDefault\n>> altogether. That would mean that\n>> field1 datetime 'now'\n>> would start acting the same as\n>> field1 datetime text 'now'\n>> currently does: both of them would be coerced to datetime at runtime,\n>> not when the constraint expression is created. Given the frequency\n>> with which newbies complain about the current behavior, I think that\n>> that might be a Good Thing. But it would be a change in behavior,\n>> and I suppose there are scenarios where you'd like to be able to get\n>> the old behavior.\n\n> Sorry, how does that change behavior for the worse? I can see it\n> taking a performance hit, but under which circumstances would runtime\n> evaluation be counter-intuitive or wrong?\n\nI'm not sure it would ever be counter-intuitive, but I can just see\nsomeone coming along and saying \"Hey, I *wanted* to store the time\nof creation of the table as the default!\". 
There might be more\nplausible examples with other non-cacheable functions, but I haven't\nthought of any offhand.\n\nIn any case, you could get that result by evaluating the function in\na separate command and pasting its result into the CREATE TABLE, so\nthere's no serious loss of functionality.\n\n> And while you are being annoyed by code, how about looking at problems\n> with trying to use indices on constants and on function calls?\n\n\"Indices on constants\"? I'm confused...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Jul 1999 11:05:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Why DEFAULT text 'now' does not work for TIMESTAMP\n\tcolumns"
},
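A minimal sketch of the behavioral difference under discussion, assuming a 6.5-era server (the table and column names here are made up):

    CREATE TABLE t_frozen  (i int4, ts datetime DEFAULT 'now');
    CREATE TABLE t_runtime (i int4, ts datetime DEFAULT text 'now');
    -- ...some time later...
    INSERT INTO t_frozen  (i) VALUES (1);
    INSERT INTO t_runtime (i) VALUES (1);
    -- t_frozen.ts shows roughly the table-creation time: the string
    -- constant was coerced to datetime (and thus evaluated) when the
    -- default was stored.
    -- t_runtime.ts shows the insertion time: the text constant is
    -- coerced to datetime at runtime, as described above.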
{
"msg_contents": "> As long as both sets of names are accepted, I think it probably wouldn't\n> matter if the implementation of one of them changes. I wouldn't like to\n> have my tables containing \"datetime\" suddenly stop working though...\n\nNo, we would have the names aliased in the parser, as we do for\nint->int4, etc.\n\n> > And while you are being annoyed by code, how about looking at problems\n> > with trying to use indices on constants and on functions calls?\n> \"Indices on constants\"? I'm confused...\n\nI just phrased it poorly. I was referring to the int2_col = int4_val\nproblem...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 19 Jul 1999 15:23:44 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Why DEFAULT text 'now' does not work for TIMESTAMP\n\tcolumns"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> As long as both sets of names are accepted, I think it probably wouldn't\n>> matter if the implementation of one of them changes. I wouldn't like to\n>> have my tables containing \"datetime\" suddenly stop working though...\n\n> No, we would have the names aliased in the parser, as we do for\n> int->int4, etc.\n\nOh, I see. And so my next pg_dump output would magically have the\nstandard names. That's cool.\n\n>>>> And while you are being annoyed by code, how about looking at problems\n>>>> with trying to use indices on constants and on functions calls?\n>> \"Indices on constants\"? I'm confused...\n\n> I just phrased it poorly. I was referring to the int2_col = int4_val\n> problem...\n\nAh. I've got that on my to-do list, but I've got no good idea for a\nsolution yet... see prior traffic...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Jul 1999 11:55:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Why DEFAULT text 'now' does not work for TIMESTAMP\n\tcolumns"
}
] |
[
{
"msg_contents": "Would someone who has the 6.5.1 release tree please update this file\nform me. It is docs/README.NT:\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nFrom: \"Joost Kraaijeveld\" <[email protected]>\nTo: \"Pgsql-Ports@Postgresql. Org\" <[email protected]>\nSubject: RE: [PORTS] Re: psql under win32\nDate: Wed, 21 Apr 1999 07:07:47 +0200\nMessage-ID: <[email protected]>\nMIME-Version: 1.0\n\nInstalling PostgreSQL on NT:\n\n---------------------------------------------------------------------------\n\nIt can be done by done by typing configure, make and make install.\n\n1. Install the Cygwin package\n2. Update to EGCS 1.1.2\n (This may be optional.)\n\n1. Install the Andy Piper Tools (http://www.xemacs.freeserve.co.uk/)\n (This may be optional.)\n\n1. Download the Cygwin32 IPC Package by Ludovic LANGE \n http://www.multione.capgemini.fr:80/tools/pack_ipc/current.tar.gz\n2. Untar the package and follow the readme instructions.\n3. I tested 1.03.\n4. I used the \\cygwin-b20\\h-i568-cygwin32\\i586-cygwin32\\lib and\n\\cygwin-b20\\h-i568-cygwin32\\i586-cygwin32\\include\\sys instead of the\n/usr/local/lib and usr/local/include/sys.\n\n1. Download the current version of PostgreSQL.\n2. Untar the package.\n3. Copy the files from \\pgsql\\src\\win32 according to the readme file.\n3. Edit \\pgsql\\src\\template\\cygwin32 if needed (I had to adjust the YFLAGS\npath).\n4. ./configure\n5. make\n6. create the directory /usr/local/pgsql manually: the mkdir cannot create a\ndirectory 2 levels deep in one step.\n7. make install\n8. cd /usr/lical/pgsql/doc\n9. make install\n10. Set the environmental data\n11. Initdb --username=jkr (do not run this command as administrator)\n\n12. Open a new Cygwin command prompt\n13. Start \"ipc-deamon&\" (background proces)\n14. Start \"postmaster -i 2>&1 > /tmp/postgres.log &\" (background proces)\n15. Start \"tail -f /tmp/postgres.log\" to see the messages\n\n16. cd /usr/src/pgsql/src/test/regress\n17. make all runtest\n\nAll test should be run, allthought the latest snapshot I tested (18-4)\nappears to have some problems with locking.\n\nJoost\n\n[Added by bjm]\n\nBy default, PostgreSQL clients like psql communicate by default using\nunix domain sockets, which don't work on NT. Start the postmaster with\n-i, and when connecting to the database from a client, set the PGHOST\nenvironment variable to 'localhost' or supply the hostname on the\ncommand line.",
"msg_date": "Sun, 18 Jul 1999 22:41:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Please add to 6.5.1"
}
] |
[
{
"msg_contents": "I don't see this feature in Oracle or Gray' book so I would\nlike to know what do you think about ability to have more than\n1 log to let different transactions write to different logs,\nin parallel. All log records from particular transaction will\ngo in one log file (one fsync on commit). Transactions will\nchoose log file to use in circle order. Each log record will have \nunique id shared among all logs to order things.\nBy placing logs on different disk one could significantly\nincrease performance.\n\nComments?\n\nVadim\n",
"msg_date": "Mon, 19 Jul 1999 11:50:41 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "WAL: parallel logging"
},
{
"msg_contents": "> I don't see this feature in Oracle or Gray' book so I would\n> like to know what do you think about ability to have more than\n> 1 log to let different transactions write to different logs,\n> in parallel. All log records from particular transaction will\n> go in one log file (one fsync on commit). Transactions will\n> choose log file to use in circle order. Each log record will have \n> unique id shared among all logs to order things.\n> By placing logs on different disk one could significantly\n> increase performance.\n\nIn most cases, the problem with log writes it not throughput, but\ngetting the disk head to the proper sector to fsync the data.\n\nSomeone said that large Sybase sites use a separate drive for logs, so\nthe head stays in the proper place to write the logs.\n\nThis is just a guess on my part.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Jul 1999 00:31:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] WAL: parallel logging"
},
{
"msg_contents": "\nSybase uses something called user log cache's (ULC) also sometimes known\nas private log caches. The point seems not to be disk contention but\ncontention for the log spinlock. So, each connection gets a certain sized\nlog cache with is flushed when full or at the end of a transaction, so\nthat you get fewer but larger writes to the log.\n\nBrian\n\nOn Mon, 19 Jul 1999, Bruce Momjian wrote:\n\n> > I don't see this feature in Oracle or Gray' book so I would\n> > like to know what do you think about ability to have more than\n> > 1 log to let different transactions write to different logs,\n> > in parallel. All log records from particular transaction will\n> > go in one log file (one fsync on commit). Transactions will\n> > choose log file to use in circle order. Each log record will have \n> > unique id shared among all logs to order things.\n> > By placing logs on different disk one could significantly\n> > increase performance.\n> \n> In most cases, the problem with log writes it not throughput, but\n> getting the disk head to the proper sector to fsync the data.\n> \n> Someone said that large Sybase sites use a separate drive for logs, so\n> the head stays in the proper place to write the logs.\n> \n> This is just a guess on my part.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> \n\n",
"msg_date": "Mon, 19 Jul 1999 05:09:49 -0400 (EDT)",
"msg_from": "Brian Bruns <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] WAL: parallel logging"
}
] |
[
{
"msg_contents": "I have gotten the right cvs tree, and updated it myself.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Jul 1999 01:08:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "README.NT"
}
] |
[
{
"msg_contents": "Ditto.\n>> \n>> > Naming it 7.0 IMHO requires transaction log, tuple split over\n>> > blocks, foreign keys, outer joins and rules of arbitrary\n>> > size. I don't expect ALL of them for the next release, so let\n>> > it be 6.6.\n>> \n>> I like Jan's more complete list...\n>> \n>> - Thomas\n>> \nMikeA\n",
"msg_date": "Mon, 19 Jul 1999 09:54:55 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] New version number 6.6 or 7.0"
}
] |
[
{
"msg_contents": "Once a new tag or release has been applied to the CVS tree, is a new\ncheckout required, or can I just update with the new tag? Do I need to\nspecify a tag when updating? If not, which branch is used? Are there\nseparate branches for 6.5 bug-fixes and 6.6 development?\n\nMikeA\n\n",
"msg_date": "Mon, 19 Jul 1999 10:41:05 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "CVS"
},
{
"msg_contents": "> Once a new tag or release has been applied to the CVS tree, is a new\n> checkout required, or can I just update with the new tag? Do I need \n> to specify a tag when updating? If not, which branch is used? Are \n> there separate branches for 6.5 bug-fixes and 6.6 development?\n\nIf you want to continue working out of the main branch, you can just\ndo a\n\n cvs update -PdA pgsql\n\nto get the latest updates. If you want to work on the REL6_5_PATCHES\nbranch, you will need to do a separate checkout (afaik), using\n\n cvs checkout -rREL6_5_PATCHES pgsql\n\nand, of course, you will then be working out of the branch, and never\nsee the main branch again. The branch tags are \"sticky\", so after\nchecking out REL6_5_PATCHES a subsequent\n\n cvs update -Pd pgsql\n\nwill continue to get you that branch.\n\nThat is another reason why using CVSup to maintain a local copy of the\ncvs repository is so convenient; it maintains info on all branches, so\nthere is no extra work over the network to check out another branch.\n\nbtw, although I recommended using the \"-PdA\" set of switches when\nworking on the main branch, the \"-A\" is in fact somewhat dangerous in\nthe sense that once you have checked out a branch such as\nREL6_5_PATCHES then an inadvertent update using \"-A\" will have you\nlooking at the main branch instead. It may be better to not use that\nswitch at all...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 19 Jul 1999 14:14:40 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CVS"
},
{
"msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> Once a new tag or release has been applied to the CVS tree, is a new\n> checkout required, or can I just update with the new tag? Do I need to\n> specify a tag when updating? If not, which branch is used? Are there\n> separate branches for 6.5 bug-fixes and 6.6 development?\n\nThe tip of the tree (checkout with no branch or tag) is always the\nlatest code; currently it is 6.6-to-be. For the last couple of versions\nwe have made a practice of starting a branch for back-patch corrections\nto existing releases. For example:\n\n 6.3\n |\n |\n 6.4\n | \\\n | 6.4.1\n 6.5 \\\n / | 6.4.2\n 6.5.1 |\n / current\n 6.5.2?? |\n\n\n(In this case, since we didn't split the tree until almost 6.5.1\nrelease time, the left-side branch actually diverges from below 6.5.)\n\nSo: \"cvs checkout pgsql\" for latest and greatest; \"cvs checkout -rREL6_4\npgsql\" for the 6.4 stable release series; \"cvs checkout -rREL6_5_PATCHES\npgsql\" for the 6.5 branch. (Marc will have to answer for the\ninconsistency in the branch tags ;-).) Do this in separate directories\nif you want to keep more than one workspace. For example I currently\nhave\n\t/users/postgres/pgsql/...\t\tcurrent sources\n\t/users/postgres/REL6_5/pgsql/...\t6.5 branch\n\nso I really did\n\tmkdir REL6_5\n\tcd REL6_5\n\tcvs checkout -rREL6_5_PATCHES pgsql\nto get a working copy of the 6.5 branch.\n\nOnce you've done any of those, a simple \"cvs update\" within the toplevel\npgsql directory will get you updates appropriate to the branch --- since\nbranch tags are \"sticky\", cvs knows which branch to pull.\n\nIn particular, to answer your question: the fact that a branch was\ncreated last week doesn't affect the status of a checkout of the tip.\nIt's still the tip, free of sticky tags.\n\nIf there is any further activity in the 6.5 branch, it'd be to produce a\n6.5.2 bug-fix release. We don't generally do that except for really\ncritical bugs, since double-patching a bug in both the tip and a branch\nis a pain.\n\n(The commercial-support venture might result in more bugfix activity\non old branches, since I believe one facet of that idea is better\nsupport of stable releases, but not much has changed yet.)\n\nAs far as I know, no one is currently using branches for work that\nwill eventually be merged back to the main branch, but it could be\ndone if we had any subtasks being shared by many people. (The\ndreaded fmgr interface change may have to go like that...)\n\nThere are also tags for some specific past milestones (use cvs log on a\nfew files to see what they are), but they're a little bit haphazard.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Jul 1999 10:19:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CVS "
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> If you want to continue working out of the main branch, you can just\n> do a\n> cvs update -PdA pgsql\n\nBTW, you can stick any switches you use standardly into a ~/.cvsrc\nfile in your home directory. Mine contains\n\ncvs -z3\nupdate -d -P\ncheckout -P\n\nwhich gives -z3 as a global option to all cvs commands (good for remote\nusage of hub.org, but a waste of cycles if you are using a local CVSup\ndirectory) plus -dP for updates and -P for checkout, which are necessary\nfor sane treatment of subdirectories IMHO. So I just say \"cvs update\"\nand worry not...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Jul 1999 11:34:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CVS "
}
] |
[
{
"msg_contents": "Good question. I was going to do a fresh checkout tomorrow, but it would\nbe interesting to know about this.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Ansley, Michael [mailto:[email protected]]\nSent: 19 July 1999 09:41\nTo: '[email protected]'\nSubject: [HACKERS] CVS\n\n\nOnce a new tag or release has been applied to the CVS tree, is a new\ncheckout required, or can I just update with the new tag? Do I need to\nspecify a tag when updating? If not, which branch is used? Are there\nseparate branches for 6.5 bug-fixes and 6.6 development?\n\nMikeA\n\n",
"msg_date": "Mon, 19 Jul 1999 10:09:59 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] CVS"
},
{
"msg_contents": "On Mon, 19 Jul 1999, Peter Mount wrote:\n\n> Good question. I was going to do a fresh checkout tomorrow, but it would\n> be interesting to know about this.\n\nIf you do a fresh checkout, it will take the 'current' or 'ongoing' trunk\nof the tree...if you want to work on a branch, you have to stipulate the\n-r option to check that one out seperately...\n\n > \n> Peter\n> \n> -- \n> Peter Mount\n> Enterprise Support\n> Maidstone Borough Council\n> Any views stated are my own, and not those of Maidstone Borough Council.\n> \n> \n> \n> -----Original Message-----\n> From: Ansley, Michael [mailto:[email protected]]\n> Sent: 19 July 1999 09:41\n> To: '[email protected]'\n> Subject: [HACKERS] CVS\n> \n> \n> Once a new tag or release has been applied to the CVS tree, is a new\n> checkout required, or can I just update with the new tag? Do I need to\n> specify a tag when updating? If not, which branch is used? Are there\n> separate branches for 6.5 bug-fixes and 6.6 development?\n> \n> MikeA\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 19 Jul 1999 11:03:18 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] CVS"
},
{
"msg_contents": "> Good question. I was going to do a fresh checkout tomorrow, but it would\n> be interesting to know about this.\n> \n> \n> Once a new tag or release has been applied to the CVS tree, is a new\n> checkout required, or can I just update with the new tag? Do I need to\n> specify a tag when updating? If not, which branch is used? Are there\n> separate branches for 6.5 bug-fixes and 6.6 development?\n> \n\nCurrent 6.6 is cvs without tags. 6.5.1 is tag REL6_5_PATCHES.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Jul 1999 10:29:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CVS"
}
] |
[
{
"msg_contents": "Hi, \n\nI'm back from vacation.\n\nThere seems to be a typo in os.h. It says '#if if defined(__i386__)'. I\nthink there's one if too many. Could anyone please remove it. I do not have\nthe whole source checked out, so I cannot do it myself at the moment.\n\nMichael\n\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Mon, 19 Jul 1999 12:06:20 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "os.h"
},
{
"msg_contents": "> Hi, \n> \n> I'm back from vacation.\n> \n> There seems to be a typo in os.h. It says '#if if defined(__i386__)'. I\n> think there's one if too many. Could anyone please remove it. I do not have\n> the whole source checked out, so I cannot do it myself at the moment.\n\nYou don't like the #if if? :-)\n\nFixed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Jul 1999 10:30:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] os.h"
}
] |
[
{
"msg_contents": "This was exactly what I was looking for, thanks Tom.\n>> \n>> The tip of the tree (checkout with no branch or tag) is always the\n>> latest code; currently it is 6.6-to-be. For the last couple \n>> of versions\n>> we have made a practice of starting a branch for back-patch \n>> corrections\n>> to existing releases. For example:\n>> \n>> 6.3\n>> |\n>> |\n>> 6.4\n>> | \\\n>> | 6.4.1\n>> 6.5 \\\n>> / | 6.4.2\n>> 6.5.1 |\n>> / current\n>> 6.5.2?? |\n>> \n>> \n\n>> If there is any further activity in the 6.5 branch, it'd be \n>> to produce a\n>> 6.5.2 bug-fix release. We don't generally do that except for really\n>> critical bugs, since double-patching a bug in both the tip \n>> and a branch\n>> is a pain.\nDouble-patching is a pain, but I thought that that was the point of using\nCVS to do your branching. AFAIK, CVS will merge the bug-fixes in, say, the\n6.5.1 branch back into the main branch. Because you want to fix the bugs in\n6.5 into 6.5.1, without having to double-patch, but new development must\nonly go into the main branch. So, when 6.5.1 is released, it is merged back\ninto the main branch to pass the fixes over, and also carries on to 6.5.2 in\na continuation of the existing branch.\n\nAnyway, ideas for Marc.\n\nThanks again, this is great. Should go into the developers docs.\n\nMikeA\n",
"msg_date": "Mon, 19 Jul 1999 16:26:55 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] CVS "
},
{
"msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n>>> If there is any further activity in the 6.5 branch, it'd be to\n>>> produce a 6.5.2 bug-fix release. We don't generally do that except\n>>> for really critical bugs, since double-patching a bug in both the\n>>> tip and a branch is a pain.\n\n> Double-patching is a pain, but I thought that that was the point of using\n> CVS to do your branching. AFAIK, CVS will merge the bug-fixes in, say, the\n> 6.5.1 branch back into the main branch. Because you want to fix the bugs in\n> 6.5 into 6.5.1, without having to double-patch, but new development must\n> only go into the main branch. So, when 6.5.1 is released, it is merged back\n> into the main branch to pass the fixes over, and also carries on to 6.5.2 in\n> a continuation of the existing branch.\n\nThe trouble is that the tip usually diverges fast enough that mechanical\napplication of the same diffs to tip and stable branch doesn't work\nvery well :-(.\n\nAlso, our usual practice is to prove out a bugfix in the tip and then\nconsider whether to apply it to stable branches. I'm not sure whether\nCVS supports that as easily as merging a branch to the tip, but I'd\nbe *really* wary of mechanical diff transfer to stable branches...\nif the diff extracts too little or too much of the changes in the\ntip file, you might not find out till too late.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Jul 1999 11:41:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CVS "
}
] |
[
{
"msg_contents": "> Bruce.\n> \n> May be, you'll note this 6.5 revisions as the first _commerce quality \n> revisions_.\n> \n> I estimate 6.3 as the first _stable_ revision, and new 6.5 as the first \n> _high stable_ revision versions.\n\nYes. It seems 6.5 is much more stable, but it is hard for us to know\nthat, even now, because we don't get many \"big picture\" reports about\nreleases like this. I am sending this over to the hackers list for\ncomment.\n\n\n> \n> It's the very important point. All UNIX systems for todays get surrender \n> to NT systems because NT have embedded (not embedded but not expansive \n> and almost embedded) SQL server and it allow developers to use SQL for \n> the storing data in the middle-range projects. For now, it was impossible \n> fro the UNIX because you had a choice - to use extra expansive ORACLE \n> (huge monstrous system) or to use DB data base. Not MYSQL not PSQL was \n> stable enougph to store any critical data.\n> \n> This days there is new point when you can announce PSQL as the _almost \n> embedded_ data base. I think this system split in future into the two \n> branches - first, free, withough support and some extra tools, for the \n> embedded data bases used in the cheap projects, and commercial branch \n> with the extra tools and extra possibiloities (for example, threading is \n> not important for the free release). Of cource, I prefere to have not \n> commercial branch at all, but it's the real life...\n> \n> Anyway, it's good if someone announce this versions as _ready for the \n> wide usage_.\n> \n> Alex.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Jul 1999 10:27:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [ANNOUNCE] PostgreSQL status report"
}
] |
[
{
"msg_contents": "> [I sent this to pgsql-docs, which bounced it as being over 40k. I'll leave\n> you to publicise it as you wish.]\n\nThe \"patches\" list is the only one without that restriction...\n\n> I attach a patch containing corrections to the Tutorial and User's Guide for\n> spelling, grammar, euphony and (occasionally) content. I've got as far as\n> ALTER TABLE in chapter 14 of the User's Guide. I'll get on to the others\n> sometime.\n\nGreat! These all look good. Please do *not* make more updates to any\nsgml/ref/*.sgml files until I've worked through them to do a final\nmerge of the old man pages. Since I'm reading them all to do the\nmerge, I'm making some fixes as I go, and these are likely to conflict\nwith work you would do. I'm hoping to be finished in a few days, but\nfor a few pages it goes slowly.\n\n> Please note that <quote>...</quote> produces only surrounding spaces in the\n> Postscript versions as printed by a HP Laserjet 6MP and by ghostscript.\n> I have not attempted to change these.\n\nThat's not a good feature...\n\n> Tom, is it possible to make the printed text fully justified? It would\n> look a lot better, I think.\n\nIt might be possible in the future. At the moment, I can't do much in\nthe way of formatting style adjustments because of small glitches in\nthe way Applix reads the RTF file I generate from the sgml. But I hope\nto talk to Applix about that soon.\n\nThanks again for the great patches!\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 19 Jul 1999 14:29:31 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Corrections to manuals"
}
] |
[
{
"msg_contents": "Thanks all for this info. I'm presuming that, because no-one has been rude\nyet, that a lot of this is not in the developers manual, or FAQ yet. Would\nit be worthwhile putting it there? Just a quick paragraph with the latest\nsettings, and default options for particular types of developers (e.g.:\nsomebody who wants to hack the source and contribute, somebody who wants the\nlatest patch tree, somebody who wants access to the latest source, but no\ncontributions, etc).\n\nMikeA\n\n\n>> -----Original Message-----\n>> From: Tom Lane [mailto:[email protected]]\n>> Sent: Monday, July 19, 1999 5:41 PM\n>> To: Ansley, Michael\n>> Cc: '[email protected]'\n>> Subject: Re: [HACKERS] CVS \n>> \n>> \n>> \"Ansley, Michael\" <[email protected]> writes:\n>> >>> If there is any further activity in the 6.5 branch, it'd be to\n>> >>> produce a 6.5.2 bug-fix release. We don't generally do \n>> that except\n>> >>> for really critical bugs, since double-patching a bug in both the\n>> >>> tip and a branch is a pain.\n>> \n>> > Double-patching is a pain, but I thought that that was the \n>> point of using\n>> > CVS to do your branching. AFAIK, CVS will merge the \n>> bug-fixes in, say, the\n>> > 6.5.1 branch back into the main branch. Because you want \n>> to fix the bugs in\n>> > 6.5 into 6.5.1, without having to double-patch, but new \n>> development must\n>> > only go into the main branch. So, when 6.5.1 is released, \n>> it is merged back\n>> > into the main branch to pass the fixes over, and also \n>> carries on to 6.5.2 in\n>> > a continuation of the existing branch.\n>> \n>> The trouble is that the tip usually diverges fast enough \n>> that mechanical\n>> application of the same diffs to tip and stable branch doesn't work\n>> very well :-(.\n>> \n>> Also, our usual practice is to prove out a bugfix in the tip and then\n>> consider whether to apply it to stable branches. I'm not \n>> sure whether\n>> CVS supports that as easily as merging a branch to the tip, but I'd\n>> be *really* wary of mechanical diff transfer to stable branches...\n>> if the diff extracts too little or too much of the changes in the\n>> tip file, you might not find out till too late.\n>> \n>> \t\t\tregards, tom lane\n>> \n",
"msg_date": "Mon, 19 Jul 1999 18:27:21 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] CVS "
},
{
"msg_contents": "> Thanks all for this info. I'm presuming that, because no-one has been rude\n> yet, that a lot of this is not in the developers manual, or FAQ yet.\n\nNaw, we're just a bunch of polite people, at least today ;)\n\n> Would\n> it be worthwhile putting it there? Just a quick paragraph with the latest\n> settings, and default options for particular types of developers (e.g.:\n> somebody who wants to hack the source and contribute, somebody who wants the\n> latest patch tree, somebody who wants access to the latest source, but no\n> contributions, etc).\n\nPlease look at the appendix entitled \"The CVS Repository\". The source\nis in cvs.sgml (and the derived html or hardcopy appendix is in the\nintegrated docs or developer's guide). Any and all updates or fixes or\nrewrites would be welcome. Also, we should have a good cross-link to\nit on the web page if we don't already. \n\nPerhaps current values could go into the FAQ, but the general\nprinciples and today' settings should be in the main docs. So far,\nwe've had enough docs changes that we get new \"big docs\" for every\nrelease, though sometime (way in the future) we might have things\nsettle down and be able to not update them every release.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 19 Jul 1999 16:58:50 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CVS"
}
] |
[
{
"msg_contents": "Package: postgresql-contrib\nVersion: 6.5-2\n\nDear PostgreSQL hackers,\n\nI have sent this message to pgsql-general and so far received no reply. \nThis bug seems to be 100% reproducible on Linux (i386 and sparc). If this\nproblem is specific to Debian, then it would help me to know that, too.\n\nCascaded updates tend to write old data on top of new, as the following\nminimal example shows:\n\nCREATE TABLE \"tipos\" (\n\t\"tipo\" text NOT NULL,\n\t\"designacao\" text DEFAULT '');\nCREATE TABLE \"duracoes\" (\n\t\"tipo\" text DEFAULT '' NOT NULL,\n\t\"duracao\" timespan NOT NULL);\n\nCREATE FUNCTION \"check_primary_key\" ( ) RETURNS opaque AS '/usr/lib/postgresql/modules/refint.so' LANGUAGE 'C';\nCREATE FUNCTION \"check_foreign_key\" ( ) RETURNS opaque AS '/usr/lib/postgresql/modules/refint.so' LANGUAGE 'C';\n\nCOPY \"tipos\" FROM stdin;\nP\tPr�tica\nT\tTe�rica\nS\tSemin�rio\nTP\tTeorico-pr�tica\n\\.\nCOPY \"duracoes\" FROM stdin;\nP\t@ 3 hours\nT\t@ 1 hour\nT\t@ 1 hour 30 mins\nTP\t@ 1 hour 30 mins\nTP\t@ 2 hours\nTP\t@ 3 hours\n\\.\nCREATE UNIQUE INDEX \"tipos_pkey\" on \"tipos\" using btree ( \"tipo\" \"text_ops\" );\nCREATE UNIQUE INDEX \"duracoes_pkey\" on \"duracoes\" using btree ( \"tipo\" \"text_ops\", \"duracao\" \"timespan_ops\" );\nCREATE TRIGGER \"tipos_trigger_d\" BEFORE DELETE ON \"tipos\" FOR EACH ROW EXECUTE PROCEDURE check_foreign_key ('1', 'cascade', 'tipo', '\"duracoes\"', 'tipo');\nCREATE TRIGGER \"tipos_trigger_u\" AFTER UPDATE ON \"tipos\" FOR EACH ROW EXECUTE PROCEDURE check_foreign_key ('1', 'cascade', 'tipo', '\"duracoes\"', 'tipo');\nCREATE TRIGGER \"tipos_duracoes\" BEFORE INSERT OR UPDATE ON \"duracoes\" FOR EACH ROW EXECUTE PROCEDURE check_primary_key ('tipo', '\"tipos\"', 'tipo');\n\nAfter setting up a database as described above, do the following:\n\n=> update tipos set tipo='Tx' where tipo='T';\nUPDATE 1\n=> select * from tipos;\ntipo|designacao \n----+---------------\nP |Pr�tica \nS |Semin�rio \nTP |Teorico-pr�tica\nTx |Te�rica \n(4 rows)\n\n=> select * from duracoes;\ntipo|duracao \n----+----------------\nP |@ 3 hours \nTP |@ 1 hour 30 mins\nTP |@ 2 hours \nTP |@ 3 hours \nTx |@ 1 hour \nTx |@ 1 hour 30 mins\n(6 rows)\n\nSo far so good! Now:\n\n=> update tipos set tipo='Px' where tipo='P';\nUPDATE 1\n=> select * from tipos;\ntipo|designacao \n----+---------------\nS |Semin�rio \nTP |Teorico-pr�tica\nTx |Te�rica \nPx |Pr�tica \n(4 rows)\n\n=> select * from duracoes;\ntipo|duracao \n----+----------------\nTP |@ 1 hour 30 mins\nTP |@ 2 hours \nTP |@ 3 hours \nTx |@ 1 hour \nTx |@ 1 hour 30 mins\nTx |@ 3 hours \n^^ should be Px, NOT Tx\n(6 rows)\n\nThis makes cascaded updates unusable, unfortunately... I can reproduce the\nsame behaviour on a PC, as well. I am running slink, so I compiled the\npackages myself, from the debianized sources.\n\nThanks for any help!\n\nCarlos Fonseca\n\n\n-- System Information\nDebian Release: 2.1\nKernel Version: Linux diana 2.2.7 #1 Sat May 8 19:57:23 WEST 1999 sparc unknown\n\nVersions of the packages postgresql-contrib depends on:\nii postgresql 6.5-2 Object-relational SQL database, descended fr\n\n\n\n\n",
"msg_date": "Mon, 19 Jul 1999 21:23:22 +0100 (WET)",
"msg_from": "Carlos Fonseca <[email protected]>",
"msg_from_op": true,
"msg_subject": "(Debian Bug#41223) cascaded updates with refint insert bogus data "
},
{
"msg_contents": "Carlos Fonseca wrote:\n> \n> I have sent this message to pgsql-general and so far received no reply.\n> This bug seems to be 100% reproducible on Linux (i386 and sparc). If this\n> problem is specific to Debian, then it would help me to know that, too.\n> \n> Cascaded updates tend to write old data on top of new, as the following\n> minimal example shows:\n\nUnfortunately, when I wrote refint.c ~ 2.5 years ago I used\nDELETE for both cascade UPDATE and DELETE. I don't remember why.\nMassimo Lambertini ([email protected]) changed\nrefint.c to performe UPDATE of foreign keys on UPDATE of primary\nones, but he did error: he uses 1st update new primary key value \nin UPDATE _foreign_table_ SET and so execution plan is prepared,\nsaved, used with this value. Paramater ($1...$n) should be used there.\nI have no time to fix it, sorry. Ask him or learn PL/pgSQL and write\ntrigger youself.\n\nVadim\n",
"msg_date": "Tue, 20 Jul 1999 11:19:42 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] (Debian Bug#41223) cascaded updates with refint insert\n\tbogus data"
}
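Along the lines Vadim suggests, the cascade can be done with a PL/pgSQL trigger, where the UPDATE inside the function body is planned with the NEW value as a parameter rather than a frozen constant. A minimal, untested sketch for the tipos/duracoes example above (the function and trigger names are made up):

    CREATE FUNCTION cascade_tipo_upd () RETURNS opaque AS '
    BEGIN
        -- propagate a changed primary key to the referencing table
        IF NEW.tipo <> OLD.tipo THEN
            UPDATE duracoes SET tipo = NEW.tipo WHERE tipo = OLD.tipo;
        END IF;
        RETURN NEW;
    END;
    ' LANGUAGE 'plpgsql';

    CREATE TRIGGER tipos_cascade_u AFTER UPDATE ON tipos
        FOR EACH ROW EXECUTE PROCEDURE cascade_tipo_upd ();

This assumes the plpgsql language has been created in the database, and it would replace the check_foreign_key cascade trigger on tipos rather than supplement it.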
] |
[
{
"msg_contents": "Hey hackers - \nI don't know if this is fixed in 6.5.1 or not, but the definition field\nin the pg_views system table is broken in 6.5.0, and this breaks view\nediting in pgaccess. The problem is that table qualifications are left\noff the fieldnames in both the SELECT clause and the WHERE clause. Minimal\nexample given below:\n\n\ntest=> create table t1 (textid int4, nextid int4, words text);\nCREATE\ntest=> create table t2 (nextid int4, words text);\nCREATE\ntest=> create view v1 as select t1.textid,t1.words,t2.words as words2\nfrom t1,t2 where t1.nextid=t2.nextid;\nCREATE\ntest=> insert into t1 values (2,1,'some other text');\nINSERT 384454 1\ntest=> insert into t2 values (1,'joint text');\nINSERT 384455 1\ntest=> insert into t1 values (1,1,'some text');\nINSERT 384456 1\ntest=> select * from v1;\ntextid|words |words2 \n------+---------------+----------\n 2|some other text|joint text\n 1|some text |joint text\n(2 rows)\n\ntest=> select definition from pg_views where viewname='v1';\ndefinition \n-----------------------------------------------------------------------\nSELECT \"textid\", \"words\", \"words\" AS \"words2\" FROM \"t1\", \"t2\" WHERE\n\"nextid\" = \"nextid\"; (1 row)\n\ntest=> SELECT \"textid\", \"words\", \"words\" AS \"words2\" FROM \"t1\", \"t2\"\nWHERE \"nextid\" = \"nextid\";\nERROR: Column 'words' is ambiguous\ntest=> \n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Mon, 19 Jul 1999 17:06:34 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "VIEW definitions broken in 6.5.0"
},
{
"msg_contents": ">\n> Hey hackers -\n> I don't know if this is fixed in 6.5.1 or not, but the definition field\n> in the pg_views system table is broken in 6.5.0, and this breaks view\n> editing in pgaccess. The problem is that table qualifications are left\n> off the fieldnames in both the SELECT clause and the WHERE clause. Minimal\n> example given below:\n\nOh,\n\n I see the problem. It is because the rule backparsing utility\n prints the relation name only if it is referenced by another\n name (... FROM t1 X, ...).\n\n I'll change it in the v6.5 tree to print it allways. For v6.6\n I'll work on rule recompilation which requires storing the\n original query text and will avoid that problem entirely.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 20 Jul 1999 12:47:49 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VIEW definitions broken in 6.5.0"
},
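With the relation name printed unconditionally, the stored definition of the view above would presumably come out fully qualified, along the lines of:

    SELECT "t1"."textid", "t1"."words", "t2"."words" AS "words2"
        FROM "t1", "t2" WHERE "t1"."nextid" = "t2"."nextid";

which executes without the ambiguity error.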
{
"msg_contents": "Jan, or someone, can you comment on this?\n\n\n> Hey hackers - \n> I don't know if this is fixed in 6.5.1 or not, but the definition field\n> in the pg_views system table is broken in 6.5.0, and this breaks view\n> editing in pgaccess. The problem is that table qualifications are left\n> off the fieldnames in both the SELECT clause and the WHERE clause. Minimal\n> example given below:\n> \n> \n> test=> create table t1 (textid int4, nextid int4, words text);\n> CREATE\n> test=> create table t2 (nextid int4, words text);\n> CREATE\n> test=> create view v1 as select t1.textid,t1.words,t2.words as words2\n> from t1,t2 where t1.nextid=t2.nextid;\n> CREATE\n> test=> insert into t1 values (2,1,'some other text');\n> INSERT 384454 1\n> test=> insert into t2 values (1,'joint text');\n> INSERT 384455 1\n> test=> insert into t1 values (1,1,'some text');\n> INSERT 384456 1\n> test=> select * from v1;\n> textid|words |words2 \n> ------+---------------+----------\n> 2|some other text|joint text\n> 1|some text |joint text\n> (2 rows)\n> \n> test=> select definition from pg_views where viewname='v1';\n> definition \n> -----------------------------------------------------------------------\n> SELECT \"textid\", \"words\", \"words\" AS \"words2\" FROM \"t1\", \"t2\" WHERE\n> \"nextid\" = \"nextid\"; (1 row)\n> \n> test=> SELECT \"textid\", \"words\", \"words\" AS \"words2\" FROM \"t1\", \"t2\"\n> WHERE \"nextid\" = \"nextid\";\n> ERROR: Column 'words' is ambiguous\n> test=> \n> \n> -- \n> Ross J. Reedstrom, Ph.D., <[email protected]> \n> NSBRI Research Scientist/Programmer\n> Computer and Information Technology Institute\n> Rice University, 6100 S. Main St., Houston, TX 77005\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 23 Sep 1999 15:07:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VIEW definitions broken in 6.5.0"
},
{
"msg_contents": ">> The problem is that table qualifications are left\n>> off the fieldnames in both the SELECT clause and the WHERE clause.\n\nYes, that was reported and fixed a while ago. It's definitely in\ncurrent and 6.5.2, not sure about 6.5.1.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Sep 1999 17:38:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VIEW definitions broken in 6.5.0 "
}
] |
[
{
"msg_contents": "One last missing quoting bug in pg_dump:\nnow that sequence names are properly quoted for field defaults, mixed\ncase sequence names are generated. These are properly quoted in the\nCREATE SEQUENCE lines, but not in the SELECT nextval lines, as per below:\n\nCREATE SEQUENCE \"Teams_TeamID_seq\" start 10 increment 1 maxvalue\n2147483647 minvalue 1 cache 1 ;\nSELECT nextval ('Teams_TeamID_seq');\n\nThis needs to be:\nSELECT nextval ('\"Teams_TeamID_seq\"');\n\nPatch included below.\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005",
"msg_date": "Mon, 19 Jul 1999 17:20:15 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump quoting bug"
},
{
"msg_contents": "Thanks. Fix applied. This will appear in 6.6 and the next 6.5.x release.\n\n\n> One last missing quoting bug in pg_dump:\n> now that sequence names are properly quoted for field defaults, mixed\n> case sequence names are generated. These are properly quoted in the\n> CREATE SEQUENCE lines, but not in the SELECT nextval lines, as per below:\n> \n> CREATE SEQUENCE \"Teams_TeamID_seq\" start 10 increment 1 maxvalue\n> 2147483647 minvalue 1 cache 1 ;\n> SELECT nextval ('Teams_TeamID_seq');\n> \n> This needs to be:\n> SELECT nextval ('\"Teams_TeamID_seq\"');\n> \n> Patch included below.\n> -- \n> Ross J. Reedstrom, Ph.D., <[email protected]> \n> NSBRI Research Scientist/Programmer\n> Computer and Information Technology Institute\n> Rice University, 6100 S. Main St., Houston, TX 77005\n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 23 Sep 1999 15:10:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump quoting bug"
}
] |
[
{
"msg_contents": "Herouth Maoz <[email protected]> writes:\n>> I think the problem results from using non-standard constructs such as\n>> order by expression, and indeed ordering by columns that don't appear in\n>> the select list.\n\nI replied:\n> No, that's not the problem.\n\nLooks like I spoke too soon :-(. On further investigation, it does seem\nthat the main problem in Richards' example is that he is trying to sort\nthe result of a UNION by a resjunk attribute. That would work fine as\nfar as the primary SELECT goes, but there's no mechanism right now for\ncreating the same resjunk attribute in the sub-selects.\n\nIndeed, we seem to have a whole passel of problems that are related to\ntransformations done on the target list --- not only resjunk attribute\naddition, but rearrangement of the tlist order for INSERT ... SELECT,\nand probably other things. In a UNION query these will get done on the\ntop-level target list but not propagated into the union'd selects.\nFor example:\n\ncreate table src (a text, b text, c text);\ninsert into src values ('a', 'b', 'c');\n\ncreate table dest (a text default 'A', b text default 'B',\n\t\t c text default 'C');\n\ninsert into dest (a,c) select a,b from src;\n\nselect * from dest;\na|b|c\n-+-+-\na|B|b\n(1 row)\n\n-- OK so far, but now try this:\n\ninsert into dest (a,c) select a,b from src union select a,c from src;\n\nERROR: Each UNION | EXCEPT | INTERSECT query must have the same number\nof columns.\n\n-- The default for B was added to the first select, but not the second.\n-- Even more interesting:\n\ninsert into dest (a,c,b) select a,b,c from src union select a,b,c from src;\n\nselect * from dest;\na|b|c\n-+-+-\na|B|b\na|c|b\na|b|c\n(3 rows)\n\n-- The first select's columns were rearranged per the insert column\n-- spec, but the second's were not.\n\nI'm also worried about what happens when different sub-selects have\ndifferent collections of resjunk attributes and they all get APPENDed\ntogether...\n\nWe've got a few bugs to fix here :-(\n\nMeanwhile, I suspect that Richards' SELECT ... UNION ... ORDER BY\nwould work OK so long as the ORDER BY was for one of the displayed\ncolumns.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Jul 1999 22:59:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] Re: [HACKERS] Counting bool flags in a complex query "
},
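As a hypothetical illustration (made-up tables t1 and t2, each with columns a and b):

    -- likely to misbehave: the ORDER BY column is a resjunk
    -- attribute that never gets added to the sub-selects
    SELECT a FROM t1 UNION SELECT a FROM t2 ORDER BY b;

    -- should be OK: the sort column is in the display list,
    -- so every arm of the UNION produces it
    SELECT a, b FROM t1 UNION SELECT a, b FROM t2 ORDER BY b;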
{
"msg_contents": "> We've got a few bugs to fix here :-(\n> \n> Meanwhile, I suspect that Richards' SELECT ... UNION ... ORDER BY\n> would work OK so long as the ORDER BY was for one of the displayed\n> columns.\n\nTom, can you give me a list for the TODO list?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Jul 1999 23:33:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Re: [HACKERS] Counting bool flags in a complex query"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> We've got a few bugs to fix here :-(\n\n> Tom, can you give me a list for the TODO list?\n\nThe two cases I mentioned yesterday can be summarized as\n\n * SELECT ... UNION ... ORDER BY fails when sort expr not in result list\n * INSERT ... SELECT ... UNION is not reliable\n\nAnother thing I realized last night is that Except_Intersect_Rewrite's\ncoercion of all the sub-select target lists to compatible types is\npoorly done; for example in the regression database\n\nregression=> select f1 from int4_tbl union select q1 from int8_tbl;\nERROR: int8 conversion to int4 is out of range\n\nI think we want to use logic similar to what exists for CASE expressions\nto find the common supertype of the sub-select results and coerce all\nthe sub-selects to that type. (Thomas, any comments here? Can we pull\nthe CASE logic out of transformExpr and make it into a utility routine?)\n\n * Be smarter about promoting types when UNION merges different data types\n\nFinally, heaven help you if you have a GROUP BY in one of the subselects\nwhose column gets coerced to a different type by Except_Intersect_Rewrite,\nbecause the sortop for the GROUP BY has already been assigned.\n(This is another situation where a multi-level output representation\nwould be a better answer...)\n\n * SELECT ... UNION ... GROUP BY fails if column types disagree\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Jul 1999 11:29:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] Re: [HACKERS] Counting bool flags in a complex query "
},
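Under such a scheme, the example above would presumably resolve to the common supertype instead of failing:

    SELECT f1 FROM int4_tbl UNION SELECT q1 FROM int8_tbl;
    -- f1 would be promoted to int8, rather than q1 being
    -- squeezed into int4 (which can overflow)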
{
"msg_contents": "> > Tom, can you give me a list for the TODO list?\n> \n> The two cases I mentioned yesterday can be summarized as\n> \n> * SELECT ... UNION ... ORDER BY fails when sort expr not in result list\n> * INSERT ... SELECT ... UNION is not reliable\n> \n> Another thing I realized last night is that Except_Intersect_Rewrite's\n> coercion of all the sub-select target lists to compatible types is\n> poorly done; for example in the regression database\n> \n> regression=> select f1 from int4_tbl union select q1 from int8_tbl;\n> ERROR: int8 conversion to int4 is out of range\n> \n> I think we want to use logic similar to what exists for CASE expressions\n> to find the common supertype of the sub-select results and coerce all\n> the sub-selects to that type. (Thomas, any comments here? Can we pull\n> the CASE logic out of transformExpr and make it into a utility routine?)\n> \n> * Be smarter about promoting types when UNION merges different data types\n> \n> Finally, heaven help you if you have a GROUP BY in one of the subselects\n> whose column gets coerced to a different type by Except_Intersect_Rewrite,\n> because the sortop for the GROUP BY has already been assigned.\n> (This is another situation where a multi-level output representation\n> would be a better answer...)\n> \n> * SELECT ... UNION ... GROUP BY fails if column types disagree\n\nAll added to TODO.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Jul 1999 13:23:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Re: [HACKERS] Counting bool flags in a complex query"
}
] |
[
{
"msg_contents": "Thanks to our fine 6.5 release, the development history article, and my\nstatus report e-mail, in the past few days, I have heard:\n\n Jolly is very proud we have done so much with Postgres95.\n\n Someone wants me to speak to a Linux group meeting in New York City.\n\n Someone wants to translate the article into Italian.\n\n A request from a web site maintainer for PostgreSQL tuturial material.\n\n Someone has ported PostgreSQL server to OS/2 in single-user mode.\n\n Someone described 6.5 as our first \"commercial quality\" release.\n\n\nThings are certainly lookup up for us.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Jul 1999 23:01:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Lots of things happening"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Someone described 6.5 as our first \"commercial quality\" release.\n\nWe need in transaction log to be in \"commercial quality\".\nRemember that server/system crash may break indices and\none will have to run vacuum and re-create indices by hands\nafter that. Recovery, recovery and recovery one more time -:)\n\nVadim\n",
"msg_date": "Tue, 20 Jul 1999 13:41:00 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Lots of things happening"
}
] |
[
{
"msg_contents": "Hi all,\n\nI have a question about the calculation of selectivity.\n\nI see the following code in set_rest_selec() in clausesel.c.\n\n cost_clause = clausenode->selectivity;\n\n\t /*\n * Check to see if the selectivity of this clause or any\n'or'\n * subclauses (if any) haven't been set yet.\n */\n if (cost_clause <= 0 || valid_or_clause(clausenode))\n {\n\nWhy is valid_or_clause(clausenode) necessary ?\n\nThis means that even if selectivity is set,set_rest_selec()\ncalls compute_clause_selec() if the target clause is a\nvalid_or_clause.\ncompute_clause_selec() would add the selectivity of\nelements of or_clause to the current selectivity.\n\nAFAIC,compute_clause_selec() is called twice at least\n ( from add_restrict_and_join_to_rel() in initsplan.c\n and set_rest_selec() in clausesel.c)\nand seems to accumulate the result by repetition if\nthe target clause is a valid_or_clause.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Tue, 20 Jul 1999 12:40:18 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "selectivity calculation for or_clause is wrong ?"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Why is valid_or_clause(clausenode) necessary ?\n\nLooks like a waste of cycles to me too.\n\nIf the subclauses of an OR could get rearranged during optimization\nthen this might be a necessary check, but AFAIK they don't.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Jul 1999 11:31:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] selectivity calculation for or_clause is wrong ? "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Wednesday, July 21, 1999 12:32 AM\n> To: Hiroshi Inoue\n> Cc: pgsql-hackers\n> Subject: Re: [HACKERS] selectivity calculation for or_clause is wrong ? \n> \n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Why is valid_or_clause(clausenode) necessary ?\n> \n> Looks like a waste of cycles to me too.\n>\n\nIt's not only a waste of cycles.\nFor exmaple,\n\n1.explain select key1 from b where someitem in (1);\n\n NOTICE: QUERY PLAN:\n\n Seq Scan on b on b (cost=1638.49 rows=261 width=4)\n\n2.explain select key1 from b where someitem in (1,2);\n\n NOTICE: QUERY PLAN:\n\n Seq Scan on b on b (cost=1638.49 rows=773 width=4)\n\n3.explain select key1 from b where someitem in (1,2,3);\n\n NOTICE: QUERY PLAN:\n\n Seq Scan on b on b (cost=1638.49 rows=1274 width=4)\n\n\nrows of each plan 261 : 773 : 1274 not = 1 : 2 : 3.\nIt's nearly = 1 :3 :5.\n\nelements of or_clause except its first element are evaluated \ntwice and the results are accumlated.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n \n\n",
"msg_date": "Wed, 21 Jul 1999 10:31:58 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] selectivity calculation for or_clause is wrong ? "
},
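The observed 1 : 3 : 5 progression falls straight out of counting every element after the first twice. A standalone sketch, not PostgreSQL source; the per-element selectivity of roughly 256/10000 is simply back-fitted from the EXPLAIN output above, and the small remaining differences are the s1*s2 overlap terms:

    #include <stdio.h>

    int
    main(void)
    {
        double s = 256.0 / 10000.0; /* assumed per-element selectivity */
        double rows = 10000.0;      /* assumed table size */
        int n;

        for (n = 1; n <= 3; n++)
        {
            /* each IN-list element counted once: 1s, 2s, 3s */
            double once = rows * n * s;
            /* first element once, later elements twice: 1s, 3s, 5s */
            double twice = rows * (s + 2.0 * (n - 1) * s);
            printf("%d elements: expected ~%.0f rows, buggy ~%.0f rows\n",
                   n, once, twice);
        }
        return 0;
    }

Output is expected 256 / 512 / 768 rows against buggy 256 / 768 / 1280, tracking the reported 261 / 773 / 1274.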
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>>>> Why is valid_or_clause(clausenode) necessary ?\n>> \n>> Looks like a waste of cycles to me too.\n>\n> It's not only a waste of cycles.\n> [ snip ]\n> rows of each plan 261 : 773 : 1274 not = 1 : 2 : 3.\n> It's nearly = 1 :3 :5.\n> elements of or_clause except its first element are evaluated \n> twice and the results are accumlated.\n\nBTW, I fixed this...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 31 Jul 1999 14:26:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] selectivity calculation for or_clause is wrong ? "
}
] |
[
{
"msg_contents": "Try this one:\n\ncreate table fltsrc (f1 float8);\ninsert into fltsrc values (1.0);\ninsert into fltsrc values (1.1);\ncreate table intdest (f1 int4);\ninsert into intdest select distinct f1 from fltsrc;\n\nCurrently, this coredumps because it tries to apply float8lt\nto integer values; the result of column f1 has been coerced to\nthe eventual destination format (int4) before it ever gets out\nof the SELECT stage. But the parser assigned the sortop to use\nfor DISTINCT while f1 was still float :-(.\n\nIn 6.4.2, there's no coredump, but only one tuple gets inserted\ninto intdest, because the DISTINCT processing is done on int4 values.\nI claim that is wrong too, because \"select distinct f1 from fltsrc\"\nyields two tuples not one.\n\nAs far as I can see, there is no way to represent doing the Right\nThing with the current querytree representation. Either the\ntargetlist expression includes a coercion to int4 or it doesn't;\nwe can't represent \"do the DISTINCT sort and filter on float8,\n*then* coerce to int4\" in a single-level targetlist.\n\nMeanwhile, Jan has been muttering that he can't really do rules\nright without an RTE entry for the result of a sub-select. And\nthe more I look at the UNION/EXCEPT/INTERSECT code, the less I\nlike it.\n\nMaybe it is time to swallow hard and redesign the querytree\nrepresentation? I think we really need a multi-level-plan\ndata structure right out of the starting gate... a format\ncloser to the plantree structure that the executor uses would\nprobably be less trouble all around.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Jul 1999 00:21:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Another reason to redesign querytree representation"
},
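For illustration, the Right Thing is easy to state but cannot be written in this release: it would need a sub-select in FROM, which the current parser does not accept. A purely hypothetical rendering of the desired evaluation order:

    -- Hypothetical only: DISTINCT runs on the float8 values, and the
    -- coercion to int4 lives in an upper-level target list.
    INSERT INTO intdest
        SELECT f1::int4 FROM (SELECT DISTINCT f1 FROM fltsrc) AS ss;
    -- This would insert two tuples (both with value 1), agreeing with
    -- "select distinct f1 from fltsrc" yielding two tuples.

That two-level shape is essentially the multi-level querytree representation being argued for here.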
{
"msg_contents": "> As far as I can see, there is no way to represent doing the Right\n> Thing with the current querytree representation. Either the\n> targetlist expression includes a coercion to int4 or it doesn't;\n> we can't represent \"do the DISTINCT sort and filter on float8,\n> *then* coerce to int4\" in a single-level targetlist.\n> \n> Meanwhile, Jan has been muttering that he can't really do rules\n> right without an RTE entry for the result of a sub-select. And\n> the more I look at the UNION/EXCEPT/INTERSECT code, the less I\n> like it.\n> \n> Maybe it is time to swallow hard and redesign the querytree\n> representation? I think we really need a multi-level-plan\n> data structure right out of the starting gate... a format\n> closer to the plantree structure that the executor uses would\n> probably be less trouble all around.\n\nThe current system was designed by me. It was very little code, and I\nwas quite suprised it worked as well as it does. Feel free to redesign\nit. I did it to be small and nimble. Seems it is not flexible enough.\n\nGot to admit, it is a nice trick to get UNIONS, which is all it was\nreally designed to do. At the time, it was neat to get any feature out\nof so little code. Ah, the buggy pre-6.5 days.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Jul 1999 00:51:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Another reason to redesign querytree representation"
},
{
"msg_contents": "> Maybe it is time to swallow hard and redesign the querytree\n> representation? I think we really need a multi-level-plan\n> data structure right out of the starting gate... a format\n> closer to the plantree structure that the executor uses would\n> probably be less trouble all around.\n\nWell, while you do that keep in mind outer joins. We've been\ndiscussing it, and it is pretty clear that rte's cannot hold the join\ninfo, since the same source table can participate in more than one\njoin. So we might have to carry the info in a special kind of\nqualification node, which allows the planner/optimizer to generate\nsubselects below and above that node, but does not allow it to try the\nfull range of different combinations of joins across it (since it\nappears to me that outer joins have only limited possibilities for\nrearrangement of the query).\n\nBut perhaps a redesigned querytree could carry the info more\nnaturally...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 20 Jul 1999 05:18:48 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Another reason to redesign querytree representation"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Maybe it is time to swallow hard and redesign the querytree\n>> representation?\n\n> Well, while you do that keep in mind outer joins.\n\nWhile *I* redesign it? I barely know what an outer join is.\n\nI'm happy to kibitz while someone else redesigns it, but I don't\nthink I'm qualified to be the lead dog...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Jul 1999 10:11:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Another reason to redesign querytree representation "
},
{
"msg_contents": "> I'm happy to kibitz while someone else redesigns it, but I don't\n> think I'm qualified to be the lead dog...\n\nWell, that's OK, I barely know what a query tree is. But I think I\nknow what I want for outer join behavior. Let's keep talking if you\nare mucking with the query tree stuff...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 20 Jul 1999 14:47:41 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Another reason to redesign querytree representation"
},
{
"msg_contents": "> Thomas Lockhart <[email protected]> writes:\n> >> Maybe it is time to swallow hard and redesign the querytree\n> >> representation?\n> \n> > Well, while you do that keep in mind outer joins.\n> \n> While *I* redesign it? I barely know what an outer join is.\n> \n> I'm happy to kibitz while someone else redesigns it, but I don't\n> think I'm qualified to be the lead dog...\n\nDid you want to do the change for UNION, or were you just suggesting it\nbe done? I can easily add it to the TODO list.\n\nDone:\n\n\t* redesign UNION structures to have separarate target lists.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Jul 1999 13:15:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Another reason to redesign querytree representation"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Did you want to do the change for UNION, or were you just suggesting it\n> be done? I can easily add it to the TODO list.\n> Done:\n\n> \t* redesign UNION structures to have separarate target lists.\n\nActually, it's not so much UNION that's busted as it is INSERT.\nThe parser problems could be dealt with by having a two-level structure\nfor INSERT ... SELECT ..., so that the targetlist for the eventual\nINSERT could be described without changing the semantics of the\nunderlying SELECT.\n\nThere might be other extensions needed for rules (paging Jan...) but\nas far as what I've been looking at goes, the TODO entry could be just\n\n\t* redesign INSERT ... SELECT to have two levels of target list.\n\nThomas, what do you think is needed for outer joins?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Jul 1999 17:22:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Another reason to redesign querytree representation "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Did you want to do the change for UNION, or were you just suggesting it\n> > be done? I can easily add it to the TODO list.\n> > Done:\n> \n> > \t* redesign UNION structures to have separarate target lists.\n> \n> Actually, it's not so much UNION that's busted as it is INSERT.\n> The parser problems could be dealt with by having a two-level structure\n> for INSERT ... SELECT ..., so that the targetlist for the eventual\n> INSERT could be described without changing the semantics of the\n> underlying SELECT.\n> \n> There might be other extensions needed for rules (paging Jan...) but\n> as far as what I've been looking at goes, the TODO entry could be just\n> \n> \t* redesign INSERT ... SELECT to have two levels of target list.\n\nRemoved:\n* Be smarter about promoting types when UNION merges different data types\n* SELECT ... UNION ... GROUP BY fails if column types disagree\n* INSERT ... SELECT ... UNION is not reliable\n\nAnd added:\n\n* redesign INSERT ... SELECT to have two levels of target list\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Jul 1999 17:40:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Another reason to redesign querytree representation"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Removed:\n> * Be smarter about promoting types when UNION merges different data types\n> * SELECT ... UNION ... GROUP BY fails if column types disagree\n> * INSERT ... SELECT ... UNION is not reliable\n\n> And added:\n\n> * redesign INSERT ... SELECT to have two levels of target list\n\nEr ... wait. The first two of those TODO items were separate issues.\nI think they can be fixed without changing the querytree representation.\nThe third can be superseded by this item, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Jul 1999 17:49:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Another reason to redesign querytree representation "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Removed:\n> > * Be smarter about promoting types when UNION merges different data types\n> > * SELECT ... UNION ... GROUP BY fails if column types disagree\n> > * INSERT ... SELECT ... UNION is not reliable\n> \n> > And added:\n> \n> > * redesign INSERT ... SELECT to have two levels of target list\n> \n> Er ... wait. The first two of those TODO items were separate issues.\n> I think they can be fixed without changing the querytree representation.\n> The third can be superseded by this item, though.\n> \n\nDone.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Jul 1999 19:05:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Another reason to redesign querytree representation"
},
{
"msg_contents": "> Thomas, what do you think is needed for outer joins?\n\nBruce and I have talked about it some already:\n\nFor outer joins, tables must be combined in a particular order. For\nexample, a left outer join requires that any entries in the left-side\ntable which do not have a corresponding entry in the right-side table\nbe expanded with nulls during the join. The information on the outer\njoin can't be carried by the rte since the same table can appear twice\nin an outer join expression:\n\n select * from t1 left join t2 using (i)\n left join t1 on (i = t1.j);\n\nFor a query like\n\n select * from t1 left join t2 using (i) where t2.j = 3;\n\nistm that the outer join must be done before the t2 qualification is\napplied, and that another ordering may produce the wrong result.\n\n>From what I understand Bruce to say, the planner/optimizer is allowed\nto try all kinds of permutations of plans, choosing the one with the\nlowest cost. But if the info for the join is carried in a\nqualification node, then the planner/optimizer must know that it can't\nreorder the query as freely as it does now.\n\nI was thinking of having a new qualification node to carry this info,\nand it could be transformed into a mergejoin node which has a couple\nof new fields indicating left and/or right outer join behavior.\n\nA hashjoin method may be possible for queries which are structured as\na left outer join; other outer joins will need to use the mergejoin\nmethod. Also, some poorly-qualified outer joins reduce to inner joins,\nand perhaps the optimizer can be smart enough to realize this.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 21 Jul 1999 06:20:22 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Another reason to redesign querytree representation"
},
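A tiny worked example of the ordering constraint, with assumed contents: t1 holds i = 1 and i = 2, and t2 holds the single row (i = 1, j = 3):

    -- select * from t1 left join t2 using (i) where t2.j = 3;
    --
    -- Join first, then apply the WHERE (the SQL92 reading):
    --   join output:   (i=1, j=3) and (i=2, j=NULL)
    --   after WHERE:   (i=1, j=3)                    -- one row
    --
    -- Restrict t2 first, then join (an invalid reordering):
    --   t2's only row survives the filter, so the left join
    --   still emits (i=1, j=3) and (i=2, j=NULL)     -- two rows

The two orderings disagree on the NULL-extended row, which is exactly why the planner may not push the qualification below the outer join.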
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Thomas, what do you think is needed for outer joins?\n\n> The information on the outer\n> join can't be carried by the rte since the same table can appear twice\n> in an outer join expression:\n>\n> select * from t1 left join t2 using (i)\n> left join t1 on (i = t1.j);\n\nIs that actually a valid query? Wouldn't you at least need to rename\none or the other appearance of t1? (Nitpick, probably, but maybe I\nam not understanding something...)\n\n> For a query like\n> select * from t1 left join t2 using (i) where t2.j = 3;\n> istm that the outer join must be done before the t2 qualification is\n> applied, and that another ordering may produce the wrong result.\n\nIt's not immediately obvious what the semantics of that ought to be...\nbut I agree it makes a difference if you eliminate t2 rows before\nrather than after the join.\n\nThis looks to me like the syntactic notion is that t1-left-join-t2\nis already a single table as far as the rest of the SELECT is \nconcerned. (But what happens if they have column names in common,\nother than i?) In which case you're right, join first then apply\nthe WHERE condition is presumably what's supposed to happen.\n\nIf that's the way it works, I think that an RTE describing the joined\ntable is the natural way to handle it. Obviously this would not be\na primitive node; it would have to be some kind of structure of nodes.\n\n> I was thinking of having a new qualification node to carry this info,\n\nYou would need a qual clause to carry the join condition (t1.i = t2.i\nin your first example, i = t1.j in your second). This would have to\ndangle off a node that represents the specially joined tables, I think.\n\nThere's no such thing as a \"qualification node\"; qual clauses are just\nexpressions that happen to be in WHERE. If the \"specially joined\ntables\" node isn't in the RTE then I think we need to invent some new\nplace to put it. The WHERE expression isn't a natural place for it.\n\n> From what I understand Bruce to say, the planner/optimizer is allowed\n> to try all kinds of permutations of plans, choosing the one with the\n> lowest cost. But if the info for the join is carried in a\n> qualification node, then the planner/optimizer must know that it can't\n> reorder the query as freely as it does now.\n\nYes, the join order would be forced W.R.T. the outer-joined tables,\nat least.\n\nThe other alternative we should consider is the notion that the parser\noutputs are already a multilevel plan structure, where we'd have a whole\nlower plan item representing the outer-join table result. This might\nend up being the same thing as above, since quite possibly the RTE would\nbe the natural place for the upper plan's link to the lower one.\n\nWe need to get Jan involved in this, since this sounds like the same\nkind of stuff he's been saying is needed for rules. In fact Jan\nprobably ought to be leading the discussion, not me...\n\n> A hashjoin method may be possible for queries which are structured as\n> a left outer join; other outer joins will need to use the mergejoin\n> method.\n\nI don't see why plain ol' nested loop couldn't be used too. mergejoin\nis not always better than nested loop, or even always feasible. It\nrequires the availability of sort operators, for one thing.\n\n> Also, some poorly-qualified outer joins reduce to inner joins,\n> and perhaps the optimizer can be smart enough to realize this.\n\nOK, now tell me about inner joins...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Jul 1999 10:36:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Another reason to redesign querytree representation "
},
{
"msg_contents": "> > select * from t1 left join t2 using (i)\n> > left join t1 on (i = t1.j);\n> Is that actually a valid query? Wouldn't you at least need to rename\n> one or the other appearance of t1? (Nitpick, probably, but maybe I\n> am not understanding something...)\n\nafaik it is a valid query. The first outer join combines t1 and t2,\nand by definition the intermediate result loses its table-specific\nlabeling.\n\n> > For a query like\n> > select * from t1 left join t2 using (i) where t2.j = 3;\n> > istm that the outer join must be done before the t2 qualification is\n> > applied, and that another ordering may produce the wrong result.\n> It's not immediately obvious what the semantics of that ought to be...\n> but I agree it makes a difference if you eliminate t2 rows before\n> rather than after the join.\n> This looks to me like the syntactic notion is that t1-left-join-t2\n> is already a single table as far as the rest of the SELECT is\n> concerned. (But what happens if they have column names in common,\n> other than i?) In which case you're right, join first then apply\n> the WHERE condition is presumably what's supposed to happen.\n> If that's the way it works, I think that an RTE describing the joined\n> table is the natural way to handle it. Obviously this would not be\n> a primitive node; it would have to be some kind of structure of nodes.\n\nYour statements are all correct. Maybe defining an RTE which does not\nrefer to a single specific table is the way to go; is there anything\nlike that already? If not, then putting the equivalent down in the\nqualifications might work.\n\n> > I was thinking of having a new qualification node to carry this \n> > info,\n> You would need a qual clause to carry the join condition (t1.i = t2.i\n> in your first example, i = t1.j in your second). This would have to\n> dangle off a node that represents the specially joined tables, I \n> think.\n> There's no such thing as a \"qualification node\"; qual clauses are just\n> expressions that happen to be in WHERE. If the \"specially joined\n> tables\" node isn't in the RTE then I think we need to invent some new\n> place to put it. The WHERE expression isn't a natural place for it.\n\nRight, I wasn't remembering the right terminology.\n\n> > From what I understand Bruce to say, the planner/optimizer is \n> > allowed to try all kinds of permutations of plans, choosing the one \n> > with the lowest cost. But if the info for the join is carried in a\n> > qualification node, then the planner/optimizer must know that it \n> > can't reorder the query as freely as it does now.\n> Yes, the join order would be forced W.R.T. the outer-joined tables,\n> at least.\n> The other alternative we should consider is the notion that the parser\n> outputs are already a multilevel plan structure, where we'd have a \n> whole lower plan item representing the outer-join table result. This \n> might end up being the same thing as above, since quite possibly the \n> RTE would be the natural place for the upper plan's link to the lower \n> one.\n> We need to get Jan involved in this, since this sounds like the same\n> kind of stuff he's been saying is needed for rules. In fact Jan\n> probably ought to be leading the discussion, not me...\n> > A hashjoin method may be possible for queries which are structured \n> > as a left outer join; other outer joins will need to use the \n> > mergejoin method.\n> I don't see why plain ol' nested loop couldn't be used too. 
mergejoin\n> is not always better than nested loop, or even always feasible. It\n> requires the availability of sort operators, for one thing.\n\nRight. I'm learning as I go. I did put some code into the mergejoin\nroutines to walk the tables properly for outer joins (it is marked by\n#ifdef ENABLE_OUTER_JOINS). But the flags for outer joins are not yet\npassed in, and the result tuples are not constructed (they need some\nnull fields added to the left- or right-side table).\n\n> > Also, some poorly-qualified outer joins reduce to inner joins,\n> > and perhaps the optimizer can be smart enough to realize this.\n> OK, now tell me about inner joins...\n\nAs you know that is what Postgres already does. You can specify inner\njoins using this newer join syntax:\n\n select * from t1 join t2 using (i);\n\nwhich I just convert to normal query tree nodes in the parser so it's\nequivalent to\n\n select * from t1, t2 where t1.i = t2.i;\n\nI probably don't do a complete job, but it's been a while since I've\nlooked at it.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 22 Jul 1999 14:25:18 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Another reason to redesign querytree representation"
}
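One concrete case of "poorly-qualified outer joins reduce to inner joins", sketched in SQL; the reduction relies only on '=' never being true for a NULL input:

    -- Every NULL-extended row from the left join has t2.j = NULL,
    -- and NULL = 3 does not pass the WHERE, so all of them are dropped:
    SELECT * FROM t1 LEFT JOIN t2 USING (i) WHERE t2.j = 3;
    -- ...which makes it equivalent to the plain inner join:
    SELECT * FROM t1 JOIN t2 USING (i) WHERE t2.j = 3;

This is the rearrangement a smarter optimizer could prove safe.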
] |
[
{
"msg_contents": "In merging the man pages, I find that CREATE VERSION is marked as not\nworking in the current release. Did we lose that capability when we\nlost time travel, or is that something separate? If it is not related\nto time travel, why doesn't it work currently?\n\nIf this is to complicated to answer quickly, then perhaps a quick\nyes/no on whether I should retain docs on CREATE VERSION would be\neasier...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 20 Jul 1999 05:50:31 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "CREATE VERSION"
},
{
"msg_contents": "> In merging the man pages, I find that CREATE VERSION is marked as not\n> working in the current release. Did we lose that capability when we\n> lost time travel, or is that something separate? If it is not related\n> to time travel, why doesn't it work currently?\n> \n> If this is to complicated to answer quickly, then perhaps a quick\n> yes/no on whether I should retain docs on CREATE VERSION would be\n> easier...\n\nChuck it. It is not useful, I think, and does not work. I never did\nfigure out what it was supposed to do.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Jul 1999 12:48:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CREATE VERSION"
},
{
"msg_contents": "On Tue, 20 Jul 1999, Bruce Momjian wrote:\n\n> > In merging the man pages, I find that CREATE VERSION is marked as not\n> > working in the current release. Did we lose that capability when we\n> > lost time travel, or is that something separate? If it is not related\n> > to time travel, why doesn't it work currently?\n> > \n> > If this is to complicated to answer quickly, then perhaps a quick\n> > yes/no on whether I should retain docs on CREATE VERSION would be\n> > easier...\n> \n> Chuck it. It is not useful, I think, and does not work. I never did\n> figure out what it was supposed to do.\n\nI was going to ask, but figured it was one of those \"RTFM\" types of things\nand I was just too lazy to check the manual for that one :)\n\nGladit wasn't just me :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 20 Jul 1999 14:00:15 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CREATE VERSION"
}
] |
[
{
"msg_contents": "Me, again...\n\nIn psql, is the \\ that appears before a command supposed to terminate the\nexisting query line (if any), as well as escape the command from the query?\nIt seems that way, because anything after a \\ command is ignored totally,\neven another command. Once the command has been executed, that's it. Is\nthis the way that it's supposed to be?\n\nCheers...\n\nMikeA\n",
"msg_date": "Tue, 20 Jul 1999 14:29:38 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql & query string length"
},
{
"msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> In psql, is the \\ that appears before a command supposed to terminate the\n> existing query line (if any), as well as escape the command from the query?\n\nIf you think that that logic needs rejiggering, be careful you don't\nbreak \\r (clear the query buffer, don't send the query) or \\g (send\naccumulated query, arranging to dump its output into a file). I think\nthere are some other backslash commands that interact with the query\naccumulation buffer, as well.\n\nI sort of thought that the basic idea is that backslash commands are\nparsed and executed without any effect on the state of an incompletely\nentered query, except when the specific backslash command is defined to\ndo something with the query buffer. I might be all wet though.\n\nIf you got distracted by this point while working on making the query\nbuffer indefinitely extensible, I'd counsel fixing one bug at a time...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Jul 1999 10:27:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql & query string length "
},
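A sketch of how those rules play out at the prompt; prompts, table names, and the file name are illustrative, not a captured session:

    test=> SELECT * FROM shoelace
    test-> \g /tmp/laces.out   -- \g sends the accumulated query; results go to the file
    test=> SELECT * FROM oops
    test-> \r                  -- \r clears the query buffer without sending anything
    test=> \d shoelace         -- other backslash commands leave the buffer alone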
{
"msg_contents": "On Tue, 20 Jul 1999, Tom Lane wrote:\n\n> I sort of thought that the basic idea is that backslash commands are\n> parsed and executed without any effect on the state of an incompletely\n> entered query, except when the specific backslash command is defined to\n> do something with the query buffer. I might be all wet though.\n\nThat's the way it works here Tom, so you shouldn't be needing a towel :)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 20 Jul 1999 10:59:01 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql & query string length "
}
] |
[
{
"msg_contents": "\nBefore I announce this, can someone confirm that its okay as bundled? I\nbelieve I've covered everything, but figure that a second opinion is\nalways nice :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 20 Jul 1999 09:33:10 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "v6.5.1 'bundled'..."
}
] |
[
{
"msg_contents": "Tom Lane wrote:\n>> buffer indefinitely extensible, I'd counsel fixing one bug \n>> at a time...\nI know, you're right.\nI'm not trying to fix it, though, only trying to work out what it is\nsupposed to do in certain circumstances.\n\n\nMikeA\n",
"msg_date": "Tue, 20 Jul 1999 17:30:04 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] psql & query string length "
}
] |
[
{
"msg_contents": "In order not to break it. I could just run the regression tests, I suppose,\nbut it always helps if you kind of know where you're going.\n\n>> -----Original Message-----\n>> From: Ansley, Michael [mailto:[email protected]]\n>> Sent: Tuesday, July 20, 1999 5:30 PM\n>> To: '[email protected]'\n>> Subject: RE: [HACKERS] psql & query string length \n>> \n>> \n>> Tom Lane wrote:\n>> >> buffer indefinitely extensible, I'd counsel fixing one bug \n>> >> at a time...\n>> I know, you're right.\n>> I'm not trying to fix it, though, only trying to work out what it is\n>> supposed to do in certain circumstances.\n>> \n>> \n>> MikeA\n>> \n",
"msg_date": "Tue, 20 Jul 1999 17:50:51 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] psql & query string length "
}
] |
[
{
"msg_contents": "Please Cc: to [email protected]\n\nHello all,\n\nI think I have found a bug in unique index.\nThe bug has been tested with:\n-PostgreSQL 6.5 final version running on a RedHat 5.2 i386\n-PostgreSQL 6.5 final version running on a RedHat 6.0 i386\n-PostgreSQL 6.4.2 running on a RedHat 5.2 i386\nand the results are the same.\n\nCreate a test database and create the following table:\nCREATE TABLE \"livrari\" (\n \"nr_npr\" int4,\n \"data_npr\" date,\n \"nr_ordin\" int4,\n \"sursa\" character varying(32),\n \"destinatie\" character varying(32),\n \"produs\" character varying(8),\n \"spatii_sursa\" text,\n \"caracteristici\" text,\n \"calitate\" character varying(16),\n \"nr_transport\" character varying(16),\n \"delegat\" character varying(32),\n \"brut\" float8,\n \"tara\" float8,\n \"net\" float8,\n \"cod_operatiune\" int4,\n \"pret\" float8,\n \"tva\" float8,\n \"qwerty12345ytrewq54321\" int4); \n\nThen try creating the following unique index :\n\ntest=> create unique index livrari_unic on livrari (nr_npr, data_npr,\nnr_ordin, sursa, destinatie, produs, spatii_sursa, caracteristici,\ncalitate, nr_transport, delegat);\nCREATE\n\nNow try to select the records from table (actually no records, nothing\nhas been inserted):\n\ntest=> select * from livrari;\nERROR: index_info: no amop 403 655369 1\n\nI thought that my database is somehow corrupted and tried to vacuum it :\n\ntest=> vacuum analyze;\nVACUUM\n\nNow, I was trying to select again the records:\n\ntest=> select * from livrari;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating. \n\nAre the some limitations on the number and type of the fields contained\nby the index?\nIs there some mistakes that I make?\n\nPlease Cc: to [email protected]\n\nBest regards,\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n",
"msg_date": "Tue, 20 Jul 1999 19:52:47 +0300",
"msg_from": "Constantin Teodorescu <[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG found in unique index!"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Constantin\n> Teodorescu\n> Sent: Wednesday, July 21, 1999 1:53 AM\n> To: [email protected]\n> Subject: [HACKERS] BUG found in unique index!\n> \n> \n> Please Cc: to [email protected]\n> \n> Hello all,\n> \n> I think I have found a bug in unique index.\n> The bug has been tested with:\n> -PostgreSQL 6.5 final version running on a RedHat 5.2 i386\n> -PostgreSQL 6.5 final version running on a RedHat 6.0 i386\n> -PostgreSQL 6.4.2 running on a RedHat 5.2 i386\n> and the results are the same.\n> \n> Create a test database and create the following table:\n> CREATE TABLE \"livrari\" (\n> \"nr_npr\" int4,\n> \"data_npr\" date,\n> \"nr_ordin\" int4,\n> \"sursa\" character varying(32),\n> \"destinatie\" character varying(32),\n> \"produs\" character varying(8),\n> \"spatii_sursa\" text,\n> \"caracteristici\" text,\n> \"calitate\" character varying(16),\n> \"nr_transport\" character varying(16),\n> \"delegat\" character varying(32),\n> \"brut\" float8,\n> \"tara\" float8,\n> \"net\" float8,\n> \"cod_operatiune\" int4,\n> \"pret\" float8,\n> \"tva\" float8,\n> \"qwerty12345ytrewq54321\" int4); \n> \n> Then try creating the following unique index :\n> \n> test=> create unique index livrari_unic on livrari (nr_npr, data_npr,\n> nr_ordin, sursa, destinatie, produs, spatii_sursa, caracteristici,\n> calitate, nr_transport, delegat);\n> CREATE\n>\n\nYour index has 11 columns.\nCurrently indices could have <= 8(7 ?) columns.\nIt seems create index should cause an error in this case. \n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Wed, 21 Jul 1999 08:49:31 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] BUG found in unique index!"
}
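Hiroshi's point suggests CREATE INDEX should reject the statement up front rather than let a later lookup fail. A sketch of the shape such a guard might take; INDEX_MAX_KEYS and the exact message are assumptions about the backend's conventions, not a quote of its source:

    /*
     * Hypothetical guard for the CREATE INDEX code path: fail early
     * when the column list exceeds the compiled-in key limit.
     */
    if (numberOfAttributes > INDEX_MAX_KEYS)
        elog(ERROR, "Cannot use more than %d columns in an index",
             INDEX_MAX_KEYS);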
] |
[
{
"msg_contents": "> > I imagine that this flag is specific to the compiler. It would\n> > probably be best to leave it to patches until the alpha issues are\n> > solved for every OS environment; sorry I don't have a platform myself\n> > to test on.\n> > \n> > btw, RedHat is interested in doing a maintenance release of Postgres\n> > rpms, and would dearly love to have the Alpha port problems solved (or\n> > vica versa; they hate that their shipping rpms are broken or not\n> > available on one of their three supported architectures).\n> > \n> > Uncle G, could you tell us the actual port string configure generates\n> > for your platform? At the moment, PORTNAME on my i686 box says\n> > \"linux\", and I don't see architecture info. But perhaps we can have\n> > configure deduce an ARCH parameter too? It already knows it when first\n> > identifying the system...\n> \n> OK, I have made it:\n> \t\n> \tifeq ($(CPU),alpha)\n> \tifeq ($(CC), gcc)\n> \tCFLAGS+= -mieee\n> \tendif\n> \tifeq ($(CC), egcs)\n> \tCFLAGS+= -mieee\n> \tendif\n> \tendif\n> \n> I can always rip it out later.\n\nLet me reiterate Thomas's comments on this. Alpha has been a very\nbad port for us. I realize the problems are complex, but each alpha\nperson seems to know only 80% of what we need to get things working\n100%. We get partial solutions to small problems, that just seem to\nfix things long enough for current release. We had one release that\nwould not even initdb on alpha. We really need alpha folks to get their\nnoses to the grindstones and give us some solid causes/fixes to their\nproblems.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Jul 1999 13:36:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "Ok,\n I would have liked to have seen the fix to typedef time_t AbsoluteTime in\nnabstime.h, rather than this, but i guess its some movement :-/\n\nAs per ur request, I am sending u ( bruce) my changed *.[ch] files from the\nJuly 20 source ( they are not diffs ) . All the regressions tests work except\nfor geometry ( precision ? ) and Rules ( of which i will follow up (later)\nwith a question.\n\n I u folks got any questions please let me know, As i'm sure that you will\nhave some.\n\nRegarding the Test&set problems, u have to compile spin.c with -fno-inline.\ni'd give u a makefile but i'm not sure how u folks are handling the\n ifeq ($(OS), linux )\n ifeq ($(CPU), alpha )\n spin.o: spin.c\n $(CC) $(CFLAGS) -c -fno-inline spin.c -o spin.o\n endif\n endif\n\n I'm using -O3, and seems happy\ngat\n\nBruce Momjian wrote:\n\n> > > I imagine that this flag is specific to the compiler. It would\n> > >\n> > OK, I have made it:\n> >\n> > ifeq ($(CPU),alpha)\n> > ifeq ($(CC), gcc)\n> > CFLAGS+= -mieee\n> > endif\n> > ifeq ($(CC), egcs)\n> > CFLAGS+= -mieee\n> > endif\n> > endif\n> >\n> > I can always rip it out later.\n>\n> Let me reiterate Thomas's comments on this. Alpha has been a very\n> bad port for us. I realize the problems are complex, but each alpha\n> person seems to know only 80% of what we need to get things working\n> 100%. We get partial solutions to small problems, that just seem to\n\n\n\n",
"msg_date": "Thu, 22 Jul 1999 13:50:57 -0400",
"msg_from": "Uncle George <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "In rules.out it appears that the sort order is wrong. The SELECT * FROM\nshoe_ready WHERE total_avail >= 2; first give a sh3, and then a sh1.\nCan someone tell me where/or how the sorting is accomplished ? This\npresumes that some sorting is done.\n\ngat\nBTW this appears to work on the redhat/i386 port . SO where has my alpha\ngone wrong :-(\n\n\n\nQUERY: SELECT * FROM shoelace ORDER BY sl_name;\nsl_name |sl_avail|sl_color |sl_len|sl_unit |sl_len_cm\n----------+--------+----------+------+--------+---------\nsl1 | 5|black | 80|cm | 80\nsl2 | 6|black | 100|cm | 100\nsl3 | 0|black | 35|inch | 88.9\nsl4 | 8|black | 40|inch | 101.6\nsl5 | 4|brown | 1|m | 100\nsl6 | 0|brown | 0.9|m | 90\nsl7 | 7|brown | 60|cm | 60\nsl8 | 1|brown | 40|inch | 101.6\n(8 rows)\n\nQUERY: SELECT * FROM shoe_ready WHERE total_avail >= 2;\nshoename |sh_avail|sl_name |sl_avail|total_avail\n----------+--------+----------+--------+-----------\nsh3 | 4|sl7 | 7| 4\nsh1 | 2|sl1 | 5| 2\n(2 rows)\n\n~\n~\n\n\n",
"msg_date": "Thu, 22 Jul 1999 14:20:13 -0400",
"msg_from": "Uncle George <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> Ok,\n> I would have liked to have seen the fix to typedef time_t AbsoluteTime in\n> nabstime.h, rather than this, but i guess its some movement :-/\n\nIt is on our list. Too late to get that into 6.5.1:\n\n * Make Absolutetime/Relativetime int4 because time_t can be int8 on some ports\n\n> As per ur request, I am sending u ( bruce) my changed *.[ch] files from the\n> July 20 source ( they are not diffs ) . All the regressions tests work except\n> for geometry ( precision ? ) and Rules ( of which i will follow up (later)\n> with a question.\n\nI will not accept non-diff files. See tools/make_diff or use cvs diff.\nIs this the kind of Alpha support I get? :-;\n\n\n> I u folks got any questions please let me know, As i'm sure that you will\n> have some.\n> \n> Regarding the Test&set problems, u have to compile spin.c with -fno-inline.\n> i'd give u a makefile but i'm not sure how u folks are handling the\n> ifeq ($(OS), linux )\n> ifeq ($(CPU), alpha )\n> spin.o: spin.c\n> $(CC) $(CFLAGS) -c -fno-inline spin.c -o spin.o\n> endif\n> endif\n> \n> I'm using -O3, and seems happy\n> gat\n\nI have added to backend/storage/ipc/Makefile:\n\n\t# seems to be required 1999/07/22 bjm\n\tifeq ($(CPU),alpha)\n\tifeq ($(CC), gcc)\n\tCFLAGS+= -fno-inline\n\tendif\n\tifeq ($(CC), egcs)\n\tCFLAGS+= -fno-inline\n\tendif\n\tendif\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Jul 1999 14:29:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> \n> I'm using -O3, and seems happy\n\nCan I now put back optimization to -O2 on alpha? Please send me your\nother diffs.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Jul 1999 14:30:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "I dont know.\n u seem to want to inflict -fno-inline on all modules, which was not my\nfix for the test&set problem. I just wanted the -fno-inline to be set for\nonly spin.c, which is the only module on redhat linux/alpha to have this\nparticular problem.\n\nBruce Momjian wrote:\n\n> >\n> > I'm using -O3, and seems happy\n>\n> Can I now put back optimization to -O2 on alpha? Please send me your\n> other diffs.\n>\n\n\n\n",
"msg_date": "Thu, 22 Jul 1999 21:52:19 -0400",
"msg_from": "Uncle George <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> I dont know.\n> u seem to want to inflict -fno-inline on all modules, which was not my\n> fix for the test&set problem. I just wanted the -fno-inline to be set for\n> only spin.c, which is the only module on redhat linux/alpha to have this\n> particular problem.\n> \n\nCan't really hurt to put it on all files in a directory. I hesistate to\nput per-file flags. It is bad enough we are doing per-directory\nflags. If you see any performance difference on per directory vs. per\nfile, I will change it, ok?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Jul 1999 22:01:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "On Thu, 22 Jul 1999, Uncle George wrote:\n\n> As per ur request, I am sending u ( bruce) my changed *.[ch] files from the\n> July 20 source ( they are not diffs ) . All the regressions tests work except\n> for geometry ( precision ? ) and Rules ( of which i will follow up (later)\n> with a question.\n\n\tSounds great! Would you please send the changed *.[ch] files to me\n(only, no need to echo to the rest of the list) as well, I would like to\ntry them out. \n\tAlso, if you don't feel like making diffs, I can make them (once I\nget your changed filed) and send them back to Bruce.\n\tFinally, we are getting somewhere on the Pgsql Linux/Alpha port!!!\n:)\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n",
"msg_date": "Thu, 22 Jul 1999 20:23:34 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> On Thu, 22 Jul 1999, Uncle George wrote:\n> \n> > As per ur request, I am sending u ( bruce) my changed *.[ch] files from the\n> > July 20 source ( they are not diffs ) . All the regressions tests work except\n> > for geometry ( precision ? ) and Rules ( of which i will follow up (later)\n> > with a question.\n> \n> \tSounds great! Would you please send the changed *.[ch] files to me\n> (only, no need to echo to the rest of the list) as well, I would like to\n> try them out. \n> \tAlso, if you don't feel like making diffs, I can make them (once I\n> get your changed filed) and send them back to Bruce.\n> \tFinally, we are getting somewhere on the Pgsql Linux/Alpha port!!!\n> :)\n\nRyan, I just sent you the diffs I received.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Jul 1999 22:39:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "On Thu, 22 Jul 1999, Bruce Momjian wrote:\n\n> > \tSounds great! Would you please send the changed *.[ch] files to me\n> > (only, no need to echo to the rest of the list) as well, I would like to\n> > try them out. \n> \n> Ryan, I just sent you the diffs I received.\n\n\tGot them, Thanks! I will check them out tomorrow and let you know\nhow I fair with them. TTYL.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n",
"msg_date": "Thu, 22 Jul 1999 21:05:11 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> The SELECT * FROM shoe_ready WHERE total_avail >= 2;\n> first give a sh3, and then a sh1.\n> BTW this appears to work on the redhat/i386 port . SO where has my \n> alpha gone wrong :-(\n\nIt's not wrong. If there is no explicit order-by, your system is\nentitled to return results in any damn order it wants to. The result\nas a set is quite correct (barring other unreported troubles)...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 23 Jul 1999 04:33:28 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "Thanks,\n But I think that a computer has no right to any \"damn order\" it\nwants to, particular if its the same src & test facilities.\ngat\n\nshutup HAL, you will get you're chance to talk to these guys later.\n\n\nThomas Lockhart wrote:\n\n> > The SELECT * FROM shoe_ready WHERE total_avail >= 2;\n> > first give a sh3, and then a sh1.\n> > BTW this appears to work on the redhat/i386 port . SO where has my\n> > alpha gone wrong :-(\n>\n> It's not wrong. If there is no explicit order-by, your system is\n> entitled to return results in any damn order it wants to. The result\n> as a set is quite correct (barring other unreported troubles)...\n\n",
"msg_date": "Fri, 23 Jul 1999 07:26:03 -0400",
"msg_from": "Uncle George <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "On Fri, Jul 23, 1999 at 07:26:03AM -0400, Uncle George wrote:\n> Thanks,\n> But I think that a computer has no right to any \"damn order\" it\n> wants to, particular if its the same src & test facilities.\n> gat\n\nThomas' reply is quite correct. Unless you specify an order, the\nunderlying system (maybe not even postgresql, but the OS and libraries\nit uses) may sort and return comparisons in any order, but always a\nconsistent order.\n\nThe fact that an i386 and an alpha processor based systems return\nresults differently should be of no suprise. You must explicitly\nspecify \"ORDER BY xxx\" in a query, and even then you need to know your\ncollation sequences etc.\n\nRegards,\n-- \nPeter Galbavy\nKnowledge Matters Ltd\nhttp://www.knowledge.com/\n",
"msg_date": "Fri, 23 Jul 1999 14:46:44 +0100",
"msg_from": "Peter Galbavy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> But I think that a computer has no right to any \"damn order\" it\n> wants to, particular if its the same src & test facilities.\n\nNow that you mention it, it isn't the same source since we use some\nUnix library sorting routines. It is fairly common for us to see\nordering differences between platforms, which is why you see so many\n\"order by\" clauses in the regression tests. We can add one more (send\npatches? :) and you would never know there was a difference in\nunderlying behavior...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 23 Jul 1999 13:47:11 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
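Following up on the "send patches" aside: the regression-test side of this is just to pin the ordering. A sketch of the adjusted rules.sql query; sorting on total_avail descending happens to match the expected output quoted earlier, though any deterministic key would do:

    -- rules.sql: make the result order platform-independent
    SELECT * FROM shoe_ready WHERE total_avail >= 2
        ORDER BY total_avail DESC;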
{
"msg_contents": "> Thanks,\n> But I think that a computer has no right to any \"damn order\" it\n> wants to, particular if its the same src & test facilities.\n> gat\n\nI totally disagree.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 23 Jul 1999 12:30:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "Thanks,\n But as I said before, with the same src, & tests, same collating\nseq, same lang, same 'c' compiler , and same ..........., u'd expect to\nget the same results. If u don't, as i have found out, there is an\ninconsistency in the PORT, libraries, etc ( whatever ) .\n I can go to upgrade to RH6.0/i386( mine is RH5.2 ) and see if is\nthe same as the RH6.0/alpha, but I really suspect it will (still ) be\ndifferent ( as the RH5.2/i386 matches expected/rules.out ).\n\n Therefor to resolve this inconsistency, I would like to know where\nthe output get ( or gets not ) sorted properly. Any suggestions ?\n\n Linux, et al, is suppose to be consistent on all platforms, and a\nlot of people try very hard to get each linux port in-line with all\nother ports. I dont percieve postgresql as being any different on any\nother linux/( intel/alpha/ppc/sparc/mips ) machine. So I have said, so\nshall it be done. ( :-) )\ngat\n\nThomas Lockhart wrote:\n\n> > But I think that a computer has no right to any \"damn order\" it\n> > wants to, particular if its the same src & test facilities.\n>\n> Now that you mention it, it isn't the same source since we use some\n> Unix library sorting routines. It is fairly common for us to see\n> ordering differences between platforms, which is why you see so many\n> \"order by\" clauses in the regression tests. We can add one more (send\n> patches? :) and you would never know there was a difference in\n> underlying behavior...\n>\n> - Thomas\n\n",
"msg_date": "Fri, 23 Jul 1999 13:35:41 -0400",
"msg_from": "Uncle George <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "On the July 26 snapshot, I have not seen the changes to the postgresql/alpha\nport. Although u can reiterate as much as u want, I percieve no particular\ndirection by u folks to get the alpha port done 100%. I haven't even been\nasked any questions about the alpha patches.\n\nOver the weekend I have resolved the problems with rules.sql, and despite\nyour assurances, I have resolved it to a postgresql peculiarity dealing with\nCost.\n\nAnyway, I have to move on, and just cant wait. Lemme know when u have\napplied/ resolved what to do with the patches.\ngat\n\nBruce Momjian wrote:\n\n>\n>\n> Let me reiterate Thomas's comments on this. Alpha has been a very\n> bad port for us. I realize the problems are complex, but each alpha\n> person seems to know only 80% of what we need to get things working\n> 100%. We get partial solutions to small problems, that just seem to\n> fix things long enough for current release. We had one release that\n> would not even initdb on alpha. We really need alpha folks to get their\n> noses to the grindstones and give us some solid causes/fixes to their\n> problems.\n>\n\n\n",
"msg_date": "Mon, 26 Jul 1999 16:32:25 -0400",
"msg_from": "Uncle George <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha & postgresql"
},
{
"msg_contents": "> On the July 26 snapshot, I have not seen the changes to the postgresql/alpha\n> port. Although u can reiterate as much as u want, I percieve no particular\n> direction by u folks to get the alpha port done 100%. I haven't even been\n> asked any questions about the alpha patches.\n> \n> Over the weekend I have resolved the problems with rules.sql, and despite\n> your assurances, I have resolved it to a postgresql peculiarity dealing with\n> Cost.\n> \n> Anyway, I have to move on, and just cant wait. Lemme know when u have\n> applied/ resolved what to do with the patches.\n> gat\n> \n\nThanks. We are getting over the 6.5.* releases, and are relaxing a\nlittle, seeing as 6.6 is months away. I will certainly let you know\nwhen the patches are applied. Thanks.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 26 Jul 1999 23:04:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha & postgresql"
},
{
"msg_contents": "> > > Sounds great! Would you please send the changed *.[ch] files to me\n> > Ryan, I just sent you the diffs I received.\n> Got them, Thanks! I will check them out tomorrow and let you know\n> how I fair with them. TTYL.\n\nWhere are we on the Alpha port? Once we have some reasonable behavior,\nI'd like to build some source RPMs which contain the patches. They do\nnot have to be applied to the main tree if that is premature, but once\nthey are put into a source RPM then Uncle G can build some binary RPMs\nfor Alpha and try them out.\n\nRedHat will release new RPMs when the Alpha port works, since they're\nanxious that the Alpha is supported...\n\nAs an aside, I've just posted Intel RPMs for v6.5.1 on\n\n ftp://postgresql.org/pub/{RPMS,SRPMS}/*.rpm\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 27 Jul 1999 14:39:31 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> > > > Sounds great! Would you please send the changed *.[ch] files to me\n> > > Ryan, I just sent you the diffs I received.\n> > Got them, Thanks! I will check them out tomorrow and let you know\n> > how I fair with them. TTYL.\n> \n> Where are we on the Alpha port? Once we have some reasonable behavior,\n> I'd like to build some source RPMs which contain the patches. They do\n> not have to be applied to the main tree if that is premature, but once\n> they are put into a source RPM then Uncle G can build some binary RPMs\n> for Alpha and try them out.\n> \n> RedHat will release new RPMs when the Alpha port works, since they're\n> anxious that the Alpha is supported...\n> \n> As an aside, I've just posted Intel RPMs for v6.5.1 on\n> \n> ftp://postgresql.org/pub/{RPMS,SRPMS}/*.rpm\n\nI just bounced the alpha patch over to you, Thomas. If you like it, it\ncan be applied.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 27 Jul 1999 11:12:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> I just bounced the alpha patch over to you, Thomas. If you like it, it\n> can be applied.\n\nGreat. But I'm looking for feedback from Ryan if he has a chance to\ntest it.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 27 Jul 1999 15:43:35 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "On Tue, 27 Jul 1999, Thomas Lockhart wrote:\n\n> > I just bounced the alpha patch over to you, Thomas. If you like it, it\n> > can be applied.\n> \n> Great. But I'm looking for feedback from Ryan if he has a chance to\n> test it.\n\n\tSorry, I have been a bit busy over the weekend. I did get to test\nit on Friday though. The patch applied flawlessly to that day's snapshot.\nThough I quickly hit a minor, but annoying snag. The configure script\ndetects my XLT 366 Alpha's CPU as 'alphaev5', which means that none of the\nalpha conditional clauses in the makefiles get evaluated correctly, and\none ends up with a binary that gets stuck spinlocks (when using -O2 for\nCFLAGS). \n\tI couldn't find anyway to tell make to look for alpha only at the\nstart of the CPU string (i.e. '$CPU =~ /^alpha.*/' in perl syntax), but\nthere might be one I missed. I simply ran configure, then edited\nmakefile.global, and changed 'alphaev5' to 'alpha' and complied as usual.\n\tThis time it worked great! No stuck spinlocks (and -O2 was used!),\nand all the regression tests, saved for rules as Uncle G. has already\nmentioned. \n\tSo, other than the CPU type detection problem, everything looks\nvery good. I have given postgres a decent work out, loading large data\nsets (8 tables, 88k records), and then accessing via a web interface I am\nwriting for work, without any problems at all.\n\tIf no one minds, I will forward Uncle G.'s patches onto some\nDebian-Alpha hackers that contacted me a while back about the status of\npgsql on alphas, and see what reaction they have to them.\n\tTTYL.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n",
"msg_date": "Tue, 27 Jul 1999 13:48:49 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
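One plausible reading of the stuck-spinlock symptom above: at -O2 a compiler may cache a non-volatile lock word in a register, so the wait loop never sees another backend release it. The sketch below shows only that hazard; it is not PostgreSQL's actual s_lock code, and a real Alpha lock would also need ldq_l/stq_c atomics, which this deliberately omits.

```c
#include <stdio.h>

/* Hypothetical illustration only -- not PostgreSQL's s_lock.c.
 * Without volatile, an optimizer may read the lock word once and
 * spin on a register copy forever, which looks exactly like a
 * "stuck spinlock".  volatile forces a memory read per iteration. */
typedef volatile long slock_t;

static void
spin_acquire(slock_t *lock)
{
    while (*lock != 0)
        ;           /* re-reads memory because *lock is volatile */
    *lock = 1;      /* not atomic -- placeholder for real test-and-set */
}

int
main(void)
{
    slock_t lock = 0;

    spin_acquire(&lock);
    printf("lock value: %ld\n", (long) lock);
    return 0;
}
```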
{
"msg_contents": "> > Great. But I'm looking for feedback from Ryan if he has a chance to\n> > test it.\n> Sorry, I have been a bit busy over the weekend. I did get to test\n> it on Friday though. The patch applied flawlessly to that day's snapshot.\n> Though I quickly hit a minor, but annoying snag. The configure script\n> detects my XLT 366 Alpha's CPU as 'alphaev5', which means that none of the\n> alpha conditional clauses in the makefiles get evaluated correctly, and\n> one ends up with a binary that gets stuck spinlocks (when using -O2 for\n> CFLAGS).\n> I couldn't find anyway to tell make to look for alpha only at the\n> start of the CPU string (i.e. '$CPU =~ /^alpha.*/' in perl syntax), but\n> there might be one I missed. I simply ran configure, then edited\n> makefile.global, and changed 'alphaev5' to 'alpha' and complied as usual.\n\nHmm. That can probably be worked around with an entry in\nMakefile.custom, though I haven't looked at the specific usage.\n\n> This time it worked great! No stuck spinlocks (and -O2 was used!),\n> and all the regression tests, saved for rules as Uncle G. has already\n> mentioned.\n\nFantastic.\n\n> So, other than the CPU type detection problem, everything looks\n> very good. I have given postgres a decent work out, loading large data\n> sets (8 tables, 88k records), and then accessing via a web interface I am\n> writing for work, without any problems at all.\n> If no one minds, I will forward Uncle G.'s patches onto some\n> Debian-Alpha hackers that contacted me a while back about the status of\n> pgsql on alphas, and see what reaction they have to them.\n\nForwarding the patches is good. Is there anything in them which could\npossibly damage a non-alpha machine? If not, and if they are on the\nright track (they must be, since things actually work finally :) then\nthey should eventually end up in our main tree.\n\nIn glancing through the patches, I notice that one change is to pass\n\"Datum\" to all ADT functions which take a char, int2, or int4. That\ncertainly makes the code uglier, but I can see that fudging the calls\nas we did earlier might have led to trouble.\n\nIn the meantime, they could end up in Linux RPMs as patches to the\npristine distribution, and could be in new RPMs released through\nRedHat. They will be very excited (or at least as excited as they\nget... ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 28 Jul 1999 14:34:56 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
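To illustrate the Datum change Thomas mentions: on a 64-bit Alpha, a Datum is pointer-sized (8 bytes) while int4 is 4, so "fudging" calls that mix the two can misread the high bits in ways a 32-bit x86 silently tolerates. This is a simplified, hedged sketch of the idea, not the real fmgr interface; the typedef and macros below are stand-ins modeled loosely on PostgreSQL's naming.

```c
#include <stdio.h>

typedef unsigned long Datum;          /* 8 bytes on Alpha (assumption) */

/* Explicit width conversions instead of relying on implicit casts. */
#define DatumGetInt32(d)  ((int) (d))
#define Int32GetDatum(i)  ((Datum) (i))

/* An "ADT function" declared to take Datum, unpacking explicitly,
 * rather than being declared with int4 and called through a
 * mismatched pointer type. */
static Datum
int4pl(Datum a, Datum b)
{
    return Int32GetDatum(DatumGetInt32(a) + DatumGetInt32(b));
}

int
main(void)
{
    Datum sum = int4pl(Int32GetDatum(40), Int32GetDatum(2));

    printf("%d\n", DatumGetInt32(sum));   /* prints 42 */
    return 0;
}
```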
{
"msg_contents": "On Wed, 28 Jul 1999, Thomas Lockhart wrote:\n\n> > This time it worked great! No stuck spinlocks (and -O2 was used!),\n> > and all the regression tests, saved for rules as Uncle G. has already\n> > mentioned.\n> \n> Fantastic.\n\n\tOne thing I did forget to mention, is that I am getting a decent\nhandful of unaligned traps from postmaster. To put a number on that, from\nrunning the regression tests three times, once with numeric_big enabled, I\ngot ~164 unaligned traps. \n\tNot a show stopper, but something that probably needs to looked\ninto at some point in order to maximize performance of pgsql on Alphas.\n\n> Forwarding the patches is good. Is there anything in them which could\n> possibly damage a non-alpha machine? If not, and if they are on the\n> right track (they must be, since things actually work finally :) then\n> they should eventually end up in our main tree.\n\n\tI will pass the patches on to the Debian people and see what their\nexperience is with them. I know they have a handful of patches they\nalready apply to pgsql as it is (mostly reorganization of files I think),\nso to add one more won't cause them too much more trouble for the time\nbeing.\n\n> In the meantime, they could end up in Linux RPMs as patches to the\n> pristine distribution, and could be in new RPMs released through\n> RedHat. They will be very excited (or at least as excited as they\n> get... ;)\n\n\tAnd something similar for the debian packages as well. I will make\nsure the debian peple get the patches, though I will leave the\nresponsiblity of getting the patches to the redhat people to someone else.\n:) TTYL.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n",
"msg_date": "Wed, 28 Jul 1999 21:18:23 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
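For readers unfamiliar with the traps Ryan reports: on Alpha, loading a long through a pointer that is not naturally aligned makes the kernel emulate the access and log an "unaligned trap", which costs performance even though the program keeps running. A hypothetical example of the pattern and the usual memcpy fix (this is generic C, not a location from the PostgreSQL source):

```c
#include <stdio.h>
#include <string.h>

int
main(void)
{
    char buf[16];
    long value = 12345;
    long out;

    memcpy(buf + 1, &value, sizeof(value));   /* store at an odd offset */

    /* BAD on Alpha: *(long *)(buf + 1) would trap (and is undefined
     * behavior in general).  The portable fix is to copy instead: */
    memcpy(&out, buf + 1, sizeof(out));

    printf("%ld\n", out);
    return 0;
}
```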
{
"msg_contents": "> On Wed, 28 Jul 1999, Thomas Lockhart wrote:\n> \n> > > This time it worked great! No stuck spinlocks (and -O2 was used!),\n> > > and all the regression tests, saved for rules as Uncle G. has already\n> > > mentioned.\n> > \n> > Fantastic.\n> \n> \tOne thing I did forget to mention, is that I am getting a decent\n> handful of unaligned traps from postmaster. To put a number on that, from\n> running the regression tests three times, once with numeric_big enabled, I\n> got ~164 unaligned traps. \n> \tNot a show stopper, but something that probably needs to looked\n> into at some point in order to maximize performance of pgsql on Alphas.\n\nDoes it give you the location? I have already applied some alignment\ncleanups to the current cvs tree.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 28 Jul 1999 23:27:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "\n\tWhen I sat down to send out Uncle G.'s patches to the debian\ndevelopers I realized that the patches really only apply to a moving\ntarget. What I mean, is that they will only apply to current snapshots\n(i.e. Jun 23's), but not to the older 6.5.1 release. By giving out these\npatches, and telling them to just go and get a snapshot, they might end up\ngetting the snapshot on a day that pgsql is broken, or the patch will no\nlonger apply. The best solution I can think of is just to take one of the\nsnapshots (today's if it works, testing it now, otherwise last Fridays),\nand setting it aside along with the patches in a seperate 'linux_alpha'\ndirectory so packagers can have something \"non-moving\" to package for\nthier distributions. Is this a good idea, or does someone have a better\none?\n\n\tAlso, I found at least a temporary solution to the problem of\nalpha CPUs being detected as alphaev5, etc... and breaking the 'alpha'\nmakefile conditionals. Just add 'CPU:alpha' to the linux_alpha template.\nIs there a reason that this would be a bad idea? I don't even really see\nthe reason why config.guess wants to differeniate between different alpha\nCPUs in the first place?\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n\n",
"msg_date": "Thu, 29 Jul 1999 08:33:02 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
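The prefix test Ryan wished for earlier ('$CPU =~ /^alpha.*/') amounts to a strncmp in C. The sketch below is purely illustrative of the check a configure-time fix would make; the function name is invented and nothing like it exists in the actual tree.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative only: the prefix check that would let alphaev5,
 * alphaev56, etc. all match the "alpha" makefile rules. */
static int
is_alpha_cpu(const char *cpu)
{
    return strncmp(cpu, "alpha", 5) == 0;
}

int
main(void)
{
    const char *guesses[] = { "alpha", "alphaev5", "alphaev56", "sparc" };
    int i;

    for (i = 0; i < 4; i++)
        printf("%-10s -> %s\n", guesses[i],
               is_alpha_cpu(guesses[i]) ? "alpha rules apply" : "no match");
    return 0;
}
```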
{
"msg_contents": "> \n> \tWhen I sat down to send out Uncle G.'s patches to the debian\n> developers I realized that the patches really only apply to a moving\n> target. What I mean, is that they will only apply to current snapshots\n> (i.e. Jun 23's), but not to the older 6.5.1 release. By giving out these\n> patches, and telling them to just go and get a snapshot, they might end up\n> getting the snapshot on a day that pgsql is broken, or the patch will no\n> longer apply. The best solution I can think of is just to take one of the\n> snapshots (today's if it works, testing it now, otherwise last Fridays),\n> and setting it aside along with the patches in a seperate 'linux_alpha'\n> directory so packagers can have something \"non-moving\" to package for\n> thier distributions. Is this a good idea, or does someone have a better\n> one?\n\nI would try applying to 6.5.1, make any hand tweeks needed, and generate\na patch from that for 6.5.1.\n\n> \tAlso, I found at least a temporary solution to the problem of\n> alpha CPUs being detected as alphaev5, etc... and breaking the 'alpha'\n> makefile conditionals. Just add 'CPU:alpha' to the linux_alpha template.\n> Is there a reason that this would be a bad idea? I don't even really see\n> the reason why config.guess wants to differeniate between different alpha\n> CPUs in the first place?\n\nSome optmizations are turned off in some Makefiles like\nbackend/utils/adt and backend/storage/ipc. Now that I think of it, you\ncan't send out patches for 6.5.1 because we don't have the alpha stuff\nin there that was put in after 6.5.1. I think the current snapshot may\nbe safe for general use.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 29 Jul 1999 10:41:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> When I sat down to send out Uncle G.'s patches to the debian\n> developers I realized that the patches really only apply to a moving\n> target. What I mean, is that they will only apply to current snapshots\n> (i.e. Jun 23's), but not to the older 6.5.1 release. By giving out these\n> patches, and telling them to just go and get a snapshot, they might end up\n> getting the snapshot on a day that pgsql is broken, or the patch will no\n> longer apply. The best solution I can think of is just to take one of the\n> snapshots (today's if it works, testing it now, otherwise last Fridays),\n> and setting it aside along with the patches in a seperate 'linux_alpha'\n> directory so packagers can have something \"non-moving\" to package for\n> thier distributions. Is this a good idea, or does someone have a better\n> one?\n\nI didn't realize that they weren't developed on v6.5.1 sources. That\nis what I'll need to develop RPM patches. I'd suggest that we work\nwith the v6.5.1 tar file, unless we think that using this version is\nunrealistic, in which case we are waiting for v6.6. As you point out,\na daily snapshot of almost any vintage should be suspect.\n\nLamar Owen is talking to RedHat about getting access to an Alpha\nmachine to help with RPM builds. If that pans out perhaps it will be a\ngood resource for us...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 29 Jul 1999 14:44:31 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "On Thu, 29 Jul 1999, Bruce Momjian wrote:\n\n> > \tAlso, I found at least a temporary solution to the problem of\n> > alpha CPUs being detected as alphaev5, etc... and breaking the 'alpha'\n> > makefile conditionals. Just add 'CPU:alpha' to the linux_alpha template.\n> > Is there a reason that this would be a bad idea? I don't even really see\n> > the reason why config.guess wants to differeniate between different alpha\n> > CPUs in the first place?\n> \n> Some optmizations are turned off in some Makefiles like\n> backend/utils/adt and backend/storage/ipc. \n\n\tFrom what I can tell (i.e. via grep), the CPU variable is only\nused to turn on/off the linux/alpha specific makefile rules that have been\nadded recently. Now, in the future that might change, and there be\noptimizations only for a certain level of alpha chip, which the templates\nhack could break. Of course, we could just deal with the problem when we\nreach it, since it will not be difficult to undo the templates hack and\ncome up with another way to detect CPU type at the makefile level.\n\n> Now that I think of it, you can't send out patches for 6.5.1 because\n> we don't have the alpha stuff in there that was put in after 6.5.1. \n> I think the current snapshot may be safe for general use.\n\n\tThat is what I figured out when the diff between 6.5.1 and\nFriday's snapshot came out at about 3.5MB. The time required to backport\nthe linux/alpha patches to 6.5.1 would be better spent else where.\n\tI just grabbed today's snapshot, patches applied fined, compiled\nand ran regression tests with no problems. Also, the regression tests only\ngenerated 20 unaliagned traps this time, which is a reduction from earlier\n(I think).\n\tAs for distribution packages, we want to get pgsql packages for\nalpha with these patches out there so people can pound on them before we\nroll the patches into the cvs tree for a formal release. That way,\nanything still lingering would be found soon, rather than later. Of\ncourse, we would want the packages to say clearly that they are beta or\npreliminary version only, and so don't use for mission critical operations\nuntil one has tested it out.\n\tAnyway, I will make a set of patches on today's snapshot that\nincludes Uncle G's, and clean up of the linux_alpha template file (setting\n-O2 again, and the CPU define), and then post that here to be forwarded on\nto package developers by the respective people (I will get them to the\ndebian people). Also, either a copy of today's snapshot needs to be set\naside (on the ftp site) for applying these patches, or I will stick the\nsnapshot on my web site.\n\tTTYL.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n",
"msg_date": "Thu, 29 Jul 1999 09:14:12 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> > Now that I think of it, you can't send out patches for 6.5.1 because\n> > we don't have the alpha stuff in there that was put in after 6.5.1.\n> > I think the current snapshot may be safe for general use.\n> That is what I figured out when the diff between 6.5.1 and\n> Friday's snapshot came out at about 3.5MB. The time required to backport\n> the linux/alpha patches to 6.5.1 would be better spent else where.\n> As for distribution packages, we want to get pgsql packages for\n> alpha with these patches out there so people can pound on them before we\n> roll the patches into the cvs tree for a formal release. That way,\n> anything still lingering would be found soon, rather than later. Of\n> course, we would want the packages to say clearly that they are beta or\n> preliminary version only, and so don't use for mission critical operations\n> until one has tested it out.\n\nI'm disappointed that we won't have a set of patches for v6.5.1. Is\nthere any possibility of putting these patches into our REL6_5_PATCHES\nbranch to prepare for a v6.5.2 release? What in the current set of\npatches would make this difficult? I believe that Tom Lane has been\npretty good about committing to that branch, and I don't know what\nelse might be missing.\n\nI'm willing to try patching that branch if others could help with\ntesting (don't have an Alpha myself).\n\nI've been trying to get things together so we can have a viable RPM\ndistribution of Postgres for Alphas. RedHat is interested, and I think\nthat it would help the Postgres cause. Does anyone else have this\nspecific interest, or should we just have them wait another 4 months??\n\nComments or suggestions?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 29 Jul 1999 15:32:01 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> > > Now that I think of it, you can't send out patches for 6.5.1 because\n> > > we don't have the alpha stuff in there that was put in after 6.5.1.\n> > > I think the current snapshot may be safe for general use.\n> > That is what I figured out when the diff between 6.5.1 and\n> > Friday's snapshot came out at about 3.5MB. The time required to backport\n> > the linux/alpha patches to 6.5.1 would be better spent else where.\n> > As for distribution packages, we want to get pgsql packages for\n> > alpha with these patches out there so people can pound on them before we\n> > roll the patches into the cvs tree for a formal release. That way,\n> > anything still lingering would be found soon, rather than later. Of\n> > course, we would want the packages to say clearly that they are beta or\n> > preliminary version only, and so don't use for mission critical operations\n> > until one has tested it out.\n> \n> I'm disappointed that we won't have a set of patches for v6.5.1. Is\n> there any possibility of putting these patches into our REL6_5_PATCHES\n> branch to prepare for a v6.5.2 release? What in the current set of\n> patches would make this difficult? I believe that Tom Lane has been\n> pretty good about committing to that branch, and I don't know what\n> else might be missing.\n\nOK, I don't want Thomas disappointed. We have the changes for alignment\nI made, and some changes for optimization in certain places, and the\nUncle George patch, and the removal of the bad comment in the template\nfile.\n\nMy recommendation(hold on to your seats) is to take the current cvs\ntree, patch it with Uncle George's patches and any others needed, and\nrelease a 6.5.2 release that addresses alpha. We can back-patch 6.5.2,\nbut there is really no reason to do that. There is really nothing\n'special' in the current tree. In fact, the most risky of them are the\nalpha ones, and since that is what we are trying to fix, we are not\nadding any new problems to the code.\n\nI am working on some cache stuff, but that is not committed.\n\n> I've been trying to get things together so we can have a viable RPM\n> distribution of Postgres for Alphas. RedHat is interested, and I think\n> that it would help the Postgres cause. Does anyone else have this\n> specific interest, or should we just have them wait another 4 months??\n\nThat is a long time.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 29 Jul 1999 11:44:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> OK, I don't want Thomas disappointed.\n\nThanks. I think ;)\n\n> We have the changes for alignment\n> I made, and some changes for optimization in certain places, and the\n> Uncle George patch, and the removal of the bad comment in the template\n> file.\n> My recommendation(hold on to your seats) is to take the current cvs\n> tree, patch it with Uncle George's patches and any others needed, and\n> release a 6.5.2 release that addresses alpha. We can back-patch 6.5.2,\n> but there is really no reason to do that. There is really nothing\n> 'special' in the current tree. In fact, the most risky of them are the\n> alpha ones, and since that is what we are trying to fix, we are not\n> adding any new problems to the code.\n\nOK. Another tack would be to do what you suggest on the main tree, and\nthen backpatch using diffs on the entire tree. Then we can release on\nthe v6.5.x branch as we would have liked.\n\nI'll be happy to attempt the backpatching, and if I fail then we can\nproceed with a v6.5.2 release based on the main tree. But I'm more\ncomfortable knowing that we've inspected every patch, and included\nonly those which address something significant.\n\nDoes this sound unrealistic? I'm guessing that the backpatching can\nhappen fairly easily, but I don't understand why someone just reported\n3.5MB of diffs. Hmm, how much of those diffs are on the docs tree? I\ndid make a bunch of changes to get the man pages going, and they\naren't relevant for v6.5.2 which could be limited to the src/ tree.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 29 Jul 1999 16:05:15 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> OK. Another tack would be to do what you suggest on the main tree, and\n> then backpatch using diffs on the entire tree. Then we can release on\n> the v6.5.x branch as we would have liked.\n\nWhy not just use the current tree. What does backpatching the entire\ntree do for us?\n\n> \n> I'll be happy to attempt the backpatching, and if I fail then we can\n> proceed with a v6.5.2 release based on the main tree. But I'm more\n> comfortable knowing that we've inspected every patch, and included\n> only those which address something significant.\n\nOh, yes. I see. Good idea to just review the patches and see what is\ninvolved. It is actually pretty easy to do that in one big patch.\n\n> Does this sound unrealistic? I'm guessing that the backpatching can\n> happen fairly easily, but I don't understand why someone just reported\n> 3.5MB of diffs. Hmm, how much of those diffs are on the docs tree? I\n> did make a bunch of changes to get the man pages going, and they\n> aren't relevant for v6.5.2 which could be limited to the src/ tree.\n\nDon't know.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 29 Jul 1999 12:08:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "On Thu, 29 Jul 1999, Bruce Momjian wrote:\n\n> > I'm disappointed that we won't have a set of patches for v6.5.1. Is\n> > there any possibility of putting these patches into our REL6_5_PATCHES\n> > branch to prepare for a v6.5.2 release? What in the current set of\n> > patches would make this difficult? I believe that Tom Lane has been\n> > pretty good about committing to that branch, and I don't know what\n> > else might be missing.\n> \n> OK, I don't want Thomas disappointed. We have the changes for alignment\n> I made, and some changes for optimization in certain places, and the\n> Uncle George patch, and the removal of the bad comment in the template\n> file.\n\n\tAttached is a mini patch of the changes I made to the linux_alpha\ntemplate file. Review and use as you wish. Basically just sets -O2 flag\nfor CFLAGS and also forces the CPU variable to be alpha, so as not to\nbreak the alpha specific makefile rules when the alpha processor is\nget detected as an alphaev5, etc...\n\tOtherwise, everything looks good!\n\n> My recommendation(hold on to your seats) is to take the current cvs\n> tree, patch it with Uncle George's patches and any others needed, and\n> release a 6.5.2 release that addresses alpha. We can back-patch 6.5.2,\n> but there is really no reason to do that. There is really nothing\n> 'special' in the current tree. In fact, the most risky of them are the\n> alpha ones, and since that is what we are trying to fix, we are not\n> adding any new problems to the code.\n\n\tWhile my opinion might not matter that much (not being a major\npgsql developer), I second this idea! By the end of the day I will have\ntaken the 'alpha' patched version of today's snapshot, and\ncompiled/regressed on Linux/Intel, Solaris/Sparc, and maybe Linux/Sparc.\nThat should give us a good idea if the alpha patches are going to break\nanything on other platforms (hopefully not).\n\tOnce you have a 6.5.2 release source tree ready for download (i.e.\njust before public announcement/distribution), let me know and I will run\nit through my systems (Alpha, Intel, and Sparc) just to double check.\n\tWorst case, Linux/Alpha uses 6.5.2 and everyone else (other\nplatforms) uses 6.5.1 until the next major release. This, while a bit\nconfusing/annoying, would not be a show stopper. :)\n\n> > I've been trying to get things together so we can have a viable RPM\n> > distribution of Postgres for Alphas. RedHat is interested, and I think\n> > that it would help the Postgres cause. Does anyone else have this\n> > specific interest, or should we just have them wait another 4 months??\n> \n> That is a long time.\n\n\tHence the reason we should try and get an easy to\nuse/compile/package version of pgsql for Linux/Alpha out the door as soon\nas reasonably possible. That is, one with out patches, it just compiles\nout of the box (for Linux/Alpha) at leat.\n\tTTYL.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------",
"msg_date": "Thu, 29 Jul 1999 10:30:52 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> \tAttached is a mini patch of the changes I made to the linux_alpha\n> template file. Review and use as you wish. Basically just sets -O2 flag\n> for CFLAGS and also forces the CPU variable to be alpha, so as not to\n> break the alpha specific makefile rules when the alpha processor is\n> get detected as an alphaev5, etc...\n> \tOtherwise, everything looks good!\n\nI question the CPU line. I modified configure to set CPU. Does the\ntemplate over-ride this?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 29 Jul 1999 13:03:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "On Thu, 29 Jul 1999, Thomas Lockhart wrote:\n\n> OK. Another tack would be to do what you suggest on the main tree, and\n> then backpatch using diffs on the entire tree. Then we can release on\n> the v6.5.x branch as we would have liked.\n> \n> I'll be happy to attempt the backpatching, and if I fail then we can\n> proceed with a v6.5.2 release based on the main tree. But I'm more\n> comfortable knowing that we've inspected every patch, and included\n> only those which address something significant.\n\n\tI will leave you guys to the finer points of source tree\nmanagement (still learning all of the capablities of cvs myself).\n\n> Does this sound unrealistic? I'm guessing that the backpatching can\n> happen fairly easily, but I don't understand why someone just reported\n> 3.5MB of diffs. Hmm, how much of those diffs are on the docs tree? I\n> did make a bunch of changes to get the man pages going, and they\n> aren't relevant for v6.5.2 which could be limited to the src/ tree.\n\n\tI was the one who reported the 3.5MB of diffs. And yes, I did\ncheck to see how many of them were docs, only about 20% of the total\ndiffs. :( I simply took an alpha patched snapshot from today and diffed it\nagainst the 6.5.1 release (after removing all of the CVS directories from\nthe latter). \n\tI still think that backpatching to create an \"alpha\" patch for\n6.5.1 is a bad idea and a waste of time. It is also a waste of time for\ndistribution packagers who have deal with applying yet another patch to\nthe distribution source tree, and everything involved with that. Also,\nthere are those who just want to get the source and compile pgsql for\nthier own use themselves, and many of them don't like having to mess with\npatches. Overall, it just adds unnecessary work and complexity to the\nrelease of a \"Linux/Alpha Ready\" version of pgsql.\n\tIMHO a 6.5.2 release with all of the necessary alpha patches\nalready in the distribution source tree is a much cleaner, clearer\nsolution, for distribution packagers, average users, and\ncompile-it-yourself-people.\n\tMy two cents. TTYL.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n",
"msg_date": "Thu, 29 Jul 1999 11:04:54 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> \tIMHO a 6.5.2 release with all of the necessary alpha patches\n> already in the distribution source tree is a much cleaner, clearer\n> solution, for distribution packagers, average users, and\n> compile-it-yourself-people.\n\nI think he was going to generate a 6.5.2 by back-patching, not\ndistributing a new patch to make 6.5.2.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 29 Jul 1999 13:08:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> > IMHO a 6.5.2 release with all of the necessary alpha patches\n> > already in the distribution source tree is a much cleaner, clearer\n> > solution, for distribution packagers, average users, and\n> > compile-it-yourself-people.\n> I think he was going to generate a 6.5.2 by back-patching, not\n> distributing a new patch to make 6.5.2.\n\nYup.\n\nOK, I'm trying to do this to help the Alpha folks, in such a way that\nit helps the Alpha-linux-RH folks to get RPMs also. Having a 6.5.2\nwhich does not run on Intel or Sparc does not help. Having a 6.5.2\nwhich has diverged from the 6.5.x tree in unknown ways does not help.\nHaving us decide by consensus the appropriate model for s/w\ndevelopment (main tree with changes progressing to a full release,\nbranch tree to carry maintenance changes) and then at the first\nopportunity step away from that seems counterproductive in the\nextreme. We ran into this same discussion during v6.4.x, and we're\ndoing it again.\n\nIf y'all can't maintain two branches, then let's stop doing it. otoh,\nwe can't do maintenance releases without a stable branch, so we'd\nbetter think about it before giving up.\n\nI've offered to help, much more than I should bother with. I'll leave\nit to other Alpha stakeholders to decide what they want. I should\npoint out that I offered to our RedHat contacts to try to marshall an\nAlpha-ready build, but so far it's like herding cats.\n\nAnd *really*, if we have 3.5MB of diffs, who are we kidding about\nknowing where they all came from and what they are doing? Backpatching\nor developing patches on a clean 6.5.1 release is the only thing to do\nfor a 6.5.2. Otherwise, call it 6.6-prealpha and we'll wait 4 months\nfor RPMs.\n\nMy $0.03 ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 30 Jul 1999 00:16:03 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> And *really*, if we have 3.5MB of diffs, who are we kidding about\n> knowing where they all came from and what they are doing? Backpatching\n> or developing patches on a clean 6.5.1 release is the only thing to do\n> for a 6.5.2. Otherwise, call it 6.6-prealpha and we'll wait 4 months\n> for RPMs.\n\nOK, let's punt. If someone wants to develop an alpha-only patch for\n6.5.1, they are welcome. We certainly had enought beta time to allow\nAlpha people to address this. After the final minor release is just too\nlate. Sorry.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 29 Jul 1999 20:40:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> OK, I'm trying to do this to help the Alpha folks, in such a way that\n> it helps the Alpha-linux-RH folks to get RPMs also. Having a 6.5.2\n\n[snip]\n\n> point out that I offered to our RedHat contacts to try to marshall an\n> Alpha-ready build, but so far it's like herding cats.\n \n> And *really*, if we have 3.5MB of diffs, who are we kidding about\n> knowing where they all came from and what they are doing? Backpatching\n> or developing patches on a clean 6.5.1 release is the only thing to do\n> for a 6.5.2. Otherwise, call it 6.6-prealpha and we'll wait 4 months\n> for RPMs.\n \n> My $0.03 ;)\n\nI second this. In the last few months, PostgreSQL has really been\nmaking progress in the mindshare area -- once, what was written off as\nbeing unreliable, buggy, and slow, not to mention feature-lean, is now\nbeing touted by many as \"commercial quality\", \"the Free Software\nequivalent to Oracle\", \"stable\", \"reliable\", and \"fast\".\n\nI'm all for having the latest and greatest snapshots working on the\nAlpha -- Woo Hoo, etc, etc. I'm all for the current CVS tree building\nlike a champ on Alpha -- this is good stuff. HOWEVER, if there is a\nneed for a 6.5.x running on Alpha, then 6.5.1 needs to get the Alpha\npatches (possibly a few other reliability patches -- but, keep the\nnumber of patches down to a minimum -- this is still a 6.5.x release --\nbug fixing only.) for a 6.5.2, where the advertised bugfixes include the\nlong-awaited Alpha patches.\n\nFor goodness sakes, Alpha is a major architecture -- this needs to be\ndone right. Make the number of possible variables a minimum -- let's\nget a patch set working that applies to virgin 6.5.1. If backporting\nand backpatching is required to do this, in the name of ROBUSTNESS -- by\nall means -- let's do it. \n\n(I say all this after Thomas had to \"slap me around\" a little -- I have\nbeen getting the cart before the horse on some of the RPM issues, and\nneeded a good reminder of just what kind of software package I'm working\non! This is an RDBMS -- people will be using this for major data -- like\nthe guy from Australia who e-mailed here not long ago about 6.5 vs\n6.4.2, and mentioned that his database had a few MILLION rows -- did\nanybody catch the significance of that? (Wayne Pierkarski from\nsenet.com.au) Thanks for the wakeup call, Thomas.)\n\nThanks and kudos go the the guys who have made the Alpha port work --\nnow, let's get a patch set against 6.5.1 that works -- if that proves\ntoo difficult, we'll just have to wait until pre-6.6, as Thomas already\nsaid.\n\nPostgreSQL is kicking major tuples -- let's keep it that way.... \n\nMy 1.5 cents...\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Thu, 29 Jul 1999 20:44:12 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> And *really*, if we have 3.5MB of diffs, who are we kidding about\n> knowing where they all came from and what they are doing?\n\n3.5MB of diffs? I must have missed something ... where did those\ncome from, and what are they?\n\nI agree that no large changes should go into 6.5.x at this point,\nbut should we be accepting these diffs into the 6.6 branch?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 29 Jul 1999 21:11:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PORTS] RedHat6.0 & Alpha "
},
{
"msg_contents": "\nOkay, let me get this straight...v6.5 was in beta for, what, 2 months?\nAnd it isn't until *after* v6.5.1 is released that the Alpha guys realized\nthat \"oops, it doesn't work\"? And they have a patch that amounts to ~1/2\nthe size of the current distribution to get this to work?\n\n*rofl*\n\nThe stable branch is meant to allow *minor* changes to go into it, and, if\nthere are enough, to generate a new *stable* distribution. Minor changes\nare \"we put && instead of || in an if statement that only shows up #ifdef\n<feature> is enabled\"...or even where a bug is fixed that is based on us\nmissing an error check that adds a few lines of code.\n\nI have no problems with building a v6.5.2, or .3, or .4, if required...but\na 3.5MB diff does not constitute a 'minor bug fix' and should be merged\ninto v6.6 only...\n\nOn Fri, 30 Jul 1999, Thomas Lockhart wrote:\n\n> > > IMHO a 6.5.2 release with all of the necessary alpha patches\n> > > already in the distribution source tree is a much cleaner, clearer\n> > > solution, for distribution packagers, average users, and\n> > > compile-it-yourself-people.\n> > I think he was going to generate a 6.5.2 by back-patching, not\n> > distributing a new patch to make 6.5.2.\n> \n> Yup.\n> \n> OK, I'm trying to do this to help the Alpha folks, in such a way that\n> it helps the Alpha-linux-RH folks to get RPMs also. Having a 6.5.2\n> which does not run on Intel or Sparc does not help. Having a 6.5.2\n> which has diverged from the 6.5.x tree in unknown ways does not help.\n> Having us decide by consensus the appropriate model for s/w\n> development (main tree with changes progressing to a full release,\n> branch tree to carry maintenance changes) and then at the first\n> opportunity step away from that seems counterproductive in the\n> extreme. We ran into this same discussion during v6.4.x, and we're\n> doing it again.\n> \n> If y'all can't maintain two branches, then let's stop doing it. otoh,\n> we can't do maintenance releases without a stable branch, so we'd\n> better think about it before giving up.\n> \n> I've offered to help, much more than I should bother with. I'll leave\n> it to other Alpha stakeholders to decide what they want. I should\n> point out that I offered to our RedHat contacts to try to marshall an\n> Alpha-ready build, but so far it's like herding cats.\n> \n> And *really*, if we have 3.5MB of diffs, who are we kidding about\n> knowing where they all came from and what they are doing? Backpatching\n> or developing patches on a clean 6.5.1 release is the only thing to do\n> for a 6.5.2. Otherwise, call it 6.6-prealpha and we'll wait 4 months\n> for RPMs.\n> \n> My $0.03 ;)\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 29 Jul 1999 22:27:05 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "On Thu, 29 Jul 1999, Bruce Momjian wrote:\n\n> > \tIMHO a 6.5.2 release with all of the necessary alpha patches\n> > already in the distribution source tree is a much cleaner, clearer\n> > solution, for distribution packagers, average users, and\n> > compile-it-yourself-people.\n> \n> I think he was going to generate a 6.5.2 by back-patching, not\n> distributing a new patch to make 6.5.2.\n\nExcuse ignorance...but...what is back-patching? :(\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 29 Jul 1999 22:30:10 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> On Thu, 29 Jul 1999, Bruce Momjian wrote:\n> \n> > > \tIMHO a 6.5.2 release with all of the necessary alpha patches\n> > > already in the distribution source tree is a much cleaner, clearer\n> > > solution, for distribution packagers, average users, and\n> > > compile-it-yourself-people.\n> > \n> > I think he was going to generate a 6.5.2 by back-patching, not\n> > distributing a new patch to make 6.5.2.\n> \n> Excuse ignorance...but...what is back-patching? :(\n\nDiff'ing stable and current trees, reviewing all the changes, and\napplying the patch to make the stable tree look similar to the current\ntree, without any possible bugs.\n\nAt this point, we are saying goodbye to 6.5.*. Alpha people can\ngenerate an alpha-only patch for 6.5.1 if they wish. They are too late\nfor the 6.5.* tree.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 29 Jul 1999 21:36:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "On Wed, 28 Jul 1999, Bruce Momjian wrote:\n\n> > On Wed, 28 Jul 1999, Thomas Lockhart wrote:\n> > \n> > > > This time it worked great! No stuck spinlocks (and -O2 was used!),\n> > > > and all the regression tests, saved for rules as Uncle G. has already\n> > > > mentioned.\n> > > \n> > > Fantastic.\n> > \n> > \tOne thing I did forget to mention, is that I am getting a decent\n> > handful of unaligned traps from postmaster. To put a number on that, from\n> > running the regression tests three times, once with numeric_big enabled, I\n> > got ~164 unaligned traps. \n> > \tNot a show stopper, but something that probably needs to looked\n> > into at some point in order to maximize performance of pgsql on Alphas.\n> \n> Does it give you the location? I have already applied some alignment\n> cleanups to the current cvs tree.\n\n\tThe only location it gives are memory addresses, like:\n\npostmaster(21349): unaligned trap at 0000000120131600: 000000011fff6a5d 28 1\n\nIf these are useful (which I doubt), I can provide you with a set from the\nrun of the regression tests quite easily.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n",
"msg_date": "Thu, 29 Jul 1999 20:28:42 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "On Thu, 29 Jul 1999, Bruce Momjian wrote:\n\n> > \tAttached is a mini patch of the changes I made to the linux_alpha\n> > template file. Review and use as you wish. Basically just sets -O2 flag\n> > for CFLAGS and also forces the CPU variable to be alpha, so as not to\n> > break the alpha specific makefile rules when the alpha processor is\n> > get detected as an alphaev5, etc...\n> > \tOtherwise, everything looks good!\n> \n> I question the CPU line. I modified configure to set CPU. Does the\n> template over-ride this?\n\n\tApparently yes, the template definitions override anything that\nconfigure figures out. I didn't know which method would be better,\nmodifying config.guess to return 'alpha' for CPU no matter what was\nexactly was detected (as long as it was still an alpha of some sort) or\nforce CPU to be 'alpha' in templates. Your choice which way to do it, just\nmake sure CPU is alpha no matter if it is a UDB (21064), XLT (21164), or\nDS20 (21264) that one is compling on.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n",
"msg_date": "Thu, 29 Jul 1999 20:34:28 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> > Does it give you the location? I have already applied some alignment\n> > cleanups to the current cvs tree.\n> \n> \tThe only location it gives are memory addresses, like:\n> \n> postmaster(21349): unaligned trap at 0000000120131600: 000000011fff6a5d 28 1\n> \n> If these are useful (which I doubt), I can provide you with a set from the\n> run of the regression tests quite easily.\n\nOh.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 29 Jul 1999 22:44:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> > I question the CPU line. I modified configure to set CPU. Does the\n> > template over-ride this?\n> \n> \tApparently yes, the template definitions override anything that\n> configure figures out. I didn't know which method would be better,\n> modifying config.guess to return 'alpha' for CPU no matter what was\n> exactly was detected (as long as it was still an alpha of some sort) or\n> force CPU to be 'alpha' in templates. Your choice which way to do it, just\n> make sure CPU is alpha no matter if it is a UDB (21064), XLT (21164), or\n> DS20 (21264) that one is compling on.\n\nI guess template is OK.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 29 Jul 1999 22:45:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "On Fri, 30 Jul 1999, Thomas Lockhart wrote:\n\n> > > IMHO a 6.5.2 release with all of the necessary alpha patches\n> > > already in the distribution source tree is a much cleaner, clearer\n> > > solution, for distribution packagers, average users, and\n> > > compile-it-yourself-people.\n> > I think he was going to generate a 6.5.2 by back-patching, not\n> > distributing a new patch to make 6.5.2.\n\n\tOk, you lost me on the terminology there then. What exactly is\n'back-patching'?\n\n> If y'all can't maintain two branches, then let's stop doing it. otoh,\n> we can't do maintenance releases without a stable branch, so we'd\n> better think about it before giving up.\n\n\tI do follow your logic on a stable vs. unstable tree, and can see\nthe benefit of having it. \n\n> I've offered to help, much more than I should bother with. I'll leave\n> it to other Alpha stakeholders to decide what they want. I should\n> point out that I offered to our RedHat contacts to try to marshall an\n> Alpha-ready build, but so far it's like herding cats.\n\n\tYea, it has been like that with the Linux/Alpha port for some\ntime, including other packages then pgsql alone. :( As for the other Alpha\nstakeholders, I have yet to hear from any of them at all in this\ndisscussion and for a while in any discussion concerning pgsql and\nLinux/Alpha. Of course, every now and then, some Linux/Alpha user comes\nalong and asks why we haven't moved anywhere with pgsql in the last so\nlong, and gets mad with any answer I try and give them. My conclusion\nabout Linux/Alpha is that lots of people want the power of the alpha\nprocessor, but don't want to help out and get rid of some of the lingering\nsharp edges. They want it to work right out of the box! That leaves things\nto a few of us die hards to get everything working, and most of them\nfocus on more fundamental things, like gcc and glibc, and the applications\nend up getting the short end of the stick. Ok, I will get off my soap box\nhere, back to the trenches....\n\n> And *really*, if we have 3.5MB of diffs, who are we kidding about\n> knowing where they all came from and what they are doing? Backpatching\n> or developing patches on a clean 6.5.1 release is the only thing to do\n> for a 6.5.2. Otherwise, call it 6.6-prealpha and we'll wait 4 months\n> for RPMs.\n\n\tAfter this discussion and a few tests of my own, I think I had\nbetter change my position on this issue.\n\tFirst of all, today's snapshot with Uncle G's patches compiles and\nruns on Linux/Intel and Solaris/Sparc as well as they do without the\npatches on the same snapshot for the most part. Though the patches seem to\nbreak the random regression test on Linux/Intel. Also, today's snapshot\n(clean) will not compile on Solaris/Sparc, as there is an extra #endif in\n./src/backend/port/isinf.c that gcc on Solaris pukes on. :(\n\tSo, this snapshot is in suspect, and it looks like the alpha\npatches are as well, at least as far as other platforms go. \n\tMy vote would be go back and do a 'alpha' patch off of 6.5.1, and\ndistribute that to the distribution people to get pgsql running on\nLinux/Alpha in the short time. Then, four months or so down the road when\nthe next release target comes up, we plan to have a version of pgsql that\nwill run on both Alpha and other platforms. That means Uncle G's patches\nneed to be checked for what they do to the other platforms. 
\n\tThis would get us a Alpha ready version of pgsql now (there has\nbeen enough delay as it is, we really don't want to wait any more), not\nput us out on the limb with a possibly unstable release of pgsql, and\ngives us time to get the alpha patches properly tested and integrated into\nthe main source tree.\n\tAs I see it, these are the following things that need to be added\nto 6.5.1 to make it alpha ready:\n\n\t* Uncle G's Alpha patches { which I have }.\n\t* Makefile conditionals for Linux/Alpha { which I can find with\n only moderate trouble }.\n\t* Bruce's alignment patches { which I do not have }.\n\nBruce, if you could get me your alignment patches, then I will try and\napply the above to 6.5.1, and make a patch that bring 6.5.1 up to alpha\nready state. Then we give that patch to debian and RH developers, tell\nthem to only apply it to thier alpha builds, and that we will have a\nuniversal source tree for all platforms (including alpha) in a few months.\n\tThis is simular to what was done (might even still be done) for\nthe Linux kernel itself. To compile a 2.0.x kernel for Linux/Alpha, one\ngot the clean source, a set of alpha patches for the same rev level, and\napplied them to the clean source to generate an alpha ready kernel source\ntree. \n\tIs this a viable idea, or just another horrible kludge?\n\n> My $0.03 ;)\n\n\tRaising the ante here? :) Well, then this was my four cents!\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n\n",
"msg_date": "Thu, 29 Jul 1999 21:03:41 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
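The extra #endif Ryan hit in ./src/backend/port/isinf.c is the kind of breakage only platforms that compile the fallback file notice. Below is a hedged sketch of what balanced nesting in such a port file looks like; the guards, function name, and logic are invented for illustration and are not the actual file contents.

```c
#include <math.h>
#include <stdio.h>

/* Invented guards and names -- not the real isinf.c.  The point is
 * only that each #if needs exactly one matching #endif; one stray
 * extra #endif fails to preprocess on every platform seeing it. */
#if !defined(HAVE_ISINF)
static int
my_isinf(double x)
{
#if defined(HUGE_VAL)
    return x == HUGE_VAL || x == -HUGE_VAL;
#else
    return 0;
#endif  /* closes the inner #if */
}
#endif  /* closes the outer #if */

int
main(void)
{
    printf("%d\n", my_isinf(HUGE_VAL));   /* prints 1 on IEEE systems */
    return 0;
}
```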
{
"msg_contents": "On Thu, 29 Jul 1999, The Hermit Hacker wrote:\n\n> Okay, let me get this straight...v6.5 was in beta for, what, 2 months?\n> And it isn't until *after* v6.5.1 is released that the Alpha guys realized\n> that \"oops, it doesn't work\"? And they have a patch that amounts to ~1/2\n> the size of the current distribution to get this to work?\n> \n> *rofl*\n\n\tYea, I think it has turned into a bit of a crazy mess. :) I had\nmeant to do something about Alpha for the 6.5 release, but it came too\nsoon after school got out to do anything (i.e. when I actually had free\ntime). Then Uncle G. came along, out of the blue, and fixed everything in\na few days, but then got impatient that we had not applied his patches\nafter a few more days and then moved on to other conquests. That left us\nwith patches that worked, which we were grateful for (at least I was), but\nprovided unknown affects on other platforms and only against an unstable,\nin flux snapshot.\n\tThen I tried to see how many differences there were between 6.5.1\nand the current snapshot, only to find that the differences were \"a lot\". \n\n> The stable branch is meant to allow *minor* changes to go into it, and, if\n> there are enough, to generate a new *stable* distribution. Minor changes\n> are \"we put && instead of || in an if statement that only shows up #ifdef\n> <feature> is enabled\"...or even where a bug is fixed that is based on us\n> missing an error check that adds a few lines of code.\n\n\tAgreed. Uncle G's alpha patches alone break that as they are 62k\nin size and touch quite a few files.\n\n> I have no problems with building a v6.5.2, or .3, or .4, if required...but\n> a 3.5MB diff does not constitute a 'minor bug fix' and should be merged\n> into v6.6 only...\n\n\tYea, 3.5MB does not consitute a minor bug fix (maybe for M$ it\ndoes, but lets not go there). And that includes all changes between 6.5.1\nand the current snapshot, not just the alpha ones.\n\n\tSo, after reading the emails that arrived while writing my last\none... If I could get my hands on Bruce's alignment patches, then by\nMonday, I should be able to have a set of alpha patches against 6.5.1 that\nprovide a working alpha version for the time being (until 6.6 comes around\nand we can clean up the alpha patches and put them in the main tree).\n\n\tPS. As far as I can tell, us Alpha guys are pretty few in number,\nat least those who are actually subscribed to the pgsql-ports and\npgsql-hackers email lists and try and do something for pgsql on\nLinux/Alpha. Unfortuntely this \"Alpha guy\" often finds himself very busy\nand his C skills not up to the task of hunting down obscure platform bugs\nin a huge mass of code. Something along the lines of \"The spirit is\nwilling, but the flesh is weak.\" :(\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n",
"msg_date": "Thu, 29 Jul 1999 21:30:14 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "\nI do not want to see any patches commit'd to v6.5.x branch that is\n*anything* but a minor bug fix...personally, we had a 2 month beta period\non this...the Alpha related stuff should have been submit'd then, at the\nvery latest...\n\nPlease feel free to work at getting these into v6.6, *before* v6.6 is\nreleased, so that this problem doesn't rear its head again...\n\nOn Thu, 29 Jul 1999, Ryan Kirkpatrick wrote:\n\n> On Fri, 30 Jul 1999, Thomas Lockhart wrote:\n> \n> > > > IMHO a 6.5.2 release with all of the necessary alpha patches\n> > > > already in the distribution source tree is a much cleaner, clearer\n> > > > solution, for distribution packagers, average users, and\n> > > > compile-it-yourself-people.\n> > > I think he was going to generate a 6.5.2 by back-patching, not\n> > > distributing a new patch to make 6.5.2.\n> \n> \tOk, you lost me on the terminology there then. What exactly is\n> 'back-patching'?\n> \n> > If y'all can't maintain two branches, then let's stop doing it. otoh,\n> > we can't do maintenance releases without a stable branch, so we'd\n> > better think about it before giving up.\n> \n> \tI do follow your logic on a stable vs. unstable tree, and can see\n> the benefit of having it. \n> \n> > I've offered to help, much more than I should bother with. I'll leave\n> > it to other Alpha stakeholders to decide what they want. I should\n> > point out that I offered to our RedHat contacts to try to marshall an\n> > Alpha-ready build, but so far it's like herding cats.\n> \n> \tYea, it has been like that with the Linux/Alpha port for some\n> time, including other packages then pgsql alone. :( As for the other Alpha\n> stakeholders, I have yet to hear from any of them at all in this\n> disscussion and for a while in any discussion concerning pgsql and\n> Linux/Alpha. Of course, every now and then, some Linux/Alpha user comes\n> along and asks why we haven't moved anywhere with pgsql in the last so\n> long, and gets mad with any answer I try and give them. My conclusion\n> about Linux/Alpha is that lots of people want the power of the alpha\n> processor, but don't want to help out and get rid of some of the lingering\n> sharp edges. They want it to work right out of the box! That leaves things\n> to a few of us die hards to get everything working, and most of them\n> focus on more fundamental things, like gcc and glibc, and the applications\n> end up getting the short end of the stick. Ok, I will get off my soap box\n> here, back to the trenches....\n> \n> > And *really*, if we have 3.5MB of diffs, who are we kidding about\n> > knowing where they all came from and what they are doing? Backpatching\n> > or developing patches on a clean 6.5.1 release is the only thing to do\n> > for a 6.5.2. Otherwise, call it 6.6-prealpha and we'll wait 4 months\n> > for RPMs.\n> \n> \tAfter this discussion and a few tests of my own, I think I had\n> better change my position on this issue.\n> \tFirst of all, today's snapshot with Uncle G's patches compiles and\n> runs on Linux/Intel and Solaris/Sparc as well as they do without the\n> patches on the same snapshot for the most part. Though the patches seem to\n> break the random regression test on Linux/Intel. Also, today's snapshot\n> (clean) will not compile on Solaris/Sparc, as there is an extra #endif in\n> ./src/backend/port/isinf.c that gcc on Solaris pukes on. :(\n> \tSo, this snapshot is in suspect, and it looks like the alpha\n> patches are as well, at least as far as other platforms go. 
\n> \tMy vote would be go back and do a 'alpha' patch off of 6.5.1, and\n> distribute that to the distribution people to get pgsql running on\n> Linux/Alpha in the short time. Then, four months or so down the road when\n> the next release target comes up, we plan to have a version of pgsql that\n> will run on both Alpha and other platforms. That means Uncle G's patches\n> need to be checked for what they do to the other platforms. \n> \tThis would get us a Alpha ready version of pgsql now (there has\n> been enough delay as it is, we really don't want to wait any more), not\n> put us out on the limb with a possibly unstable release of pgsql, and\n> gives us time to get the alpha patches properly tested and integrated into\n> the main source tree.\n> \tAs I see it, these are the following things that need to be added\n> to 6.5.1 to make it alpha ready:\n> \n> \t* Uncle G's Alpha patches { which I have }.\n> \t* Makefile conditionals for Linux/Alpha { which I can find with\n> only moderate trouble }.\n> \t* Bruce's alignment patches { which I do not have }.\n> \n> Bruce, if you could get me your alignment patches, then I will try and\n> apply the above to 6.5.1, and make a patch that bring 6.5.1 up to alpha\n> ready state. Then we give that patch to debian and RH developers, tell\n> them to only apply it to thier alpha builds, and that we will have a\n> universal source tree for all platforms (including alpha) in a few months.\n> \tThis is simular to what was done (might even still be done) for\n> the Linux kernel itself. To compile a 2.0.x kernel for Linux/Alpha, one\n> got the clean source, a set of alpha patches for the same rev level, and\n> applied them to the clean source to generate an alpha ready kernel source\n> tree. \n> \tIs this a viable idea, or just another horrible kludge?\n> \n> > My $0.03 ;)\n> \n> \tRaising the ante here? :) Well, then this was my four cents!\n> \n> ----------------------------------------------------------------------------\n> | \"For to me to live is Christ, and to die is gain.\" |\n> | --- Philippians 1:21 (KJV) |\n> ----------------------------------------------------------------------------\n> | Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n> ----------------------------------------------------------------------------\n> | http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n> ----------------------------------------------------------------------------\n> \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 30 Jul 1999 00:36:10 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> \tYea, it has been like that with the Linux/Alpha port for some\n> time, including other packages then pgsql alone. :( As for the other Alpha\n> stakeholders, I have yet to hear from any of them at all in this\n> disscussion and for a while in any discussion concerning pgsql and\n> Linux/Alpha. Of course, every now and then, some Linux/Alpha user comes\n> along and asks why we haven't moved anywhere with pgsql in the last so\n> long, and gets mad with any answer I try and give them. My conclusion\n> about Linux/Alpha is that lots of people want the power of the alpha\n> processor, but don't want to help out and get rid of some of the lingering\n> sharp edges. They want it to work right out of the box! That leaves things\n> to a few of us die hards to get everything working, and most of them\n> focus on more fundamental things, like gcc and glibc, and the applications\n> end up getting the short end of the stick. Ok, I will get off my soap box\n> here, back to the trenches....\n\nYes, this is our impression too. We get lots of head-shaking, but not\nlots of roll-up-their sleves help.\n\n> \tFirst of all, today's snapshot with Uncle G's patches compiles and\n> runs on Linux/Intel and Solaris/Sparc as well as they do without the\n> patches on the same snapshot for the most part. Though the patches seem to\n> break the random regression test on Linux/Intel. Also, today's snapshot\n> (clean) will not compile on Solaris/Sparc, as there is an extra #endif in\n> ./src/backend/port/isinf.c that gcc on Solaris pukes on. :(\n\nFixed now. That was me. That file was a mess before.\n\n> \tSo, this snapshot is in suspect, and it looks like the alpha\n> patches are as well, at least as far as other platforms go. \n> \tMy vote would be go back and do a 'alpha' patch off of 6.5.1, and\n> distribute that to the distribution people to get pgsql running on\n> Linux/Alpha in the short time. Then, four months or so down the road when\n> the next release target comes up, we plan to have a version of pgsql that\n> will run on both Alpha and other platforms. That means Uncle G's patches\n> need to be checked for what they do to the other platforms. \n\nAgreed.\n\n> \tThis would get us a Alpha ready version of pgsql now (there has\n> been enough delay as it is, we really don't want to wait any more), not\n> put us out on the limb with a possibly unstable release of pgsql, and\n> gives us time to get the alpha patches properly tested and integrated into\n> the main source tree.\n> \tAs I see it, these are the following things that need to be added\n> to 6.5.1 to make it alpha ready:\n> \n> \t* Uncle G's Alpha patches { which I have }.\n> \t* Makefile conditionals for Linux/Alpha { which I can find with\n> only moderate trouble }.\n> \t* Bruce's alignment patches { which I do not have }.\n\nI just changed many DOUBLEALIGN's to MAXALIGN. It was a cosmetic fix,\nas far as I could tell. Are they different on Alpha?\n\n> \n> Bruce, if you could get me your alignment patches, then I will try and\n> apply the above to 6.5.1, and make a patch that bring 6.5.1 up to alpha\n> ready state. Then we give that patch to debian and RH developers, tell\n> them to only apply it to thier alpha builds, and that we will have a\n> universal source tree for all platforms (including alpha) in a few months.\n> \tThis is simular to what was done (might even still be done) for\n> the Linux kernel itself. 
To compile a 2.0.x kernel for Linux/Alpha, one\n> got the clean source, a set of alpha patches for the same rev level, and\n> applied them to the clean source to generate an alpha ready kernel source\n> tree. \n> \tIs this a viable idea, or just another horrible kludge?\n\nSounds good.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 29 Jul 1999 23:52:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "On Thu, 29 Jul 1999, Bruce Momjian wrote:\n\n> > \t* Uncle G's Alpha patches { which I have }.\n> > \t* Makefile conditionals for Linux/Alpha { which I can find with\n> > only moderate trouble }.\n> > \t* Bruce's alignment patches { which I do not have }.\n> \n> I just changed many DOUBLEALIGN's to MAXALIGN. It was a cosmetic fix,\n> as far as I could tell. Are they different on Alpha?\n\nSo, if I were to go through and make these changes in the -stable tree as\nwell, it would be purely cosmetic?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 30 Jul 1999 01:36:57 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> On Thu, 29 Jul 1999, Bruce Momjian wrote:\n> \n> > > \t* Uncle G's Alpha patches { which I have }.\n> > > \t* Makefile conditionals for Linux/Alpha { which I can find with\n> > > only moderate trouble }.\n> > > \t* Bruce's alignment patches { which I do not have }.\n> > \n> > I just changed many DOUBLEALIGN's to MAXALIGN. It was a cosmetic fix,\n> > as far as I could tell. Are they different on Alpha?\n> \n> So, if I were to go through and make these changes in the -stable tree as\n> well, it would be purely cosmetic?\n> \n\nI think so, but am not sure what the alpha has for those values.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 30 Jul 1999 01:25:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> > As I see it, these are the following things that need to be added\n> > to 6.5.1 to make it alpha ready:\n> > * Uncle G's Alpha patches { which I have }.\n> > * Makefile conditionals for Linux/Alpha \n> > * Bruce's alignment patches { which I do not have }.\n> I just changed many DOUBLEALIGN's to MAXALIGN. It was a cosmetic fix,\n> as far as I could tell. Are they different on Alpha?\n> > Bruce, if you could get me your alignment patches, then I will try and\n> > apply the above to 6.5.1, and make a patch that bring 6.5.1 up to alpha\n> > ready state.\n\nI *love* this plan. And I'll go one better: v6.5.x is not \"dead\", in\nthe sense that Tom Lane has been faithfully applying relevant patches\nfor his fixes in case a v6.5.2 is released. I'll guess that the Intel\nproblems noted with the main tree are not present in the v6.5.x tree,\nso any new problems noted would be due to the upcoming Alpha patches.\nLet's develop patches on 6.5.x (I'll post snapshots when we want them)\nand Lamar and I can test the Intel behavior.\n\nUnless someone else wants to do it, I'll handle applying the Alpha\npatches to the v6.5.x branch of CVS.\n\nWe can publish an Alpha candidate tree so the debian folks can look at\nit, and we can build a RPM for someone (Uncle George?) to test on a\nRedHat box.\n\nv6.5.2 might be possible yet ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 30 Jul 1999 13:42:40 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "On Fri, 30 Jul 1999, The Hermit Hacker wrote:\n\n> I do not want to see any patches commit'd to v6.5.x branch that is\n> *anything* but a minor bug fix...personally, we had a 2 month beta period\n> on this...the Alpha related stuff should have been submit'd then, at the\n> very latest...\n\n\tYea, they should have, but this is anything but a perfect world.\n:( \n\n> Please feel free to work at getting these into v6.6, *before* v6.6 is\n> released, so that this problem doesn't rear its head again...\n\n\tWill do! I will make an alpha patch for 6.5.1 to keep the alpha\npeople happy in the short term. Then I will work on evaluating the full\neffect of the alpha patches and then integrate them into the main source\ntree bit by bit until it all works on alpha, but does not adversly affect\nany other platform. V6.6 is about four months out, right?\n\tTTYL.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n\n",
"msg_date": "Fri, 30 Jul 1999 08:32:25 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "On Fri, 30 Jul 1999, Bruce Momjian wrote:\n\n> > On Thu, 29 Jul 1999, Bruce Momjian wrote:\n> > \n> > > > \t* Uncle G's Alpha patches { which I have }.\n> > > > \t* Makefile conditionals for Linux/Alpha { which I can find with\n> > > > only moderate trouble }.\n> > > > \t* Bruce's alignment patches { which I do not have }.\n> > > \n> > > I just changed many DOUBLEALIGN's to MAXALIGN. It was a cosmetic fix,\n> > > as far as I could tell. Are they different on Alpha?\n> > \n> > So, if I were to go through and make these changes in the -stable tree as\n> > well, it would be purely cosmetic?\n> > \n> \n> I think so, but am not sure what the alpha has for those values.\n\n\tIt is only cosmetic, for on the alpha, after configure is run,\nALIGNOF_{LONG,DOUBLE,ALIGNOF} all equal '8'. Further testing showed that\nthe macros LONGALIGN, DOBULEALIGN, and MAXALIGN generate the same result\nwhen provided with the same input value. Therefore, I don't see anyway\nthat changing DOUBLEALIGNs to MAXALIGNs would reduce unaligned traps on\nthe alpha. :( \n\tPS. I am testing an alpha patched 6.5.1 right now, and so far it\nlooks promising. :) Final results soon!\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n",
"msg_date": "Fri, 30 Jul 1999 09:17:33 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PORTS] RedHat6.0 & Alpha"
},
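A minimal sketch of the alignment-macro comparison described in the
message above, assuming TYPEALIGN-style definitions like those in
PostgreSQL's c.h; the constant values and exact macro forms here are
illustrative, not copied from the 6.5 tree:

    #include <stdio.h>

    /* Stand-ins for the configure-derived constants; on Linux/Alpha
     * all three come out as 8, per the test described above. */
    #define ALIGNOF_LONG    8
    #define ALIGNOF_DOUBLE  8
    #define MAXIMUM_ALIGNOF 8

    /* Round LEN up to the next multiple of ALIGNVAL (a power of two). */
    #define TYPEALIGN(ALIGNVAL, LEN) \
        (((long) (LEN) + ((ALIGNVAL) - 1)) & ~((long) ((ALIGNVAL) - 1)))

    #define LONGALIGN(LEN)   TYPEALIGN(ALIGNOF_LONG, (LEN))
    #define DOUBLEALIGN(LEN) TYPEALIGN(ALIGNOF_DOUBLE, (LEN))
    #define MAXALIGN(LEN)    TYPEALIGN(MAXIMUM_ALIGNOF, (LEN))

    int
    main(void)
    {
        long len;

        /* With all three constants equal, the three macros coincide. */
        for (len = 0; len <= 16; len++)
            printf("%2ld -> long %2ld  double %2ld  max %2ld\n",
                   len, LONGALIGN(len), DOUBLEALIGN(len), MAXALIGN(len));
        return 0;
    }

This is consistent with the observation that swapping DOUBLEALIGN for
MAXALIGN cannot, by itself, change alignment behavior on a platform
where the underlying constants coincide.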
{
"msg_contents": "On Thu, 29 Jul 1999, Bruce Momjian wrote:\n\n> Yes, this is our impression too. We get lots of head-shaking, but not\n> lots of roll-up-their sleves help.\n\n\tThat about sums things up very nicely... Even I am guilty of that\nas I often find myself too busy to do anything more than tell someone that\npgsql is broken on alpha (not for much longer though), and yet never set\naside the time to do anything about it. :(\n\n> > (clean) will not compile on Solaris/Sparc, as there is an extra #endif in\n> > ./src/backend/port/isinf.c that gcc on Solaris pukes on. :(\n> \n> Fixed now. That was me. That file was a mess before.\n\n\tInteresitng that neither Linux/Alpha or Linux/Intel puked on it...\n\n> > Bruce, if you could get me your alignment patches, then I will try and\n> > apply the above to 6.5.1, and make a patch that bring 6.5.1 up to alpha\n> > ready state. Then we give that patch to debian and RH developers, tell\n> > them to only apply it to thier alpha builds, and that we will have a\n> > universal source tree for all platforms (including alpha) in a few months.\n> > \tThis is simular to what was done (might even still be done) for\n> > the Linux kernel itself. To compile a 2.0.x kernel for Linux/Alpha, one\n> > got the clean source, a set of alpha patches for the same rev level, and\n> > applied them to the clean source to generate an alpha ready kernel source\n> > tree. \n> > \tIs this a viable idea, or just another horrible kludge?\n> \n> Sounds good.\n\n\tOk, I have already started hacking up 6.5.1. It will take a little\nwhile to run the regression tests and then I want to run a few pgsql\napplications of mine through it as well to pound on it further. If I can't\nbreak it, then I will release a patch soon. :)\n\tAre there any other \"alpha hacks\" that I missed? TTYL.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n",
"msg_date": "Fri, 30 Jul 1999 09:21:55 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "On Fri, 30 Jul 1999, Thomas Lockhart wrote:\n\n> > > As I see it, these are the following things that need to be added\n> > > to 6.5.1 to make it alpha ready:\n> > > * Uncle G's Alpha patches { which I have }.\n> > > * Makefile conditionals for Linux/Alpha \n> > > * Bruce's alignment patches { which I do not have }.\n> > > Bruce, if you could get me your alignment patches, then I will try and\n> > > apply the above to 6.5.1, and make a patch that bring 6.5.1 up to alpha\n> > > ready state.\n> \n> I *love* this plan. \n\n\tGreat!\n\n> And I'll go one better: v6.5.x is not \"dead\", in the sense that Tom\n> Lane has been faithfully applying relevant patches for his fixes in\n> case a v6.5.2 is released. I'll guess that the Intel problems noted\n> with the main tree are not present in the v6.5.x tree, so any new\n> problems noted would be due to the upcoming Alpha patches. \n\n\tSo, if I understand this correctly, the snapshot available on the\nFTP site is from the unstable tree, and there is a \"stable 6.5.x\" tree\nthat can only be access by cvs{up}? And that this stable tree should not\nhave quite as much delta from 6.5.1 as the snapshots do? Or did I miss\nsomething?\n\n> Let's develop patches on 6.5.x (I'll post snapshots when we want them)\n> and Lamar and I can test the Intel behavior.\n\n\tOk, patches are in progress. Regression tests passed, now pounding\non it with some of my own applications. \n\n> We can publish an Alpha candidate tree so the debian folks can look at\n> it, and we can build a RPM for someone (Uncle George?) to test on a\n> RedHat box.\n\n\tSounds good.\n\n> v6.5.2 might be possible yet ;)\n\n\tHmm... I don't think other people want to roll in the alpha\npatches into the stable tree (with good reason). I think we are best off\nwith just an alpha only version of pgsql via patches on 6.5.1, and leave\nintegration of the alpha patches into the full pgsql source tree for 6.6.\nMy two cents.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n",
"msg_date": "Fri, 30 Jul 1999 09:32:44 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "Ryan Kirkpatrick <[email protected]> writes:\n>>>> (clean) will not compile on Solaris/Sparc, as there is an extra #endif in\n>>>> ./src/backend/port/isinf.c that gcc on Solaris pukes on. :(\n>> \n>> Fixed now. That was me. That file was a mess before.\n\n> \tInteresitng that neither Linux/Alpha or Linux/Intel puked on it...\n\nisinf.c doesn't get compiled at all on platforms that have native\nisinf(), so the error wouldn't show up except on a platform that has\nboth a compiler that objects to extra #endif and no isinf().\nI wouldn't be surprised if there are similar glitches in other files\nunder port/ :-(\n\n> \tOk, I have already started hacking up 6.5.1. It will take a little\n> while to run the regression tests and then I want to run a few pgsql\n> applications of mine through it as well to pound on it further. If I can't\n> break it, then I will release a patch soon. :)\n> \tAre there any other \"alpha hacks\" that I missed? TTYL.\n\nYou should be working from a CVS pull of the REL6_5_PATCHES branch,\nnot from the 6.5.1 distribution tarball --- I've committed several\nfixes into that branch since 6.5.1, and I think other people have too.\n(If that *is* what you're doing, then nevermind...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 30 Jul 1999 11:49:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PORTS] RedHat6.0 & Alpha "
},
{
"msg_contents": "> > Fixed now. That was me. That file was a mess before.\n> \n> \tInteresitng that neither Linux/Alpha or Linux/Intel puked on it...\n\nThey probably don't use it.\n\n> \tOk, I have already started hacking up 6.5.1. It will take a little\n> while to run the regression tests and then I want to run a few pgsql\n> applications of mine through it as well to pound on it further. If I can't\n> break it, then I will release a patch soon. :)\n> \tAre there any other \"alpha hacks\" that I missed? TTYL.\n\nNo. The test for CPU in the Makefiles was so we could do -O2 in the\ngeneral makefile, and just add some flags in the makefiles that caused\nproblems.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 30 Jul 1999 11:51:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "> > And I'll go one better: v6.5.x is not \"dead\", in the sense that Tom\n> > Lane has been faithfully applying relevant patches for his fixes in\n> > case a v6.5.2 is released. I'll guess that the Intel problems noted\n> > with the main tree are not present in the v6.5.x tree, so any new\n> > problems noted would be due to the upcoming Alpha patches. \n> \n> \tSo, if I understand this correctly, the snapshot available on the\n> FTP site is from the unstable tree, and there is a \"stable 6.5.x\" tree\n> that can only be access by cvs{up}? And that this stable tree should not\n> have quite as much delta from 6.5.1 as the snapshots do? Or did I miss\n> something?\n\nYes. This is correct.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 30 Jul 1999 12:04:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "On Fri, 30 Jul 1999, Ryan Kirkpatrick wrote:\n\n> \tSo, if I understand this correctly, the snapshot available on the\n> FTP site is from the unstable tree, and there is a \"stable 6.5.x\" tree\n> that can only be access by cvs{up}? And that this stable tree should not\n> have quite as much delta from 6.5.1 as the snapshots do? Or did I miss\n> something?\n\nthis is correct...\n\n> \tHmm... I don't think other people want to roll in the alpha\n> patches into the stable tree (with good reason). I think we are best off\n> with just an alpha only version of pgsql via patches on 6.5.1, and leave\n> integration of the alpha patches into the full pgsql source tree for 6.6.\n> My two cents.\n\nWe are going to be rolling a v6.5.2, and .3, and .4 ... basically, until\nv6.6 is released, v6.5.x is our stable release, and, from a commercial\nperspective, has to be maintained.\n\nI don't expect anyone working on -current to maintain it, I'm going to\nwork on it, but I do hope that if someone fixes a bug in -current that\nexists in -stable, and that can be *easily* fixed, that we get the fix in\nthere also...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 30 Jul 1999 13:08:06 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Stable vs Current (Was: Re: [HACKERS] Re: [PORTS] RedHat6.0 & Alpha)"
},
{
"msg_contents": "On Fri, 30 Jul 1999, The Hermit Hacker wrote:\n\n> On Fri, 30 Jul 1999, Ryan Kirkpatrick wrote:\n> > \tHmm... I don't think other people want to roll in the alpha\n> > patches into the stable tree (with good reason). I think we are best off\n> > with just an alpha only version of pgsql via patches on 6.5.1, and leave\n> > integration of the alpha patches into the full pgsql source tree for 6.6.\n> > My two cents.\n> \n> We are going to be rolling a v6.5.2, and .3, and .4 ... basically, until\n> v6.6 is released, v6.5.x is our stable release, and, from a commercial\n> perspective, has to be maintained.\n\n\tI understand that. It is just that from what time I have spent\nlooking at the alpha patches, they do a lot more than just \"maintenance\".\nSo while there may indeed by 6.5.2, .3, etc.. releases, none of them\nshould include the alpha patches in the source tree (instead have a new\nset of \"after release\" alpha specific patches, or stick them in contrib).\nI don't want to put the alpha patches in until after I have a chance to\nreview them (for compatiblity to other platforms), which will probably\ntake a few weeks to a few months.\n \n> I don't expect anyone working on -current to maintain it, I'm going to\n> work on it, but I do hope that if someone fixes a bug in -current that\n> exists in -stable, and that can be *easily* fixed, that we get the fix in\n> there also...\n\n\tSounds good... Only the alpha fixes don't fall under the heading\nof \"*easily* fixed in -stable\", so they ought to stay out of there for\nnow.\n\tOtherwise your seperation of stable from current trees is a good\nidea, and I now have a better understanding of development and release of\npgsql, especially in relation to the alpha patches. Thanks.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n",
"msg_date": "Fri, 30 Jul 1999 10:31:52 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stable vs Current (Was: Re: [HACKERS] Re: [PORTS] RedHat6.0 &\n\tAlpha)"
},
{
"msg_contents": "On Fri, 30 Jul 1999, Tom Lane wrote:\n\n> Ryan Kirkpatrick <[email protected]> writes:\n>\n> > \tOk, I have already started hacking up 6.5.1. It will take a little\n> > while to run the regression tests and then I want to run a few pgsql\n> > applications of mine through it as well to pound on it further. If I can't\n> > break it, then I will release a patch soon. :)\n> > \tAre there any other \"alpha hacks\" that I missed? TTYL.\n> \n> You should be working from a CVS pull of the REL6_5_PATCHES branch,\n> not from the 6.5.1 distribution tarball --- I've committed several\n> fixes into that branch since 6.5.1, and I think other people have too.\n> (If that *is* what you're doing, then nevermind...)\n\n\tActually, I am working with the 6.5.1 tarball, for the simple\nreason that I want a set of patches I can post on the debian-alpha mailing\nlist, along with the instructions to grab the 6.5.1 tarball from\nftp.postgresql.org, apply patches, configure, compile, install, and they\nare set to go (No need to do a CVS pull, etc). Once this set of patches is\ndone and out, then I will do a cvs pull of the REL6_5_PATCHES branch and\nmake modifcations to my patch as needed so when 6.5.2 is released at some\nlater date, there is a minimal amount of work on my part to release new\nalpha patches for that release.\n\tThat at least they easiest way I can see to do things for the near\nterm. For each 6.5.x release made, I make a set of patches that can be\napplied to the release tar-ball, to make it alpha friendly. Then, only\nwhen 6.6 comes along, will the alpha patches be intergrated into the main\ntree, and the extra, \"after-release-alpha-patches\" will no longer be\nneeded. A little bit of extra work, but it has only taken me less than 2.5\nhours this morning to backport the alpha patches from the most recent\nsnapshot to 6.5.1 tarball, so not a big deal.\n\tAnyway, patches will be appearing shortly.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n",
"msg_date": "Fri, 30 Jul 1999 10:38:58 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PORTS] RedHat6.0 & Alpha "
},
{
"msg_contents": "On Fri, 30 Jul 1999, Ryan Kirkpatrick wrote:\n\n> \tSounds good... Only the alpha fixes don't fall under the heading\n> of \"*easily* fixed in -stable\", so they ought to stay out of there for\n> now.\n\nIf there are even pieces of the alpha patches that can be applied to the\ncentral repository, please submit them for inclusion...#ifdef __alpha__\nworks quite well :) Reducing the size of the patch for each release is\nalways a good thing...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 30 Jul 1999 13:56:23 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stable vs Current (Was: Re: [HACKERS] Re: [PORTS] RedHat6.0 &\n\tAlpha)"
},
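A hedged sketch of the #ifdef approach suggested above; pg_offset_t is
a made-up name for illustration, not a real PostgreSQL type:

    /* Hypothetical example only: shows the shape of a platform-conditional
     * change that can live in the main tree without touching other ports. */
    #ifdef __alpha__
    typedef long pg_offset_t;   /* long and pointers are 64 bits here */
    #else
    typedef int pg_offset_t;    /* 32 bits on i386 and friends */
    #endif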
{
"msg_contents": "On Fri, 30 Jul 1999, The Hermit Hacker wrote:\n\n> On Fri, 30 Jul 1999, Ryan Kirkpatrick wrote:\n> \n> > \tSounds good... Only the alpha fixes don't fall under the heading\n> > of \"*easily* fixed in -stable\", so they ought to stay out of there for\n> > now.\n> \n> If there are even pieces of the alpha patches that can be applied to the\n> central repository, please submit them for inclusion...#ifdef __alpha__\n> works quite well :) Reducing the size of the patch for each release is\n> always a good thing...\n\n\tThat is what I plan to do in the coming months as I review the\nalpha patches chagne by change. And yes, #ifdefs are always good to use!\n:)\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n",
"msg_date": "Fri, 30 Jul 1999 11:25:29 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stable vs Current (Was: Re: [HACKERS] Re: [PORTS] RedHat6.0 &\n\tAlpha)"
},
{
"msg_contents": "Ryan Kirkpatrick wrote:\n...\n >\tActually, I am working with the 6.5.1 tarball, for the simple\n >reason that I want a set of patches I can post on the debian-alpha mailing\n >list, along with the instructions to grab the 6.5.1 tarball from\n >ftp.postgresql.org, apply patches, configure, compile, install, and they\n >are set to go (No need to do a CVS pull, etc).\n\nHi Ryan,\nI'm at a disadvantage here, because I don't have an Alpha and rely on\nothers on debian-alpha to get postgresql packages compiled for Alpha.\nThanks for your efforts on this.\n\nI just want to comment on what you are saying about generating a Debian\nsource package. There will be a problem, because you are proposing to\nprovide source that will be different from the main 6.5.1 source; however,\nthe Debian archive assumes that source is identical across all architectures.\nThis means that the Alpha source for PostgreSQL must not be uploaded to the\nDebian archive because it will replace the source for all other \narchitectures.\n\nIf this were to be a permanent problem, it could be addressed by renaming\nthe packages; however, this would cause a lot of trouble to many users,\nso I don't want to do that when 6.6 will remove the need for it.\n\nWhat I propose to do is to disable the Alpha build in the next version of\nthe Debian package (6.5.1-4) and make it put out information that the\nAlpha source must be downloaded from <somewhere>. I would prefer that to be\nin my account at www.debian.org, so that I can incorporate any changes\nthat go into the mainstream package.\n\n\n\nAs to producing the Alpha packages, the procedure should go something\nlike this:\n\n1. Patch postgresql-6.5.1.orig (i.e. postgresql as provided at ftp.postgresql.org).\n\n2. Examine the patches in the latest debian postgresql-6.5.1-x.diff.gz\n(where x is the latest Debian release) and merge everything that does\nnot conflict with the new Alpha patches.\n\n3. Update the version number in debian/changelog to 6.5.1-x.0.1alph and\nbuild the binary packages.\n\n4. Upload the binary packages only.\n\n5. Put the source package in my account (and make sure I have permission\nto write it!).\n\n\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"And Samuel said, Hath the LORD as great delight in \n burnt offerings and sacrifices, as in obeying the \n voice of the LORD? Behold, to obey is better than \n sacrifice, and to hearken than the fat of rams.\" \n I Samuel 15:22 \n\n\n",
"msg_date": "Fri, 30 Jul 1999 23:53:59 +0100",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PORTS] RedHat6.0 & Alpha "
},
{
"msg_contents": "\nOn Fri, 30 Jul 1999, Oliver Elphick wrote:\n\n> If this were to be a permanent problem, it could be addressed by renaming\n> the packages; however, this would cause a lot of trouble to many users,\n> so I don't want to do that when 6.6 will remove the need for it.\n> \n> What I propose to do is to disable the Alpha build in the next version of\n> the Debian package (6.5.1-4) and make it put out information that the\n> Alpha source must be downloaded from <somewhere>. I would prefer that to be\n> in my account at www.debian.org, so that I can incorporate any changes\n> that go into the mainstream package.\n\nWhat kind of patches are we dealing with? One of us could probably easily\nreview them here and find a way to have the source patched in the event of\nan Alpha build environment (I already am working on a similar solution for\nbinutils). Being pretty familiar with the postgresql source and the Alpha\nproblems with it, feel free to mail me any patches that you want me to\ntest (right now, I run postgresql, but have no data being served by it, so\nnothing's in danger).\n\nC\n\n",
"msg_date": "Fri, 30 Jul 1999 20:52:04 -0400 (EDT)",
"msg_from": "Christopher C Chimelis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PORTS] RedHat6.0 & Alpha "
},
{
"msg_contents": "On Fri, 30 Jul 1999, Oliver Elphick wrote:\n\n> Ryan Kirkpatrick wrote:\n> ...\n> >\tActually, I am working with the 6.5.1 tarball, for the simple\n> >reason that I want a set of patches I can post on the debian-alpha mailing\n> >list, along with the instructions to grab the 6.5.1 tarball from\n> >ftp.postgresql.org, apply patches, configure, compile, install, and they\n> >are set to go (No need to do a CVS pull, etc).\n> \n> Hi Ryan,\n> I'm at a disadvantage here, because I don't have an Alpha and rely on\n> others on debian-alpha to get postgresql packages compiled for Alpha.\n> Thanks for your efforts on this.\n\n\tYou are welcome. :)\n\n> I just want to comment on what you are saying about generating a Debian\n> source package. There will be a problem, because you are proposing to\n> provide source that will be different from the main 6.5.1 source; however,\n> the Debian archive assumes that source is identical across all architectures.\n> This means that the Alpha source for PostgreSQL must not be uploaded to the\n> Debian archive because it will replace the source for all other \n> architectures.\n\n\tI am well aware of this, and if I could have avoided it, I would\nhave. But I decided that the ablity of getting pgsql working on\nlinux/alpha today out weighted any transient difficulties in packaging\n(that would vanish with 6.6 in a couple of months). If you think it was a\nbad decision, the fault then is mine. I have been trying for 2-3 years to\nget pgsql running right on linux/alpha, and don't want to hold up the best\nsolution we have had (ever) for another few months. That at least is the\nreasons behind my actions.\n\n> As to producing the Alpha packages, the procedure should go something\n> like this:\n> \n> 1. Patch postgresql-6.5.1.orig (i.e. postgresql as provided at ftp.postgresql.org).\n> \n> 2. Examine the patches in the latest debian postgresql-6.5.1-x.diff.gz\n> (where x is the latest Debian release) and merge everything that does\n> not conflict with the new Alpha patches.\n> \n> 3. Update the version number in debian/changelog to 6.5.1-x.0.1alph and\n> build the binary packages.\n> \n> 4. Upload the binary packages only.\n> \n> 5. Put the source package in my account (and make sure I have permission\n> to write it!).\n\n\tOk, this I can do. It will take me about a week to get it all\ndone, but it does appear to be do-able. Most of the debian patches for\npgsql should apply from what I have heard (from debian-alpha people),\nexcept for the palloc one, if memory serves me right. Either way, I should\nbe able to work through them given some time.\n\tThe only tricky parts are the actual uploads of the binaries and\nsource code... I am not a Debian developer (maybe someday, when I have\nmore time), and unless there is an anonymous access to the locations you\nlist above, I will not be able to upload them. 
Either provide more details\non the uploading, or I will just put both on my web site, and you can grab\nthem from there and put them where needed.\n\tTTYL.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n",
"msg_date": "Fri, 30 Jul 1999 20:30:57 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PORTS] RedHat6.0 & Alpha "
},
{
"msg_contents": "On Fri, 30 Jul 1999, Christopher C Chimelis wrote:\n\n> What kind of patches are we dealing with? One of us could probably easily\n> review them here and find a way to have the source patched in the event of\n> an Alpha build environment (I already am working on a similar solution for\n> binutils). Being pretty familiar with the postgresql source and the Alpha\n> problems with it, feel free to mail me any patches that you want me to\n> test (right now, I run postgresql, but have no data being served by it, so\n> nothing's in danger).\n\n\tThe patches are only about 60k, and apply cleanly to the 6.5.1\ntarball. Once applied, pgsql will compile as per usual (as stated in the\nINSTALL file). So, if you can put some conditional in the pgsql package\nbuild to apply my patch (singular) when doing a compile on alpha, that\nwould be great.\n\tIf this is what you want to do, let me know, and I will not try\nand generate a pgsql alpha deb myself (as Oliver asked in his email). This\nwill save me time and allow me to move on to actually processing the alpha\npatches for the 6.6 pgsql tree.\n\tTTYL.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n",
"msg_date": "Fri, 30 Jul 1999 20:34:52 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PORTS] RedHat6.0 & Alpha "
},
{
"msg_contents": "> > I just want to comment on what you are saying about generating a Debian\n> > source package. There will be a problem, because you are proposing to\n> > provide source that will be different from the main 6.5.1 source; however,\n> > the Debian archive assumes that source is identical across all architectures.\n\nThe RH RPM distribution has the same constraints. I have hopes that I\ncan take the v6.5.1 tarball, Ryan's patches, a test on RH Alpha, and\nthen validate them for the i386 (and sparc, with a volunteer tester)\narchitectures. If that flys, then perhaps we should commit to a v6.5.2\nwhich *does* contain these changes, but imho we should postpone the\ndiscussion of that until we have shown exactly what it takes.\n\nIf validating on i386 succeeds, we can also do a v6.5.1+patches build\nof an RPM, and presumably the Debian packaging could work this way\ntoo. So it doesn't absolutely require a commit back to the Postgres\ncvs branch if we don't have a consensus on that.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 31 Jul 1999 02:54:05 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PORTS] RedHat6.0 & Alpha"
},
{
"msg_contents": "Ryan Kirkpatrick wrote:\n...\n >\tIf this is what you want to do, let me know, and I will not try\n >and generate a pgsql alpha deb myself (as Oliver asked in his email). This\n >will save me time and allow me to move on to actually processing the alpha\n >patches for the 6.6 pgsql tree.\n \nSorry Ryan; I had not meant that _you_ should be generating the Debian\npackage! I was just documenting the process. \n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Have not I commanded thee? Be strong and of a good \n courage; be not afraid, neither be thou dismayed; for \n the LORD thy God is with thee whithersoever thou \n goest.\" Joshua 1:9 \n\n\n",
"msg_date": "Sat, 31 Jul 1999 05:36:23 +0100",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PORTS] RedHat6.0 & Alpha "
},
{
"msg_contents": "On Sat, 31 Jul 1999, Oliver Elphick wrote:\n\n> Ryan Kirkpatrick wrote:\n> ...\n> >\tIf this is what you want to do, let me know, and I will not try\n> >and generate a pgsql alpha deb myself (as Oliver asked in his email). This\n> >will save me time and allow me to move on to actually processing the alpha\n> >patches for the 6.6 pgsql tree.\n> \n> Sorry Ryan; I had not meant that _you_ should be generating the Debian\n> package! I was just documenting the process. \n\n\tAfter reading other people's responses to your email, I figured\nthat out. But the email was addressed to me, and only CCed to others,\nso it was a bit confusing at first. :) If I can be of any help though\nin gettting the alpha Pgsql deps made, (testing out debs, etc...), feel\nfree to ask.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n\n",
"msg_date": "Sat, 31 Jul 1999 11:01:49 -0600 (MDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PORTS] RedHat6.0 & Alpha "
}
] |
[
{
"msg_contents": "Looks like I am going to be in the cache for some time, adding indexes\nto system tables that are difficult to do.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Jul 1999 20:37:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "cache fixes"
}
] |
[
{
"msg_contents": "\nIts now been a month since the release of v6.5, and the PostgreSQL Global\nDevelopment Group is pleased to announce the release of v6.5.1, which is\nfocused on addressing any bugs or issues post-release:\n\n Fix for datetime constant problem on some platforms(Thomas)\n Add NT README file\n Portability fixes for linux_ppc, Irix, linux_alpha, OpenBSD, alpha\n Remove QUERY_LIMIT, use SELECT...LIMIT\n Fix for EXPLAIN on inheritance(Tom)\n Patch to allow vacuum on multi-segment tables(Hiroshi)\n R-Tree optimizer selectivity fix(Tom)\n ACL file descriptor leak fix(Atsushi Ogawa)\n New expresssion subtree code(Tom)\n Avoid disk writes for read-only transactions(Vadim)\n Fix for removal of temp tables if last transaction was aborted(Bruce)\n Fix to prevent too large tuple from being created(Bruce)\n plpgsql fixes\n Allow port numbers 32k - 64k(Bruce)\n Add ^ precedence(Bruce)\n Rename sort files called pg_temp to pg_sorttemp(Bruce)\n Fix for microseconds in time values(Tom)\n Tutorial source cleanup\n New linux_m68k port\n Fix for sorting of NULL's in some cases(Tom)\n Shared library dependencies fixed (Tom)\n Fixed glitches affecting GROUP BY in subselects(Tom)\n Fix some compiler warnings (Tomoaki Nishiyama)\n Add Win1250 (Czech) support (Pavel Behal)\n\nThis release is the latest, stable release of PostgreSQL, and is available\nat:\n\n\tftp://ftp.postgresql.org/pub/postgresql-6.5.1.tar.gz\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 20 Jul 1999 22:15:23 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 6.5.1 Released ..."
}
] |
[
{
"msg_contents": "> Hello,\n> \n> Just my IMHO you should not break your tree for commercial and non\n> commercial, you can make more than enough money just from offering\n> commercial support for the database. \n> \n> If you want to get sickly rich make sure NOBODY can sell it and that they\n> can only provide support for it. Look at REDHAT. They sell FREE SOFTWARE,\n> or I should say the media and installation book.\n\nWe have never suggested splitting the development tree. Commercial\nsupport is not for us to make money. Just for us to allow\ncommercial-level help.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Jul 1999 02:01:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [ANNOUNCE] PostgreSQL status report"
},
{
"msg_contents": "On Wed, 21 Jul 1999, Bruce Momjian wrote:\n\n> > Hello,\n> > \n> > Just my IMHO you should not break your tree for commercial and non\n> > commercial, you can make more than enough money just from offering\n> > commercial support for the database. \n> > \n> > If you want to get sickly rich make sure NOBODY can sell it and that they\n> > can only provide support for it. Look at REDHAT. They sell FREE SOFTWARE,\n> > or I should say the media and installation book.\n> \n> We have never suggested splitting the development tree. Commercial\n> support is not for us to make money. Just for us to allow\n> commercial-level help.\n\nWhat he said...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 21 Jul 1999 08:59:41 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] PostgreSQL status report"
}
] |
[
{
"msg_contents": "Hi,\n\nWell, I got psql to do it's thing, eventually. I've tested it for pretty\nmuch everything, including \\e, \\g, \\r, \\i. \nThe one problem that I have had is that after about the third '\\i long.sql',\nI get a core dump, because sprintf moaned about string size complications.\nThe way I have structured it, memory is reallocated (re- malloc'd, not\nrealloc'd) every time the query is extended. I suspect that this is very\ninefficient, and probably causing the system to hooch after loading long.sql\nthree times. The main thought that I have had is to extend the query buffer\nin blocks of about 8k or 16k. I presume that once working with set memory\nsizes, the memory usage will be substantially more efficient. Ideas?\n\nAlso, what's the deal with realloc? I tried it a couple of times, but it\nreally screwed me around (hence the re- malloc'ing). Or is it just a Bad\nMove to use realloc?\n\nMikeA\n\n",
"msg_date": "Wed, 21 Jul 1999 10:04:39 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql & query string length"
},
{
"msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> The way I have structured it, memory is reallocated (re- malloc'd, not\n> realloc'd) every time the query is extended. I suspect that this is very\n> inefficient,\n\nProbably. You should normally expand by a significant amount each time\nyou reallocate an expansible buffer, just to avoid making too many\ndemands on malloc. The method I favor is to double the buffer size at\neach realloc step.\n\n> and probably causing the system to hooch after loading long.sql\n> three times.\n\n... but not doing so shouldn't cause a coredump. I bet a plain old\nbug is involved here, like writing past the end of the space you do\nhave allocated.\n\n> Also, what's the deal with realloc? I tried it a couple of times, but it\n> really screwed me around (hence the re- malloc'ing). Or is it just a Bad\n> Move to use realloc?\n\nrealloc is perfectly fine ... see above for more likely theory.\n\nOn some old pre-ANSI-standard machines, realloc(NULL, ...) does not\nwork, so for portability's sake you ought to only use realloc to\nincrease the size of an existing buffer.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Jul 1999 10:42:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql & query string length "
}
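A minimal sketch of the doubling strategy Tom describes, assuming
nothing about psql's actual buffer code; QueryBuf and qb_append are
invented names for the example. Note that realloc() is only ever
called on an already-allocated buffer, per the portability caveat
above:

    #include <stdlib.h>
    #include <string.h>

    typedef struct
    {
        char   *data;   /* NUL-terminated text, or NULL before first use */
        size_t  len;    /* bytes currently used, excluding the NUL */
        size_t  alloc;  /* bytes currently allocated */
    } QueryBuf;

    /*
     * Append n bytes of s, doubling the allocation whenever it runs out,
     * so the number of allocator calls grows only logarithmically with
     * the final query length.  Returns 0 on success, -1 on out-of-memory.
     */
    static int
    qb_append(QueryBuf *qb, const char *s, size_t n)
    {
        if (qb->data == NULL)
        {
            qb->alloc = 1024;
            while (qb->alloc < n + 1)
                qb->alloc *= 2;
            qb->data = malloc(qb->alloc);   /* never realloc(NULL, ...) */
            if (qb->data == NULL)
                return -1;
            qb->len = 0;
        }
        else if (qb->len + n + 1 > qb->alloc)
        {
            char   *tmp;

            while (qb->len + n + 1 > qb->alloc)
                qb->alloc *= 2;
            tmp = realloc(qb->data, qb->alloc);
            if (tmp == NULL)
                return -1;
            qb->data = tmp;
        }
        memcpy(qb->data + qb->len, s, n);
        qb->len += n;
        qb->data[qb->len] = '\0';
        return 0;
    }

A caller starts with QueryBuf qb = { NULL, 0, 0 }; and appends query
text as it arrives; growing by doubling rather than re-malloc'ing on
every extension avoids both the allocator churn and the copy-per-append
cost described in the question.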
] |
[
{
"msg_contents": "Hello!\n\n I've sent 3 mails to pgsql-patches. There are two files, one for doc and\nfor src/data directories, and one minor patch for doc/README.locale.\n Please apply.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Wed, 21 Jul 1999 14:57:01 +0400 (MSD)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "RECODE patches"
},
{
"msg_contents": "> Hello!\n> \n> I've sent 3 mails to pgsql-patches. There are two files, one for doc and\n> for src/data directories, and one minor patch for doc/README.locale.\n> Please apply.\n\nAll three applied to stable and current trees. It is only doc additions\nand charset.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Aug 1999 16:29:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RECODE patches"
}
] |
[
{
"msg_contents": "> Hi, Massimo,\n> \n> You might have noticed my posting a couple of days back about the max query\n> string length problem. I was referred to you by Tom Lane, as he says that\n> you've done some work in this area. I'd like to sort this out, mainly\n> because I'd like to get involved in the development, and this is about as\n> good a place to start as any. So, some question, if you have some time:\n> \n> Why is MAX_QUERY_SIZE dependent on BLCKSZ?\n\nThe implicit assumption is that the maximum query size is correlated to the\ntuple size which is slightly less than block size. A query size twice the\ntuple size seems reasonable in most cases, unless you insert very complex\ndata with a lot of escapes, function calls or constants with large ascii\noutput formats like floats.\n\n> Are there dependencies that I should be aware of when trying to \n> \tadjust this? Not dependencies on MAX_QUERY_SIZE, \n> \tbut rather dependencies _of_ MAX_QUERY_SIZE.\n\nYou should check the buffer size in the parser and egcp. I made also a change\ninto some Makefile to set the YY_BUF_SIZE used by lex to 65536. This is\nreally a kludge but I didn't find a better way to do it. It seems that the\nlibpq buffer size is not dependent on BLCKSZ.\n\n> Which areas of the system, that you are aware of, will require \n> \tchanging, or at least checking, to ensure that they work?\n\nExcept the above YY_BUF_SIZE you shouldn't need to change anything.\nI turned all references to the original `8192' to references to BLCKSZ,\nso changing BLCKSZ should automatically adjust all the other constants.\nTry to grep '#define.*BLCKSZ' in the sources and see what is depending on\nBLCKSZ. I'm currently working with a backend compiled with BLCKSZ=32768\nand it works fine for me.\n\n> \n> Any other hints, tips, and otherwise would be much appreciated.\n\nA better way would be to allocate and grow query buffers dynamically while\nreading the query but you will anyway have troubles with lex and yacc which\nuse statically allocated buffers whose size is hardwired in the program.\nThis is why I had to make those ugly changes in the Makefiles.\n\n> \n> MikeA\n> \n> --------------------------------------------------------------------\n> Science is the game we play with God to find out what his rules are.\n> --------------------------------------------------------------------\n> \n> [(LI)U]NIX IS user friendly; it's just picky about who its friends are.\n> \n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n",
"msg_date": "Wed, 21 Jul 1999 13:54:15 +0200 (MEST)",
"msg_from": "Massimo Dal Zotto <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Changing the query string max length"
}
] |
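For reference, a sketch of the BLCKSZ-derived definition Massimo describes. The exact header location and multiplier in the 6.5 sources may differ; this only illustrates the relationship:

/* Illustrative, not a verbatim copy of the 6.5 headers: because the
 * query limit is expressed in terms of BLCKSZ, raising BLCKSZ lifts
 * the query-size limit with it. */
#define BLCKSZ          8192                /* disk block size */
#define MAX_QUERY_SIZE  (BLCKSZ * 2)        /* roughly twice the tuple size */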
[
{
"msg_contents": "\nI've been reading up on what Informix and Oracle provide in the way of\nobject support.\n\nIn particular I noticed that in Informix when you SELECT on a table it\nby default includes all the objects of sub-classes. In other words the\n\"*\" is postgres terms is always there by default. If you just want that\nclass only you have to say ONLY(tablename).\n\nTo me this is a much better idea. In any proper OO application you would\nbe using the \"*\" in postgres 99% of the time - that being the whole\npoint of OO. Does any consideration want to be given to making the same\nchange while there's not too many people using the inheritance feature?\nI realise breaking compatibility is bad, but I think this is the Right\nThing. When you say \"SELECT * FROM animal\" it's reasonable that you be\nreturned all elephants. To not return them is pretty strange for the\nuninitiated.\n\nThe other thing Informix does is automatically propagate all attributes\nincluding indexes, constraints, pretty much everything to sub-classes.\nAgain.. I think this is the right thing. Any thoughts?\n\nAs for Oracle 8i, as far as I can tell it provides no support for\ninheritance whatsoever. The docs themselves say \"Oracle doesn't support\ninheritance\". It's a bit rich really to call it Oracle \"object\" in any\nshape or form.\n\n-- \nChris Bitmead\nmailto:[email protected]\n",
"msg_date": "Wed, 21 Jul 1999 23:21:27 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "inheritance"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> To me this is a much better idea. In any proper OO application you would\n> be using the \"*\" in postgres 99% of the time - that being the whole\n> point of OO. Does any consideration want to be given to making the same\n> change while there's not too many people using the inheritance feature?\n\nWhat makes you think there's \"not too many people\" using inheritance?\nFurthermore, if we did that it would break the code of people who\n*didn't* think they were using inheritance, except as a means of\ncopying table definitions (which I do a lot, btw).\n\nI don't think we can reverse the default on that at this late date.\n\n> The other thing Informix does is automatically propagate all attributes\n> including indexes, constraints, pretty much everything to sub-classes.\n> Again.. I think this is the right thing. Any thoughts?\n\nI'd be inclined to agree on that, or at least say that we ought to\nprovide a simple way of making it happen. But the right semantics\nare not always obvious. For example, if the ancestor has a SERIAL\ncolumn, do the derived tables get their own sequence objects or\nshare the ancestor's? Does your answer change if the serial column\nwas created \"by hand\" with a \"DEFAULT nextval('some_sequence')\" clause?\nI suspect that any way we jump on this sort of question will be wrong\nfor some apps, so it should be possible to suppress system copying of\nattributes...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Jul 1999 10:59:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] inheritance "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > To me this is a much better idea. In any proper OO application you would\n> > be using the \"*\" in postgres 99% of the time - that being the whole\n> > point of OO.\n\nAnd considering that the Informix OO is probably really\nIllustra/Postgres OO,\nthis is most likely what PostgreSQL was meant to do in the first place.\n\n> > Does any consideration want to be given to making the same\n> > change while there's not too many people using the inheritance feature?\n> \n> What makes you think there's \"not too many people\" using inheritance?\n\nThe poor shape the PostgreSQL inheriatnce (and OO in general) is in ?\n\n> Furthermore, if we did that it would break the code of people who\n> *didn't* think they were using inheritance, except as a means of\n> copying table definitions (which I do a lot, btw).\n\nThis use is to real inheritance as (MS win) cooperative multitasking \nis to real multitasking; when you stick to it too much, you will \nnever have the real one.\n\n> I don't think we can reverse the default on that at this late date.\n\nMaybe we should then need some other construct for _real_ inheritance?\nA keyword like EXTENDS or something. \nWhat does ANSI SQL3 say on inheritance?\n\n> > The other thing Informix does is automatically propagate all attributes\n> > including indexes, constraints, pretty much everything to sub-classes.\n> > Again.. I think this is the right thing. Any thoughts?\n> \n> I'd be inclined to agree on that, or at least say that we ought to\n> provide a simple way of making it happen. But the right semantics\n> are not always obvious. For example, if the ancestor has a SERIAL\n> column, do the derived tables get their own sequence objects or\n> share the ancestor's?\n\nThe ancestors sequence of course (ain't I smart <grin> ;)\n\n> Does your answer change if the serial column\n> was created \"by hand\" with a \"DEFAULT nextval('some_sequence')\" clause?\n\nIt should not, else the column would not be _relly_ inherited.\n\nAnd as we do not have any way change any constraits/defaults after table \ncreation this problem could be postponed to some later date.\n\nbtw, is ALTER TABLE ADD/DROP CONSTRAINT, and changing column defaults \nplanned for 6.6 ?\n\nOTOH, I'm not sure if DROP TABLE should also drop all inherited tables\ntoo?\nMy guess is that it should by default (disabled by ONLY ?) - what does\nInformix do?\n\n> I suspect that any way we jump on this sort of question will be wrong\n> for some apps, so it should be possible to suppress system copying of\n> attributes...\n\nmaybe we should have a TEMPLATE in addition to INHERITS ?\n\n> \n> regards, tom lane\n",
"msg_date": "Wed, 21 Jul 1999 19:26:00 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] inheritance"
}
] |
[
{
"msg_contents": "\nIn implementing a core Text C++ class object, \nwe use realloc() without problems. However, with\nregard to resizing, we always DOUBLE the existing\nsize of the buffer when the string needs to be \nexpanded so that it doesn't take 100 iterations\n(and therefore 100 realloc()'s) to create an 800K\nbuffer.\n\nHope this helps, \n\nM. Mascari\n\n--- \"Ansley, Michael\" <[email protected]>\nwrote:\n> Hi,\n> \n> Well, I got psql to do it's thing, eventually. I've\n> tested it for pretty\n> much everything, including \\e, \\g, \\r, \\i. \n> The one problem that I have had is that after about\n> the third '\\i long.sql',\n> I get a core dump, because sprintf moaned about\n> string size complications.\n> The way I have structured it, memory is reallocated\n> (re- malloc'd, not\n> realloc'd) every time the query is extended. I\n> suspect that this is very\n> inefficient, and probably causing the system to\n> hooch after loading long.sql\n> three times. The main thought that I have had is to\n> extend the query buffer\n> in blocks of about 8k or 16k. I presume that once\n> working with set memory\n> sizes, the memory usage will be substantially more\n> efficient. Ideas?\n> \n> Also, what's the deal with realloc? I tried it a\n> couple of times, but it\n> really screwed me around (hence the re- malloc'ing).\n> Or is it just a Bad\n> Move to use realloc?\n> \n> MikeA\n> \n> \n> \n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 21 Jul 1999 10:07:12 -0400 (EDT)",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] psql & query string length"
}
] |
[
{
"msg_contents": "If there was enuff interest (I'm not siding one way or the other) you could\nadd in a global setting to change the default.\nI was also curious as to why these msgs are cross posted in 3 different\ngroups...\n-----Original Message-----\nFrom: Tom Lane <[email protected]>\nTo: Chris Bitmead <[email protected]>\nCc: [email protected] <[email protected]>;\[email protected] <[email protected]>;\[email protected] <[email protected]>\nDate: Wednesday, July 21, 1999 10:14 AM\nSubject: [GENERAL] Re: [HACKERS] inheritance\n\n\n>Chris Bitmead <[email protected]> writes:\n>> To me this is a much better idea. In any proper OO application you would\n>> be using the \"*\" in postgres 99% of the time - that being the whole\n>> point of OO. Does any consideration want to be given to making the same\n>> change while there's not too many people using the inheritance feature?\n>\n>What makes you think there's \"not too many people\" using inheritance?\n>Furthermore, if we did that it would break the code of people who\n>*didn't* think they were using inheritance, except as a means of\n>copying table definitions (which I do a lot, btw).\n>\n>I don't think we can reverse the default on that at this late date.\n>\n>> The other thing Informix does is automatically propagate all attributes\n>> including indexes, constraints, pretty much everything to sub-classes.\n>> Again.. I think this is the right thing. Any thoughts?\n>\n>I'd be inclined to agree on that, or at least say that we ought to\n>provide a simple way of making it happen. But the right semantics\n>are not always obvious. For example, if the ancestor has a SERIAL\n>column, do the derived tables get their own sequence objects or\n>share the ancestor's? Does your answer change if the serial column\n>was created \"by hand\" with a \"DEFAULT nextval('some_sequence')\" clause?\n>I suspect that any way we jump on this sort of question will be wrong\n>for some apps, so it should be possible to suppress system copying of\n>attributes...\n>\n> regards, tom lane\n>\n\n\n",
"msg_date": "Wed, 21 Jul 1999 10:32:37 -0500",
"msg_from": "\"Kane Tao\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] inheritance "
}
] |
[
{
"msg_contents": "\nJust a quick note to say that my home email is now working again,\nalthough I've yet to get procmail working again, so everything is in my\ninbox (ouch).\n\nAnyhow, because I ended up reinstalling my server (installed an ISDN card,\nand couldn't get RedHat to recognise it - I'm now running SuSe), I may be\na little slow in responding to emails for a few days.\n\nPeter\n\n--\n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Wed, 21 Jul 1999 23:17:36 +0100 (GMT)",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "Email to me"
}
] |
[
{
"msg_contents": "Don't kill a psql client that's in the middle of a COPY IN operation.\n\nWith current sources, the connected backend fails to quit, but instead\ngoes into an infinite loop writing\npq_recvbuf: unexpected EOF on client connection\nto stderr over and over.\n\nIf you have postmaster stderr directed to a disk file, as I believe\nis standard procedure, by and by the disk the postmaster logfile\nis on fills up, and people start getting very unhappy...\n\nI assume this is fairly easily fixed, but do not have a fix right this\ninstant. It's probably my fault though --- I suppose it is an artifact\nof the changes I made a couple months ago to prevent NOTICE messages\nfrom coming out at inopportune times. (If the bug were in pre-6.5\nreleases I'm sure we'd have heard about it before.)\n\nWill produce a back-patch for 6.5 when I have it, but wanted to give\npeople a heads-up now. Most embarrassing.\n\nBTW, it occurs to me that the system ought to have provisions for\nlimiting the size of the logfile, rotating logfiles from time to\ntime, etc ... right now you cannot do those things easily except\nby restarting the postmaster :-(. Even without this bug, a determined\nattacker could create a DOS attack by doing EXPLAIN VERBOSE enough\ntimes to run the postmaster logfile up to full disk. Bruce, I think\nwe need another TODO item:\n * prevent postmaster logfile from growing without bound\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Jul 1999 20:45:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Dropped connection during COPY causes trouble"
},
{
"msg_contents": "> \n> BTW, it occurs to me that the system ought to have provisions for\n> limiting the size of the logfile, rotating logfiles from time to\n> time, etc ... right now you cannot do those things easily except\n> by restarting the postmaster :-(. Even without this bug, a determined\n> attacker could create a DOS attack by doing EXPLAIN VERBOSE enough\n> times to run the postmaster logfile up to full disk. Bruce, I think\n> we need another TODO item:\n> * prevent postmaster logfile from growing without bound\n\nNot sure this really is a PostgreSQL issue. Other OS systems have this\nproblem too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Jul 1999 21:08:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] Dropped connection during COPY causes trouble"
},
{
"msg_contents": "\nOn 22-Jul-99 Bruce Momjian wrote:\n>> \n>> BTW, it occurs to me that the system ought to have provisions for\n>> limiting the size of the logfile, rotating logfiles from time to\n>> time, etc ... right now you cannot do those things easily except\n>> by restarting the postmaster :-(. Even without this bug, a determined\n>> attacker could create a DOS attack by doing EXPLAIN VERBOSE enough\n>> times to run the postmaster logfile up to full disk. Bruce, I think\n>> we need another TODO item:\n>> * prevent postmaster logfile from growing without bound\n> \n> Not sure this really is a PostgreSQL issue. Other OS systems have this\n> problem too.\n\nI've seen these complaints on other lists for other programs (mail, news,\nweb, etc...). IMO it's more of an administration issue than an OS issue\nor even something for the program filling it. The admin needs to know \nwhat's running on his/her system and how to handle the logs (and the \nrotating of) accordingly.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Wed, 21 Jul 1999 21:48:57 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: [BUGS] Dropped connection during COPY causes t"
},
{
"msg_contents": "At 09:48 PM 7/21/99 -0400, Vince Vielhaber wrote:\n\n>I've seen these complaints on other lists for other programs (mail, news,\n>web, etc...). IMO it's more of an administration issue than an OS issue\n>or even something for the program filling it. The admin needs to know \n>what's running on his/her system and how to handle the logs (and the \n>rotating of) accordingly.\n\nWell, there are programs that allow automatic rotating of logs, of\ncourse. AOLserver - the webserver I use with Postgres - is one \nexample. I think I can limit the size of a log, too, but would\nhave to check.\n\nIt seems like a useful feature for any 24/7 service.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Wed, 21 Jul 1999 21:07:48 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: [BUGS] Dropped connection during COPY causes\n t"
}
] |
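A minimal sketch of the kind of EOF check that stops the loop Tom describes: treat recv() returning zero as a lost client and bail out instead of retrying forever. This is generic socket code under that assumption, not the actual pq_recvbuf() fix:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Hypothetical read helper: give up on EOF rather than spinning. */
static void
read_from_client(int sock, char *buf, size_t buflen)
{
    for (;;)
    {
        ssize_t n = recv(sock, buf, buflen, 0);

        if (n > 0)
            break;                  /* got data */
        if (n == 0)
        {
            /* Client closed its end: log once and terminate this
             * backend instead of retrying in a loop that floods the
             * postmaster logfile. */
            fprintf(stderr, "unexpected EOF on client connection\n");
            exit(1);
        }
        if (errno != EINTR)
        {
            perror("recv");         /* real error: also fatal */
            exit(1);
        }
        /* interrupted by a signal: retry */
    }
}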
[
{
"msg_contents": "Hi all,\n\nA question about TODO item\n * Fix memory exhaustion when using many OR's\n\npull_ors() and pull_ands() are called while processing \ncnfify() and both call copyObject().\n\t\t ^^^^^^^^^^^^^^\nFor example in pull_ors()\n\n\treturn (pull_ors(nconc(copyObject((Node *) args),\n\t\t\tcopyObject((Node *) lnext(orlist)))));\n\ncopyObject() seems too heavy\nIs copyObject() necessary in this case ?\nCouldn't we change as below ?\n\n\treturn (pull_ors(nconc(listCopy(args),\n\t\t\tlistCopy( lnext(orlist)))));\t\n\nI'm not sure it's possible or not ,because I don't understand\ncnfify() and other related stuff.\n\nIf it's possible,it would improve cnfify()'s performance and \nmemory consumption in many OR's cases ,though it would\nnever fix TODO item.\n\nComments ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Thu, 22 Jul 1999 10:47:30 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "cnfify() performance"
}
] |
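A rough sketch of the distinction Hiroshi is drawing, simplified rather than the actual backend node code: copyObject() recursively deep-copies every node, while listCopy() duplicates only the cons cells and leaves the elements shared, which is all the destructive nconc() needs in order not to clobber its inputs:

#include <stdlib.h>

/* Simplified cons-cell list standing in for the backend's List type. */
typedef struct List
{
    void        *elem;
    struct List *next;
} List;

/* Copy the cells only; the pointed-to nodes stay shared.
 * (Error handling omitted for brevity.) */
static List *
listCopy(List *l)
{
    List   *head = NULL;
    List  **tail = &head;

    for (; l != NULL; l = l->next)
    {
        List   *cell = malloc(sizeof(List));

        cell->elem = l->elem;       /* share the element, no deep copy */
        cell->next = NULL;
        *tail = cell;
        tail = &cell->next;
    }
    return head;
}

Since the element trees dominate the memory cost of a many-OR query, sharing them instead of deep-copying could cut cnfify()'s memory use substantially -- provided nothing downstream mutates the shared nodes, which is exactly the part Hiroshi says he is unsure about.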
[
{
"msg_contents": "Hi,\n\nI am trying to write an interface for accessing the postmaster and\nsubsequently a postgres database server in a language that our group has\nwritten, which is called APRIL. However, I am having a few problems\nmaking the connection.\n\nI appear to be able to make a successfuly connection to the postmaster\nby making a normal socket connection to port 5432 and sending a startup\npacket, which consists of:\n\n00 00 01 20 as the length (296 bytes)\n00 02 00 00 as the major and minor protocol numbers (2.0)\n\"dbname\\0\" as a 64 byte string representing the database name\n\"postgres\\0\" as a 32 byte string representing the user name\n\"\\0\" as a 64 byte string representing the options\n\"\\0\" as a 64 byte string representing unused bytes\n\"\\0\" as a 64 byte string representing the tty\n\nAnd I get back:\n\n\"R\" 00 00 00 00 which indicates a successful connection\n\nHowever, when my process has read this, the postmaster displays the\nfollowing error:\n\nFATAL 1: Socket command type unknown\n\nand the connection is closed.\n\nDoes anyone have any idea of what I am doing wrong? I assumed that the\npostmaster would fork a new postgres process to handle my connection and\nI should be expecting some data on the socket to tell me that the\npostgres process is ready for an SQL query. The postmaster is being\nexecuted with the -i option.\n\nThanks in advance,\n\n\nJonathan\n+-------------------------------------------------------------------+\n| \"Never settle with words what you can accomplish with a |\n| flamethrower.\" -- Bruce Feirstein |\n+-------------------------------------------------------------------+",
"msg_date": "Thu, 22 Jul 1999 00:01:39 -0700",
"msg_from": "Jonathan Dale <[email protected]>",
"msg_from_op": true,
"msg_subject": "Frontend/Backend Protocol"
},
{
"msg_contents": "Jonathan Dale <[email protected]> writes:\n> And I get back:\n> \"R\" 00 00 00 00 which indicates a successful connection\n\nLooks good so far (I suppose you are using 'trust' authentication mode).\n\n> However, when my process has read this, the postmaster displays the\n> following error:\n\n> FATAL 1: Socket command type unknown\n\n> and the connection is closed.\n\nNo, the postmaster didn't send that; the backend did. Looks like you\nsent one byte too many, probably a null byte, and the backend received\nit as the first input data byte. Since it's not a valid protocol\ncommand character, the backend gives up and dies.\n\n> I assumed that the\n> postmaster would fork a new postgres process to handle my connection\n\n... it did ...\n\n> and I should be expecting some data on the socket to tell me that the\n> postgres process is ready for an SQL query.\n\nYou should have gotten a ReadyForQuery message if you are talking to\na 6.4 or later backend, and if you used the right protocol version\nnumber in the connect request. I speculate that you have an old server,\nor you asked for protocol version 1, or you miscounted bytes and missed\nthe appearance of the ReadyForQuery ('Z') message.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Jul 1999 20:30:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Frontend/Backend Protocol "
}
] |
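A self-contained sketch of assembling the v2.0 startup packet as laid out in Jonathan's message, with the field sizes he lists. The helper name and the SM_* macros here are assumptions for illustration:

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>          /* htonl */
#include <unistd.h>             /* write */

#define SM_DATABASE 64
#define SM_USER     32
#define SM_OPTIONS  64
#define SM_UNUSED   64
#define SM_TTY      64

static int
send_startup(int sock, const char *dbname, const char *user)
{
    /* 4-byte length (counting itself) + 4-byte protocol + fixed
     * fields = 296 bytes total. */
    char        pkt[4 + 4 + SM_DATABASE + SM_USER + SM_OPTIONS + SM_UNUSED + SM_TTY];
    uint32_t    len = htonl(sizeof(pkt));
    uint32_t    proto = htonl((2 << 16) | 0);   /* protocol 2.0 */
    ssize_t     n;

    memset(pkt, 0, sizeof(pkt));                /* zero-pad every field */
    memcpy(pkt, &len, 4);
    memcpy(pkt + 4, &proto, 4);
    strncpy(pkt + 8, dbname, SM_DATABASE - 1);
    strncpy(pkt + 8 + SM_DATABASE, user, SM_USER - 1);
    /* options, unused and tty fields stay empty */

    /* Send exactly sizeof(pkt) bytes: per Tom's diagnosis above, one
     * stray trailing byte is read by the backend as a protocol command
     * character and kills it with "Socket command type unknown". */
    n = write(sock, pkt, sizeof(pkt));
    return (n == (ssize_t) sizeof(pkt)) ? 0 : -1;
}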
[
{
"msg_contents": "subscribe\n\n",
"msg_date": "Thu, 22 Jul 1999 09:22:04 +0200",
"msg_from": "\"F.J.Cuberos\" <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
}
] |
[
{
"msg_contents": "Hello!\n\n Oliver pointed that my English was not good enough and suggested ways to\nimprove my docs. Below is a minor patch for doc/README.locale (the patch\nshould be applied after the patch I've sent yesterday).\n\n*** README.locale.orig\tWed Jul 21 13:42:28 1999\n--- README.locale\tThu Jul 22 12:27:42 1999\n***************\n*** 2,10 ****\n 1999 Jul 21\n ===========\n \n! Josef Balatka, <[email protected]> asked no to remove RECODE and sent me\n Czech ISO-8859-2 -> WIN-1250 translation table.\n! RECODE is no more Cyrillic RECODE and will stay in PostgreSQL.\n \n He also created some bits of documentation, mostly concerning RECODE -\n see README.Charsets.\n--- 2,10 ----\n 1999 Jul 21\n ===========\n \n! Josef Balatka, <[email protected]> asked not to remove RECODE and sent me\n Czech ISO-8859-2 -> WIN-1250 translation table.\n! RECODE now is more than just Cyrillic RECODE and will stay in PostgreSQL.\n \n He also created some bits of documentation, mostly concerning RECODE -\n see README.Charsets.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Thu, 22 Jul 1999 12:35:45 +0400 (MSD)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "README.locale patch 2"
},
{
"msg_contents": "Patch skipped. I already did this, I think.\n\n\n> Hello!\n> \n> Oliver pointed that my English was not good enough and suggested ways to\n> improve my docs. Below is a minor patch for doc/README.locale (the patch\n> should be applied after the patch I've sent yesterday).\n> \n> *** README.locale.orig\tWed Jul 21 13:42:28 1999\n> --- README.locale\tThu Jul 22 12:27:42 1999\n> ***************\n> *** 2,10 ****\n> 1999 Jul 21\n> ===========\n> \n> ! Josef Balatka, <[email protected]> asked no to remove RECODE and sent me\n> Czech ISO-8859-2 -> WIN-1250 translation table.\n> ! RECODE is no more Cyrillic RECODE and will stay in PostgreSQL.\n> \n> He also created some bits of documentation, mostly concerning RECODE -\n> see README.Charsets.\n> --- 2,10 ----\n> 1999 Jul 21\n> ===========\n> \n> ! Josef Balatka, <[email protected]> asked not to remove RECODE and sent me\n> Czech ISO-8859-2 -> WIN-1250 translation table.\n> ! RECODE now is more than just Cyrillic RECODE and will stay in PostgreSQL.\n> \n> He also created some bits of documentation, mostly concerning RECODE -\n> see README.Charsets.\n> \n> Oleg.\n> ---- \n> Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n> Programmers don't die, they just GOSUB without RETURN.\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Aug 1999 16:33:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] README.locale patch 2"
}
] |
[
{
"msg_contents": "Massimo wrote:\n>> A better way would be to allocate and grow query buffers dynamically\nwhile\n>> reading the query but you will anyway have troubles with lex and yacc\nwhich\n>> use statically allocated buffers whose size is hardwired in the program.\n>> This is why I had to make those ugly changes in the Makefiles.\n\nWell, if the query length is limited by yacc, bison, lex, or any other tools\nthat we use, is it worthwhile trying to make it dynamic? If I can still\nonly get a 64k query into the backend, what use is there in creating a 800k\nquery in psql?\n\nThought and/or bright ideas or welcomed...\n\n\nMikeA\n\n",
"msg_date": "Thu, 22 Jul 1999 11:18:53 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Maximum query string length"
},
{
"msg_contents": "Ansley, Michael wrote:\n> \n> Massimo wrote:\n> >> A better way would be to allocate and grow query buffers dynamically\n> while\n> >> reading the query but you will anyway have troubles with lex and yacc\n> which\n> >> use statically allocated buffers whose size is hardwired in the program.\n> >> This is why I had to make those ugly changes in the Makefiles.\n> \n> Well, if the query length is limited by yacc, bison, lex, or any other tools\n> that we use, is it worthwhile trying to make it dynamic? If I can still\n> only get a 64k query into the backend, what use is there in creating a 800k\n> query in psql?\n> \n> Thought and/or bright ideas or welcomed...\n> \n> MikeA\n\nThat shouldn't be hard to fix. Though it might be hard to fix with good\nperformance. The default input routines in lex fill its internal buffer\nwith\na 'line' of text. But all those routines can be overridden. So you could\nmake lex read directly from the dynamic buffer. Look at yyinput (or is\nit yy_input - can't remember).\n\nIf you do that, I think the only limitation will be on the size of a\nsingle token. I don't know how postgresql handles quoted text and\ncomments,\nbut that would be where the problem may arise.\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Thu, 22 Jul 1999 08:39:28 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Maximum query string length"
},
{
"msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> Massimo wrote:\n>>> you will anyway have troubles with lex and yacc\n\n> Well, if the query length is limited by yacc, bison, lex, or any other tools\n> that we use, is it worthwhile trying to make it dynamic?\n\nYes, I think so. We have not yet tried hard to persuade those tools to\ncooperate, but I find it hard to believe that they cannot be handed a\nsource string in an externally supplied buffer. At worst, we might find\nthat we can only promise > 64K query length when using a bison-generated\nparser (since the parser innards tend to vary a lot across vendor\nyaccs).\n\nlex may or may not be worth worrying about --- the buffer size limit it\nwould impose would be for a single token, I believe, not for the whole\nquery.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Jul 1999 20:34:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Maximum query string length "
}
] |
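A sketch of the scanner-input override Mark suggests, feeding flex from an in-memory query string so the query as a whole is no longer bounded by the static input buffer. The variable and function names here are hypothetical; in a flex .l file the standard hook is the YY_INPUT macro:

/* Scan from a NUL-terminated, arbitrarily long query string. */
static const char *scan_buf;    /* whole query text */
static size_t      scan_pos;    /* read cursor */

static int
my_input(char *dest, int max)
{
    int n = 0;

    while (n < max && scan_buf[scan_pos] != '\0')
        dest[n++] = scan_buf[scan_pos++];
    return n;                   /* 0 tells the scanner it hit EOF */
}

#undef  YY_INPUT
#define YY_INPUT(buf, result, max_size) \
    ((result) = my_input((buf), (max_size)))

As Mark and Tom both note, this leaves only the per-token limit to worry about (quoted strings and comments being the likely offenders), not the overall query length.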
[
{
"msg_contents": "\nHi,\n\nI need set lock table privilage for user, but now I must grant upload\npriv. for it. But I needn't upload priv. for user. Upload is possible\nvia RULEs without GRANTed (upload) privilage for user.\n\nMy suggestion is add to GRANT command LOCK privilage.\n\n ....\n privilege is {ALL | SELECT | INSERT | UPDATE | DELETE | RULE | LOCK\n .... \t\t\t\t\t\t\t ^^^^ \n\n\t\t\t\t\t\t\tZakkr\n\t\t\t\t\t\t\t\n\n",
"msg_date": "Thu, 22 Jul 1999 13:32:02 +0200 (CEST)",
"msg_from": "Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "GRANT suggestion"
},
{
"msg_contents": "Zakkr wrote:\n> \n> Hi,\n> \n> I need set lock table privilage for user, but now I must grant upload\n> priv. for it. But I needn't upload priv. for user. Upload is possible\n> via RULEs without GRANTed (upload) privilage for user.\n> \n> My suggestion is add to GRANT command LOCK privilage.\n> \n> ....\n> privilege is {ALL | SELECT | INSERT | UPDATE | DELETE | RULE | LOCK\n\nOracle:\n\nThe table or view must be in your own schema or you must have \nLOCK ANY TABLE system privilege or \nyou must have any object privilege on the table or view. \n ^^^^^^^^^^^^^^^^^^^^\n\nSo, I agreed with new LOCK privilege with addition above.\n\nVadim\n",
"msg_date": "Thu, 22 Jul 1999 20:12:52 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] GRANT suggestion"
}
] |
[
{
"msg_contents": "Using 6.5 (via Thomas Lockhart's Linux RPM build of Jul 2), I get a\nphantom row when doing the following:\n\n create table foo (a int);\n select t1.a, count(*) from foo t1, foo t2 group by t1.a;\n\nI get\n\n a|count\n -+-----\n | 0\n (1 row)\n\ninstead of zero rows. The row has an a column of \"NULL\". This happens\neven if I create table foo as \"(a int not null)\".\n\nI've checked that Informix 7.3LE gives zero rows as expected.\n\nFurther, if I add\n having t1.a is not null\nto the select query to try to get rid of the bogus row then it gives\n ERROR: SELECT/HAVING requires aggregates to be valid\nbut I don't know quite what that's telling me.\n\nSome of you might remember I had that other multi-aggregate/view\nproblem recently which turned out to be fairly fundamentally unfixable\ndue to the way postgres holds views internally in a close-to-SQL\nformat rather than the underlying relational algebra. Can anyone tell\nme if this phantom row thing is another consequence of the\nimplementation of aggregates in postgres or is just a buglet that can\nbe fixed fairly easily?\n\nThanks,\n--Malcolm\n\n-- \nMalcolm Beattie <[email protected]>\nUnix Systems Programmer\nOxford University Computing Services\n",
"msg_date": "Thu, 22 Jul 1999 14:38:19 +0100 (BST)",
"msg_from": "Malcolm Beattie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Phantom row from aggregate in self-join in 6.5"
},
{
"msg_contents": "Malcolm Beattie <[email protected]> writes:\n> Using 6.5 (via Thomas Lockhart's Linux RPM build of Jul 2), I get a\n> phantom row when doing the following:\n> create table foo (a int);\n> select t1.a, count(*) from foo t1, foo t2 group by t1.a;\n> I get\n> a|count\n> -+-----\n> | 0\n> (1 row)\n> instead of zero rows.\n\nIt's not a bug, it's a feature ... or at least there are some around\nhere who claim that the behavior is OK. I think they're wrong, but\nif you want it changed you'll need to cite chapter and verse from the\nSQL92 standard, not just assert that Informix does it differently.\nYou'll find several past discussions of this point in the pgsql-hackers\narchives, and they all seem to have ended inconclusively.\n\n> is it just a buglet that can be fixed fairly easily?\n\nI think it would not be hard to fix, if we have a consensus that the\nbehavior should change.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Jul 1999 20:52:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Phantom row from aggregate in self-join in 6.5 "
},
{
"msg_contents": "At 20:52 22/07/99 -0400, you wrote:\n>Malcolm Beattie <[email protected]> writes:\n>> Using 6.5 (via Thomas Lockhart's Linux RPM build of Jul 2), I get a\n>> phantom row when doing the following:\n>> create table foo (a int);\n>> select t1.a, count(*) from foo t1, foo t2 group by t1.a;\n>> I get\n>> a|count\n>> -+-----\n>> | 0\n>> (1 row)\n>> instead of zero rows.\n>\n>It's not a bug, it's a feature ... or at least there are some around\n>here who claim that the behavior is OK. I think they're wrong, but\n>if you want it changed you'll need to cite chapter and verse from the\n>SQL92 standard, not just assert that Informix does it differently.\n\nI've now checked Dec Rdb, SQL/Server, and MS-Access - and they return 0 rows. Add this to Informix, and one begins to wonder if there are any that match the Postgres behaviour?\n\nAny idea where I can find a copy of the SQL92 standard on the net?\n\n\n>You'll find several past discussions of this point in the pgsql-hackers\n>archives, and they all seem to have ended inconclusively.\n\nI had a quick look at discussions involving informix, but could not find anything. Can you give a little more information about the past discussions, and specifically, what the reasons for preserving this behaviour were?\n\n\n>> is it just a buglet that can be fixed fairly easily?\n>\n>I think it would not be hard to fix, if we have a consensus that the\n>behavior should change.\n>\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 23 Jul 1999 11:39:42 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Phantom row from aggregate in self-join in 6.5 "
},
{
"msg_contents": "At 11:39 AM 7/23/99 +1000, Philip Warner wrote:\n\n>I've now checked Dec Rdb, SQL/Server, and MS-Access - and they return 0\nrows. Add this to Informix, and one begins to wonder if there are any that\nmatch the Postgres behaviour?\n\n>Any idea where I can find a copy of the SQL92 standard on the net?\n\nI'd like an answer to this, too :)\n\nIt may be that you've stumbled into an area the standard's either\nleft \"implementation-dependent\", \"undefined\", or simply forgotten\nor unthought-of. (can you tell I've been drafted into ANSI/ISO\nstandards efforts in the past for Pascal and Modula-2?)\n\nStill, I must say that a row returning \"0\" in response to a \ncount(*) isn't at all suprising, I guess it's a matter of \nwhether or not the count(*) or the specific column being\nextracted determines the behavior.\n\n>>You'll find several past discussions of this point in the pgsql-hackers\n>>archives, and they all seem to have ended inconclusively.\n\n>I had a quick look at discussions involving informix, but could not find\nanything. Can you give a little more information about the past\ndiscussions, and specifically, what the reasons for preserving this\nbehaviour were?\n\nFirst, I wouldn't trust Access to be much of an SQL standards judge.\nIf nothing else, MS's collaboration with Sybase (SQL/Server) might\nperhaps color MS's view of what the standard sez. Not to mention \nthe poaching of parser/semantic code, etc...\n\nAnd doesn't DEC Rdb have some genealogical relationship to SQL/Server?\n(I could be WAY off base here)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Thu, 22 Jul 1999 18:57:54 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Phantom row from aggregate in self-join in 6.5 "
},
{
"msg_contents": "At 18:57 22/07/99 -0700, you wrote:\n>At 11:39 AM 7/23/99 +1000, Philip Warner wrote:\n>\n>>I've now checked Dec Rdb, SQL/Server, and MS-Access - and they return 0\n>rows. Add this to Informix, and one begins to wonder if there are any that\n>match the Postgres behaviour?\n>\n>>Any idea where I can find a copy of the SQL92 standard on the net?\n>\n>I'd like an answer to this, too :)\n\nI have found a US based organization that sell 38MB file for $220...I guess I'll go to a library.\n\n>It may be that you've stumbled into an area the standard's either\n>left \"implementation-dependent\", \"undefined\", or simply forgotten\n>or unthought-of. (can you tell I've been drafted into ANSI/ISO\n>standards efforts in the past for Pascal and Modula-2?)\n\nIf that's the case, then the example below seems to produce an inconsistency: IMO, changing the columns selected should not change the number of rows returned.\n\n>Still, I must say that a row returning \"0\" in response to a \n>count(*) isn't at all suprising, I guess it's a matter of \n>whether or not the count(*) or the specific column being\n>extracted determines the behavior.\n\nCount returning 0 is good, the problem is that:\n\n select t1.a from foo t1, foo t2 group by t1.a;\n ^\n +--- No count(*)\n\nreturns 0 rows (fine), but that \n\n select t1.a, count(*) from foo t1, foo t2 group by t1.a;\n\nreturns 1 row, which is weird.\n\n\n>\n>First, I wouldn't trust Access to be much of an SQL standards judge.\n>If nothing else, MS's collaboration with Sybase (SQL/Server) might\n>perhaps color MS's view of what the standard sez. Not to mention \n>the poaching of parser/semantic code, etc...\n\nI agree, but it all adds a little weight to the argument - maybe?\n\n\n>And doesn't DEC Rdb have some genealogical relationship to SQL/Server?\n>(I could be WAY off base here)\n\nI don't think so. RDB was at version 3 in 1986 - that's when I started using it. It has had AFAICT a totally separate development stream from MS/Sybase etc, at least since that time, and almost certainly from its genesis. It was purchsed by Oracle a year or two ago, but it still largely the same product. If anything, Oracle have improved it a little.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 23 Jul 1999 12:23:41 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Phantom row from aggregate in self-join in 6.5 "
},
{
"msg_contents": "Philip Warner wrote:\n> \n> At 18:57 22/07/99 -0700, you wrote:\n> >At 11:39 AM 7/23/99 +1000, Philip Warner wrote:\n> >\n> >>I've now checked Dec Rdb, SQL/Server, and MS-Access - and they return 0\n> >rows. Add this to Informix, and one begins to wonder if there are any that\n> >match the Postgres behaviour?\n> >\n> >>Any idea where I can find a copy of the SQL92 standard on the net?\n> >\n> >I'd like an answer to this, too :)\n> \n> I have found a US based organization that sell 38MB file for $220...I guess I'll go to a library.\n\nI have \"ISO and ANSI SQL3 Working Draft-August 12, 1993\", 3M file.\nAny one intrested?\n\nVadim\n",
"msg_date": "Fri, 23 Jul 1999 10:40:58 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Phantom row from aggregate in self-join in 6.5"
},
{
"msg_contents": "At 10:40 23/07/99 +0800, you wrote:\n>\n>I have \"ISO and ANSI SQL3 Working Draft-August 12, 1993\", 3M file.\n>Any one intrested?\n>\n\nVery! E-mail it. Or put it on the PG site, or tell me how to FTP it...\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 23 Jul 1999 12:45:12 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Phantom row from aggregate in self-join in 6.5"
},
{
"msg_contents": "Philip Warner wrote:\n> \n> At 10:40 23/07/99 +0800, you wrote:\n> >\n> >I have \"ISO and ANSI SQL3 Working Draft-August 12, 1993\", 3M file.\n> >Any one intrested?\n> >\n> \n> Very! E-mail it. Or put it on the PG site, or tell me how to FTP it...\n\nI don't remember where I got it and what are copyrights, so\nI'll e-mail it to you, and anyone, in private mail.\n\nVadim\n",
"msg_date": "Fri, 23 Jul 1999 10:58:17 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Phantom row from aggregate in self-join in 6.5"
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> Still, I must say that a row returning \"0\" in response to a \n> count(*) isn't at all suprising, I guess it's a matter of \n> whether or not the count(*) or the specific column being\n> extracted determines the behavior.\n\nNeither, it's GROUP BY that creates the issue.\n\nIf you do an ungrouped query with aggregates, say\n\n\tSELECT count(*) FROM table WHERE someCondition;\n\nyou will get one and only one row produced, with default values for\nthe aggregates if there are no input rows (ie, either an empty table\nto start with, or nothing gets by the WHERE). Everybody seems\nto be happy with this.\n\nThe question is what happens when GROUP BY enters the picture.\nThere is a faction that thinks that if there are no input rows\nthen you should still get one default row out. That makes no\nsense to me; it seems to me you should get one aggregated row per\ngroup if you have aggregates with GROUP BY, and if there are\nno input rows then there are no groups. But I have not burrowed\ninto the SQL standard to try to develop a bulletproof argument\nfor that position.\n\n>>> You'll find several past discussions of this point in the pgsql-hackers\n>>> archives, and they all seem to have ended inconclusively.\n\n>> I had a quick look at discussions involving informix, but could not find\n> anything.\n\nInformix is not the issue. Look for \"GROUP BY\" and aggregates.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Jul 1999 23:27:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Phantom row from aggregate in self-join in 6.5 "
},
{
"msg_contents": "At 20:52 22/07/99 -0400, Tom Lane wrote:\n>Malcolm Beattie <[email protected]> writes:\n>> Using 6.5 (via Thomas Lockhart's Linux RPM build of Jul 2), I get a\n>> phantom row when doing the following:\n>> create table foo (a int);\n>> select t1.a, count(*) from foo t1, foo t2 group by t1.a;\n>> I get\n>> a|count\n>> -+-----\n>> | 0\n>> (1 row)\n>> instead of zero rows.\n>\n\n>if you want it changed you'll need to cite chapter and verse from the\n>SQL92 standard, not just assert that Informix does it differently.\n\nSadly, I only have access to a 1993 draft standard, but the following is from section 7.10:\n\n \"The result of the <group by clause> is a partitioning of T into\n a set of groups. The set is the minimum number of groups such\n that, for each grouping column of each group of more than one\n row, no two values of that grouping column are distinct.\"\n\n>From my reading of the standad, 'T' is the result of the select statement prior to being grouped. It would seem that if T contains no rows, then \"the minimum number of groups\" would have to be zero.\n\nOther references, such as:\n\n 2) Let CR be the <column reference> with <column name> CN identi-\n fying the grouping column. Every row of a given group contains\n equal values of CN. When a <search condition> or <value expres-\n sion> is applied to a group, CR is a reference to the value of\n CN.\n\n (General Rules, Section 7.10)\n\nWould seem to indicate that any grouped result row must be supported by underlying rows on the ungrouped result set.\n\nFinally, using the above example:\n\n>> create table foo (a int);\n>> select t1.a, count(*) from foo t1, foo t2 group by t1.a;\n>> I get\n>> a|count\n>> -+-----\n>> | 0\n>> (1 row)\n\nthe values returned in the column 'a' NEVER appears in the source table. Is there anyone out there who believes this is NOT a problem?\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 23 Jul 1999 13:31:08 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Phantom row from aggregate in self-join in 6.5 "
},
{
"msg_contents": "> > create table foo (a int);\n> > select t1.a, count(*) from foo t1, foo t2 group by t1.a;\n> > I get\n> > a|count\n> > -+-----\n> > | 0\n> > (1 row)\n> > instead of zero rows.\n> It's not a bug, it's a feature ... or at least there are some around\n> here who claim that the behavior is OK. I think they're wrong, but\n> if you want it changed you'll need to cite chapter and verse from the\n> SQL92 standard, not just assert that Informix does it differently.\n\nI don't recall which way I argued before (in fact, I don't recall this\nparticular example), but I do remember arguing (with righteous\nconviction) that the query\n\n select count(*) from foo;\n\nshould return a single row containing a zero value. Did we infer from\nthat some behavior for \"group by\" (I can't recall any)? istm, at least\ntoday, that the behavior for the group-by is wrong, but we'd better\nnot change the behavior of my example query...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 23 Jul 1999 03:33:36 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Phantom row from aggregate in self-join in 6.5"
},
{
"msg_contents": "Philip Warner wrote:\n\n> >>Any idea where I can find a copy of the SQL92 standard on the net?\n\n> >I'd like an answer to this, too :)\n\n> I have found a US based organization that sell 38MB file for $220...I guess I'll go to a library.\n\nGo to http://www.contrib.andrew.cmu.edu/~shadow/sql.html for a good, if\na little out of date, SQL page that lists several reference works.\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Fri, 23 Jul 1999 10:14:10 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Phantom row from aggregate in self-join in 6.5"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I don't recall which way I argued before (in fact, I don't recall this\n> particular example), but I do remember arguing (with righteous\n> conviction) that the query\n> select count(*) from foo;\n> should return a single row containing a zero value.\n\nNo argument about that one. It's the GROUP BY case that's at issue.\n\n> Did we infer from\n> that some behavior for \"group by\" (I can't recall any)? istm, at least\n> today, that the behavior for the group-by is wrong,\n\nIIRC, you were the main advocate of the position that the code's\nexisting behavior is correct. Does that mean I can go change it? ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Jul 1999 10:48:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Phantom row from aggregate in self-join in 6.5 "
},
{
"msg_contents": "> IIRC, you were the main advocate of the position that the code's\n> existing behavior is correct. Does that mean I can go change it? ;-)\n\nYes, after you slap me around a bit for being so wrong. Do you\nremember when we were discussing it? I want to go back and see why I\nthought this was right. I'm guessing that the example was not phrased\nin exactly this way, and that there may be some other behavior we need\nto maintain. (Otherwise, I might have used up my \"one wrong idea per\nyear\" ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 23 Jul 1999 16:30:15 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Phantom row from aggregate in self-join in 6.5"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> IIRC, you were the main advocate of the position that the code's\n>> existing behavior is correct. Does that mean I can go change it? ;-)\n\n> Yes, after you slap me around a bit for being so wrong. Do you\n> remember when we were discussing it? I want to go back and see why I\n> thought this was right. I'm guessing that the example was not phrased\n> in exactly this way, and that there may be some other behavior we need\n> to maintain. (Otherwise, I might have used up my \"one wrong idea per\n> year\" ;)\n\nActually, it may be my recollection that's wrong. The only discussion\nof the point that I can find right now is the thread \"SUM() and GROUP\nBY\" from around 1/12/99 in pghackers, and it seems to be mostly focused\non arguments about whether you should get NULL or 0 from a no-input\nSUM...\n\nI would've sworn I remember a couple of other related threads in the\npast year or so, but I cannot find them now.\n\nAnyway, unless someone speaks up in favor of the way the code currently\nworks, I will see about changing the results for the GROUP-BY-with-no-\ninput-rows case. I got a few other things to do first though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Jul 1999 18:44:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Phantom row from aggregate in self-join in 6.5 "
},
{
"msg_contents": "Thus spake Tom Lane\n> IIRC, you were the main advocate of the position that the code's\n> existing behavior is correct. Does that mean I can go change it? ;-)\n\nI vote (again) for yes. It's so annoying having to add code to test\neach returned value against 0 just in case there is only one returned\nvalue that needs to be checked.\n\n\"SELECT COUNT(*) FROM table\" should always return one row but \"SELECT\nCOUNT(*) FROM table GROUP BY somethng\" returns a variable number of\nrows anyway so what's the harm in extending \"one or more\" to \"zero\nor more\" returned rows? We have to iterate through whatever the\ncount is anyway in our programs.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 24 Jul 1999 08:00:12 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Phantom row from aggregate in self-join in 6.5"
}
] |
[
{
"msg_contents": "subscribe\n\n",
"msg_date": "Thu, 22 Jul 1999 17:59:06 +0200",
"msg_from": "\"F.J.Cuberos\" <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
}
] |
[
{
"msg_contents": "I just tried to run initdb with the latest CVS snapshot but initdb\nsegfaults, i.e. some programs inside do:\n\nVacuuming template1\nSegmentation fault\nSegmentation fault\nCreating public pg_user view\nSegmentation fault\nSegmentation fault\nSegmentation fault\nCreating view pg_rules\nSegmentation fault\nSegmentation fault\nCreating view pg_views\nSegmentation fault\nSegmentation fault\nCreating view pg_tables\nSegmentation fault\nSegmentation fault\nCreating view pg_indexes\nSegmentation fault\nSegmentation fault\nLoading pg_description\nSegmentation fault\nSegmentation fault\nSegmentation fault\n\nIs this a known problem? Or do I have a local problem on my machine?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Thu, 22 Jul 1999 18:30:37 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Seg fault in initdb"
},
{
"msg_contents": "> I just tried to run initdb with the latest CVS snapshot but initdb\n> segfaults, i.e. some programs inside do:\n> \n> Vacuuming template1\n> Segmentation fault\n> Segmentation fault\n> Creating public pg_user view\n> Segmentation fault\n> Segmentation fault\n> Segmentation fault\n> Creating view pg_rules\n> Segmentation fault\n> Segmentation fault\n> Creating view pg_views\n> Segmentation fault\n> Segmentation fault\n> Creating view pg_tables\n> Segmentation fault\n> Segmentation fault\n> Creating view pg_indexes\n> Segmentation fault\n> Segmentation fault\n> Loading pg_description\n> Segmentation fault\n> Segmentation fault\n> Segmentation fault\n> \n> Is this a known problem? Or do I have a local problem on my machine?\n\nI got a in-progress patch in the tree a few days ago, but reversed it\nout in a few minutes. You probably have a copy from then. cvs update\nshould fix it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Jul 1999 15:18:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Seg fault in initdb"
},
{
"msg_contents": "On Thu, Jul 22, 1999 at 03:18:29PM -0400, Bruce Momjian wrote:\n> > I just tried to run initdb with the latest CVS snapshot but initdb\n> > segfaults, i.e. some programs inside do:\n> > ... \n> \n> I got a in-progress patch in the tree a few days ago, but reversed it\n> out in a few minutes. You probably have a copy from then. cvs update\n> should fix it.\n\nYes, initdb works fine now.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Fri, 23 Jul 1999 12:44:14 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Seg fault in initdb"
},
{
"msg_contents": "Michael Meskes <[email protected]> writes:\n> On Thu, Jul 22, 1999 at 03:18:29PM -0400, Bruce Momjian wrote:\n>>>> I just tried to run initdb with the latest CVS snapshot but initdb\n>>>> segfaults, i.e. some programs inside do:\n>> \n>> I got a in-progress patch in the tree a few days ago, but reversed it\n>> out in a few minutes. You probably have a copy from then. cvs update\n>> should fix it.\n\n> Yes, initdb works fine now.\n\nDepartment of blame where blame is due: it was my bug not Bruce's ...\nI fixed it about 11pm EDT last night.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Jul 1999 09:46:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Seg fault in initdb "
}
] |
[
{
"msg_contents": "\nMorning all...\n\n\tJust had someone inquire as to whether any of the 'Fortune 500'\ncompanies are using PostgreSQL ...\n\n\tDon't know the answer myself...anyone out there associated with\none willing to speak out?\n\n\t\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 22 Jul 1999 15:32:42 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fortune 500 ..."
}
] |
[
{
"msg_contents": "\n*** dynloader.c.old\tThu Jul 22 16:29:46 1999\n--- dynloader.c\tThu Jul 22 16:30:23 1999\n***************\n*** 1,6 ****\n /* Dummy file used for nothing at this point\n *\n- <<<<<<< linux.c\n * see sunos4.h\n =======\n * dynloader.c\n--- 1,5 ----\n***************\n*** 16,25 ****\n *\t $Header: /usr/local/cvsroot/pgsql/src/backend/port/dynloader/linux.c,v 1.15 1999/07/17 20:17:31 momjian Exp $\n *\n *-------------------------------------------------------------------------\n- >>>>>>> 1.15\n */\n- <<<<<<< linux.c\n- =======\n \n #include \"postgres.h\"\n #ifdef HAVE_DLD_H\n--- 15,21 ----\n***************\n*** 114,117 ****\n }\n \n #endif\n- >>>>>>> 1.15\n--- 110,112 ----\n\n\n=== END OF PATCH\n-- \nMark Hollomon\[email protected]\n",
"msg_date": "Thu, 22 Jul 1999 16:34:20 -0400",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": true,
"msg_subject": "oopsy in dynloader.c"
},
{
"msg_contents": "I can not find this problem in the current source tree. I believe you\nare seeing a merge problem with cvs. Remove the file and reupdate cvs.\n\n> \n> *** dynloader.c.old\tThu Jul 22 16:29:46 1999\n> --- dynloader.c\tThu Jul 22 16:30:23 1999\n> ***************\n> *** 1,6 ****\n> /* Dummy file used for nothing at this point\n> *\n> - <<<<<<< linux.c\n> * see sunos4.h\n> =======\n> * dynloader.c\n> --- 1,5 ----\n> ***************\n> *** 16,25 ****\n> *\t $Header: /usr/local/cvsroot/pgsql/src/backend/port/dynloader/linux.c,v 1.15 1999/07/17 20:17:31 momjian Exp $\n> *\n> *-------------------------------------------------------------------------\n> - >>>>>>> 1.15\n> */\n> - <<<<<<< linux.c\n> - =======\n> \n> #include \"postgres.h\"\n> #ifdef HAVE_DLD_H\n> --- 15,21 ----\n> ***************\n> *** 114,117 ****\n> }\n> \n> #endif\n> - >>>>>>> 1.15\n> --- 110,112 ----\n> \n> \n> === END OF PATCH\n> -- \n> Mark Hollomon\n> [email protected]\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Jul 1999 22:37:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] oopsy in dynloader.c"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> I can not find this problem in the current source tree. I believe you\n> are seeing a merge problem with cvs. Remove the file and reupdate cvs.\n\n(Slaps forehead) Of course. I changed that file to fix a problem with\ndynamic loading and the new plperl.\n\nSorry for the noise.\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Fri, 23 Jul 1999 08:04:10 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] oopsy in dynloader.c"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I can not find this problem in the current source tree. I believe you\n> are seeing a merge problem with cvs. Remove the file and reupdate cvs.\n\nIn fact, that sort of breakage is exactly what cvs will do when it finds\na merge conflict (which it simple-mindedly defines as a local change that\nfalls in the same line range as a diff it's trying to apply from the cvs\nmaster file). It will warn you that the merge failed --- so you should\nalways review the output from a cvs update run, looking for conflict\nmessages.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Jul 1999 09:42:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] oopsy in dynloader.c "
}
] |
[
{
"msg_contents": "Hello PostgreSQL hackers - \nI'm considering trying to implement a new feature that would have some\ninteresting possible uses. I'm wondering how difficult it would be\nto implement a table who's content does not come out of the files on\ndisk, but instead is accessed remotely from another database? Absolutely\ninsane, or just mad-scientist insane? I need this to solve a subset of\nthe class of problems that Cohera (Stonebraker's latest commercial db)\nis aimed at, but I don't need all the functionality provided by that\n(nor the 4-5 figure price tag!)\n\nAn initial proof of concept could be done via 'ON SELECT' rules, if 'C'\nfunctions could return result sets, instead of just a single complex type\n(row).\n\nA more complete solution would need to let the optimizer know about\nremote tables, and allow for things like sending of sub-sections of\nqueries where all the tables involved are on the same remote server, etc.\n\nThe problem I'm hoping to solve involves merging two adminstratively\nindependent databases that contain similar, but not identical, data. This\nwould allow queries to run against both backends, but the appplication\nwould only see one data source.\n\nIt occurs to me that this would be useful for people who would like\nto access more than one db on the same postgresql server from a single\nfrontend, or for distributing a db across a Beowulf cluster, perhaps.\n\nSo, am I completely nuts, or is this a possibility?\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Thu, 22 Jul 1999 15:57:45 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RFC: remote tables feature"
},
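A sketch of the rule-based proof of concept Ross describes, assuming the missing piece existed: a hypothetical set-returning C function remote_select() usable as a row source, which is exactly the capability he notes current 'C' functions lack. Table and column names here are made up for illustration:

CREATE TABLE remote_parts (partnumber text, price float8);
CREATE RULE remote_parts_sel AS ON SELECT TO remote_parts
    DO INSTEAD
    SELECT * FROM remote_select('otherhost', 'parts');
-- remote_select() is purely hypothetical; today a C function can return
-- only a single complex value, not a result set, so this does not run yet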
{
"msg_contents": "> So, am I completely nuts, or is this a possibility?\n\nI've thought about this a bit too. Seems like it would be fun to try.\nMy old Ingres installation had a separate distributed server running\nbetween the client and the actual table server, and perhaps the\nPostgres backend could be taught to do this too. Perhaps you could\nreplace the file manager (for a specific table) with something using\nthe SPI interface to query a remote table. Don't know if there would\nbe deadlock problems if you came into a table from two different\ndirections on the same query though...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 23 Jul 1999 04:21:02 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RFC: remote tables feature"
}
] |
[
{
"msg_contents": "\n> > select t1.a, count(*) from foo t1, foo t2 group by t1.a;\n> \n> Still, I must say that a row returning \"0\" in response to a \n> count(*) isn't at all suprising, I guess it's a matter of \n> whether or not the count(*) or the specific column being\n> extracted determines the behavior.\n> \nThe reason this should intuitively return no rows is the group by clause.\nThe group by is supposed to give 1 row per group. Since there is no\ngroup, there should be no rows returned.\n\nAndreas\n",
"msg_date": "Fri, 23 Jul 1999 10:17:27 +0200",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Phantom row from aggregate in self-join in 6.5 "
}
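A minimal demonstration of the point Andreas makes, assuming an empty table foo with a single int4 column:

CREATE TABLE foo (a int4);
-- no rows exist, so the self-join produces nothing and there are no groups
SELECT t1.a, count(*) FROM foo t1, foo t2 GROUP BY t1.a;
-- with GROUP BY present, zero qualifying rows means zero groups, so the
-- correct result is no rows at all, never a single row showing 0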
] |
[
{
"msg_contents": "subscribe [email protected]\n\n",
"msg_date": "Fri, 23 Jul 1999 11:11:54 +0200",
"msg_from": "\"F.J.Cuberos\" <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
}
] |
[
{
"msg_contents": "Currently, in postgreSQL, primary keys are created as a UNIQUE index on the\nfield(s) that form the primary key.\n\nThis means that there is no difference between explicitely declaring a\nPRIMARY KEY in your table definition or using the CREATE UNIQUE INDEX\ncommand.\nThere is one caveat to this, CREATE UNIQUE INDEX (at least in my PG 6.4.0)\nwill allow NULLs to be inserted in the indexed field (theoretically, all\nNULLs could be different) whereas declaring that field as a primary key in\nthe table definition will ensure that no NULLs can be inserted (because if\nthere are several NULLs, you cannot use the field to uniquely identify an\nentry).\n\nSo to have member_id as you primary key and ensure uniqueness of the\ncombination of firstname, lastname, adress, zipcode you get:\n\nCREATE TABLE \"member\" (\n\t\"member_id\" int4 DEFAULT nextval ( 'lid_id_seq' ) UNIQUE NOT NULL,\n\t\"firstname\" text, -- NOT NULL? you must decide\n\t\"lastnaam\" text, -- Ditto (typo? should it be lastname?)\n\t\"adress\" text, -- Ditto (typo? should it be address?)\n\t\"zipcoder\" character(4), -- Ditto\n\t\"telephone\" text,\n\t\"email\" text,\n\t\"registration_date\" date DEFAULT current_date NOT NULL,\n\t\"student_id\" text,\n\t\"dep_id\" text,\n\t\"password\" text NOT NULL,\n\t\"validated\" bool DEFAULT 'f' NOT NULL,\n\tPRIMARY KEY (member_id)\n);\n\nAnd then you create the unique index on the other fields:\n\nCREATE UNIQUE INDEX member_fn_ln_ad_zc_idx ON member (firstname, lastnaam,\nadress, zipcode);\n\nYou can get more info by typing \\h create index and \\h create table in psql.\n\nRegards,\n\nStuart.\n\n>The idea of the table below is to keep track of members. They have to register\n>themself so I want to prevent them from subscribing twice. That's why I used a\n>primary key on the fields firstname, lastname, adres, zipcode. But I would\n>really want member_id to be my primary key as the table is referenced by other\n>tables. Can I make firstname, lastname... a unique value in another way?\n>Like constraint UNIQUE (firstname, lastname,adres,zipcode)\n>I just made that last one up but is it possible to enforce the uniqueness of a\n>couple of fields together?\n>\n>CREATE TABLE \"member\" (\n>\t\"member_id\" int4 DEFAULT nextval ( 'lid_id_seq' ) UNIQUE NOT NULL,\n>\t\"firstname\" text,\n>\t\"lastnaam\" text,\n>\t\"adress\" text,\n>\t\"zipcoder\" character(4),\n>\t\"telephone\" text,\n>\t\"email\" text,\n>\t\"registration_date\" date DEFAULT current_date NOT NULL,\n>\t\"student_id\" text,\n>\t\"dep_id\" text,\n>\t\"password\" text NOT NULL,\n>\t\"validated\" bool DEFAULT 'f' NOT NULL,\n>\tPRIMARY KEY (firstname, lastname, adres, zipcode));\n+--------------------------+--------------------------------------+\n| Stuart C. G. Rison | Ludwig Institute for Cancer Research |\n+--------------------------+ 91 Riding House Street |\n| N.B. new phone code!! | London, W1P 8BT |\n| Tel. +44 (0)207 878 4041 | UNITED KINGDOM |\n| Fax. +44 (0)207 878 4040 | [email protected] |\n+--------------------------+--------------------------------------+\n",
"msg_date": "Fri, 23 Jul 1999 10:12:52 +0100",
"msg_from": "Stuart Rison <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] database design SQL prob."
},
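A short usage sketch of what the unique index above enforces, assuming the zipcode/zipcoder naming mismatch in the quoted schema has been resolved first (the sample values are made up):

INSERT INTO member (firstname, lastnaam, adress, zipcode, password)
    VALUES ('Jan', 'Jansen', 'Somestreet 1', '1234', 'secret');
-- an identical second registration should now fail with a duplicate-key
-- error on member_fn_ln_ad_zc_idx instead of creating a double entry:
INSERT INTO member (firstname, lastnaam, adress, zipcode, password)
    VALUES ('Jan', 'Jansen', 'Somestreet 1', '1234', 'secret');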
{
"msg_contents": "Thus spake Stuart Rison\n> Currently, in postgreSQL, primary keys are created as a UNIQUE index on the\n> field(s) that form the primary key.\n> \n> This means that there is no difference between explicitely declaring a\n> PRIMARY KEY in your table definition or using the CREATE UNIQUE INDEX\n> command.\n\nNot completely accurate. Create some tables using both methods then\nrun the following query.\n\nSELECT pg_class.relname, pg_attribute.attname\n FROM pg_class, pg_attribute, pg_index\n WHERE pg_class.oid = pg_attribute.attrelid AND\n pg_class.oid = pg_index.indrelid AND\n pg_index.indkey[0] = pg_attribute.attnum AND\n pg_index.indisprimary = 't';\n\nThis will give you a list of the primary keys if you declare them as\nprimary at creation time. The ones created with just a unique index\nwon't be displayed.\n\nWhile I am on the subject, anyone know how to enhance the above query\nto display all the fields when a complex primary key is defined? The\nabove assumes that all primary keys are one field per table.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 24 Jul 1999 07:49:38 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] database design SQL prob."
},
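One way D'Arcy's query might be extended for multi-column keys, as a sketch only: it assumes pg_index.indkey can be subscripted directly (it holds a fixed-size list of attribute numbers) and simply tests the first four positions, so keys wider than that would still be truncated:

SELECT pg_class.relname, pg_attribute.attname
  FROM pg_class, pg_attribute, pg_index
 WHERE pg_class.oid = pg_attribute.attrelid AND
       pg_class.oid = pg_index.indrelid AND
       (pg_attribute.attnum = pg_index.indkey[0] OR
        pg_attribute.attnum = pg_index.indkey[1] OR
        pg_attribute.attnum = pg_index.indkey[2] OR
        pg_attribute.attnum = pg_index.indkey[3]) AND
       pg_index.indisprimary = 't';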
{
"msg_contents": "At 12:12 +0300 on 23/07/1999, Stuart Rison wrote:\n\n\n> This means that there is no difference between explicitely declaring a\n> PRIMARY KEY in your table definition or using the CREATE UNIQUE INDEX\n> command.\n> There is one caveat to this, CREATE UNIQUE INDEX (at least in my PG 6.4.0)\n> will allow NULLs to be inserted in the indexed field (theoretically, all\n> NULLs could be different) whereas declaring that field as a primary key in\n> the table definition will ensure that no NULLs can be inserted (because if\n> there are several NULLs, you cannot use the field to uniquely identify an\n> entry).\n\nTo be more exact, primary keys are defined as NOT NULL implicitly. If you\nwant to emulate primary keys, you have to define the key as NOT NULL as\nwell as define a unique index on it.\n\nBut for the original question: define the primary key on the id field, and\na unique index on the combination of fields that you want to be unique. Now\neverything makes sense. The field used for reference, which is by\ndefinition the primary key of the table, is indeed defined as primary key.\nThe combination of fields which is not used for references, but which needs\nto be unique, is defined as unique.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n",
"msg_date": "Sun, 25 Jul 1999 15:40:48 +0300",
"msg_from": "Herouth Maoz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] database design SQL prob."
},
{
"msg_contents": "\"D'Arcy J.M. Cain\" wrote:\n> \n> Not completely accurate. Create some tables using both methods then\n> run the following query.\n> \n> SELECT pg_class.relname, pg_attribute.attname\n> FROM pg_class, pg_attribute, pg_index\n> WHERE pg_class.oid = pg_attribute.attrelid AND\n> pg_class.oid = pg_index.indrelid AND\n> pg_index.indkey[0] = pg_attribute.attnum AND\n> pg_index.indisprimary = 't';\n> \n> This will give you a list of the primary keys if you declare them as\n> primary at creation time. The ones created with just a unique index\n> won't be displayed.\n> \n> While I am on the subject, anyone know how to enhance the above query\n> to display all the fields when a complex primary key is defined? The\n> above assumes that all primary keys are one field per table.\n> \n\nHowever, if you create table with primary key, for example \n\ncreate table tab(\nid int4 primary key,\n...\n);\n\nand make dump of database, it will write in dump file\n\ncreate table tab(\nid int4,\n...\n);\ncreate unique index \"tab_pkey\" on \"tab\" using btree (\"id\");\n\nSo, after dump / restore difference between primary key and unique index\ndisappears.\nIs it right?\n\nSincerely yours, Yury.\ndon.web-page.net, ICQ 11831432\n",
"msg_date": "Mon, 26 Jul 1999 10:16:54 +0600",
"msg_from": "Don Yury <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] database design SQL prob."
},
{
"msg_contents": "Thus spake Don Yury\n> > Not completely accurate. Create some tables using both methods then\n> However, if you create table with primary key, for example \n> \n> create table tab(\n> id int4 primary key,\n> ...\n> );\n> \n> and make dump of database, it will write in dump file\n> \n> create table tab(\n> id int4,\n> ...\n> );\n> create unique index \"tab_pkey\" on \"tab\" using btree (\"id\");\n\nSo it does. I thought that this was fixed in 6.5 but it seems not. Is\nthis on the TODO list?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 26 Jul 1999 06:30:57 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] database design SQL prob."
},
{
"msg_contents": "> Thus spake Don Yury\n> > > Not completely accurate. Create some tables using both methods then\n> > However, if you create table with primary key, for example \n> > \n> > create table tab(\n> > id int4 primary key,\n> > ...\n> > );\n> > \n> > and make dump of database, it will write in dump file\n> > \n> > create table tab(\n> > id int4,\n> > ...\n> > );\n> > create unique index \"tab_pkey\" on \"tab\" using btree (\"id\");\n> \n> So it does. I thought that this was fixed in 6.5 but it seems not. Is\n> this on the TODO list?\n\nNo. Please give me a line to add.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 26 Jul 1999 09:37:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] database design SQL prob."
}
] |
[
{
"msg_contents": "A very useful feature in some database systems is the ability to restrict who can run certain external or stored procedures, and to grant extra access rights to users when they do run those procedures.\n\nThe usefulness of this may not be imediately obvious, but it is a very powerful feature, especially for preserving integrity and security:\n\nSimple uses include:\n\n1. Make all tables 'read-only', then all updates must happen through procedures. The procedures can make data-based security checks, and can ensure integrity.\n\n2. Make some tables unreadable, then data can only be retrieved via procedures. Once again, data-based security can be achieved.\n\nThe way this is implemented it to specify that when a procedure is run by *any* user, the procedure runs with the access rights of another user/group/entity. \n\nProcedures must also have security associated with them: it is necessary to grant 'execute' access on procedures to the users who need to execute them.\n\nSince this *seems* like it is not likely to get too far into the internals of the optimizer, and seems to be an area that is not under active development by others, and since I am looking for a way to contribute to development, I would be interested in comments that:\n\n1. Tell me if this is much bigger than I think it is.\n2. Tell me if it sounds useful.\n3. Is a good learning excercise.\n4. If it is stepping on other people's toes.\n5. How to do it 8-}\n\nI look forward to comments and suggestions...I think.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 23 Jul 1999 21:10:35 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "RFC: Security and Impersonation"
},
{
"msg_contents": "\nCan't we do this already with views?\n\nOn Fri, 23 Jul 1999, Philip Warner wrote:\n\n> A very useful feature in some database systems is the ability to restrict who can run certain external or stored procedures, and to grant extra access rights to users when they do run those procedures.\n> \n> The usefulness of this may not be imediately obvious, but it is a very powerful feature, especially for preserving integrity and security:\n> \n> Simple uses include:\n> \n> 1. Make all tables 'read-only', then all updates must happen through procedures. The procedures can make data-based security checks, and can ensure integrity.\n> \n> 2. Make some tables unreadable, then data can only be retrieved via procedures. Once again, data-based security can be achieved.\n> \n> The way this is implemented it to specify that when a procedure is run by *any* user, the procedure runs with the access rights of another user/group/entity. \n> \n> Procedures must also have security associated with them: it is necessary to grant 'execute' access on procedures to the users who need to execute them.\n> \n> Since this *seems* like it is not likely to get too far into the internals of the optimizer, and seems to be an area that is not under active development by others, and since I am looking for a way to contribute to development, I would be interested in comments that:\n> \n> 1. Tell me if this is much bigger than I think it is.\n> 2. Tell me if it sounds useful.\n> 3. Is a good learning excercise.\n> 4. If it is stepping on other people's toes.\n> 5. How to do it 8-}\n> \n> I look forward to comments and suggestions...I think.\n> \n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.C.N. 008 659 498) | /(@) ______---_\n> Tel: +61-03-5367 7422 | _________ \\\n> Fax: +61-03-5367 7430 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 23 Jul 1999 08:54:58 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RFC: Security and Impersonation"
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> A very useful feature in some database systems is the ability to\n> restrict who can run certain external or stored procedures, and to\n> grant extra access rights to users when they do run those procedures.\n\nWe have some of this, I think, from ACLs on tables and views. But\nas far as I know there is not a notion of a \"suid view\", one with\ndifferent privileges from its caller. It sounds like a good thing\nto work on. Is there any standard in the area?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Jul 1999 10:51:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RFC: Security and Impersonation "
},
{
"msg_contents": "At 08:54 23/07/99 -0300, The Hermit Hacker wrote:\n>\n>Can't we do this already with views?\n>\n\nNot really. A combination of Views, Triggers and Rules will almost do it, but at the expense of being harder to maintain and more difficult to understand. It may be worth giving a real-world example:\n\nCreate Table Access_Codes(ACCESS_CODE Char(4), DESCRIPTION Varchar(62));\nInsert into ACCESS_CODES Values('SUPR','User may perform any action');\n...+various others\n\nCreate Table USER_ACCESS(USER_ID Int4, ACCESS_CODE Char(4));\n\nCreate Table USERS(USER_ID Int4, USERNAME Varchar(30));\n\nCreate Table GROUPS(GROUP_ID Int4, GROUP_NAME Varchar(30));\n\nCreate Table USER_GROUPS(GROUP_ID Int4, USER_ID Int4);\nInsert Into...etc\n\nThe idea is to have 'ACCESS_CODES' function like priviledges - possibly overriding group membership, and have groups function a lot like unix groups.\n\nNext define the things you want to control (in my case documents stored as blobs):\n\nCreate Table DOCUMENTS(DOCUMENT_ID Int4, DOCUMENT_SOURCE <Blob>, ....) etc.\n\nCreate Table DOCUMENT_GROUPS(DOCUMENT_ID Int4, GROUP_ID Int4);\n\nThe idea is that documents can be members of groups, and that a user must be a member of a group before they can change the document.\n\nNext write the 'update' procedure:\n\nCREATE FUNCTION Update_Document (int4,...<args>...) \n RETURNS Varchar(255) AS '\nDeclare\n DocID Alias for $1;\n UserID int4;\n Msg\tVarchar(255);\n isOK\tint4;\n...declare some other stuff..\nBegin\n Set :isOK = 1;\n Set Msg = 'OK';\n Set UserID = (Select USER_ID From USERS Where USERNAME = CURRENT_USER;\n If not exists(Select * From USER_GROUPS UG, DOCUMENT_GROUPS DG Where\n UG.USER_ID = UserID\n\t\t\tAnd DG.GROUP_ID = UG.GROUP_ID\n And DG.DOCUMENT_ID = DocID) Then\n\n If Not Exists(Select * From USER_ACCESS Where USER_ID = UserID \n and ACCESS_CODE = 'SUPR') \n Then\n Set :isOK = False;\n Set :Msg = 'User has no access to document';\n End If;\n End If;\n\n If isOK == 1 Then\n <Do The Update>;\n End If;\n\n Return Msg;\n\nEnd;\n\nAnd finally, set the table protections:\n\nRevoke All On Table <All> from <All>;\nGrant All On Table <All> To SPECIAL_USER;\n\nGrant Execute on Function UPDATE_DOCUMENT To Public;\n\nSet Authorization On Function UPDATE_DOCUMENT To SPECIAL_USER;\n^\n|\n+-- This is the important bit.\n\n\nWhat we now have is a table that can only be updated according to a set of rules contained in one procedure, and which returns a useful error message when it fails. The rules for access can be as complex as you like, and this system does not preclude the use of triggers to enforce both integrity and further security.\n \nThe same could probably be achieved using rules and triggers for updates, but would not return a nice message on failure, and would, IMO, be less 'clear'.\n\nSorry for the length of the example, but I hope it puts things a little more clearly.\n\n>On Fri, 23 Jul 1999, Philip Warner wrote:\n>\n>> A very useful feature in some database systems is the ability to restrict who can run certain external or stored procedures, and to grant extra access rights to users when they do run those procedures.\n>> \n>> The usefulness of this may not be imediately obvious, but it is a very powerful feature, especially for preserving integrity and security:\n>> \n>> Simple uses include:\n>> \n>> 1. Make all tables 'read-only', then all updates must happen through procedures. The procedures can make data-based security checks, and can ensure integrity.\n>> \n>> 2. 
Make some tables unreadable, then data can only be retrieved via procedures. Once again, data-based security can be achieved.\n>> \n>> The way this is implemented it to specify that when a procedure is run by *any* user, the procedure runs with the access rights of another user/group/entity. \n>> \n>> Procedures must also have security associated with them: it is necessary to grant 'execute' access on procedures to the users who need to execute them.\n>> \n>> Since this *seems* like it is not likely to get too far into the internals of the optimizer, and seems to be an area that is not under active development by others, and since I am looking for a way to contribute to development, I would be interested in comments that:\n>> \n>> 1. Tell me if this is much bigger than I think it is.\n>> 2. Tell me if it sounds useful.\n>> 3. Is a good learning excercise.\n>> 4. If it is stepping on other people's toes.\n>> 5. How to do it 8-}\n>> \n>> I look forward to comments and suggestions...I think.\n>> \n>> \n>> \n>> ----------------------------------------------------------------\n>> Philip Warner | __---_____\n>> Albatross Consulting Pty. Ltd. |----/ - \\\n>> (A.C.N. 008 659 498) | /(@) ______---_\n>> Tel: +61-03-5367 7422 | _________ \\\n>> Fax: +61-03-5367 7430 | ___________ |\n>> Http://www.rhyme.com.au | / \\|\n>> | --________--\n>> PGP key available upon request, | /\n>> and from pgp5.ai.mit.edu:11371 |/\n>> \n>\n>Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n>Systems Administrator @ hub.org \n>primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n>\n>\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 24 Jul 1999 22:40:27 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] RFC: Security and Impersonation"
},
{
"msg_contents": "At 10:51 23/07/99 -0400, you wrote:\n>\n>We have some of this, I think, from ACLs on tables and views. But\n>as far as I know there is not a notion of a \"suid view\", one with\n>different privileges from its caller. It sounds like a good thing\n>to work on. Is there any standard in the area?\n>\n\nI don't know - I'll look into it. The only system I know that implements\nthis is Dec Rdb, and according to the manuals, is not part of standard SQL.\nThe way they do it is to define 'modules' with more than one procedure, and\nall procedures in the module can have an 'Authorization ID' set, which\nmeans that when the module is run, the access levels of that ID are used.\nMoreover, CURRENT_USER returns the Auth. ID, not the actual user, and they\ndefine SESSION_USER which returns the actual user.\n\nMy preference is for CURRENT_USER to *always* return the current user, and\nto define another name (AUTHORIZATION_USER?) to return the dominant Auth ID.\n\nI'll look through the SQL3 stuff, and see what I can find.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 24 Jul 1999 22:46:33 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] RFC: Security and Impersonation "
}
] |
[
{
"msg_contents": "(Note to hackers: Ole sent me a 1000-row test case off list.)\n\n> oletest=> explain select * from av_parts where partnumber = '123456';\n> NOTICE: QUERY PLAN:\n> \n> Index Scan using av_parts_partnumber_index on av_parts (cost=2.04 rows=1\n> width=124)\n> \n> EXPLAIN\n> oletest=> explain select * from av_parts where nsn = '123456';\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on av_parts (cost=48.00 rows=995 width=124)\n\nOK, I confirm seeing this behavior. I don't have time to dig into\nthe code right now, but will do so when I get a chance.\n\nIt looks like the highly skewed distribution of nsn values (what you\nsent me had 997 '' entries, only 3 non-empty strings) is confusing the\nselectivity estimation code somehow, such that the system thinks that\nthe query is going to match most of the rows. Notice it is estimating\n995 returned rows for the nsn select! Under these circumstances it will\nprefer a sequential scan, since the more-expensive-per-tuple index scan\ndoesn't look like it will be able to avoid reading most of the table.\nThat logic is OK, it's the 0.995 selectivity estimate that's wrong...\n\nExactly why the selectivity estimate is so ludicrous remains to\nbe seen, but I know that there are some bogosities in that code\n(search the pghackers archives for \"selectivity\" for more info).\nI am hoping to do some extensive revisions of the selectivity code\nfor 6.6 or 6.7. This particular problem might be easily fixable,\nor it might have to wait for the rewrite.\n\nThanks for the test case!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Jul 1999 12:03:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Index not used on simple select "
},
{
"msg_contents": "On Fri, 23 Jul 1999, Tom Lane wrote:\n> It looks like the highly skewed distribution of nsn values (what you\n> sent me had 997 '' entries, only 3 non-empty strings) is confusing the\n\nAs a note, it doesn't seem to matter wether the field has '' or NULL.\nEven after I do a update to set all all rows with '' to NULL, it still\ndoes the same thing.\n\nAlso, my full set of data is not quite so skewed. The nsn field has about\n450,000 non-empty rows in it.\n\nThanks,\nOle Gjerde\n\n\n",
"msg_date": "Fri, 23 Jul 1999 14:57:59 -0500 (CDT)",
"msg_from": "Ole Gjerde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index not used on simple select "
},
{
"msg_contents": "Hey,\nI was having problems with UPDATE, so I looked through the archives. Back\naround the 20th of May, there was a thread about update using all memory\n(thread: strange behavior of UPDATE).\n\nIt now looks that I am having that same problem on pg 6.5.1.\nBasically I tried running a simple query:\n\tupdate av_parts set nsn = 'xxxxx' where nsn = '';\n\nAnd postgres started chugging along. After a while(not sure how long) it\nwas using all memory on the computer.\n\nThe box has 82MB of memory and 128 MB of swap.\nThe query is trying to update 3.5 million rows.\n\nI would try to gdb to the process and see where it's spending its time,\nunfortunately that box is pretty much dead until I reboot it. I'll try to\ndo it again later with a ulimit so I can actually log into the box :)\n\nThanks,\nOle Gjerde\n\n",
"msg_date": "Mon, 26 Jul 1999 15:44:10 -0500 (CDT)",
"msg_from": "Ole Gjerde <[email protected]>",
"msg_from_op": false,
"msg_subject": "UPDATE memory exhaustion"
},
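A coping sketch while the real leak is tracked down, assuming some indexed column can slice the work (partnumber and the range boundaries below are purely illustrative): if whatever is being leaked is reclaimed no later than statement end, several smaller UPDATEs keep the per-statement memory high-water mark bounded:

UPDATE av_parts SET nsn = 'xxxxx'
 WHERE nsn = '' AND partnumber < '200000';
UPDATE av_parts SET nsn = 'xxxxx'
 WHERE nsn = '' AND partnumber >= '200000' AND partnumber < '400000';
-- ...and so on for the remaining ranges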
{
"msg_contents": "I wrote\n> (Note to hackers: Ole sent me a 1000-row test case off list.)\n>> oletest=> explain select * from av_parts where partnumber = '123456';\n>> NOTICE: QUERY PLAN:\n>> \n>> Index Scan using av_parts_partnumber_index on av_parts (cost=2.04 rows=1\n>> width=124)\n>> \n>> EXPLAIN\n>> oletest=> explain select * from av_parts where nsn = '123456';\n>> NOTICE: QUERY PLAN:\n>> \n>> Seq Scan on av_parts (cost=48.00 rows=995 width=124)\n\n> It looks like the highly skewed distribution of nsn values (what you\n> sent me had 997 '' entries, only 3 non-empty strings) is confusing the\n> selectivity estimation code somehow, such that the system thinks that\n> the query is going to match most of the rows. Notice it is estimating\n> 995 returned rows for the nsn select! Under these circumstances it will\n> prefer a sequential scan, since the more-expensive-per-tuple index scan\n> doesn't look like it will be able to avoid reading most of the table.\n> That logic is OK, it's the 0.995 selectivity estimate that's wrong...\n\nIt turns out that the selectivity estimate for an \"=\" comparison is just\nthe attdisbursion statistic calculated by VACUUM ANALYZE, which can be\nroughly defined as the frequency of the most common value in the column.\n(I took statistics too long ago to recall the exact definition.)\nAnyway, given that the test data Ole sent me contains nearly all ''\nentries, I'd say that the 0.995 value is about right for disbursion.\n\nIndeed, if one were to do a \"select * from av_parts where nsn = ''\",\nthen sequential scan would be the most efficient way to do that.\nThe system has no clue that that's not really something you'd do much.\n\nBy using this estimate, the system is effectively assuming that the\nfrequency of occurrence of values in the table corresponds to the\nprobability that you will be searching for each particular value.\nSo, the selectivity that a search for the most common value would\nhave is a reasonable estimate for the selectivity of a search for any\nvalue. That's a bogus assumption in this case --- but it's hard to\njustify making any other assumption in general.\n\nMy inclination is to hack up eqsel() to never return a selectivity\nestimate larger than, say, 0.5, even when the measured disbursion\nis more. I am not sure that this is a good idea, however. Comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 27 Jul 1999 16:59:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Selectivity of \"=\" (Re: [HACKERS] Index not used on simple select)"
},
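To make the arithmetic concrete: in the 1000-row test case 997 rows share the value '', so the disbursion stored by VACUUM ANALYZE comes out near 997/1000 = 0.995, and using that as the selectivity of any "=" comparison yields 0.995 * 1000, roughly the 995 estimated rows shown in the EXPLAIN above. A sketch of how to peek at the stored statistic, assuming the 6.5-era catalog where it lives in pg_attribute.attdisbursion:

SELECT attname, attdisbursion
  FROM pg_attribute
 WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'av_parts')
   AND attname = 'nsn';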
{
"msg_contents": "> > It looks like the highly skewed distribution of nsn values (what you\n> > sent me had 997 '' entries, only 3 non-empty strings) is confusing the\n> > selectivity estimation code somehow, such that the system thinks that\n> > the query is going to match most of the rows. Notice it is estimating\n> > 995 returned rows for the nsn select! Under these circumstances it will\n> > prefer a sequential scan, since the more-expensive-per-tuple index scan\n> > doesn't look like it will be able to avoid reading most of the table.\n> > That logic is OK, it's the 0.995 selectivity estimate that's wrong...\n> \n> It turns out that the selectivity estimate for an \"=\" comparison is just\n> the attdisbursion statistic calculated by VACUUM ANALYZE, which can be\n> roughly defined as the frequency of the most common value in the column.\n> (I took statistics too long ago to recall the exact definition.)\n> Anyway, given that the test data Ole sent me contains nearly all ''\n> entries, I'd say that the 0.995 value is about right for disbursion.\n\nYes, you are correct, though it does look at potentially one or two\nother unique values, depending on the distribution. It basically\nperfectly computes disbursion for unique columns, and columns that\ncontain only two unique values, and it figures in NULL. In other cases,\nthe disbursion is imperfect, but pretty decent.\n\n> My inclination is to hack up eqsel() to never return a selectivity\n> estimate larger than, say, 0.5, even when the measured disbursion\n> is more. I am not sure that this is a good idea, however. Comments?\n\nI would discourage this. I can imagine many cases there >0.5\nselectivites would be valid, i.e. state = \"PA\".\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 27 Jul 1999 18:58:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple\n select)"
},
{
"msg_contents": "> Tom Lane wrote:\n> > \n> It turns out that the selectivity estimate for an \"=\" comparison is\n> just\n> > the attdisbursion statistic calculated by VACUUM ANALYZE, which can be\n> > roughly defined as the frequency of the most common value in the column.\n> > (I took statistics too long ago to recall the exact definition.)\n> > Anyway, given that the test data Ole sent me contains nearly all ''\n> > entries, I'd say that the 0.995 value is about right for disbursion.\n> > \n> > Indeed, if one were to do a \"select * from av_parts where nsn = ''\",\n> > then sequential scan would be the most efficient way to do that.\n> > The system has no clue that that's not really something you'd do much.\n> \n> Does the system currently index NULLs as well ?\n> \n> I suspect supporting partial indexes (initially just non-NULLs) would \n> let us have much better and also use indexes intelligently for\n> mostly-NULL \n> columns.\n> \n> Perhaps a line like \n> \n> * Add partial index support\n> \n> would fit in TODO\n> \n> -----------------\n> Hannu\n> \n> \n\n\nYes, I think we index nulls. What are partial indexes?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 27 Jul 1999 20:10:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple\n select)"
},
{
"msg_contents": "Tom Lane wrote:\n> \n It turns out that the selectivity estimate for an \"=\" comparison is\njust\n> the attdisbursion statistic calculated by VACUUM ANALYZE, which can be\n> roughly defined as the frequency of the most common value in the column.\n> (I took statistics too long ago to recall the exact definition.)\n> Anyway, given that the test data Ole sent me contains nearly all ''\n> entries, I'd say that the 0.995 value is about right for disbursion.\n> \n> Indeed, if one were to do a \"select * from av_parts where nsn = ''\",\n> then sequential scan would be the most efficient way to do that.\n> The system has no clue that that's not really something you'd do much.\n\nDoes the system currently index NULLs as well ?\n\nI suspect supporting partial indexes (initially just non-NULLs) would \nlet us have much better and also use indexes intelligently for\nmostly-NULL \ncolumns.\n\nPerhaps a line like \n\n* Add partial index support\n\nwould fit in TODO\n\n-----------------\nHannu\n",
"msg_date": "Wed, 28 Jul 1999 03:13:53 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple\n select)"
},
{
"msg_contents": "> Yes, I think we index nulls. What are partial indexes?\n\n http://www.PostgreSQL.ORG/docs/postgres/partial-index.htm\n\nPostgres had support for this at one time, and probably still does.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 28 Jul 1999 04:19:21 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple\n select)"
},
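A sketch of what a partial index on the mostly-empty nsn column discussed earlier might look like, using the predicate syntax from the documentation page above (whether the current grammar still accepts it is exactly the open question):

CREATE INDEX av_parts_nsn_part_idx ON av_parts (nsn)
    WHERE nsn <> '';
-- the dominant '' rows never enter the index, keeping it small and
-- useful for lookups of the non-empty values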
{
"msg_contents": "> > Yes, I think we index nulls. What are partial indexes?\n> \n> http://www.PostgreSQL.ORG/docs/postgres/partial-index.htm\n> \n> Postgres had support for this at one time, and probably still does.\n> \n\nWow, that's really nice writing. I think Tom Lane and I ripped that\nstuff out of the optimizer. (Only kidding.)\n\nNot sure if any of it works.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 28 Jul 1999 00:51:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple\n select)"
}
] |
[
{
"msg_contents": "I've been reading Ole's posts with great interest\nas we've have just experienced similar problems\nourselves here. I opened up a Slip ;-) #5 but \nI've narrowed down the difference between two \nqueries which illustrate the problem:\n\nThe first query:\n\nSELECT DISTINCT \nsupplies.supply,supplies.supplyunit,\nsupplies.purchaseunit,supplies.vendor,\nsupplies.vendorgroup,supplies.vendoritem,\nsupplies.vendorname,supplies.description,\nsupplies.conversion,supplies.price,\nsupplies.inventory,supplies.commodity,\nsupplies.adddate\nFROM\nsupplies,permitbuy,locations,supplychains,reserves\nWHERE\npermitbuy.webuser = 'mascarj' AND\n(locations.company,locations.costcntr) =\n(permitbuy.company, permitbuy.costcntr) AND\nsupplychains.target = locations.target AND\nreserves.target = supplychains.supplysource AND\nsupplies.supply = reserves.supply AND\nsupplies.inventory = '1' AND\n\n((upper(supplies.supply) LIKE '%SEQ%') OR\n(upper(supplies.vendoritem) LIKE '%SEQ%') OR\n(upper(supplies.vendorname) LIKE '%SEQ%') OR\n(upper(supplies.description) LIKE '%SEQ%'))\n\nORDER BY \nsupplies.description;\n\nThe EXPLAIN shows its using indices as it should:\nNOTICE: QUERY PLAN:\n\nUnique (cost=24076.77 rows=8260854 width=220)\n -> Sort (cost=24076.77 rows=8260854 width=220)\n -> Hash Join (cost=24076.77 rows=8260854\nwidth=220)\n -> Hash Join (cost=1756.00 rows=597537\nwidth=76)\n -> Seq Scan on reserves \n(cost=938.44 rows=20468 width=16)\n -> Hash (cost=121.44 rows=475\nwidth=60)\n -> Hash Join (cost=121.44\nrows=475 width=60)\n -> Seq Scan on\nsupplychains (cost=49.28 rows=1251 width=8)\n -> Hash (cost=26.80\nrows=93 width=52)\n -> Hash Join \n(cost=26.80 rows=93 width=52)\n -> Seq\nScan on locations (cost=10.09 rows=245 width=28)\n -> Hash \n(cost=5.78 rows=56 width=24)\n -> \nIndex Scan using k_permitbuy1 on permitbuy (cost=5.78\nrows=56 width=24)\n -> Hash (cost=1675.03 rows=17637\nwidth=144)\n -> Seq Scan on supplies \n(cost=1675.03 rows=17637 width=144)\n\nEXPLAIN\n\nThis query works as expected and returns within\na reasonable amount of time. 
However, if an OR\nclause is introduced as below:\n\nSELECT DISTINCT \nsupplies.supply,supplies.supplyunit,\nsupplies.purchaseunit,supplies.vendor,\nsupplies.vendorgroup,supplies.vendoritem,\nsupplies.vendorname,supplies.description,\nsupplies.conversion,supplies.price,\nsupplies.inventory,supplies.commodity,\nsupplies.adddate \nFROM\nsupplies,permitbuy,locations,supplychains,reserves\nWHERE \npermitbuy.webuser = 'mascarj' AND\n(locations.company,locations.costcntr) =\n(permitbuy.company, permitbuy.costcntr) AND\nsupplychains.target = locations.target AND\nreserves.target = supplychains.supplysource AND\nsupplies.supply = reserves.supply AND\nsupplies.inventory = '1' AND\n\n((upper(supplies.supply) LIKE '%SEQ%') OR\n(upper(supplies.vendoritem) LIKE '%SEQ%') OR\n(upper(supplies.vendorname) LIKE '%SEQ%') OR\n(upper(supplies.description) LIKE '%SEQ%')) \n\nOR <-- This is built by our search engine to allow\n -- our users to enter: [SEQ or SCD]...\n\n((upper(supplies.supply) LIKE '%SCD%') OR\n(upper(supplies.vendoritem) LIKE '%SCD%') OR\n(upper(supplies.vendorname) LIKE '%SCD%') OR\n(upper(supplies.description) LIKE '%SCD%'))\n\nORDER BY \nsupplies.description;\n\nThe EXPLAIN shows that it doesn't bother to use \nthe indices for ANY of the joins:\n\nNOTICE: QUERY PLAN:\n\nUnique (cost=63290466304.00 rows=1073741850\nwidth=232)\n -> Sort (cost=63290466304.00 rows=1073741850\nwidth=232)\n -> Nested Loop (cost=63290466304.00\nrows=1073741850 width=232)\n -> Nested Loop (cost=52461780992.00\nrows=1073741851 width=204)\n -> Nested Loop \n(cost=28277893120.00 rows=1073741851 width=168)\n -> Nested Loop \n(cost=28033934.00 rows=573217107 width=160) \n -> Seq Scan on supplies \n(cost=1675.03 rows=29871 width=144)\n -> Seq Scan on\nreserves (cost=938.44 rows=20468 width=16)\n -> Seq Scan on supplychains\n (cost=49.28 rows=1251 width=8) -> \nSeq Scan on permitbuy (cost=22.52 rows=531 width=36)\n -> Seq Scan on locations (cost=10.09\nrows=245 width=28)\n\nEXPLAIN\n\nThe plan shows that it will have to perform a \nsequential scan on the supplies table, which I \nobviously expected because of the use of LIKE, in\nboth plans. However, why is it, that, when an \nOR clause which exclusively references the supplies\ntable is appended to the query, the planner/optimizer\n(which already must perform a sequential scan on \nsupplies) now totally ignores all the indices\nbuilt on the other tables? The result is an \nexecution plan which consumes all RAM on the machine,\nand, at 410M, I killed it, because it was about to \nconsume all swap space as well...\n\nAny help would be greatly appreciated\n\nMike Mascari ([email protected])\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Fri, 23 Jul 1999 12:19:41 -0400 (EDT)",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index not used on select (Is this more OR + LIKE?)"
},
{
"msg_contents": "Mike Mascari <[email protected]> writes:\n> ... However, if an OR clause is introduced as below:\n\n> WHERE \n> permitbuy.webuser = 'mascarj' AND\n> (locations.company,locations.costcntr) =\n> (permitbuy.company, permitbuy.costcntr) AND\n> supplychains.target = locations.target AND\n> reserves.target = supplychains.supplysource AND\n> supplies.supply = reserves.supply AND\n> supplies.inventory = '1' AND\n\n> ((upper(supplies.supply) LIKE '%SEQ%') OR\n> (upper(supplies.vendoritem) LIKE '%SEQ%') OR\n> (upper(supplies.vendorname) LIKE '%SEQ%') OR\n> (upper(supplies.description) LIKE '%SEQ%')) \n\n> OR <-- This is built by our search engine to allow\n> -- our users to enter: [SEQ or SCD]...\n\n> ((upper(supplies.supply) LIKE '%SCD%') OR\n> (upper(supplies.vendoritem) LIKE '%SCD%') OR\n> (upper(supplies.vendorname) LIKE '%SCD%') OR\n> (upper(supplies.description) LIKE '%SCD%'))\n\n> The plan shows that it will have to perform a \n> sequential scan on the supplies table, which I \n> obviously expected because of the use of LIKE, in\n> both plans.\n\nNot necessarily --- since you have a restriction clause on\nsupplies.inventory, an index on that field could be used for an index\nscan. This would only be worthwhile if \"supplies.inventory = '1'\"\neliminates a goodly fraction of the supplies records, of course.\nAnother possibility is using an index on supplies.supply to implement\nthe join on supplies.supply = reserves.supply. The LIKE clauses will\ncertainly have to be done the hard way, but they don't necessarily\nhave to be done the hard way on every single record.\n\n> However, why is it, that, when an OR clause which exclusively\n> references the supplies table is appended to the query, the\n> planner/optimizer (which already must perform a sequential scan on\n> supplies) now totally ignores all the indices built on the other\n> tables?\n\nI think the problem is that the OR appears at top level in the WHERE\nclause (assuming the above is a verbatim transcript of your query).\nOR groups less tightly than AND, so what this really means is\n\t(other-conditions AND (LIKEs-for-SEQ)) OR (LIKEs-for-SCD)\nwhich is undoubtedly not what you had in mind, and will certainly\nproduce a lot of unwanted records if the query manages to complete.\nEvery supplies tuple matching SCD will appear joined to every possible\ncombination of records from the other tables...\n\nPer recent discussions, the query optimizer is currently doing a really\nbad job of optimizing OR-of-ANDs conditions, but I think that you didn't\nmean to ask for that anyway.\n\n> The result is an execution plan which consumes all RAM on the machine,\n> and, at 410M, I killed it, because it was about to consume all swap\n> space as well...\n\nYou're confusing two different problems --- the efficiency of the query\nplan has a lot to do with speed, but relatively little to do with memory\nusage. I think that the memory usage problem here stems from the use of\nupper() in the WHERE condition. Each evaluation of upper() generates a\ntemporary result string, which is not reclaimed until end of statement\nin the current code. (I hope to see that fixed in the next release or\ntwo, but for now you gotta work around it.) You would be better advised\nto use a case-insensitive match operator instead of LIKE. For example,\nthe above conditions could be written\n\tsupplies.supply ~* 'SEQ'\nDunno how inconvenient it is for you to use regular expression patterns\ninstead of LIKE-style patterns, but the memory savings would be\nconsiderable. 
Even after we fix the memory leakage problem, I expect\nthis would be faster than the LIKE version.\n\nBTW, the reason that your correctly-phrased query doesn't run out\nof memory is that the LIKE conditions don't get evaluated for the\ntuples that don't make it past the other qual conditions. In the\nmistaken version, they get evaluated for every possible combination\nof joined tuples...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Jul 1999 10:53:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index not used on select (Is this more OR + LIKE?) "
}
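Putting Tom's two suggestions together, a sketch of how the search might be rewritten: the two user-term groups are wrapped in one parenthesized OR so nothing escapes to top level, and case-insensitive regex matches replace the upper() + LIKE pairs (the select list is abbreviated to supplies.* here for brevity):

SELECT DISTINCT supplies.*
  FROM supplies, permitbuy, locations, supplychains, reserves
 WHERE permitbuy.webuser = 'mascarj' AND
       (locations.company, locations.costcntr) =
           (permitbuy.company, permitbuy.costcntr) AND
       supplychains.target = locations.target AND
       reserves.target = supplychains.supplysource AND
       supplies.supply = reserves.supply AND
       supplies.inventory = '1' AND
       ((supplies.supply ~* 'SEQ' OR
         supplies.vendoritem ~* 'SEQ' OR
         supplies.vendorname ~* 'SEQ' OR
         supplies.description ~* 'SEQ')
        OR
        (supplies.supply ~* 'SCD' OR
         supplies.vendoritem ~* 'SCD' OR
         supplies.vendorname ~* 'SCD' OR
         supplies.description ~* 'SCD'))
 ORDER BY supplies.description;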
] |
[
{
"msg_contents": "Magnus Hagander <[email protected]> writes:\n> I've now finished \"polishing off\" my old SSL code, and rewritten it to work\n> with 6.6 (current snapshot). Included is the patch against the cvs tree from\n> Jul 22nd.\n\nCool. Secure connections are good.\n\n> Unfortunatly, in order to allow for negotiated SSL, this patch breaks the\n> current protocol (meaning old clients will not work with the new server, and\n> the other way around). I felt it was better to break this here, than to\n> break the frontend API (which would otherwise have been required).\n\nThis is *not* cool. Breaking both clients and servers, whether they\nactually support SSL or not, is a bit much, don't you think? Especially\nwhen the way you propose to do it makes it impossible for a server to\nsupport both old and new clients: by the time the server finds out the\nclient's protocol version, it's already done something incompatible\nwith old clients.\n\nI think there must be some way of signaling SSL support capability\nwithout making a backwards-incompatible change in the startup protocol.\nAt a minimum an SSL-enabled server must be able to accept connections\nfrom pre-SSL clients.\n\nIf nothing better comes to mind, we could have SSL-capable servers\nlisten at two port addresses, say 5432 for insecure connections and\n5433 for secure ones. But there's probably a better way.\n\nBTW, it should be possible for the dbadmin to configure a server to\naccept *only* secured connections, perhaps from a subset of users/hosts;\nthat would take a new column in pg_hba.conf. Didn't look at your patch\nclosely enough to see if you already did that...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Jul 1999 12:23:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SSL patch"
},
{
"msg_contents": "On Fri, 23 Jul 1999, Tom Lane wrote:\n\n> Magnus Hagander <[email protected]> writes:\n> > I've now finished \"polishing off\" my old SSL code, and rewritten it to work\n> > with 6.6 (current snapshot). Included is the patch against the cvs tree from\n> > Jul 22nd.\n> \n> Cool. Secure connections are good.\n> \n> > Unfortunatly, in order to allow for negotiated SSL, this patch breaks the\n> > current protocol (meaning old clients will not work with the new server, and\n> > the other way around). I felt it was better to break this here, than to\n> > break the frontend API (which would otherwise have been required).\n> \n> This is *not* cool. Breaking both clients and servers, whether they\n> actually support SSL or not, is a bit much, don't you think? Especially\n> when the way you propose to do it makes it impossible for a server to\n> support both old and new clients: by the time the server finds out the\n> client's protocol version, it's already done something incompatible\n> with old clients.\n> \n> I think there must be some way of signaling SSL support capability\n> without making a backwards-incompatible change in the startup protocol.\n> At a minimum an SSL-enabled server must be able to accept connections\n> from pre-SSL clients.\n> \n> If nothing better comes to mind, we could have SSL-capable servers\n> listen at two port addresses, say 5432 for insecure connections and\n> 5433 for secure ones. But there's probably a better way.\n> \n> BTW, it should be possible for the dbadmin to configure a server to\n> accept *only* secured connections, perhaps from a subset of users/hosts;\n> that would take a new column in pg_hba.conf. Didn't look at your patch\n> closely enough to see if you already did that...\n\nI may be lost here, so forgive me ahead of time...but, if I'm reading\nMagnus' email correctly, this just breaks backward compatibility...with\nthe change, pre-6.6 clients would not be able to talk to a 6.6 server, but\n6.7 and 6.6 would be compatible?\n\nIf this is correct, I've lost what the problem is here, except that, if\nthis is the case, such a change shoudl signal a new major number release,\nvs just minor...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 23 Jul 1999 14:16:56 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: SSL patch"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> I may be lost here, so forgive me ahead of time...but, if I'm reading\n> Magnus' email correctly, this just breaks backward compatibility...with\n> the change, pre-6.6 clients would not be able to talk to a 6.6 server, but\n> 6.7 and 6.6 would be compatible?\n\nAs long as we don't change it again for 6.7, yeah ... but that doesn't\nseem like the point.\n\nWhat I'm concerned about is that we'd have neither compatibility between\nexisting clients and new servers nor existing servers and new clients.\nWhen we changed the protocol for 6.4, we got quite a bit of flak about\n6.4 clients not talking to old servers. But that was just a one-way\nwhammy: a 6.4 server would still talk to old clients. This change is\ngonna be a double whammy.\n\nI think we at least need to find a way to have new servers be able to\ntalk to old clients. Otherwise, it'll be *very* difficult to upgrade\nto 6.6 at large installations; you'd have to change all the clients\nsimultaneously with the server. Those clients aren't necessarily all\non the same machine, and some may not even be under the db admin's\ndirect control. It looks like a recipe for major headaches to me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Jul 1999 16:38:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: SSL patch "
},
{
"msg_contents": "\nSeems like a trigger for a 7.0 release ... last I understood, major\nreleases generally signified major protocol changes, as well as API...\n\nOn Fri, 23 Jul 1999, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > I may be lost here, so forgive me ahead of time...but, if I'm reading\n> > Magnus' email correctly, this just breaks backward compatibility...with\n> > the change, pre-6.6 clients would not be able to talk to a 6.6 server, but\n> > 6.7 and 6.6 would be compatible?\n> \n> As long as we don't change it again for 6.7, yeah ... but that doesn't\n> seem like the point.\n> \n> What I'm concerned about is that we'd have neither compatibility between\n> existing clients and new servers nor existing servers and new clients.\n> When we changed the protocol for 6.4, we got quite a bit of flak about\n> 6.4 clients not talking to old servers. But that was just a one-way\n> whammy: a 6.4 server would still talk to old clients. This change is\n> gonna be a double whammy.\n> \n> I think we at least need to find a way to have new servers be able to\n> talk to old clients. Otherwise, it'll be *very* difficult to upgrade\n> to 6.6 at large installations; you'd have to change all the clients\n> simultaneously with the server. Those clients aren't necessarily all\n> on the same machine, and some may not even be under the db admin's\n> direct control. It looks like a recipe for major headaches to me.\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 24 Jul 1999 14:11:03 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: SSL patch "
}
] |
[
{
"msg_contents": "> > Unfortunatly, in order to allow for negotiated SSL, this \n> patch breaks the\n> > current protocol (meaning old clients will not work with \n> the new server, and\n> > the other way around). I felt it was better to break this \n> here, than to\n> > break the frontend API (which would otherwise have been required).\n> \n> This is *not* cool. Breaking both clients and servers, whether they\n> actually support SSL or not, is a bit much, don't you think? \nWell. Yeah, I do.\n\n\n> Especially\n> when the way you propose to do it makes it impossible for a server to\n> support both old and new clients: by the time the server finds out the\n> client's protocol version, it's already done something incompatible\n> with old clients.\n> \n> I think there must be some way of signaling SSL support capability\n> without making a backwards-incompatible change in the startup \n> protocol.\n> At a minimum an SSL-enabled server must be able to accept connections\n> from pre-SSL clients.\nWell. The problem is that the client sends the StartupPacket without reading\nanything at all from the server, which means it is too late to do SSL\nnegotiation after the StartupPacket. It contains the password (possibly in\nclear-text), which would be one of the most important things to protect. So\nI'm pretty sure that the negotiation has to take place _before_ the\nStartupPacket. And since the StartupPacket is the very first thing that is\nsent, it might be hard.\nJust co clearify: the SSL-enabled server still accepts 6.6 clients that are\ncompiled without SSL support, but it will not accept from 6.5 clients, as it\nis now. \n\nOne possibility would be that the client sent a negotiation packet _before_\nit sent the startuppacket. It would be a little bit weird to have this\nnegotiation initiated from the client, but perhaps possible. OTOH, this will\nbreak compatibility in the way that a 6.6 client will not be able to talk to\na 6.5 server. So I dunno if it's worth it.\n\nThen it could be something like:\n\nClient->Server\t\t'S' if SSL support, 'N' otherwise.\nServer->Client\t\tpicks 'S' or 'N' based on what both can do.\n\t\t\t\tIf it receives anything other than 'S' or\n'N', assums <6.6 client,\n\t\t\t\tand sees it as a StartupPacket.\n<if SSL, then negotiate SSL>\nClient->Server\t\tStartupPacket\n\nIs this perhaps better? It's pretty hard to get it into the server to accept\na packet _or_ a single byte as first input on the connection - as it is now,\nit goes directly into the special packet handling routines, which only\nhandles packets. But it might be possible.\n\nQuestion is - is it worth it? Are there perhaps any other changes planned\nthat will break this compatibility anyway?\n\n\n> If nothing better comes to mind, we could have SSL-capable servers\n> listen at two port addresses, say 5432 for insecure connections and\n> 5433 for secure ones. But there's probably a better way.\nI had it set for that from the beginning, but didn't like it. The way I had\nit done then broke the client API, which I considered even worse (AKA the\nclient had to specify to libpq if it was to use SSL or not, which meant that\nthe interface to PQsetdb was changed - not just a simple upgrade to the new\nlibpq was possible).\n\nIt could be possible to have it listen on two ports, one that does not\nnegotiate and one that does, purely for backwards compatibility. 
But that\ndoes not look like a very good solution either, since it would require\ncontinued support for two different protocols, with all that comes with\nthat.\n\n\n> BTW, it should be possible for the dbadmin to configure a server to\n> accept *only* secured connections, perhaps from a subset of \n> users/hosts;\n> that would take a new column in pg_hba.conf. Didn't look at \n> your patch\n> closely enough to see if you already did that...\nIt is. If you start it up with \"-is\", it will allow only secure connections.\nYou can also use the class \"hostssl\" in pg_hba.conf to configure it based on\nhosts. So you can have e.g.\nOnce the main code is in there, it should also be possible to add\nclient-certificate-based authentication.\n\n\n//Magnus\n",
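\nP.S. To make that handshake concrete - a rough, untested sketch of the server side (Port and ssl_enabled are invented names for illustration):\n\nstatic int\nnegotiate_ssl(Port *port)\n{\n    char    want;\n\n    /* first byte from a 6.6 client: 'S' = wants SSL, 'N' = no SSL */\n    if (read(port->sock, &want, 1) != 1)\n        return STATUS_ERROR;\n\n    if (want == 'S' && ssl_enabled)\n    {\n        write(port->sock, \"S\", 1);\n        /* hand the socket to the SSL library before the StartupPacket */\n    }\n    else\n        write(port->sock, \"N\", 1);\n    return STATUS_OK;\n}\n\nThe hard part, as said above, is that anything other than 'S'/'N' would have to be treated as the first byte of an old-style StartupPacket.\n",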
"msg_date": "Fri, 23 Jul 1999 19:32:55 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [INTERFACES] Re: SSL patch"
},
{
"msg_contents": "> Well. The problem is that the client sends the StartupPacket without reading\n> anything at all from the server, which means it is too late to do SSL\n> negotiation after the StartupPacket. It contains the password (possibly in\n> clear-text), which would be one of the most important things to protect. So\n> I'm pretty sure that the negotiation has to take place _before_ the\n> StartupPacket. And since the StartupPacket is the very first thing that is\n> sent, it might be hard.\n> Just co clearify: the SSL-enabled server still accepts 6.6 clients that are\n> compiled without SSL support, but it will not accept from 6.5 clients, as it\n> is now. \n\nSo your concern is that the client will send hashed password as\ncleartext before finding out it has to do SSL? Doesn't the client do\nSSL and then send the SSL request to the server? Why do we have to have\nclients who use SSL sending non-SSL requests to the server? Let them\nfail if they do that. If you want to force SSL from certain hosts, put\nthat in hba_conf, and only accept SSL from those? I am really lost on\nthe problem here.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 23 Jul 1999 17:24:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: [INTERFACES] Re: SSL patch"
},
{
"msg_contents": "Magnus Hagander <[email protected]> writes:\n> Well. The problem is that the client sends the StartupPacket without\n> reading anything at all from the server, which means it is too late to\n> do SSL negotiation after the StartupPacket. It contains the password\n> (possibly in clear-text), which would be one of the most important\n> things to protect.\n\nActually, the StartupPacket does *not* contain a password. But it does\ncontain a valid database name and user name, which might be useful\ninformation to an attacker, so I agree it would be good to protect it.\n\n> Just to clearify: the SSL-enabled server still accepts 6.6 clients that are\n> compiled without SSL support, but it will not accept from 6.5 clients, as it\n> is now. \n\nRight. My feeling is that we must make it possible for a 6.6 server to\naccept connections from 6.5 (and earlier) clients, or the upgrade will\nbe too painful at large sites.\n\n> One possibility would be that the client sent a negotiation packet _before_\n> it sent the startuppacket. It would be a little bit weird to have this\n> negotiation initiated from the client, but perhaps possible. OTOH, this will\n> break compatibility in the way that a 6.6 client will not be able to talk to\n> a 6.5 server.\n\nNot if the 6.6 client is smart about recovering from a connection\nfailure. It could work like this:\n\n\tClient opens connection\n\tClient sends SSL negotiation packet\n\t6.5 server (or SSL-less 6.6 server) sends back error msg\n\t\tand closes connection\n\tClient says \"oh well\", opens a new connection, and\n\t\tproceeds with non-secure connection protocol\n\n(Of course, if the client only wanted a secure connection, it'd give up\ninstead of making the second connection attempt.) This'd be a little\nbit inefficient for new clients talking to old servers, but that doesn't\nseem like it is a fatal objection --- in the other three cases there\nis no extra overhead.\n\nIn the case where the server does have SSL capability, it accepts the\nSSL packet, then the SSL negotiation is completed, and finally the\nusual sort of StartupPacket is sent and the connection proceeds.\n\nOf course, if the client does not want to use a secure connection, it\njust opens the connection and sends a StartupPacket to begin with.\n\nThe only dubious assumption I can see in this is that the server has to\nbe able to distinguish an initial SSL negotiation packet from a\nStartupPacket (and from a CancelRequestPacket). We should ensure that\nthat is true by prefixing an identifying word to the normal contents of\nan SSL packet. Or, if it seems easiest, we could simply have that\ninitial client message consist *only* of a packet that means\n\"BeginSSLProtocol\", and then the server side is the one that actually\nstarts the SSL negotiation. That is almost like your current patch ---\nthe critical differences are that the initial client message for an SSL\nconnection has to be set up so that an old server will reject it\ncleanly, and the client has to be prepared to retry if that happens.\n\nI think I prefer having the client's first message include the first\nstep of SSL negotiation if possible, since that would save one\npacket transfer during the setup process. But if it's too hard to\nmake the SSL libraries play that way, we don't have to.\n\nIn any case, the initial client message for a non-SSL connection should\nbe a plain StartupPacket, and for an SSL connection it must be something\nthat an old server will refuse. 
That means we want the first 8 bytes to\nbe a packet length count and then a word that does not look like any\nacceptable protocol version number. (Compare the way that\nCancelRequestPackets are recognized.) The data payload of the packet\nwould either be the initial SSL negotiation data, or empty if you decide\nthat the server must send the initial SSL message.\n\n> I had it set for that from the beginning, but didn't like it. The way I had\n> it done then broke the client API, which I considered even worse (AKA the\n> client had to specify to libpq if it was to use SSL or not, which meant that\n> the interface to PQsetdb was changed - not just a simple upgrade to the new\n> libpq was possible).\n\nYou'll still have an API addition, no? Something to set the SSL\nconnection option as \"do not use SSL\", \"must use SSL or fail\", or \"use\nSSL if server supports it\". The last is a reasonable default if the\nclient doesn't specify, but the client must be able to specify. I guess\nthis would only be possible via a conninfo string...\n\n\t\t\tregards, tom lane\n",
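\nP.S. To illustrate the shape I have in mind (struct and constant names invented; the magic value would have to be chosen so no real or future protocol version can collide with it):\n\ntypedef struct SSLRequestPacket\n{\n    uint32      len;        /* packet length, including self */\n    uint32      sslcode;    /* special code, like the cancel code */\n} SSLRequestPacket;\n\n/* in the postmaster, after reading the first packet: */\nif (pkt->sslcode == SSL_REQUEST_CODE)\n    ;    /* start SSL negotiation on this socket */\nelse\n    ;    /* treat as a normal StartupPacket, old or new client */\n\nAn old postmaster would see this as a StartupPacket with an unsupported protocol version and reject it cleanly, which is exactly what we want.\n",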
"msg_date": "Sat, 24 Jul 1999 11:19:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: [INTERFACES] Re: SSL patch "
},
{
"msg_contents": "I wrote:\n> [ a bunch of stuff ]\n\nAfter looking into this morning's patches digest, I see that half of\nthis already occurred to you :-).\n\nI'd still suggest extending the client to fall back to non-SSL if the\nserver rejects the connection (unless it is told by the application\nthat it must make an SSL connection). Then there's no compatibility\nproblem at all, even for mix-and-match SSL-enabled and not-SSL-enabled\nclients and servers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Jul 1999 11:37:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: [INTERFACES] Re: SSL patch "
},
{
"msg_contents": "On Sat, 24 Jul 1999, Tom Lane wrote:\n\n> Magnus Hagander <[email protected]> writes:\n\n> > Just to clearify: the SSL-enabled server still accepts 6.6 clients that are\n> > compiled without SSL support, but it will not accept from 6.5 clients, as it\n> > is now. \n> \n> Right. My feeling is that we must make it possible for a 6.6 server to\n> accept connections from 6.5 (and earlier) clients, or the upgrade will\n> be too painful at large sites.\n\nBut, we've had protocol changes before that breaks backward\ncompatibility...why is this all of a sudden a problem? As long as this is\na \"known\" issue for an upgrade (something for the 'migrating to...'\nsection of the HISTORY file), I personally see no reason why the protocol\ncan't mature/change...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 24 Jul 1999 14:06:07 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: [INTERFACES] Re: SSL patch "
},
{
"msg_contents": "> > Right. My feeling is that we must make it possible for a 6.6 server to\n> > accept connections from 6.5 (and earlier) clients, or the upgrade will\n> > be too painful at large sites.\n> \n> But, we've had protocol changes before that breaks backward\n> compatibility...why is this all of a sudden a problem? As long as this is\n> a \"known\" issue for an upgrade (something for the 'migrating to...'\n> section of the HISTORY file), I personally see no reason why the protocol\n> can't mature/change...\n\nNo reason to change the protocol when we don't need to.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 24 Jul 1999 13:21:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: [INTERFACES] Re: SSL patch"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> But, we've had protocol changes before that breaks backward\n>> compatibility...why is this all of a sudden a problem?\n\n> No reason to change the protocol when we don't need to.\n\nThe point is that we *do not have to* break backwards compatibility to\nadd this feature, and indeed hardly anything would be gained by breaking\ncompatibility. See subsequent messages from myself and Magnus.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Jul 1999 13:50:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: [INTERFACES] Re: SSL patch "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> But, we've had protocol changes before that breaks backward\n> >> compatibility...why is this all of a sudden a problem?\n> \n> > No reason to change the protocol when we don't need to.\n\nWhat I meant is that there is reason to break compatibility when we\ndon't need to. Magnus seems like he has addressed this already.\n\n> \n> The point is that we *do not have to* break backwards compatibility to\n> add this feature, and indeed hardly anything would be gained by breaking\n> compatibility. See subsequent messages from myself and Magnus.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 24 Jul 1999 16:15:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: [INTERFACES] Re: SSL patch"
}
] |
[
{
"msg_contents": "\nToday I got familiar problem with vacuum analyze\n\ndiscovery=> select version();\nversion\n------------------------------------------------------------------------\nPostgreSQL 6.5.1 on i686-pc-linux-gnulibc1, compiled by gcc egcs-2.91.66\n(1 row)\n\ndiscovery=> vacuum analyze;\nNOTICE: AbortTransaction and not in in-progress state\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is impossible. Terminating.\n\nThis is the last cvs (REL6_5_PATCHES).\nIt's interesting, that I can do vacuum analyze for all tables in this\ndatabase without any problem !\nI dump my database and reload it. After that vacuum analyze worked fine.\nBut after intensive testing of my Web-server I got the same problem.\nI accumulate documents hits in my database using persistent connection\nand this is the only update/insert operation.\n\nI use function to workaround update/insert dilemma - \nI can't just use update. This is modified function suggested by \nPhilip Warner. I'm sure problem somehow connects with this,\nbecause I had no problem when I didn't accumulate statistics but just\ninsert every hits using simple sql.\n\n\n\tRegards,\n\n\t\tOleg\n\ncreate table hits (\n msg_id int4 not null primary key,\n count int4 not null\n);\n\nCREATE FUNCTION \"acc_hits\" (int4) RETURNS int4 AS '\nDeclare\n keyval Alias For $1;\n cnt int4;\nBegin\n Select count into cnt from hits where msg_id = keyval;\n if Not Found then\n cnt := 1;\n Insert Into hits (msg_id,count) values (keyval, cnt);\n else\n cnt := cnt + 1;\n Update hits set count = cnt where msg_id = keyval;\n End If;\n return cnt;\nEnd;\n' LANGUAGE 'plpgsql';\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 23 Jul 1999 23:44:52 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "vacuum analyze problem "
},
{
"msg_contents": "Oleg Bartunov wrote:\n> \n> Today I got familiar problem with vacuum analyze\n> \n> discovery=> select version();\n> version\n> ------------------------------------------------------------------------\n> PostgreSQL 6.5.1 on i686-pc-linux-gnulibc1, compiled by gcc egcs-2.91.66\n> (1 row)\n> \n> discovery=> vacuum analyze;\n> NOTICE: AbortTransaction and not in in-progress state\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> We have lost the connection to the backend, so further processing is impossible. Terminating.\n\nWe need in gdb output for this...\n\nVadim\n",
"msg_date": "Mon, 26 Jul 1999 10:31:44 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum analyze problem"
}
] |
[
{
"msg_contents": "on the patches list, I posted the first files\nfor suuport for using perl as the procedural\nlanguage.\n\nThe makefile uses libtool (probably incorrectly).\n\nIt is not 'safe'.\n\nPerl XS modules cannot be used.\n\nUsing Perl in rules hasn't even been thought about.\n\nIF the code looks suspiciously like Jan's code\nfor pltcl - well, it _is_ Jan's code for pltcl\nhorribly mangled.\n\nIt compiles and runs for me. YMMV.\n\nmore code and docs to follow.\n-- \nMark Hollomon\[email protected]\n",
"msg_date": "Fri, 23 Jul 1999 16:15:10 -0400",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": true,
"msg_subject": "plperl intial pass"
},
{
"msg_contents": "Great,\n\njust compiled and install but need to look at some examples :-)\nbtw, here is a patch for createlang command to enable plperl\nI'm not sure about trusted field.\n\n--- createlang Sat Jul 24 22:27:05 1999\n+++ /usr/local/pgsql/bin/createlang Wed Jul 21 19:36:55 1999\n@@ -84,9 +84,6 @@\n plpgsql) lancomp=\"PL/pgSQL\"\n trusted=\"TRUSTED\"\n handler=\"plpgsql_call_handler\";;\n- plperl) lancomp=\"PL/Perl\"\n- trusted=\"TRUSTED\"\n- handler=\"plperl_call_handler\";;\n pltcl) lancomp=\"PL/Tcl\"\n trusted=\"TRUSTED\"\n handler=\"pltcl_call_handler\";;\n\n\nOn Fri, 23 Jul 1999, Mark Hollomon wrote:\n\n> Date: Fri, 23 Jul 1999 16:15:10 -0400\n> From: Mark Hollomon <[email protected]>\n> To: [email protected]\n> Subject: [HACKERS] plperl intial pass\n> \n> on the patches list, I posted the first files\n> for suuport for using perl as the procedural\n> language.\n> \n> The makefile uses libtool (probably incorrectly).\n> \n> It is not 'safe'.\n> \n> Perl XS modules cannot be used.\n> \n> Using Perl in rules hasn't even been thought about.\n> \n> IF the code looks suspiciously like Jan's code\n> for pltcl - well, it _is_ Jan's code for pltcl\n> horribly mangled.\n> \n> It compiles and runs for me. YMMV.\n> \n> more code and docs to follow.\n> -- \n> Mark Hollomon\n> [email protected]\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sat, 24 Jul 1999 22:26:33 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plperl intial pass"
},
{
"msg_contents": "On Sat, Jul 24, 1999 at 10:26:33PM +0400, Oleg Bartunov wrote:\n> Great,\n> \n> just compiled and install but need to look at some examples :-)\n> btw, here is a patch for createlang command to enable plperl\n> I'm not sure about trusted field.\n\nAt this point it definitely should not be trusted. (Trust me).\n\nHow about the famous hello world:\n\ncreate function hello () returns text as '\nreturn \"Hello world!\";' language 'plperl';\n\nor a quick sum:\n\ncreate function sum2 (int4, int4) returns int4 as '\n$_[0] + $_[1];' language 'plperl';\n\nThe args are in @_ (naturally). Tuples are passed as\nhash references.\n\nAccess to SPI functionality is coming.\n\n\n\n-- \nMark Hollomon\[email protected]\n",
"msg_date": "Sun, 25 Jul 1999 12:08:51 -0400",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] plperl intial pass"
},
{
"msg_contents": "Great !\n\nat least your examples works:\ntest=> select hello();\nNOTICE: plperl_func_handler: have been asked to call __PLperl_proc_329289\nNOTICE: plperl_func_handler: __PLperl_proc_329289 was in the hash\nNOTICE: plperl_call_perl_func: calling __PLperl_proc_329289\nNOTICE: plperl_func_handler: returned from plperl_call_perl_func\nNOTICE: plperl_func_handler: return as string = Hello world!\nNOTICE: plperl_func_handler: Datum is 826ee30\nhello \n------------\nHello world!\n(1 row)\ntest=> create function sum2 (int4, int4) returns int4 as '\ntest'> $_[0] + $_[1];' language 'plperl';\nCREATE\ntest=> select sum2(4,2);\nNOTICE: plperl_func_handler: have been asked to call __PLperl_proc_329290\nNOTICE: plperl_func_handler: __PLperl_proc_329290 doesn't exist yet\nNOTICE: plperl_create_sub: creating the sub\nNOTICE: plperl_call_perl_func: calling __PLperl_proc_329290\nNOTICE: plperl_func_handler: returned from plperl_call_perl_func\nNOTICE: plperl_func_handler: return as string = 6\nNOTICE: plperl_func_handler: Datum is 6\nsum2\n----\n 6\n(1 row)\n\n\tRegards,\n\t\t\n\t\tOleg\nOn Sun, 25 Jul 1999, Mark Hollomon wrote:\n\n> Date: Sun, 25 Jul 1999 12:08:51 -0400\n> From: Mark Hollomon <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] plperl intial pass\n> \n> On Sat, Jul 24, 1999 at 10:26:33PM +0400, Oleg Bartunov wrote:\n> > Great,\n> > \n> > just compiled and install but need to look at some examples :-)\n> > btw, here is a patch for createlang command to enable plperl\n> > I'm not sure about trusted field.\n> \n> At this point it definitely should not be trusted. (Trust me).\n> \n> How about the famous hello world:\n> \n> create function hello () returns text as '\n> return \"Hello world!\";' language 'plperl';\n> \n> or a quick sum:\n> \n> create function sum2 (int4, int4) returns int4 as '\n> $_[0] + $_[1];' language 'plperl';\n> \n> The args are in @_ (naturally). Tuples are passed as\n> hash references.\n> \n> Access to SPI functionality is coming.\n> \n> \n> \n> -- \n> Mark Hollomon\n> [email protected]\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 26 Jul 1999 04:28:39 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plperl intial pass"
},
{
"msg_contents": "\nOn 24-Jul-99 Oleg Bartunov wrote:\n> Great,\n> \n> just compiled and install but need to look at some examples :-)\n> btw, here is a patch for createlang command to enable plperl\n> I'm not sure about trusted field.\n\nDoes plperl use Perl interpreter or it's completly different language \nwith similar syntax?\n\nDo you have some speed/memory statistic or plpgsql/plperl comparison ?\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Mon, 26 Jul 1999 12:19:56 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plperl intial pass"
},
{
"msg_contents": ">\n> Great,\n>\n> just compiled and install but need to look at some examples :-)\n> btw, here is a patch for createlang command to enable plperl\n> I'm not sure about trusted field.\n>\n> --- createlang Sat Jul 24 22:27:05 1999\n> +++ /usr/local/pgsql/bin/createlang Wed Jul 21 19:36:55 1999\n> @@ -84,9 +84,6 @@\n> plpgsql) lancomp=\"PL/pgSQL\"\n> trusted=\"TRUSTED\"\n> handler=\"plpgsql_call_handler\";;\n> - plperl) lancomp=\"PL/Perl\"\n> - trusted=\"TRUSTED\"\n> - handler=\"plperl_call_handler\";;\n> pltcl) lancomp=\"PL/Tcl\"\n> trusted=\"TRUSTED\"\n> handler=\"pltcl_call_handler\";;\n\n I wouldn't make it a TRUSTED language right now, because\n until PL/Perl has a safe mode (what Mark said it hasn't now)\n it is a security hole. Unpriviliged users could create\n functions in PL/Perl that modify the hba.conf!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 26 Jul 1999 11:15:25 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plperl intial pass"
},
{
"msg_contents": "Dmitry Samersoff wrote:\n\n>\n> On 24-Jul-99 Oleg Bartunov wrote:\n> > Great,\n> >\n> > just compiled and install but need to look at some examples :-)\n> > btw, here is a patch for createlang command to enable plperl\n> > I'm not sure about trusted field.\n>\n> Does plperl use Perl interpreter or it's completly different language\n> with similar syntax?\n>\n> Do you have some speed/memory statistic or plpgsql/plperl comparison ?\n\n It uses a real Perl precompiler/interpreter inside.\n\n I think it's far too early for such comparisions. As Mark\n wrote, PL/Perl's SPI interface (for accessing tables from\n inside a function) is still to come, and if I remember right,\n triggers are another delayed feature up to now.\n\n When it's done, I would expect that PL/Perl could outperform\n PL/pgSQL in many cases. I haven't done speed comparision\n between PL/pgSQL and PL/Tcl yet, but I know all their\n internals. The reason for my assumtion is that PL/pgSQL uses\n the PostgreSQL executor for all computations. That's IMHO a\n pro, because it assures that any defined datatype, function,\n operator and aggregate is automagically available in PL/pgSQL\n and all computations return exactly the same result as if\n they're done inside an SQL statement. But nothing on earth is\n for free, not even the death - you pay for it with your life.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 26 Jul 1999 11:50:52 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plperl intial pass"
},
{
"msg_contents": "Mark Hollomon wrote:\n\n>\n> on the patches list, I posted the first files\n> for suuport for using perl as the procedural\n> language.\n\n Congratulations!\n\n>\n> The makefile uses libtool (probably incorrectly).\n>\n> It is not 'safe'.\n>\n> Perl XS modules cannot be used.\n\n What's an XS module? If it's a shared object dynamically\n linked - don't care too much - PL/Tcl cannot either.\n\n>\n> Using Perl in rules hasn't even been thought about.\n\n If a function works from a query, the same function must work\n too in a rule because the rewriter only mangles up parsetrees\n so when executing, they are invoked from a query. Or did you\n mean triggers?\n\n>\n> IF the code looks suspiciously like Jan's code\n> for pltcl - well, it _is_ Jan's code for pltcl\n> horribly mangled.\n\n What ya think where the skeleton for PL/pgSQL came from :-) I\n just wrote my own SQL scripting bytecode compiler and\n executor and placed them into the PL/Tcl sources.\n\n Congrats again - great work - move on.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 26 Jul 1999 12:18:06 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plperl intial pass"
},
{
"msg_contents": "Dmitry Samersoff wrote:\n> \n> On 24-Jul-99 Oleg Bartunov wrote:\n> > Great,\n> >\n> > just compiled and install but need to look at some examples :-)\n> > btw, here is a patch for createlang command to enable plperl\n> > I'm not sure about trusted field.\n> \n> Does plperl use Perl interpreter or it's completly different language\n> with similar syntax?\n> \n\nIt imbeds the perl interpreter. Just as pltcl imbeds the tcl\ninterpreter.\n\n> Do you have some speed/memory statistic or plpgsql/plperl comparison ?\n\nNo.\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Mon, 26 Jul 1999 09:48:38 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plperl intial pass"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> Mark Hollomon wrote:\n> \n> >\n> > It is not 'safe'.\n> >\n> > Perl XS modules cannot be used.\n> \n> What's an XS module? If it's a shared object dynamically\n> linked - don't care too much - PL/Tcl cannot either.\n\nCorrect. The problem is that the Opcode module, which allows you to\ndisable features of the compiler (to close security holes) is an\nXS module. In theory, it is possible to do without Opcode, but\ndoing so would create a very heavy perl version dependency in plperl.\n\nSo, I have to get XS stuff working in order to disallow XS stuff.\nsigh.\n\nAnd plperl can never be trusted until I can forbid writing to the\nfilesystem.\n\n> \n> >\n> > Using Perl in rules hasn't even been thought about.\n> \n> If a function works from a query, the same function must work\n> too in a rule because the rewriter only mangles up parsetrees\n> so when executing, they are invoked from a query. Or did you\n> mean triggers?\n\nIck. Correct. I meant triggers.\n\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Mon, 26 Jul 1999 09:59:54 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plperl intial pass"
},
{
"msg_contents": "Mark Hollomon wrote:\n>\n> Jan Wieck wrote:\n> >\n>\n> Correct. The problem is that the Opcode module, which allows you to\n> disable features of the compiler (to close security holes) is an\n> XS module. In theory, it is possible to do without Opcode, but\n> doing so would create a very heavy perl version dependency in plperl.\n>\n> So, I have to get XS stuff working in order to disallow XS stuff.\n> sigh.\n>\n> And plperl can never be trusted until I can forbid writing to the\n> filesystem.\n\n I see. Maybe it's possible to get the Opcode stuff working\n without full XS? Adding full XS support only to disable it -\n what an overkill :-)\n\n Correct me if I'm wrong (I'm only guessing). Like for Perl,\n the Tcl interpreter itself sits in a library. To create the\n standalone tclsh, a small tclAppInit.c file is compiled into\n the tclsh executable. The default one only creates one\n interpreter and arranges for the execution of the script\n given in argv[0] or starts up the interactive shell.\n\n A dynamically loadable Tcl module contains one special\n function named <libname>_Init() where first character of\n libname is capitalized. On dynamic load, this function is\n called with the invoking interpreter as argument. This\n function then calls Tcl_CreateCommand() etc. to tell Tcl\n what's coming here and does other module specific\n initializations.\n\n It is now possible, to add other stuff to tclAppInit.c (like\n calls to Mymodule_Init) and link it against some more than\n libtcl.so. That was the standard solution before dynamic\n loading was that easy as it is today (back in the days of\n a.out libs).\n\n Your plperl.c is mostly my pltcl.c - so I assume it does the\n same things mainly. Create an interpreter and throw some\n strings into it, hoping they are intelligable in some way (at\n least produce a helpful error message). Thus, it might be\n possible to add calls to the initializations for the Opcode\n XS directly into the plperl module after creating the\n interpreter and link it against Opcode as well.\n\n This is just the way I would do it for Tcl and I'll surely do\n it someday. I would like to have a second, unsafe\n interpreter in the module. That could then modify files or\n use the frontend library to access a different database on\n another server. Needless to say that this then would be an\n untrusted language, available only for db superusers.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 26 Jul 1999 17:45:19 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plperl intial pass"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> Mark Hollomon wrote:\n> >\n> > Jan Wieck wrote:\n> > >\n> >\n> > Correct. The problem is that the Opcode module, which allows you to\n> > disable features of the compiler (to close security holes) is an\n> > XS module. In theory, it is possible to do without Opcode, but\n> > doing so would create a very heavy perl version dependency in plperl.\n> >\n> > So, I have to get XS stuff working in order to disallow XS stuff.\n> > sigh.\n> >\n> > And plperl can never be trusted until I can forbid writing to the\n> > filesystem.\n> \n> I see. Maybe it's possible to get the Opcode stuff working\n> without full XS? Adding full XS support only to disable it -\n> what an overkill :-)\n> \n> Correct me if I'm wrong (I'm only guessing). Like for Perl,\n> the Tcl interpreter itself sits in a library. To create the\n> standalone tclsh, a small tclAppInit.c file is compiled into\n> the tclsh executable. The default one only creates one\n> interpreter and arranges for the execution of the script\n> given in argv[0] or starts up the interactive shell.\n> \n> A dynamically loadable Tcl module contains one special\n> function named <libname>_Init() where first character of\n> libname is capitalized. On dynamic load, this function is\n> called with the invoking interpreter as argument. This\n> function then calls Tcl_CreateCommand() etc. to tell Tcl\n ^^^^^^^^^^^^^^^^^\n\nAnd here-in lies the problem. Tcl_CreateCommand is sitting, not\nin the executable, but in the shared-lib with the function call\nhandler. dlopen(), by default will not link across shared-libs.\n\n postgres\n /-----/ \\-----\\\n | |\n plperl.so ---> Opcode.so\n ^^\nThis link doesn't happen.\n\nPassing RTLD_GLOBAL (I think) as a flag to dlopen makes the symbols\nin a shared-lib available for linking into the next shared-lib.\n\nBut postgresql doesn't use the RTLD_GLOBAL flag and patching the\nbackend to load _everything_ with RTLD_GLOBAL seemed like it could\nhave less than desirable behavior.\n\na.out systems are easier since perl's dynamic loading subsystem\nwould take care of problem for me.\n\n> what's coming here and does other module specific\n> initializations.\n> \n> It is now possible, to add other stuff to tclAppInit.c (like\n> calls to Mymodule_Init) and link it against some more than\n> libtcl.so. That was the standard solution before dynamic\n> loading was that easy as it is today (back in the days of\n> a.out libs).\n\nThat is exactly how it works. But see above.\n\nAnd on top of the above problem, postgres assumes all linuxen\nuse a.out type loading. Where as perl uses dlopen where it can.\n\nGetting those two to play together is more than I care to attempt.\nI am researching a fix now to let linux installations use dlopen\nif it is available.\n\nI would not be unhappy if somebody beats me to it.\n\n> This is just the way I would do it for Tcl and I'll surely do\n> it someday. I would like to have a second, unsafe\n> interpreter in the module. That could then modify files or\n> use the frontend library to access a different database on\n> another server. Needless to say that this then would be an\n> untrusted language, available only for db superusers.\n> \n\nYes, I've been thinking about that as well. It would be nice to have\npermissions based on userid. Maybe the 'suid' stuff that is being\ndiscussed in another thread will gives us a mechanism.\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Mon, 26 Jul 1999 13:38:36 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plperl intial pass"
},
{
"msg_contents": "Mark Hollomon wrote:\n\n> > A dynamically loadable Tcl module contains one special\n> > function named <libname>_Init() where first character of\n> > libname is capitalized. On dynamic load, this function is\n> > called with the invoking interpreter as argument. This\n> > function then calls Tcl_CreateCommand() etc. to tell Tcl\n> ^^^^^^^^^^^^^^^^^\n>\n> And here-in lies the problem. Tcl_CreateCommand is sitting, not\n> in the executable, but in the shared-lib with the function call\n> handler. dlopen(), by default will not link across shared-libs.\n>\n> postgres\n> /-----/ \\-----\\\n> | |\n> plperl.so ---> Opcode.so\n> ^^\n> This link doesn't happen.\n\n But it does for PL/Tcl - at least under Linux-ELF. (C = Call\n to, L = Location of functions code segment):\n\n +-------------------------+\n | postgres |\n +-------------------------+\n |\n | dynamic load\n |\n v\n +---------------------------+ +---------------------------+\n | pltcl.so |--------->| libtcl8.0.so |\n | | auto- | |\n | C Tcl_CreateInterp() | dynamic | L Tcl_CreateInterp() |\n | C Tcl_CreateCommand() | load | L Tcl_CreateCommand() |\n | L static pltcl_SPI_exec() | | C pltcl_SPI_exec() |\n +---------------------------+ +---------------------------+\n\n After loading of pltcl.so, it calls Tcl_CreateInterp() to\n build a Tcl interpreter, and then calls Tcl_CreateCommand()\n to tell that interpreter the address of one of it's hidden\n (static) functions plus a name for it from the script side.\n The interpreter just remembers this in it's command hash\n table, and if that keyword occurs when it expects a\n command/procedure name, just calls it via the function\n pointer.\n\n There is no -ltcl8.0 switch in the link step of postgres.\n The fact that pltcl.so needs something out of libtcl8.0.so is\n told when linking pltcl.so:\n\n gcc -shared -o pltcl.so pltcl.o -L/usr/local/lib -ltcl8.0\n\n That results in this:\n\n [pgsql@hot] ~ > ldd bin/postgres\n libdl.so.1 => /lib/libdl.so.1 (0x4000a000)\n libm.so.5 => /lib/libm.so.5 (0x4000d000)\n libtermcap.so.2 => /usr/lib/libtermcap.so.2 (0x40016000)\n libncurses.so.3.0 => /lib/libncurses.so.3.0 (0x4001a000)\n libc.so.5 => /lib/libc.so.5 (0x4005b000)\n [pgsql@hot] ~ > ldd lib/pltcl.so\n ./lib/pltcl.so => ./lib/pltcl.so (0x4000a000)\n libc.so.5 => /lib/libc.so.5 (0x40010000)\n libtcl8.0.so => /usr/local/lib/libtcl8.0.so (0x400cb000)\n\n As you see, there is no libtcl mentioned in the shared lib\n dependencies of the postgres backend. It's the pltcl.so\n shared object that remembers this. And if you invoke \"ldd -r\n -d pltcl.so\" it will print alot of unresolveable symbols, but\n most of them are backend symbols (the others are math ones\n because the above gcc -shared call is in fact incomplete -\n but since the backend is already linked against libm.so it\n doesn't matter :-).\n\n So if I want to use My dynamically loadable package for Tcl\n from inside the PL/Tcl interpreter, I would have to call\n My_Init() from pltcl.so AND add My.so to the linkage of\n pltcl.so. Calling My_Init() causes that \"pltcl.o\" has an\n unresolved reference to symbol _My_Init. The linker find's it\n in My.so and saves this info in pltcl.so so the dynamic\n loader can (and does) resolve it whenever something load\n pltcl.so.\n\n The important key is to reference at least one symbol in the\n shared lib you want to get automatically loaded. You can add\n as much link libs with -l as you want. 
If none of their\n symbols is needed, the linker will not save this dependency\n (because there is none) in the resulting .so.\n\n I'll give it a try and USE some binary Tcl packages from\n inside. Will tell ya soon.\n\n> Getting those two to play together is more than I care to attempt.\n> I am researching a fix now to let linux installations use dlopen\n> if it is available.\n\n Don't think you need to.\n\n> > This is just the way I would do it for Tcl and I'll surely do\n> > it someday. I would like to have a second, unsafe\n> > interpreter in the module. That could then modify files or\n> > use the frontend library to access a different database on\n> > another server. Needless to say that this then would be an\n> > untrusted language, available only for db superusers.\n> >\n>\n> Yes, I've been thinking about that as well. It would be nice to have\n> permissions based on userid. Maybe the 'suid' stuff that is being\n> discussed in another thread will gives us a mechanism.\n\n I know, I know - and I know how. It cannot work for\n \"internal\" language functions. But for anything that goes\n through some loading (dynloader or PL call hander), the fmgr\n looks up pg_proc and put's informations into the FmgrInfo\n struct. Adding a setuid field to pg_proc and remembering that\n too wouldn't be too much and it then would know when calling\n such a beast. Fmgr then manages a current user stack which\n must be reset on a transaction abort. Anything that needs the\n current user simply looks at the toplevel stack entry.\n\n This is totally transparent then for all non-builtin\n functions and all non-builtin triggers (where I don't know of\n one).\n\n Maybe I kept this far too long in mind. But I thought about\n some more complicated changes to the function call interface\n for a while that would require touching several dozens of\n source files (single argument NULL identification, returning\n tuples and tuple SET's). Doing SETUID would have been some\n DONE WHILE AT IT. I really should do it earlier than the\n SET's, because they require subselecting RTE's (which it the\n third thread now - eh - I better shut up).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 27 Jul 1999 01:01:44 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plperl intial pass"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> Mark Hollomon wrote:\n>\n> > Jan Wieck wrote:\n> > > \n> > > A dynamically loadable Tcl module contains one special\n> > > function named <libname>_Init() where first character of\n> > > libname is capitalized. On dynamic load, this function is\n> > > called with the invoking interpreter as argument. This\n> > > function then calls Tcl_CreateCommand() etc. to tell Tcl\n> > ^^^^^^^^^^^^^^^^^\n> >\n> > And here-in lies the problem. Tcl_CreateCommand is sitting, not\n> > in the executable, but in the shared-lib with the function call\n> > handler. dlopen(), by default will not link across shared-libs.\n> >\n> > postgres\n> > /-----/ \\-----\\\n> > | |\n> > plperl.so ---> Opcode.so\n> > ^^\n> > This link doesn't happen.\n> \n> But it does for PL/Tcl - at least under Linux-ELF. (C = Call\n> to, L = Location of functions code segment):\n> \n> +-------------------------+\n> | postgres |\n> +-------------------------+\n> |\n> | dynamic load\n> |\n> v\n> +---------------------------+ +---------------------------+\n> | pltcl.so |--------->| libtcl8.0.so |\n> | | auto- | |\n> | C Tcl_CreateInterp() | dynamic | L Tcl_CreateInterp() |\n> | C Tcl_CreateCommand() | load | L Tcl_CreateCommand() |\n> | L static pltcl_SPI_exec() | | C pltcl_SPI_exec() |\n> +---------------------------+ +---------------------------+\n> \n> After loading of pltcl.so, it calls Tcl_CreateInterp() to\n> build a Tcl interpreter, and then calls Tcl_CreateCommand()\n> to tell that interpreter the address of one of it's hidden\n> (static) functions plus a name for it from the script side.\n> The interpreter just remembers this in it's command hash\n> table, and if that keyword occurs when it expects a\n> command/procedure name, just calls it via the function\n> pointer.\n\n\nAHHH, now I understand the difference. By default, the perl installation\ndoes not create a shared library. It creates a static archive only.\nAnd the three linux distros that I have experience with don't force\nthe creation of the shared lib. So, my situation is:\n\n postgres\n |\n |\n +----------------------+ +-----------------+\n | plperl.so | | Opcode.so |\n | +--------------+ | | |\n | | libperl.a | <-+------------| |\n | +--------------+ | | |\n +----------------------+ +-----------------+\n\nAnd it is THAT link that I cannot get to happen without the RTLD_GLOBAL\nflag I mentioned.\n\nSorry for the confusion.\n\nHopefully you can help find a way out of this.\n\nI had a patch to change the way dynloader worked on linuxelf,\nbut over night my disk crashed. brand new UDMA/66 drive. Grrrr.\n\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Tue, 27 Jul 1999 08:54:47 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "dynloader and PLs [was: plperl intial pass]"
},
{
"msg_contents": "Jan Wieck wrote:\n\n> \n\n> Mark Hollomon wrote:\n\n>\n\n> > Yes, I've been thinking about that as well. It would be nice to have\n\n> > permissions based on userid. Maybe the 'suid' stuff that is being\n\n> > discussed in another thread will gives us a mechanism.\n\n> \n\n> I know, I know - and I know how. It cannot work for\n\n> \"internal\" language functions. But for anything that goes\n\n> through some loading (dynloader or PL call hander), the fmgr\n\n> looks up pg_proc and put's informations into the FmgrInfo\n\n> struct. Adding a setuid field to pg_proc and remembering that\n\n> too wouldn't be too much and it then would know when calling\n\n> such a beast. Fmgr then manages a current user stack which\n\n> must be reset on a transaction abort. Anything that needs the\n\n> current user simply looks at the toplevel stack entry.\n\n\n\nThat would work.\n\n\n\n> \n\n> This is totally transparent then for all non-builtin\n\n> functions and all non-builtin triggers (where I don't know of\n\n> one).\n\n> \n\n> Maybe I kept this far too long in mind. But I thought about\n\n> some more complicated changes to the function call interface\n\n> for a while that would require touching several dozens of\n\n> source files (single argument NULL identification, returning\n\n> tuples and tuple SET's). Doing SETUID would have been some\n\n> DONE WHILE AT IT. I really should do it earlier than the\n\n> SET's, because they require subselecting RTE's (which it the\n\n> third thread now - eh - I better shut up).\n\n\n\nI've been looking at returning a tuple. It looked to me that the\n\nexecutor would handle a returned tuple okay, it was just SETs that \nwould cause problems. But I suspect I am wrong.\n\n\n\nThe best I could come up with for creating the tuple was using\n\nheap_formtuple. But that requires a TupleDesc so I was going to\n\nuse heap_openr. But that needs the name of the relation which is\n\navaible from the Form_pg_data (?) structure for the return type,\n\nwhich we already must get.\n\n\n-- \n\n\n\nMark Hollomon\n\[email protected]\n\nESN 451-9008 (302)454-9008\n",
"msg_date": "Tue, 27 Jul 1999 09:01:01 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "fmgr interface [was: plperl inital pass]"
},
{
"msg_contents": "Mark Hollomon wrote:\n\n> I've been looking at returning a tuple. It looked to me that the\n>\n> executor would handle a returned tuple okay, it was just SETs that\n> would cause problems. But I suspect I am wrong.\n\n Functions returning SET's allways return SET's of tuples,\n never SET's of single values. And functions returning tuple\n (SET's) have a targetlist to specify which attribute of the\n returned tuple(s) is wanted. It is the processing of this\n funciton-call-targetlist that's actually broken in the\n executor.\n\n But it's not worth fixing it without beeing able after to use\n more than one attribute of the returned set. And that\n requires the mentioned subselecting RTE. So you could then\n say things like:\n\n SELECT X.a, X.c FROM mysetfunc('Mark') X;\n\n The next problem in returning SET's is, that PostgreSQL isn't\n a state machine - it is stack oriented. The way it was\n supposed to work with SQL language functions was this:\n\n 1. The last query in an SQL function returning a tuple SET\n is allways a SELECT.\n\n 2. When the FUNC node is first hit during execution, the\n function is called. Then the FUNC node is modified by\n the executor and references the execution tree of the\n last command in the function.\n\n 3. Subsequent function calls don't invoke the function\n again, instead functions last commands execution tree is\n asked for the next tuple.\n\n This mechanism could also work for PL functions. A PL\n function returning a SET creates a temp table. At each\n occurence of\n\n RETURN mytup AND RESUME;\n\n it adds the tuple to the temp table. If it finally really\n returns, it hands back an execution plan for a\n\n SELECT * FROM <my_invocations_temp_table>;\n\n Then again, the problem of using multiple attributes of the\n returned set remains.\n\n\n> The best I could come up with for creating the tuple was using\n>\n> heap_formtuple. But that requires a TupleDesc so I was going to\n>\n> use heap_openr. But that needs the name of the relation which is\n>\n> avaible from the Form_pg_data (?) structure for the return type,\n>\n> which we already must get.\n\n Of course, the PL function must create tuples via\n heap_formtuple(). Thus, we need a pg_class entry (etc.) for\n it. The PL handler knows the return type of the function it's\n handling from pg_proc. The corresponding pg_type entry has a\n non-zero typrelid indicating that it's a tuple type. Simply\n use heap_open() with that typrelid and you'll get it.\n\n I'd like to add a new type of relation when we go for return\n SET's.\n\n CREATE STRUCTURE structname (attname type [, ...]);\n\n It just causes another pg_class entry, but these relations\n aren't accessible by normal means and do not have an\n underlying file. Don't know if it's valid SQL syntax, but\n what else could tell the parser what type of a tuple a SET\n function will return if it's not an existing relation\n structure?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 27 Jul 1999 16:39:27 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: fmgr interface [was: plperl inital pass]"
},
{
"msg_contents": "Mark Hollomon wrote:\n\n> AHHH, now I understand the difference. By default, the perl installation\n> does not create a shared library. It creates a static archive only.\n> And the three linux distros that I have experience with don't force\n> the creation of the shared lib. So, my situation is:\n>\n> postgres\n> |\n> |\n> +----------------------+ +-----------------+\n> | plperl.so | | Opcode.so |\n> | +--------------+ | | |\n> | | libperl.a | <-+------------| |\n> | +--------------+ | | |\n> +----------------------+ +-----------------+\n>\n> And it is THAT link that I cannot get to happen without the RTLD_GLOBAL\n> flag I mentioned.\n\n Yes - we need to understand the differences. After looking at\n some perl manpages (perlxs, perlembed, perlmodlib etc.) and\n consulting Opcode.pm I see the problems clearer now.\n\n Under Tcl, you can simply type \"load <shared-object>\" to load\n a .so and cause a call to it's ..._Init() function. Whatever\n comes there, the .so's ..._Init() function will tell it. And\n since every C function that should be callable from Tcl is\n given to the interpreter as a function pointer from within\n the ..._Init(), nothing except the ..._Init() function itself\n must be really resolved. In fact, the functions called from\n Tcl can be declared static inside the shared object (what's\n true in pltcl) so there are no symbols to resolve.\n\n A safe Tcl interpreter has no load command. But the\n controlling C application can call the .so's ..._Init()\n function directly to simulate the \"load\" (well, it should be\n the ..._SafeInit(), but that's another story). Thus, a C\n application creating a safe interpreter can load modules for\n it even if the interpreter itself can't.\n\n Under Perl, a package using a shared object is allways\n surrounded by some .pm which tells to lookup symbols via the\n dynamic loader (if I understand XSUB's right). So it's still\n a type of a script that controls the Perl->C-function\n bindings, not the shared object itself.\n\n The detail I don't understand is what breaks Perl's\n dynaloader if you use it from inside of plperl.so. Since Perl\n isn't built shared, the entire libperl.a should be linked\n static into plperl.so. What's the exact error message when\n you try to USE Opcode?\n\n>\n> Sorry for the confusion.\n>\n> Hopefully you can help find a way out of this.\n>\n> I had a patch to change the way dynloader worked on linuxelf,\n\n I don't think you should change the entire dynamic loader of\n PostgreSQL for it. This could be a can of worms and you\n should be happy that these problems showed up already on your\n development platform. I don't expect that you're willing to\n fix the dynamic loading under AIX, HP-UX and Solaris too\n (maybe you can't because the lack of appropriate\n environment).\n\n> but over night my disk crashed. brand new UDMA/66 drive. Grrrr.\n\n Ech\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 27 Jul 1999 19:05:28 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: dynloader and PLs [was: plperl intial pass]"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> Mark Hollomon wrote:\n> \n> > I had a patch to change the way dynloader worked on linuxelf,\n> \n> I don't think you should change the entire dynamic loader of\n> PostgreSQL for it. This could be a can of worms and you\n> should be happy that these problems showed up already on your\n> development platform. I don't expect that you're willing to\n> fix the dynamic loading under AIX, HP-UX and Solaris too\n> (maybe you can't because the lack of appropriate\n> environment).\n> \n\nThe problem is that perl and postgres disagree as how to do\nthe dynamic loading. postgres (on linux) _Always_ use aout\nstyle dynamic loading. Perl checks to see if the system is ELF\nand use dlopen if it is. On my ELF system then, postgres is\nloading plperl.so with dl_open (?). Then perl is loading\nOpcode.so using dlopen. The problem seems to be that the symbols\nfrom libperl.a (in plperl.so) are not available for resolving\nmissing symbols in Opcode.so. The error message basically mentions\nevery perl symbol as 'unresolved'.\n\nI noticed in another thread that D'Arcy is strugling with a similar\nproblem in NetBSD.\n\nOn my system, once I got postgres and perl to agree on how to do\ndynamic loading, I got XS stuff working. The code is (mostly)\nalready in plperl.c, but ifdef'ed out.\n\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Tue, 27 Jul 1999 14:06:25 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dynloader and PLs [was: plperl intial pass]"
}
] |
[
{
"msg_contents": "Hi everyone!\n\nYou may remember me as the one who was having lots of trouble using\nPostgres 6.4.2 with spinlock, BTP_CHAIN, and other miscellaneous problems.\nI tried patches and other tricks but everything was running really badly\nand we constantly had to intervene to keep the database running.\n\nWell, last Sunday I made the decision to upgrade to Postgres 6.5 - we had\ndone some extensive thrash testing and we were very impressed with how it\nhandled the load. The machine was a lot more idle, the disks were being\nused less, and so on, when compared to similar testing performed on a\n6.4.2 database. Also, we managed to reproduce our spinlock problem under\n6.4.2 but it never appeared on 6.5, probably due to the really cool MVCC\nconcurrency code.\n\nSo we did a pg_dump and reload into 6.5, which ran very smoothly -\nespecially considering we have lots of functions, plpgsql procedures,\ntriggers, rules, views, along with a few million rows in our tables. We\nrestarted our software, and everything worked flawlessly. (Well almost..) \nIt even worked using the libpq from 6.4, although we recompiled with the\n6.5 version just in case. Everything worked right out of the box, which is\ngood because we were worried about compatiblity problems with our programs\nrunning under MVCC. We had one problem with the reload, and that was the\nproblem where the backend would get SIGSEGV when we tried to load in our\nplpgsql functions, which I've written another email about... \n\nThe only thing we had to change were a few GROUP BY clauses in our\nqueries, the parser in 6.4.2 was a bit slack in checking the arguments and\nmaking sure you did a group by on all columns not in an aggregate, and 6.5\ndidn't like some of the queries - we fixed them up and that wasn't a\nproblem any more. \n\nThe next day, everything ran like a complete dream. The machine was idle\nalmost all day (compared to very busy on the CPU and disks) even during\ntimes when Postgres was doing lots of queries. In the past, when we ran\nreally large queries during times when the machine was loaded down, we\nwould always get either BTP_CHAIN or a spinlock problem - this is no more!\nWe can thrash the machine to our hearts content and it always performs\nbrilliantly without any one else even noticing!\n\nThe postmaster has been running for a week now without requiring any form\nof human intervention, compared to once per day under 6.4.2. As you can\nimagine this is also good :)\n\nAs of now, we've only found one problem with the plpgsql functions, but\napart from that, I am *really* happy with this version of Postgres. With\n6.4.2 I was having serious doubts about the reliability of the code, it\njust wasn't working for us - we were almost ready to give up on it and go\nto Oracle or something which I can imagine has its own set of problems and\nissues as well. 6.5 on the other hand, especially with the MVCC support,\nis absolutely awesome and my faith in Postgres has been restored. This is\ndefinitely the first commercial quality release that I could recommend to\nothers. (By commercial quality, I mean reliable enough that you can thrash\nit real hard and it won't ever fall over). 6.4 and predecessors used to\nwork fine but only under lightly loaded conditions and would sometimes\nbreak with wierd error messages.\n\nSo for anyone who is using anything less than 6.5, you should not walk,\nbut *RUN* to your local mirror site and grab a copy of 6.5 and install\nimmediately! It is absolutely awesome .... 
Probably the most exciting part\nabout it is the MVCC stuff, it puts Postgres up there in the big leagues\nand you don't have to lock whole tables when doing writes, this makes a\nhuge difference. Also the snapshot backups are really cool too - my\nproject runs 24 hours per day and it can't be shut down to perform backups\nso this is a real plus.\n\nI'd just like to offer my congratulations and thanks to all the developers\nof Postgres for your efforts and work - it is truly a great program and\nI hope to be able to help more with its development and support.\n\n[I hope this information is useful to the developers, I think someone\nsaid they wanted some feedback on 6.5]\n\nMVCC!\n\nRegards,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n",
"msg_date": "Sat, 24 Jul 1999 13:22:05 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres 6.5 Is Fantastic!"
}
] |
[
{
"msg_contents": "Hi,\n\nI sent a bug report in a week ago about problems with pg_proc and getting\nSIGSEGV when trying to create largish functions.\n\n[please see my previous posting for details]\nI'm not sure what to do here, I tried dropping the pg_proc_prosrc_index\nand it died, and so I have to leave it hanging around.\n\nAlso, my functions are all quite small, (less than 2k) except for two of\nthem, which are just over 2k, but nowhere near 4k in size which would\ncause the btree index to die as someone mentioned.\n\nSo can someone give me some advice? Do you want me to provide a stack dump\nor something? \n\nRight now I'm reluctant to play with my plpgsql functions because I'm\nscared its going to die and I wont be able to reload them back in.\n\nIn order to get my pg_dump to reload I had to do a few hacks, like\ncreating some functions before reloading the backup to cause them to get\nloaded in a different order. pg_proc can also be corrupted with BTP_CHAIN\nand things as a result of this problem as well.\n\nciao,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n",
"msg_date": "Sat, 24 Jul 1999 13:38:58 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "SIGSEGV on CREATE FUNCTION with plpgsql"
},
{
"msg_contents": "Wayne Piekarski <[email protected]> writes:\n> So can someone give me some advice? Do you want me to provide a stack dump\n> or something? \n\nA stack trace might help --- I'm not sure why you are seeing this\nproblem if there are no functions approaching 4k of text.\n\n> Right now I'm reluctant to play with my plpgsql functions because I'm\n> scared its going to die and I wont be able to reload them back in.\n\nYou can play with them in a playpen installation... I wouldn't do that\nsort of testing on a production installation either. A playpen is\nalways a good thing to have. Note you can put multiple playpens on\none machine --- all you need is a separate data directory and socket\nnumber for each one.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Jul 1999 11:54:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SIGSEGV on CREATE FUNCTION with plpgsql "
},
{
"msg_contents": ">\n> Wayne Piekarski <[email protected]> writes:\n> > So can someone give me some advice? Do you want me to provide a stack dump\n> > or something?\n>\n> A stack trace might help --- I'm not sure why you are seeing this\n> problem if there are no functions approaching 4k of text.\n\n Would be interesting if the problem is index related. I still\n wonder (while looking at the code) what that\n ProcedureSrcIndex is really good for.\n\n I've tracked it down that it is only once used in pg_proc.c\n to check if an sql language function that implements a SET\n already exists (weired method to do IMHO). The code was\n already there in version 1.1 (initial load) of the code, so\n it might be an old Postgres 4.2 thing that's obsolete.\n\n Additionally, very doubtful, is the fact that we considered\n functions returning SET's as broken, so again I wonder if\n there's any code that automatically creates such functions\n (if not created automatically like the _RET rules of views\n are, identifying by this wouldn't allways work). The\n targetlists attached to SET functions don't work, so I assume\n removing the index wouldn't break anything.\n\n I'll dig out the 4.2 sources and search for a reason for that\n index there. If I find anything, I can check if that's still\n in our code.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 26 Jul 1999 11:03:59 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SIGSEGV on CREATE FUNCTION with plpgsql"
},
{
"msg_contents": "> Wayne Piekarski <[email protected]> writes:\n> > So can someone give me some advice? Do you want me to provide a stack dump\n> > or something? \n> \n> A stack trace might help --- I'm not sure why you are seeing this\n> problem if there are no functions approaching 4k of text.\n\nOk, I've got a test postgres set up which I used for my profiling so I'll\nhave a play and get a stack dump and see if I can work out whats causing\nthis. Been busy lately so haven't had a chance ... more later on this. \n\n> > Right now I'm reluctant to play with my plpgsql functions because I'm\n> > scared its going to die and I wont be able to reload them back in.\n> \n> You can play with them in a playpen installation... I wouldn't do that\n> sort of testing on a production installation either. A playpen is\n> always a good thing to have. Note you can put multiple playpens on\n> one machine --- all you need is a separate data directory and socket\n> number for each one.\n\nI've got multiple testing databases, but due to the random nature of the\nproblem, the functions will reload normally, but when the pg_dump output\nreloads them, it does it in a different order and dies. So it is dependent\non order and a bunch of other things, and so even with testing, I still\ncan't be sure it won't break the real production database.\n\nI'll have a look through the code and see if I can spot something obvious.\nMy functions are all <= 1k and so I'm miles away from 2k or 4k problems\nwith btree indices.\n\nthanks,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n\n",
"msg_date": "Sat, 7 Aug 1999 17:46:18 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SIGSEGV on CREATE FUNCTION with plpgsql"
}
] |
[
{
"msg_contents": "Hello, Could you please help me with my problem. I am currenlty implementing somewhat of a proxy. Meaning, there will be a lot of sql queries doing at the same time. Every URL address will be check whether it exist in the database. After opening a lot of sites, suddenly i receive a \"backend message type 0x50 arrived while idle\". Please reply as soon as possible because we have a deadline for the project. Thank you in advance.\n\nBy the way i am using postgres 6.4.2 running on a Pentuim II-350 with 64 Megs RAM\n\nDonny Ryan Chong\n\n\n\n",
"msg_date": "24 Jul 99 16:47:08 +0800",
"msg_from": "Donny Ryan Chong <[email protected]>",
"msg_from_op": true,
"msg_subject": "Please help re backend message type 0x50"
}
] |
[
{
"msg_contents": "I did some benchmarks of my Web site and notice I lost some hits\nwhich I accumulate in postgres (6.5.1) database on Linux 2.0.36 system\n\nHere is what I had before testing - 181 hits for msg_id=1463\n\n 1463| 181|Sat 24 Jul 12:12:24 1999 MSD|Sat 24 Jul 12:12:34 1999 MSD\n(11 rows)\n\n12:12[zeus]:/usr/local/apache/bin>ab -c 20 -n 200 http://astronet.sai.msu.su/db/pubs.html\\?msg_id=1463; psql discovery -c 'select * from hits where msg_id=1463;'\n\nAfter running 20 concurent connections, total number requests of 200 I \nexpected hit count must be increased by 200, but some hits doesn't recorded.\ntest reports all requests completed successfully and there were nothing\nwrong in apache error logs. It's interesting that sometimes I got even\n*more* hits than expected ! I didn't noticed any problem if I use smaller\nnumber of concurrent connections. \nI didn't use explicit locking - just insert/update into table using\nplpgsql function. Do I need something special to take care many concurrent\ninserts/updates ?\n\n\tRegards,\n\n\t\tOleg\n\n\n\nHere is my test results:\n\n\nThis is ApacheBench, Version 1.3\nCopyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/\nCopyright (c) 1998-1999 The Apache Group, http://www.apache.org/\n\nServer Software: Apache/1.3.6\nServer Hostname: astronet.sai.msu.su\nServer Port: 80\n\nDocument Path: /db/pubs.html?msg_id=1463\nDocument Length: 3564 bytes\n\nConcurrency Level: 20\nTime taken for tests: 10.120 seconds\nComplete requests: 200\nFailed requests: 0\nTotal transferred: 769800 bytes\nHTML transferred: 712800 bytes\nRequests per second: 19.76\nTransfer rate: 76.07 kb/s received\n\nConnnection Times (ms)\n min avg max\nConnect: 0 58 380\nProcessing: 58 734 4919\nTotal: 58 792 5299\nmsg_id|count|first_access |last_access\n------+-----+----------------------------+----------------------------\n 1463| 370|Sat 24 Jul 12:12:24 1999 MSD|Sat 24 Jul 12:13:24 1999 MSD\n(1 row)\n ^^^^\n must be 381\n\nHere is a entry from apache config file:\n\n--------------------------------\nPerlModule Apache::HitsDBI\n<Location /db>\n PerlLogHandler Apache::HitsDBI\n</Location>\n\n---------------------------------\npackage Apache::HitsDBI;\nuse Apache::Constants qw(:common);\n\nuse strict;\n# preloaded in startup.pl\n#use DBI ();\n\nsub handler {\n my $orig = shift;\n my $url = $orig->uri;\n my $args = $orig->args();\n if ( $url =~ /pubs\\.html/ && $args =~ /msg_id=(\\d+)/ ) {\n my $dbh = DBI->connect(\"dbi:Pg:dbname=discovery\") || die DBI->errstr;\n my $sth = $dbh->do(\"SELECT acc_hits($1)\") || die $dbh->errstr;\n } \n return OK;\n}\n\n1;\n__END__\n\n-------------------------------\ncreate table hits ( \n msg_id int4 not null,\n count int4 not null,\n first_access datetime default now(),\n last_access datetime\n);\ncreate index idx_hits on hits(msg_id);\n\nCREATE FUNCTION \"acc_hits\" (int4) RETURNS int4 AS '\nDeclare\n keyval Alias For $1;\n cnt int4;\n curtime datetime;\nBegin\n curtime := ''now'';\n Select count into cnt from hits where msg_id = keyval;\n if Not Found then\n cnt := 1;\n -- first_access inserted on default, last_access is NULL\n Insert Into hits (msg_id,count) values (keyval, cnt);\n else\n cnt := cnt + 1;\n Update hits set count = cnt,last_access = curtime where msg_id = keyval;\n End If;\n return cnt;\nEnd;\n' LANGUAGE 'plpgsql';\n\n---------------------------------\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical 
Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n",
"msg_date": "Sat, 24 Jul 1999 13:48:29 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "inserts/updates problem under stressing !"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> I did some benchmarks of my Web site and notice I lost some hits\n> which I accumulate in postgres (6.5.1) database on Linux 2.0.36 system\n\n> CREATE FUNCTION \"acc_hits\" (int4) RETURNS int4 AS '\n> Declare\n> keyval Alias For $1;\n> cnt int4;\n> curtime datetime;\n> Begin\n> curtime := ''now'';\n> Select count into cnt from hits where msg_id = keyval;\n> if Not Found then\n> cnt := 1;\n> -- first_access inserted on default, last_access is NULL\n> Insert Into hits (msg_id,count) values (keyval, cnt);\n> else\n> cnt := cnt + 1;\n> Update hits set count = cnt,last_access = curtime where msg_id = keyval;\n> End If;\n> return cnt;\n> End;\n> ' LANGUAGE 'plpgsql';\n\nI wonder whether this doesn't have a problem with concurrent access:\n\n1. Transaction A does 'Select count into cnt', gets (say) 200.\n2. Transaction B does 'Select count into cnt', gets 200.\n3. Transaction A writes 201 into hits record.\n4. Transaction B writes 201 into hits record.\n\nand variants thereof. (Even if A has already written 201, I don't think\nB will see it until A has committed...)\n\nI am not too clear on MVCC yet, but I think you need \"SELECT FOR UPDATE\"\nor possibly an explicit lock on the hits table in order to avoid this\nproblem. Vadim, any comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Jul 1999 12:29:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] inserts/updates problem under stressing ! "
},
{
"msg_contents": "Tom,\n\nI just posted my latest results and it seems I have no\nproblem at all at home - numbers from access_log and and database\nare consistent. They are diffrent from what Apache Benchmarks reports\nbut I'm fine ( I think ab reports something different :-)\nI see the problem at work - Linux SMP. As I posted running test cause\nduplicated records in database ! Could be SMP somehow affects to\npostgres under stressing ? I'm developing rather big informational \nWeb channel with all content generated from postgres database and\nworry about reliability. Performance is ok. but simple logging to db\ngetting me totally lost ! \n\nDoes somebody has an experience with SMP+postgres under high stressing. \nProbably we need some pages on Postgres Web server with \nrecommendations and experience from real life. Especially after\nintroducing of MVCC ! I've seen in mailing lists several threads\nabout administrations of postgres in 27*7*365 systems but never got\na final opinion what's the best and safe. Probably this is my\nproblem :-) But it might be more usefull if some expert could summarize\ndiscusion and submit summary to www.postgresql.org\n\n\tRegards,\n\t\tOleg\n\nOn Sat, 24 Jul 1999, Tom Lane wrote:\n\n> Date: Sat, 24 Jul 1999 12:29:06 -0400\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected], [email protected]\n> Subject: Re: [SQL] inserts/updates problem under stressing ! \n> \n> Oleg Bartunov <[email protected]> writes:\n> > I did some benchmarks of my Web site and notice I lost some hits\n> > which I accumulate in postgres (6.5.1) database on Linux 2.0.36 system\n> \n> > CREATE FUNCTION \"acc_hits\" (int4) RETURNS int4 AS '\n> > Declare\n> > keyval Alias For $1;\n> > cnt int4;\n> > curtime datetime;\n> > Begin\n> > curtime := ''now'';\n> > Select count into cnt from hits where msg_id = keyval;\n> > if Not Found then\n> > cnt := 1;\n> > -- first_access inserted on default, last_access is NULL\n> > Insert Into hits (msg_id,count) values (keyval, cnt);\n> > else\n> > cnt := cnt + 1;\n> > Update hits set count = cnt,last_access = curtime where msg_id = keyval;\n> > End If;\n> > return cnt;\n> > End;\n> > ' LANGUAGE 'plpgsql';\n> \n> I wonder whether this doesn't have a problem with concurrent access:\n> \n> 1. Transaction A does 'Select count into cnt', gets (say) 200.\n> 2. Transaction B does 'Select count into cnt', gets 200.\n> 3. Transaction A writes 201 into hits record.\n> 4. Transaction B writes 201 into hits record.\n> \n> and variants thereof. (Even if A has already written 201, I don't think\n> B will see it until A has committed...)\n> \n> I am not too clear on MVCC yet, but I think you need \"SELECT FOR UPDATE\"\n> or possibly an explicit lock on the hits table in order to avoid this\n> problem. Vadim, any comments?\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sat, 24 Jul 1999 21:00:45 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] inserts/updates problem under stressing ! "
},
{
"msg_contents": "On Sat, 24 Jul 1999, Oleg Bartunov wrote:\n\n> Date: Sat, 24 Jul 1999 21:00:45 +0400 (MSD)\n> From: Oleg Bartunov <[email protected]>\n> To: Tom Lane <[email protected]>\n> Cc: [email protected], [email protected]\n> Subject: [HACKERS] Re: [SQL] inserts/updates problem under stressing ! \n> \n> Tom,\n> \n> I just posted my latest results and it seems I have no\n> problem at all at home - numbers from access_log and and database\n\nBlin, just add stressing at home and also got duplicates !\nfunny, that at home I have P166, 64Mb system but had to raise\na number of concurrent connections to 20 to get duplicates.\nAt work I got them already at 10 concurrent connections.\nProbably this fact illustrates a big progress in Linux kernel development - \nI run at home 2.2.10 version while at work - 2.0.36 SMP.\n\n\tRegards,\n\n\t\tOleg\n\n\n> are consistent. They are diffrent from what Apache Benchmarks reports\n> but I'm fine ( I think ab reports something different :-)\n> I see the problem at work - Linux SMP. As I posted running test cause\n> duplicated records in database ! Could be SMP somehow affects to\n> postgres under stressing ? I'm developing rather big informational \n> Web channel with all content generated from postgres database and\n> worry about reliability. Performance is ok. but simple logging to db\n> getting me totally lost ! \n> \n> Does somebody has an experience with SMP+postgres under high stressing. \n> Probably we need some pages on Postgres Web server with \n> recommendations and experience from real life. Especially after\n> introducing of MVCC ! I've seen in mailing lists several threads\n> about administrations of postgres in 27*7*365 systems but never got\n> a final opinion what's the best and safe. Probably this is my\n> problem :-) But it might be more usefull if some expert could summarize\n> discusion and submit summary to www.postgresql.org\n> \n> \tRegards,\n> \t\tOleg\n> \n> On Sat, 24 Jul 1999, Tom Lane wrote:\n> \n> > Date: Sat, 24 Jul 1999 12:29:06 -0400\n> > From: Tom Lane <[email protected]>\n> > To: Oleg Bartunov <[email protected]>\n> > Cc: [email protected], [email protected]\n> > Subject: Re: [SQL] inserts/updates problem under stressing ! \n> > \n> > Oleg Bartunov <[email protected]> writes:\n> > > I did some benchmarks of my Web site and notice I lost some hits\n> > > which I accumulate in postgres (6.5.1) database on Linux 2.0.36 system\n> > \n> > > CREATE FUNCTION \"acc_hits\" (int4) RETURNS int4 AS '\n> > > Declare\n> > > keyval Alias For $1;\n> > > cnt int4;\n> > > curtime datetime;\n> > > Begin\n> > > curtime := ''now'';\n> > > Select count into cnt from hits where msg_id = keyval;\n> > > if Not Found then\n> > > cnt := 1;\n> > > -- first_access inserted on default, last_access is NULL\n> > > Insert Into hits (msg_id,count) values (keyval, cnt);\n> > > else\n> > > cnt := cnt + 1;\n> > > Update hits set count = cnt,last_access = curtime where msg_id = keyval;\n> > > End If;\n> > > return cnt;\n> > > End;\n> > > ' LANGUAGE 'plpgsql';\n> > \n> > I wonder whether this doesn't have a problem with concurrent access:\n> > \n> > 1. Transaction A does 'Select count into cnt', gets (say) 200.\n> > 2. Transaction B does 'Select count into cnt', gets 200.\n> > 3. Transaction A writes 201 into hits record.\n> > 4. Transaction B writes 201 into hits record.\n> > \n> > and variants thereof. 
(Even if A has already written 201, I don't think\n> > B will see it until A has committed...)\n> > \n> > I am not too clear on MVCC yet, but I think you need \"SELECT FOR UPDATE\"\n> > or possibly an explicit lock on the hits table in order to avoid this\n> > problem. Vadim, any comments?\n> > \n> > \t\t\tregards, tom lane\n> > \n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sat, 24 Jul 1999 21:23:20 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [SQL] inserts/updates problem under stressing ! "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> I wonder whether this doesn't have a problem with concurrent access:\n> \n> 1. Transaction A does 'Select count into cnt', gets (say) 200.\n> 2. Transaction B does 'Select count into cnt', gets 200.\n> 3. Transaction A writes 201 into hits record.\n> 4. Transaction B writes 201 into hits record.\n> \n> and variants thereof. (Even if A has already written 201, I don't think\n> B will see it until A has committed...)\n\nYou're right, Tom.\n\n> I am not too clear on MVCC yet, but I think you need \"SELECT FOR UPDATE\"\n> or possibly an explicit lock on the hits table in order to avoid this\n> problem. Vadim, any comments?\n\nSELECT FOR UPDATE will not help: if there was not record for\nparticular key then nothing will be locked and\n",
"msg_date": "Mon, 26 Jul 1999 10:39:14 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] inserts/updates problem under stressing !"
},
{
"msg_contents": "(Sorry for incomplete prev message).\n\nTom Lane wrote:\n> \n> I wonder whether this doesn't have a problem with concurrent access:\n> \n> 1. Transaction A does 'Select count into cnt', gets (say) 200.\n> 2. Transaction B does 'Select count into cnt', gets 200.\n> 3. Transaction A writes 201 into hits record.\n> 4. Transaction B writes 201 into hits record.\n> \n> and variants thereof. (Even if A has already written 201, I don't think\n> B will see it until A has committed...)\n\nYou're right, Tom.\n\n> I am not too clear on MVCC yet, but I think you need \"SELECT FOR UPDATE\"\n> or possibly an explicit lock on the hits table in order to avoid this\n> problem. Vadim, any comments?\n\nSELECT FOR UPDATE will not help: if there was no record for\nparticular key then nothing will be locked and two records with\nthe same key will be inserted.\n\nOleg, use LOCK IN SHARE ROW EXCLUSIVE MODE.\n\nVadim\n",
"msg_date": "Mon, 26 Jul 1999 10:43:00 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] inserts/updates problem under stressing !"
},
{
"msg_contents": "On Mon, 26 Jul 1999, Vadim Mikheev wrote:\n\n> Date: Mon, 26 Jul 1999 10:43:00 +0800\n> From: Vadim Mikheev <[email protected]>\n> To: Tom Lane <[email protected]>\n> Cc: Oleg Bartunov <[email protected]>, [email protected],\n> [email protected]\n> Subject: Re: [SQL] inserts/updates problem under stressing !\n> \n> (Sorry for incomplete prev message).\n> \n> Tom Lane wrote:\n> > \n> > I wonder whether this doesn't have a problem with concurrent access:\n> > \n> > 1. Transaction A does 'Select count into cnt', gets (say) 200.\n> > 2. Transaction B does 'Select count into cnt', gets 200.\n> > 3. Transaction A writes 201 into hits record.\n> > 4. Transaction B writes 201 into hits record.\n> > \n> > and variants thereof. (Even if A has already written 201, I don't think\n> > B will see it until A has committed...)\n> \n> You're right, Tom.\n> \n> > I am not too clear on MVCC yet, but I think you need \"SELECT FOR UPDATE\"\n> > or possibly an explicit lock on the hits table in order to avoid this\n> > problem. Vadim, any comments?\n> \n> SELECT FOR UPDATE will not help: if there was no record for\n> particular key then nothing will be locked and two records with\n> the same key will be inserted.\n> \n> Oleg, use LOCK IN SHARE ROW EXCLUSIVE MODE.\n\nThanks Vadim. Just tried this, but still I see a difference between\ncount hits (accumulated) from db and access_log. In my test these numbers are:\n95 and 109. So I lost 14 hits ! And no errors !\nIn my handler I have now:\n\nmy $sth = $dbh->do(\"LOCK TABLE hits IN SHARE ROW EXCLUSIVE MODE\");\nmy $sth = $dbh->do(\"SELECT acc_hits($1)\") || die $dbh->errstr;\n\nam I right ?\n\nI created hits table as:\ncreate table hits ( \n msg_id int4 not null primary key,\n count int4 not null,\n first_access datetime default now(),\n last_access datetime\n);\n\nand in error_log sometimes I see \nERROR: Cannot insert a duplicate key into a unique index\nHow this could be possible if I use \nLOCK TABLE hits IN SHARE ROW EXCLUSIVE MODE ?\n\n\n\tOleg\n\nPS.\nI remind my functions is:\n\nCREATE FUNCTION \"acc_hits\" (int4) RETURNS int4 AS '\nDeclare\n keyval Alias For $1;\n cnt int4;\n curtime datetime;\nBegin\n curtime := ''now'';\n Select count into cnt from hits where msg_id = keyval;\n if Not Found then\n cnt := 1;\n -- first_access inserted on default, last_access is NULL\n Insert Into hits (msg_id,count) values (keyval, cnt);\n else\n cnt := cnt + 1;\n Update hits set count = cnt,last_access = curtime where msg_id = keyval\n End If;\n return cnt;\nEnd;\n' LANGUAGE 'plpgsql';\n\n\n\n> \n> Vadim\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 26 Jul 1999 10:13:54 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] inserts/updates problem under stressing !"
},
{
"msg_contents": "Oleg Bartunov wrote:\n> \n> >\n> > SELECT FOR UPDATE will not help: if there was no record for\n> > particular key then nothing will be locked and two records with\n> > the same key will be inserted.\n> >\n> > Oleg, use LOCK IN SHARE ROW EXCLUSIVE MODE.\n> \n> Thanks Vadim. Just tried this, but still I see a difference between\n> count hits (accumulated) from db and access_log. In my test these numbers are:\n> 95 and 109. So I lost 14 hits ! And no errors !\n> In my handler I have now:\n> \n> my $sth = $dbh->do(\"LOCK TABLE hits IN SHARE ROW EXCLUSIVE MODE\");\n> my $sth = $dbh->do(\"SELECT acc_hits($1)\") || die $dbh->errstr;\n> \n> am I right ?\n\nYou should run LOCK and SELECT inside BEGIN/END (i.e. in\nthe same transaction), do you?\n\nVadim\n",
"msg_date": "Mon, 26 Jul 1999 14:26:06 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] inserts/updates problem under stressing !"
},
{
"msg_contents": "On Mon, 26 Jul 1999, Vadim Mikheev wrote:\n\n> Date: Mon, 26 Jul 1999 14:26:06 +0800\n> From: Vadim Mikheev <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: Tom Lane <[email protected]>, [email protected],\n> [email protected]\n> Subject: Re: [SQL] inserts/updates problem under stressing !\n> \n> Oleg Bartunov wrote:\n> > \n> > >\n> > > SELECT FOR UPDATE will not help: if there was no record for\n> > > particular key then nothing will be locked and two records with\n> > > the same key will be inserted.\n> > >\n> > > Oleg, use LOCK IN SHARE ROW EXCLUSIVE MODE.\n> > \n> > Thanks Vadim. Just tried this, but still I see a difference between\n> > count hits (accumulated) from db and access_log. In my test these numbers are:\n> > 95 and 109. So I lost 14 hits ! And no errors !\n> > In my handler I have now:\n> > \n> > my $sth = $dbh->do(\"LOCK TABLE hits IN SHARE ROW EXCLUSIVE MODE\");\n> > my $sth = $dbh->do(\"SELECT acc_hits($1)\") || die $dbh->errstr;\n> > \n> > am I right ?\n> \n> You should run LOCK and SELECT inside BEGIN/END (i.e. in\n> the same transaction), do you?\n\nGood question.\n\nI use perl DBI interface to work with postgres and I supposed it does\ntransaction automatically. Will check it.\nAha, got the problem. Now everything works !!!\n\n\tTnanks again,\n\n\t\tOleg\n\nSo, here is a working handler to *accumulate* hit statistics.\n\npackage Apache::HitsDBI;\nuse Apache::Constants qw(:common);\n\nuse strict;\n# preloaded in startup.pl\n#use DBI ();\n\nsub handler {\n my $orig = shift;\n my $url = $orig->uri;\n if ( $orig->args() =~ /msg_id=(\\d+)/ ) {\n my $dbh = DBI->connect(\"dbi:Pg:dbname=discovery\") || die DBI->errstr;\n $dbh->{AutoCommit} = 0;\n my $sth = $dbh->do(\"LOCK TABLE hits IN SHARE ROW EXCLUSIVE MODE\") || die $dbh->errstr;\n my $sth = $dbh->do(\"SELECT acc_hits($1)\") || die $dbh->errstr;\n my $rc = $dbh->commit || die $dbh->errstr;\n }\n return OK;\n}\n\n1;\n__END__\n\n\n\n\tOleg\n> \n> Vadim\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 26 Jul 1999 10:49:11 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] inserts/updates problem under stressing !"
},
{
"msg_contents": "> >\n> > my $sth = $dbh->do(\"LOCK TABLE hits IN SHARE ROW EXCLUSIVE MODE\");\n> > my $sth = $dbh->do(\"SELECT acc_hits($1)\") || die $dbh->errstr;\n> >\n> > am I right ?\n> \n> You should run LOCK and SELECT inside BEGIN/END (i.e. in\n> the same transaction), do you?\n\nYes, in DBI that translates to switching AutoCommit off, and doing an\nexplicit commit, (roughly)\n\n\t$dbh->{AutoCommit} = 0;\n\teval {\n\t $dbh->do (...)\n\t ...\n\t};\n\tif ($@) {\n\t // There was an error\n\t $dbh->rollback();\n \t} else {\n\t $dbh->commit();\n }\n\nI think you need to set RaiseError=>1 as well when connecting to the\ndatabase, to get die's inside the eval.\n\nAdriaan\n",
"msg_date": "Mon, 26 Jul 1999 09:57:18 +0300",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] inserts/updates problem under stressing !"
}
] |
[
{
"msg_contents": "I answer to my previous post:\n\nProbably ab reports wrong number of requests and \nrecords from access_log and hits from database are consistent,\n\nSo, probably there are no problem with database,\nbut I'd like to know do I need something else to safely log\ninto database.\n\n\tRegards,\n\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n---------- Forwarded message ----------\nDate: Sat, 24 Jul 1999 13:48:29 +0400 (MSD)\nFrom: Oleg Bartunov <[email protected]>\nTo: [email protected]\nCc: [email protected]\nSubject: inserts/updates problem under stressing !\n\nI did some benchmarks of my Web site and notice I lost some hits\nwhich I accumulate in postgres (6.5.1) database on Linux 2.0.36 system\n\nHere is what I had before testing - 181 hits for msg_id=1463\n\n 1463| 181|Sat 24 Jul 12:12:24 1999 MSD|Sat 24 Jul 12:12:34 1999 MSD\n(11 rows)\n\n12:12[zeus]:/usr/local/apache/bin>ab -c 20 -n 200 http://astronet.sai.msu.su/db/pubs.html\\?msg_id=1463; psql discovery -c 'select * from hits where msg_id=1463;'\n\nAfter running 20 concurent connections, total number requests of 200 I \nexpected hit count must be increased by 200, but some hits doesn't recorded.\ntest reports all requests completed successfully and there were nothing\nwrong in apache error logs. It's interesting that sometimes I got even\n*more* hits than expected ! I didn't noticed any problem if I use smaller\nnumber of concurrent connections. \nI didn't use explicit locking - just insert/update into table using\nplpgsql function. 
Do I need something special to take care many concurrent\ninserts/updates ?\n\n\tRegards,\n\n\t\tOleg\n\n\n\nHere is my test results:\n\n\nThis is ApacheBench, Version 1.3\nCopyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/\nCopyright (c) 1998-1999 The Apache Group, http://www.apache.org/\n\nServer Software: Apache/1.3.6\nServer Hostname: astronet.sai.msu.su\nServer Port: 80\n\nDocument Path: /db/pubs.html?msg_id=1463\nDocument Length: 3564 bytes\n\nConcurrency Level: 20\nTime taken for tests: 10.120 seconds\nComplete requests: 200\nFailed requests: 0\nTotal transferred: 769800 bytes\nHTML transferred: 712800 bytes\nRequests per second: 19.76\nTransfer rate: 76.07 kb/s received\n\nConnnection Times (ms)\n min avg max\nConnect: 0 58 380\nProcessing: 58 734 4919\nTotal: 58 792 5299\nmsg_id|count|first_access |last_access\n------+-----+----------------------------+----------------------------\n 1463| 370|Sat 24 Jul 12:12:24 1999 MSD|Sat 24 Jul 12:13:24 1999 MSD\n(1 row)\n ^^^^\n must be 381\n\nHere is a entry from apache config file:\n\n--------------------------------\nPerlModule Apache::HitsDBI\n<Location /db>\n PerlLogHandler Apache::HitsDBI\n</Location>\n\n---------------------------------\npackage Apache::HitsDBI;\nuse Apache::Constants qw(:common);\n\nuse strict;\n# preloaded in startup.pl\n#use DBI ();\n\nsub handler {\n my $orig = shift;\n my $url = $orig->uri;\n my $args = $orig->args();\n if ( $url =~ /pubs\\.html/ && $args =~ /msg_id=(\\d+)/ ) {\n my $dbh = DBI->connect(\"dbi:Pg:dbname=discovery\") || die DBI->errstr;\n my $sth = $dbh->do(\"SELECT acc_hits($1)\") || die $dbh->errstr;\n } \n return OK;\n}\n\n1;\n__END__\n\n-------------------------------\ncreate table hits ( \n msg_id int4 not null,\n count int4 not null,\n first_access datetime default now(),\n last_access datetime\n);\ncreate index idx_hits on hits(msg_id);\n\nCREATE FUNCTION \"acc_hits\" (int4) RETURNS int4 AS '\nDeclare\n keyval Alias For $1;\n cnt int4;\n curtime datetime;\nBegin\n curtime := ''now'';\n Select count into cnt from hits where msg_id = keyval;\n if Not Found then\n cnt := 1;\n -- first_access inserted on default, last_access is NULL\n Insert Into hits (msg_id,count) values (keyval, cnt);\n else\n cnt := cnt + 1;\n Update hits set count = cnt,last_access = curtime where msg_id = keyval;\n End If;\n return cnt;\nEnd;\n' LANGUAGE 'plpgsql';\n\n---------------------------------\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n\n",
"msg_date": "Sat, 24 Jul 1999 14:41:59 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: inserts/updates problem under stressing !"
}
] |
[
{
"msg_contents": "At 08:54 23/07/99 -0300, The Hermit Hacker wrote:\n>\n>Can't we do this already with views?\n>\n\nNot really. A combination of Views, Triggers and Rules will almost do it,\nbut at the expense of being harder to maintain and more difficult to\nunderstand. It may be worth giving a real-world example:\n\nCreate Table Access_Codes(ACCESS_CODE Char(4), DESCRIPTION Varchar(62));\nInsert into ACCESS_CODES Values('SUPR','User may perform any action');\n...+various others\n\nCreate Table USER_ACCESS(USER_ID Int4, ACCESS_CODE Char(4));\n\nCreate Table USERS(USER_ID Int4, USERNAME Varchar(30));\n\nCreate Table GROUPS(GROUP_ID Int4, GROUP_NAME Varchar(30));\n\nCreate Table USER_GROUPS(GROUP_ID Int4, USER_ID Int4);\nInsert Into...etc\n\nThe idea is to have 'ACCESS_CODES' function like priviledges - possibly\noverriding group membership, and have groups function a lot like unix groups.\n\nNext define the things you want to control (in my case documents stored as\nblobs):\n\nCreate Table DOCUMENTS(DOCUMENT_ID Int4, DOCUMENT_SOURCE <Blob>, ....) etc.\n\nCreate Table DOCUMENT_GROUPS(DOCUMENT_ID Int4, GROUP_ID Int4);\n\nThe idea is that documents can be members of groups, and that a user must\nbe a member of a group before they can change the document.\n\nNext write the 'update' procedure:\n\nCREATE FUNCTION Update_Document (int4,...<args>...) \n RETURNS Varchar(255) AS '\nDeclare\n DocID Alias for $1;\n UserID int4;\n Msg\tVarchar(255);\n isOK\tint4;\n...declare some other stuff..\nBegin\n Set :isOK = 1;\n Set Msg = 'OK';\n Set UserID = (Select USER_ID From USERS Where USERNAME = CURRENT_USER;\n If not exists(Select * From USER_GROUPS UG, DOCUMENT_GROUPS DG Where\n UG.USER_ID = UserID\n\t\t\tAnd DG.GROUP_ID = UG.GROUP_ID\n And DG.DOCUMENT_ID = DocID) Then\n\n If Not Exists(Select * From USER_ACCESS Where USER_ID = UserID \n and ACCESS_CODE = 'SUPR') \n Then\n Set :isOK = False;\n Set :Msg = 'User has no access to document';\n End If;\n End If;\n\n If isOK == 1 Then\n <Do The Update>;\n End If;\n\n Return Msg;\n\nEnd;\n\nAnd finally, set the table protections:\n\nRevoke All On Table <All> from <All>;\nGrant All On Table <All> To SPECIAL_USER;\n\nGrant Execute on Function UPDATE_DOCUMENT To Public;\n\nSet Authorization On Function UPDATE_DOCUMENT To SPECIAL_USER;\n^\n|\n+-- This is the important bit.\n\n\nWhat we now have is a table that can only be updated according to a set of\nrules contained in one procedure, and which returns a useful error message\nwhen it fails. The rules for access can be as complex as you like, and this\nsystem does not preclude the use of triggers to enforce both integrity and\nfurther security.\n \nThe same could probably be achieved using rules and triggers for updates,\nbut would not return a nice message on failure, and would, IMO, be less\n'clear'.\n\nSorry for the length of the example, but I hope it puts things a little\nmore clearly.\n\n>On Fri, 23 Jul 1999, Philip Warner wrote:\n>\n>> A very useful feature in some database systems is the ability to\nrestrict who can run certain external or stored procedures, and to grant\nextra access rights to users when they do run those procedures.\n>> \n>> The usefulness of this may not be imediately obvious, but it is a very\npowerful feature, especially for preserving integrity and security:\n>> \n>> Simple uses include:\n>> \n>> 1. Make all tables 'read-only', then all updates must happen through\nprocedures. The procedures can make data-based security checks, and can\nensure integrity.\n>> \n>> 2. 
Make some tables unreadable, then data can only be retrieved via\nprocedures. Once again, data-based security can be achieved.\n>> \n>> The way this is implemented it to specify that when a procedure is run\nby *any* user, the procedure runs with the access rights of another\nuser/group/entity. \n>> \n>> Procedures must also have security associated with them: it is necessary\nto grant 'execute' access on procedures to the users who need to execute them.\n>> \n>> Since this *seems* like it is not likely to get too far into the\ninternals of the optimizer, and seems to be an area that is not under\nactive development by others, and since I am looking for a way to\ncontribute to development, I would be interested in comments that:\n>> \n>> 1. Tell me if this is much bigger than I think it is.\n>> 2. Tell me if it sounds useful.\n>> 3. Is a good learning excercise.\n>> 4. If it is stepping on other people's toes.\n>> 5. How to do it 8-}\n>> \n>> I look forward to comments and suggestions...I think.\n>> \n>> \n>> \n>> ----------------------------------------------------------------\n>> Philip Warner | __---_____\n>> Albatross Consulting Pty. Ltd. |----/ - \\\n>> (A.C.N. 008 659 498) | /(@) ______---_\n>> Tel: +61-03-5367 7422 | _________ \\\n>> Fax: +61-03-5367 7430 | ___________ |\n>> Http://www.rhyme.com.au | / \\|\n>> | --________--\n>> PGP key available upon request, | /\n>> and from pgp5.ai.mit.edu:11371 |/\n>> \n>\n>Marc G. Fournier ICQ#7615664 IRC Nick:\nScrappy\n>Systems Administrator @ hub.org \n>primary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org \n>\n>\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 24 Jul 1999 22:54:56 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] RFC: Security and Impersonation [With Word Wrap!]"
}
] |
[
{
"msg_contents": "At 10:51 23/07/99 -0400, you wrote:\n>\n>We have some of this, I think, from ACLs on tables and views. But\n>as far as I know there is not a notion of a \"suid view\", one with\n>different privileges from its caller. It sounds like a good thing\n>to work on. Is there any standard in the area?\n>\n\nI'll look through the SQL3 stuff, and see what I can find.\n\nI've now done this,and it's in the SQL3 standard. It is implemented via\nModules. The idea being that all routines (procedures and functions) apear\nin a module, and that the module can have a 'Module Authorization\nIdentifier'. The syntax is:\n\nCreate Module MY_MODULE Language SQL\n\tAuthorization SOME_ID\n\nProcedure Some_Procedure....\n\n...etc\n\nEnd Module;\n\nIf the auth. ID is specified, then (quoting from the standard p. 95):\n\n \"... that <module authorization\n identifier> is used as the current <authorization identifier> for\n the execution of all <routine>s in the <module>. If the <module\n authorization identifier> is not specified, then the SQL-session\n <authorization identifier> is used as the current <authorization\n identifier> for the execution of each <routine> in the <module>.\n\nLet me know if you want to know more. The relevant standard can be found at:\n\nftp://gatekeeper.dec.com/pub/standards/sql/sql-foundation-aug94.txt\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 24 Jul 1999 23:12:58 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] RFC: Security and Impersonation "
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> I'll look through the SQL3 stuff, and see what I can find.\n>\n> I've now done this,and it's in the SQL3 standard. It is implemented via\n> Modules. The idea being that all routines (procedures and functions) apear\n> in a module, and that the module can have a 'Module Authorization\n> Identifier'.\n\nCool. I doubt anyone will object to adding this SQL3 feature to\nPostgres, if you feel like working on it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Jul 1999 12:32:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RFC: Security and Impersonation "
}
] |
[
{
"msg_contents": "subscribe\n\n\n",
"msg_date": "Sat, 24 Jul 1999 15:43:16 +0200",
"msg_from": "\"F J Cuberos\" <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
}
] |
[
{
"msg_contents": "Will there be a 6.5.2 release? Due to me being late a the latest bug fixes\nin ECPG didn't make it into 6.5.1. They are in the archive at the moment\nthough.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Sat, 24 Jul 1999 16:14:05 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "6.5.2"
},
{
"msg_contents": "> Will there be a 6.5.2 release? Due to me being late a the latest bug fixes\n> in ECPG didn't make it into 6.5.1. They are in the archive at the moment\n> though.\n> \n\nI don't think where will be a 6.5.2. I recommend putting it in the ftp\npatches directory.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 25 Jul 1999 15:40:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.2"
}
] |
[
{
"msg_contents": "On Saturday, July 24, 1999 5:37 PM, Tom Lane [SMTP:[email protected]] wrote:\n> I wrote:\n> > [ a bunch of stuff ]\n> \n> After looking into this morning's patches digest, I see that half of\n> this already occurred to you :-).\n> \n> I'd still suggest extending the client to fall back to non-SSL if the\n> server rejects the connection (unless it is told by the application\n> that it must make an SSL connection). Then there's no compatibility\n> problem at all, even for mix-and-match SSL-enabled and not-SSL-enabled\n> clients and servers.\n\nThat sounds like a good thing to do.\n\nAs it is right now, it should work in all combinations except a 6.6 client\ncompiled with SSL support connecting to a pre-6.6 server. It already\nfalls-back if the server is 6.6 (without SSL support). And the 6.6 client\ncompiled without SSL works.\n\nThere is not yet a way in the client to specify that SSL connection is\nrequired (it can be specified on the server). I'm planning to put that in,\nbut I thought it would be good to get the \"base code\" approved first - which\nproved to be a good thing :-)\n\nI'll see if I can wrap something up before I leave on vacation (leaving\npretty soon, be gone about a week). Not sure I'll make it, though. Should I\ndo this as a patch against what I have now, or keep sending in \"the one big\npatch\"?\n\n\n//Magnus\n",
"msg_date": "Sat, 24 Jul 1999 18:10:25 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Re: SSL patch "
},
{
"msg_contents": "Magnus Hagander <[email protected]> writes:\n> As it is right now, it should work in all combinations except a 6.6 client\n> compiled with SSL support connecting to a pre-6.6 server. It already\n> falls-back if the server is 6.6 (without SSL support). And the 6.6 client\n> compiled without SSL works.\n\nActually, it shouldn't matter whether the server is 6.6-without-SSL\nor pre-6.6. At least in the way I envisioned it, they'd act the same.\n\n> There is not yet a way in the client to specify that SSL connection is\n> required (it can be specified on the server). I'm planning to put that in,\n> but I thought it would be good to get the \"base code\" approved first - which\n> proved to be a good thing :-)\n> I'll see if I can wrap something up before I leave on vacation (leaving\n> pretty soon, be gone about a week). Not sure I'll make it, though. Should I\n> do this as a patch against what I have now, or keep sending in \"the one big\n> patch\"?\n\nI don't think anyone has applied your patch yet, so why don't you just\nresubmit the whole thing after cleaning up the loose ends.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Jul 1999 12:40:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: SSL patch "
}
] |
[
{
"msg_contents": "Hello,\n\nSorry for long postings. I'm triyng to identify my problem with logging \nto Postgres.\n\nI have some problem with comparison of accumulated hits count\nfrom DB, from apache's access_log and what test tool reports (I'm using\nab which comes with apache distribution). \nIt's obvious to expect these numbers should be the same, but they're \ndifferent ! Trying to find out what's the problem I wrote\nshell script hit_test which simplifies testing.\n\n1: counts number of lines in access_log and current hits count in DB\n2: running concurrent requests using Apache benchmark (ab)\n3. The same as 1:\n\nRunning several times this script I noticed several weirdness:\n1) All numbers are different:\n ab reports - 100 completed requests\n access_log - 109 requests\n database - 87 records\n2) As you can see below after running test I got\n 9 duplicated records !\n\nIt's difficult to find the problem for 1) - apache, modperl, database,\nbut 2) is a postgres issue. This is a latest REL6_5_PATCHES CVS,\nLinux 2.0.36 SMP. I had problem with 'vacuum analyze' with this\ndatabase, but after dumping-restoring vacuum seems works ok.\nInteresting to note, that at home I never got duplicates,\n'vacuum analyze' problem and numbers from access_log and database\nare consistent, so I'm fine at home (I think Apache Benchmark reports\nwrong number of requests).\n\nPostgres at home is slightly older (Jul 21) than\nat work (Jul 23). Another difference is SMP - at work I have\nDUAL PPRO system and Linux compiled with SMP support. Could be\nSMP the problem ?\n\n\n\tRegards,\n\n\t\tOleg\n\n\n---------------------------------------------------\n\n\n\n19:34[zeus]:~/app/discovery/test>hit_test 1464\n--------- START ------------------------------ 1464\n 31836 /usr/local/apache/logs/proxy.access_log\nmsg_id|count|first_access|last_access\n------+-----+------------+-----------\n(0 rows)\n\n--------- RUN ----------------------------\nThis is ApacheBench, Version 1.3\nCopyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/\nCopyright (c) 1998-1999 The Apache Group, http://www.apache.org/\n\nServer Software: Apache/1.3.6\nServer Hostname: astronet.sai.msu.su\nServer Port: 80\n\nDocument Path: /db/pubs.html?msg_id=1464\nDocument Length: 3535 bytes\n\nConcurrency Level: 10\nTime taken for tests: 5.057 seconds\nComplete requests: 100\nFailed requests: 0\nTotal transferred: 382000 bytes\nHTML transferred: 353500 bytes\nRequests per second: 19.77\nTransfer rate: 75.54 kb/s received\n\nConnnection Times (ms)\n min avg max\nConnect: 0 42 618\nProcessing: 49 409 1928\nTotal: 49 451 2546\n--------- STOP -------------------------------\nmsg_id|count|first_access |last_access\n------+-----+----------------------------+----------------------------\n 1464| 87|Sat 24 Jul 19:34:39 1999 MSD|Sat 24 Jul 19:34:45 1999 MSD\n 1464| 87|Sat 24 Jul 19:34:39 1999 MSD|Sat 24 Jul 19:34:45 1999 MSD\n 1464| 87|Sat 24 Jul 19:34:39 1999 MSD|Sat 24 Jul 19:34:45 1999 MSD\n 1464| 87|Sat 24 Jul 19:34:39 1999 MSD|Sat 24 Jul 19:34:45 1999 MSD\n 1464| 87|Sat 24 Jul 19:34:39 1999 MSD|Sat 24 Jul 19:34:45 1999 MSD\n 1464| 87|Sat 24 Jul 19:34:40 1999 MSD|Sat 24 Jul 19:34:45 1999 MSD\n 1464| 87|Sat 24 Jul 19:34:39 1999 MSD|Sat 24 Jul 19:34:45 1999 MSD\n 1464| 87|Sat 24 Jul 19:34:39 1999 MSD|Sat 24 Jul 19:34:45 1999 MSD\n 1464| 87|Sat 24 Jul 19:34:40 1999 MSD|Sat 24 Jul 19:34:45 1999 MSD\n(9 rows)\n\n 31945 /usr/local/apache/logs/proxy.access_log\n\n---------------------------------------------------------------\nScript 
hit_test:\n---------------------------------------------------------------\n#!/bin/sh\n#set -vx\n\nif [ u$1 = 'u' ]\nthen\n echo \"Usage: $0 msg_id\"\n exit 1\nfi\n\nMSG_ID=$1\n\nURL=http://astronet.sai.msu.su/db/pubs.html\nLOG=/usr/local/apache/logs/proxy.access_log\nCONCURRENCY=10\nREQUESTS=100\n\necho \"--------- START ------------------------------\" $MSG_ID\nwc -l $LOG\npsql discovery -c 'select * from hits where msg_id='$MSG_ID\";'\"\necho \"--------- RUN ----------------------------\"\nab -c $CONCURRENCY -n $REQUESTS $URL?msg_id=$MSG_ID\necho \"--------- STOP -------------------------------\"\nsleep 5\npsql discovery -c 'select * from hits where msg_id='$MSG_ID\";'\"\nsleep 5\nwc -l $LOG\n\n\n---------------------------------------------------------------\nI do logging using simple perl handler:\n---------------------------------------------------------------\npackage Apache::HitsDBI;\nuse Apache::Constants qw(:common);\n\nuse strict;\n# preloaded in startup.pl\n#use DBI ();\n\nsub handler {\n my $orig = shift;\n my $url = $orig->uri;\n if ( $orig->args() =~ /msg_id=(\\d+)/ ) {\n my $dbh = DBI->connect(\"dbi:Pg:dbname=discovery\") || die DBI->errstr;\n my $sth = $dbh->do(\"SELECT acc_hits($1)\") || die $dbh->errstr;\n } \n return OK;\n}\n\n1;\n__END__\n\n---------------------------------------------------------------\nfunction acc_hits and table is here:\n---------------------------------------------------------------\ncreate table hits (\n msg_id int4 not null,\n count int4 not null,\n first_access datetime default now(),\n last_access datetime\n);\ncreate index idx_hits on hits(msg_id);\n\nCREATE FUNCTION \"acc_hits\" (int4) RETURNS int4 AS '\nDeclare\n keyval Alias For $1;\n cnt int4;\n curtime datetime;\nBegin\n curtime := ''now'';\n Select count into cnt from hits where msg_id = keyval;\n if Not Found then\n cnt := 1;\n -- first_access inserted on default, last_access is NULL\n Insert Into hits (msg_id,count) values (keyval, cnt);\n else\n cnt := cnt + 1;\n Update hits set count = cnt,last_access = curtime where msg_id = keyval\n End If;\n return cnt;\nEnd;\n' LANGUAGE 'plpgsql';\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sat, 24 Jul 1999 20:30:20 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "duplicate records (6.5.1)"
}
] |
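The duplicate rows in the thread above are consistent with a race inside acc_hits() as posted: with ten concurrent clients, several backends can run the Select before any of them has committed an Insert, so each takes the Not Found branch and inserts its own row (idx_hits is a plain, non-unique index, so nothing stops them), and the later Updates then set every row for that msg_id to the same count, which matches the nine identical count=87 rows in the test output. Below is a minimal sketch of two ways to close the race; it is an illustration against the schema as posted, not a fix adopted in the thread, and it assumes the 6.5-era LOCK TABLE modes behave as documented.

-- Option 1 (hypothetical): make the index unique, so a racing second
-- insert for the same msg_id fails with an error instead of silently
-- creating a duplicate row.
drop index idx_hits;
create unique index idx_hits on hits (msg_id);

-- Option 2 (hypothetical): serialize the whole check-then-insert by
-- taking a self-conflicting table lock, so only one backend at a time
-- runs acc_hits() against the table. The perl handler would have to
-- issue this as one transaction rather than a bare "SELECT acc_hits(...)".
begin;
lock table hits in share row exclusive mode;
select acc_hits(1464);  -- 1464 is just the msg_id used in the test run
commit;

Either way, duplicates already present can be found with a grouped count:

select msg_id, count(*) from hits group by msg_id having count(*) > 1;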
[
{
"msg_contents": "> > As it is right now, it should work in all combinations except a 6.6\nclient\n> > compiled with SSL support connecting to a pre-6.6 server. It already\n> > falls-back if the server is 6.6 (without SSL support). And the 6.6\nclient\n> > compiled without SSL works.\n> \n> Actually, it shouldn't matter whether the server is 6.6-without-SSL\n> or pre-6.6. At least in the way I envisioned it, they'd act the same.\n\nNot quite.\nThe 6.6-without-SSL still knows about the NEGOTIATE_SSL_CODE packet that is\nsent, and can respond \"No, I can't do SSL\". The pre-6.6 does not know about\nthe existance of this packet, and will thus respond with \"Unsupported\nFrontend Protocol\" (since it's a normal StartupPacket with the version\nnumber set to something very large (like the cancel request was\nimplemented)).\n\n\n//Magnus\n",
"msg_date": "Sat, 24 Jul 1999 19:38:37 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Re: SSL patch "
},
{
"msg_contents": "Magnus Hagander <[email protected]> writes:\n>> Actually, it shouldn't matter whether the server is 6.6-without-SSL\n>> or pre-6.6. At least in the way I envisioned it, they'd act the same.\n\n> The 6.6-without-SSL still knows about the NEGOTIATE_SSL_CODE packet that is\n> sent, and can respond \"No, I can't do SSL\". The pre-6.6 does not know about\n> the existance of this packet, and will thus respond with \"Unsupported\n> Frontend Protocol\" (since it's a normal StartupPacket with the version\n> number set to something very large (like the cancel request was\n> implemented)).\n\nOK, the point being that then the client can either break the connection\n(if it doesn't want to do an insecure connection) or send a\nStartupPacket to continue with an insecure connection. I agree this\nwill be a little quicker than tearing down the connection and starting\na new one, which is what must happen in the case of an old server.\n\nBut you could save some code on both sides if you just made the\nteardown/new connection behavior the only path for falling back to\nnon-SSL. I'm not sure that SSL-enabled-client-talking-to-6.6-but-\nnot-SSL-enabled-server will be a sufficiently common scenario to\njustify a lot of extra code to make it a tad faster. You'd expect\nthat most installations will have SSL at the server if they have\nit anywhere.\n\nIf it's only a small amount of extra code then it doesn't matter,\nof course. I'm just dubious that it's worth taking extra care\nfor that case when you are going to have the other case anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Jul 1999 13:57:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: SSL patch "
}
] |