[ { "msg_contents": "I have been doing some testing with multi-segment relations.\n(Not having sixty gigabytes laying around, like some folk I've been\ntalking to recently, I rebuilt with a very small RELSEG_SIZE for\ntesting...) I have discovered that performance goes to zero as the\nnumber of segments gets large --- in particular, if the number of\nsegments exceeds the number of kernel file descriptors that fd.c\nthinks it can use, you can take a coffee break while inserting a\nfew tuples :-(. The main problem is mdnblocks() in md.c, which\nis called for each tuple inserted; it insists on touching every\nsegment of the relation to verify its length via lseek(). That's an\nunreasonable number of kernel calls in any case, and if you add a file\nopen and close to each touch because of file-descriptor thrashing,\nit's coffee break time.\n\nIt seems to me that mdnblocks should assume that all segments\npreviously verified to be of length RELSEG_SIZE are still of that\nlength, and only do an _mdnblocks() call on the last element of the\nsegment chain. That way the number of kernel interactions needed\nis a constant independent of the number of segments in the table.\n(This doesn't make truncation of the relation by a different backend\nany more or less dangerous; we're still hosed if we don't get a\nnotification before we try to do something with the relation.\nI think that is fixed, and if it's not this doesn't make it worse.)\n\nAny objections?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Oct 1999 00:12:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "mdnblocks is an amazing time sink in huge relations" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: Tuesday, October 12, 1999 1:12 PM\n> To: [email protected]\n> Subject: [HACKERS] mdnblocks is an amazing time sink in huge relations\n> \n> \n> I have been doing some testing with multi-segment relations.\n> (Not having sixty gigabytes laying around, like some folk I've been\n> talking to recently, I rebuilt with a very small RELSEG_SIZE for\n> testing...) I have discovered that performance goes to zero as the\n> number of segments gets large --- in particular, if the number of\n> segments exceeds the number of kernel file descriptors that fd.c\n> thinks it can use, you can take a coffee break while inserting a\n> few tuples :-(. The main problem is mdnblocks() in md.c, which\n> is called for each tuple inserted; it insists on touching every\n> segment of the relation to verify its length via lseek(). That's an\n> unreasonable number of kernel calls in any case, and if you add a file\n> open and close to each touch because of file-descriptor thrashing,\n> it's coffee break time.\n>\n\nIt's me who removed the following code from mdnblocks().\nThe code caused a disaster in case of mdtruncate().\n\n if (v->mdfd_lstbcnt == RELSEG_SIZE\n\t \t|| .... \n\nAt that time I was anxious about the change of performance\nbut I've forgotten,sorry.\n\n> It seems to me that mdnblocks should assume that all segments\n> previously verified to be of length RELSEG_SIZE are still of that\n> length, and only do an _mdnblocks() call on the last element of the\n> segment chain. 
That way the number of kernel interactions needed\n> is a constant independent of the number of segments in the table.\n> (This doesn't make truncation of the relation by a different backend\n> any more or less dangerous; we're still hosed if we don't get a\n> notification before we try to do something with the relation.\n> I think that is fixed, and if it's not this doesn't make it worse.)\n>\n\nCurrently only the first segment is opened when mdopen() etc is\ncalled. Would you change to check all segments at first open ?\n\nIf there is a relation which has multi-segment of size 0,which would\nbe the last segment ?\n\nRegards. \n \nHiroshi Inoue\[email protected]\n", "msg_date": "Tue, 12 Oct 1999 16:11:42 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] mdnblocks is an amazing time sink in huge relations" }, { "msg_contents": "> It seems to me that mdnblocks should assume that all segments\n> previously verified to be of length RELSEG_SIZE are still of that\n> length, and only do an _mdnblocks() call on the last element of the\n> segment chain. That way the number of kernel interactions needed\n> is a constant independent of the number of segments in the table.\n> (This doesn't make truncation of the relation by a different backend\n> any more or less dangerous; we're still hosed if we don't get a\n> notification before we try to do something with the relation.\n> I think that is fixed, and if it's not this doesn't make it worse.)\n> \n> Any objections?\n\nSounds great. Now that the segments work properly, it is time for some\noptimization. Vadim has complained about the lseek() from day-1. \nSomeday he wants a shared catalog cache to prevent that from being\nneeded.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 09:57:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mdnblocks is an amazing time sink in huge relations" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> It's me who removed the following code from mdnblocks().\n> The code caused a disaster in case of mdtruncate().\n>\n> if (v->mdfd_lstbcnt == RELSEG_SIZE\n> \t \t|| .... \n\nOK, but do you think there's still a risk, given that we've changed\nthe relcache-level interlocking?\n\n> Currently only the first segment is opened when mdopen() etc is\n> called. Would you change to check all segments at first open ?\n\nNo, I don't see a need to change that logic. I was thinking that\nsince mdnblocks adds another link to the chain whenever it sees that\nthe current segment is exactly RELSEG_SIZE, it could just assume that\nif the next-segment link is not NULL then this segment must be of\nsize RELSEG_SIZE and it doesn't need to check that again. It'd only\nneed to do an actual check on the last chain element (plus any elements\nit adds during the current call, of course). 
We'd rely on the\nhigher-level interlock to close and reopen the virtual file when a\ntruncate has happened.\n\n> If there is a relation which has multi-segment of size 0,which would\n> be the last segment ?\n\nOnly the last segment can have any size other than RELSEG_SIZE, I think.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Oct 1999 10:57:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] mdnblocks is an amazing time sink in huge relations " }, { "msg_contents": "> \n> > Currently only the first segment is opened when mdopen() etc is\n> > called. Would you change to check all segments at first open ?\n> \n> No, I don't see a need to change that logic. I was thinking that\n> since mdnblocks adds another link to the chain whenever it sees that\n> the current segment is exactly RELSEG_SIZE, it could just assume that\n> if the next-segment link is not NULL then this segment must be of\n> size RELSEG_SIZE and it doesn't need to check that again. It'd only\n> need to do an actual check on the last chain element (plus any elements\n> it adds during the current call, of course). We'd rely on the\n> higher-level interlock to close and reopen the virtual file when a\n> truncate has happened.\n>\n\nYou are right.\nThe last segment is always lseeked.\nAnd does this mean that the first mdnblocks() opens all segments and\nchecks them ?\n \n> > If there is a relation which has multi-segment of size 0,which would\n> > be the last segment ?\n> \n> Only the last segment can have any size other than RELSEG_SIZE, I think.\n>\n\nThis may be another issue.\nI have been suspicious about current implementation of md.c.\nIt relies so much on information about existent phisical files.\n\nHow do you think about the following ?\n\n1. Partial blocks(As you know,I have changed the handling of this\n kind of blocks recently).\n2. If a backend was killed or crashed in the middle of execution of \n mdunlink()/mdtruncate(),half of segments wouldn't be unlink/\n truncated.\n3. In cygwin port,mdunlink()/mdtruncate() may leave segments of 0\n length. \n4. We couldn't mdcreate() existent files and coudn't mdopen()/md\n unlink() non-existent files. So there are some cases that we\n could neither CREATE TABLE nor DROP TABLE. \n\nI have no solution but seems the count of valid segments and blocks\nshould be held in other places.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Wed, 13 Oct 1999 09:29:16 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] mdnblocks is an amazing time sink in huge relations " }, { "msg_contents": "(Sorry for slow response, I've been off chasing psort problems...)\n\n\"Hiroshi Inoue\" <[email protected]> writes:\n> I have been suspicious about current implementation of md.c.\n> It relies so much on information about existent phisical files.\n\nYes, but on the other hand we rely completely on those same physical\nfiles to hold our data ;-). I don't see anything fundamentally\nwrong with using the existence and size of a data file as useful\ninformation. It's not a substitute for a lock, of course, and there\nmay be places where we need cross-backend interlocks that we haven't\ngot now.\n\n> How do you think about the following ?\n>\n> 1. Partial blocks(As you know,I have changed the handling of this\n> kind of blocks recently).\n\nYes. I think your fix was good.\n\n> 2. 
If a backend was killed or crashed in the middle of execution of \n> mdunlink()/mdtruncate(),half of segments wouldn't be unlink/\n> truncated.\n\nThat's bothered me too. A possible answer would be to do the unlinking\nback-to-front (zap the last file first); that'd require a few more lines\nof code in md.c, but a crash midway through would then leave a legal\nfile configuration that another backend could still do something with.\n\n> 3. In cygwin port,mdunlink()/mdtruncate() may leave segments of 0\n> length. \n\nI don't understand what causes this. Can you explain?\n\nBTW, I think that having the last segment be 0 length is OK and indeed\nexpected --- mdnblocks will create the next segment as soon as it\nnotices the currently last segment has reached RELSEG_SIZE, even if\nthere's not yet a disk page to put in the next segment. This seems\nOK to me, although it's not really necessary.\n\n> 4. We couldn't mdcreate() existent files and coudn't mdopen()/md\n> unlink() non-existent files. So there are some cases that we\n> could neither CREATE TABLE nor DROP TABLE. \n\nTrue, but I think this is probably the best thing for safety's sake.\nIt seems to me there is too much risk of losing or overwriting valid\ndata if md.c bulls ahead when it finds an unexpected file configuration.\nI'd rather rely on manual cleanup if things have gotten that seriously\nout of whack... (but that's just my opinion, perhaps I'm in the\nminority?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Oct 1999 19:53:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] mdnblocks is an amazing time sink in huge relations " }, { "msg_contents": "> \"Hiroshi Inoue\" <[email protected]> writes:\n> > I have been suspicious about current implementation of md.c.\n> > It relies so much on information about existent phisical files.\n>\n> Yes, but on the other hand we rely completely on those same physical\n> files to hold our data ;-). I don't see anything fundamentally\n> wrong with using the existence and size of a data file as useful\n> information. It's not a substitute for a lock, of course, and there\n> may be places where we need cross-backend interlocks that we haven't\n> got now.\n>\n\nWe have to lseek() each time to know the number of blocks of a table\nfile. Isn't it a overhead ?\n\n> > How do you think about the following ?\n> >\n> > 2. If a backend was killed or crashed in the middle of execution of\n> > mdunlink()/mdtruncate(),half of segments wouldn't be unlink/\n> > truncated.\n>\n> That's bothered me too. A possible answer would be to do the unlinking\n> back-to-front (zap the last file first); that'd require a few more lines\n> of code in md.c, but a crash midway through would then leave a legal\n> file configuration that another backend could still do something with.\n\nOops,it's more serious than I have thought.\nmdunlink() may only truncates a table file by a crash while unlinking\nback-to-front.\nA crash while unlinking front-to-back may leave unlinked segments\nand they would suddenly appear as segments of the recreated table.\nSeems there's no easy fix.\n\n> > 3. In cygwin port,mdunlink()/mdtruncate() may leave segments of 0\n> > length.\n>\n> I don't understand what causes this. Can you explain?\n>\n\nYou call FileUnlink() after FileTrucnate() to unlink in md.c. If\nFileUnlink()\nfails there remains segments of 0 length. But it seems not critical in\nthis issue.\n\n> > 4. 
We couldn't mdcreate() existent files and coudn't mdopen()/md\n> > unlink() non-existent files. So there are some cases that we\n> > could neither CREATE TABLE nor DROP TABLE.\n>\n> True, but I think this is probably the best thing for safety's sake.\n> It seems to me there is too much risk of losing or overwriting valid\n> data if md.c bulls ahead when it finds an unexpected file configuration.\n> I'd rather rely on manual cleanup if things have gotten that seriously\n> out of whack... (but that's just my opinion, perhaps I'm in the\n> minority?)\n>\n\nThere is another risk.\nWe may remove other table files manually by mistake.\nAnd if I were a newcomer,I would not consider PostgreSQL as\na real DBMS(Fortunately I have never seen the reference to this).\n\nHowever,I don't object to you because I also have the same anxiety\nand could provide no easy solution,\n\nProbably it would require a lot of work to fix correctly.\nPostponing real unlink/truncating until commit and creating table\nfiles which correspond to their oids ..... etc ...\nIt's same as \"DROP TABLE inside transations\" requires.\n\nHmm,is it worth the work ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Mon, 18 Oct 1999 14:40:47 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] mdnblocks is an amazing time sink in huge relations " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> ... I don't see anything fundamentally\n>> wrong with using the existence and size of a data file as useful\n>> information. It's not a substitute for a lock, of course, and there\n>> may be places where we need cross-backend interlocks that we haven't\n>> got now.\n\n> We have to lseek() each time to know the number of blocks of a table\n> file. Isn't it a overhead ?\n\nTrue, but lseek is pretty cheap as kernel calls go (the kernel just has\nto consult the file's inode, which should be in memory already). We're\nnot going to get the info for free; any other way of keeping track of\nit is going to have its own costs. Vadim's been muttering about using\na shared cache for system catalog tuples, which might be a win but I'm\nnot sure (I'm worried about contention for the cache, especially if it's\nprotected by just one or a few spinlocks). Anyway, if we did have one\nthen keeping an accurate block count in the relation's pg_class row\nwould be a practical alternative.\n\n>> That's bothered me too. A possible answer would be to do the unlinking\n>> back-to-front (zap the last file first); that'd require a few more lines\n>> of code in md.c, but a crash midway through would then leave a legal\n>> file configuration that another backend could still do something with.\n\n> Oops,it's more serious than I have thought.\n> mdunlink() may only truncates a table file by a crash while unlinking\n> back-to-front.\n> A crash while unlinking front-to-back may leave unlinked segments\n> and they would suddenly appear as segments of the recreated table.\n> Seems there's no easy fix.\n\nWell, it seems to me that the first misbehavior (incomplete delete becomes\na partial truncate, and you can try again) is a lot better than the\nsecond (incomplete delete leaves an undeletable, unrecreatable table).\nShould I go ahead and make delete/truncate work back-to-front, or do you\nsee a reason why that'd be a bad thing to do?\n\n>>>> 3. In cygwin port,mdunlink()/mdtruncate() may leave segments of 0\n>>>> length.\n>> \n>> I don't understand what causes this. 
Can you explain?\n\n> You call FileUnlink() after FileTrucnate() to unlink in md.c. If\n> FileUnlink()\n> fails there remains segments of 0 length. But it seems not critical in\n> this issue.\n\nAh, I see. It's not specific to cygwin then. I think you are right\nthat it is not critical, at least not if we change to do the operations\nback-to-front. Truncating and then failing to delete will still leave\na valid file configuration.\n\n> Probably it would require a lot of work to fix correctly.\n> Postponing real unlink/truncating until commit and creating table\n> files which correspond to their oids ..... etc ...\n> It's same as \"DROP TABLE inside transations\" requires.\n> Hmm,is it worth the work ?\n\nI'm not eager to do that either (not just because of the low return on\nwork invested, but because naming table files by OIDs would be a big\nhandicap for debugging and database admin work). However, I'm not\nquite sure why you see it as a solution to the problem of recovering\nfrom a failed md.c operation. The risks would be the same if the\ndeletion was happening during commit, no? And I'd be a lot *more*\nworried about deleting the wrong file during manual cleanup if the\nfiles were named by OID ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Oct 1999 11:10:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] mdnblocks is an amazing time sink in huge relations " }, { "msg_contents": ">\n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> ... I don't see anything fundamentally\n> >> wrong with using the existence and size of a data file as useful\n> >> information. It's not a substitute for a lock, of course, and there\n> >> may be places where we need cross-backend interlocks that we haven't\n> >> got now.\n>\n> > We have to lseek() each time to know the number of blocks of a table\n> > file. Isn't it a overhead ?\n>\n> True, but lseek is pretty cheap as kernel calls go (the kernel just has\n> to consult the file's inode, which should be in memory already). We're\n> not going to get the info for free; any other way of keeping track of\n> it is going to have its own costs. Vadim's been muttering about using\n> a shared cache for system catalog tuples, which might be a win but I'm\n> not sure (I'm worried about contention for the cache, especially if it's\n> protected by just one or a few spinlocks). Anyway, if we did have one\n> then keeping an accurate block count in the relation's pg_class row\n> would be a practical alternative.\n>\n\nSeems it's related to a TODO.\n* Shared catalog cache, reduce lseek()'s by caching table size in shared\narea\n\nBut there would be a problem if we use shared catalog cache.\nBeing updated system tuples are only visible to an updating backend\nand other backends should see committed tuples.\nOn the other hand,an accurate block count should be visible to all\nbackends.\nWhich tuple of a row should we load to catalog cache and update ?\n\n> >> That's bothered me too. 
A possible answer would be to do the unlinking\n> >> back-to-front (zap the last file first); that'd require a few\n> more lines\n> >> of code in md.c, but a crash midway through would then leave a legal\n> >> file configuration that another backend could still do something with.\n>\n> > Oops,it's more serious than I have thought.\n> > mdunlink() may only truncates a table file by a crash while unlinking\n> > back-to-front.\n> > A crash while unlinking front-to-back may leave unlinked segments\n> > and they would suddenly appear as segments of the recreated table.\n> > Seems there's no easy fix.\n>\n> Well, it seems to me that the first misbehavior (incomplete delete becomes\n> a partial truncate, and you can try again) is a lot better than the\n> second (incomplete delete leaves an undeletable, unrecreatable table).\n> Should I go ahead and make delete/truncate work back-to-front, or do you\n> see a reason why that'd be a bad thing to do?\n>\n\nI also think back-to-front is better.\n\n> > Probably it would require a lot of work to fix correctly.\n> > Postponing real unlink/truncating until commit and creating table\n> > files which correspond to their oids ..... etc ...\n> > It's same as \"DROP TABLE inside transations\" requires.\n> > Hmm,is it worth the work ?\n>\n> I'm not eager to do that either (not just because of the low return on\n> work invested, but because naming table files by OIDs would be a big\n> handicap for debugging and database admin work). However, I'm not\n> quite sure why you see it as a solution to the problem of recovering\n> from a failed md.c operation. The risks would be the same if the\n> deletion was happening during commit, no? And I'd be a lot *more*\n> worried about deleting the wrong file during manual cleanup if the\n> files were named by OID ;-)\n>\n\nWe don't have to delete relation files even after commit.\nBackend would never see them and access to the files\ncorresponding to new oids of (being) recreated relations.\nDeletion is necessary only not to consume disk space.\n\nFor example vacuum could remove not deleted files.\nIt may be a PostgreSQL style.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Tue, 19 Oct 1999 10:02:42 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] mdnblocks is an amazing time sink in huge relations " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> a shared cache for system catalog tuples, which might be a win but I'm\n>> not sure (I'm worried about contention for the cache, especially if it's\n>> protected by just one or a few spinlocks). Anyway, if we did have one\n>> then keeping an accurate block count in the relation's pg_class row\n>> would be a practical alternative.\n\n> But there would be a problem if we use shared catalog cache.\n> Being updated system tuples are only visible to an updating backend\n> and other backends should see committed tuples.\n> On the other hand,an accurate block count should be visible to all\n> backends.\n> Which tuple of a row should we load to catalog cache and update ?\n\nGood point --- rolling back a transaction would cancel changes to the\npg_class row, but it mustn't cause the relation's file to get truncated\n(since there could be tuples of other uncommitted transactions in the\nnewly added block(s)).\n\nThis says that having a block count column in pg_class is the Wrong\nThing; we should get rid of relpages entirely. 
The Right Thing is a\nseparate data structure in shared memory that stores the current\nphysical block count for each active relation. The first backend to\ntouch a given relation would insert an entry, and then subsequent\nextensions/truncations/deletions would need to update it. We already\nobtain a special lock when extending a relation, so seems like there'd\nbe no extra locking cost to have a table like this.\n\nAnyone up for actually implementing this ;-) ? I have other things\nI want to work on...\n\n>> Well, it seems to me that the first misbehavior (incomplete delete becomes\n>> a partial truncate, and you can try again) is a lot better than the\n>> second (incomplete delete leaves an undeletable, unrecreatable table).\n>> Should I go ahead and make delete/truncate work back-to-front, or do you\n>> see a reason why that'd be a bad thing to do?\n\n> I also think back-to-front is better.\n\nOK, I have a couple other little things I want to do in md.c, so I'll\nsee what I can do about that. Even with a shared-memory relation\nlength table, back-to-front truncation would be the safest way to\nproceed, so we'll want to make this change in any case.\n\n> Deletion is necessary only not to consume disk space.\n>\n> For example vacuum could remove not deleted files.\n\nHmm ... interesting idea ... but I can hear the complaints\nfrom users already...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Oct 1999 23:10:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] mdnblocks is an amazing time sink in huge relations " }, { "msg_contents": "Tom Lane wrote:\n> \n> >> a shared cache for system catalog tuples, which might be a win but I'm\n> >> not sure (I'm worried about contention for the cache, especially if it's\n> >> protected by just one or a few spinlocks). Anyway, if we did have one\n\nCommercial DBMSes have this... Isn't it a good reason? -:)\n\n> > But there would be a problem if we use shared catalog cache.\n> > Being updated system tuples are only visible to an updating backend\n> > and other backends should see committed tuples.\n> > On the other hand,an accurate block count should be visible to all\n> > backends.\n> > Which tuple of a row should we load to catalog cache and update ?\n> \n> Good point --- rolling back a transaction would cancel changes to the\n> pg_class row, but it mustn't cause the relation's file to get truncated\n> (since there could be tuples of other uncommitted transactions in the\n> newly added block(s)).\n> \n> This says that having a block count column in pg_class is the Wrong\n> Thing; we should get rid of relpages entirely. The Right Thing is a\n> separate data structure in shared memory that stores the current\n> physical block count for each active relation. The first backend to\n> touch a given relation would insert an entry, and then subsequent\n> extensions/truncations/deletions would need to update it. We already\n> obtain a special lock when extending a relation, so seems like there'd\n> be no extra locking cost to have a table like this.\n\nI supposed that each backend will still use own catalog \ncache (after reading entries from shared one) and synchronize \nshared/private caches on commit - e.g. update reltuples!\nrelpages will be updated immediately after physical changes -\nwhat's problem with this?\n\n> Anyone up for actually implementing this ;-) ? 
I have other things\n> I want to work on...\n\nAnd me too -:))\n\nVadim\n", "msg_date": "Tue, 19 Oct 1999 12:40:40 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mdnblocks is an amazing time sink in huge relations" }, { "msg_contents": "> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n\n[snip]\n \n> \n> > Deletion is necessary only not to consume disk space.\n> >\n> > For example vacuum could remove not deleted files.\n> \n> Hmm ... interesting idea ... but I can hear the complaints\n> from users already...\n>\n\nMy idea is only an analogy of PostgreSQL's simple recovery\nmechanism of tuples.\n\nAnd my main point is\n\t\"delete fails after commit\" doesn't harm the database\n\texcept that not deleted files consume disk space.\n\nOf cource,it's preferable to delete relation files immediately\nafter(or just when) commit.\nUseless files are visible though useless tuples are invisible.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Tue, 19 Oct 1999 18:44:36 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] mdnblocks is an amazing time sink in huge relations " }, { "msg_contents": "> \n> Tom Lane wrote:\n> > \n> > >> a shared cache for system catalog tuples, which might be a \n> win but I'm\n> > >> not sure (I'm worried about contention for the cache, \n> especially if it's\n> > >> protected by just one or a few spinlocks). Anyway, if we \n> did have one\n> \n> Commercial DBMSes have this... Isn't it a good reason? -:)\n> \n> > > But there would be a problem if we use shared catalog cache.\n> > > Being updated system tuples are only visible to an updating backend\n> > > and other backends should see committed tuples.\n> > > On the other hand,an accurate block count should be visible to all\n> > > backends.\n> > > Which tuple of a row should we load to catalog cache and update ?\n> > \n> > Good point --- rolling back a transaction would cancel changes to the\n> > pg_class row, but it mustn't cause the relation's file to get truncated\n> > (since there could be tuples of other uncommitted transactions in the\n> > newly added block(s)).\n> > \n> > This says that having a block count column in pg_class is the Wrong\n> > Thing; we should get rid of relpages entirely. The Right Thing is a\n> > separate data structure in shared memory that stores the current\n> > physical block count for each active relation. The first backend to\n> > touch a given relation would insert an entry, and then subsequent\n> > extensions/truncations/deletions would need to update it. We already\n> > obtain a special lock when extending a relation, so seems like there'd\n> > be no extra locking cost to have a table like this.\n> \n> I supposed that each backend will still use own catalog \n> cache (after reading entries from shared one) and synchronize \n> shared/private caches on commit - e.g. update reltuples!\n> relpages will be updated immediately after physical changes -\n> what's problem with this?\n>\n\nDoes this mean the following ?\n\n1. shared cache holds committed system tuples.\n2. private cache holds uncommitted system tuples.\n3. relpages of shared cache are updated immediately by\n phisical change and corresponding buffer pages are\n marked dirty.\n4. on commit, the contents of uncommitted tuples except\n relpages,reltuples,... 
are copied to correponding tuples\n in shared cache and the combined contents are\n committed.\n\nIf so,catalog cache invalidation would be no longer needed.\nBut synchronization of the step 4. may be difficult.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Tue, 19 Oct 1999 19:03:22 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] mdnblocks is an amazing time sink in huge relations" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> 1. shared cache holds committed system tuples.\n> 2. private cache holds uncommitted system tuples.\n> 3. relpages of shared cache are updated immediately by\n> phisical change and corresponding buffer pages are\n> marked dirty.\n> 4. on commit, the contents of uncommitted tuples except\n> relpages,reltuples,... are copied to correponding tuples\n> in shared cache and the combined contents are\n> committed.\n> If so,catalog cache invalidation would be no longer needed.\n> But synchronization of the step 4. may be difficult.\n\nI think the main problem is that relpages and reltuples shouldn't\nbe kept in pg_class columns at all, because they need to have\nvery different update behavior from the other pg_class columns.\n\nThe rest of pg_class is update-on-commit, and we can lock down any one\nrow in the normal MVCC way (if transaction A has modified a row and\ntransaction B also wants to modify it, B waits for A to commit or abort,\nso it can know which version of the row to start from). Furthermore,\nthere can legitimately be several different values of a row in use in\ndifferent places: the latest committed, an uncommitted modification, and\none or more old values that are still being used by active transactions\nbecause they were current when those transactions started. (BTW, the\npresent relcache is pretty bad about maintaining pure MVCC transaction\nsemantics like this, but it seems clear to me that that's the direction\nwe want to go in.)\n\nrelpages cannot operate this way. To be useful for avoiding lseeks,\nrelpages *must* change exactly when the physical file changes. It\nmatters not at all whether the particular transaction that extended the\nfile ultimately commits or not. Moreover there can be only one correct\nvalue (per relation) across the whole system, because there is only one\nlength of the relation file.\n\nIf we want to take reltuples seriously and try to maintain it\non-the-fly, then I think it needs still a third behavior. Clearly\nit cannot be updated using MVCC rules, or we lose all writer\nconcurrency (if A has added tuples to a rel, B would have to wait\nfor A to commit before it could update reltuples...). Furthermore\n\"updating\" isn't a simple matter of storing what you think the new\nvalue is; otherwise two transactions adding tuples in parallel would\nleave the wrong answer after B commits and overwrites A's value.\nI think it would work for each transaction to keep track of a net delta\nin reltuples for each table it's changed (total tuples added less total\ntuples deleted), and then atomically add that value to the table's\nshared reltuples counter during commit. 
But that still leaves the\nproblem of how you use the counter during a transaction to get an\naccurate answer to the question \"If I scan this table now, how many tuples\nwill I see?\" At the time the question is asked, the current shared\ncounter value might include the effects of transactions that have\ncommitted since your transaction started, and therefore are not visible\nunder MVCC rules. I think getting the correct answer would involve\nmaking an instantaneous copy of the current counter at the start of\nyour xact, and then adding your own private net-uncommitted-delta to\nthe saved shared counter value when asked the question. This doesn't\nlook real practical --- you'd have to save the reltuples counts of\n*all* tables in the database at the start of each xact, on the off\nchance that you might need them. Ugh. Perhaps someone has a better\nidea. In any case, reltuples clearly needs different mechanisms than\nthe ordinary fields in pg_class do, because updating it will be a\nperformance bottleneck otherwise.\n\nIf we allow reltuples to be updated only by vacuum-like events, as\nit is now, then I think keeping it in pg_class is still OK.\n\nIn short, it seems clear to me that relpages should be removed from\npg_class and kept somewhere else if we want to make it more reliable\nthan it is now, and the same for reltuples (but reltuples doesn't\nbehave the same as relpages, and probably ought to be handled\ndifferently).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Oct 1999 10:09:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] mdnblocks is an amazing time sink in huge relations " }, { "msg_contents": "Hiroshi Inoue wrote:\n> \n> > I supposed that each backend will still use own catalog\n> > cache (after reading entries from shared one) and synchronize\n> > shared/private caches on commit - e.g. update reltuples!\n> > relpages will be updated immediately after physical changes -\n> > what's problem with this?\n> >\n> \n> Does this mean the following ?\n> \n> 1. shared cache holds committed system tuples.\n> 2. private cache holds uncommitted system tuples.\n> 3. relpages of shared cache are updated immediately by\n> phisical change and corresponding buffer pages are\n> marked dirty.\n> 4. on commit, the contents of uncommitted tuples except\n> relpages,reltuples,... are copied to correponding tuples\n ^^^^^^^^^\nreltuples in shared catalog cache (SCC) will be updated!\nIf transaction inserted some tuples then SCC->reltuples\nwill be incremented, etc.\n\n> in shared cache and the combined contents are\n> committed.\n> \n> If so,catalog cache invalidation would be no longer needed.\n\nI never liked our invalidation code -:)\n\n> But synchronization of the step 4. may be difficult.\n\nWhat's easy in this life? -:)\n\nVadim\n", "msg_date": "Wed, 20 Oct 1999 10:28:23 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mdnblocks is an amazing time sink in huge relations" }, { "msg_contents": "Tom Lane wrote:\n> \n> I think the main problem is that relpages and reltuples shouldn't\n> be kept in pg_class columns at all, because they need to have\n> very different update behavior from the other pg_class columns.\n\nYes, but is this reason to move them somewhere else?\nLet's update them differently (i.e. 
update in-place) \nbut keep in pg_class.\nShould we provide read consistency for these internal-use columns?\nI'm not sure.\n\n> If we want to take reltuples seriously and try to maintain it\n> on-the-fly, then I think it needs still a third behavior. Clearly\n\n...snip...\nI agreed that there is no way to get accurate estimation for\n# of rows to be seen by a query...\nWell, let's keep up-to-date # of rows present in relation:\nin any case a query will have to read them and this is what\nwe need to estimate cost of simple scans, as for costs of\njoins - now way, currently(?) -:(\n\nBut please remember that there is another SCC goal -\nfaster catalog access...\n\nVadim\n", "msg_date": "Wed, 20 Oct 1999 11:47:16 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mdnblocks is an amazing time sink in huge relations" }, { "msg_contents": "> I agreed that there is no way to get accurate estimation for\n> # of rows to be seen by a query...\n> Well, let's keep up-to-date # of rows present in relation:\n> in any case a query will have to read them and this is what\n> we need to estimate cost of simple scans, as for costs of\n> joins - now way, currently(?) -:(\n> \n> But please remember that there is another SCC goal -\n> faster catalog access...\n\nI want to index access more cache entries on cache miss for 7.0.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 19 Oct 1999 23:57:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mdnblocks is an amazing time sink in huge relations" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > I agreed that there is no way to get accurate estimation for\n> > # of rows to be seen by a query...\n> > Well, let's keep up-to-date # of rows present in relation:\n> > in any case a query will have to read them and this is what\n> > we need to estimate cost of simple scans, as for costs of\n> > joins - now way, currently(?) -:(\n> >\n> > But please remember that there is another SCC goal -\n> > faster catalog access...\n> \n> I want to index access more cache entries on cache miss for 7.0.\n\nGood. But in any case we'll gain faster access from _shared_ cache.\n\nVadim\n", "msg_date": "Wed, 20 Oct 1999 12:01:15 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mdnblocks is an amazing time sink in huge relations" }, { "msg_contents": "> \n> Hiroshi Inoue wrote:\n> > \n> > > I supposed that each backend will still use own catalog\n> > > cache (after reading entries from shared one) and synchronize\n> > > shared/private caches on commit - e.g. update reltuples!\n> > > relpages will be updated immediately after physical changes -\n> > > what's problem with this?\n> > >\n> > \n> > Does this mean the following ?\n> > \n> > 1. shared cache holds committed system tuples.\n> > 2. private cache holds uncommitted system tuples.\n> > 3. relpages of shared cache are updated immediately by\n> > phisical change and corresponding buffer pages are\n> > marked dirty.\n> > 4. on commit, the contents of uncommitted tuples except\n> > relpages,reltuples,... 
are copied to correponding tuples\n> ^^^^^^^^^\n> reltuples in shared catalog cache (SCC) will be updated!\n> If transaction inserted some tuples then SCC->reltuples\n> will be incremented, etc.\n>\n\nSystem tuples are only modifiled or (insert and delet)ed like\nuser tuples when reltuples are updated ?\nIf only modified,we couldn't use it in SERIALIZABLE mode.\n \n> > in shared cache and the combined contents are\n> > committed.\n> > \n> > If so,catalog cache invalidation would be no longer needed.\n> \n> I never liked our invalidation code -:)\n> \n> > But synchronization of the step 4. may be difficult.\n> \n> What's easy in this life? -:)\n>\n\nAs for relpages(block count),my first thought was that\nPostgreSQL relation files have their control data on their\nfirst page, I was surprised to know there's no such data.\n\nSeems catalog cache sharing is another issue as Tom\nsays. Of cource it's unnatural to hold separate catalog\ncache.\n\nI will be absent this week after now.\nI would think for a while.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Wed, 20 Oct 1999 13:05:06 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] mdnblocks is an amazing time sink in huge relations" }, { "msg_contents": "Vadim Mikheev <[email protected]> writes:\n> I agreed that there is no way to get accurate estimation for\n> # of rows to be seen by a query...\n> Well, let's keep up-to-date # of rows present in relation:\n> in any case a query will have to read them and this is what\n> we need to estimate cost of simple scans, as for costs of\n> joins - now way, currently(?) -:(\n\nThe optimizer is perfectly happy with approximate tuple counts. It has\nto make enough other guesstimates that I really doubt an exact tuple\ncount could help it measurably. So updating the count via VACUUM is an\nappropriate solution as far as the optimizer is concerned. The only\ngood reason I've ever seen for trying to keep an exact tuple count at\nall times is to allow short-circuiting of SELECT COUNT(*) queries ---\nand even there, it's only useful if there's no WHERE or GROUP BY.\n\nAs far as I can see, keeping an exact tuple count couldn't possibly\nbe worth the overhead it'd take. Keeping an exact block count may\nhave the same problem, but I'm not sure either way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Oct 1999 00:29:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] mdnblocks is an amazing time sink in huge relations " }, { "msg_contents": "Hiroshi Inoue wrote:\n> \n> > > Does this mean the following ?\n> > >\n> > > 1. shared cache holds committed system tuples.\n> > > 2. private cache holds uncommitted system tuples.\n> > > 3. relpages of shared cache are updated immediately by\n> > > phisical change and corresponding buffer pages are\n> > > marked dirty.\n> > > 4. on commit, the contents of uncommitted tuples except\n> > > relpages,reltuples,... 
are copied to correponding tuples\n> > ^^^^^^^^^\n> > reltuples in shared catalog cache (SCC) will be updated!\n> > If transaction inserted some tuples then SCC->reltuples\n> > will be incremented, etc.\n> >\n> \n> System tuples are only modifiled or (insert and delet)ed like\n> user tuples when reltuples are updated ?\n> If only modified,we couldn't use it in SERIALIZABLE mode.\n ^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^\n ...this...\n\nI'm not sure that we must provide read consistency\nfor internal-use columns...\nNevertheless, I agreed that keeping internal-use columns\nin table is bad thing, but let's do it for awhile: I believe\nthat someday we'll re-implement our mdmgr - using separate\nfile for each table/index is bad thing too, - and move these\ncolumns \"somewhere else\" in that day.\n\nVadim\n", "msg_date": "Wed, 20 Oct 1999 12:30:38 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mdnblocks is an amazing time sink in huge relations" }, { "msg_contents": "Tom Lane wrote:\n> \n> Vadim Mikheev <[email protected]> writes:\n> > I agreed that there is no way to get accurate estimation for\n> > # of rows to be seen by a query...\n> > Well, let's keep up-to-date # of rows present in relation:\n> > in any case a query will have to read them and this is what\n> > we need to estimate cost of simple scans, as for costs of\n> > joins - now way, currently(?) -:(\n> \n> The optimizer is perfectly happy with approximate tuple counts. It has\n> to make enough other guesstimates that I really doubt an exact tuple\n> count could help it measurably. So updating the count via VACUUM is an\n> appropriate solution as far as the optimizer is concerned. The only\n\nI'm not sure: scans read all (visible/unvisible, committed/uncommittd) \ntuples before making visibility decision... though, relpages is \nprimary value for cost estimation.\n\n> good reason I've ever seen for trying to keep an exact tuple count at\n> all times is to allow short-circuiting of SELECT COUNT(*) queries ---\n> and even there, it's only useful if there's no WHERE or GROUP BY.\n\n...and only in READ COMMITTED mode...\n\n> As far as I can see, keeping an exact tuple count couldn't possibly\n> be worth the overhead it'd take. Keeping an exact block count may\n> have the same problem, but I'm not sure either way.\n\nrelpages is more significant thing to know. Extending relation\nby one block at time is bad for insert performance. It would be nice\nto have two values per relation in shared cache - # of blocks and\nlast block used. On the other hand this is mostly mdmgr issue.\n\nBut in any case, it seems to me that using shmem for \nshared catalog cache is much worther than for\nshared cache invalidation...\n\nVadim\n", "msg_date": "Wed, 20 Oct 1999 15:12:35 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mdnblocks is an amazing time sink in huge relations" }, { "msg_contents": "> \n> Hiroshi Inoue wrote:\n> > \n> > > > Does this mean the following ?\n> > > >\n> > > > 1. shared cache holds committed system tuples.\n> > > > 2. private cache holds uncommitted system tuples.\n> > > > 3. relpages of shared cache are updated immediately by\n> > > > phisical change and corresponding buffer pages are\n> > > > marked dirty.\n> > > > 4. on commit, the contents of uncommitted tuples except\n> > > > relpages,reltuples,... 
are copied to correponding tuples\n> > > ^^^^^^^^^\n> > > reltuples in shared catalog cache (SCC) will be updated!\n> > > If transaction inserted some tuples then SCC->reltuples\n> > > will be incremented, etc.\n> > >\n> > \n> > System tuples are only modifiled or (insert and delet)ed like\n> > user tuples when reltuples are updated ?\n> > If only modified,we couldn't use it in SERIALIZABLE mode.\n> ^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^\n> ...this...\n> \n> I'm not sure that we must provide read consistency\n> for internal-use columns...\n\nAs for relpages,read consistency has no meaning\nbecause they are out of transaction control.\nBut as for reltuples,isn't it difficult to commit/rollback\ncorrectly without using insert-delete updation ?\nDoes your WAL system make it possible ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Thu, 21 Oct 1999 01:26:19 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] mdnblocks is an amazing time sink in huge relations" }, { "msg_contents": "> \n> \n> But in any case, it seems to me that using shmem for \n> shared catalog cache is much worther than for\n> shared cache invalidation...\n>\n\nI don't like current cache invalidation either.\nAnd shared catalog cache would make it easy to rollback\ncatalog cache(catalog/relation cache is not rollbacked correct-\nly now).\n\nBut there are some problems to implement shared catalog\ncache.\n\n1. Relation cache invalidation remains\n It has almost same mechanism as catalog cache invaldation.\n Cache invaldation is still incomprehensible as a whole.\n\n2. Is it clear how consistency is kept between system tuples ?\n It's quite unclear for me and there are 4 anxieties at least.\n\n a. Locking for system tuples is short term(this is for DDL\n commands inside transactions). This would break 2-phase\n lock easily. Is there another principle instead ?\n\n b. SnapshotNow is used to access system tuples in most\n cases. SnapshotNow isn't a real snapshot.\n i.e SnapshotNow couldn't guarantee read consistency.\n\n c. Row level locking is not implemented for system tuples yet.\n\n d. AccessShare lock are acquired for system tuples in many\n places. But does it have any sense ?\n\n3. If neither relpages nor reltuples is held,are there any other\n speeding up things ?\n\nRegards. \n\nHiroshi Inoue\[email protected]\n", "msg_date": "Sat, 23 Oct 1999 10:45:49 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] mdnblocks is an amazing time sink in huge relations" } ]
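To make the change discussed in the thread above concrete, a minimal sketch of the counting rule Tom Lane proposes for mdnblocks() could look like the following. This is an illustrative model only, not the actual md.c code: the Segment struct, the function names, and the RELSEG_SIZE/BLCKSZ values are assumptions made for the example. The point is simply that every segment which already has a successor in the chain is trusted to still be exactly RELSEG_SIZE blocks long, so only the last segment ever costs a kernel call.

/*
 * Illustrative model of the proposed mdnblocks() behaviour -- not the
 * actual md.c code.  Segment, segment_nblocks(), relation_nblocks() and
 * the RELSEG_SIZE/BLCKSZ values are assumptions made for this sketch.
 */
#include <sys/types.h>
#include <unistd.h>

#define RELSEG_SIZE 131072          /* blocks per segment file (assumed) */
#define BLCKSZ      8192            /* bytes per block (assumed) */

typedef struct Segment
{
    int             fd;             /* open descriptor for this segment file */
    struct Segment *next;           /* next segment in the chain, or NULL */
} Segment;

/* Measure one segment file with a single lseek() kernel call. */
static long
segment_nblocks(const Segment *seg)
{
    off_t       len = lseek(seg->fd, (off_t) 0, SEEK_END);

    return (len < 0) ? -1 : (long) (len / BLCKSZ);
}

/*
 * Total block count for the relation.  Any segment that already has a
 * successor was previously verified to be full, so it is assumed to still
 * hold exactly RELSEG_SIZE blocks; only the last segment is measured.
 */
long
relation_nblocks(const Segment *chain)
{
    const Segment *seg;
    long        total = 0;

    for (seg = chain; seg != NULL; seg = seg->next)
    {
        if (seg->next != NULL)
            total += RELSEG_SIZE;               /* trusted full segment */
        else
        {
            long    n = segment_nblocks(seg);   /* one lseek() per call */

            if (n < 0)
                return -1;                      /* lseek failed */
            total += n;
        }
    }
    return total;
}

Under this scheme the cost of a block-count lookup stays constant no matter how many segments the relation has grown to, and it remains valid under the back-to-front unlink/truncate order agreed on in the thread, since that order always leaves only the final segment possibly shorter than RELSEG_SIZE.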
[ { "msg_contents": "Welcome\n\nMay next question.\nSorry :|)\n\nI have PostgreSQL on Linux and few users connetcing to databases from\nWindows 95 where client application is Access 97.\nWhen I shutdown Win by reset or the power broke I see that on my Linux\nproceses which was created by connection from Win stay and waiting.\nOn Win sometime when main server lost connection with local computer\n(users have his own Access file on it) i see only white forms without\nany data and postgres don't ask me abou password when I restart Access.\n\nAny idea what I have to do with this.\n\n\nBest Regards\n\n", "msg_date": "Tue, 12 Oct 1999 10:21:44 +0200", "msg_from": "Grzegorz Przeździecki <[email protected]>", "msg_from_op": true, "msg_subject": "Backend proceses" } ]
[ { "msg_contents": "Dear Fellow Developers and Hackers,\n\nAfter installing PostgreSQL 6.5.2 (on my Sparc 7) and configured as\nfollows:\n“--with-mb=UNICODE”\n\nI am getting a truncation of some varchar columns. When accessing a\nPostgreSQL table1. I get this error in my postmaster window:\n\n“ERROR: Conversion between UNICODE and SQL_ASCII is not supported”\n\nAnd this error in my DOS window (which Visual Café Database Edition uses\nto run applets from):\n\n“The maximum width size for column 2 is: 17”\n\ntable1:\n+------------------------------------------------+\n! Field ! Type ! Length !\n+------------------------------------------------+\n! data_code ! varchar() ! 15 !\n! data_field ! varchar() ! 43 !\n+------------------------------------------------+\n\nFrom my PC Java GUI, running in Visual Café, I am able to enter a full\n15 character string into data_code but I get the following error when I\ntry to enter a 43 (or greater than 17) character string into data_field:\n\n“Invalid value for the column data”\n\nand it truncates the data I enter to 17 characters.\n\nFrom a psql prompt on the same Sun that is running the postmaster I can\nchange (select) the table without the truncation and no mention of\nUNICODE!\n\n+__________________________________________________________________________+\n\nWithout UNICODE configured I got the following errors:\n\nFrom the Postmaster:\n\n\"ERROR: MultiByte strings (MB) must be enabled to use this function.\"\n\nFrom Visual Café:\n\n\"The maximum width size for column 2 is: 17\"\n\n+__________________________________________________________________________+\n\nAny suggestion will be greatly appreciated, even a work around. What I\nwant to do is just access SQL_ASCII and leave UNICODE alone!\n\nSincerely,\n\nAllan\n", "msg_date": "Tue, 12 Oct 1999 11:50:35 +0200", "msg_from": "\"Allan Huffman\" <[email protected]>", "msg_from_op": true, "msg_subject": "HELP - Accessing SQL_ASCII from Java" } ]
[ { "msg_contents": "Hi,\n\nI have created a small patch that makes possible to compile pgsql on newer\nCygwin snapshots (tested on 990115 which is recommended to use - it fixes\nsome errors in B20.1)\n\nAnd I have another patch for including <sys/ipc.h> before <sys/sem.h> in\nbackend/storage/lmgr/proc.c - it is required due the design of cygipc\nheaders\n\n\t\t\tDan\n\nPS: I think there are missing some files in the interfaces/libpgeasy\ndirectory (snapshot via CVS a few hours old)\n\n----------------------------------------------\nDaniel Horak\nnetwork and system administrator\ne-mail: [email protected]\nprivat e-mail: [email protected] ICQ:36448176\n----------------------------------------------", "msg_date": "Tue, 12 Oct 1999 12:08:48 +0200", "msg_from": "Horak Daniel <[email protected]>", "msg_from_op": true, "msg_subject": "win32 port on newer Cygwin snapshots (990115)" }, { "msg_contents": "[Charset iso-8859-2 unsupported, filtering to ASCII...]\n> Hi,\n> \n> I have created a small patch that makes possible to compile pgsql on newer\n> Cygwin snapshots (tested on 990115 which is recommended to use - it fixes\n> some errors in B20.1)\n> \n> And I have another patch for including <sys/ipc.h> before <sys/sem.h> in\n> backend/storage/lmgr/proc.c - it is required due the design of cygipc\n> headers\n\nThanks. Applied to both trees, so this will be in 6.5.3.\n\nMarc, I need to version stamp 6.5.3. I will do it now.\n\n> \n> \t\t\tDan\n> \n> PS: I think there are missing some files in the interfaces/libpgeasy\n> directory (snapshot via CVS a few hours old)\n\nYes, cleaning that up now. I don't know how I mess CVS up so often.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 10:55:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] win32 port on newer Cygwin snapshots (990115)" } ]
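The second patch Daniel mentions is purely an include-ordering change. A rough sketch of the intended ordering is shown below; the surrounding code of backend/storage/lmgr/proc.c is not reproduced here, and the reason given in the comment is an assumption based on his description of the cygipc headers.

/*
 * Sketch of the ordering required with the cygipc headers:
 * <sys/ipc.h> must be included before <sys/sem.h>, because the
 * cygipc <sys/sem.h> relies on declarations from <sys/ipc.h>.
 */
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>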
[ { "msg_contents": "... on Slashdot of all places.\n\nSome fellow wants to install a large database on Linux and is considering\nOracle vs. PostgreSQL. (Perhaps the poster with the 60GB astronomical db\nfrom the other day could come forward?) I know you guys always wanted to\nbe on Slashdot, so here's your chance.\n\nNot to start this all over again, but I especially liked this post:\n\n\"If you are looking for mission critical data, why not give a chance to\nMySQL?\"\n\nTake a look.\n\n\t-Peter\n\n", "msg_date": "Tue, 12 Oct 1999 12:38:41 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "It's PR time!" }, { "msg_contents": "Peter Eisentraut wrote on Tue, 12 Oct 1999 12:38:41 +0200\n>(Perhaps the poster with the 60GB astronomical db\n>from the other day could come forward?)\n\nThat was me . . . I'm on the 2MASS (Two Micron All Sky Survey)\nScience Team. This survey, now nearly 70% complete, has\namassed about 20TB of image data and from this we compile\ncatalogs of stellar and galaxian sources. The star catalog\nwill contain about 500 million sources and be about 100GB\non disk in condensed form.\n\nOur goals for pgsql is to recommend a local data server system\nfor astronomers who want to spin and query all or parts of \nthis catalog.\n\nThe URLs for the project are:\n\n\thttp://pegasus.astro.umass.edu\n\thttp://www.ipac.caltech.edu/2mass\n\nI am not the right person to head an \"official\" pgsql response,\nbut I'd be happy to help and \"donote\" additional info on our\napplicaton.\n\n--Martin\n\n===========================================================================\n\nMartin Weinberg Phone: (413) 545-3821\nDept. of Physics and Astronomy FAX: (413) 545-2117/0648\n530 Graduate Research Tower\t [email protected]\nUniversity of Massachusetts\t http://www.astro.umass.edu/~weinberg/\nAmherst, MA 01003-4525\n\n\n", "msg_date": "Tue, 12 Oct 1999 09:42:58 -0300", "msg_from": "Martin Weinberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] It's PR time! " } ]
[ { "msg_contents": "I'll have to check /.\n\nThe 60Gb Astronomical DB is probably the TASS project (which I'm\nassociated with), but it's not 60Gb yet (about 18 at the moment), but\nit's going to grow by about 200Mb a day when it really starts.\n\nThe best person (who I know is on atleast the interfaces list) is Chris\nRobertson (I think I have his surname correct), but I only have his\nemail address at home.\n\nYou may find it on http://www.tass-survey.org\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Peter Eisentraut [mailto:[email protected]]\nSent: 12 October 1999 11:39\nTo: [email protected]\nSubject: [HACKERS] It's PR time!\n\n\n... on Slashdot of all places.\n\nSome fellow wants to install a large database on Linux and is\nconsidering\nOracle vs. PostgreSQL. (Perhaps the poster with the 60GB astronomical db\nfrom the other day could come forward?) I know you guys always wanted to\nbe on Slashdot, so here's your chance.\n\nNot to start this all over again, but I especially liked this post:\n\n\"If you are looking for mission critical data, why not give a chance to\nMySQL?\"\n\nTake a look.\n\n\t-Peter\n\n\n************\n", "msg_date": "Tue, 12 Oct 1999 11:57:27 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] It's PR time!" } ]
[ { "msg_contents": "Marc, I added a dead directory called interfaces/pgeasy that I have\nremoved via CVS, but I know the directory still exists in the CVS tree. \n(Now called libpgeasy). In fact, I know there are many directories that\nare empty in the tree.\n\nCan you figure out a way to remove them from the CVSROOT tree?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 10:52:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Dead CVS directories" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Marc, I added a dead directory called interfaces/pgeasy that I have\n> removed via CVS, but I know the directory still exists in the CVS tree. \n> (Now called libpgeasy). In fact, I know there are many directories that\n> are empty in the tree.\n\n> Can you figure out a way to remove them from the CVSROOT tree?\n\nAFAIK, a directory in the CVS repository is forever, because there are\nalways going to be files in it --- maybe files that are dead as far as\nthe current version is concerned, but CVS wants to be able to give you\nback any past state of the tree, so there are still ,v files in there.\n\nNormally you should be running checkouts and updates with -p (prune)\nswitch, which removes directories from *your checked out copy* if they\ncontain no live files as of the version you are checking out. But\nthey have to stay there in the CVS repository.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Oct 1999 11:28:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Dead CVS directories " } ]
[ { "msg_contents": "I also have a slight feeling that 6.5.2 is missing bits of the current\nJDBC source. I'm not certain (won't be able to check until tonight), but\nI think this is the case.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: The Hermit Hacker [mailto:[email protected]]\nSent: 12 October 1999 15:44\nTo: Tom Lane\nCc: Bruce Momjian; Thomas Lockhart; Postgres Hackers List\nSubject: Re: [HACKERS] Features for next release \n\n\nOn Tue, 12 Oct 1999, Tom Lane wrote:\n\n> >> Bruce, you asked for a v6.5.3 to be released...anything outstanding\nthat\n> >> should prevent me from doing that tomorrow afternoon? \n> \n> > Not that I know of. I was waiting to see if we could come up with\nother\n> > patches, but I don't think that is going to happen anytime soon.\n> \n> On the other hand, is there a reason to be in a rush to put out 6.5.3?\n> I didn't think we had many important changes from 6.5.2 yet.\n\nv6.5.3, I believe, was because PgAccess somehow got removed in v6.5.2 :(\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick:\nScrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org \n\n\n************\n", "msg_date": "Tue, 12 Oct 1999 15:56:19 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Features for next release " }, { "msg_contents": "On Tue, 12 Oct 1999, Peter Mount wrote:\n\n> I also have a slight feeling that 6.5.2 is missing bits of the current\n> JDBC source. I'm not certain (won't be able to check until tonight), but\n> I think this is the case.\n\nDone, let me know when you are finished/ready ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 12 Oct 1999 12:32:01 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Features for next release " }, { "msg_contents": "> On Tue, 12 Oct 1999, Peter Mount wrote:\n> \n> > I also have a slight feeling that 6.5.2 is missing bits of the current\n> > JDBC source. I'm not certain (won't be able to check until tonight), but\n> > I think this is the case.\n> \n> Done, let me know when you are finished/ready ...\n\nI am ready. Thomas, please check out my new INSTALL file. I did it\nwith sgmltools and Netscape. Seems to be OK.\n\nI am not sure if you have the 6.5.* branch, so I am attaching the file\nfor everyone's review.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\nChapter 0. Installation\n\nTable of Contents\nRequirements to Run Postgres\nInstallation Procedure\nPlaying with Postgres\nThe Next Step\nPorting Notes\n\n Complete installation instructions for Postgres v6.5.3.\n\nBefore installing Postgres, you may wish to visit www.postgresql.org for up\nto date information, patches, etc.\n\nThese installation instructions assume:\n\n * Commands are Unix-compatible. 
See note below.\n\n * Defaults are used except where noted.\n\n * User postgres is the Postgres superuser.\n\n * The source path is /usr/src/pgsql (other paths are possible).\n\n * The runtime path is /usr/local/pgsql (other paths are possible).\n\nCommands were tested on RedHat Linux version 5.2 using the tcsh shell.\nExcept where noted, they will probably work on most systems. Commands like\nps and tar may vary wildly between platforms on what options you should use.\nUse common sense before typing in these commands.\n\nOur Makefiles require GNU make (called \"gmake\" in this document). They will\nnot work with non-GNU make programs. If you have GNU make installed under\nthe name \"make\" instead of \"gmake\", then you will use the command make\ninstead. That's OK, but you need to have the GNU form of make to succeed\nwith an installation.\n\nRequirements to Run Postgres\n\nUp to date information on supported platforms is at\nhttp://www.postgresql.org/docs/admin/install.htm. In general, most\nUnix-compatible platforms with modern libraries should be able to run\nPostgres.\n\nAlthough the minimum required memory for running Postgres is as little as\n8MB, there are noticable improvements in runtimes for the regression tests\nwhen expanding memory up to 96MB on a relatively fast dual-processor system\nrunning X-Windows. The rule is you can never have too much memory.\n\nCheck that you have sufficient disk space. You will need about 30 Mbytes for\n/usr/src/pgsql, about 5 Mbytes for /usr/local/pgsql (excluding your\ndatabase) and 1 Mbyte for an empty database. The database will temporarily\ngrow to about 20 Mbytes during the regression tests. You will also need\nabout 3 Mbytes for the distribution tar file.\n\nWe therefore recommend that during installation and testing you have well\nover 20 Mbytes free under /usr/local and another 25 Mbytes free on the disk\npartition containing your database. Once you delete the source files, tar\nfile and regression database, you will need 2 Mbytes for /usr/local/pgsql, 1\nMbyte for the empty database, plus about five times the space you would\nrequire to store your database data in a flat file.\n\nTo check for disk space, use\n\n$ df -k\n\n\n ------------------------------------------------------------------------\n\nInstallation Procedure\n\nPostgres Installation\n\nFor a fresh install or upgrading from previous releases of Postgres:\n\n 1. Read any last minute information and platform specific porting notes.\n There are some platform specific notes at the end of this file for\n Ultrix4.x, Linux, BSD/OS and NeXT. There are other files in directory\n /usr/src/pgsql/doc, including files FAQ-Irix and FAQ-Linux. Also look\n in directory ftp://ftp.postgresql.org/pub. If there is a file called\n INSTALL in this directory then this file will contain the latest\n installation information.\n\n Please note that a \"tested\" platform in the list given earlier simply\n means that someone went to the effort at some point of making sure that\n a Postgres distribution would compile and run on this platform without\n modifying the code. Since the current developers will not have access\n to all of these platforms, some of them may not compile cleanly and\n pass the regression tests in the current release due to minor problems.\n Any such known problems and their solutions will be posted in\n ftp://ftp.postgresql.org/pub/INSTALL.\n\n 2. 
Create the Postgres superuser account (postgres is commonly used) if it\n does not already exist.\n\n The owner of the Postgres files can be any unprivileged user account.\n It must not be root, bin, or any other account with special access\n rights, as that would create a security risk.\n\n 3. Log in to the Postgres superuser account. Most of the remaining steps\n in the installation will happen in this account.\n\n 4. Ftp file ftp://ftp.postgresql.org/pub/postgresql-v6.5.3.tar.gz from the\n Internet. Store it in your home directory.\n\n 5. Some platforms use flex. If your system uses flex then make sure you\n have a good version. To check, type\n\n $ flex --version\n\n If the flex command is not found then you probably do not need it. If\n the version is 2.5.2 or 2.5.4 or greater then you are okay. If it is\n 2.5.3 or before 2.5.2 then you will have to upgrade flex. You may get\n it at ftp://prep.ai.mit.edu/pub/gnu/flex-2.5.4.tar.gz.\n\n If you need flex and don't have it or have the wrong version, then you\n will be told so when you attempt to compile the program. Feel free to\n skip this step if you aren't sure you need it. If you do need it then\n you will be told to install/upgrade flex when you try to compile\n Postgres.\n\n You may want to do the entire flex installation from the root account,\n though that is not absolutely necessary. Assuming that you want the\n installation to place files in the usual default areas, type the\n following:\n\n $ su -\n $ cd /usr/local/src\n ftp prep.ai.mit.edu\n ftp> cd /pub/gnu/\n ftp> binary\n ftp> get flex-2.5.4.tar.gz\n ftp> quit\n $ gunzip -c flex-2.5.4.tar.gz | tar xvf -\n $ cd flex-2.5.4\n $ configure --prefix=/usr\n $ gmake\n $ gmake check\n # You must be root when typing the next line:\n $ gmake install\n $ cd /usr/local/src\n $ rm -rf flex-2.5.4\n\n This will update files /usr/man/man1/flex.1, /usr/bin/flex,\n /usr/lib/libfl.a, /usr/include/FlexLexer.h and will add a link\n /usr/bin/flex++ which points to flex.\n\n 6. If you are not upgrading an existing system then skip to . If you are\n upgrading from 6.5, you do not need to dump/reload or initdb. Simply\n compile the source code, stop the postmaster, do a \"make install\", and\n restart the postmaster. If you are upgrading from 6.4.* or earlier,\n back up your database. For alpha- and beta-level releases, the database\n format is liable to change, often every few weeks, with no notice\n besides a quick comment in the HACKERS mailing list. Full releases\n always require a dump/reload from previous releases. It is therefore a\n bad idea to skip this step.\n\n Tip: Do not use the pg_dumpall script from v6.0 or everything\n will be owned by the Postgres super user.\n\n To dump your fairly recent post-v6.0 database installation, type\n\n $ pg_dumpall > db.out\n\n To use the latest pg_dumpall script on your existing older database\n before upgrading Postgres, pull the most recent version of pg_dumpall\n from the new distribution:\n\n $ cd\n $ gunzip -c postgresql-v6.5.3.tar.gz \\\n | tar xvf - src/bin/pg_dump/pg_dumpall\n $ chmod a+x src/bin/pg_dump/pg_dumpall\n $ src/bin/pg_dump/pg_dumpall > db.out\n $ rm -rf src\n\n If you wish to preserve object id's (oids), then use the -o option when\n running pg_dumpall. 
However, unless you have a special reason for doing\n this (such as using OIDs as keys in tables), don't do it.\n\n If the pg_dumpall command seems to take a long time and you think it\n might have died, then, from another terminal, type\n\n $ ls -l db.out\n\n several times to see if the size of the file is growing.\n\n Please note that if you are upgrading from a version prior to\n Postgres95 v1.09 then you must back up your database, install\n Postgres95 v1.09, restore your database, then back it up again. You\n should also read the release notes which should cover any\n release-specific issues.\n\n Caution\n You must make sure that your database is not updated in the middle of your\n backup. If necessary, bring down postmaster, edit the permissions in file\n /usr/local/pgsql/data/pg_hba.conf to allow only you on, then bring\n postmaster back up.\n 7. If you are upgrading an existing system then kill the postmaster. Type\n\n $ ps -ax | grep postmaster\n\n This should list the process numbers for a number of processes. Type\n the following line, with pid replaced by the process id for process\n postmaster. (Do not use the id for process \"grep postmaster\".) Type\n\n $ kill pid\n\n to actually stop the process.\n\n Tip: On systems which have Postgres started at boot time,\n there is probably a startup file which will accomplish the\n same thing. For example, on my Linux system I can type\n\n $ /etc/rc.d/init.d/postgres.init stop\n\n to halt Postgres.\n\n 8. If you are upgrading an existing system then move the old directories\n out of the way. If you are short of disk space then you may have to\n back up and delete the directories instead. If you do this, save the\n old database in the /usr/local/pgsql/data directory tree. At a minimum,\n save file /usr/local/pgsql/data/pg_hba.conf.\n\n Type the following:\n\n $ su -\n $ cd /usr/src\n $ mv pgsql pgsql_6_0\n $ cd /usr/local\n $ mv pgsql pgsql_6_0\n $ exit\n\n If you are not using /usr/local/pgsql/data as your data directory\n (check to see if environment variable PGDATA is set to something else)\n then you will also want to move this directory in the same manner.\n\n 9. Make new source and install directories. The actual paths can be\n different for your installation but you must be consistent throughout\n this procedure.\n\n Note: There are two places in this installation procedure\n where you will have an opportunity to specify installation\n locations for programs, libraries, documentation, and other\n files. Usually it is sufficient to specify these at the gmake\n install stage of installation.\n\n Type\n\n $ su\n $ cd /usr/src\n $ mkdir pgsql\n $ chown postgres:postgres pgsql\n $ cd /usr/local\n $ mkdir pgsql\n $ chown postgres:postgres pgsql\n $ exit\n\n 10. Unzip and untar the new source file. Type\n\n $ cd /usr/src/pgsql\n $ gunzip -c ~/postgresql-v6.5.3.tar.gz | tar xvf -\n\n 11. Configure the source code for your system. It is this step at which you\n can specify your actual installation path for the build process (see\n the --prefix option below). Type\n\n $ cd /usr/src/pgsql/src\n $ ./configure [ options ]\n\n a. Among other chores, the configure script selects a system-specific\n \"template\" file from the files provided in the template\n subdirectory. If it cannot guess which one to use for your system,\n it will say so and exit. 
In that case you'll need to figure out\n which one to use and run configure again, this time giving the\n --with-template=TEMPLATE option to make the right file be chosen.\n\n Please Report Problems: If your system is not\n automatically recognized by configure and you have to do\n this, please send email to [email protected] with the\n output of the program ./config.guess. Indicate what the\n template file should be.\n\n b. Choose configuration options. Check for details. However, for a\n plain-vanilla first installation with no extra options like\n multi-byte character support or locale collation support it may be\n adequate to have chosen the installation areas and to run\n configure without extra options specified. The configure script\n accepts many additional options that you can use if you don't like\n the default configuration. To see them all, type\n\n ./configure --help\n\n Some of the more commonly used ones are:\n\n --prefix=BASEDIR Selects a different base directory for the\n installation of the Postgres configuration.\n The default is /usr/local/pgsql.\n --with-template=TEMPLATE\n Use template file TEMPLATE - the template\n files are assumed to be in the directory\n src/template, so look there for proper values.\n --with-tcl Build interface libraries and programs requiring\n Tcl/Tk, including libpgtcl, pgtclsh, and pgtksh.\n --with-perl Build the Perl interface library.\n --with-odbc Build the ODBC driver package.\n --enable-hba Enables Host Based Authentication (DEFAULT)\n --disable-hba Disables Host Based Authentication\n --enable-locale Enables USE_LOCALE\n --enable-cassert Enables ASSERT_CHECKING\n --with-CC=compiler\n Use a specific C compiler that the configure\n script cannot find.\n --with-CXX=compiler\n --without-CXX\n Use a specific C++ compiler that the configure\n script cannot find, or exclude C++ compilation\n altogether. (This only affects libpq++ at\n present.)\n\n c. Here is the configure script used on a Sparc Solaris 2.5 system\n with /opt/postgres specified as the installation base directory:\n\n $ ./configure --prefix=/opt/postgres \\\n --with-template=sparc_solaris-gcc --with-pgport=5432 \\\n --enable-hba --disable-locale\n\n Tip: Of course, you may type these three lines all on\n the same line.\n\n 12. Install the man and HTML documentation. Type\n\n $ cd /usr/src/pgsql/doc\n $ gmake install\n\n The documentation is also available in Postscript format. Look for\n files ending with .ps.gz in the same directory.\n\n 13. Compile the program. Type\n\n $ cd /usr/src/pgsql/src\n $ gmake all > make.log 2>&1 &\n $ tail -f make.log\n\n The last line displayed will hopefully be\n\n All of PostgreSQL is successfully made. Ready to install.\n\n Remember, \"gmake\" may be called \"make\" on your system. At this point,\n or earlier if you wish, type control-C to get out of tail. (If you have\n problems later on you may wish to examine file make.log for warning and\n error messages.)\n\n Note: You will probably find a number of warning messages in\n make.log. Unless you have problems later on, these messages\n may be safely ignored.\n\n If the compiler fails with a message stating that the flex command\n cannot be found then install flex as described earlier. Next, change\n directory back to this directory, type\n\n $ gmake clean\n\n then recompile again.\n\n Compiler options, such as optimization and debugging, may be specified\n on the command line using the COPT variable. 
For example, typing\n\n $ gmake COPT=\"-g\" all > make.log 2>&1 &\n\n would invoke your compiler's -g option in all steps of the build. See\n src/Makefile.global.in for further details.\n\n 14. Install the program. Type\n\n $ cd /usr/src/pgsql/src\n $ gmake install > make.install.log 2>&1 &\n $ tail -f make.install.log\n\n The last line displayed will be\n\n gmake[1]: Leaving directory `/usr/src/pgsql/src/man'\n\n At this point, or earlier if you wish, type control-C to get out of\n tail. Remember, \"gmake\" may be called \"make\" on your system.\n\n 15. If necessary, tell your system how to find the new shared libraries.\n You can do one of the following, preferably the first:\n\n a. As root, edit file /etc/ld.so.conf. Add a line\n\n /usr/local/pgsql/lib\n\n to the file. Then run command /sbin/ldconfig.\n\n b. In a bash shell, type\n\n export LD_LIBRARY_PATH=/usr/local/pgsql/lib\n\n c. In a csh shell, type\n\n setenv LD_LIBRARY_PATH /usr/local/pgsql/lib\n\n Please note that the above commands may vary wildly for different\n operating systems. Check the platform specific notes, such as those for\n Ultrix4.x or and for non-ELF Linux.\n\n If, when you create the database, you get the message\n\n pg_id: can't load library 'libpq.so'\n\n then the above step was necessary. Simply do this step, then try to\n create the database again.\n\n 16. If you used the --with-perl option to configure, check the install log\n to see whether the Perl module was actually installed. If you've\n followed our advice to make the Postgres files be owned by an\n unprivileged userid, then the Perl module won't have been installed,\n for lack of write privileges on the Perl library directories. You can\n complete its installation, either now or later, by becoming the user\n that does own the Perl library (often root) (via su) and doing\n\n $ cd /usr/src/pgsql/src/interfaces/perl5\n $ gmake install\n\n\n 17. If it has not already been done, then prepare account postgres for\n using Postgres. Any account that will use Postgres must be similarly\n prepared.\n\n There are several ways to influence the runtime environment of the\n Postgres server. Refer to the Administrator's Guide for more\n information.\n\n Note: The following instructions are for a bash/sh shell.\n Adapt accordingly for other shells.\n\n a. Add the following lines to your login environment: shell,\n ~/.bash_profile:\n\n PATH=$PATH:/usr/local/pgsql/bin\n MANPATH=$MANPATH:/usr/local/pgsql/man\n PGLIB=/usr/local/pgsql/lib\n PGDATA=/usr/local/pgsql/data\n export PATH MANPATH PGLIB PGDATA\n\n\n b. Several regression tests could fail if the user's locale collation\n scheme is different from that of the standard C locale.\n\n If you configure and compile Postgres with --enable-locale then\n you should set the locale environment to \"C\" (or unset all \"LC_*\"\n variables) by putting these additional lines to your login\n environment before starting postmaster:\n\n LC_COLLATE=C\n LC_CTYPE=C\n export LC_COLLATE LC_CTYPE\n\n\n\n\n\n c. Make sure that you have defined these variables before continuing\n with the remaining steps. The easiest way to do this is to type:\n\n $ source ~/.bash_profile\n\n\n 18. Create the database installation from your Postgres superuser account\n (typically account postgres). Do not do the following as root! This\n would be a major security hole. Type\n\n $ initdb\n\n 19. Set up permissions to access the database system. Do this by editing\n file /usr/local/pgsql/data/pg_hba.conf. The instructions are included\n in the file. 
(If your database is not located in the default location,\n i.e. if PGDATA is set to point elsewhere, then the location of this\n file will change accordingly.) This file should be made read only again\n once you are finished. If you are upgrading from v6.0 or later you can\n copy file pg_hba.conf from your old database on top of the one in your\n new database, rather than redoing the file from scratch.\n\n 20. Briefly test that the backend will start and run by running it from the\n command line.\n\n a. Start the postmaster daemon running in the background by typing\n\n $ cd\n $ nohup postmaster -i > pgserver.log 2>&1 &\n\n b. Create a database by typing\n\n $ createdb\n\n c. Connect to the new database:\n\n $ psql\n\n d. And run a sample query:\n\n postgres=> SELECT datetime 'now';\n\n e. Exit psql:\n\n postgres=> \\q\n\n f. Remove the test database (unless you will want to use it later for\n other tests):\n\n $ destroydb\n\n 21. Run postmaster in the background from your Postgres superuser account\n (typically account postgres). Do not run postmaster from the root\n account!\n\n Usually, you will want to modify your computer so that it will\n automatically start postmaster whenever it boots. It is not required;\n the Postgres server can be run successfully from non-privileged\n accounts without root intervention.\n\n Here are some suggestions on how to do this, contributed by various\n users.\n\n Whatever you do, postmaster must be run by the Postgres superuser\n (postgres?) and not by root. This is why all of the examples below\n start by switching user (su) to postgres. These commands also take into\n account the fact that environment variables like PATH and PGDATA may\n not be set properly. The examples are as follows. Use them with extreme\n caution.\n\n o If you are installing from a non-privileged account and have no\n root access, then start the postmaster and send it to the\n background:\n\n $ cd\n $ nohup postmaster > regress.log 2>&1 &\n\n o Edit file rc.local on NetBSD or file rc2.d on SPARC Solaris 2.5.1\n to contain the following single line:\n\n su postgres -c \"/usr/local/pgsql/bin/postmaster -S -D /usr/local/pgsql/data\"\n\n o In FreeBSD 2.2-RELEASE edit /usr/local/etc/rc.d/pgsql.sh to\n contain the following lines and make it chmod 755 and chown\n root:bin.\n\n #!/bin/sh\n [ -x /usr/local/pgsql/bin/postmaster ] && {\n su -l pgsql -c 'exec /usr/local/pgsql/bin/postmaster\n -D/usr/local/pgsql/data\n -S -o -F > /usr/local/pgsql/errlog' &\n echo -n ' pgsql'\n }\n\n You may put the line breaks as shown above. The shell is smart\n enough to keep parsing beyond end-of-line if there is an\n expression unfinished. The exec saves one layer of shell under the\n postmaster process so the parent is init.\n\n o In RedHat Linux add a file /etc/rc.d/init.d/postgres.init which is\n based on the example in contrib/linux/. Then make a softlink to\n this file from /etc/rc.d/rc5.d/S98postgres.init.\n\n o In RedHat Linux edit file /etc/inittab to add the following as a\n single line:\n\n pg:2345:respawn:/bin/su - postgres -c\n \"/usr/local/pgsql/bin/postmaster -D/usr/local/pgsql/data\n /usr/local/pgsql/server.log 2&1 /dev/null\"\n\n (The author of this example says this example will revive the\n postmaster if it dies, but he doesn't know if there are other side\n effects.)\n\n 22. Run the regression tests. The file\n /usr/src/pgsql/src/test/regress/README has detailed instructions for\n running and interpreting the regression tests. A short version follows\n here:\n\n a. 
Type\n\n $ cd /usr/src/pgsql/src/test/regress\n $ gmake clean\n $ gmake all runtest\n\n You do not need to type gmake clean if this is the first time you\n are running the tests.\n\n You should get on the screen (and also written to file\n ./regress.out) a series of statements stating which tests passed\n and which tests failed. Please note that it can be normal for some\n tests to \"fail\" on some platforms. The script says a test has\n failed if there is any difference at all between the actual output\n of the test and the expected output. Thus, tests may \"fail\" due to\n minor differences in wording of error messages, small differences\n in floating-point roundoff, etc, between your system and the\n regression test reference platform. \"Failures\" of this type do not\n indicate a problem with Postgres. The file ./regression.diffs\n contains the textual differences between the actual test output on\n your machine and the \"expected\" output (which is simply what the\n reference system produced). You should carefully examine each\n difference listed to see whether it appears to be a significant\n issue.\n\n For example,\n\n + For a i686/Linux-ELF platform, no tests failed since this is\n the v6.5.3 regression testing reference platform.\n\n Even if a test result clearly indicates a real failure, it may be\n a localized problem that will not affect you. An example is that\n the int8 test will fail, producing obviously incorrect output, if\n your machine and C compiler do not provide a 64-bit integer data\n type (or if they do but configure didn't discover it). This is not\n something to worry about unless you need to store 64-bit integers.\n\n Conclusion? If you do see failures, try to understand the nature\n of the differences and then decide if those differences will\n affect your intended use of Postgres. The regression tests are a\n helpful tool, but they may require some study to be useful.\n\n After running the regression tests, type\n\n $ destroydb regression\n $ cd /usr/src/pgsql/src/test/regress\n $ gmake clean\n\n to recover the disk space used for the tests. (You may want to\n save the regression.diffs file in another place before doing\n this.)\n\n 23. If you haven't already done so, this would be a good time to modify\n your computer to do regular maintainence. The following should be done\n at regular intervals:\n\n Minimal Backup Procedure\n\n 1. Run the SQL command VACUUM. This will clean up your database.\n\n 2. Back up your system. (You should probably keep the last few\n backups on hand.) Preferably, no one else should be using the\n system at the time.\n\n Ideally, the above tasks should be done by a shell script that is run\n nightly or weekly by cron. Look at the man page for crontab for a\n starting point on how to do this. (If you do it, please e-mail us a\n copy of your shell script. We would like to set up our own systems to\n do this too.)\n\n 24. If you are upgrading an existing system then reinstall your old\n database. Type\n\n $ cd\n $ psql -e template1 < db.out\n\n If your pre-v6.2 database uses either path or polygon geometric data\n types, then you will need to upgrade any columns containing those\n types. 
To do so, type (from within psql)\n\n UPDATE FirstTable SET PathCol = UpgradePath(PathCol);\n UPDATE SecondTable SET PathCol = UpgradePath(PathCol);\n ...\n VACUUM;\n\n UpgradePath() checks to see that a path value is consistant with the\n old syntax, and will not update a column which fails that examination.\n UpgradePoly() cannot verify that a polygon is in fact from an old\n syntax, but RevertPoly() is provided to reverse the effects of a\n mis-applied upgrade.\n\n 25. If you are a new user, you may wish to play with Postgres as described\n below.\n\n 26. Clean up after yourself. Type\n\n $ rm -rf /usr/src/pgsql_6_5\n $ rm -rf /usr/local/pgsql_6_5\n # Also delete old database directory tree if it is not in\n # /usr/local/pgsql_6_5/data\n $ rm ~/postgresql-v6.5.3.tar.gz\n\n 27. You will probably want to print out the documentation. If you have a\n Postscript printer, or have your machine already set up to accept\n Postscript files using a print filter, then to print the User's Guide\n simply type\n\n $ cd /usr/local/pgsql/doc\n $ gunzip user.ps.tz | lpr\n\n Here is how you might do it if you have Ghostscript on your system and\n are writing to a laserjet printer.\n\n $ alias gshp='gs -sDEVICE=laserjet -r300 -dNOPAUSE'\n $ export GS_LIB=/usr/share/ghostscript:/usr/share/ghostscript/fonts\n $ gunzip user.ps.gz\n $ gshp -sOUTPUTFILE=user.hp user.ps\n $ gzip user.ps\n $ lpr -l -s -r manpage.hp\n\n 28. The Postgres team wants to keep Postgres working on all of the\n supported platforms. We therefore ask you to let us know if you did or\n did not get Postgres to work on you system. Please send a mail message\n to [email protected] telling us the following:\n\n o The version of Postgres (v6.5.3, 6.5, beta 990318, etc.).\n\n o Your operating system (i.e. RedHat v5.1 Linux v2.0.34).\n\n o Your hardware (SPARC, i486, etc.).\n\n o Did you compile, install and run the regression tests cleanly? If\n not, what source code did you change (i.e. patches you applied,\n changes you made, etc.), what tests failed, etc. It is normal to\n get many warning when you compile. You do not need to report\n these.\n\n 29. Now create, access and manipulate databases as desired. Write client\n programs to access the database server. In other words, enjoy!\n\n ------------------------------------------------------------------------\n\nPlaying with Postgres\n\nAfter Postgres is installed, a database system is created, a postmaster\ndaemon is running, and the regression tests have passed, you'll want to see\nPostgres do something. That's easy. Invoke the interactive interface to\nPostgres, psql:\n\n% psql template1\n\n(psql has to open a particular database, but at this point the only one that\nexists is the template1 database, which always exists. We will connect to it\nonly long enough to create another one and switch to it.)\n\nThe response from psql is:\n\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: template1\n\ntemplate1=>\n\nCreate the database foo:\n\ntemplate1=> create database foo;\nCREATEDB\n\n(Get in the habit of including those SQL semicolons. 
Psql won't execute\nanything until it sees the semicolon or a \"\\g\" and the semicolon is required\nto delimit multiple statements.)\n\nNow connect to the new database:\n\ntemplate1=> \\c foo\nconnecting to new database: foo\n\n(\"slash\" commands aren't SQL, so no semicolon. Use \\? to see all the slash\ncommands.)\n\nAnd create a table:\n\nfoo=> create table bar (i int4, c char(16));\nCREATE\n\nThen inspect the new table:\n\nfoo=> \\d bar\n\nTable = bar\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| i | int4 | 4 |\n| c | (bp)char | 16 |\n+----------------------------------+----------------------------------+-------+\n\nAnd so on. You get the idea.\n\n ------------------------------------------------------------------------\n\n\nThe Next Step\n\nQuestions? Bugs? Feedback? First, read the files in directory\n/usr/src/pgsql/doc/. The FAQ in this directory may be particularly useful.\n\nIf Postgres failed to compile on your computer then fill out the form in\nfile /usr/src/pgsql/doc/bug.template and mail it to the location indicated\nat the top of the form.\n\nCheck on the web site at http://www.postgresql.org For more information on\nthe various support mailing lists.\n\n ------------------------------------------------------------------------\n\n\nPorting Notes\n\nCheck for any platform-specific FAQs in the doc/ directory of the source\ndistribution.", "msg_date": "Tue, 12 Oct 1999 11:36:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Features for next release" } ]
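The Minimal Backup Procedure above (run VACUUM, then back up the system) lends itself to the small cron-driven shell script that the text asks for. A rough sketch only, assuming the default /usr/local/pgsql layout and a single database named mydb; the database name, backup directory, and schedule are placeholders rather than part of the procedure itself:

    #!/bin/sh
    # Nightly Postgres maintenance, run as the postgres superuser account,
    # preferably at a time when no one else is using the system.
    PATH=$PATH:/usr/local/pgsql/bin
    PGDATA=/usr/local/pgsql/data
    export PATH PGDATA

    # Clean up the database (repeat for each database you maintain).
    psql -c 'VACUUM;' mydb

    # Dump all databases to a dated file; keep the last few dumps on hand.
    pg_dumpall > /usr/local/pgsql/backups/db.`date +%Y%m%d`.out

A crontab entry such as

    30 2 * * * /usr/local/pgsql/nightly.sh

would run it at 02:30 each night; see the crontab man page mentioned above for scheduling details.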
[ { "msg_contents": "rm?\n\nWell it's a last resort I've used on one of my home cvs trees, but it\ndoes loose the archived stuff under that directory.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]]\nSent: 12 October 1999 15:53\nTo: Marc G. Fournier\nCc: PostgreSQL-development\nSubject: [HACKERS] Dead CVS directories\n\n\nMarc, I added a dead directory called interfaces/pgeasy that I have\nremoved via CVS, but I know the directory still exists in the CVS tree. \n(Now called libpgeasy). In fact, I know there are many directories that\nare empty in the tree.\n\nCan you figure out a way to remove them from the CVSROOT tree?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n\n************\n", "msg_date": "Tue, 12 Oct 1999 15:57:45 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Dead CVS directories" }, { "msg_contents": "> rm?\n> \n> Well it's a last resort I've used on one of my home cvs trees, but it\n> does loose the archived stuff under that directory.\n> \n> Peter\n\nYes, but in most cases we don't care. That stuff is very old. I don't\nknow of any directory that has no valid files that needs to be saved.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 11:08:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Dead CVS directories" }, { "msg_contents": "On Tue, 12 Oct 1999, Bruce Momjian wrote:\n\n> > rm?\n> > \n> > Well it's a last resort I've used on one of my home cvs trees, but it\n> > does loose the archived stuff under that directory.\n> > \n> > Peter\n> \n> Yes, but in most cases we don't care. That stuff is very old. I don't\n> know of any directory that has no valid files that needs to be saved.\n\nWhy would we want to remove it in the first place? If its been 'deleted',\nit won't show up unless you want it to (cvs update -APd removed old\nfiles/directories)...\n\nOops, careful in the above, it also removed any 'sticky/branch' tags...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 12 Oct 1999 12:30:46 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Dead CVS directories" }, { "msg_contents": "> On Tue, 12 Oct 1999, Bruce Momjian wrote:\n> \n> > > rm?\n> > > \n> > > Well it's a last resort I've used on one of my home cvs trees, but it\n> > > does loose the archived stuff under that directory.\n> > > \n> > > Peter\n> > \n> > Yes, but in most cases we don't care. That stuff is very old. I don't\n> > know of any directory that has no valid files that needs to be saved.\n> \n> Why would we want to remove it in the first place? 
If its been 'deleted',\n> it won't show up unless you want it to (cvs update -APd removed old\n> files/directories)...\n> \n> Oops, careful in the above, it also removed any 'sticky/branch' tags...\n\nI just thought we could fiddle with CVS to remove the stuff totally. No\nreal need to, though. I just thought it would be confusing for people\nwho didn't do -P, or people managing CVSROOT directly.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 11:35:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Dead CVS directoriesh" }, { "msg_contents": "Then <[email protected]> spoke up and said:\n> > Why would we want to remove it in the first place? If its been 'deleted',\n> > it won't show up unless you want it to (cvs update -APd removed old\n> > files/directories)...\n> > \n> > Oops, careful in the above, it also removed any 'sticky/branch' tags...\n> \n> I just thought we could fiddle with CVS to remove the stuff totally. No\n> real need to, though. I just thought it would be confusing for people\n> who didn't do -P, or people managing CVSROOT directly.\n\nThere is only one good reason for \"really\" pruning those old\ndirectories out of the source tree: revision purging. If you purge\nall revisions of the files that live in those directories (so that\nthey really *are* empty), then removing the directories is a good idea.\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================", "msg_date": "13 Oct 1999 10:08:36 -0400", "msg_from": "Brian E Gallew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Dead CVS directoriesh" } ]
[ { "msg_contents": "I'll check it tonight, so you should know in about 3 hours.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: 12 October 1999 16:24\nTo: The Hermit Hacker\nCc: Bruce Momjian; Thomas Lockhart; Postgres Hackers List\nSubject: Re: [HACKERS] Features for next release \n\n\nThe Hermit Hacker <[email protected]> writes:\n>> On the other hand, is there a reason to be in a rush to put out\n6.5.3?\n>> I didn't think we had many important changes from 6.5.2 yet.\n\n> v6.5.3, I believe, was because PgAccess somehow got removed in v6.5.2\n:(\n\nAh, right, I'd forgotten that. OK, we need a 6.5.3.\n\nIt sounds like Peter wants another day to check JDBC though...\n\n\t\t\tregards, tom lane\n\n************\n", "msg_date": "Tue, 12 Oct 1999 16:33:10 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Features for next release " } ]
[ { "msg_contents": "And besides, they will be backed up before removal, right?\n\n>> -----Original Message-----\n>> From: Bruce Momjian [mailto:[email protected]]\n>> Sent: Tuesday, October 12, 1999 5:09 PM\n>> To: Peter Mount\n>> Cc: Marc G. Fournier; PostgreSQL-development\n>> Subject: Re: [HACKERS] Dead CVS directories\n>> \n>> \n>> > rm?\n>> > \n>> > Well it's a last resort I've used on one of my home cvs \n>> trees, but it\n>> > does loose the archived stuff under that directory.\n>> > \n>> > Peter\n>> \n>> Yes, but in most cases we don't care. That stuff is very \n>> old. I don't\n>> know of any directory that has no valid files that needs to be saved.\n>> \n>> -- \n>> Bruce Momjian | http://www.op.net/~candle\n>> [email protected] | (610) 853-3000\n>> + If your life is a hard drive, | 830 Blythe Avenue\n>> + Christ can be your backup. | Drexel Hill, \n>> Pennsylvania 19026\n>> \n>> ************\n>> \n", "msg_date": "Tue, 12 Oct 1999 18:01:16 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Dead CVS directories" } ]
[ { "msg_contents": "Here is a new HISTORY file for 6.5.3.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\nChapter 0. Release Notes\n\n\tTable of Contents\n\tRelease 6.5.3\n\tRelease 6.5.2\n\tRelease 6.5.1\n\tRelease 6.5\n\tRelease 6.4.2\n\tRelease 6.4.1\n\tRelease 6.4\n\tRelease 6.3.2\n\tRelease 6.3.1\n\tRelease 6.3\n\tRelease 6.2.1\n\tRelease 6.2\n\tRelease 6.1.1\n\tRelease 6.1\n\tRelease v6.0\n\tRelease v1.09\n\tRelease v1.02\n\tRelease v1.01\n\tRelease v1.0\n\tPostgres95 Beta 0.03\n\tPostgres95 Beta 0.02\n\tPostgres95 Beta 0.01\n\tTiming Results\n\nRelease 6.5.3\n\nThis is basically a cleanup release for 6.5.2. We have added a new pgaccess\nthat was missing in 6.5.2, and installed an NT-specific fix.\n\nMigration to v6.5.3\n\nA dump/restore is not required for those running 6.5.*.\n\nDetailed Change List\n\nUpdated version of pgaccess 0.98\nNT-specific patch\n\n ------------------------------------------------------------------------\n Release 6.5.2\n Release Notes\n ------------------------------------------------------------------------\n\nRelease 6.5.2\n\nThis is basically a cleanup release for 6.5.1. We have fixed a variety of\nproblems reported by 6.5.1 users.\n\nMigration to v6.5.2\n\nA dump/restore is not required for those running 6.5.*.\n\nDetailed Change List\n\nsubselect+CASE fixes(Tom)\nAdd SHLIB_LINK setting for solaris_i386 and solaris_sparc ports(Daren Sefcik)\nFixes for CASE in WHERE join clauses(Tom)\nFix BTScan abort(Tom)\nRepair the check for redundant UNIQUE and PRIMARY KEY indices(Thomas)\nImprove it so that it checks for multi-column constraints(Thomas)\nFix for Win32 making problem with MB enabled(Hiroki Kataoka)\nAllow BSD yacc and bison to compile pl code(Bruce)\nFix SET NAMES working\nint8 fixes(Thomas)\nFix vacuum's memory consumption(Hiroshi,Tatsuo)\nReduce the total memory consumption of vacuum(Tom)\nFix for timestamp(datetime)\nRule deparsing bugfixes(Tom)\nFix quoting problems in mkMakefile.tcldefs.sh.in and mkMakefile.tkdefs.sh.in(Tom)\nThis is to re-use space on index pages freed by vacuum(Vadim)\ndocument -x for pg_dump(Bruce)\nFix for unary operators in rule deparser(Tom)\nComment out FileUnlink of excess segments during mdtruncate()(Tom)\nIrix linking fix from Yu Cao [email protected]\nRepair logic error in LIKE: should not return LIKE_ABORT\n when reach end of pattern before end of text(Tom)\nRepair incorrect cleanup of heap memory allocation during transaction abort(Tom)\nUpdated version of pgaccess 0.98\n\n ------------------------------------------------------------------------\nRelease Notes Release 6.5.1\n Release Notes\n ------------------------------------------------------------------------\n\nRelease 6.5.1\n\nThis is basically a cleanup release for 6.5. 
We have fixed a variety of\nproblems reported by 6.5 users.\n\nMigration to v6.5.1\n\nA dump/restore is not required for those running 6.5.\n\nDetailed Change List\n\nAdd NT README file\nPortability fixes for linux_ppc, Irix, linux_alpha, OpenBSD, alpha\nRemove QUERY_LIMIT, use SELECT...LIMIT\nFix for EXPLAIN on inheritance(Tom)\nPatch to allow vacuum on multi-segment tables(Hiroshi)\nR-Tree optimizer selectivity fix(Tom)\nACL file descriptor leak fix(Atsushi Ogawa)\nNew expresssion subtree code(Tom)\nAvoid disk writes for read-only transactions(Vadim)\nFix for removal of temp tables if last transaction was aborted(Bruce)\nFix to prevent too large tuple from being created(Bruce)\nplpgsql fixes\nAllow port numbers 32k - 64k(Bruce)\nAdd ^ precidence(Bruce)\nRename sort files called pg_temp to pg_sorttemp(Bruce)\nFix for microseconds in time values(Tom)\nTutorial source cleanup\nNew linux_m68k port\nFix for sorting of NULL's in some cases(Tom)\nShared library dependencies fixed (Tom)\nFixed glitches affecting GROUP BY in subselects(Tom)\nFix some compiler warnings (Tomoaki Nishiyama)\nAdd Win1250 (Czech) support (Pavel Behal)\n\n ------------------------------------------------------------------------\nRelease 6.5.2 Release 6.5\n Release Notes\n ------------------------------------------------------------------------\n\nRelease 6.5\n\nThis release marks a major step in the development team's mastery of the\nsource code we inherited from Berkeley. You will see we are now easily\nadding major features, thanks to the increasing size and experience of our\nworld-wide development team.\n\nHere is a brief summary of the more notable changes:\n\nMulti-version concurrency control(MVCC)\n\n This removes our old table-level locking, and replaces it with a\n locking system that is superior to most commercial database systems. In\n a traditional system, each row that is modified is locked until\n committed, preventing reads by other users. MVCC uses the natural\n multi-version nature of PostgreSQL to allow readers to continue reading\n consistent data during writer activity. Writers continue to use the\n compact pg_log transaction system. This is all performed without having\n to allocate a lock for every row like traditional database systems. So,\n basically, we no longer are restricted by simple table-level locking;\n we have something better than row-level locking.\n\nHot backups from pg_dump\n\n pg_dump takes advantage of the new MVCC features to give a consistant\n database dump/backup while the database stays online and available for\n queries.\n\nNumeric data type\n\n We now have a true numeric data type, with user-specified precision.\n\nTemporary tables\n\n Temporary tables are guaranteed to have unique names within a database\n session, and are destroyed on session exit.\n\nNew SQL features\n\n We now have CASE, INTERSECT, and EXCEPT statement support. We have new\n LIMIT/OFFSET, SET TRANSACTION ISOLATION LEVEL, SELECT ... FOR UPDATE,\n and an improved LOCK TABLE command.\n\nSpeedups\n\n We continue to speed up PostgreSQL, thanks to the variety of talents\n within our team. We have sped up memory allocation, optimization, table\n joins, and row transfer routines.\n\nPorts\n\n We continue to expand our port list, this time including WinNT/ix86 and\n NetBSD/arm32.\n\nInterfaces\n\n Most interfaces have new versions, and existing functionality has been\n improved.\n\nDocumentation\n\n New and updated material is present throughout the documentation. 
New\n FAQs have been contributed for SGI and AIX platforms. The Tutorial has\n introductory information on SQL from Stefan Simkovics. For the User's\n Guide, there are reference pages covering the postmaster and more\n utility programs, and a new appendix contains details on date/time\n behavior. The Administrator's Guide has a new chapter on\n troubleshooting from Tom Lane. And the Programmer's Guide has a\n description of query processing, also from Stefan, and details on\n obtaining the Postgres source tree via anonymous CVS and CVSup.\n\nMigration to v6.5\n\nA dump/restore using pg_dump is required for those wishing to migrate data\nfrom any previous release of Postgres. pg_upgrade can not be used to upgrade\nto this release because the on-disk structure of the tables has changed\ncompared to previous releases.\n\nThe new Multi-Version Concurrency Control (MVCC) features can give somewhat\ndifferent behaviors in multi-user environments. Read and understand the\nfollowing section to ensure that your existing applications will give you\nthe behavior you need.\n\nMulti-Version Concurrency Control\n\nBecause readers in 6.5 don't lock data, regardless of transaction isolation\nlevel, data read by one transaction can be overwritten by another. In other\nwords, if a row is returned by SELECT it doesn't mean that this row really\nexists at the time it is returned (i.e. sometime after the statement or\ntransaction began) nor that the row is protected from being deleted or\nupdated by concurrent transactions before the current transaction does a\ncommit or rollback.\n\nTo ensure the actual existence of a row and protect it against concurrent\nupdates one must use SELECT FOR UPDATE or an appropriate LOCK TABLE\nstatement. This should be taken into account when porting applications from\nprevious releases of Postgres and other environments.\n\nKeep the above in mind if you are using contrib/refint.* triggers for\nreferential integrity. Additional technics are required now. 
One way is to\nuse LOCK parent_table IN SHARE ROW EXCLUSIVE MODE command if a transaction\nis going to update/delete a primary key and use LOCK parent_table IN SHARE\nMODE command if a transaction is going to update/insert a foreign key.\n\n Note: Note that if you run a transaction in SERIALIZABLE mode then\n you must execute the LOCK commands above before execution of any\n DML statement (SELECT/INSERT/DELETE/UPDATE/FETCH/COPY_TO) in the\n transaction.\n\nThese inconveniences will disappear in the future when the ability to read\ndirty (uncommitted) data (regardless of isolation level) and true\nreferential integrity will be implemented.\n\nDetailed Change List\n\nBug Fixes\n---------\nFix text<->float8 and text<->float4 conversion functions(Thomas)\nFix for creating tables with mixed-case constraints(Billy)\nChange exp()/pow() behavior to generate error on underflow/overflow(Jan)\nFix bug in pg_dump -z\nMemory overrun cleanups(Tatsuo)\nFix for lo_import crash(Tatsuo)\nAdjust handling of data type names to suppress double quotes(Thomas)\nUse type coersion for matching columns and DEFAULT(Thomas)\nFix deadlock so it only checks once after one second of sleep(Bruce)\nFixes for aggregates and PL/pgsql(Hiroshi)\nFix for subquery crash(Vadim)\nFix for libpq function PQfnumber and case-insensitive names(Bahman Rafatjoo)\nFix for large object write-in-middle, no extra block, memory consumption(Tatsuo)\nFix for pg_dump -d or -D and quote special characters in INSERT\nRepair serious problems with dynahash(Tom)\nFix INET/CIDR portability problems\nFix problem with selectivity error in ALTER TABLE ADD COLUMN(Bruce)\nFix executor so mergejoin of different column types works(Tom)\nFix for Alpha OR selectivity bug\nFix OR index selectivity problem(Bruce)\nFix so \\d shows proper length for char()/varchar()(Ryan)\nFix tutorial code(Clark)\nImprove destroyuser checking(Oliver)\nFix for Kerberos(Rodney McDuff)\nFix for dropping database while dirty buffers(Bruce)\nFix so sequence nextval() can be case-sensitive(Bruce)\nFix !!= operator\nDrop buffers before destroying database files(Bruce)\nFix case where executor evaluates functions twice(Tatsuo)\nAllow sequence nextval actions to be case-sensitive(Bruce)\nFix optimizer indexing not working for negative numbers(Bruce)\nFix for memory leak in executor with fjIsNull\nFix for aggregate memory leaks(Erik Riedel)\nAllow username containing a dash GRANT permissions\nCleanup of NULL in inet types\nClean up system table bugs(Tom)\nFix problems of PAGER and \\? 
command(Masaaki Sakaida)\nReduce default multi-segment file size limit to 1GB(Peter)\nFix for dumping of CREATE OPERATOR(Tom)\nFix for backward scanning of cursors(Hiroshi Inoue)\nFix for COPY FROM STDIN when using \\i(Tom)\nFix for subselect is compared inside an expression(Jan)\nFix handling of error reporting while returning rows(Tom)\nFix problems with reference to array types(Tom,Jan)\nPrevent UPDATE SET oid(Jan)\nFix pg_dump so -t option can handle case-sensitive tablenames\nFixes for GROUP BY in special cases(Tom, Jan)\nFix for memory leak in failed queries(Tom)\nDEFAULT now supports mixed-case identifiers(Tom)\nFix for multi-segment uses of DROP/RENAME table, indexes(Ole Gjerde)\nDisable use of pg_dump with both -o and -d options(Bruce)\nAllow pg_dump to properly dump GROUP permissions(Bruce)\nFix GROUP BY in INSERT INTO table SELECT * FROM table2(Jan)\nFix for computations in views(Jan)\nFix for aggregates on array indexes(Tom)\nFix for DEFAULT handles single quotes in value requiring too many quotes\nFix security problem with non-super users importing/exporting large objects(Tom)\nRollback of transaction that creates table cleaned up properly(Tom)\nFix to allow long table and column names to generate proper serial names(Tom)\n\nEnhancements\n------------\nAdd \"vacuumdb\" utility\nSpeed up libpq by allocating memory better(Tom)\nEXPLAIN all indices used(Tom)\nImplement CASE, COALESCE, NULLIF expression(Thomas)\nNew pg_dump table output format(Constantin)\nAdd string min()/max() functions(Thomas)\nExtend new type coersion techniques to aggregates(Thomas)\nNew moddatetime contrib(Terry)\nUpdate to pgaccess 0.96(Constantin)\nAdd routines for single-byte \"char\" type(Thomas)\nImproved substr() function(Thomas)\nImproved multi-byte handling(Tatsuo)\nMulti-version concurrency control/MVCC(Vadim)\nNew Serialized mode(Vadim)\nFix for tables over 2gigs(Peter)\nNew SET TRANSACTION ISOLATION LEVEL(Vadim)\nNew LOCK TABLE IN ... MODE(Vadim)\nUpdate ODBC driver(Byron)\nNew NUMERIC data type(Jan)\nNew SELECT FOR UPDATE(Vadim)\nHandle \"NaN\" and \"Infinity\" for input values(Jan)\nImproved date/year handling(Thomas)\nImproved handling of backend connections(Magnus)\nNew options ELOG_TIMESTAMPS and USE_SYSLOG options for log files(Massimo)\nNew TCL_ARRAYS option(Massimo)\nNew INTERSECT and EXCEPT(Stefan)\nNew pg_index.indisprimary for primary key tracking(D'Arcy)\nNew pg_dump option to allow dropping of tables before creation(Brook)\nSpeedup of row output routines(Tom)\nNew READ COMMITTED isolation level(Vadim)\nNew TEMP tables/indexes(Bruce)\nPrevent sorting if result is already sorted(Jan)\nNew memory allocation optimization(Jan)\nAllow psql to do \\p\\g(Bruce)\nAllow multiple rule actions(Jan)\nAdded LIMIT/OFFSET functionality(Jan)\nImprove optimizer when joining a large number of tables(Bruce)\nNew intro to SQL from S. Simkovics' Master's Thesis (Stefan, Thomas)\nNew intro to backend processing from S. 
Simkovics' Master's Thesis (Stefan)\nImproved int8 support(Ryan Bradetich, Thomas, Tom)\nNew routines to convert between int8 and text/varchar types(Thomas)\nNew bushy plans, where meta-tables are joined(Bruce)\nEnable right-hand queries by default(Bruce)\nAllow reliable maximum number of backends to be set at configure time\n (--with-maxbackends and postmaster switch (-N backends))(Tom)\nGEQO default now 10 tables because of optimizer speedups(Tom)\nAllow NULL=Var for MS-SQL portability(Michael, Bruce)\nModify contrib check_primary_key() so either \"automatic\" or \"dependent\"(Anand)\nAllow psql \\d on a view show query(Ryan)\nSpeedup for LIKE(Bruce)\nEcpg fixes/features, see src/interfaces/ecpg/ChangeLog file(Michael)\nJDBC fixes/features, see src/interfaces/jdbc/CHANGELOG(Peter)\nMake % operator have precedence like /(Bruce)\nAdd new postgres -O option to allow system table structure changes(Bruce)\nUpdate contrib/pginterface/findoidjoins script(Tom)\nMajor speedup in vacuum of deleted rows with indexes(Vadim)\nAllow non-SQL functions to run different versions based on arguments(Tom)\nAdd -E option that shows actual queries sent by \\dt and friends(Masaaki Sakaida)\nAdd version number in startup banners for psql(Masaaki Sakaida)\nNew contrib/vacuumlo removes large objects not referenced(Peter)\nNew initialization for table sizes so non-vacuumed tables perform better(Tom)\nImprove error messages when a connection is rejected(Tom)\nSupport for arrays of char() and varchar() fields(Massimo)\nOverhaul of hash code to increase reliability and performance(Tom)\nUpdate to PyGreSQL 2.4(D'Arcy)\nChanged debug options so -d4 and -d5 produce different node displays(Jan)\nNew pg_options: pretty_plan, pretty_parse, pretty_rewritten(Jan)\nBetter optimization statistics for system table access(Tom)\nBetter handling of non-default block sizes(Massimo)\nImprove GEQO optimizer memory consumption(Tom)\nUNION now suppports ORDER BY of columns not in target list(Jan)\nMajor libpq++ improvements(Vince Vielhaber)\npg_dump now uses -z(ACL's) as default(Bruce)\nbackend cache, memory speedups(Tom)\nhave pg_dump do everything in one snapshot transaction(Vadim)\nfix for large object memory leakage, fix for pg_dumping(Tom)\nINET type now respects netmask for comparisons\nMake VACUUM ANALYZE only use a readlock(Vadim)\nAllow VIEWs on UNIONS(Jan)\npg_dump now can generate consistent snapshots on active databases(Vadim)\n\nSource Tree Changes\n-------------------\nImprove port matching(Tom)\nPortability fixes for SunOS\nAdd NT/Win32 backend port and enable dynamic loading(Magnus and Daniel Horak)\nNew port to Cobalt Qube(Mips) running Linux(Tatsuo)\nPort to NetBSD/m68k(Mr. Mutsuki Nakajima)\nPort to NetBSD/sun3(Mr. Mutsuki Nakajima)\nPort to NetBSD/macppc(Toshimi Aoki)\nFix for tcl/tk configuration(Vince)\nRemoved CURRENT keyword for rule queries(Jan)\nNT dynamic loading now works(Daniel Horak)\nAdd ARM32 support(Andrew McMurry)\nBetter support for HPUX 11 and Unixware\nImprove file handling to be more uniform, prevent file descriptor leak(Tom)\nNew install commands for plpgsql(Jan)\n\n\n ------------------------------------------------------------------------\nRelease 6.5.1 Release 6.4.2\n Release Notes\n ------------------------------------------------------------------------\n\nRelease 6.4.2\n\nThe 6.4.1 release was improperly packaged. 
This also has one additional bug\nfix.\n\nMigration to v6.4.2\n\nA dump/restore is not required for those running 6.4.*.\n\nDetailed Change List\n\nFix for datetime constant problem on some platforms(Thomas)\n\n ------------------------------------------------------------------------\nRelease 6.5 Release 6.4.1\n Release Notes\n ------------------------------------------------------------------------\n\nRelease 6.4.1\n\nThis is basically a cleanup release for 6.4. We have fixed a variety of\nproblems reported by 6.4 users.\n\nMigration to v6.4.1\n\nA dump/restore is not required for those running 6.4.\n\nDetailed Change List\n\nAdd pg_dump -N flag to force double quotes around identifiers. This is\n the default(Thomas)\nFix for NOT in where clause causing crash(Bruce)\nEXPLAIN VERBOSE coredump fix(Vadim)\nFix shared-library problems on Linux\nFix test for table existance to allow mixed-case and whitespace in\n the table name(Thomas)\nFix a couple of pg_dump bugs\nConfigure matches template/.similar entries better(Tom)\nChange builtin function names from SPI_* to spi_*\nOR WHERE clause fix(Vadim)\nFixes for mixed-case table names(Billy)\ncontrib/linux/postgres.init.csh/sh fix(Thomas)\nlibpq memory overrun fix\nSunOS fixes(Tom)\nChange exp() behavior to generate error on underflow(Thomas)\npg_dump fixes for memory leak, inheritance constraints, layout change\nupdate pgaccess to 0.93\nFix prototype for 64-bit platforms\nMulti-byte fixes(Tatsuo)\nNew ecpg man page\nFix memory overruns(Tatsuo)\nFix for lo_import() crash(Bruce)\nBetter search for install program(Tom)\nTimezone fixes(Tom)\nHPUX fixes(Tom)\nUse implicit type coersion for matching DEFAULT values(Thomas)\nAdd routines to help with single-byte (internal) character type(Thomas)\nCompilation of libpq for Win32 fixes(Magnus)\nUpgrade to PyGreSQL 2.2(D'Arcy)\n\n ------------------------------------------------------------------------\nRelease 6.4.2 Release 6.4\n Release Notes\n ------------------------------------------------------------------------\n\nRelease 6.4\n\nThere are many new features and improvements in this release. Thanks to our\ndevelopers and maintainers, nearly every aspect of the system has received\nsome attention since the previous release. Here is a brief, incomplete\nsummary:\n\n * Views and rules are now functional thanks to extensive new code in the\n rewrite rules system from Jan Wieck. He also wrote a chapter on it for\n the Programmer's Guide.\n\n * Jan also contributed a second procedural language, PL/pgSQL, to go with\n the original PL/pgTCL procedural language he contributed last release.\n\n * We have optional multiple-byte character set support from Tatsuo Iishi\n to complement our existing locale support.\n\n * Client/server communications has been cleaned up, with better support\n for asynchronous messages and interrupts thanks to Tom Lane.\n\n * The parser will now perform automatic type coersion to match arguments\n to available operators and functions, and to match columns and\n expressions with target columns. This uses a generic mechanism which\n supports the type extensibility features of Postgres. There is a new\n chapter in the User's Guide which covers this topic.\n\n * Three new data types have been added. Two types, inet and cidr, support\n various forms of IP network, subnet, and machine addressing. There is\n now an 8-byte integer type available on some platforms. See the chapter\n on data types in the User's Guide for details. 
A fourth type, serial,\n is now supported by the parser as an amalgam of the int4 type, a\n sequence, and a unique index.\n\n * Several more SQL92-compatible syntax features have been added,\n including INSERT DEFAULT VALUES\n\n * The automatic configuration and installation system has received some\n attention, and should be more robust for more platforms than it has\n ever been.\n\nMigration to v6.4\n\nA dump/restore using pg_dump or pg_dumpall is required for those wishing to\nmigrate data from any previous release of Postgres.\n\nDetailed Change List\n\nBug Fixes\n---------\nFix for a tiny memory leak in PQsetdb/PQfinish(Bryan)\nRemove char2-16 data types, use char/varchar(Darren)\nPqfn not handles a NOTICE message(Anders)\nReduced busywaiting overhead for spinlocks with many backends (dg)\nStuck spinlock detection (dg)\nFix up \"ISO-style\" timespan decoding and encoding(Thomas)\nFix problem with table drop after rollback of transaction(Vadim)\nChange error message and remove non-functional update message(Vadim)\nFix for COPY array checking\nFix for SELECT 1 UNION SELECT NULL\nFix for buffer leaks in large object calls(Pascal)\nChange owner from oid to int4 type(Bruce)\nFix a bug in the oracle compatibility functions btrim() ltrim() and rtrim()\nFix for shared invalidation cache overflow(Massimo)\nPrevent file descriptor leaks in failed COPY's(Bruce)\nFix memory leak in libpgtcl's pg_select(Constantin)\nFix problems with username/passwords over 8 characters(Tom)\nFix problems with handling of asynchronous NOTIFY in backend(Tom)\nFix of many bad system table entries(Tom)\n\nEnhancements\n------------\nUpgrade ecpg and ecpglib,see src/interfaces/ecpc/ChangeLog(Michael)\nShow the index used in an EXPLAIN(Zeugswetter)\nEXPLAIN invokes rule system and shows plan(s) for rewritten queries(Jan)\nMulti-byte awareness of many data types and functions, via configure(Tatsuo)\nNew configure --with-mb option(Tatsuo)\nNew initdb --pgencoding option(Tatsuo)\nNew createdb -E multibyte option(Tatsuo)\nSelect version(); now returns PostgreSQL version(Jeroen)\nLibpq now allows asynchronous clients(Tom)\nAllow cancel from client of backend query(Tom)\nPsql now cancels query with Control-C(Tom)\nLibpq users need not issue dummy queries to get NOTIFY messages(Tom)\nNOTIFY now sends sender's PID, so you can tell whether it was your own(Tom)\nPGresult struct now includes associated error message, if any(Tom)\nDefine \"tz_hour\" and \"tz_minute\" arguments to date_part()(Thomas)\nAdd routines to convert between varchar and bpchar(Thomas)\nAdd routines to allow sizing of varchar and bpchar into target columns(Thomas)\nAdd bit flags to support timezonehour and minute in data retrieval(Thomas)\nAllow more variations on valid floating point numbers (e.g. 
\".1\", \"1e6\")(Thomas)\nFixes for unary minus parsing with leading spaces(Thomas)\nImplement TIMEZONE_HOUR, TIMEZONE_MINUTE per SQL92 specs(Thomas)\nCheck for and properly ignore FOREIGN KEY column constraints(Thomas)\nDefine USER as synonym for CURRENT_USER per SQL92 specs(Thomas)\nEnable HAVING clause but no fixes elsewhere yet.\nMake \"char\" type a synonym for \"char(1)\" (actually implemented as bpchar)(Thomas)\nSave string type if specified for DEFAULT clause handling(Thomas)\nCoerce operations involving different data types(Thomas)\nAllow some index use for columns of different types(Thomas)\nAdd capabilities for automatic type conversion(Thomas)\nCleanups for large objects, so file is truncated on open(Peter)\nReadline cleanups(Tom)\nAllow psql \\f \\ to make spaces as delimiter(Bruce)\nPass pg_attribute.atttypmod to the frontend for column field lengths(Tom,Bruce)\nMsql compatibility library in /contrib(Aldrin)\nRemove the requirement that ORDER/GROUP BY clause identifiers be\nincluded in the target list(David)\nConvert columns to match columns in UNION clauses(Thomas)\nRemove fork()/exec() and only do fork()(Bruce)\nJdbc cleanups(Peter)\nShow backend status on ps command line(only works on some platforms)(Bruce)\nPg_hba.conf now has a sameuser option in the database field\nMake lo_unlink take oid param, not int4\nNew DISABLE_COMPLEX_MACRO for compilers that can't handle our macros(Bruce)\nLibpgtcl now handles NOTIFY as a Tcl event, need not send dummy queries(Tom)\nlibpgtcl cleanups(Tom)\nAdd -error option to libpgtcl's pg_result command(Tom)\nNew locale patch, see docs/README/locale(Oleg)\nFix for pg_dump so CONSTRAINT and CHECK syntax is correct(ccb)\nNew contrib/lo code for large object orphan removal(Peter)\nNew psql command \"SET CLIENT_ENCODING TO 'encoding'\" for multi-bytes\nfeature, see /doc/README.mb(Tatsuo)\n/contrib/noupdate code to revoke update permission on a column\nLibpq can now be compiled on win32(Magnus)\nAdd PQsetdbLogin() in libpq\nNew 8-byte integer type, checked by configure for OS support(Thomas)\nBetter support for quoted table/column names(Thomas)\nSurround table and column names with double-quotes in pg_dump(Thomas)\nPQreset() now works with passwords(Tom)\nHandle case of GROUP BY target list column number out of range(David)\nAllow UNION in subselects\nAdd auto-size to screen to \\d? commands(Bruce)\nUse UNION to show all \\d? results in one query(Bruce)\nAdd \\d? 
field search feature(Bruce)\nPg_dump issues fewer \\connect requests(Tom)\nMake pg_dump -z flag work better, document it in manual page(Tom)\nAdd HAVING clause with full support for subselects and unions(Stephan)\nFull text indexing routines in contrib/fulltextindex(Maarten)\nTransaction ids now stored in shared memory(Vadim)\nNew PGCLIENTENCODING when issuing COPY command(Tatsuo)\nSupport for SQL92 syntax \"SET NAMES\"(Tatsuo)\nSupport for LATIN2-5(Tatsuo)\nAdd UNICODE regression test case(Tatsuo)\nLock manager cleanup, new locking modes for LLL(Vadim)\nAllow index use with OR clauses(Bruce)\nAllows \"SELECT NULL ORDER BY 1;\"\nExplain VERBOSE prints the plan, and now pretty-prints the plan to\nthe postmaster log file(Bruce)\nAdd Indices display to \\d command(Bruce)\nAllow GROUP BY on functions(David)\nNew pg_class.relkind for large objects(Bruce)\nNew way to send libpq NOTICE messages to a different location(Tom)\nNew \\w write command to psql(Bruce)\nNew /contrib/findoidjoins scans oid columns to find join relationships(Bruce)\nAllow binary-compatible indices to be considered when checking for valid\nindices for restriction clauses containing a constant(Thomas)\nNew ISBN/ISSN code in /contrib/isbn_issn\nAllow NOT LIKE, IN, NOT IN, BETWEEN, and NOT BETWEEN constraint(Thomas)\nNew rewrite system fixes many problems with rules and views(Jan)\n * Rules on relations work\n * Event qualifications on insert/update/delete work\n * New OLD variable to reference CURRENT, CURRENT will be remove in future\n * Update rules can reference NEW and OLD in rule qualifications/actions\n * Insert/update/delete rules on views work\n * Multiple rule actions are now supported, surrounded by parentheses\n * Regular users can create views/rules on tables they have RULE permits\n * Rules and views inherit the permissions on the creator\n * No rules at the column level\n * No UPDATE NEW/OLD rules\n * New pg_tables, pg_indexes, pg_rules and pg_views system views\n * Only a single action on SELECT rules\n * Total rewrite overhaul, perhaps for 6.5\n * handle subselects\n * handle aggregates on views\n * handle insert into select from view works\nSystem indexes are now multi-key(Bruce)\nOidint2, oidint4, and oidname types are removed(Bruce)\nUse system cache for more system table lookups(Bruce)\nNew backend programming language PL/pgSQL in backend/pl(Jan)\nNew SERIAL data type, auto-creates sequence/index(Thomas)\nEnable assert checking without a recompile(Massimo)\nUser lock enhancements(Massimo)\nNew setval() command to set sequence value(Massimo)\nAuto-remove unix socket file on startup if no postmaster running(Massimo)\nConditional trace package(Massimo)\nNew UNLISTEN command(Massimo)\nPsql and libpq now compile under win32 using win32.mak(Magnus)\nLo_read no longer stores trailing NULL(Bruce)\nIdentifiers are now truncated to 31 characters internally(Bruce)\nCreateuser options now availble on the command line\nCode for 64-bit integer supported added, configure tested, int8 type(Thomas)\nPrevent file descriptor leaf from failed COPY(Bruce)\nNew pg_upgrade command(Bruce)\nUpdated /contrib directories(Massimo)\nNew CREATE TABLE DEFAULT VALUES statement available(Thomas)\nNew INSERT INTO TABLE DEFAULT VALUES statement available(Thomas)\nNew DECLARE and FETCH feature(Thomas)\nlibpq's internal structures now not exported(Tom)\nAllow up to 8 key indexes(Bruce)\nRemove ARCHIVE keyword, that is no longer used(Thomas)\npg_dump -n flag to supress quotes around indentifiers\ndisable system columns for views(Jan)\nnew INET 
and CIDR types for network addresses(TomH, Paul)\nno more double quotes in psql output\npg_dump now dumps views(Terry)\nnew SET QUERY_LIMIT(Tatsuo,Jan)\n\nSource Tree Changes\n-------------------\n/contrib cleanup(Jun)\nInline some small functions called for every row(Bruce)\nAlpha/linux fixes\nHp/UX cleanups(Tom)\nMulti-byte regression tests(Soonmyung.)\nRemove --disabled options from configure\nDefine PGDOC to use POSTGRESDIR by default\nMake regression optional\nRemove extra braces code to pgindent(Bruce)\nAdd bsdi shared library support(Bruce)\nNew --without-CXX support configure option(Brook)\nNew FAQ_CVS\nUpdate backend flowchart in tools/backend(Bruce)\nChange atttypmod from int16 to int32(Bruce, Tom)\nGetrusage() fix for platforms that do not have it(Tom)\nAdd PQconnectdb, PGUSER, PGPASSWORD to libpq man page\nNS32K platform fixes(Phil Nelson, John Buller)\nSco 7/UnixWare 2.x fixes(Billy,others)\nSparc/Solaris 2.5 fixes(Ryan)\nPgbuiltin.3 is obsolete, move to doc files(Thomas)\nEven more documention(Thomas)\nNextstep support(Jacek)\nAix support(David)\npginterface manual page(Bruce)\nshared libraries all have version numbers\nmerged all OS-specific shared library defines into one file\nsmarter TCL/TK configuration checking(Billy)\nsmarter perl configuration(Brook)\nconfigure uses supplied install-sh if no install script found(Tom)\nnew Makefile.shlib for shared library configuration(Tom)\n\n ------------------------------------------------------------------------\nRelease 6.4.1 Release 6.3.2\n Release Notes\n ------------------------------------------------------------------------\n\nRelease 6.3.2\n\nThis is a bugfix release for 6.3.x. Refer to the release notes for v6.3 for\na more complete summary of new features.\n\nSummary:\n\n * Repairs automatic configuration support for some platforms, including\n Linux, from breakage inadvertently introduced in v6.3.1.\n\n * Correctly handles function calls on the left side of BETWEEN and LIKE\n clauses.\n\nA dump/restore is NOT required for those running 6.3 or 6.3.1. A 'make\ndistclean', 'make', and 'make install' is all that is required. This last\nstep should be performed while the postmaster is not running. 
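For example, assuming the source tree was unpacked under /usr/src/pgsql (an
illustrative path; substitute your own build location), the rebuild sequence
would look like this, with the postmaster stopped before the final step:

    % cd /usr/src/pgsql/src
    % make distclean
    % make
    % make install
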
You should\nre-link any custom applications that use Postgres libraries.\n\nFor upgrades from pre-v6.3 installations, refer to the installation and\nmigration instructions for v6.3.\n\nDetailed Change List\n\nChanges\n-------\nConfigure detection improvements for tcl/tk(Brook Milligan, Alvin)\nManual page improvements(Bruce)\nBETWEEN and LIKE fix(Thomas)\nfix for psql \\connect used by pg_dump(Oliver Elphick)\nNew odbc driver\npgaccess, version 0.86\nqsort removed, now uses libc version, cleanups(Jeroen)\nfix for buffer over-runs detected(Maurice Gittens)\nfix for buffer overrun in libpgtcl(Randy Kunkee)\nfix for UNION with DISTINCT or ORDER BY(Bruce)\ngettimeofday configure check(Doug Winterburn)\nFix \"indexes not used\" bug(Vadim)\ndocs additions(Thomas)\nFix for backend memory leak(Bruce)\nlibreadline cleanup(Erwan MAS)\nRemove DISTDIR(Bruce)\nMakefile dependency cleanup(Jeroen van Vianen)\nASSERT fixes(Bruce)\n\n ------------------------------------------------------------------------\nRelease 6.4 Release 6.3.1\n Release Notes\n ------------------------------------------------------------------------\n\nRelease 6.3.1\n\nSummary:\n\n * Additional support for multi-byte character sets.\n\n * Repair byte ordering for mixed-endian clients and servers.\n\n * Minor updates to allowed SQL syntax.\n\n * Improvements to the configuration autodetection for installation.\n\nA dump/restore is NOT required for those running 6.3. A 'make distclean',\n'make', and 'make install' is all that is required. This last step should be\nperformed while the postmaster is not running. You should re-link any custom\napplications that use Postgres libraries.\n\nFor upgrades from pre-v6.3 installations, refer to the installation and\nmigration instructions for v6.3.\n\nDetailed Change List\n\nChanges\n-------\necpg cleanup/fixes, now version 1.1(Michael Meskes)\npg_user cleanup(Bruce)\nlarge object fix for pg_dump and tclsh (alvin)\nLIKE fix for multiple adjacent underscores\nfix for redefining builtin functions(Thomas)\nultrix4 cleanup\nupgrade to pg_access 0.83\nupdated CLUSTER manual page\nmulti-byte character set support, see doc/README.mb(Tatsuo)\nconfigure --with-pgport fix\npg_ident fix\nbig-endian fix for backend communications(Kataoka)\nSUBSTR() and substring() fix(Jan)\nseveral jdbc fixes(Peter)\nlibpgtcl improvements, see libptcl/README(Randy Kunkee)\nFix for \"Datasize = 0\" error(Vadim)\nPrevent \\do from wrapping(Bruce)\nRemove duplicate Russian character set entries\nSunos4 cleanup\nAllow optional TABLE keyword in LOCK and SELECT INTO(Thomas)\nCREATE SEQUENCE options to allow a negative integer(Thomas)\nAdd \"PASSWORD\" as an allowed column identifier(Thomas)\nAdd checks for UNION target fields(Bruce)\nFix Alpha port(Dwayne Bailey)\nFix for text arrays containing quotes(Doug Gibson)\nSolaris compile fix(Albert Chin-A-Young)\nBetter identify tcl and tk libs and includes(Bruce)\n\n ------------------------------------------------------------------------\nRelease 6.3.2 Release 6.3\n Release Notes\n ------------------------------------------------------------------------\n\nRelease 6.3\n\nThere are many new features and improvements in this release. Here is a\nbrief, incomplete summary:\n\n * Many new SQL features, including full SQL92 subselect capability\n (everything is here but target-list subselects).\n\n * Support for client-side environment variables to specify time zone and\n date style.\n\n * Socket interface for client/server connection. 
This is the default now\n so you may need to start postmaster with the \"-i\" flag.\n\n * Better password authorization mechanisms. Default table permissions\n have changed.\n\n * Old-style \"time travel\" has been removed. Performance has been\n improved.\n\n Note: Bruce Momjian wrote the following notes to introduce the new\n release.\n\nThere are some general 6.3 issues that I want to mention. These are only the\nbig items that can not be described in one sentence. A review of the\ndetailed changes list is still needed.\n\nFirst, we now have subselects. Now that we have them, I would like to\nmention that without subselects, SQL is a very limited language. Subselects\nare a major feature, and you should review your code for places where\nsubselects provide a better solution for your queries. I think you will find\nthat there are more uses for subselects than you may think. Vadim has put us\non the big SQL map with subselects, and fully functional ones too. The only\nthing you can't do with subselects is to use them in the target list.\n\nSecond, 6.3 uses unix domain sockets rather than TCP/IP by default. To\nenable connections from other machines, you have to use the new postmaster\n-i option, and of course edit pg_hba.conf. Also, for this reason, the format\nof pg_hba.conf has changed.\n\nThird, char() fields will now allow faster access than varchar() or text.\nSpecifically, the text and varchar() have a penalty for access to any\ncolumns after the first column of this type. char() used to also have this\naccess penalty, but it no longer does. This may suggest that you redesign\nsome of your tables, especially if you have short character columns that you\nhave defined as varchar() or text. This and other changes make 6.3 even\nfaster than earlier releases.\n\nWe now have passwords definable independent of any Unix file. There are new\nSQL USER commands. See the pg_hba.conf manual page for more information.\nThere is a new table, pg_shadow, which is used to store user information and\nuser passwords, and it by default only SELECT-able by the postgres\nsuper-user. pg_user is now a view of pg_shadow, and is SELECT-able by\nPUBLIC. You should keep using pg_user in your application without changes.\n\nUser-created tables now no longer have SELECT permission to PUBLIC by\ndefault. This was done because the ANSI standard requires it. You can of\ncourse GRANT any permissions you want after the table is created. System\ntables continue to be SELECT-able by PUBLIC.\n\nWe also have real deadlock detection code. No more sixty-second timeouts.\nAnd the new locking code implements a FIFO better, so there should be less\nresource starvation during heavy use.\n\nMany complaints have been made about inadequate documenation in previous\nreleases. Thomas has put much effort into many new manuals for this release.\nCheck out the doc/ directory.\n\nFor performance reasons, time travel is gone, but can be implemented using\ntriggers (see pgsql/contrib/spi/README). Please check out the new \\d command\nfor types, operators, etc. Also, views have their own permissions now, not\nbased on the underlying tables, so permissions on them have to be set\nseparately. Check /pgsql/interfaces for some new ways to talk to Postgres.\n\nThis is the first release that really required an explanation for existing\nusers. 
In many ways, this was necessary because the new release removes many\nlimitations, and the work-arounds people were using are no longer needed.\n\nMigration to v6.3\n\nA dump/restore using pg_dump or pg_dumpall is required for those wishing to\nmigrate data from any previous release of Postgres.\n\nDetailed Change List\n\nBug Fixes\n---------\nFix binary cursors broken by MOVE implementation(Vadim)\nFix for tcl library crash(Jan)\nFix for array handling, from Gerhard Hintermayer\nFix acl error, and remove duplicate pqtrace(Bruce)\nFix psql \\e for empty file(Bruce)\nFix for textcat on varchar() fields(Bruce)\nFix for DBT Sendproc (Zeugswetter Andres)\nFix vacuum analyze syntax problem(Bruce)\nFix for international identifiers(Tatsuo)\nFix aggregates on inherited tables(Bruce)\nFix substr() for out-of-bounds data\nFix for select 1=1 or 2=2, select 1=1 and 2=2, and select sum(2+2)(Bruce)\nFix notty output to show status result. -q option still turns it off(Bruce)\nFix for count(*), aggs with views and multiple tables and sum(3)(Bruce)\nFix cluster(Bruce)\nFix for PQtrace start/stop several times(Bruce)\nFix a variety of locking problems like newer lock waiters getting\n lock before older waiters, and having readlock people not share\n locks if a writer is waiting for a lock, and waiting writers not\n getting priority over waiting readers(Bruce)\nFix crashes in psql when executing queries from external files(James)\nFix problem with multiple order by columns, with the first one having\n NULL values(Jeroen)\nUse correct hash table support functions for float8 and int4(Thomas)\nRe-enable JOIN= option in CREATE OPERATOR statement (Thomas)\nChange precedence for boolean operators to match expected behavior(Thomas)\nGenerate elog(ERROR) on over-large integer(Bruce)\nAllow multiple-argument functions in constraint clauses(Thomas)\nCheck boolean input literals for 'true','false','yes','no','1','0'\n and throw elog(ERROR) if unrecognized(Thomas)\nMajor large objects fix\nFix for GROUP BY showing duplicates(Vadim)\nFix for index scans in MergeJion(Vadim)\n\nEnhancements\n------------\nSubselects with EXISTS, IN, ALL, ANY keywords (Vadim, Bruce, Thomas)\nNew User Manual(Thomas, others)\nSpeedup by inlining some frequently-called functions\nReal deadlock detection, no more timeouts(Bruce)\nAdd SQL92 \"constants\" CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP,\n CURRENT_USER(Thomas)\nModify constraint syntax to be SQL92-compliant(Thomas)\nImplement SQL92 PRIMARY KEY and UNIQUE clauses using indices(Thomas)\nRecognize SQL92 syntax for FOREIGN KEY. Throw elog notice(Thomas)\nAllow NOT NULL UNIQUE constraint clause (each allowed separately before)(Thomas)\nAllow Postgres-style casting (\"::\") of non-constants(Thomas)\nAdd support for SQL3 TRUE and FALSE boolean constants(Thomas)\nSupport SQL92 syntax for IS TRUE/IS FALSE/IS NOT TRUE/IS NOT FALSE(Thomas)\nAllow shorter strings for boolean literals (e.g. \"t\", \"tr\", \"tru\")(Thomas)\nAllow SQL92 delimited identifiers(Thomas)\nImplement SQL92 binary and hexadecimal string decoding (b'10' and x'1F')(Thomas)\nSupport SQL92 syntax for type coercion of literal strings\n (e.g. 
\"DATETIME 'now'\")(Thomas)\nAdd conversions for int2, int4, and OID types to and from text(Thomas)\nUse shared lock when building indices(Vadim)\nFree memory allocated for an user query inside transaction block after\n this query is done, was turned off in <= 6.2.1(Vadim)\nNew SQL statement CREATE PROCEDURAL LANGUAGE(Jan)\nNew Postgres Procedural Language (PL) backend interface(Jan)\nRename pg_dump -H option to -h(Bruce)\nAdd Java support for passwords, European dates(Peter)\nUse indices for LIKE and ~, !~ operations(Bruce)\nAdd hash functions for datetime and timespan(Thomas)\nTime Travel removed(Vadim, Bruce)\nAdd paging for \\d and \\z, and fix \\i(Bruce)\nAdd Unix domain socket support to backend and to frontend library(Goran)\nImplement CREATE DATABASE/WITH LOCATION and initlocation utility(Thomas)\nAllow more SQL92 and/or Postgres reserved words as column identifiers(Thomas)\nAugment support for SQL92 SET TIME ZONE...(Thomas)\nSET/SHOW/RESET TIME ZONE uses TZ backend environment variable(Thomas)\nImplement SET keyword = DEFAULT and SET TIME ZONE DEFAULT(Thomas)\nEnable SET TIME ZONE using TZ environment variable(Thomas)\nAdd PGDATESTYLE environment variable to frontend and backend initialization(Thomas)\nAdd PGTZ, PGCOSTHEAP, PGCOSTINDEX, PGRPLANS, PGGEQO\n frontend library initialization environment variables(Thomas)\nRegression tests time zone automatically set with \"setenv PGTZ PST8PDT\"(Thomas)\nAdd pg_description table for info on tables, columns, operators, types, and\n aggregates(Bruce)\nIncrease 16 char limit on system table/index names to 32 characters(Bruce)\nRename system indices(Bruce)\nAdd 'GERMAN' option to SET DATESTYLE(Thomas)\nDefine an \"ISO-style\" timespan output format with \"hh:mm:ss\" fields(Thomas)\nAllow fractional values for delta times (e.g. 
'2.5 days')(Thomas)\nValidate numeric input more carefully for delta times(Thomas)\nImplement day of year as possible input to date_part()(Thomas)\nDefine timespan_finite() and text_timespan() functions(Thomas)\nRemove archive stuff(Bruce)\nAllow for a pg_password authentication database that is separate from\n the system password file(Todd)\nDump ACLs, GRANT, REVOKE permissions(Matt)\nDefine text, varchar, and bpchar string length functions(Thomas)\nFix Query handling for inheritance, and cost computations(Bruce)\nImplement CREATE TABLE/AS SELECT (alternative to SELECT/INTO)(Thomas)\nAllow NOT, IS NULL, IS NOT NULL in constraints(Thomas)\nImplement UNIONs for SELECT(Bruce)\nAdd UNION, GROUP, DISTINCT to INSERT(Bruce)\nvarchar() stores only necessary bytes on disk(Bruce)\nFix for BLOBs(Peter)\nMega-Patch for JDBC...see README_6.3 for list of changes(Peter)\nRemove unused \"option\" from PQconnectdb()\nNew LOCK command and lock manual page describing deadlocks(Bruce)\nAdd new psql \\da, \\dd, \\df, \\do, \\dS, and \\dT commands(Bruce)\nEnhance psql \\z to show sequences(Bruce)\nShow NOT NULL and DEFAULT in psql \\d table(Bruce)\nNew psql .psqlrc file startup(Andrew)\nModify sample startup script in contrib/linux to show syslog(Thomas)\nNew types for IP and MAC addresses in contrib/ip_and_mac(TomH)\nUnix system time conversions with date/time types in contrib/unixdate(Thomas)\nUpdate of contrib stuff(Massimo)\nAdd Unix socket support to DBD::Pg(Goran)\nNew python interface (PyGreSQL 2.0)(D'Arcy)\nNew frontend/backend protocol has a version number, network byte order(Phil)\nSecurity features in pg_hba.conf enhanced and documented, many cleanups(Phil)\nCHAR() now faster access than VARCHAR() or TEXT\necpg embedded SQL preprocessor\nReduce system column overhead(Vadmin)\nRemove pg_time table(Vadim)\nAdd pg_type attribute to identify types that need length (bpchar, varchar)\nAdd report of offending line when COPY command fails\nAllow VIEW permissions to be set separately from the underlying tables.\n For security, use GRANT/REVOKE on views as appropriate(Jan)\nTables now have no default GRANT SELECT TO PUBLIC. You must\n explicitly grant such permissions.\nClean up tutorial examples(Darren)\n\nSource Tree Changes\n-------------------\nAdd new html development tools, and flow chart in /tools/backend\nFix for SCO compiles\nStratus computer port Robert Gillies\nAdded support for shlib for BSD44_derived & i386_solaris\nMake configure more automated(Brook)\nAdd script to check regression test results\nBreak parser functions into smaller files, group together(Bruce)\nRename heap_create to heap_create_and_catalog, rename heap_creatr\n to heap_create()(Bruce)\nSparc/Linux patch for locking(TomS)\nRemove PORTNAME and reorganize port-specific stuff(Marc)\nAdd optimizer README file(Bruce)\nRemove some recursion in optimizer and clean up some code there(Bruce)\nFix for NetBSD locking(Henry)\nFix for libptcl make(Tatsuo)\nAIX patch(Darren)\nChange IS TRUE, IS FALSE, ... 
to expressions using \"=\" rather than\n function calls to istrue() or isfalse() to allow optimization(Thomas)\nVarious fixes NetBSD/Sparc related(TomH)\nAlpha linux locking(Travis,Ryan)\nChange elog(WARN) to elog(ERROR)(Bruce)\nFAQ for FreeBSD(Marc)\nBring in the PostODBC source tree as part of our standard distribution(Marc)\nA minor patch for HP/UX 10 vs 9(Stan)\nNew pg_attribute.atttypmod for type-specific info like varchar length(Bruce)\nUnixware patches(Billy)\nNew i386 'lock' for spin lock asm(Billy)\nSupport for multiplexed backends is removed\nStart an OpenBSD port\nStart an AUX port\nStart a Cygnus port\nAdd string functions to regression suite(Thomas)\nExpand a few function names formerly truncated to 16 characters(Thomas)\nRemove un-needed malloc() calls and replace with palloc()(Bruce)\n\n ------------------------------------------------------------------------\nRelease 6.3.1 Release 6.2.1\n Release Notes\n ------------------------------------------------------------------------\n\nRelease 6.2.1\n\nv6.2.1 is a bug-fix and usability release on v6.2.\n\nSummary:\n\n * Allow strings to span lines, per SQL92.\n\n * Include example trigger function for inserting user names on table\n updates.\n\nThis is a minor bug-fix release on v6.2. For upgrades from pre-v6.2 systems,\na full dump/reload is required. Refer to the v6.2 release notes for\ninstructions.\n\nMigration from v6.2 to v6.2.1\n\nThis is a minor bug-fix release. A dump/reload is not required from v6.2,\nbut is required from any release prior to v6.2.\n\nIn upgrading from v6.2, if you choose to dump/reload you will find that\navg(money) is now calculated correctly. All other bug fixes take effect upon\nupdating the executables.\n\nAnother way to avoid dump/reload is to use the following SQL command from\npsql to update the existing system table:\n\n update pg_aggregate set aggfinalfn = 'cash_div_flt8'\n where aggname = 'avg' and aggbasetype = 790;\n\nThis will need to be done to every existing database, including template1.\n\nDetailed Change List\n\nChanges in this release\n-----------------------\nAllow TIME and TYPE column names(Thomas)\nAllow larger range of true/false as boolean values(Thomas)\nSupport output of \"now\" and \"current\"(Thomas)\nHandle DEFAULT with INSERT of NULL properly(Vadim)\nFix for relation reference counts problem in buffer manager(Vadim)\nAllow strings to span lines, like ANSI(Thomas)\nFix for backward cursor with ORDER BY(Vadim)\nFix avg(cash) computation(Thomas)\nFix for specifying a column twice in ORDER/GROUP BY(Vadim)\nDocumented new libpq function to return affected rows, PQcmdTuples(Bruce)\nTrigger function for inserting user names for INSERT/UPDATE(Brook Milligan)\n\n ------------------------------------------------------------------------\nRelease 6.3 Release 6.2\n Release Notes\n ------------------------------------------------------------------------\n\nRelease 6.2\n\nA dump/restore is required for those wishing to migrate data from previous\nreleases of Postgres.\n\nMigration from v6.1 to v6.2\n\nThis migration requires a complete dump of the 6.1 database and a restore of\nthe database in 6.2.\n\nNote that the pg_dump and pg_dumpall utility from 6.2 should be used to dump\nthe 6.1 database.\n\nMigration from v1.x to v6.2\n\nThose migrating from earlier 1.* releases should first upgrade to 1.09\nbecause the COPY output format was improved from the 1.02 release.\n\nDetailed Change List\n\nBug Fixes\n---------\nFix problems with pg_dump for inheritance, sequences, archive 
tables(Bruce)\nFix compile errors on overflow due to shifts, unsigned, and bad prototypes\n from Solaris(Diab Jerius)\nFix bugs in geometric line arithmetic (bad intersection calculations)(Thomas)\nCheck for geometric intersections at endpoints to avoid rounding ugliness(Thomas)\nCatch non-functional delete attempts(Vadim)\nChange time function names to be more consistent(Michael Reifenberg)\nCheck for zero divides(Michael Reifenberg)\nFix very old bug which made tuples changed/inserted by a commnd\n visible to the command itself (so we had multiple update of\n updated tuples, etc)(Vadim)\nFix for SELECT null, 'fail' FROM pg_am (Patrick)\nSELECT NULL as EMPTY_FIELD now allowed(Patrick)\nRemove un-needed signal stuff from contrib/pginterface\nFix OR (where x != 1 or x isnull didn't return tuples with x NULL) (Vadim)\nFix time_cmp function (Vadim)\nFix handling of functions with non-attribute first argument in\n WHERE clauses (Vadim)\nFix GROUP BY when order of entries is different from order\n in target list (Vadim)\nFix pg_dump for aggregates without sfunc1 (Vadim)\n\nEnhancements\n------------\nDefault genetic optimizer GEQO parameter is now 8(Bruce)\nAllow use parameters in target list having aggregates in functions(Vadim)\nAdded JDBC driver as an interface(Adrian & Peter)\npg_password utility\nReturn number of tuples inserted/affected by INSERT/UPDATE/DELETE etc.(Vadim)\nTriggers implemented with CREATE TRIGGER (SQL3)(Vadim)\nSPI (Server Programming Interface) allows execution of queries inside\n C-functions (Vadim)\nNOT NULL implemented (SQL92)(Robson Paniago de Miranda)\nInclude reserved words for string handling, outer joins, and unions(Thomas)\nImplement extended comments (\"/* ... */\") using exclusive states(Thomas)\nAdd \"//\" single-line comments(Bruce)\nRemove some restrictions on characters in operator names(Thomas)\nDEFAULT and CONSTRAINT for tables implemented (SQL92)(Vadim & Thomas)\nAdd text concatenation operator and function (SQL92)(Thomas)\nSupport WITH TIME ZONE syntax (SQL92)(Thomas)\nSupport INTERVAL unit TO unit syntax (SQL92)(Thomas)\nDefine types DOUBLE PRECISION, INTERVAL, CHARACTER,\n and CHARACTER VARYING (SQL92)(Thomas)\nDefine type FLOAT(p) and rudimentary DECIMAL(p,s), NUMERIC(p,s) (SQL92)(Thomas)\nDefine EXTRACT(), POSITION(), SUBSTRING(), and TRIM() (SQL92)(Thomas)\nDefine CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP (SQL92)(Thomas)\nAdd syntax and warnings for UNION, HAVING, INNER and OUTER JOIN (SQL92)(Thomas)\nAdd more reserved words, mostly for SQL92 compliance(Thomas)\nAllow hh:mm:ss time entry for timespan/reltime types(Thomas)\nAdd center() routines for lseg, path, polygon(Thomas)\nAdd distance() routines for circle-polygon, polygon-polygon(Thomas)\nCheck explicitly for points and polygons contained within polygons\n using an axis-crossing algorithm(Thomas)\nAdd routine to convert circle-box(Thomas)\nMerge conflicting operators for different geometric data types(Thomas)\nReplace distance operator \"<===>\" with \"<->\"(Thomas)\nReplace \"above\" operator \"!^\" with \">^\" and \"below\" operator \"!|\" with \"<^\"(Thomas)\nAdd routines for text trimming on both ends, substring, and string position(Thomas)\nAdded conversion routines circle(box) and poly(circle)(Thomas)\nAllow internal sorts to be stored in memory rather than in files(Bruce & Vadim)\nAllow functions and operators on internally-identical types to succeed(Bruce)\nSpeed up backend startup after profiling analysis(Bruce)\nInline frequently called functions for performance(Bruce)\nReduce 
open() calls(Bruce)\npsql: Add PAGER for \\h and \\?,\\C fix\nFix for psql pager when no tty(Bruce)\nNew entab utility(Bruce)\nGeneral trigger functions for referential integrity (Vadim)\nGeneral trigger functions for time travel (Vadim)\nGeneral trigger functions for AUTOINCREMENT/IDENTITY feature (Vadim)\nMOVE implementation (Vadim)\n\nSource Tree Changes\n-------------------\nHPUX 10 patches (Vladimir Turin)\nAdded SCO support, (Daniel Harris)\nmkLinux patches (Tatsuo Ishii)\nChange geometric box terminology from \"length\" to \"width\"(Thomas)\nDeprecate temporary unstored slope fields in geometric code(Thomas)\nRemove restart instructions from INSTALL(Bruce)\nLook in /usr/ucb first for install(Bruce)\nFix c++ copy example code(Thomas)\nAdd -o to psql manual page(Bruce)\nPrevent relname unallocated string length from being copied into database(Bruce)\nCleanup for NAMEDATALEN use(Bruce)\nFix pg_proc names over 15 chars in output(Bruce)\nAdd strNcpy() function(Bruce)\nremove some (void) casts that are unnecessary(Bruce)\nnew interfaces directory(Marc)\nReplace fopen() calls with calls to fd.c functions(Bruce)\nMake functions static where possible(Bruce)\nenclose unused functions in #ifdef NOT_USED(Bruce)\nRemove call to difftime() in timestamp support to fix SunOS(Bruce & Thomas)\nChanges for Digital Unix\nPortability fix for pg_dumpall(Bruce)\nRename pg_attribute.attnvals to attdisbursion(Bruce)\n\"intro/unix\" manual page now \"pgintro\"(Bruce)\n\"built-in\" manual page now \"pgbuiltin\"(Bruce)\n\"drop\" manual page now \"drop_table\"(Bruce)\nAdd \"create_trigger\", \"drop_trigger\" manual pages(Thomas)\nAdd constraints regression test(Vadim & Thomas)\nAdd comments syntax regression test(Thomas)\nAdd PGINDENT and support program(Bruce)\nMassive commit to run PGINDENT on all *.c and *.h files(Bruce)\nFiles moved to /src/tools directory(Bruce)\nSPI and Trigger programming guides (Vadim & D'Arcy)\n\n ------------------------------------------------------------------------\nRelease 6.2.1 Release 6.1.1\n Release Notes\n ------------------------------------------------------------------------\n\nRelease 6.1.1\n\nMigration from v6.1 to v6.1.1\n\nThis is a minor bug-fix release. A dump/reload is not required from v6.1,\nbut is required from any release prior to v6.1. 
Refer to the release notes\nfor v6.1 for more details.\n\nDetailed Change List\n\nChanges in this release\n-----------------------\nfix for SET with options (Thomas)\nallow pg_dump/pg_dumpall to preserve ownership of all tables/objects(Bruce)\nnew psql \\connect option allows changing usernames without changing databases\nfix for initdb --debug option(Yoshihiko Ichikawa))\nlextest cleanup(Bruce)\nhash fixes(Vadim)\nfix date/time month boundary arithmetic(Thomas)\nfix timezone daylight handling for some ports(Thomas, Bruce, Tatsuo)\ntimestamp overhauled to use standard functions(Thomas)\nother code cleanup in date/time routines(Thomas)\npsql's \\d now case-insensitive(Bruce)\npsql's backslash commands can now have trailing semicolon(Bruce)\nfix memory leak in psql when using \\g(Bruce)\nmajor fix for endian handling of communication to server(Thomas, Tatsuo)\nFix for Solaris assembler and include files(Yoshihiko Ichikawa)\nallow underscores in usernames(Bruce)\npg_dumpall now returns proper status, portability fix(Bruce)\n\n ------------------------------------------------------------------------\nRelease 6.2 Release 6.1\n Release Notes\n ------------------------------------------------------------------------\n\nRelease 6.1\n\nThe regression tests have been adapted and extensively modified for the v6.1\nrelease of Postgres.\n\nThree new data types (datetime, timespan, and circle) have been added to the\nnative set of Postgres types. Points, boxes, paths, and polygons have had\ntheir output formats made consistant across the data types. The polygon\noutput in misc.out has only been spot-checked for correctness relative to\nthe original regression output.\n\nPostgres v6.1 introduces a new, alternate optimizer which uses genetic\nalgorithms. These algorithms introduce a random behavior in the ordering of\nquery results when the query contains multiple qualifiers or multiple tables\n(giving the optimizer a choice on order of evaluation). Several regression\ntests have been modified to explicitly order the results, and hence are\ninsensitive to optimizer choices. A few regression tests are for data types\nwhich are inherently unordered (e.g. points and time intervals) and tests\ninvolving those types are explicitly bracketed with set geqo to 'off' and\nreset geqo.\n\nThe interpretation of array specifiers (the curly braces around atomic\nvalues) appears to have changed sometime after the original regression tests\nwere generated. The current ./expected/*.out files reflect this new\ninterpretation, which may not be correct!\n\nThe float8 regression test fails on at least some platforms. 
This is due to\ndifferences in implementations of pow() and exp() and the signaling\nmechanisms used for overflow and underflow conditions.\n\nThe \"random\" results in the random test should cause the \"random\" test to be\n\"failed\", since the regression tests are evaluated using a simple diff.\nHowever, \"random\" does not seem to produce random results on my test machine\n(Linux/gcc/i686).\n\nMigration to v6.1\n\nThis migration requires a complete dump of the 6.0 database and a restore of\nthe database in 6.1.\n\nThose migrating from earlier 1.* releases should first upgrade to 1.09\nbecause the COPY output format was improved from the 1.02 release.\n\nDetailed Change List\n\nBug Fixes\n---------\npacket length checking in library routines\nlock manager priority patch\ncheck for under/over flow of float8(Bruce)\nmulti-table join fix(Vadim)\nSIGPIPE crash fix(Darren)\nlarge object fixes(Sven)\nallow btree indexes to handle NULLs(Vadim)\ntimezone fixes(D'Arcy)\nselect SUM(x) can return NULL on no rows(Thomas)\ninternal optimizer, executor bug fixes(Vadim)\nfix problem where inner loop in < or <= has no rows(Vadim)\nprevent re-commuting join index clauses(Vadim)\nfix join clauses for multiple tables(Vadim)\nfix hash, hashjoin for arrays(Vadim)\nfix btree for abstime type(Vadim)\nlarge object fixes(Raymond)\nfix buffer leak in hash indices (Vadim)\nfix rtree for use in inner scan (Vadim)\nfix gist for use in inner scan, cleanups (Vadim, Andrea)\navoid unnecessary local buffers allocation (Vadim, Massimo)\nfix local buffers leak in transaction aborts (Vadim)\nfix file manager memmory leaks, cleanups (Vadim, Massimo)\nfix storage manager memmory leaks (Vadim)\nfix btree duplicates handling (Vadim)\nfix deleted tuples re-incarnation caused by vacuum (Vadim)\nfix SELECT varchar()/char() INTO TABLE made zero-length fields(Bruce)\nmany psql, pg_dump, and libpq memory leaks fixed using Purify (Igor)\n\nEnhancements\n------------\nattribute optimization statistics(Bruce)\nmuch faster new btree bulk load code(Paul)\nBTREE UNIQUE added to bulk load code(Vadim)\nnew lock debug code(Massimo)\nmassive changes to libpg++(Leo)\nnew GEQO optimizer speeds table multi-table optimization(Martin)\nnew WARN message for non-unique insert into unique key(Marc)\nupdate x=-3, no spaces, now valid(Bruce)\nremove case-sensitive identifier handling(Bruce,Thomas,Dan)\ndebug backend now pretty-prints tree(Darren)\nnew Oracle character functions(Edmund)\nnew plaintext password functions(Dan)\nno such class or insufficient privilege changed to distinct messages(Dan)\nnew ANSI timestamp function(Dan)\nnew ANSI Time and Date types (Thomas)\nmove large chunks of data in backend(Martin)\nmulti-column btree indexes(Vadim)\nnew SET var TO value command(Martin)\nupdate transaction status on reads(Dan)\nnew locale settings for character types(Oleg)\nnew SEQUENCE serial number generator(Vadim)\nGROUP BY function now possible(Vadim)\nre-organize regression test(Thomas,Marc)\nnew optimizer operation weights(Vadim)\nnew psql \\z grant/permit option(Marc)\nnew MONEY data type(D'Arcy,Thomas)\ntcp socket communication speed improved(Vadim)\nnew VACUUM option for attribute statistics, and for certain columns (Vadim)\nmany geometric type improvements(Thomas,Keith)\nadditional regression tests(Thomas)\nnew datestyle variable(Thomas,Vadim,Martin)\nmore comparison operators for sorting types(Thomas)\nnew conversion functions(Thomas)\nnew more compact btree format(Vadim)\nallow pg_dumpall to preserve database ownership(Bruce)\nnew SET 
GEQO=# and R_PLANS variable(Vadim)\nold (!GEQO) optimizer can use right-sided plans (Vadim)\ntypechecking improvement in SQL parser(Bruce)\nnew SET, SHOW, RESET commands(Thomas,Vadim)\nnew \\connect database USER option\nnew destroydb -i option (Igor)\nnew \\dt and \\di psql commands (Darren)\nSELECT \"\\n\" now escapes newline (A. Duursma)\nnew geometry conversion functions from old format (Thomas)\n\nSource tree changes\n-------------------\nnew configuration script(Marc)\nreadline configuration option added(Marc)\nOS-specific configuration options removed(Marc)\nnew OS-specific template files(Marc)\nno more need to edit Makefile.global(Marc)\nre-arrange include files(Marc)\nnextstep patches (Gregor Hoffleit)\nremoved WIN32-specific code(Bruce)\nremoved postmaster -e option, now only postgres -e option (Bruce)\nmerge duplicate library code in front/backends(Martin)\nnow works with eBones, international Kerberos(Jun)\nmore shared library support\nc++ include file cleanup(Bruce)\nwarn about buggy flex(Bruce)\nDG-UX, Ultrix, Irix, AIX portability fixes\n\n ------------------------------------------------------------------------\nRelease 6.1.1 Release v6.0\n Release Notes\n ------------------------------------------------------------------------\n\nRelease v6.0\n\nA dump/restore is required for those wishing to migrate data from previous\nreleases of Postgres.\n\nMigration from v1.09 to v6.0\n\nThis migration requires a complete dump of the 1.09 database and a restore\nof the database in 6.0.\n\nMigration from pre-v1.09 to v6.0\n\nThose migrating from earlier 1.* releases should first upgrade to 1.09\nbecause the COPY output format was improved from the 1.02 release.\n\nDetailed Change List\n\nBug Fixes\n---------\nALTER TABLE bug - running postgress process needs to re-read table definition\nAllow vacuum to be run on one table or entire database(Bruce)\nArray fixes\nFix array over-runs of memory writes(Kurt)\nFix elusive btree range/non-range bug(Dan)\nFix for hash indexes on some types like time and date\nFix for pg_log size explosion\nFix permissions on lo_export()(Bruce)\nFix unitialized reads of memory(Kurt)\nFixed ALTER TABLE ... char(3) bug(Bruce)\nFixed a few small memory leaks\nFixed EXPLAIN handling of options and changed full_path option name\nFixed output of group acl permissions\nMemory leaks (hunt and destroy with tools like Purify(Kurt)\nMinor improvements to rules system\nNOTIFY fixes\nNew asserts for run-checking\nOverhauled parser/analyze code to properly report errors and increase speed\nPg_dump -d now handles NULL's properly(Bruce)\nPrevent SELECT NULL from crashing server (Bruce)\nProperly report errors when INSERT ... SELECT columns did not match\nProperly report errors when insert column names were not correct\nPsql \\g filename now works(Bruce)\nPsql fixed problem with multiple statements on one line with multiple outputs\nRemoved duplicate system oid's\nSELECT * INTO TABLE . 
GROUP/ORDER BY gives unlink error if table exists(Bruce)\nSeveral fixes for queries that crashed the backend\nStarting quote in insert string errors(Bruce)\nSubmitting an empty query now returns empty status, not just \" \" query(Bruce)\n\nEnhancements\n------------\nAdd EXPLAIN manual page(Bruce)\nAdd UNIQUE index capability(Dan)\nAdd hostname/user level access control rather than just hostname and user\nAdd synonym of != for (Bruce)\nAllow \"select oid,* from table\"\nAllow BY,ORDER BY to specify columns by number, or by non-alias table.column(Bruce)\nAllow COPY from the frontend(Bryan)\nAllow GROUP BY to use alias column name(Bruce)\nAllow actual compression, not just reuse on the same page(Vadim)\nAllow installation-configuration option to auto-add all local users(Bryan)\nAllow libpq to distinguish between text value '' and null(Bruce)\nAllow non-postgres users with createdb privs to destroydb's\nAllow restriction on who can create C functions(Bryan)\nAllow restriction on who can do backend COPY(Bryan)\nCan shrink tables, pg_time and pg_log(Vadim & Erich)\nChange debug level 2 to print queries only, changed debug heading layout(Bruce)\nChange default decimal constant representation from float4 to float8(Bruce)\nEuropean date format now set when postmaster is started\nExecute lowercase function names if not found with exact case\nFixes for aggregate/GROUP processing, allow 'select sum(func(x),sum(x+y) from z'\nGist now included in the distrubution(Marc)\nIdend authentication of local users(Bryan)\nImplement BETWEEN qualifier(Bruce)\nImplement IN qualifier(Bruce)\nLibpq has PQgetisnull()(Bruce)\nLibpq++ improvements\nNew options to initdb(Bryan)\nPg_dump allow dump of oid's(Bruce)\nPg_dump create indexes after tables are loaded for speed(Bruce)\nPg_dumpall dumps all databases, and the user table\nPginterface additions for NULL values(Bruce)\nPrevent postmaster from being run as root\nPsql \\h and \\? 
is now readable(Bruce)\nPsql allow backslashed, semicolons anywhere on the line(Bruce)\nPsql changed command prompt for lines in query or in quotes(Bruce)\nPsql char(3) now displays as (bp)char in \\d output(Bruce)\nPsql return code now more accurate(Bryan?)\nPsql updated help syntax(Bruce)\nRe-visit and fix vacuum(Vadim)\nReduce size of regression diffs, remove timezone name difference(Bruce)\nRemove compile-time parameters to enable binary distributions(Bryan)\nReverse meaning of HBA masks(Bryan)\nSecure Authentication of local users(Bryan)\nSpeed up vacuum(Vadim)\nVacuum now had VERBOSE option(Bruce)\n\nSource tree changes\n-------------------\nAll functions now have prototypes that are compared against the calls\nAllow asserts to be disabled easly from Makefile.global(Bruce)\nChange oid constants used in code to #define names\nDecoupled sparc and solaris defines(Kurt)\nGcc -Wall compiles cleanly with warnings only from unfixable constructs\nMajor include file reorganization/reduction(Marc)\nMake now stops on compile failure(Bryan)\nMakefile restructuring(Bryan, Marc)\nMerge bsdi_2_1 to bsdi(Bruce)\nMonitor program removed\nName change from Postgres95 to PostgreSQL\nNew config.h file(Marc, Bryan)\nPG_VERSION now set to 6.0 and used by postmaster\nPortability additions, including Ultrix, DG/UX, AIX, and Solaris\nReduced the number of #define's, centeralized #define's\nRemove duplicate OIDS in system tables(Dan)\nRemove duplicate system catalog info or report mismatches(Dan)\nRemoved many os-specific #define's\nRestructured object file generation/location(Bryan, Marc)\nRestructured port-specific file locations(Bryan, Marc)\nUnused/uninialized variables corrected\n\n ------------------------------------------------------------------------\nRelease 6.1 Release v1.09\n Release Notes\n ------------------------------------------------------------------------\n\nRelease v1.09\n\nSorry, we stopped keeping track of changes from 1.02 to 1.09. Some of the\nchanges listed in 6.0 were actually included in the 1.02.1 to 1.09 releases.\n\n ------------------------------------------------------------------------\nRelease v6.0 Release v1.02\n Release Notes\n ------------------------------------------------------------------------\n\nRelease v1.02\n\nMigration from v1.02 to v1.02.1\n\nHere is a new migration file for 1.02.1. It includes the 'copy' change and a\nscript to convert old ascii files.\n\n Note: The following notes are for the benefit of users who want to\n migrate databases from postgres95 1.01 and 1.02 to postgres95\n 1.02.1.\n\n If you are starting afresh with postgres95 1.02.1 and do not need\n to migrate old databases, you do not need to read any further.\n\nIn order to upgrade older postgres95 version 1.01 or 1.02 databases to\nversion 1.02.1, the following steps are required:\n\n 1. Start up a new 1.02.1 postmaster\n\n 2. Add the new built-in functions and operators of 1.02.1 to 1.01 or 1.02\n databases. This is done by running the new 1.02.1 server against your\n own 1.01 or 1.02 database and applying the queries attached at the end\n of thie file. This can be done easily through psql. If your 1.01 or\n 1.02 database is named \"testdb\" and you have cut the commands from the\n end of this file and saved them in addfunc.sql:\n\n % psql testdb -f addfunc.sql\n\n Those upgrading 1.02 databases will get a warning when executing the\n last two statements in the file because they are already present in\n 1.02. 
This is not a cause for concern.\n\nDump/Reload Procedure\n\nIf you are trying to reload a pg_dump or text-mode 'copy tablename to\nstdout' generated with a previous version, you will need to run the attached\nsed script on the ASCII file before loading it into the database. The old\nformat used '.' as end-of-data, while '\\.' is now the end-of-data marker.\nAlso, empty strings are now loaded in as '' rather than NULL. See the copy\nmanual page for full details.\n\n sed 's/^\\.$/\\\\./g' in_file out_file\n\nIf you are loading an older binary copy or non-stdout copy, there is no\nend-of-data character, and hence no conversion necessary.\n\n-- following lines added by agc to reflect the case-insensitive\n-- regexp searching for varchar (in 1.02), and bpchar (in 1.02.1)\ncreate operator ~* (leftarg = bpchar, rightarg = text, procedure = texticregexeq);\ncreate operator !~* (leftarg = bpchar, rightarg = text, procedure = texticregexne);\ncreate operator ~* (leftarg = varchar, rightarg = text, procedure = texticregexeq);\ncreate operator !~* (leftarg = varchar, rightarg = text, procedure = texticregexne);\n\nDetailed Change List\n\nSource code maintenance and development\n * worldwide team of volunteers\n * the source tree now in CVS at ftp.ki.net\n\nEnhancements\n * psql (and underlying libpq library) now has many more options for\n formatting output, including HTML\n * pg_dump now output the schema and/or the data, with many fixes to\n enhance completeness.\n * psql used in place of monitor in administration shell scripts.\n monitor to be depreciated in next release.\n * date/time functions enhanced\n * NULL insert/update/comparison fixed/enhanced\n * TCL/TK lib and shell fixed to work with both tck7.4/tk4.0 and tcl7.5/tk4.1\n\nBug Fixes (almost too numerous to mention)\n * indexes\n * storage management\n * check for NULL pointer before dereferencing\n * Makefile fixes\n\nNew Ports\n * added SolarisX86 port\n * added BSDI 2.1 port\n * added DGUX port\n\n ------------------------------------------------------------------------\nRelease v1.09 Release v1.01\n Release Notes\n ------------------------------------------------------------------------\n\nRelease v1.01\n\nMigration from v1.0 to v1.01\n\nThe following notes are for the benefit of users who want to migrate\ndatabases from postgres95 1.0 to postgres95 1.01.\n\nIf you are starting afresh with postgres95 1.01 and do not need to migrate\nold databases, you do not need to read any further.\n\nIn order to postgres95 version 1.01 with databases created with postgres95\nversion 1.0, the following steps are required:\n\n 1. Set the definition of NAMEDATALEN in src/Makefile.global to 16 and\n OIDNAMELEN to 20.\n\n 2. Decide whether you want to use Host based authentication.\n\n a. If you do, you must create a file name \"pg_hba\" in your top-level\n data directory (typically the value of your $PGDATA).\n src/libpq/pg_hba shows an example syntax.\n\n b. If you do not want host-based authentication, you can comment out\n the line\n\n HBA = 1\n\n in src/Makefile.global\n\n Note that host-based authentication is turned on by default, and\n if you do not take steps A or B above, the out-of-the-box 1.01\n will not allow you to connect to 1.0 databases.\n\n 3. Compile and install 1.01, but DO NOT do the initdb step.\n\n 4. Before doing anything else, terminate your 1.0 postmaster, and backup\n your existing $PGDATA directory.\n\n 5. 
Set your PGDATA environment variable to your 1.0 databases, but set up\n path up so that 1.01 binaries are being used.\n\n 6. Modify the file $PGDATA/PG_VERSION from 5.0 to 5.1\n\n 7. Start up a new 1.01 postmaster\n\n 8. Add the new built-in functions and operators of 1.01 to 1.0 databases.\n This is done by running the new 1.01 server against your own 1.0\n database and applying the queries attached and saving in the file\n 1.0_to_1.01.sql. This can be done easily through psql. If your 1.0\n database is name \"testdb\":\n\n % psql testdb -f 1.0_to_1.01.sql\n\n and then execute the following commands (cut and paste from here):\n\n -- add builtin functions that are new to 1.01\n\n create function int4eqoid (int4, oid) returns bool as 'foo'\n language 'internal';\n create function oideqint4 (oid, int4) returns bool as 'foo'\n language 'internal';\n create function char2icregexeq (char2, text) returns bool as 'foo'\n language 'internal';\n create function char2icregexne (char2, text) returns bool as 'foo'\n language 'internal';\n create function char4icregexeq (char4, text) returns bool as 'foo'\n language 'internal';\n create function char4icregexne (char4, text) returns bool as 'foo'\n language 'internal';\n create function char8icregexeq (char8, text) returns bool as 'foo'\n language 'internal';\n create function char8icregexne (char8, text) returns bool as 'foo'\n language 'internal';\n create function char16icregexeq (char16, text) returns bool as 'foo'\n language 'internal';\n create function char16icregexne (char16, text) returns bool as 'foo'\n language 'internal';\n create function texticregexeq (text, text) returns bool as 'foo'\n language 'internal';\n create function texticregexne (text, text) returns bool as 'foo'\n language 'internal';\n\n -- add builtin functions that are new to 1.01\n\n create operator = (leftarg = int4, rightarg = oid, procedure = int4eqoid);\n create operator = (leftarg = oid, rightarg = int4, procedure = oideqint4);\n create operator ~* (leftarg = char2, rightarg = text, procedure = char2icregexeq);\n create operator !~* (leftarg = char2, rightarg = text, procedure = char2icregexne);\n create operator ~* (leftarg = char4, rightarg = text, procedure = char4icregexeq);\n create operator !~* (leftarg = char4, rightarg = text, procedure = char4icregexne);\n create operator ~* (leftarg = char8, rightarg = text, procedure = char8icregexeq);\n create operator !~* (leftarg = char8, rightarg = text, procedure = char8icregexne);\n create operator ~* (leftarg = char16, rightarg = text, procedure = char16icregexeq);\n create operator !~* (leftarg = char16, rightarg = text, procedure = char16icregexne);\n create operator ~* (leftarg = text, rightarg = text, procedure = texticregexeq);\n create operator !~* (leftarg = text, rightarg = text, procedure = texticregexne);\n\nDetailed Change List\n\nIncompatibilities:\n * 1.01 is backwards compatible with 1.0 database provided the user\n follow the steps outlined in the MIGRATION_from_1.0_to_1.01 file.\n If those steps are not taken, 1.01 is not compatible with 1.0 database.\n\nEnhancements:\n * added PQdisplayTuples() to libpq and changed monitor and psql to use it\n * added NeXT port (requires SysVIPC implementation)\n * added CAST .. AS ... 
syntax\n * added ASC and DESC keywords\n * added 'internal' as a possible language for CREATE FUNCTION\n internal functions are C functions which have been statically linked\n into the postgres backend.\n * a new type \"name\" has been added for system identifiers (table names,\n attribute names, etc.) This replaces the old char16 type. The\n of name is set by the NAMEDATALEN #define in src/Makefile.global\n * a readable reference manual that describes the query language.\n * added host-based access control. A configuration file ($PGDATA/pg_hba)\n is used to hold the configuration data. If host-based access control\n is not desired, comment out HBA=1 in src/Makefile.global.\n * changed regex handling to be uniform use of Henry Spencer's regex code\n regardless of platform. The regex code is included in the distribution\n * added functions and operators for case-insensitive regular expressions.\n The operators are ~* and !~*.\n * pg_dump uses COPY instead of SELECT loop for better performance\n\nBug fixes:\n * fixed an optimizer bug that was causing core dumps when\n functions calls were used in comparisons in the WHERE clause\n * changed all uses of getuid to geteuid so that effective uids are used\n * psql now returns non-zero status on errors when using -c\n * applied public patches 1-14\n\n ------------------------------------------------------------------------\nRelease v1.02 Release v1.0\n Release Notes\n ------------------------------------------------------------------------\n\nRelease v1.0\n\nDetailed Change List\n\nCopyright change:\n * The copyright of Postgres 1.0 has been loosened to be freely modifiable\n and modifiable for any purpose. Please read the COPYRIGHT file.\n Thanks to Professor Michael Stonebraker for making this possible.\n\nIncompatibilities:\n * date formats have to be MM-DD-YYYY (or DD-MM-YYYY if you're using\n EUROPEAN STYLE). 
This follows SQL-92 specs.\n * \"delimiters\" is now a keyword\n\nEnhancements:\n * sql LIKE syntax has been added\n * copy command now takes an optional USING DELIMITER specification.\n delimiters can be any single-character string.\n * IRIX 5.3 port has been added.\n Thanks to Paul Walmsley and others.\n * updated pg_dump to work with new libpq\n * \\d has been added psql\n Thanks to Keith Parks\n * regexp performance for architectures that use POSIX regex has been\n improved due to caching of precompiled patterns.\n Thanks to Alistair Crooks\n * a new version of libpq++\n Thanks to William Wanders\n\nBug fixes:\n * arbitrary userids can be specified in the createuser script\n * \\c to connect to other databases in psql now works.\n * bad pg_proc entry for float4inc() is fixed\n * users with usecreatedb field set can now create databases without\n having to be usesuper\n * remove access control entries when the entry no longer has any\n permissions\n * fixed non-portable datetimes implementation\n * added kerberos flags to the src/backend/Makefile\n * libpq now works with kerberos\n * typographic errors in the user manual have been corrected.\n * btrees with multiple index never worked, now we tell you they don't\n work when you try to use them\n\n ------------------------------------------------------------------------\nRelease v1.01 Postgres95 Beta 0.03\n Release Notes\n ------------------------------------------------------------------------\n\nPostgres95 Beta 0.03\n\nDetailed Change List\n\nIncompatible changes:\n * BETA-0.3 IS INCOMPATIBLE WITH DATABASES CREATED WITH PREVIOUS VERSIONS\n (due to system catalog changes and indexing structure changes).\n * double-quote (\") is deprecated as a quoting character for string literals;\n you need to convert them to single quotes (').\n * name of aggregates (eg. int4sum) are renamed in accordance with the\n SQL standard (eg. sum).\n * CHANGE ACL syntax is replaced by GRANT/REVOKE syntax.\n * float literals (eg. 3.14) are now of type float4 (instead of float8 in\n previous releases); you might have to do typecasting if you depend on it\n being of type float8. If you neglect to do the typecasting and you assign\n a float literal to a field of type float8, you may get incorrect values\n stored!\n * LIBPQ has been totally revamped so that frontend applications\n can connect to multiple backends\n * the usesysid field in pg_user has been changed from int2 to int4 to\n allow wider range of Unix user ids.\n * the netbsd/freebsd/bsd o/s ports have been consolidated into a\n single BSD44_derived port. (thanks to Alistair Crooks)\n\nSQL standard-compliance (the following details changes that makes postgres95\nmore compliant to the SQL-92 standard):\n * the following SQL types are now built-in: smallint, int(eger), float, real,\n char(N), varchar(N), date and time.\n\n The following are aliases to existing postgres types:\n smallint -> int2\n integer, int -> int4\n float, real -> float4\n char(N) and varchar(N) are implemented as truncated text types. In\n addition, char(N) does blank-padding.\n * single-quote (') is used for quoting string literals; '' (in addition to\n \\') is supported as means of inserting a single quote in a string\n * SQL standard aggregate names (MAX, MIN, AVG, SUM, COUNT) are used\n (Also, aggregates can now be overloaded, i.e. you can define your\n own MAX aggregate to take in a user-defined type.)\n * CHANGE ACL removed. 
GRANT/REVOKE syntax added.\n - Privileges can be given to a group using the \"GROUP\" keyword.\n For example:\n GRANT SELECT ON foobar TO GROUP my_group;\n The keyword 'PUBLIC' is also supported to mean all users.\n\n Privileges can only be granted or revoked to one user or group\n at a time.\n\n \"WITH GRANT OPTION\" is not supported. Only class owners can change\n access control\n - The default access control is to to grant users readonly access.\n You must explicitly grant insert/update access to users. To change\n this, modify the line in\n src/backend/utils/acl.h\n that defines ACL_WORLD_DEFAULT\n\nBug fixes:\n * the bug where aggregates of empty tables were not run has been fixed. Now,\n aggregates run on empty tables will return the initial conditions of the\n aggregates. Thus, COUNT of an empty table will now properly return 0.\n MAX/MIN of an empty table will return a tuple of value NULL.\n * allow the use of \\; inside the monitor\n * the LISTEN/NOTIFY asynchronous notification mechanism now work\n * NOTIFY in rule action bodies now work\n * hash indices work, and access methods in general should perform better.\n creation of large btree indices should be much faster. (thanks to Paul\n Aoki)\n\nOther changes and enhancements:\n * addition of an EXPLAIN statement used for explaining the query execution\n plan (eg. \"EXPLAIN SELECT * FROM EMP\" prints out the execution plan for\n the query).\n * WARN and NOTICE messages no longer have timestamps on them. To turn on\n timestamps of error messages, uncomment the line in\n src/backend/utils/elog.h:\n /* define ELOG_TIMESTAMPS */\n * On an access control violation, the message\n \"Either no such class or insufficient privilege\"\n will be given. This is the same message that is returned when\n a class is not found. This dissuades non-privileged users from\n guessing the existence of privileged classes.\n * some additional system catalog changes have been made that are not\n visible to the user.\n\nlibpgtcl changes:\n * The -oid option has been added to the \"pg_result\" tcl command.\n pg_result -oid returns oid of the last tuple inserted. If the\n last command was not an INSERT, then pg_result -oid returns \"\".\n * the large object interface is available as pg_lo* tcl commands:\n pg_lo_open, pg_lo_close, pg_lo_creat, etc.\n\nPortability enhancements and New Ports:\n * flex/lex problems have been cleared up. Now, you should be able to use\n flex instead of lex on any platforms. We no longer make assumptions of\n what lexer you use based on the platform you use.\n * The Linux-ELF port is now supported. Various configuration have been\n tested: The following configuration is known to work:\n kernel 1.2.10, gcc 2.6.3, libc 4.7.2, flex 2.5.2, bison 1.24\n with everything in ELF format,\n\nNew utilities:\n * ipcclean added to the distribution\n ipcclean usually does not need to be run, but if your backend crashes\n and leaves shared memory segments hanging around, ipcclean will\n clean them up for you.\n\nNew documentation:\n * the user manual has been revised and libpq documentation added.\n\n ------------------------------------------------------------------------\nRelease v1.0 Postgres95 Beta 0.02\n Release Notes\n ------------------------------------------------------------------------\n\nPostgres95 Beta 0.02\n\nDetailed Change List\n\nIncompatible changes:\n * The SQL statement for creating a database is 'CREATE DATABASE' instead\n of 'CREATEDB'. Similarly, dropping a database is 'DROP DATABASE' instead\n of 'DESTROYDB'. 
However, the names of the executables 'createdb' and\n 'destroydb' remain the same.\n\nNew tools:\n * pgperl - a Perl (4.036) interface to Postgres95\n * pg_dump - a utility for dumping out a postgres database into a\n script file containing query commands. The script files are in a ASCII\n format and can be used to reconstruct the database, even on other\n machines and other architectures. (Also good for converting\n a Postgres 4.2 database to Postgres95 database.)\n\nThe following ports have been incorporated into postgres95-beta-0.02:\n * the NetBSD port by Alistair Crooks\n * the AIX port by Mike Tung\n * the Windows NT port by Jon Forrest (more stuff but not done yet)\n * the Linux ELF port by Brian Gallew\n\nThe following bugs have been fixed in postgres95-beta-0.02:\n * new lines not escaped in COPY OUT and problem with COPY OUT when first\n attribute is a '.'\n * cannot type return to use the default user id in createuser\n * SELECT DISTINCT on big tables crashes\n * Linux installation problems\n * monitor doesn't allow use of 'localhost' as PGHOST\n * psql core dumps when doing \\c or \\l\n * the \"pgtclsh\" target missing from src/bin/pgtclsh/Makefile\n * libpgtcl has a hard-wired default port number\n * SELECT DISTINCT INTO TABLE hangs\n * CREATE TYPE doesn't accept 'variable' as the internallength\n * wrong result using more than 1 aggregate in a SELECT\n\n ------------------------------------------------------------------------\nPostgres95 Beta 0.03 Postgres95 Beta 0.01\n Release Notes\n ------------------------------------------------------------------------\n\nPostgres95 Beta 0.01\n\nInitial release.\n\n ------------------------------------------------------------------------\nPostgres95 Beta 0.02 Timing Results\n Release Notes\nPrev\n ------------------------------------------------------------------------\n\nTiming Results\n\nThese timing results are from running the regression test with the commands\n\n% cd src/test/regress\n% make all\n% time make runtest\n\n\nTiming under Linux 2.0.27 seems to have a roughly 5% variation from run to\nrun, presumably due to the scheduling vagaries of multitasking systems.\n\nv6.5\n\nAs has been the case for previous releases, timing between releases is not\ndirectly comparable since new regression tests have been added. In general,\nv6.5 is faster than previous releases.\n\nTiming with fsync() disabled:\n\n Time System\n 02:00 Dual Pentium Pro 180, 224MB, UW-SCSI, Linux 2.0.36, gcc 2.7.2.3 -O2 -m486\n 04:38 Sparc Ultra 1 143MHz, 64MB, Solaris 2.6\n\n\nTiming with fsync() enabled:\n\n Time System\n 04:21 Dual Pentium Pro 180, 224MB, UW-SCSI, Linux 2.0.36, gcc 2.7.2.3 -O2 -m486\n\n\nFor the linux system above, using UW-SCSI disks rather than (older) IDE\ndisks leads to a 50% improvement in speed on the regression test.\n\nv6.4beta\n\nThe times for this release are not directly comparable to those for previous\nreleases since some additional regression tests have been included. In\ngeneral, however, v6.4 should be slightly faster than the previous release\n(thanks, Bruce!).\n\n Time System\n 02:26 Dual Pentium Pro 180, 96MB, UW-SCSI, Linux 2.0.30, gcc 2.7.2.1 -O2 -m486\n\nv6.3\n\nThe times for this release are not directly comparable to those for previous\nreleases since some additional regression tests have been included and some\nobsolete tests involving time travel have been removed. 
In general, however,\nv6.3 is substantially faster than previous releases (thanks, Bruce!).\n\n Time System\n 02:30 Dual Pentium Pro 180, 96MB, UW-SCSI, Linux 2.0.30, gcc 2.7.2.1 -O2 -m486\n 04:12 Dual Pentium Pro 180, 96MB, EIDE, Linux 2.0.30, gcc 2.7.2.1 -O2 -m486\n\nv6.1\n\n Time System\n 06:12 Pentium Pro 180, 32MB, EIDE, Linux 2.0.30, gcc 2.7.2 -O2 -m486\n 12:06 P-100, 48MB, Linux 2.0.29, gcc\n 39:58 Sparc IPC 32MB, Solaris 2.5, gcc 2.7.2.1 -O -g\n\n ------------------------------------------------------------------------\nPrev Home\nPostgres95 Beta 0.01", "msg_date": "Tue, 12 Oct 1999 12:04:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "New HISTORY file" } ]
[ { "msg_contents": "Here is my proposal for an outline for a PostgreSQL book. Many of us\nhave been asked by publishers about writing a book. Here is what I\nthink would be a good outline for the book.\n\nI am interested in whether this is a good outline for a PostgreSQL book,\nhow our existing documentation matches this outline, where our existing\ndocumentation can be managed into a published book, etc.\n\nAny comments would be welcome.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n...................................................................\n\nThe attached document is in both web page and text formats.\nView the one which looks best. Also in PDF format.\n\n\n\n\nPostgreSQL Book Proposal\n\n\n\n\n\n\n\n\n\n\n\nPostgreSQL Book Proposal\nBruce Momjian\n\n\n\n1.\nIntroduction\n2.\nInstallation\n\n\n(a)\nGetting POSTGRESQL\n(b)\nCompiling\n(c)\nInitialization\n(d)\nStarting the server\n(e)\nCreating a database\n(f)\nIssuing database commands \n3.\nIntroduction to SQL\n\n\n\n(a)\nWhy a database?\n(b)\nCreating tables \n(c)\nAdding data with INSERT\n(d)\nViewing data with SELECT\n(e)\nRemoving data with DELETE\n(f)\nModifying data with UPDATE\n(g)\nRestriction with WHERE\n(h)\nSorting data with ORDER BY\n(i)\nUsage of NULL values\n4.\nAdvanced SQL Commands\n\n\n\n(a)\nInserting data from a SELECT\n(b)\nAggregates: COUNT, SUM, etc.\n(c)\nGROUP BY with aggregates\n(d)\nHAVING with aggregates\n(e)\nJoining tables\n(f)\nUsing table aliases\n(g)\nUNION clause\n(h)\nSubqueries\n(i)\nTransactions\n(j)\nCursors\n(k)\nIndexing\n(l)\nTable defaults\n(m)\nPrimary/Foreign keys\n(n)\nAND/OR usage \n(o)\nLIKE clause usage\n(p)\nTemporary tables\n(q)\nImporting data\n5.\nPOSTGRESQL'S Unique Features\n\n\n\n(a)\nObject ID'S (OID)\n(b)\nMulti-version Concurrency Control (MVCC)\n(c)\nLocking and Deadlocks\n(d)\nVacuum\n(e)\nViews\n(f)\nRules\n(g)\nSequences\n(h)\nTriggers\n(i)\nLarge Objects(BLOBS)\n(j)\nAdding User-defined Functions\n(k)\nAdding User-defined Operators\n(l)\nAdding User-defined Types\n(m)\nExotic Preinstalled Types\n(n)\nArrays\n(o)\nInheritance\n6.\nInterfacing to the POSTGRESQL Database\n\n\n\n(a)\nC Language API\n(b)\nEmbedded C\n(c)\nC++\n(d)\nJAVA\n(e)\nODBC\n(f)\nPERL\n(g)\nTCL/TK\n(h)\nPYTHON\n(i)\nWeb access (PHP)\n(j)\nServer-side programming (PLPGSQL and SPI)\n7.\nPOSTGRESQL Adminstration\n\n\n\n(a)\nCreating users and databases\n(b)\nBackup and restore\n(c)\nPerformance tuning\n(d)\nTroubleshooting\n(e)\nCustomization options\n(f)\nSetting access permissions\n8.\nAdditional Resources\n\n\n\n(a)\nFrequently Asked Questions (FAQ'S)\n(b)\nMailing list support\n(c)\nSupplied documentation\n(d)\nCommercial support\n(e)\nModifying the source code\n\n\n\n\n\n\n PostgreSQL Book Proposal\n \n Bruce Momjian\n \n 1.\n Introduction\n 2.\n Installation\n (a)\n Getting POSTGRESQL\n (b)\n Compiling\n (c)\n Initialization\n (d)\n Starting the server\n (e)\n Creating a database\n (f)\n Issuing database commands\n 3.\n Introduction to SQL\n (a)\n Why a database?\n (b)\n Creating tables\n (c)\n Adding data with INSERT\n (d)\n Viewing data with SELECT\n (e)\n Removing data with DELETE\n (f)\n Modifying data with UPDATE\n (g)\n Restriction with WHERE\n (h)\n Sorting data with ORDER BY\n (i)\n Usage of NULL values\n 4.\n Advanced SQL Commands\n (a)\n Inserting data from a SELECT\n (b)\n Aggregates: COUNT, SUM, etc.\n (c)\n GROUP BY with 
aggregates\n (d)\n HAVING with aggregates\n (e)\n Joining tables\n (f)\n Using table aliases\n (g)\n UNION clause\n (h)\n Subqueries\n (i)\n Transactions\n (j)\n Cursors\n (k)\n Indexing\n (l)\n Table defaults\n (m)\n Primary/Foreign keys\n (n)\n AND/OR usage\n (o)\n LIKE clause usage\n (p)\n Temporary tables\n (q)\n Importing data\n 5.\n POSTGRESQL'S Unique Features\n (a)\n Object ID'S (OID)\n (b)\n Multi-version Concurrency Control (MVCC)\n (c)\n Locking and Deadlocks\n (d)\n Vacuum\n (e)\n Views\n (f)\n Rules\n (g)\n Sequences\n (h)\n Triggers\n (i)\n Large Objects(BLOBS)\n (j)\n Adding User-defined Functions\n (k)\n Adding User-defined Operators\n (l)\n Adding User-defined Types\n (m)\n Exotic Preinstalled Types\n (n)\n Arrays\n (o)\n Inheritance\n 6.\n Interfacing to the POSTGRESQL Database\n (a)\n C Language API\n (b)\n Embedded C\n (c)\n C++\n (d)\n JAVA\n (e)\n ODBC\n (f)\n PERL\n (g)\n TCL/TK\n (h)\n PYTHON\n (i)\n Web access (PHP)\n (j)\n Server-side programming (PLPGSQL and SPI)\n 7.\n POSTGRESQL Adminstration\n (a)\n Creating users and databases\n (b)\n Backup and restore\n (c)\n Performance tuning\n (d)\n Troubleshooting\n (e)\n Customization options\n (f)\n Setting access permissions\n 8.\n Additional Resources\n (a)\n Frequently Asked Questions (FAQ'S)\n (b)\n Mailing list support\n (c)\n Supplied documentation\n (d)\n Commercial support\n (e)\n Modifying the source code", "msg_date": "Tue, 12 Oct 1999 13:09:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL book proposal" } ]
[ { "msg_contents": "Here is the new PDF version of the outline. The previous version used\nthe Palatino font, which didn't convert to PDF for some reason.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026", "msg_date": "Tue, 12 Oct 1999 13:11:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Book PDF file was corrupt" } ]
[ { "msg_contents": "Is it worth to install cvsweb\n( http://stud.fh-heilbronn.de/~zeller/cgi/cvsweb.cgi/ )\n\nHere is some info:\nhis is a WWW interface to a sample CVS tree to demonstrate the features of this\nimproved cvsweb. You can browse the file hierarchy by picking directories (which\nhave slashes after them, e.g. src/). If you pick a file, you will see the revision history\nfor that file. Selecting a revision number will download that revision of the file. There\nis a link at each revision to display (colored) diffs between that revision and the\nprevious one or to annotate a revision. A form at the bottom of the page that allows\nyou to display diffs between arbitrary revisions. \n\n\nRegards,\n\n\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 12 Oct 1999 22:30:40 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "cvsweb" }, { "msg_contents": "\nthere was a time when we did...wasn't there? *raised eyebrow8 I swore I\nremember it, but I just went looking around the directory structure and\nonly found:\n\nhttp://www.postgresql.org/cgi/cvswebtest.cgi\n\n\nOn Tue, 12 Oct 1999, Oleg Bartunov wrote:\n\n> Is it worth to install cvsweb\n> ( http://stud.fh-heilbronn.de/~zeller/cgi/cvsweb.cgi/ )\n> \n> Here is some info:\n> his is a WWW interface to a sample CVS tree to demonstrate the features of this\n> improved cvsweb. You can browse the file hierarchy by picking directories (which\n> have slashes after them, e.g. src/). If you pick a file, you will see the revision history\n> for that file. Selecting a revision number will download that revision of the file. There\n> is a link at each revision to display (colored) diffs between that revision and the\n> previous one or to annotate a revision. A form at the bottom of the page that allows\n> you to display diffs between arbitrary revisions. \n> \n> \n> Regards,\n> \n> \tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> ************\n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 12 Oct 1999 15:52:26 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cvsweb" }, { "msg_contents": "Oleg Bartunov wrote:\n> \n> Is it worth to install cvsweb\n> ( http://stud.fh-heilbronn.de/~zeller/cgi/cvsweb.cgi/ )\n\nMust have been...\n\nhttp://www.postgresql.org/cgi/cvswebtest.cgi/pgsql\n\n--\nLamar Owen\nWGCR Internet Radio \nPisgah Forest, North Carolina\n1 Peter 4:11\n", "msg_date": "Tue, 12 Oct 1999 14:53:05 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cvsweb" }, { "msg_contents": "On Tue, 12 Oct 1999, The Hermit Hacker wrote:\n\n> Date: Tue, 12 Oct 1999 15:52:26 -0300 (ADT)\n> From: The Hermit Hacker <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] cvsweb\n> \n> \n> there was a time when we did...wasn't there? *raised eyebrow8 I swore I\n> remember it, but I just went looking around the directory structure and\n> only found:\n> \n> http://www.postgresql.org/cgi/cvswebtest.cgi\n> \n\nYes, I've seen it. The version I wrote about is a little better (imo)\n\n\n\tOleg\n\n> \n> On Tue, 12 Oct 1999, Oleg Bartunov wrote:\n> \n> > Is it worth to install cvsweb\n> > ( http://stud.fh-heilbronn.de/~zeller/cgi/cvsweb.cgi/ )\n> > \n> > Here is some info:\n> > his is a WWW interface to a sample CVS tree to demonstrate the features of this\n> > improved cvsweb. You can browse the file hierarchy by picking directories (which\n> > have slashes after them, e.g. src/). If you pick a file, you will see the revision history\n> > for that file. Selecting a revision number will download that revision of the file. There\n> > is a link at each revision to display (colored) diffs between that revision and the\n> > previous one or to annotate a revision. A form at the bottom of the page that allows\n> > you to display diffs between arbitrary revisions. \n> > \n> > \n> > Regards,\n> > \n> > \tOleg\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: [email protected], http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> > \n> > \n> > ************\n> > \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 12 Oct 1999 23:13:13 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] cvsweb" }, { "msg_contents": "\nOn 12-Oct-99 Oleg Bartunov wrote:\n> On Tue, 12 Oct 1999, The Hermit Hacker wrote:\n> \n>> Date: Tue, 12 Oct 1999 15:52:26 -0300 (ADT)\n>> From: The Hermit Hacker <[email protected]>\n>> To: Oleg Bartunov <[email protected]>\n>> Cc: [email protected]\n>> Subject: Re: [HACKERS] cvsweb\n>> \n>> \n>> there was a time when we did...wasn't there? 
*raised eyebrow8 I swore I\n>> remember it, but I just went looking around the directory structure and\n>> only found:\n>> \n>> http://www.postgresql.org/cgi/cvswebtest.cgi\n>> \n> \n> Yes, I've seen it. The version I wrote about is a little better (imo)\n> \n> \n> Oleg\n> \n\nDo you mean this?\n\nhttp://www.postgresql.org/cgi/cvswebtest.cgi/pgsql\n\nIt's in Info Central. Besides the gray background, it works.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Tue, 12 Oct 1999 16:13:09 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cvsweb" } ]
[ { "msg_contents": "As I begin to get a feel for the changes in PostgreSQL 7, I begin to\nthink about the changes I will have to make to the RedHat RPMs. Since\nthe 7 code is nowhere near a packageable condition as yet, I have\nrestricted my thinking to some general issues.\n\nIn the latest RPM's for RedHat for version 6.5.2, a couple of scripts\nand some documentation is packaged that is not part of the main\ntarball. Of the\nadditions, I find that there are pieces that could be useful as part of\nthe main tarball.\n\nThe first and foremost piece that I would like to see included (in a\nmodified form) is the migration script set that Oliver Elphick wrote and\nI modified. A further modification could be made to let these scripts\nwork for the general case -- environment variables, et al, to direct the\nmigration work.\n\nThere are two scripts of interest:\n\n1.) pg_dumpall_new: A modified pg_dumpall that uses a different port\nnumber and linking loader directives to allow execution of backup copies\nof postmaster, postgres, libpq.so, and psql to dump an old format\ndatabase after the new version has been installed.\n\n2.) postgresql-dump: A script that performs the work of migrating --\nwhether it be a dump-restore, or whatnot. This is a quite comprehensive\nscript.\n\nWork would have to be done to these two scripts for the general case --\nwould such work be worth the effort for those who don't run the RPM or\nDebian versions?? The work to be done would comprise: writing a script\n(as part of make install?) to pull the backup executables, and modifying\nthe two scripts above to work with postgresql installed WHEREVER. The\nscripts could be integrated with a configure directive, maybe?\n\nI realize that these scripts solve a problem that is not widespread\n(wide relative to number of platforms, not number of users), but someone\nother than a Debian or RedHat user might find them useful.\n\nAlso packaged in the RPM set is a README for the rpm set, an initscript\nfor the RedHat SysV-style init system, and the PostgreSQL-HOWTO. Of\nthese three, for the general case, the PostgreSQL-HOWTO is by far the\nmost useful -- in fact, a good portion of Bruce's outline is contained\nwithin the HOWTO. The PostgreSQL HOWTO can be found on the Linux\nDocumentation Project's site (metalab.unc.edu/linux).\n\nWith the popularity of the RPM version (RedHat and Mandrake ship\nsubstantially the same RPM set, with SuSE and Caldera shipping others),\nwould it be prudent to include some RPM-specific (and Debian specific)\nnotes in the main documentation, and where?? With all the different\nlayouts between the various packages, is it desireable to maybe make a\n'packages FAQ'??\n\nMy next substantial challenge is going to be making the RedHat-specific\nRPM set function on a non-RedHat RPM-based distribution, such as the\naforementioned Caldera and SuSE. A generic RPM set will then be the\nresult -- and then any RPM-based distribution can use the enhancements\nand will be generally compatible. I got e-mail last week from a SuSE\nuser who wanted to use the 6.5.2-1 RPM set that I put together -- and,\nquite frankly, I was clueless to SuSE, which was embarassing. 
The\nsituation with Caldera is worse -- they packaged 6.2.1 a long time ago\n(COL 1.3), and the latest version of COL doesn't even have PostgreSQL.\n\nI am open to suggestions.\n\nIn any case, have a good one...\n\n--\nLamar Owen\nWGCR Internet Radio\nPisgah Forest, North Carolina\n1 Peter 4:11\n", "msg_date": "Tue, 12 Oct 1999 14:46:25 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Packaging questions and ideas for 7" } ]
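The migration approach sketched in the thread above (keep backup copies of the old postmaster, postgres, libpq and psql so an old-format cluster can still be dumped after the new version is installed, with the old server brought up on a different port) can be illustrated roughly as follows. This is a minimal sketch of the idea only, not the actual pg_dumpall_new/postgresql-dump scripts; the paths, port number and file names are invented for the example.

    # dump the old-format cluster using the saved binaries on a spare port
    LD_LIBRARY_PATH=/usr/lib/pgsql/backup \
        /usr/lib/pgsql/backup/postmaster -D /var/lib/pgsql/data.old -p 5431 &
    PGPORT=5431 /usr/lib/pgsql/backup/pg_dumpall > /tmp/old_cluster.sql

    # once the new server is running on its normal port, reload the dump
    psql -p 5432 -d template1 -f /tmp/old_cluster.sql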
[ { "msg_contents": "Just to mention - there is a problem in current CVS:\n\nom:~$ createlang plpgsql template1\nERROR: Unable to locate type oid 0 in catalog\nCannot install language\n\n\tOleg\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 12 Oct 1999 23:15:11 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "createlang plpgsql template1 failed in current CVS" } ]
[ { "msg_contents": "it would be good idea to enable \"empty automatic insert\" like this:\n\nmydb => create table pgx_replid ( repltime time DEFAULT current_time );\nmydb => insert into pgx_replid;\n\n====\nthe above should insert the value of current_time into database pgx_replid, however it is impossible yet (ver.6.3.x / 6.5.2). Now, everytime needed, one should type more complex lines like:\n\nmydb => insert into pgx_replid values ( current_time );\n\n====\nin case the table was created without any DEFAULT the empty insert should insert NULL or NULLs into all (or specified) fields of table as the general behavior.\n\n--\ndan peder\[email protected]\n\n", "msg_date": "Tue, 12 Oct 1999 20:31:24 +0100", "msg_from": "=?iso-8859-2?Q?Daniel_P=E9der?= <[email protected]>", "msg_from_op": true, "msg_subject": "empty/automatic insert availability" }, { "msg_contents": "> it would be good idea to enable \"empty automatic insert\" like this:\n> mydb => create table pgx_replid ( repltime time DEFAULT current_time );\n> mydb => insert into pgx_replid;\n> the above should insert the value of current_time into database pgx_replid, however it is \n> impossible yet (ver.6.3.x / 6.5.2).\n\nNot true :)\n\nWe support the SQL92-standard syntax:\n\n mydb => insert into pgx_replid default values;\n\nHave fun with it...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 15 Oct 1999 04:55:54 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] empty/automatic insert availability" } ]
[ { "msg_contents": "> > > Marc, let's not get Jan upset. :-)\n> > \n> > Bruce, to upset me Marc needs alot of more efford!\n> > \n> > And why not? If you UPDATE you'll see that we can get rid of\n> > most of the entire body. Remember - a picture says more than\n> > a thousand words :-)\n> \n> Wow, that is cool. No need for text at bottom, except to list names and\n> e-mail addresses.\n\nI see we don't even need the e-mail address, because you got that in\nthere too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 15:53:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight" }, { "msg_contents": "Bruce Momjian wrote:\n> > Wow, that is cool. No need for text at bottom, except to list names and\n> > e-mail addresses.\n> \n> I see we don't even need the e-mail address, because you got that in\n> there too.\n\n\nIt is way cool.\n\nUntil you turn off Javascript for security reasons. Or you surf with\nKonqueror (the KDE file manager)....\n\nOr if you surf with images turned off because, on your Toshiba 225CDS\nnotebook, Netscape farkles the video driver with regularity IF AND ONLY\nIF images are loaded.... :-)\n\nThe globe is stunning -- but the text below also has a place, IMHO.\n\n--\nLamar Owen\nWGCR Internet Radio\nPisgah Forest, North Carolina\n1 Peter 4:11\n", "msg_date": "Tue, 12 Oct 1999 16:16:23 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight" }, { "msg_contents": "On Tue, 12 Oct 1999, Bruce Momjian wrote:\n> I see we don't even need the e-mail address, because you got that in\n> there too.\n\nAll that is missing are mug-shots.\n\n-- \n| Matthew N. Dodd | '78 Datsun 280Z | '75 Volvo 164E | FreeBSD/NetBSD |\n| [email protected] | 2 x '84 Volvo 245DL | ix86,sparc,pmax |\n| http://www.jurai.net/~winter | This Space For Rent | ISO8802.5 4ever |\n\n", "msg_date": "Wed, 13 Oct 1999 00:10:34 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> The globe is stunning -- but the text below also has a place, IMHO.\n\nYes, we must keep the text info too. The globe is beautiful, but\nnot everyone will want to/be able to look at it.\n\nAlso, we seem to be missing some names/pins. Hiroshi's name is\nconspicuous by its absence, and if it weren't so late at night\nI'd probably think of some more...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Oct 1999 00:51:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight" }, { "msg_contents": "> Lamar Owen <[email protected]> writes:\n> > The globe is stunning -- but the text below also has a place, IMHO.\n> \n> Yes, we must keep the text info too. The globe is beautiful, but\n> not everyone will want to/be able to look at it.\n> \n> Also, we seem to be missing some names/pins. 
Hiroshi's name is\n> conspicuous by its absence, and if it weren't so late at night\n> I'd probably think of some more...\n\nVince is missing too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 06:58:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight" }, { "msg_contents": "Matthew N. Dodd wrote:\n>\n> On Tue, 12 Oct 1999, Bruce Momjian wrote:\n> > I see we don't even need the e-mail address, because you got that in\n> > there too.\n>\n> All that is missing are mug-shots.\n\n Yepp, you're right. Do a reload and look at my pin (on the\n final page I'll be shaved a little better, it's just a quick\n grab from my camera taken 20 minutes ago).\n\n I really like that very much, now this is MY entry and not\n just some information about me. Thank you very much, Matthew!\n\n It's an 80x80 JPG and only 1931 bytes. A good size to be\n recognizable and not overloading the popup.\n\n So would anyone please send me some picture (or just a note\n that he doesn't want one). If it's not actually 80x80 ready\n to stick in, please send something at least 3x the size so I\n can crop and downscale it without much quality loss.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 18 Oct 1999 21:25:47 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you" }, { "msg_contents": "[Cc trimmed.]\n\nOn Mon, 18 Oct 1999, Jan Wieck wrote:\n> Yepp, you're right. Do a reload and look at my pin (on the\n> final page I'll be shaved a little better, it's just a quick\n> grab from my camera taken 20 minutes ago).\n\nVery nice! Its good to be able to place names with faces.\n\n> I really like that very much, now this is MY entry and not\n> just some information about me. Thank you very much, Matthew!\n> \n> It's an 80x80 JPG and only 1931 bytes. A good size to be\n> recognizable and not overloading the popup.\n\nIndeed.\n\n> So would anyone please send me some picture (or just a note\n> that he doesn't want one). If it's not actually 80x80 ready\n> to stick in, please send something at least 3x the size so I\n> can crop and downscale it without much quality loss.\n\nI think people with no pictures should have a default picture... Maybe\n'Barney' or something evil like that to encourage them to send in a pic.\n:)\n\nGood job though. Any chance you'll clean the code up and release the\npackage in a form that others can use? I think its a very nice layout and\nthat other open source projects might like to make use of it.\n\nIf its backended into PostgreSQL it might make it easier for users to\nmanage their own entries. Picture submissions could be handled that way\nwith no intervention.\n\nAnyhow, I'm rambling.\n\n-- \n| Matthew N. 
Dodd | '78 Datsun 280Z | '75 Volvo 164E | FreeBSD/NetBSD |\n| [email protected] | 2 x '84 Volvo 245DL | ix86,sparc,pmax |\n| http://www.jurai.net/~winter | This Space For Rent | ISO8802.5 4ever |\n\n", "msg_date": "Mon, 18 Oct 1999 16:36:43 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you" }, { "msg_contents": "I think that we have by far the coolest developers page of any\nopen-source'ish project out there. What I miss is something like a line\nwhat people do in their Real Life. From what I gather most people around\nhere are probably one of programmer, system admin, or in academia. But it\nwould put a refreshing realistic context to the whole thing.\n\n*** puts on asbestos suit ***\n\nAlso I think putting the photos below the globe (at least in addition)\nmight be better because for people digging around in Europe or the North\nAmerican east coast it obscures too much and it's also hard to get an\noverview.\n\n\t-Peter\n\n\nOn Mon, 18 Oct 1999, Jan Wieck wrote:\n\n> Yepp, you're right. Do a reload and look at my pin (on the\n> final page I'll be shaved a little better, it's just a quick\n> grab from my camera taken 20 minutes ago).\n> \n> I really like that very much, now this is MY entry and not\n> just some information about me. Thank you very much, Matthew!\n> \n> It's an 80x80 JPG and only 1931 bytes. A good size to be\n> recognizable and not overloading the popup.\n> \n> So would anyone please send me some picture (or just a note\n> that he doesn't want one). If it's not actually 80x80 ready\n> to stick in, please send something at least 3x the size so I\n> can crop and downscale it without much quality loss.\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 19 Oct 1999 12:53:41 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New developer globe" }, { "msg_contents": "Peter Eisentraut wrote:\n\n> I think that we have by far the coolest developers page of any\n> open-source'ish project out there. What I miss is something like a line\n> what people do in their Real Life. From what I gather most people around\n> here are probably one of programmer, system admin, or in academia. But it\n> would put a refreshing realistic context to the whole thing.\n\n Exactly that is something NOT to put on this page. Don't make\n the job of headhunters too easy!\n\n>\n> *** puts on asbestos suit ***\n\n Looser, that's not bullet proof :-)\n\n>\n> Also I think putting the photos below the globe (at least in addition)\n> might be better because for people digging around in Europe or the North\n> American east coast it obscures too much and it's also hard to get an\n> overview.\n\n I could turn the text in the pages body into a table and add\n the images there too. But I absolutely like them in the popup\n and will let them in.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 19 Oct 1999 13:28:11 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New developer globe" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n>> Also I think putting the photos below the globe (at least in addition)\n>> might be better because for people digging around in Europe or the North\n>> American east coast it obscures too much and it's also hard to get an\n>> overview.\n\n> I could turn the text in the pages body into a table and add\n> the images there too. But I absolutely like them in the popup\n> and will let them in.\n\nWhen I was looking at the page last night, I could *not* get Netscape\nto show me the images in the popups at all; I just got the \"unloaded\nimage\" icon. This probably had something to do with the fact that\nI normally browse with autoload images off, and had come to the page\nin that state. There's no way to click on an image that's inside a\npopup to get it to load :-(. But even after I turned on autoload\nand reloaded the page, no popup images.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Oct 1999 10:13:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New developer globe " }, { "msg_contents": ">\n> [email protected] (Jan Wieck) writes:\n> >> Also I think putting the photos below the globe (at least in addition)\n> >> might be better because for people digging around in Europe or the North\n> >> American east coast it obscures too much and it's also hard to get an\n> >> overview.\n>\n> > I could turn the text in the pages body into a table and add\n> > the images there too. But I absolutely like them in the popup\n> > and will let them in.\n>\n> When I was looking at the page last night, I could *not* get Netscape\n> to show me the images in the popups at all; I just got the \"unloaded\n> image\" icon. This probably had something to do with the fact that\n> I normally browse with autoload images off, and had come to the page\n> in that state. There's no way to click on an image that's inside a\n> popup to get it to load :-(. But even after I turned on autoload\n> and reloaded the page, no popup images.\n\n Should be better now. I've turned the body into the mentioned\n table. IMHO this requires that we get images for ALL\n developers, since the current mixing of items with/without is\n ugly.\n\n The images referenced in the body are the same as is the\n popup. Thus, when you've loaded them (maybe with the general\n Images button) they should popup.\n\n Works at least with Netscape4.6 Linux.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 19 Oct 1999 16:23:38 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New developer globe" }, { "msg_contents": "Jan Wieck wrote:\n> \n> Peter Eisentraut wrote:\n> \n> > I think that we have by far the coolest developers page of any\n> > open-source'ish project out there. What I miss is something like a line\n> > what people do in their Real Life. 
From what I gather most people around\n> > here are probably one of programmer, system admin, or in academia. But it\n> > would put a refreshing realistic context to the whole thing.\n> \n> Exactly that is something NOT to put on this page. Don't make\n> the job of headhunters too easy!\n\nI don't mind people knowing that in 'real life' I'm a broadcast\nengineer/webmaster/network admin/preacher....\n\n> > *** puts on asbestos suit ***\n> \n> Looser, that's not bullet proof :-)\n\nNomex/Kevlar is.... And fire-resistant too.... ;-)\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 19 Oct 1999 11:52:15 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New developer globe" }, { "msg_contents": "On Tue, Oct 19, 1999 at 01:28:11PM +0200, Jan Wieck wrote:\n> Exactly that is something NOT to put on this page. Don't make\n> the job of headhunters too easy!\n\nWhy not? Personally I have no problem with a headhunter offering me a good\njob. \n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Tue, 19 Oct 1999 20:29:46 +0200", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New developer globe" }, { "msg_contents": "Jan Wieck wrote:\n> \n> > Also I think putting the photos below the globe (at least in addition)\n> > might be better because for people digging around in Europe or the North\n> > American east coast it obscures too much and it's also hard to get an\n> > overview.\n> \n> I could turn the text in the pages body into a table and add\n> the images there too. But I absolutely like them in the popup\n> and will let them in.\n\nMe too.\n\nVadim\n", "msg_date": "Wed, 20 Oct 1999 10:34:02 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New developer globe" } ]
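For anyone sending in a photo for the pins discussed above, any image tool can produce the 80x80 version Jan asks for; one illustrative possibility (ImageMagick, purely an example and not something the page requires) is:

    # scale a larger photo down to fit in an 80x80 pin image
    convert photo.jpg -resize 80x80 photo-80x80.jpg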
[ { "msg_contents": "\nWhile catching up with my email, I noticed this on the general list. Is\nthere any news/updates on when SP will be available.\n\nIf were going for a 7.0 release after 6.5.3, it would be nice (and I'm\nthinking of JDBC here) of having this. It would allow me to remove the\nkludge that's always been there for PrepareStatement, where it emulates\nthis by filling in the gaps on the client.\n\nPeter\n\n--\n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n---------- Forwarded message ----------\nDate: Sun, 10 Oct 1999 13:44:00 -0700\nFrom: Yin-So Chen <[email protected]>\nTo: [email protected]\nSubject: Re: [GENERAL] stored procedure revisited\n\namy cheng wrote:\n> \n> forgive my ignorance. why \"multi-resultset, multi-level transaction\" SP is\n> so important? no work-around? I rememeber there were some discussion on\n> multiple-return-value-function in the past. My impression is that they are\n> not that crucial and usually can\n> find rather simple work-arounds.\n> \n\nSP is important for a lot of reasons. First it allows faster network\ntransmission because you don't have to send the query over and over\nagain, second it allows for faster execution because the server doesn't\nneed to reparse the query every time, third it allows for conceptual\nabstraction so the queries can be moved into the database layer, etc... \n\"multi-resultset, multi-level transaction\" is just an indication of what\nother database can do with SP's. All I want to know is if there is SP\nfor postgresql, or _better_than_SP_ alternatives.\n\nWork-arounds are, exactly that, work-arounds. They are something that\nwill work _for_now_, but not the best solution. I ask the question not\nbecause I don't know how to live without SP, but because I want to see\nwhat the mentality is behind the whole thing - is there something\nintrinsically wrong with having SP, or is there some better stuffs than\nSP out there, etc. What makes a piece of software great? When its\ndevelopers do not settle for work-arounds.\n\nMy questions still stand. Please can someone fill in on the status with\nSP, thanks.\n\nRegards,\n\nyin-so chen\n\n************\n\n", "msg_date": "Tue, 12 Oct 1999 21:07:15 +0100 (GMT)", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] stored procedure revisited (fwd)" } ]
[ { "msg_contents": "Man, talk about serendipity! Here's the latest on cvsweb, straight from\nthe redhat-announce list!\n\n--\nLamar Owen\nWGCR Internet Radio\nPisgah Forest, North Carolina\n1 Peter 4:11\n--------------------------------------------\n\nPeter Hanecak wrote:\n> \n> hello,\n> \n> i made cvsweb-1.73-1.noarch RPMs available at:\n> \n> http://www.megaloman.com/~hany/RPM/cvsweb.html\n> \n> $ rpm -qi cvsweb\n> Name : cvsweb Relocations: (not relocateable)\n> Version : 1.73 Vendor: Mega & Loman (http://www.megaloman.com/)\n> Release : 1 Build Date: Tue Oct 12 21:39:55 1999\n> Install date: Tue Oct 12 21:40:03 1999 Build Host: megaloman.megaloman.sk\n> Group : Development/Tools Source RPM: cvsweb-1.73-1.src.rpm\n> Size : 101941 License: BSD type\n> Packager : Peter Hanecak <[email protected]>\n> URL : http://stud.fh-heilbronn.de/~zeller/cgi/cvsweb.cgi/\n> Summary : visual (www) interface to explore a cvs repository\n> Description :\n> cvsweb is a visual (www) interface to explore a cvs repository. This is an\n> enhanced cvsweb developed by Henner Zeller. Enhancements include\n> recognition\n> and display of popular mime-types, visual, color-coded, side by side diffs\n> of changes and the ability sort the file display and to hide old files\n> from view. One living example of the enhanced cvsweb is the KDE cvsweb\n> \n> cvsweb requires the server to have cvs and a cvs repository worth\n> exploring.\n> \n> have fun\n> \n> hany\n> \n> ===================================================================\n> Peter Hanecak - production manager\n> Mega & Loman, Stare grunty 52, 842 44 Bratislava, Slovakia\n> tel., fax: +421-7-654 211 52\n> [email protected], http://www.megaloman.com\n> GPG pub.key: http://www.megaloman.com/gpg/hanecak-megaloman.txt\n> ===================================================================\n> \n> --\n> To unsubscribe:\n> mail -s unsubscribe [email protected] < /dev/null\n", "msg_date": "Tue, 12 Oct 1999 16:21:15 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: cvsweb-1.73-1.noarch]" } ]
[ { "msg_contents": "The copy on cvs looks ok, although should I update it to say 6.5.3, or as\n6.5.3 is simply to include pgaccess, leave it alone?\n\nI'd go for the latter.\n\nPeter\n\n--\n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Tue, 12 Oct 1999 21:56:29 +0100 (GMT)", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "JDBC Driver" }, { "msg_contents": "> The copy on cvs looks ok, although should I update it to say 6.5.3, or as\n> 6.5.3 is simply to include pgaccess, leave it alone?\n> \n> I'd go for the latter.\n\nI can update files as part of the release, but I don't see anywhere in\nthe jdbc directory that says 6.5.x.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 17:11:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] JDBC Driver" } ]
[ { "msg_contents": "OK, new outline, updated to show subsection detail, with suggested\nchanges made.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n...................................................................\n\nThe attached document is in both web page and text formats.\nView the one which looks best.\n\n\n\n\nPostgreSQL Book Proposal\n\n\n\n\n\n\n\n\n\n\n\nPostgreSQL Book Proposal\nBruce Momjian\n\n\nPreface\n\n\n\n1.\nIntroduction\n\n\n(a)\nHistory of POSTGRESQL\n\n\ni.\nUNIVERSITY OF CALIFORNIA AT BERKELEY\nii.\nMICHAEL STONEBRAKER\niii.\nJOLLY CHEN and ANDREW YU\niv.\nPOSTGRESQL GLOBAL DEVELOPMENT TEAM\n(b)\nOpen source software\n\n\n\ni.\ndevelopment methods\nii.\npeer review\niii.\nrelease process\niv.\nproblem reporting\nv.\nsupport\n(c)\nWhen to use a database\n\n\n\ni.\nlarge volume\nii.\nrapid retrieval\niii.\nfrequent modification\niv.\nreporting\n2.\nIssuing database commands\n\n\n\n(a)\nStarting a database session\n\n\ni.\nchoosing an access method\nii.\nchoosing a database\niii.\nstarting a session\n(b)\nControlling a session\n\n\n\ni.\ntyping in the query buffer\nii.\ndisplaying the query buffer\niii.\nediting the query buffer\niv.\nerasing the query buffer\n(c)\nSendinging queries\n(d)\nGetting help\n(e)\nRequesting server information\n3.\nIntroduction to SQL\n\n\n\n(a)\nRelational Databases\n\n\ni.\ntables, rows, and columns\nii.\ncolumn types\niii.\ncolumn selection\niv.\nrow restriction\n(b)\nCreating tables\n\n\n\ni.\nnaming columns\nii.\ncolumn types\n(c)\nAdding data with INSERT\n\n\ni.\ntarget columns\nii.\ncolumn types\niii.\nmissing values\n(d)\nViewing data with SELECT\n\n\ni.\ntarget columns\nii.\nFROM clause\n(e)\nRemoving data with DELETE\n(f)\nModifying data with UPDATE\n(g)\nRestricting with WHERE\n(h)\nSorting data with ORDER BY\n(i)\nUsing NULL values\n4.\nAdvanced SQL Commands\n\n\n\n(a)\nInserting data using SELECT\n(b)\nAggregates: COUNT, SUM, ...\n(c)\nGROUP BY with aggregates\n(d)\nHAVING with aggregates\n(e)\nJoining tables\n(f)\nUsing table aliases\n(g)\nUNION clause\n(h)\nUPDATE with FROM\n(i)\nSubqueries\n\n\ni.\nreturning a single row\nii.\nreturning multiple rows\niii.\ncorrelated Subqueries\n(j)\nTransactions\n\n\n\ni.\nBEGIN...END\nii.\nABORT transaction\n(k)\nCursors\n\n\n\ni.\nDECLARE\nii.\nFETCH\niii.\nCLOSE\n(l)\nIndexing\n\n\n\ni.\nusage\nii.\ntypes\niii.\ndefinition\niv.\nfunctional indexes\n(m)\nColumn defaults\n(n)\nReferential integrity\n\n\n\ni.\nprimary keys\nii.\nforeign keys\n(o)\nAND/OR usage\n(p)\nPattern matching\n\n\n\ni.\nLIKE clause\nii.\nregular expressions\n(q)\nTemporary tables\n(r)\nImporting data\n\n\n\ni.\nCOPY\nii.\nDELIMITERS\niii.\nBINARY\niv.\nfrontend COPY\n5.\nPOSTGRESQL'S Unique Features\n\n\n\n(a)\nObject ID'S (OID'S)\n\n\ni.\nunique row assignment\nii.\njoin usage\n(b)\nMulti-Version Concurrency Control\n\n\n\ni.\nwrite locks \nii.\nread locks\niii.\nconcurrency\niv.\nsolutions\n(c)\nLocking and deadlocks\n\n\n\ni.\nneed for locking\nii.\ndeadlocks\n(d)\nVacuum\n\n\n\ni.\nscheduling\nii.\nANALYZE\n(e)\nViews\n\n\n\ni.\ncreation\nii.\nlimitations\n(f)\nRules\n\n\n\ni.\ncreation\nii.\nlimitations\n(g)\nSequences\n\n\n\ni.\npurpose\nii.\ncreation\niii.\nmanagement\n(h)\nTriggers\n\n\n\ni.\npurpose\nii.\ncreation\n(i)\nLarge objects(BLOBS)\n\n\n\ni.\napplications\nii.\ncreation\niii.\nmanagement\n(j)\nAdding user-defined 
functions\n\n\n\ni.\npurpose\nii.\ncreation\niii.\nexamples\n(k)\nAdding user-defined operators\n\n\n\ni.\narithmetic processing\nii.\ncreation\n(l)\nAdding user-defined types\n\n\n\ni.\npurpose\nii.\ncreation\niii.\nindexing\n(m)\nExotic pre-installed types\n\n\n\ni.\ndate/time\nii.\ngeometric\niii.\ncharacter string\niv.\ninternet\nv.\ninternal\n(n)\nArrays\n\n\n\ni.\ncreation\nii.\naccess\n(o)\nInheritance\n\n\n\ni.\npurpose\nii.\ncreation\niii.\nexamples\n6.\nInterfacing to the POSTGRESQL Database\n\n\n\n(a)\nC Language API (LIBPQ)\n(b)\nEmbedded C (ECPG)\n(c)\nC++ (LIBPQ++)\n(d)\nJAVA (JDBC)\n(e)\nODBC\n(f)\nPERL (PGSQL_PERL5)\n(g)\nTCL/TK (LIBPGTCL)\n(h)\nPYTHON (PYGRESQL)\n(i)\nWeb access (PHP)\n(j)\nServer-side programming\n\n\ni.\nPLPGSQL \nii.\nSPI\n7.\nPOSTGRESQL Administration\n\n\n\n(a)\nCreating users and databases\n(b)\nBackup and restore\n(c)\nPerformance\n(d)\nTroubleshooting\n(e)\nCustomization\n(f)\nAccess configuration\n\n\ni.\nserver access\nii.\ndatabase access\niii.\ntable access\n(g)\nInternationalization \n\n\n\ni.\nnational character encodings\nii.\ndate formats\n8.\nAdditional Resources\n\n\n\n(a)\nFrequently Asked Questions (FAQ'S)\n(b)\nMailing list support\n(c)\nSupplied documentation\n(d)\nCommercial support\n(e)\nModifying the source code\n9.\nAppendix: Installation\n\n\n\n(a)\nGetting POSTGRESQL\n\n\ni.\nFTP\nii.\nweb\niii.\nCDROM\n(b)\nCompiling\n\n\n\ni.\ncompiler\nii.\nRPM\n(c)\nInitialization\n(d)\nStarting the server\n(e)\nCreating a database\n10.\nAnnotated Bibliography\n\n\n\n\n\n\n PostgreSQL Book Proposal\n \n Bruce Momjian\n \n Preface\n \n 1.\n Introduction\n (a)\n History of POSTGRESQL\n i.\n UNIVERSITY OF CALIFORNIA AT BERKELEY\n ii.\n MICHAEL STONEBRAKER\n iii.\n JOLLY CHEN and ANDREW YU\n iv.\n POSTGRESQL GLOBAL DEVELOPMENT TEAM\n (b)\n Open source software\n i.\n development methods\n ii.\n peer review\n iii.\n release process\n iv.\n problem reporting\n v.\n support\n (c)\n When to use a database\n i.\n large volume\n ii.\n rapid retrieval\n iii.\n frequent modification\n iv.\n reporting\n 2.\n Issuing database commands\n (a)\n Starting a database session\n i.\n choosing an access method\n ii.\n choosing a database\n iii.\n starting a session\n (b)\n Controlling a session\n i.\n typing in the query buffer\n ii.\n displaying the query buffer\n iii.\n editing the query buffer\n iv.\n erasing the query buffer\n (c)\n Sendinging queries\n (d)\n Getting help\n (e)\n Requesting server information\n 3.\n Introduction to SQL\n (a)\n Relational Databases\n i.\n tables, rows, and columns\n ii.\n column types\n iii.\n column selection\n iv.\n row restriction\n (b)\n Creating tables\n i.\n naming columns\n ii.\n column types\n (c)\n Adding data with INSERT\n i.\n target columns\n ii.\n column types\n iii.\n missing values\n (d)\n Viewing data with SELECT\n i.\n target columns\n ii.\n FROM clause\n (e)\n Removing data with DELETE\n (f)\n Modifying data with UPDATE\n (g)\n Restricting with WHERE\n (h)\n Sorting data with ORDER BY\n (i)\n Using NULL values\n 4.\n Advanced SQL Commands\n (a)\n Inserting data using SELECT\n (b)\n Aggregates: COUNT, SUM, ...\n (c)\n GROUP BY with aggregates\n (d)\n HAVING with aggregates\n (e)\n Joining tables\n (f)\n Using table aliases\n (g)\n UNION clause\n (h)\n UPDATE with FROM\n (i)\n Subqueries\n i.\n returning a single row\n ii.\n returning multiple rows\n iii.\n correlated Subqueries\n (j)\n Transactions\n i.\n BEGIN...END\n ii.\n ABORT transaction\n (k)\n Cursors\n i.\n DECLARE\n ii.\n FETCH\n iii.\n CLOSE\n (l)\n 
Indexing\n i.\n usage\n ii.\n types\n iii.\n definition\n iv.\n functional indexes\n (m)\n Column defaults\n (n)\n Referential integrity\n i.\n primary keys\n ii.\n foreign keys\n (o)\n AND/OR usage\n (p)\n Pattern matching\n i.\n LIKE clause\n ii.\n regular expressions\n (q)\n Temporary tables\n (r)\n Importing data\n i.\n COPY\n ii.\n DELIMITERS\n iii.\n BINARY\n iv.\n frontend COPY\n 5.\n POSTGRESQL'S Unique Features\n (a)\n Object ID'S (OID'S)\n i.\n unique row assignment\n ii.\n join usage\n (b)\n Multi-Version Concurrency Control\n i.\n write locks\n ii.\n read locks\n iii.\n concurrency\n iv.\n solutions\n (c)\n Locking and deadlocks\n i.\n need for locking\n ii.\n deadlocks\n (d)\n Vacuum\n i.\n scheduling\n ii.\n ANALYZE\n (e)\n Views\n i.\n creation\n ii.\n limitations\n (f)\n Rules\n i.\n creation\n ii.\n limitations\n (g)\n Sequences\n i.\n purpose\n ii.\n creation\n iii.\n management\n (h)\n Triggers\n i.\n purpose\n ii.\n creation\n (i)\n Large objects(BLOBS)\n i.\n applications\n ii.\n creation\n iii.\n management\n (j)\n Adding user-defined functions\n i.\n purpose\n ii.\n creation\n iii.\n examples\n (k)\n Adding user-defined operators\n i.\n arithmetic processing\n ii.\n creation\n (l)\n Adding user-defined types\n i.\n purpose\n ii.\n creation\n iii.\n indexing\n (m)\n Exotic pre-installed types\n i.\n date/time\n ii.\n geometric\n iii.\n character string\n iv.\n internet\n v.\n internal\n (n)\n Arrays\n i.\n creation\n ii.\n access\n (o)\n Inheritance\n i.\n purpose\n ii.\n creation\n iii.\n examples\n 6.\n Interfacing to the POSTGRESQL Database\n (a)\n C Language API (LIBPQ)\n (b)\n Embedded C (ECPG)\n (c)\n C++ (LIBPQ++)\n (d)\n JAVA (JDBC)\n (e)\n ODBC\n (f)\n PERL (PGSQL_PERL5)\n (g)\n TCL/TK (LIBPGTCL)\n (h)\n PYTHON (PYGRESQL)\n (i)\n Web access (PHP)\n (j)\n Server-side programming\n i.\n PLPGSQL\n ii.\n SPI\n 7.\n POSTGRESQL Administration\n (a)\n Creating users and databases\n (b)\n Backup and restore\n (c)\n Performance\n (d)\n Troubleshooting\n (e)\n Customization\n (f)\n Access configuration\n i.\n server access\n ii.\n database access\n iii.\n table access\n (g)\n Internationalization\n i.\n national character encodings\n ii.\n date formats\n 8.\n Additional Resources\n (a)\n Frequently Asked Questions (FAQ'S)\n (b)\n Mailing list support\n (c)\n Supplied documentation\n (d)\n Commercial support\n (e)\n Modifying the source code\n 9.\n Appendix: Installation\n (a)\n Getting POSTGRESQL\n i.\n FTP\n ii.\n web\n iii.\n CDROM\n (b)\n Compiling\n i.\n compiler\n ii.\n RPM\n (c)\n Initialization\n (d)\n Starting the server\n (e)\n Creating a database\n 10.\n Annotated Bibliography", "msg_date": "Tue, 12 Oct 1999 22:46:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Updated book outline" }, { "msg_contents": "I know this is probably the wrong group for this, but it's the only one\nI'm subscribed to -- where the heck in the build tree is the Win32 client\nstuff??? How do I build it? Or can I use that 6.4.something client in\nthe bindist on FTP against my spiffy new 6.5.1 Solaris postmaster?\n\nThis line of inquiry started out as an innocent attempt to play around,\nand here I am back in that learning curve morass. Sigh.\n\n\n", "msg_date": "Tue, 12 Oct 1999 23:15:56 -0500 (EST)", "msg_from": "\"J. Michael Roberts\" <[email protected]>", "msg_from_op": false, "msg_subject": "Win32 client..." 
}, { "msg_contents": "Add a subsection on chapter 1 with title:\n\n(d) PostgreSQL Known Limitation\n(i) Maximum record\n(ii) Maximum field\n(iii) Maximum file size\netc....\n\nRegards\nChairudin\n\n\n> Bruce Momjian wrote:\n> \n> OK, new outline, updated to show subsection detail, with suggested\n> changes made.\n> \n> \n> PostgreSQL Book Proposal\n> \n> Bruce Momjian\n> \n> Preface\n> \n> 1. Introduction\n> \n> (a) History of POSTGRESQL\n> \n> i. UNIVERSITY OF CALIFORNIA AT BERKELEY\n> ii. MICHAEL STONEBRAKER\n> iii.\n> JOLLY CHEN and ANDREW YU\n> iv. POSTGRESQL GLOBAL DEVELOPMENT TEAM\n> (b) Open source software\n> \n> i. development methods\n> ii. peer review\n> iii.\n> release process\n> iv. problem reporting\n> v. support\n> (c) When to use a database\n> \n> i. large volume\n> ii. rapid retrieval\n> iii.\n> frequent modification\n> iv. reporting\n", "msg_date": "Wed, 13 Oct 1999 17:01:42 +0700", "msg_from": "Chairudin Sentosa Harjo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated book outline" }, { "msg_contents": "> Add a subsection on chapter 1 with title:\n> \n> (d) PostgreSQL Known Limitation\n> (i) Maximum record\n> (ii) Maximum field\n> (iii) Maximum file size\n> etc....\n\nI will get those in there somewhere.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 07:34:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Updated book outline" } ]
[ { "msg_contents": "Right now any user on the local machine can log on as postgres to the\ntemplate1 database. I don't like that, so I wish to turn on password\nchecking. \n\nOK so I edit pg_hba.conf and put:\n\nlocal all password \nhost all 127.0.0.1 255.255.255.255 password \n\nThen I have problems logging in as ANY user. Couldn't figure out what the\ndefault password for the postgres user was. Only after some messing around\nI found that I could log on as the postgres user with the password \\N. Not\nobvious, at least to me.\n\nI only guessed it after looking at the pg_pwd file and noticing a \\N there.\nIs this where the passwords are stored? By the way should they be stored in\nthe clear and in a 666 permissions file? How about hashing them with some\nsalt?\n\nNow the next problem is: How do I change the postgres user password? \n\nI find the package's handling of passwords rather nonintuitive.\n\n1) There is no obvious way to specify the password for users when you\ncreate a user using the supplied shell script createuser. One has to resort\nto psql and stuff.\n2) Neither is there an obvious and easy way to change the user's password.\n3) You can specify a password for a user by using pg_passwd and stick it\ninto a separate password file, but then there really is no link between\ncreateuser and pg_passwd. \n\nI find the bundled scripts and their associated documentation make things\nvery nonintuitive when one switches from a blind trust postgres to an\nauthenticated postgres. \n\nCheerio,\n\nLink.\n\n\n\n\n", "msg_date": "Wed, 13 Oct 1999 14:55:09 +0800", "msg_from": "Lincoln Yeoh <[email protected]>", "msg_from_op": true, "msg_subject": "How do I activate and change the postgres user's password?" }, { "msg_contents": "Lincoln Yeoh wrote:\n >Right now any user on the local machine can log on as postgres to the\n >template1 database. I don't like that, so I wish to turn on password\n >checking. \n >\n >OK so I edit pg_hba.conf and put:\n >\n >local all password \n >host all 127.0.0.1 255.255.255.255 password \n >\n >Then I have problems logging in as ANY user. Couldn't figure out what the\n >default password for the postgres user was. Only after some messing around\n >I found that I could log on as the postgres user with the password \\N. Not\n >obvious, at least to me.\n >\n >I only guessed it after looking at the pg_pwd file and noticing a \\N there.\n >Is this where the passwords are stored? By the way should they be stored in\n >the clear and in a 666 permissions file? How about hashing them with some\n >salt?\n \nThe PGDATA directory should have permission rwx------, so that no one can\ndescend into it to look at pg_pwd; therefore the file's own permissions are\nunimportant.\n\n >Now the next problem is: How do I change the postgres user password? 
\n \nALTER USER will change passwords held in pg_shadow, including that of the\npostgres user, but will not, I think, change those set by pg_passwd.\n\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"And he shall judge among the nations, and shall rebuke\n many people; and they shall beat their swords into \n plowshares, and their spears into pruninghooks; nation\n shall not lift up sword against nation, neither shall \n they learn war any more.\" Isaiah 2:4 \n\n\n", "msg_date": "Wed, 13 Oct 1999 10:29:19 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] How do I activate and change the postgres user's\n\tpassword?" }, { "msg_contents": "On Oct 13, Lincoln Yeoh mentioned:\n\n> Then I have problems logging in as ANY user. Couldn't figure out what the\n> default password for the postgres user was. Only after some messing around\n> I found that I could log on as the postgres user with the password \\N. Not\n> obvious, at least to me.\n\nThere is a todo item for the postgres user to have a password by default.\nI'm not sure though how that would be done. Probably in initdb. (?)\n\n> I only guessed it after looking at the pg_pwd file and noticing a \\N there.\n> Is this where the passwords are stored? By the way should they be stored in\n> the clear and in a 666 permissions file? How about hashing them with some\n> salt?\n\nI had this on my personal things-to-consider-working-on list but I don't\nsee an official todo item. I am personally not sure why this is not done\nbut authentication and security are not most people's specialty around here.\n(including me)\n\n> 1) There is no obvious way to specify the password for users when you\n> create a user using the supplied shell script createuser. One has to resort\n> to psql and stuff.\n\nAah. Another misguided user. Some people are of the opinion that using the\ncreateuser scripts is a bad idea because it gives you the wrong impression\nof how things work. (All createuser does is call psql.) Of course, we\ncould somehow put a password prompt in there, I'll put that on the above\nmentioned list.\n\n> 2) Neither is there an obvious and easy way to change the user's password.\n\nalter user joe with password \"foo\";\n\nI'm not sure how obvious it is but it's certainly easy.\n\n> 3) You can specify a password for a user by using pg_passwd and stick it\n> into a separate password file, but then there really is no link between\n> createuser and pg_passwd. \n\nThis shows how bad the idea of the scripts was in the first place.\n\n> I find the bundled scripts and their associated documentation make things\n> very nonintuitive when one switches from a blind trust postgres to an\n> authenticated postgres. \n\nSo that would put your vote in the \"drop altogether\" column? Voting is\nstill in progress!\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 13 Oct 1999 21:56:15 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] How do I activate and change the postgres user's\n\tpassword?" }, { "msg_contents": "> On Oct 13, Lincoln Yeoh mentioned:\n> \n> > Then I have problems logging in as ANY user. 
Couldn't figure out what the\n> > default password for the postgres user was. Only after some messing around\n> > I found that I could log on as the postgres user with the password \\N. Not\n> > obvious, at least to me.\n> \n> There is a todo item for the postgres user to have a password by default.\n> I'm not sure though how that would be done. Probably in initdb. (?)\n\nWe could enabled it as part of initdb. Prompt them for it there, and\nassign it. Seems like there should be one on that account espeically.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 17:15:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] How do I activate and change the postgres\n\tuser's password?" }, { "msg_contents": "Bruce Momjian wrote:\n> > There is a todo item for the postgres user to have a password by default.\n> > I'm not sure though how that would be done. Probably in initdb. (?)\n> \n> We could enabled it as part of initdb. Prompt them for it there, and\n> assign it. Seems like there should be one on that account espeically.\n\nAlso, allow a command line option to set the password for those who need\nto automate things (like us RedHat people...). This is, I assume, for\nthe postgres user INSIDE the initial database structure, as opposed to\nthe postgres user on the OS.\n\nSince, under the RedHat installation, the initdb likely will happen\nduring initial system startup, having a prompt for a password at that\npoint is IMHO not good. Having a default password (in the initdb'd\npg_shadow) would be better.\n\nIf this is about the OS userame 'postgres', ignore that. The RPM\ninstallation already creates him, and makes it impossible to directly\nlog in as 'postgres' -- until root changes his password.\n\nIMHO, of course.\n\n--\nLamar Owen\nWGCR Internet Radio\nPisgah Forest, North Carolina\n1 Peter 4:11\n", "msg_date": "Wed, 13 Oct 1999 17:46:12 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] How do I activate and change the\n\tpostgresuser's password?" }, { "msg_contents": "> Bruce Momjian wrote:\n> > > There is a todo item for the postgres user to have a password by default.\n> > > I'm not sure though how that would be done. Probably in initdb. (?)\n> > \n> > We could enabled it as part of initdb. Prompt them for it there, and\n> > assign it. Seems like there should be one on that account espeically.\n> \n> Also, allow a command line option to set the password for those who need\n> to automate things (like us RedHat people...). This is, I assume, for\n> the postgres user INSIDE the initial database structure, as opposed to\n> the postgres user on the OS.\n> \n> Since, under the RedHat installation, the initdb likely will happen\n> during initial system startup, having a prompt for a password at that\n> point is IMHO not good. Having a default password (in the initdb'd\n> pg_shadow) would be better.\n> \n> If this is about the OS userame 'postgres', ignore that. 
The RPM\n> installation already creates him, and makes it impossible to directly\n> log in as 'postgres' -- until root changes his password.\n\nNo, this is about the pgsql-supplied postgres password.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 18:09:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] How do I activate and change the\n\tpostgresuser's password?" }, { "msg_contents": "Hi,\n\nfollowin this thread, I think\nIt would be useful to allow user to connect to database he owned (created)\nwithout password even if pg_hba.conf is configured with password requirement\nto this database. Or owner of database could maintain list of\nusers/groups whom he granted trusted connection. After user connects\nusual grant priviliges could works. Currently it's a pain to\nwork with authentification system - I have to input my password\nevery time I use psql and moreover I had to specify it in\nperl scripts I developed. Sometimes it's not easy to maintain secure\nfile permissions espec. if several developers share common work.\nAny user (even not postgres user) could use stealed password to connects\nto your database. In my proposal, security is rely on local login\nsecurity. You already passed password control. There are another checks\nlike priviliges. You write your scripts without hardcoded passwords !\nOf course this could be just an option in case you need \"paranoic\" security.\nHaving more granulated privilege types as Mysql does would only make\nmy proposal more secure. You're allowed to connect, but owner of database\ncould restrict you even list of tables, indices et. all.\n\n\tRegards,\n\n \tOleg\n\nPS.\n I didn't find any plans to improve authen. in TODO\n\nOn Wed, 13 Oct 1999, Peter Eisentraut wrote:\n\n> Date: Wed, 13 Oct 1999 21:56:15 +0200 (CEST)\n> From: Peter Eisentraut <[email protected]>\n> To: Lincoln Yeoh <[email protected]>\n> Cc: [email protected], [email protected]\n> Subject: [HACKERS] Re: [GENERAL] How do I activate and change the postgres user's password?\n> \n> On Oct 13, Lincoln Yeoh mentioned:\n> \n> > Then I have problems logging in as ANY user. Couldn't figure out what the\n> > default password for the postgres user was. Only after some messing around\n> > I found that I could log on as the postgres user with the password \\N. Not\n> > obvious, at least to me.\n> \n> There is a todo item for the postgres user to have a password by default.\n> I'm not sure though how that would be done. Probably in initdb. (?)\n> \n> > I only guessed it after looking at the pg_pwd file and noticing a \\N there.\n> > Is this where the passwords are stored? By the way should they be stored in\n> > the clear and in a 666 permissions file? How about hashing them with some\n> > salt?\n> \n> I had this on my personal things-to-consider-working-on list but I don't\n> see an official todo item. I am personally not sure why this is not done\n> but authentication and security are not most people's specialty around here.\n> (including me)\n> \n> > 1) There is no obvious way to specify the password for users when you\n> > create a user using the supplied shell script createuser. One has to resort\n> > to psql and stuff.\n> \n> Aah. Another misguided user. 
Some people are of the opinion that using the\n> createuser scripts is a bad idea because it gives you the wrong impression\n> of how things work. (All createuser does is call psql.) Of course, we\n> could somehow put a password prompt in there, I'll put that on the above\n> mentioned list.\n> \n> > 2) Neither is there an obvious and easy way to change the user's password.\n> \n> alter user joe with password \"foo\";\n> \n> I'm not sure how obvious it is but it's certainly easy.\n> \n> > 3) You can specify a password for a user by using pg_passwd and stick it\n> > into a separate password file, but then there really is no link between\n> > createuser and pg_passwd. \n> \n> This shows how bad the idea of the scripts was in the first place.\n> \n> > I find the bundled scripts and their associated documentation make things\n> > very nonintuitive when one switches from a blind trust postgres to an\n> > authenticated postgres. \n> \n> So that would put your vote in the \"drop altogether\" column? Voting is\n> still in progress!\n> \n> \t-Peter\n> \n> -- \n> Peter Eisentraut Sernanders vaeg 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n> \n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Thu, 14 Oct 1999 02:11:00 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] How do I activate and change the postgres\n\tuser's password?" }, { "msg_contents": "At 09:56 PM 13-10-1999 +0200, Peter Eisentraut wrote:\n>There is a todo item for the postgres user to have a password by default.\n>I'm not sure though how that would be done. Probably in initdb. (?)\n\nInitdb sounds ok. Just have no password by default. \\N is strange!\n\n>> the clear and in a 666 permissions file? How about hashing them with some\n>> salt?\n>\n>I had this on my personal things-to-consider-working-on list but I don't\n>see an official todo item. I am personally not sure why this is not done\n>but authentication and security are not most people's specialty around here.\n>(including me)\n\nWell I don't really know C or C++. \n\nBut you could do the following:\np= plain password\ns= salt (some random stuff).\np=p+s (append salt to password).\nmsg= random number from 1 to 4. \nDo following msg times: p=hash(p);\n\nStore in password file as\nhashed password= p\nsalt = s\nMultiple salt grinds= msg\n\nIf msg set to 0 and salt to null you can have plaintext passwords (this can\nbe convenient sometimes).\n\nHash function = SHA1, MD5, etc. You might wish to store hash type, e.g. 1=\nSHA1, 2=MD5..\n\n>> 2) Neither is there an obvious and easy way to change the user's password.\n>\n>alter user joe with password \"foo\";\n>\n>I'm not sure how obvious it is but it's certainly easy.\n\nHmm, I couldn't find that tho. And I did look at the Admin guide docs. \n\nIn fact I tried altering user permissions and stuff by trying UPDATEs on\nthe template1.pg_user table and somehow that didn't work. Is there a reason\nwhy that doesn't work? It says 0 rows affected, and my where clause works\nif it's a SELECT. 
I was the postgres superuser too.\n\n>> 3) You can specify a password for a user by using pg_passwd and stick it\n>> into a separate password file, but then there really is no link between\n>> createuser and pg_passwd. \n>\n>This shows how bad the idea of the scripts was in the first place.\n\nWell I know what pg_passwd can be used for. Useful but it seems like it's\nslapped on- what's a good way to use and admin it? If I set up pg_hba.conf\nto use an optional password file would the Postgres super user\nauthentication be taken from there too?\n\n>> I find the bundled scripts and their associated documentation make things\n>> very nonintuitive when one switches from a blind trust postgres to an\n>> authenticated postgres. \n>\n>So that would put your vote in the \"drop altogether\" column? Voting is\n>still in progress!\n\nI'm neutral. I don't mind doing everything from psql. \n\nPerhaps the Admin guide should have a section on \"How Real Postgres Admins\ndo stuff\"- e.g. using psql for admin stuff. \n\nI believe the scripts were created when Postgres users didn't really bother\nabout authentication. They could be fine if they have authentication in mind. \n\nBut as is, it's like:\n1) No authentication.\nScripts fine- convenient too.\nPsql fine.\nEverything fine.\n\n2) Authentication on.\nScripts don't work.\nPsql works if you can figure out the Postgres user password.\n\nIck.\n\nAlso there's \n1) A shadow file\n2) A pg_pwd file (why this and shadow?)\n3) An option to have a password file.\n\nThis is just some grumbling, overall Postgres 6.5.x is quite impressive.\nGreat improvement from Postgres95 which I tried and gave up on two years\nago- I switched to MySQL. \n\nI still get a \"cleaner\" and clearer impression about MySQL authentication\nand access controls. The MySQL docs are very clear on that, in general the\nMySQL documentation is good. \n\nMaybe the current postgres scripts do confuse things. Still, the current\nPostgres docs are better than the Oracle docs, they are actually useful ;).\nIs it just me, or is installing Oracle based on the Oracle installation\nmanual like doing surgery following an academic textbook? e.g. chapter 1\nhas 100 ways to do an incision. Chapter 2 has 20 ways on sewing up. Chapter\n3 discusses anaesthesia. Chapter 4- tying blood vessels, (by the way please\nrefer to chapter 2 for more sewing hints).. And so on. In the end one has\nto go to the web and look for a HOWTO :). \n\nCheerio,\n\nLink.\n\n", "msg_date": "Thu, 14 Oct 1999 15:51:38 +0800", "msg_from": "Lincoln Yeoh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] How do I activate and change the postgres user's\n\tpassword?" }, { "msg_contents": "On Wed, 13 Oct 1999, Lamar Owen wrote:\n\n> Bruce Momjian wrote:\n> > > There is a todo item for the postgres user to have a password by default.\n> > > I'm not sure though how that would be done. Probably in initdb. (?)\n> > \n> > We could enabled it as part of initdb. Prompt them for it there, and\n> > assign it. Seems like there should be one on that account espeically.\n> \n> Also, allow a command line option to set the password for those who need\n> to automate things (like us RedHat people...). This is, I assume, for\n> the postgres user INSIDE the initial database structure, as opposed to\n> the postgres user on the OS.\n> \n> Since, under the RedHat installation, the initdb likely will happen\n> during initial system startup, having a prompt for a password at that\n> point is IMHO not good. 
Having a default password (in the initdb'd\n> pg_shadow) would be better.\n\nUm, a default pre-packaged password would yield the whole concept next to\nuseless. I personally think an advice after initdb (\"You really ought to\nassign a password to the superuser now.\") will suffice. If people don't do\nwhat they're told, too bad. On the other hand, I'll check into that \\N\ndefault password.\n\nThis whole security concept needs overhaul anyway, but I don't see it\nhappening anytime soon.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 14 Oct 1999 13:18:29 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] How do I activate and change the\n\tpostgresuser's password?" }, { "msg_contents": "hi...\n\n> >There is a todo item for the postgres user to have a password by default.\n> >I'm not sure though how that would be done. Probably in initdb. (?)\n> \n> Initdb sounds ok. Just have no password by default. \\N is strange!\n\nor how about a prompt for a password? when you run initdb it asks for a\npassword? or even a yes/no? i don't like leaving things to command line\nswitches, its too easy for a user to ignore/be ignorant of them and create a\nsituation that isn't secure or isn't what they want w/out knowing it. this only\nreflects badly on the product at large instead of the clueless admin. :o/\n\n \n> >> 2) Neither is there an obvious and easy way to change the user's password.\n> >\n> >alter user joe with password \"foo\";\n> >\n> >I'm not sure how obvious it is but it's certainly easy.\n> \n> Hmm, I couldn't find that tho. And I did look at the Admin guide docs. \n\ni wasn't aware of this either (being relatively new to postgres.. less than a\nyear) but it smacks of ugly, imo.\n\n> \n> >> I find the bundled scripts and their associated documentation make things\n> >> very nonintuitive when one switches from a blind trust postgres to an\n> >> authenticated postgres. \n> >\n> >So that would put your vote in the \"drop altogether\" column? Voting is\n> >still in progress!\n> \n> I'm neutral. I don't mind doing everything from psql. \n> \n> Perhaps the Admin guide should have a section on \"How Real Postgres Admins\n> do stuff\"- e.g. using psql for admin stuff. \n\npersonally, i think that psql should not be allowed to do any admin stuff..\nothewise it become a potential security hazard on a machine used by lots of\npeople. it should (imo) only be a database structure and data\nretrieval/manipulation tool... \n\nadmin functions should occur from a seperate stand alone program. this way, you\navoid the ugliness of the scripts, which are inherently inflexible and bound to\nbe broken... also, you have one central agency that can be put under the\npermissions of the postgres user or the DBA group on the box its installed on. \n\nalso, if its a stand-alone command-line program (C/C++/whatever) we can then\nput a nice GUI front end on it and have a graphical admin tool which would be\nAMAZINGLY useful. i'd probably even be willing to help write it (i'm just now\ndipping into the world QT and finding it extremely exciting =)\n\nthis would also demand that we all \"sit down\" and come up with a standard, well\nthough-out process for security to be implemented in the admin tool. \nperhaps even a different mailing list would be in order for this ... 
\n\nthe things i would love the admin tool to cover are (in no particular order):\n\no creating and admining users \n o passwords\n o access privileges\no creating databases\no creating and admining back up policies and procedures\no maintaining a postgres installation\n o disk usage\n o postmaster options\n o back-end options\n o default policies for new databases\no logging and usage analysis\no source code management (important, imo, for open source projects)\n o compile-time options (with/without TCL, etc)\n o application of patches (to alleviate the need to do this \"by hand\")\n 0 upgrading from version w.x to version x.y\n\nthe tool could be made modular, so we can create a skeleton system and\nadd/remove/alter modules as we (the user community) desire.\n\n> Is it just me, or is installing Oracle based on the Oracle installation\n> manual like doing surgery following an academic textbook? e.g. chapter 1\n> has 100 ways to do an incision. Chapter 2 has 20 ways on sewing up. Chapter\n> 3 discusses anaesthesia. Chapter 4- tying blood vessels, (by the way please\n> refer to chapter 2 for more sewing hints).. And so on. In the end one has\n\nROFL!!!! yes!!!! why is it that commercial software vendors INSIST on making\ntheir manuals so arcane that they are as readable as the product is usable\nw/out a manual? haha...\n\n-- \nAaron J. Seigo\nSys Admin\n", "msg_date": "Thu, 14 Oct 1999 10:51:53 -0600", "msg_from": "\"Aaron J. Seigo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] How do I activate and change the postgres user's\n\tpassword?" } ]
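A rough C sketch of the salted, iterated hashing scheme Lincoln Yeoh outlines above (append a salt to the password, hash it, then re-hash the digest a stored number of "grinds"). This only illustrates that proposal, it is not code from the PostgreSQL tree, and as the thread notes, pg_pwd currently keeps passwords in the clear. The use of OpenSSL's SHA1() (compile with -lcrypto) and the hash:salt:grinds record printed at the end are assumptions made for the example.

#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

/* Hash password||salt once, then re-hash the digest (rounds - 1) more times. */
static void
hash_password(const char *password, const char *salt, int rounds,
              char out_hex[2 * SHA_DIGEST_LENGTH + 1])
{
    unsigned char digest[SHA_DIGEST_LENGTH];
    unsigned char tmp[SHA_DIGEST_LENGTH];
    char buf[512];
    int i;

    /* first grind: hash the password with the salt appended */
    snprintf(buf, sizeof(buf), "%s%s", password, salt);
    SHA1((const unsigned char *) buf, strlen(buf), digest);

    /* remaining grinds: hash the previous digest again */
    for (i = 1; i < rounds; i++)
    {
        SHA1(digest, SHA_DIGEST_LENGTH, tmp);
        memcpy(digest, tmp, SHA_DIGEST_LENGTH);
    }

    /* hex-encode for storage */
    for (i = 0; i < SHA_DIGEST_LENGTH; i++)
        sprintf(out_hex + 2 * i, "%02x", digest[i]);
}

int
main(void)
{
    char hex[2 * SHA_DIGEST_LENGTH + 1];

    /* the salt and grind count are stored beside the hash so the same
     * computation can be repeated at password-check time */
    hash_password("guessme", "Xk3q", 3, hex);
    printf("postgres:%s:Xk3q:3\n", hex);
    return 0;
}

Verifying a password later just repeats the computation with the stored salt and grind count and compares digests, so nothing has to sit on disk in the clear; a grind count of 0 and an empty salt could mark a plain-text entry for backward compatibility, as suggested above.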
[ { "msg_contents": "That looks good. At least this time I'm the correct side of London :-)\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]\nSent: 12 October 1999 17:20\nTo: [email protected]\nCc: [email protected]; [email protected]; [email protected];\[email protected]; [email protected]; [email protected]\nSubject: New developer globe (was: Re: [HACKERS] Interesting Quote you\nmight enjoy about PGSQL.)\n\n\n>\n> > > > Wow, that web page looks good now, with the quote at the bottom.\nJan,\n> > > > we need the nicer world image.\n> > >\n> > > You mean one with the mountains - no?\n> > >\n> > > Well, I'll spend some time, polish up the Povray sources etc.\n> > > so Vince can easily maintain the map after - only that he\n> > > needs Povray 3.1 and maybe Tcl/Tk 8.0 to do it, but that\n> > > shouldn't be a problem since both are freely available and\n> > > easily to install.\n>\n> I have tcl 8.0.5 and povray here too.\n\n I've setup an example for the new developers page at\n\n <http://www.PostgreSQL.ORG/~wieck>\n\n The image size is adjusted for the page width.\n\n To maintain the hotspots I made a little, slightly\n overspecialized, Tcl/Tk application that creates the imagemap\n so it can easily be pasted into the page.\n\n I'll contact you and Vince via private mail after packing it\n up.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n\n************\n", "msg_date": "Wed, 13 Oct 1999 08:50:58 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight enjoy about PGSQL.)" } ]
[ { "msg_contents": "Palatino isn't one of the Base14 fonts, so unless it was included within\nthe PDF, it wouldn't work unless the viewer had the font.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]]\nSent: 12 October 1999 18:12\nTo: PostgreSQL-development; PostgreSQL-documentation\nSubject: [HACKERS] Book PDF file was corrupt\n\n\nHere is the new PDF version of the outline. The previous version used\nthe Palatino font, which didn't convert to PDF for some reason.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n", "msg_date": "Wed, 13 Oct 1999 08:52:56 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Book PDF file was corrupt" }, { "msg_contents": "> Palatino isn't one of the Base14 fonts, so unless it was included within\n> the PDF, it wouldn't work unless the viewer had the font.\n\nStrange thing is that it is embedded in the PDF file. I can see the\nfile size with Palatino is 10 times the size with Times. I later\nrealized that it is xpdf that has the problem. gv/ghostscript are fine\nviewing the Palatino PDF file.\n\nI then read in the xpdf manual:\n\n At this point, the biggest problem is that embedded fonts\n are not handled properly.\n\nso it seems xpdf has a limitation here. Palatino is good for outlines.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 07:04:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] RE: [HACKERS] Book PDF file was corrupt" } ]
[ { "msg_contents": "There's three places, but only two go down to the sub release:\n\n\tsrc/interfaces/jdbc/postgres/jdbc1/DatabaseMetaData.java\n\tsrc/interfaces/jdbc/postgres/jdbc2/DatabaseMetaData.java\n\nThere's a string currently saying \"6.5.2\".\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]]\nSent: 12 October 1999 22:12\nTo: Peter Mount\nCc: PostgreSQL Hackers; The Hermit Hacker\nSubject: Re: [HACKERS] JDBC Driver\n\n\n> The copy on cvs looks ok, although should I update it to say 6.5.3, or\nas\n> 6.5.3 is simply to include pgaccess, leave it alone?\n> \n> I'd go for the latter.\n\nI can update files as part of the release, but I don't see anywhere in\nthe jdbc directory that says 6.5.x.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n\n************\n", "msg_date": "Wed, 13 Oct 1999 08:58:30 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] JDBC Driver" }, { "msg_contents": "> There's three places, but only two go down to the sub release:\n> \n> \tsrc/interfaces/jdbc/postgres/jdbc1/DatabaseMetaData.java\n> \tsrc/interfaces/jdbc/postgres/jdbc2/DatabaseMetaData.java\n> \n> There's a string currently saying \"6.5.2\".\n\nDone, and added to release update list. Any others, let me know.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 07:08:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] JDBC Driver" } ]
[ { "msg_contents": ">> New version attached. No PDF version this time. Do people like PDF?\nYes!\n", "msg_date": "Wed, 13 Oct 1999 10:08:40 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "> >> New version attached. No PDF version this time. Do people like PDF?\n> Yes!\n\nOK, here is the PDF. Three pages in length.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026", "msg_date": "Wed, 13 Oct 1999 07:12:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] RE: [HACKERS] Outline for PostgreSQL book" } ]
[ { "msg_contents": "Okay, I have the following voting results:\n\n2 for pgcreatedb, pgdropdb, pgcreateuser, pgdropuser (Bruce, me)\n1 for pg_createdb, pg_dropdb, etc. (Sergio K.)\n1/2 for pgadduser, etc., or sth. like that (Marc)\n1/2 for leave as is (Thomas)\n1 for drop altogether (Marc)\n\nWell, since this is not on the immediate agenda I'll leave it open for a\nwhile, but you see the leading vote getter.\n\nIn addition I'll add a configure option --with-pragmatic-scrappy which\nwill prevent the installation of the scripts altogether.\n\n\t-Peter\n\n\n\n", "msg_date": "Wed, 13 Oct 1999 10:47:08 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Scripts again" } ]
[ { "msg_contents": "Hi Everyone,\n\nI suffer from an affliction where I can't do anything technical without one of\nthem stone-aged book type things to kick me off. I would really welcome one\nupon PostgreSQL, because although I've been following this mailing list for a\nwhile, and would really love to see PostgreSQL kick Oracle et al into touch, I\njust gotta have that book in my hand first. I suspect there are many out there\nlike me (on which Tim O'Reilly has based his fortune), and that a really good\n\"Learning PostgreSQL\" type book would become a touchstone, and really blast it\ninto the big time. The small furry hot-blooded PostgreSQL could then really\nget its teeth into the monolithic propriertary dinosaurs, and start bringing\nthem down, and moving the rest of us lightweights forward.\n\nAs a pre-newbie, I'd be very much inclined to want a book in the O'Reilly\n\"Apache: The Definitive Guide\" style or the \"Learning Perl/Tk\" school, with\nChapter One entitled \"Getting Started\", right up to Chapter Ten \"Come on You\nLightweight, It's Really Not That Difficult\".\n\nI don't think everything, or indeed a tenth of everything, has to be in the\nfirst book; all the heavy technical stuff can be in the quasi-equivalents of\n\"Programming PostgreSQL\", \"Advanced PostgreSQL\" and \"Effective PostgresSQL\". \nThis will not only help pre-newbies like me, as we pick it up, and then move\nonto the heavier manuals, but give many different people the chance to become\nauthors, without leadening a vast single comprehensive book with a dull,\npolitically-correct \"text by large committee\" style, which will crush potential\nsales (thereby enabling the dinosaurs to keep a steady grip on their database\nstranglehold).\n\nAll the first book has to do is get a reluctant database hack like me, to get\nour first PostgreSQL database going, and our first mini-system running along\nthe lines of Oracle's SCOTT/TIGER set-up, with a bit more for bonus points :-) \nI'm sure there is an enormous marketplace of people like me, who've been\nsitting out here like lemons, waiting for this kind of thing to appear. We're\nsick of the thralldom of the West Coast Database Billionaires, but we're mostly\ntied-in with a salaried financial dependence on their products, and possess the\nsad aspect of rabbits trapped in headlights. It's pathetic I know, but if you\nget it right, you could blow all these megalomaniacs away, and set the rest of\nus free.\n\nGood luck! 
:-)\n\nRgds,\nAndyD\nEx-Informix 7.14 DBA\nEx-Sybase 11 User\nOracle8 Certified Professional DBA\nChief Lickspittle of the \"Propriertary Databases Against Progress\" Corporation\n&\nFuture (hopefully) PostgreSQL Hacker\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n", "msg_date": "Wed, 13 Oct 1999 02:37:37 -0700 (PDT)", "msg_from": "Andy Duncan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "Can you imagine the \"PostgreSQL\" book written by people from different\ncountries that never each other's face?\nLet's imagine, 20 chapters book written by 20 people.\nEach chapter is written by the person who knows best his/her part of\nPostgreSQL.\nWow this is amazing!!!\n\nI think this is like Linux to Microsoft,\nbut this one is PostgreSQL to Oracle.\n\nLinux + PostgreSQL is going to beat Microsoft + Oracle.\n\nI am waiting for this to happen.\n\nRegards,\nChai\n", "msg_date": "Wed, 13 Oct 1999 16:55:21 +0700", "msg_from": "Chairudin Sentosa Harjo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "On Wed, 13 Oct 1999, Chairudin Sentosa Harjo wrote:\n\n> Can you imagine the \"PostgreSQL\" book written by people from different\n> countries that never each other's face?\n> Let's imagine, 20 chapters book written by 20 people.\n> Each chapter is written by the person who knows best his/her part of\n> PostgreSQL.\n> Wow this is amazing!!!\n> \n> I think this is like Linux to Microsoft,\n> but this one is PostgreSQL to Oracle.\n> \n> Linux + PostgreSQL is going to beat Microsoft + Oracle.\n\nGag me with a spoon...please don't let it be Linux+PostgreSQL :( What a\ntaint on the world *that* would be...like, come on...Linux? Use a real\noperating system :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 13 Oct 1999 08:51:08 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "On Wed, 13 Oct 1999, The Hermit Hacker wrote:\n\n> On Wed, 13 Oct 1999, Chairudin Sentosa Harjo wrote:\n> \n> > Can you imagine the \"PostgreSQL\" book written by people from different\n> > countries that never each other's face?\n> > Let's imagine, 20 chapters book written by 20 people.\n> > Each chapter is written by the person who knows best his/her part of\n> > PostgreSQL.\n> > Wow this is amazing!!!\n> > \n> > I think this is like Linux to Microsoft,\n> > but this one is PostgreSQL to Oracle.\n> > \n> > Linux + PostgreSQL is going to beat Microsoft + Oracle.\n> \n> Gag me with a spoon...please don't let it be Linux+PostgreSQL :( What a\n> taint on the world *that* would be...like, come on...Linux? 
Use a real\n> operating system :)\n\n*rofl*\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 13 Oct 1999 08:04:48 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Outline for PostgreSQL book" } ]
[ { "msg_contents": "I came across problems with sorting a huge (2.4GB) table.\n\no it took 46 minutes to complete following query:\n\n\tselect * from test2 order by i desc limit 100;\n\n to get 0 results.\n\n\ti|t\n\t-+-\n\t(0 rows)\n\n I assume this is a failure.\n\n note: this is Pentium III x2 with 512MB RAM running RedHat Linux 6.0.\n\no I got NOTICE: BufFileRead: should have flushed after writing\n at the very end of the processing.\n\no it produced 7 sort temp files each having size of 1.4GB (total 10GB)\n\nHere is the table I used for testing(no index):\n\nCREATE TABLE test2 (\n i int4,\n t text);\n\nThis has 10000000 records and the table file sizes are:\n\n$ ls -ls test2*\n1049604 -rw------- 1 postgres postgres 1073741824 Oct 4 18:32 test2\n1049604 -rw------- 1 postgres postgres 1073741824 Oct 5 01:19 test2.1\n 327420 -rw------- 1 postgres postgres 334946304 Oct 13 17:40 test2.2\n\n--\nTatsuo Ishii\n", "msg_date": "Wed, 13 Oct 1999 19:03:20 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "sort on huge table" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> I came across problems with sorting a huge (2.4GB) table.\n\nThe current sorting code will fail if the data volume exceeds whatever\nthe maximum file size is on your OS. (Actually, if long is 32 bits,\nit might fail at 2gig even if your OS can handle 4gig; not sure, but\nit is doing signed-long arithmetic with byte offsets...)\n\nI am just about to commit code that fixes this by allowing temp files\nto have multiple segments like tables can.\n\n> o it took 46 minutes to complete following query:\n\nWhat -S setting are you using? Increasing it should reduce the time\nto sort, so long as you don't make it so large that the backend starts\nto swap. The current default seems to be 512 (Kb) which is probably\non the conservative side for modern machines.\n\n> o it produced 7 sort temp files each having size of 1.4GB (total 10GB)\n\nYes, I've been seeing space consumption of about 4x the actual data\nvolume. Next step is to revise the merge algorithm to reduce that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Oct 1999 10:18:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "I wrote:\n> The current sorting code will fail if the data volume exceeds whatever\n> the maximum file size is on your OS. (Actually, if long is 32 bits,\n> it might fail at 2gig even if your OS can handle 4gig; not sure, but\n> it is doing signed-long arithmetic with byte offsets...)\n\n> I am just about to commit code that fixes this by allowing temp files\n> to have multiple segments like tables can.\n\nOK, committed. I have tested this code using a small RELSEG_SIZE,\nand it seems to work, but I don't have the spare disk space to try\na full-scale test with > 4Gb of data. Anyone care to try it?\n\nI have not yet done anything about the excessive space consumption\n(4x data volume), so plan on using 16+Gb of diskspace to sort a 4+Gb\ntable --- and that's not counting where you put the output ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Oct 1999 11:10:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "> > The current sorting code will fail if the data volume exceeds whatever\n> > the maximum file size is on your OS. 
(Actually, if long is 32 bits,\n> > it might fail at 2gig even if your OS can handle 4gig; not sure, but\n> > it is doing signed-long arithmetic with byte offsets...)\n> \n> > I am just about to commit code that fixes this by allowing temp files\n> > to have multiple segments like tables can.\n> \n> OK, committed. I have tested this code using a small RELSEG_SIZE,\n> and it seems to work, but I don't have the spare disk space to try\n> a full-scale test with > 4Gb of data. Anyone care to try it?\n\nI will test it with my 2GB table. Creating 4GB would probably be\npossible, but I don't have enough sort space for that:-) I ran my\nprevious test on 6.5.2, not on current. I hope current is stable\nenough to perform my testing.\n\n> I have not yet done anything about the excessive space consumption\n> (4x data volume), so plan on using 16+Gb of diskspace to sort a 4+Gb\n> table --- and that's not counting where you put the output ;-)\n\nTalking about the -S, I did use the default since setting -S seems to\nconsume too much memory. For example, if I set it to 128MB, backend\nprocess grows over 512MB and it was killed due to swap space was run\nout. Maybe 4x law can be also applicated to -S?\n---\nTatsuo Ishii\n", "msg_date": "Thu, 14 Oct 1999 10:34:03 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> OK, committed. I have tested this code using a small RELSEG_SIZE,\n>> and it seems to work, but I don't have the spare disk space to try\n>> a full-scale test with > 4Gb of data. Anyone care to try it?\n\n> I will test it with my 2GB table. Creating 4GB would probably be\n> possible, but I don't have enough sort space for that:-)\n\nOK. I am working on reducing the space requirement, but it would be\nnice to test the bottom-level multi-temp-file code before layering\nmore stuff on top of it. Anyone else have a whole bunch of free\ndisk space they could try a big sort with?\n\n> I ran my previous test on 6.5.2, not on current. I hope current is\n> stable enough to perform my testing.\n\nIt seems reasonably stable here, though I'm not doing much except\ntesting... main problem is you'll need to initdb, which means importing\nyour large dataset...\n\n> Talking about the -S, I did use the default since setting -S seems to\n> consume too much memory. For example, if I set it to 128MB, backend\n> process grows over 512MB and it was killed due to swap space was run\n> out. Maybe 4x law can be also applicated to -S?\n\nIf the code is working correctly then -S should be obeyed ---\napproximately, anyway, since psort.c only counts the actual tuple data;\nit doesn't know anything about AllocSet overhead &etc. But it looked\nto me like there might be some plain old memory leaks in psort.c, which\ncould account for actual usage being much more than intended. I am\ngoing to work on cleaning up psort.c after I finish building\ninfrastructure for it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Oct 1999 10:39:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": ">> I will test it with my 2GB table. Creating 4GB would probably be\n>> possible, but I don't have enough sort space for that:-)\n>\n>OK. I am working on reducing the space requirement, but it would be\n>nice to test the bottom-level multi-temp-file code before layering\n>more stuff on top of it. 
Anyone else have a whole bunch of free\n>disk space they could try a big sort with?\n>\n>> I ran my previous test on 6.5.2, not on current. I hope current is\n>> stable enough to perform my testing.\n>\n>It seems reasonably stable here, though I'm not doing much except\n>testing... main problem is you'll need to initdb, which means importing\n>your large dataset...\n\nI have done the 2GB test on current (with your fixes). This time the\nsorting query worked great! I saw lots of temp files, but the total\ndisk usage was almost same as before (~10GB). So I assume this is ok.\n\n>> Talking about the -S, I did use the default since setting -S seems to\n>> consume too much memory. For example, if I set it to 128MB, backend\n>> process grows over 512MB and it was killed due to swap space was run\n>> out. Maybe 4x law can be also applicated to -S?\n>\n>If the code is working correctly then -S should be obeyed ---\n>approximately, anyway, since psort.c only counts the actual tuple data;\n>it doesn't know anything about AllocSet overhead &etc. But it looked\n>to me like there might be some plain old memory leaks in psort.c, which\n>could account for actual usage being much more than intended. I am\n>going to work on cleaning up psort.c after I finish building\n>infrastructure for it.\n\nI did set the -S to 8MB, and it seems boost the performance. It took\nonly 22:37 (previous result was ~45:00).\n---\nTatsuo Ishii\n\n", "msg_date": "Thu, 14 Oct 1999 23:59:13 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> I have done the 2GB test on current (with your fixes). This time the\n> sorting query worked great! I saw lots of temp files, but the total\n> disk usage was almost same as before (~10GB). So I assume this is ok.\n\nSounds like it is working then. Thanks for running the test. I'll try\nto finish the next step this weekend.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Oct 1999 11:03:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> I have done the 2GB test on current (with your fixes). This time the\n> sorting query worked great! I saw lots of temp files, but the total\n> disk usage was almost same as before (~10GB). So I assume this is ok.\n\nI have now committed another round of changes that reduce the temp file\nsize to roughly the volume of data to be sorted. It also reduces the\nnumber of temp files --- there will be only one per GB of sort data.\nIf you could try sorting a table larger than 4GB with this code, I'd be\nmuch obliged. (It *should* work, of course, but I just want to be sure\nthere are no places that will have integer overflows when the logical\nfile size exceeds 4GB.) I'd also be interested in how the speed\ncompares to the old code on a large table.\n\nStill need to look at the memory-consumption issue ... and CREATE INDEX\nhasn't been taught about any of these fixes yet.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 16 Oct 1999 16:29:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "OK, I have now finished up my psort reconstruction project. 
Sort nodes\nand btree CREATE INDEX now use the same sorting module, which is better\nthan either one was to start with.\n\nThis resolves the following TODO items:\n\n* Make index creation use psort code, because it is now faster(Vadim)\n* Allow creation of sort temp tables > 1 Gig\n\nAlso, sorting will now notice if it runs out of disk space, which it\nfrequently would not before :-(. Both memory and disk space are used\nmore sparingly than before, as well.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Oct 1999 18:29:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "> OK, I have now finished up my psort reconstruction project. Sort nodes\n> and btree CREATE INDEX now use the same sorting module, which is better\n> than either one was to start with.\n> \n> This resolves the following TODO items:\n> \n> * Make index creation use psort code, because it is now faster(Vadim)\n> * Allow creation of sort temp tables > 1 Gig\n> \n> Also, sorting will now notice if it runs out of disk space, which it\n> frequently would not before :-(. Both memory and disk space are used\n> more sparingly than before, as well.\n\nGreat. TODO changes made.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 17 Oct 1999 21:07:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table" }, { "msg_contents": ">Tatsuo Ishii <[email protected]> writes:\n>> I have done the 2GB test on current (with your fixes). This time the\n>> sorting query worked great! I saw lots of temp files, but the total\n>> disk usage was almost same as before (~10GB). So I assume this is ok.\n>\n>I have now committed another round of changes that reduce the temp file\n>size to roughly the volume of data to be sorted. It also reduces the\n>number of temp files --- there will be only one per GB of sort data.\n>If you could try sorting a table larger than 4GB with this code, I'd be\n>much obliged. (It *should* work, of course, but I just want to be sure\n>there are no places that will have integer overflows when the logical\n>file size exceeds 4GB.) I'd also be interested in how the speed\n>compares to the old code on a large table.\n>\n>Still need to look at the memory-consumption issue ... and CREATE INDEX\n>hasn't been taught about any of these fixes yet.\n\nI tested with a 1GB+ table (has a segment file) and a 4GB+ table (has\nfour segment files) and got same error message:\n\nERROR: ltsWriteBlock: failed to write block 131072 of temporary file\n Perhaps out of disk space?\n\nOf course disk space is enough, and no physical errors were\nreported. Seems the error is raised when the temp file hits 1GB?\n--\nTatsuo Ishii\n\n", "msg_date": "Mon, 18 Oct 1999 15:08:59 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> If you could try sorting a table larger than 4GB with this code, I'd be\n>> much obliged.\n\n> ERROR: ltsWriteBlock: failed to write block 131072 of temporary file\n> Perhaps out of disk space?\n\nDrat. 
I'll take a look --- thanks for running the test.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Oct 1999 10:34:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "I wrote:\n> Tatsuo Ishii <[email protected]> writes:\n>>> If you could try sorting a table larger than 4GB with this code, I'd be\n>>> much obliged.\n\n>> ERROR: ltsWriteBlock: failed to write block 131072 of temporary file\n>> Perhaps out of disk space?\n\n> Drat. I'll take a look --- thanks for running the test.\n\nThat's what I get for not testing the interaction between logtape.c\nand buffile.c at a segment boundary --- it didn't work, of course :-(.\nI rebuilt with a small RELSEG_SIZE and debugged it. I'm still concerned\nabout possible integer overflow problems, so please update and try again\nwith a large file.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Oct 1999 23:17:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": ">That's what I get for not testing the interaction between logtape.c\n>and buffile.c at a segment boundary --- it didn't work, of course :-(.\n>I rebuilt with a small RELSEG_SIZE and debugged it. I'm still concerned\n>about possible integer overflow problems, so please update and try again\n>with a large file.\n\nIt worked with 2GB+ table but was much slower than before.\n\nBefore(with 8MB sort memory): 22 minutes\n\nAfter(with 8MB sort memory): 1 hour and 5 minutes\nAfter(with 80MB sort memory): 42 minutes.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 19 Oct 1999 17:49:22 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> It worked with 2GB+ table but was much slower than before.\n> Before(with 8MB sort memory): 22 minutes\n> After(with 8MB sort memory): 1 hour and 5 minutes\n> After(with 80MB sort memory): 42 minutes.\n\nOh dear. I had tested it with smaller files and concluded that it was\nno slower than before ... I guess there is some effect I'm not seeing\nhere. Can you tell whether the extra time is computation or I/O (how\nmuch does the runtime of the backend change between old and new code)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Oct 1999 10:17:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": ">Oh dear. I had tested it with smaller files and concluded that it was\n>no slower than before ... I guess there is some effect I'm not seeing\n>here. Can you tell whether the extra time is computation or I/O (how\n>much does the runtime of the backend change between old and new code)?\n\nHow can I do this? Maybe I should run the backend in stand alone mode?\n---\nTatsuo Ishii\n\n", "msg_date": "Wed, 20 Oct 1999 10:05:19 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> It worked with 2GB+ table but was much slower than before.\n> Before(with 8MB sort memory): 22 minutes\n> After(with 8MB sort memory): 1 hour and 5 minutes\n> After(with 80MB sort memory): 42 minutes.\n\nI've committed some changes to tuplesort.c to try to improve\nperformance. Would you try your test case again with current\nsources? 
Also, please see if you can record the CPU time\nconsumed by the backend while doing the sort.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 30 Oct 1999 13:37:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": ">Tatsuo Ishii <[email protected]> writes:\n>> It worked with 2GB+ table but was much slower than before.\n>> Before(with 8MB sort memory): 22 minutes\n>> After(with 8MB sort memory): 1 hour and 5 minutes\n>> After(with 80MB sort memory): 42 minutes.\n>\n>I've committed some changes to tuplesort.c to try to improve\n>performance. Would you try your test case again with current\n>sources? Also, please see if you can record the CPU time\n>consumed by the backend while doing the sort.\n\nIt's getting better, but still slower than before.\n\n52:50 (with 8MB sort memory)\n\nps shows 7:15 was consumed by the backend. I'm going to test with 80MB \nsort memory.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 01 Nov 1999 15:35:33 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": ">>> It worked with 2GB+ table but was much slower than before.\n>>> Before(with 8MB sort memory): 22 minutes\n>>> After(with 8MB sort memory): 1 hour and 5 minutes\n>>> After(with 80MB sort memory): 42 minutes.\n>>\n>>I've committed some changes to tuplesort.c to try to improve\n>>performance. Would you try your test case again with current\n>>sources? Also, please see if you can record the CPU time\n>>consumed by the backend while doing the sort.\n>\n>It's getting better, but still slower than before.\n>\n>52:50 (with 8MB sort memory)\n>\n>ps shows 7:15 was consumed by the backend. I'm going to test with 80MB \n>sort memory.\n\nDone.\n\n32:06 (with 80MB sort memory)\nCPU time was 5:11.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 01 Nov 1999 16:10:27 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>>>> It worked with 2GB+ table but was much slower than before.\n>>>> Before(with 8MB sort memory): 22 minutes\n>>>> After(with 8MB sort memory): 1 hour and 5 minutes\n>>>> After(with 80MB sort memory): 42 minutes.\n>> \n> It's getting better, but still slower than before.\n> 52:50 (with 8MB sort memory)\n> ps shows 7:15 was consumed by the backend.\n> 32:06 (with 80MB sort memory)\n> CPU time was 5:11.\n\nOK, so it's basically all I/O time, which is what I suspected.\n\nWhat's causing this is the changes I made to reduce disk space usage;\nthe price of that savings is more-random access to the temporary file.\nApparently your setup is not coping very well with that.\n\nThe original code used seven separate temp files, each of which was\nwritten and read in a purely sequential fashion. Only problem: as\nthe merge steps proceed, all the data is read from one temp file and\ndumped into another, and because of the way the merges are overlapped,\nyou end up with total space usage around 4X the actual data volume.\n\nWhat's in there right now is just the same seven-tape merge algorithm,\nbut all the \"tapes\" are stored in a single temp file. As soon as any\nblock of a \"tape\" is read in, it's recycled to become available space\nfor the current \"output tape\" (since we know we won't need to read that\nblock again). 
This is why the disk space usage is roughly actual data\nvolume and not four times as much. However, the access pattern to this\nsingle temp file looks a lot more random to the OS than the access\npatterns for the original temp files.\n\nI figured that I could get away with this from a performance standpoint\nbecause, while the old code processed each temp file sequentially, the\nread and write accesses were interleaved --- on average, you'd expect\na merge pass to read one block from each of the N source tapes in the\nsame time span that it is writing N blocks to the current output tape;\non average, no two successive block read or write requests will go to\nthe same temp file. So it appears to me that the old code should cause\na disk seek for each block read or written. The new code's behavior\ncan't be any worse than that; it's just doing those seeks within one\ntemp file instead of seven.\n\nOf course the flaw in this reasoning is that it assumes the OS isn't\ngetting in the way. On the HPUX system I've been testing on, the\nperformance does seem to be about the same, but evidently it's much\nworse on your system. (Exactly what OS are you running, anyway, and\non what hardware?) I speculate that your OS is applying some sort of\nread-ahead algorithm that is getting hopelessly confused by lots of\nseeks within a single file. Perhaps it's reading the next block in\nsequence after every program-requested read, and then throwing away that\nwork when it sees the program lseek the file instead of reading.\n\nNext question is what to do about it. I don't suppose we have any way\nof turning off the OS' read-ahead algorithm :-(. We could forget about\nthis space-recycling improvement and go back to separate temp files.\nThe objection to that, of course, is that while sorting might be faster,\nit doesn't matter how fast the algorithm is if you don't have the disk\nspace to execute it.\n\nA possible compromise is to use separate temp files but drop the\npolyphase merge and go to a balanced merge, which'd still access each\ntemp file sequentially but would have only a 2X space penalty instead of\n4X (since all the data starts on one set of tapes and gets copied to the\nother set during a complete merge pass). The balanced merge is a little\nslower than polyphase --- more merge passes --- but the space savings\nprobably justify it.\n\nOne thing I'd like to know before we make any decisions is whether\nthis problem is widespread. Can anyone else run performance tests\nof the speed of large sorts, using current sources vs. 6.5.* ?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Nov 1999 10:42:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "> Of course the flaw in this reasoning is that it assumes the OS isn't\n> getting in the way. On the HPUX system I've been testing on, the\n> performance does seem to be about the same, but evidently it's much\n> worse on your system. (Exactly what OS are you running, anyway, and\n> on what hardware?) I speculate that your OS is applying some sort of\n> read-ahead algorithm that is getting hopelessly confused by lots of\n> seeks within a single file. Perhaps it's reading the next block in\n> sequence after every program-requested read, and then throwing away that\n> work when it sees the program lseek the file instead of reading.\n> \n> Next question is what to do about it. I don't suppose we have any way\n> of turning off the OS' read-ahead algorithm :-(. 
We could forget about\n> this space-recycling improvement and go back to separate temp files.\n> The objection to that, of course, is that while sorting might be faster,\n> it doesn't matter how fast the algorithm is if you don't have the disk\n> space to execute it.\n\nThat is the key. On BSDI, the kernel code is more complicated. If it\ndoes a read on an already open file, and the requested buffer is not in\ncore, it assumes that the readahead that was performed by the previous\nread was useless, and scales back the readahead algorithm. At least\nthat is my interpretation of the code and comments.\n\nI suspect other OS's do similar work, but it is possible they do it more\nsimplistically, saying if someone does _any_ seek, they must be\naccessing it non-sequentially, so read-ahead should be turned off.\n\nRead-ahead on random file access is a terrible thing, and most OS's\nfigure out a way to turn off read-ahead in non-sequential cases. Of\ncourse, lack of read-ahead in sequential access also is a problem.\n\nTatsuo, what OS are you using? Maybe I can check the kernel to see how\nit is behaving.\n\n> \n> A possible compromise is to use separate temp files but drop the\n> polyphase merge and go to a balanced merge, which'd still access each\n> temp file sequentially but would have only a 2X space penalty instead of\n> 4X (since all the data starts on one set of tapes and gets copied to the\n> other set during a complete merge pass). The balanced merge is a little\n> slower than polyphase --- more merge passes --- but the space savings\n> probably justify it.\n> \n> One thing I'd like to know before we make any decisions is whether\n> this problem is widespread. Can anyone else run performance tests\n> of the speed of large sorts, using current sources vs. 6.5.* ?\n\nI may be able to test that today on BSDI, but I doubt BSDI is typical.\nThey are probably state-of-the-art in kernel algorithm design.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Nov 1999 11:52:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table" }, { "msg_contents": "> Next question is what to do about it. I don't suppose we have any way\n> of turning off the OS' read-ahead algorithm :-(. We could forget about\n> this space-recycling improvement and go back to separate temp files.\n> The objection to that, of course, is that while sorting might be faster,\n> it doesn't matter how fast the algorithm is if you don't have the disk\n> space to execute it.\n\n\nLook what I found. I downloaded Linux kernel source for 2.2.0, and\nstarted looking for the word 'ahead' in the file system files. I found\nthat read-ahead seems to be controlled by f_reada, and look where I\nfound it being turned off? Seems like any seek turns off read-ahead on\nLinux.\n\nWhen you do a read or write, it seems to be turned on again. 
Once you\nread/write, the next read/write will do read-ahead, assuming you don't\ndo any lseek() before the second read/write().\n\nSeems like the algorithm in psort now is rarely having read-ahead on\nLinux, while other OS's check to see if the read-ahead was eventually\nused, and control read-ahead that way.\n\nread-head also seems be off on the first read from a file.\n\n---------------------------------------------------------------------------\n\n/*\n * linux/fs/ext2/file.c\n...\n/*\n * Make sure the offset never goes beyond the 32-bit mark..\n */\nstatic long long ext2_file_lseek(\n\tstruct file *file,\n\tlong long offset,\n\tint origin)\n{\n\tstruct inode *inode = file->f_dentry->d_inode;\n\n\tswitch (origin) {\n\t\tcase 2:\n\t\t\toffset += inode->i_size;\n\t\t\tbreak;\n\t\tcase 1:\n\t\t\toffset += file->f_pos;\n\t}\n\tif (((unsigned long long) offset >> 32) != 0) {\n#if BITS_PER_LONG < 64\n\t\treturn -EINVAL;\n#else\n\t\tif (offset > ext2_max_sizes[EXT2_BLOCK_SIZE_BITS(inode->i_sb)])\n\t\t\treturn -EINVAL;\n#endif\n\t} \n\tif (offset != file->f_pos) {\n\t\tfile->f_pos = offset;\n\t\tfile->f_reada = 0;\n\t\tfile->f_version = ++event;\n\t}\n\treturn offset;\n}\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Nov 1999 13:00:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table" }, { "msg_contents": "> Next question is what to do about it. I don't suppose we have any way\n> of turning off the OS' read-ahead algorithm :-(. We could forget about\n> this space-recycling improvement and go back to separate temp files.\n> The objection to that, of course, is that while sorting might be faster,\n> it doesn't matter how fast the algorithm is if you don't have the disk\n> space to execute it.\n\nIf I am correct on the Linux seek thing, and Tatsuo is running Linux, is\nthere any way to fake out the kernel on only Linux, so we issue two\nreads in a row before doing a seek?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Nov 1999 13:32:08 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table" }, { "msg_contents": "hi...\n\n> Look what I found. I downloaded Linux kernel source for 2.2.0, and\n> started looking for the word 'ahead' in the file system files. I found\n> that read-ahead seems to be controlled by f_reada, and look where I\n> found it being turned off? Seems like any seek turns off read-ahead on\n> Linux.\n\nthe current kernel is 2.2.13... =)\n\nthat said, the fs/ext2/file.c is the same in 2.2.13 as it is in 2.2.0 (just\nchecked).. i'm going to put this out on the linux kernel mailing list and see\nwhat comes back, though, as this seems to be an issue that should be\nresolved if accurate....\n\n\n\n-- \nAaron J. Seigo\nSys Admin\n", "msg_date": "Mon, 1 Nov 1999 12:00:55 -0700", "msg_from": "\"Aaron J. Seigo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table" }, { "msg_contents": "> hi...\n> \n> > Look what I found. 
I downloaded Linux kernel source for 2.2.0, and\n> > started looking for the word 'ahead' in the file system files. I found\n> > that read-ahead seems to be controlled by f_reada, and look where I\n> > found it being turned off? Seems like any seek turns off read-ahead on\n> > Linux.\n> \n> the current kernel is 2.2.13... =)\n\nI need to know what kernel the tester is using. I doubt it is the most\ncurrent one.\n\n> that said, the fs/ext2/file.c is the same in 2.2.13 as it is in 2.2.0 (just\n> checked).. i'm going to put this out on the linux kernel mailing list and see\n> what comes back, though, as this seems to be an issue that should be\n> resolved if accurate....\n\nI am not sure I am accurate either, but I think I am.\n\nIt would be nice to get the kernel fixed, though a fix for that is\nrarely trivial.\n\nLet us know what your find out.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Nov 1999 14:13:46 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> If I am correct on the Linux seek thing, and Tatsuo is running Linux, is\n> there any way to fake out the kernel on only Linux, so we issue two\n> reads in a row before doing a seek?\n\nI dunno. I see that f_reada is turned off by a seek in the extract you\nposted, but I wasn't clear on what turns it on again, nor what happens\nafter it is turned on.\n\nAfter further thought I am not sure that read-ahead or lack of it is\nthe problem. The changes I committed over the weekend were to try to\nimprove locality of access to the temp file by reading tuples from\nlogical tapes in bursts --- in a merge pass that's reading N logical\ntapes, it now tries to grab SortMem/N bytes worth of tuples off any one\nsource tape at a time, rather than just reading an 8K block at a time\nfrom each tape as the first cut did. That seemed to improve performance\non both my system and Tatsuo's, but his is still far below the speed of\nthe 6.5 code. I'm not sure I understand why. The majority of the block\nreads or writes *should* be sequential now, given a reasonable SortMem\n(and he tested with quite large settings). I'm afraid there is some\naspect of the kernel's behavior on his system that we don't have a clue\nabout...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Nov 1999 17:03:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "Tom Lane wrote:\n> the 6.5 code. I'm not sure I understand why. The majority of the block\n> reads or writes *should* be sequential now, given a reasonable SortMem\n> (and he tested with quite large settings). I'm afraid there is some\n> aspect of the kernel's behavior on his system that we don't have a clue\n> about...\n\nHow could I go about duplicating this?? Having multiple RedHat systems\navailable (both of the 2.2 and 2.0 variety), I'd be glad to test it\nhere. I'm pulling a cvs update as I write this. 
If possible, I'd like\nto duplicate it exactly.\n\nAlso, from prior discussions with Thomas, there is a RedHat 6.0 machine\nat hub.org for testing purposes.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 01 Nov 1999 17:25:40 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table" }, { "msg_contents": ">Of course the flaw in this reasoning is that it assumes the OS isn't\n>getting in the way. On the HPUX system I've been testing on, the\n>performance does seem to be about the same, but evidently it's much\n>worse on your system. (Exactly what OS are you running, anyway, and\n>on what hardware?) I speculate that your OS is applying some sort of\n>read-ahead algorithm that is getting hopelessly confused by lots of\n>seeks within a single file. Perhaps it's reading the next block in\n>sequence after every program-requested read, and then throwing away that\n>work when it sees the program lseek the file instead of reading.\n\nOk. Here are my settings.\n\nRedHat Linux 6.0 (kernel 2.2.5-smp)\nPentium III 500MHz x 2\nRAM: 512MB\nDisk: Ultra Wide SCSI 9GB x 4 + Hardware RAID (RAID 5).\n\nAlso, I could provide testing scripts to reproduce my tests.\n\n>Next question is what to do about it. I don't suppose we have any way\n>of turning off the OS' read-ahead algorithm :-(. We could forget about\n>this space-recycling improvement and go back to separate temp files.\n>The objection to that, of course, is that while sorting might be faster,\n>it doesn't matter how fast the algorithm is if you don't have the disk\n>space to execute it.\n>\n>A possible compromise is to use separate temp files but drop the\n>polyphase merge and go to a balanced merge, which'd still access each\n>temp file sequentially but would have only a 2X space penalty instead of\n>4X (since all the data starts on one set of tapes and gets copied to the\n>other set during a complete merge pass). The balanced merge is a little\n>slower than polyphase --- more merge passes --- but the space savings\n>probably justify it.\n\nI think it depends on the disk space available. Ideally it should be\nable to choice the sort algorithm. If it's impossible, the algorithm\nthat requires least sort space requires would be the way we go. Since\nthe performance problem only occurs when a table is huge.\n\n>One thing I'd like to know before we make any decisions is whether\n>this problem is widespread. Can anyone else run performance tests\n>of the speed of large sorts, using current sources vs. 6.5.* ?\n\nI will test with 6.5.2 again.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 02 Nov 1999 07:42:50 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> RedHat Linux 6.0 (kernel 2.2.5-smp)\n> Pentium III 500MHz x 2\n> RAM: 512MB\n> Disk: Ultra Wide SCSI 9GB x 4 + Hardware RAID (RAID 5).\n\nOK, no problem with inadequate hardware anyway ;-). Bruce's concern\nabout simplistic read-ahead algorithm in Linux may apply though.\n\n> Also, I could provide testing scripts to reproduce my tests.\n\nPlease. That would be very handy so that we can make sure we are all\ncomparing the same thing. I assume the scripts can be tweaked to vary\nthe amount of disk space used? I can't scare up more than a couple\nhundred meg at the moment. (The natural state of a disk drive is\n\"full\" ...)\n\n> I think it depends on the disk space available. 
Ideally it should be\n> able to choice the sort algorithm.\n\nI was hoping to avoid that, because of the extra difficulty of testing\nand maintenance. But it may be the only answer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Nov 1999 17:49:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> How could I go about duplicating this?? Having multiple RedHat systems\n> available (both of the 2.2 and 2.0 variety), I'd be glad to test it\n> here. I'm pulling a cvs update as I write this. If possible, I'd like\n> to duplicate it exactly.\n\nMe too (modulo disk space issues --- maybe we should try to compare\nsorts of say 100MB, rather than 2GB). Tatsuo said he'd make his test\nscript available.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Nov 1999 18:05:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "> Lamar Owen <[email protected]> writes:\n> > How could I go about duplicating this?? Having multiple RedHat systems\n> > available (both of the 2.2 and 2.0 variety), I'd be glad to test it\n> > here. I'm pulling a cvs update as I write this. If possible, I'd like\n> > to duplicate it exactly.\n> \n> Me too (modulo disk space issues --- maybe we should try to compare\n> sorts of say 100MB, rather than 2GB). Tatsuo said he'd make his test\n> script available.\n\nI would be very interested if Tatsuo could comment out the f_reada line\nin the function I posted, and see if the new kernel is faster on 7.0\nsorts. That would clearly show the cause. I wouldn't be surprised if\n7.0 sorts became faster than 6.5.* sorts.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Nov 1999 21:41:59 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table" }, { "msg_contents": "\nWas this resolved?\n\n\n> >That's what I get for not testing the interaction between logtape.c\n> >and buffile.c at a segment boundary --- it didn't work, of course :-(.\n> >I rebuilt with a small RELSEG_SIZE and debugged it. I'm still concerned\n> >about possible integer overflow problems, so please update and try again\n> >with a large file.\n> \n> It worked with 2GB+ table but was much slower than before.\n> \n> Before(with 8MB sort memory): 22 minutes\n> \n> After(with 8MB sort memory): 1 hour and 5 minutes\n> After(with 80MB sort memory): 42 minutes.\n> --\n> Tatsuo Ishii\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Nov 1999 21:02:40 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Was this resolved?\n\nI tweaked the code some, and am waiting for retest results from Tatsuo.\n\nI think the poor results he is seeing might be platform-dependent; on\nmy machine current code seems to be faster than 6.5.* ... 
but on the\nother hand I don't have the disk space to run a multi-gig sort test.\n\nCan anyone else take the time to compare speed of large sorts between\n6.5.* and current code?\n\n\t\t\tregards, tom lane\n\n\n>> It worked with 2GB+ table but was much slower than before.\n>> \n>> Before(with 8MB sort memory): 22 minutes\n>> \n>> After(with 8MB sort memory): 1 hour and 5 minutes\n>> After(with 80MB sort memory): 42 minutes.\n>> --\n>> Tatsuo Ishii\n", "msg_date": "Mon, 29 Nov 1999 23:11:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> > Was this resolved?\n> \n> I tweaked the code some, and am waiting for retest results from Tatsuo.\n> \n> I think the poor results he is seeing might be platform-dependent; on\n> my machine current code seems to be faster than 6.5.* ... but on the\n> other hand I don't have the disk space to run a multi-gig sort test.\n> \n> Can anyone else take the time to compare speed of large sorts between\n> 6.5.* and current code?\n\nIs there a howto for running an additional development backend ?\n\nIf there is, I could test it on a dual P!!!500MHz IBM Netfinity \nM20 with 1GB memory and >30 GB RAID5 disks.\n\n---------------\nHannu\n", "msg_date": "Tue, 30 Nov 1999 10:42:25 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table" }, { "msg_contents": "On Mon, 29 Nov 1999, Tom Lane wrote:\n\n> Can anyone else take the time to compare speed of large sorts between\n> 6.5.* and current code?\n\nI have a few Linux and FreeBSD machines with rather normal hardware I\ncould use, but I'm not all that familiar with what you were working on, so\nI'd need exact specifications or, better yet, a script.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 30 Nov 1999 12:29:48 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On Mon, 29 Nov 1999, Tom Lane wrote:\n>> Can anyone else take the time to compare speed of large sorts between\n>> 6.5.* and current code?\n\n> I have a few Linux and FreeBSD machines with rather normal hardware I\n> could use, but I'm not all that familiar with what you were working on, so\n> I'd need exact specifications or, better yet, a script.\n\nTatsuo posted his sort test script to pgsql-hackers on 02 Nov 1999\n13:07:32 +0900; you can get it from the archives.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Nov 1999 11:31:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "I ran the sort script without change, the resulting file was about 250MB\nin size. Not sure what kind of sizes you were looking for.\n\n6.5.3\n 696.01 real 0.03 user 0.02 sys\n\n\"7.0\" from last Saturday\n 957.73 real 0.03 user 0.02 sys\none more time\n 936.41 real 0.04 user 0.01 sys\n\n\nFreeBSD 3.3, 200MHz Pentium (P55C), 128MB RAM\nboth installations where done without extras (bare ./configure)\n\nThat almost seems too wacko to be true. 
I'll be happy to rerun them, with\nother sizes if you want.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n\n\n\n", "msg_date": "Tue, 30 Nov 1999 21:19:54 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sort on huge table " }, { "msg_contents": "\nOK, initdb should now work. There were a variety of non-portable things\nin initdb.sh, like assuming $EUID is defined, and other shell script and\ncommand args that do not exist on BSDI.\n\nI think I got them all. If anyone sees problems, let me know. This is\nnot really Peter's fault. It takes a long time to know what is\nportable and what is not portable.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 17 Dec 1999 23:06:39 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "initdb.sh fixed" }, { "msg_contents": "People, I thought you would at least test this once before applying it. I\nexplicitly (maybe not explicitly enough) mentioned it, because writing\ncomplex shell scripts is a nut job. Maybe this thing should really be\nwritten in C. Then there will be no EUID, no echo, no function, no grep,\nno whoami, or other problems. Perhaps the whole genbki.sh thing could be\nscrapped then, with initdb interpreting the DATA() macros itself. It\nwould even reduce the overhead of calling postgres about 12 times and\ncould get it down to 2 or 3. A project for 7.1?\n\nOn 1999-12-17, Bruce Momjian mentioned:\n\n> OK, initdb should now work. There were a variety of non-portable things\n> in initdb.sh, like assuming $EUID is defined, and other shell script and\n> command args that do not exist on BSDI.\n\nHmm, that $EUID seems to have be the root of all trouble because then the\n'insert ( data data data )' bootstrap commands are containing gaps. On the\nother hand, this was one of the key things that were supposed to be\nimproved because relying on $USER was not su-safe. Maybe $UID would work,\nsince initdb isn't supposed to be setuid anyway.\n\n> I think I got them all. If anyone sees problems, let me know. This is\n> not really Peter's fault. It takes a long time to know what is\n> portable and what is not portable.\n\nThe more time I spend with this the more I think that the only thing\nthat's portable is echo. Oh wait, that's not portable either. :)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n\n\n", "msg_date": "Sat, 18 Dec 1999 17:13:33 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb.sh fixed" }, { "msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> People, I thought you would at least test this once before applying it. I\n> explicitly (maybe not explicitly enough) mentioned it, because writing\n> complex shell scripts is a nut job. Maybe this thing should really be\n> written in C. Then there will be no EUID, no echo, no function, no grep,\n> no whoami, or other problems. Perhaps the whole genbki.sh thing could be\n> scrapped then, with initdb interpreting the DATA() macros itself. It\n> would even reduce the overhead of calling postgres about 12 times and\n> could get it down to 2 or 3. 
A project for 7.1?\n\nI had enough trouble applying the patch, let alone testing it...\n\nMaking it in C presents all sorts of portability problems that are even\nharder to figure. There is no portability free lunch. I think a script\nis the way to go with this.\n\nThe big problem seems to be reliance on bash-isms like $UID and\nfunctions with spaces like:\n\nfunction func () {\n}\n\nOnly bash knows about that. I have written enough shells scripts to\nknow that, but it is hard to get that knowledge.\n\nAlso, env args without quotes around them is a problem.\n\nAll fixed now.\n\n> \n> On 1999-12-17, Bruce Momjian mentioned:\n> \n> > OK, initdb should now work. There were a variety of non-portable things\n> > in initdb.sh, like assuming $EUID is defined, and other shell script and\n> > command args that do not exist on BSDI.\n> \n> Hmm, that $EUID seems to have be the root of all trouble because then the\n> 'insert ( data data data )' bootstrap commands are containing gaps. On the\n> other hand, this was one of the key things that were supposed to be\n> improved because relying on $USER was not su-safe. Maybe $UID would work,\n> since initdb isn't supposed to be setuid anyway.\n\nAgain, a bash-ism. Let's face, it, the postgres binary is going to\ncroak on root anyway, so we are just doing an extra check in initdb.\n\n> \n> > I think I got them all. If anyone sees problems, let me know. This is\n> > not really Peter's fault. It takes a long time to know what is\n> > portable and what is not portable.\n> \n> The more time I spend with this the more I think that the only thing\n> that's portable is echo. Oh wait, that's not portable either. :)\n\nDon't think so.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 18 Dec 1999 15:21:06 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb.sh fixed" }, { "msg_contents": "On 1999-12-18, Bruce Momjian mentioned:\n\n> The big problem seems to be reliance on bash-isms like $UID and\n> functions with spaces like:\n\nBash tells me that is it's invoked as 'sh' it will behave like 'sh', but\nit's lying ...\n\n> > 'insert ( data data data )' bootstrap commands are containing gaps. On the\n> > other hand, this was one of the key things that were supposed to be\n> > improved because relying on $USER was not su-safe. Maybe $UID would work,\n> > since initdb isn't supposed to be setuid anyway.\n> \n> Again, a bash-ism. 
Let's face, it, the postgres binary is going to\n> croak on root anyway, so we are just doing an extra check in initdb.\n\nBut the point was to initialize to superuser id in Postgres as that\nnumber, but we might as well start them out at 0, like it is now.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Mon, 20 Dec 1999 01:18:39 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb.sh fixed" }, { "msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> On 1999-12-18, Bruce Momjian mentioned:\n> \n> > The big problem seems to be reliance on bash-isms like $UID and\n> > functions with spaces like:\n> \n> Bash tells me that is it's invoked as 'sh' it will behave like 'sh', but\n> it's lying ...\n\nYes, certain _extensions_ show through.\n\n> \n> > > 'insert ( data data data )' bootstrap commands are containing gaps. On the\n> > > other hand, this was one of the key things that were supposed to be\n> > > improved because relying on $USER was not su-safe. Maybe $UID would work,\n> > > since initdb isn't supposed to be setuid anyway.\n> > \n> > Again, a bash-ism. Let's face, it, the postgres binary is going to\n> > croak on root anyway, so we are just doing an extra check in initdb.\n> \n> But the point was to initialize to superuser id in Postgres as that\n> number, but we might as well start them out at 0, like it is now.\n\nSeems either $USER or $LOGNAME should be set in all cases.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 19 Dec 1999 21:17:00 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb.sh fixed" }, { "msg_contents": "> > > 'insert ( data data data )' bootstrap commands are containing gaps. On the\n> > > other hand, this was one of the key things that were supposed to be\n> > > improved because relying on $USER was not su-safe. Maybe $UID would work,\n> > > since initdb isn't supposed to be setuid anyway.\n> > \n> > Again, a bash-ism. Let's face, it, the postgres binary is going to\n> > croak on root anyway, so we are just doing an extra check in initdb.\n> \n> But the point was to initialize to superuser id in Postgres as that\n> number, but we might as well start them out at 0, like it is now.\n\nI am now using:\n\n\tPOSTGRES_SUPERUSERID=\"`id -u 2>/dev/null || echo 0`\"\n\nLet's see how portable that is?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 19 Dec 1999 21:17:53 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: initdb.sh fixed7" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Seems either $USER or $LOGNAME should be set in all cases.\n\nOne or both is probably set in most shell environments ... but\nit's not necessarily *right*. 
If you've su'd to postgres from\nyour login account, these env vars may still reflect your login.\n\n> I am now using:\n>\tPOSTGRES_SUPERUSERID=\"`id -u 2>/dev/null || echo 0`\"\n> Let's see how portable that is?\n\nSome quick experimentation shows that id -u isn't too trustworthy,\nwhich is a shame because it's the POSIX standard. But I find that\nthe SunOS implementation ignores -u:\n\n$ id -u\nuid=6902(tgl) gid=50(users0) groups=50(users0)\n\nAnd no doubt there will be platforms that haven't got \"id\" at all.\n\nIt might be best to provide a little bitty C program that calls\ngeteuid() and prints the result...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 19 Dec 1999 22:26:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: initdb.sh fixed " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Seems either $USER or $LOGNAME should be set in all cases.\n> \n> One or both is probably set in most shell environments ... but\n> it's not necessarily *right*. If you've su'd to postgres from\n> your login account, these env vars may still reflect your login.\n> \n> > I am now using:\n> >\tPOSTGRES_SUPERUSERID=\"`id -u 2>/dev/null || echo 0`\"\n> > Let's see how portable that is?\n> \n> Some quick experimentation shows that id -u isn't too trustworthy,\n> which is a shame because it's the POSIX standard. But I find that\n> the SunOS implementation ignores -u:\n> \n> $ id -u\n> uid=6902(tgl) gid=50(users0) groups=50(users0)\n> \n> And no doubt there will be platforms that haven't got \"id\" at all.\n> \n> It might be best to provide a little bitty C program that calls\n> geteuid() and prints the result...\n\nWe could argue that Postgres is the super-user for the database, it\nshould be zero userid.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 19 Dec 1999 22:45:04 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: initdb.sh fixed" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> We could argue that Postgres is the super-user for the database, it\n> should be zero userid.\n\nActually, that's quite a good thought --- is there *any* real need\nfor initdb to extract the UID of the postgres user? What we do need,\nI think, is the *name* of the postgres user, which we might perhaps\nget with something like\n\n\twhoami 2>/dev/null || id -u -n 2>/dev/null || echo postgres\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 19 Dec 1999 23:19:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: initdb.sh fixed " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > We could argue that Postgres is the super-user for the database, it\n> > should be zero userid.\n> \n> Actually, that's quite a good thought --- is there *any* real need\n> for initdb to extract the UID of the postgres user? 
What we do need,\n> I think, is the *name* of the postgres user, which we might perhaps\n> get with something like\n> \n> \twhoami 2>/dev/null || id -u -n 2>/dev/null || echo postgres\n\nWe currently have:\n\n EffectiveUser=`id -n -u 2> /dev/null` || EffectiveUser=`whoami 2> /dev/null`\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 19 Dec 1999 23:40:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: initdb.sh fixed" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> We currently have:\n> EffectiveUser=`id -n -u 2>/dev/null` || EffectiveUser=`whoami 2>/dev/null`\n\nOK, but is that really portable? I'd feel more comfortable with\n\nEffectiveUser=`id -n -u 2>/dev/null || whoami 2>/dev/null`\n\nbecause it's clearer what will happen. I wouldn't have expected an\nerror inside a backquoted subcommand to determine the error result of\nthe command as a whole, which is what the first example is depending on.\nIn a quick test it seemed to work with the ksh I tried it on, but I\nwonder how many shells work that way...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 20 Dec 1999 00:24:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: initdb.sh fixed " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > We currently have:\n> > EffectiveUser=`id -n -u 2>/dev/null` || EffectiveUser=`whoami 2>/dev/null`\n> \n> OK, but is that really portable? I'd feel more comfortable with\n> \n> EffectiveUser=`id -n -u 2>/dev/null || whoami 2>/dev/null`\n> \n> because it's clearer what will happen. I wouldn't have expected an\n> error inside a backquoted subcommand to determine the error result of\n> the command as a whole, which is what the first example is depending on.\n> In a quick test it seemed to work with the ksh I tried it on, but I\n> wonder how many shells work that way...\n\nChange applied.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 20 Dec 1999 00:38:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: initdb.sh fixed" }, { "msg_contents": "On Sun, 19 Dec 1999, Bruce Momjian wrote:\n\n> > Bruce Momjian <[email protected]> writes:\n> > > We could argue that Postgres is the super-user for the database, it\n> > > should be zero userid.\n> > \n> > Actually, that's quite a good thought --- is there *any* real need\n> > for initdb to extract the UID of the postgres user? What we do need,\n> > I think, is the *name* of the postgres user, which we might perhaps\n> > get with something like\n> > \n> > \twhoami 2>/dev/null || id -u -n 2>/dev/null || echo postgres\n> \n> We currently have:\n> \n> EffectiveUser=`id -n -u 2> /dev/null` || EffectiveUser=`whoami 2> /dev/null`\n> \n\nIf neither one of these resulted in anything it will ask you to provide a\nstring with --username. 
But I figure one must have one of those.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 24 Dec 1999 22:29:37 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: initdb.sh fixed" } ]
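As a concrete illustration of the suggestion in the thread above (the "little bitty C program that calls geteuid() and prints the result" as a sturdier alternative to `id -u` and `whoami` in initdb.sh), a minimal sketch could look like the following. This is not code from the thread or from any committed patch; the program name pg_id and its -n option are assumptions made purely for illustration.

/*
 * pg_id.c -- hypothetical helper sketched from the geteuid() suggestion
 * in the thread above (names and options are assumptions, not from the
 * original discussion).
 *
 * Prints the effective user id, or with -n the corresponding user name,
 * so a shell script would not have to depend on the portability of
 * `id -u`, `id -u -n`, or `whoami`.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <pwd.h>

int
main(int argc, char **argv)
{
	uid_t		euid = geteuid();

	if (argc > 1 && strcmp(argv[1], "-n") == 0)
	{
		/* look up the name for the effective uid */
		struct passwd *pw = getpwuid(euid);

		if (pw == NULL)
		{
			fprintf(stderr, "pg_id: no password entry for uid %ld\n",
					(long) euid);
			exit(1);
		}
		printf("%s\n", pw->pw_name);
	}
	else
		printf("%ld\n", (long) euid);

	return 0;
}

With such a helper installed, initdb.sh could fall back to something like EffectiveUser=`pg_id -n 2>/dev/null` when both `id -u -n` and `whoami` fail; again, pg_id is only an assumed name used here to make the suggestion concrete.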
[ { "msg_contents": "Hi,\n\nI have changed a bit the makefiles for the win32 port - the *.def files\n(created when building shared libraries) are now clean from Makefile.shlib.\n\nI have also removed \"-g\" from CFLAGS in the \"cygwin32\" template - it can be\nenabled when running configure.\n\n\t\t\tDan\n\n\n----------------------------------------------\nDaniel Horak\nnetwork and system administrator\ne-mail: [email protected]\nprivat e-mail: [email protected] ICQ:36448176\n----------------------------------------------", "msg_date": "Wed, 13 Oct 1999 13:11:43 +0200", "msg_from": "Horak Daniel <[email protected]>", "msg_from_op": true, "msg_subject": "update for shlib makefiles on win32" }, { "msg_contents": "\nApplied to 7.0 tree.\n\n[Charset iso-8859-2 unsupported, filtering to ASCII...]\n> Hi,\n> \n> I have changed a bit the makefiles for the win32 port - the *.def files\n> (created when building shared libraries) are now clean from Makefile.shlib.\n> \n> I have also removed \"-g\" from CFLAGS in the \"cygwin32\" template - it can be\n> enabled when running configure.\n> \n> \t\t\tDan\n> \n> \n> ----------------------------------------------\n> Daniel Horak\n> network and system administrator\n> e-mail: [email protected]\n> privat e-mail: [email protected] ICQ:36448176\n> ----------------------------------------------\n> \n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 07:38:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] update for shlib makefiles on win32" } ]
[ { "msg_contents": "> >> New version attached. No PDF version this time. Do people like PDF?\n> Yes!\n\nOK, here is the PDF. Three pages in length. Seems I have to use Times\nRoman to get the file size acceptible for the mailing list filter.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026", "msg_date": "Wed, 13 Oct 1999 07:40:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "PDF version of book outline" } ]
[ { "msg_contents": "Okay, I have the following voting results:\n\n2 for pgcreatedb, pgdropdb, pgcreateuser, pgdropuser (Bruce, me)\n1 for pg_createdb, pg_dropdb, etc. (Sergio K.)\n1/2 for pgadduser, etc., or sth. like that (Marc)\n1/2 for leave as is (Thomas)\n1 for drop altogether (Marc)\n\nWell, since this is not on the immediate agenda I'll leave it open for a\nwhile, but you see the leading vote getter.\n\nIn addition I'll add a configure option --with-pragmatic-scrappy which\nwill prevent the installation of the scripts altogether.\n\n -Peter\n\n\n", "msg_date": "Wed, 13 Oct 1999 14:10:46 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Scripts again" }, { "msg_contents": "> Okay, I have the following voting results:\n> \n> 2 for pgcreatedb, pgdropdb, pgcreateuser, pgdropuser (Bruce, me)\n> 1 for pg_createdb, pg_dropdb, etc. (Sergio K.)\n> 1/2 for pgadduser, etc., or sth. like that (Marc)\n> 1/2 for leave as is (Thomas)\n> 1 for drop altogether (Marc)\n\nI can go for pg_createdb too.\n\n> \n> Well, since this is not on the immediate agenda I'll leave it open for a\n> while, but you see the leading vote getter.\n> \n> In addition I'll add a configure option --with-pragmatic-scrappy which\n> will prevent the installation of the scripts altogether.\n\nI like that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 08:52:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Scripts again" }, { "msg_contents": "\nOn 13-Oct-99 Peter Eisentraut wrote:\n> Okay, I have the following voting results:\n> \n> 2 for pgcreatedb, pgdropdb, pgcreateuser, pgdropuser (Bruce, me)\n> 1 for pg_createdb, pg_dropdb, etc. (Sergio K.)\n\n Me too ...\nAnother method is \npgadm createdb .... \n\n> 1/2 for pgadduser, etc., or sth. like that (Marc)\n> 1/2 for leave as is (Thomas)\n> 1 for drop altogether (Marc)\n> \n> Well, since this is not on the immediate agenda I'll leave it open for a\n> while, but you see the leading vote getter.\n> \n> In addition I'll add a configure option --with-pragmatic-scrappy which\n> will prevent the installation of the scripts altogether.\n> \n> -Peter\n> \n> \n> \n> ************\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n", "msg_date": "Wed, 13 Oct 1999 17:17:56 +0400 (MSD)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Scripts again" }, { "msg_contents": "On Wed, 13 Oct 1999, Bruce Momjian wrote:\n\n> > Okay, I have the following voting results:\n> > \n> > 2 for pgcreatedb, pgdropdb, pgcreateuser, pgdropuser (Bruce, me)\n> > 1 for pg_createdb, pg_dropdb, etc. (Sergio K.)\n> > 1/2 for pgadduser, etc., or sth. like that (Marc)\n> > 1/2 for leave as is (Thomas)\n> > 1 for drop altogether (Marc)\n> \n> I can go for pg_createdb too.\n> \n> > \n> > Well, since this is not on the immediate agenda I'll leave it open for a\n> > while, but you see the leading vote getter.\n> > \n> > In addition I'll add a configure option --with-pragmatic-scrappy which\n> > will prevent the installation of the scripts altogether.\n> \n> I like that.\n\nOkay, who invited this guy? :) How about drop'ng them unlessthe configure\nwith a --with-hand-holding option? 
or even --disable-sql? :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 13 Oct 1999 10:36:52 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Scripts again" }, { "msg_contents": "On Wed, 13 Oct 1999, Peter Eisentraut wrote:\n\n> Okay, I have the following voting results:\n> \n> 2 for pgcreatedb, pgdropdb, pgcreateuser, pgdropuser (Bruce, me)\n> 1 for pg_createdb, pg_dropdb, etc. (Sergio K.)\n> 1/2 for pgadduser, etc., or sth. like that (Marc)\n> 1/2 for leave as is (Thomas)\n> 1 for drop altogether (Marc)\n\nI vote with Thomas - leave as is. But I don't understand the tallying.\nDid Thomas shrink so as to only get a halfa vote? :)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 13 Oct 1999 09:51:42 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Scripts again" }, { "msg_contents": "On Oct 13, The Hermit Hacker mentioned:\n\n> On Wed, 13 Oct 1999, Bruce Momjian wrote:\n> \n> > > Okay, I have the following voting results:\n> > > \n> > > 2 for pgcreatedb, pgdropdb, pgcreateuser, pgdropuser (Bruce, me)\n> > > 1 for pg_createdb, pg_dropdb, etc. (Sergio K.)\n> > > 1/2 for pgadduser, etc., or sth. like that (Marc)\n> > > 1/2 for leave as is (Thomas)\n> > > 1 for drop altogether (Marc)\n> > \n> > I can go for pg_createdb too.\n> > \n> > > \n> > > Well, since this is not on the immediate agenda I'll leave it open for a\n> > > while, but you see the leading vote getter.\n> > > \n> > > In addition I'll add a configure option --with-pragmatic-scrappy which\n> > > will prevent the installation of the scripts altogether.\n> > \n> > I like that.\n> \n> Okay, who invited this guy? :) How about drop'ng them unlessthe configure\n> with a --with-hand-holding option? or even --disable-sql? :)\n\nI like that even better. Watch out, it might become a standard option in\nautoconf soon.\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 13 Oct 1999 19:11:09 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Scripts again" }, { "msg_contents": "On Oct 13, Vince Vielhaber mentioned:\n\n> On Wed, 13 Oct 1999, Peter Eisentraut wrote:\n> \n> > Okay, I have the following voting results:\n> > \n> > 2 for pgcreatedb, pgdropdb, pgcreateuser, pgdropuser (Bruce, me)\n> > 1 for pg_createdb, pg_dropdb, etc. (Sergio K.)\n> > 1/2 for pgadduser, etc., or sth. like that (Marc)\n> > 1/2 for leave as is (Thomas)\n> > 1 for drop altogether (Marc)\n> \n> I vote with Thomas - leave as is. But I don't understand the tallying.\n> Did Thomas shrink so as to only get a halfa vote? :)\n\nHe didn't make his opinion exactly clear, except that the underscores are\na sign of the coming apocalypse. 
(If you ask Marc, he can probably give\nyou an alternate theory here...)\n\nI suppose that would subtract another half vote from the underscore\noption. With Bruce's defection and my own change of mind we now stand at\n\n0 pgblah\n3 1/2 pg_blah (Bruce, Sergio K, Dmitry S, -1/2 Thomas, myself)\n1/2 pgadddb (Marc)\n1 1/2 as is (Thomas, Vince)\n1 none (Marc)\n\nHmm, I guess that does it. pg_createdb and symlinks for one release with\nwarnings for deprecated forms.\n\nPerhaps we should really change the installation instructions to not make\nmention of the scripts, though, to enforce proper learning. But you were\nworking on that anyway, right?\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Wed, 13 Oct 1999 19:27:50 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Scripts again" }, { "msg_contents": "Peter Eisentraut wrote:\n\n> 0 pgblah\n> 3 1/2 pg_blah (Bruce, Sergio K, Dmitry S, -1/2 Thomas, myself)\n> 1/2 pgadddb (Marc)\n> 1 1/2 as is (Thomas, Vince)\n> 1 none (Marc)\n>\n> Hmm, I guess that does it. pg_createdb and symlinks for one release with\n> warnings for deprecated forms.\n\n Make the whole thing configurable and anyone should be happy.\n\n --pg_admin_script_prefix={pg_|pg|*empty*|*whatever_you_prefer*}\n --pg_admin_script_install={yes|no}\n\n It's not a joke. Someone might want to have his user account\n to have access to his production and test DB at the same\n time. So he could setup his PATH to both installations bin\n directories and configure the test DB to use a different\n default PGPORT and different script prefixes. Then\n pg_createdb would contact another postmaster than\n devel_createdb would do. Well, the installed binaries like\n psql would need some configurable prefix too then.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 13 Oct 1999 23:32:20 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Scripts again" }, { "msg_contents": "On Wed, 13 Oct 1999, Jan Wieck wrote:\n\n> Make the whole thing configurable and anyone should be happy.\n> \n> --pg_admin_script_prefix={pg_|pg|*empty*|*whatever_you_prefer*}\n> --pg_admin_script_install={yes|no}\n\nWay initially I was suggesting\n\t--enable-scripts=old|new|both|none (default new)\nbut Bruce found *that* too complicated.\n\nI can see your point here, accessing different db installations, but I\nthink that is a highly specialized case (and you should be using psql\nanyway, but that seems to be a culture issue).\n\nBut the scripts have no concept of default ports etc., that's in libpq. So\nyou can use pg_createdb for whatever your default install is, and\npg_createdb -p foo for your alternate installation. Or you could alias\nthis or something.\n\n(Btw., anyone else think a /etc/services entry is better than a hardwired\ndefault port, at least on the libpq side of things? Of course, I'm not\nsure about Windows here.)\n\n> \n> It's not a joke. Someone might want to have his user account\n> to have access to his production and test DB at the same\n> time. 
So he could setup his PATH to both installations bin\n> directories and configure the test DB to use a different\n> default PGPORT and different script prefixes. Then\n> pg_createdb would contact another postmaster than\n> devel_createdb would do. Well, the installed binaries like\n> psql would need some configurable prefix too then.\n> \n> \n> Jan\n> \n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 14 Oct 1999 13:10:20 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Scripts again" }, { "msg_contents": "On Thu, 14 Oct 1999, Peter Eisentraut wrote:\n\n> (Btw., anyone else think a /etc/services entry is better than a hardwired\n> default port, at least on the libpq side of things? Of course, I'm not\n> sure about Windows here.)\n\nWould require root intervention to install, which is something that we've\nalways avoided, and discouraged...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 14 Oct 1999 10:17:53 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Scripts again" }, { "msg_contents": "At 6:17 AM -0700 10/14/99, The Hermit Hacker wrote:\n>On Thu, 14 Oct 1999, Peter Eisentraut wrote:\n>\n>> (Btw., anyone else think a /etc/services entry is better than a hardwired\n>> default port, at least on the libpq side of things? Of course, I'm not\n>> sure about Windows here.)\n>\n>Would require root intervention to install, which is something that we've\n>always avoided, and discouraged...\n\nWouldn't it make it harder to build test installations running on the same\nmachine as a production server?\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n", "msg_date": "Thu, 14 Oct 1999 08:47:30 -0700", "msg_from": "\"Henry B. Hotz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Scripts again" }, { "msg_contents": "> > > Okay, I have the following voting results:\n> > > 2 for pgcreatedb, pgdropdb, pgcreateuser, pgdropuser (Bruce, me)\n> > > 1 for pg_createdb, pg_dropdb, etc. (Sergio K.)\n> > > 1/2 for pgadduser, etc., or sth. like that (Marc)\n> > > 1/2 for leave as is (Thomas)\n> > > 1 for drop altogether (Marc)\n> > I vote with Thomas - leave as is. But I don't understand the tallying.\n> > Did Thomas shrink so as to only get a halfa vote? :)\n> He didn't make his opinion exactly clear, except that the underscores are\n> a sign of the coming apocalypse. (If you ask Marc, he can probably give\n> you an alternate theory here...)\n\nOK, let me be clear. imho there is no strong consensus on this, which\nwould lead us toward *leave it as it is*! I'll put Marc (motto: \"no\nwusses!\") on the lunatic fringe for suggesting that we drop all user\nconveniences, but istm that we are solving a problem which isn't a\nproblem. And we are changing the user interface which has been in\nplace for (at least) the last three years based on no documented name\nspace conflict and no widely reported problems from users.\n\nI can see how some might want some clearer way to figure out available\npostgres command-line commands using ls and grep. 
If so, prepending\n\"pg\" will help, but forget the underscores and convince more of us\nthat it is necessary, please. Why should a regular user have to type\nthe extra two characters anyway? Should we mention in the v7.0 release\nnotes that we are now \"carpal tunnel hostile\"??\n\n> Hmm, I guess that does it. pg_createdb and symlinks for one release with\n> warnings for deprecated forms.\n\nack!\n\n> Perhaps we should really change the installation instructions to not make\n> mention of the scripts, though, to enforce proper learning. But you were\n> working on that anyway, right?\n\nsigh. We should get rid of all of the other language interfaces too;\nany real programmer can do it with psql and bash. Hmm, maybe even psql\nis a luxury ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 15 Oct 1999 05:24:27 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Scripts again" }, { "msg_contents": "Topic dropped.\n\nBut users should be made aware that all createdb does is call psql and the\ncreate database SQL statement, so they see how it fits together. But I\nthink I have to agree with your general point here.\n\n\t-Peter\n\nOn Fri, 15 Oct 1999, Thomas Lockhart wrote:\n\n> > > > Okay, I have the following voting results:\n> > > > 2 for pgcreatedb, pgdropdb, pgcreateuser, pgdropuser (Bruce, me)\n> > > > 1 for pg_createdb, pg_dropdb, etc. (Sergio K.)\n> > > > 1/2 for pgadduser, etc., or sth. like that (Marc)\n> > > > 1/2 for leave as is (Thomas)\n> > > > 1 for drop altogether (Marc)\n> > > I vote with Thomas - leave as is. But I don't understand the tallying.\n> > > Did Thomas shrink so as to only get a halfa vote? :)\n> > He didn't make his opinion exactly clear, except that the underscores are\n> > a sign of the coming apocalypse. (If you ask Marc, he can probably give\n> > you an alternate theory here...)\n> \n> OK, let me be clear. imho there is no strong consensus on this, which\n> would lead us toward *leave it as it is*! I'll put Marc (motto: \"no\n> wusses!\") on the lunatic fringe for suggesting that we drop all user\n> conveniences, but istm that we are solving a problem which isn't a\n> problem. And we are changing the user interface which has been in\n> place for (at least) the last three years based on no documented name\n> space conflict and no widely reported problems from users.\n> \n> I can see how some might want some clearer way to figure out available\n> postgres command-line commands using ls and grep. If so, prepending\n> \"pg\" will help, but forget the underscores and convince more of us\n> that it is necessary, please. Why should a regular user have to type\n> the extra two characters anyway? Should we mention in the v7.0 release\n> notes that we are now \"carpal tunnel hostile\"??\n> \n> > Hmm, I guess that does it. pg_createdb and symlinks for one release with\n> > warnings for deprecated forms.\n> \n> ack!\n> \n> > Perhaps we should really change the installation instructions to not make\n> > mention of the scripts, though, to enforce proper learning. But you were\n> > working on that anyway, right?\n> \n> sigh. We should get rid of all of the other language interfaces too;\n> any real programmer can do it with psql and bash. 
Hmm, maybe even psql\n> is a luxury ;)\n> \n> - Thomas\n> \n> \n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 15 Oct 1999 11:36:35 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Scripts again" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Topic dropped.\n> \n> But users should be made aware that all createdb does is call psql and the\n> create database SQL statement, so they see how it fits together. But I\n> think I have to agree with your general point here.\n> \n\nMaybe it should just echo out the commands given through psql, unless \nsome switch (like -s for silent) is given ?\n\n-----------------\nHannu\n", "msg_date": "Fri, 15 Oct 1999 13:20:15 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Scripts again" }, { "msg_contents": "> > Perhaps we should really change the installation instructions to not make\n> > mention of the scripts, though, to enforce proper learning. But you were\n> > working on that anyway, right?\n>\n> sigh. We should get rid of all of the other language interfaces too;\n> any real programmer can do it with psql and bash. Hmm, maybe even psql\n> is a luxury ;)\n\n Real programmers don't use a database at all. If they can,\n they even avoid using a filesystem or any other help of an\n OS, because they love to know exactly where their data is\n left.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 15 Oct 1999 12:28:27 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Scripts again" }, { "msg_contents": "On Fri, 15 Oct 1999, Jan Wieck wrote:\n\n> > > Perhaps we should really change the installation instructions to not make\n> > > mention of the scripts, though, to enforce proper learning. But you were\n> > > working on that anyway, right?\n> >\n> > sigh. We should get rid of all of the other language interfaces too;\n> > any real programmer can do it with psql and bash. Hmm, maybe even psql\n> > is a luxury ;)\n> \n> Real programmers don't use a database at all. If they can,\n> they even avoid using a filesystem or any other help of an\n> OS, because they love to know exactly where their data is\n> left.\n\nYou use an OS?\n\nOkay, people, let's cut the crap. It wasn't my idea. Someone asked about\nit and Bruce said something to the effect that this was something that you\nwanted to do anyway. So I asked around for a vote and got a result. 
I\ncouldn't guess that this could cause so much heartbreak with some people.\nLet's forget about it and let people use whatever tool (with whatever\nname) they damn well want.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 15 Oct 1999 12:49:27 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Scripts again" }, { "msg_contents": "On Fri, 15 Oct 1999, Thomas Lockhart wrote:\n\n> > > > Okay, I have the following voting results:\n> > > > 2 for pgcreatedb, pgdropdb, pgcreateuser, pgdropuser (Bruce, me)\n> > > > 1 for pg_createdb, pg_dropdb, etc. (Sergio K.)\n> > > > 1/2 for pgadduser, etc., or sth. like that (Marc)\n> > > > 1/2 for leave as is (Thomas)\n> > > > 1 for drop altogether (Marc)\n> > > I vote with Thomas - leave as is. But I don't understand the tallying.\n> > > Did Thomas shrink so as to only get a halfa vote? :)\n> > He didn't make his opinion exactly clear, except that the underscores are\n> > a sign of the coming apocalypse. (If you ask Marc, he can probably give\n> > you an alternate theory here...)\n> \n> OK, let me be clear. imho there is no strong consensus on this, which\n> would lead us toward *leave it as it is*! I'll put Marc (motto: \"no\n> wusses!\") on the lunatic fringe for suggesting that we drop all user\n> conveniences, but istm that we are solving a problem which isn't a\n> problem. And we are changing the user interface which has been in\n> place for (at least) the last three years based on no documented name\n> space conflict and no widely reported problems from users.\n\nHear hear!!\n\n> > Perhaps we should really change the installation instructions to not make\n> > mention of the scripts, though, to enforce proper learning. But you were\n> > working on that anyway, right?\n> \n> sigh. We should get rid of all of the other language interfaces too;\n> any real programmer can do it with psql and bash. Hmm, maybe even psql\n> is a luxury ;)\n\nHrmmmmm...there's a thought, but doesn't that sort of negate your above?\n:)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 15 Oct 1999 09:07:49 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Scripts again" } ]
[ { "msg_contents": "500 pages isn't that much once you start rambling... I have an internal\ndocument here that's already passed the 100 page mark, and it has hardly\nanything in it - well it's about NT so it's not worth much any how ;-)\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]]\nSent: 13 October 1999 13:51\nTo: Peter Eisentraut\nCc: PostgreSQL-documentation; PostgreSQL-development\nSubject: Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book\n\n\n> On Wed, 13 Oct 1999, Bruce Momjian wrote:\n> \n> > Not sure how to merge current documentation into it. I would like\nto\n> > point them to URL locations as much as possible. I thought if I\ngive\n> > them enough to get started, and to understand how the current docs\nfit\n> > together, that would be good.\n> \n> Personally, I always think that computer books that point you to URLs\nto\n> get the complete information are less than desirable. The very point\nof\n> reading the book is that you don't have to get up to your computer all\nthe\n> time. Books should be self-contained and add to the existing\ndocumentation\n> since otherwise I won't need it.\n> \n> Also, think about the fact that the online documentation might change\nmore\n> quickly than a book is published. In a magazine you can do that, but\nin a\n> book that's questionable. I have a few books that are only about two\nyears\n> old and the information in them is still very valid, but the URLs with\nall\n> the examples and all don't work anymore. Who knows why, I don't have\nthe\n> time to find out.\n> \n\nI just don't want to produce a 500 page book to cover these topics.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n\n************\n", "msg_date": "Wed, 13 Oct 1999 14:07:34 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" } ]
[ { "msg_contents": "Hi,\n\nI have compiled postgres on a few different Linux boxes (all Red Hat\ni386).\n\nMy question is regarding the psql program, on some of the machines\nup/down cursor lets me scroll back and forth through the history buffer,\non other machines this luxury is gone and an up cursor results in.\n\ntemplate1=> select date('now');\n date\n----------\n01-01-2000\n(1 row)\n\ntemplate1=> ^[[A\n\n\nWhy is this?\n\nAre there any libraries required by pgsql that maybe are present on some\nmachines and not others hence the different behaviour?\n\nThanks,\n\nNigel Tamplin.\n", "msg_date": "Wed, 13 Oct 1999 15:25:16 +0100", "msg_from": "Nigel Tamplin <[email protected]>", "msg_from_op": true, "msg_subject": "psql history buffer." }, { "msg_contents": "Nigel Tamplin <[email protected]> writes:\n> Are there any libraries required by pgsql that maybe are present on some\n> machines and not others hence the different behaviour?\n\nPrecisely. If the GNU readline library is not present when psql is\nconfigured/built, you don't get history.\n\nIIRC, some people have been burnt because they have installed\nlibreadline.a (or .so), but not readline.h. You need both, or\nconfigure will decide to omit the feature.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Oct 1999 11:13:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql history buffer. " } ]
[ { "msg_contents": "Marc threw this idea at me one day, thougth i'd bring it up.\n\nApparently Oracle has a fold out poster type deal. It shows file\nstructure of teh server, flowcharts etc. It's the kind of thing\nthat geeks put up on their(our) walls. I think it would be a simple\n/good addition to an publication.\n\n\nJeff MacDonald\[email protected]\n\n===================================================================\n So long as the Universe had a beginning, we can suppose it had a \ncreator, but if the Universe is completly self contained , having \nno boundry or edge, it would neither be created nor destroyed\n It would simply be.\n===================================================================\n\n\n", "msg_date": "Wed, 13 Oct 1999 11:57:55 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": true, "msg_subject": "Bood Idea" } ]
[ { "msg_contents": "ORIGINALLY: [email protected], Tue 21 Sep 1999 16:48:57 +0200\n\nADDENDUM: The problem seems to remain in PostgreSQL 6.5.2. I have found\nrelated comments here at pgsql-hackers since about a year ago; is there any\ngeneral advice?? \n\n[Chapter 39, Programmer's Guide / PostgreSQL 6.4]\nWHAT DO I MAKE WRONG??\n\nDid I try to abuse PostgreSQL dread or was there just a mistake??\nHow do you realize nested relations if not by OIDs??\nDid I misunderstand the purpose of SQL functions on composite types??\n\nWho has an answer??\n\nYou offer SQL functions that return tuples as composite types, but once\nI try to use it by another SQL function with the same class as\nparameter, I receive the error message:\n\n Function '...' has bad return type 18826\n\n/* Example:\n\n CREATE TABLE class (attribute text);\n\n CREATE FUNCTION tuple() RETURNS class\n AS 'SELECT \\'content\\'::text AS attribute;'\n LANGUAGE 'sql';\n\n CREATE FUNCTION get_attribute(class) RETURNS text\n AS 'SELECT $1.attribute AS result;'\n LANGUAGE 'sql';\n\n SELECT get_attribute(tuple());\n\n*/\n\nIf the composite type producer is put into an attribute, whilst trying\nto read from a table containing the composite attribute I will receive:\n\n Functions on sets are not yet supported\n\n/* Example:\n\n CREATE TABLE container (composite_attribute class);\n\n INSERT INTO container VALUES (tuple());\n\n SELECT get_attribute(composite_attribute) FROM container;\n\n*/\n\nIt seems that SQL functions with composite parameters do only work on\n'raw' tuples of a relational table, don't they??\n\nOn the other hand, the attribute(class) function works quite well on\ncomposite returns of SQL functions*.\nBut the interesting thing, reading from composite attributes, fails with\nmessage:\n\n init_fcache: cache lookup failed for procedure 136280528\n\n/* Example:\n\n SELECT attribute(composite_attribute) FROM container;\n\n*/\n\n* There are seemingly some fatal exceptions, which lead to:\n\npqReadData() -- backend closed the channel unexpectedly\n This probably means the backend terminated\n abnormally before or while processing the request.\n We have lost the connection to the backend, so\n further processing is impossible. Terminated.\nnick@weg:~> ~( >;->)\n\n/* Example:\n\n CREATE FUNCTION get_class(class) RETURNS class\n AS 'SELECT $1.attribute AS attribute;'\n LANGUAGE 'sql';\n\n SELECT attribute(get_class(composite_attribute)) FROM container;\n\n*/\n\n@your service@your service@your service@your service@your service@your service@\nJ�rg R. Rudnick\nSoftware Developer\nEMail: [email protected]\n\n", "msg_date": "Wed, 13 Oct 1999 18:00:55 +0100", "msg_from": "=?iso-8859-1?Q?J=F6rg?= Rudnick <[email protected]>", "msg_from_op": true, "msg_subject": "SQL Functions on Composite Types - what's the purpose??" } ]
[ { "msg_contents": "mydb=> create table roid (roid oid, rtext text);\nCREATE\nmydb=> create table rtext ( rtext text );\nCREATE\nmydb=> create rule roidset as on insert to rtext do insert into roid values ( new.oid, new.rtext );\nCREATE\nmydb=> insert into rtext values('text1');\nINSERT 17681 1\nmydb=> insert into rtext values('text2');\nINSERT 17683 1\nmydb=> insert into rtext values('text3');\nINSERT 17685 1\nmydb=> select oid,* from rtext;\n oid|rtext\n-----+-----\n17681|text1\n17683|text2\n17685|text3\n(3 rows)\n\nmydb=> select oid,* from roid;\n oid|roid|rtext\n-----+----+-----\n17680| |text1\n17682| |text2\n17684| |text3\n(3 rows)\n\n", "msg_date": "Wed, 13 Oct 1999 18:50:40 +0100", "msg_from": "=?iso-8859-2?Q?Daniel_P=E9der?= <[email protected]>", "msg_from_op": true, "msg_subject": "the oid is uknown during execution of rule..insert ? (psql ver 6.5.2)" } ]
[ { "msg_contents": "First of all, great work. I love it. Just some minor nitpits:\n\nMy mail address while correct in the text is incorrect in the graphic. The\nmail address listed there is no longer valid, so mail may bounce.\n\nAs for the text, I have done next to nothing for multi-byte support. I guess\nthis one's incorrect.\n\nAlso we list Dr. Andrew Martin (btw if we do list titles, I have a Dr. too\n:-)) as having done the ecpg interface. I don't like to offend him, but\nAndrew what part of the interface did you write? The original source was\ndone by Linus Tolke AFAIK.\n\nMichael\n-- \nDr. Michael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Wed, 13 Oct 1999 20:21:40 +0200", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "The new globe" }, { "msg_contents": "Dr. Michael Meskes wrote:\n^^^\n\n>\n> First of all, great work. I love it. Just some minor nitpits:\n\n Why do so many people like such a simple graphic? :-)\n\n> My mail address while correct in the text is incorrect in the graphic. The\n> mail address listed there is no longer valid, so mail may bounce.\n\n Hmmm - the address in the graphic, the one in the text and\n the mailto: hyperlink on it are all the same:\n <[email protected]> - or do I need new glasses?\n\n> As for the text, I have done next to nothing for multi-byte support. I guess\n> this one's incorrect.\n>\n> Also we list Dr. Andrew Martin (btw if we do list titles, I have a Dr. too\n> :-)) as having done the ecpg interface. I don't like to offend him, but\n> Andrew what part of the interface did you write? The original source was\n> done by Linus Tolke AFAIK.\n\n Now it starts, so let's grab the chance.\n\n Well, the ~wieck developers page was just a quick hack to\n demonstrate how my version would look like. But it seems to\n be the right time to tidy up the entire content before I\n release the lock and hand it back to Bruce and Vince. I\n wondered all the time I worked on it why there's a pin for\n Neil while he's not mentioned in the text at all?!?\n\n Would ANYONE who's listed or pinned on the page/graphic, and\n those who want to be, please drop me a little note including:\n\n o the complete name, maybe title (if they want) and\n location (like {in|near} Hamburg, Germany). For\n locations usually unknown in the world (like Harsefeld in\n my case) we use a bigger town close to that and say near.\n o the LAT/LON position of their location so I don't have to\n lookup everyone in my maps,\n o the correct eMail address,\n o and most important, the contributions they have made to\n PostgreSQL.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 13 Oct 1999 23:20:15 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] The new globe" }, { "msg_contents": "> Well, the ~wieck developers page was just a quick hack to\n> demonstrate how my version would look like. But it seems to\n> be the right time to tidy up the entire content before I\n> release the lock and hand it back to Bruce and Vince. 
I\n> wondered all the time I worked on it why there's a pin for\n> Neil while he's not mentioned in the text at all?!?\n> \n> Would ANYONE who's listed or pinned on the page/graphic, and\n> those who want to be, please drop me a little note including:\n> \n> o the complete name, maybe title (if they want) and\n> location (like {in|near} Hamburg, Germany). For\n> locations usually unknown in the world (like Harsefeld in\n> my case) we use a bigger town close to that and say near.\n> o the LAT/LON position of their location so I don't have to\n> lookup everyone in my maps,\n> o the correct eMail address,\n> o and most important, the contributions they have made to\n> PostgreSQL.\n\nYes, please folks, let's send him the updates.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 17:38:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] The new globe" }, { "msg_contents": "> > Would ANYONE who's listed or pinned on the page/graphic, and\n> > those who want to be, please drop me a little note including:\n> >\n> > o the complete name, maybe title (if they want) and\n> > location (like {in|near} Hamburg, Germany). For\n> > locations usually unknown in the world (like Harsefeld in\n> > my case) we use a bigger town close to that and say near.\n> > o the LAT/LON position of their location so I don't have to\n> > lookup everyone in my maps,\n> > o the correct eMail address,\n> > o and most important, the contributions they have made to\n> > PostgreSQL.\n>\n> Yes, please folks, let's send him the updates.\n\n Just uploaded the latest version. You need a Shift+Reload to\n get the latest image corresponding to the map. I polished it\n up a little, filled water into some lakes and added some text\n to the lower half.\n\n Added are:\n\n Oleg Bartunov,\n Byron Nikolaidis (just the pin) and\n Lamar Owen.\n\n Now the east coast is more packed - just like europe. :-)\n\n In the text section I've added another cathegory \"Non-code\n contributors\". That's where I added Oleg.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 14 Oct 1999 16:13:15 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] The new globe" }, { "msg_contents": "> > Yes, please folks, let's send him the updates.\n> \n> Just uploaded the latest version. You need a Shift+Reload to\n> get the latest image corresponding to the map. I polished it\n> up a little, filled water into some lakes and added some text\n> to the lower half.\n> \n> Added are:\n> \n> Oleg Bartunov,\n> Byron Nikolaidis (just the pin) and\n> Lamar Owen.\n> \n> Now the east coast is more packed - just like europe. :-)\n> \n> In the text section I've added another cathegory \"Non-code\n> contributors\". That's where I added Oleg.\n\nI think you can put Vince there too. I removed the section because only\nVince was there in the past.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 14 Oct 1999 14:06:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] The new globe" }, { "msg_contents": "On Wed, Oct 13, 1999 at 11:20:15PM +0200, Jan Wieck wrote:\n> Hmmm - the address in the graphic, the one in the text and\n> the mailto: hyperlink on it are all the same:\n> <[email protected]> - or do I need new glasses?\n\nI get [email protected] if I go over my place in the graphic.\n\n> o the complete name, maybe title (if they want) and\n> location (like {in|near} Hamburg, Germany). For\n> locations usually unknown in the world (like Harsefeld in\n> my case) we use a bigger town close to that and say near.\n\nDr. Michael Meskes, near Dusseldorf.\n\n> o the LAT/LON position of their location so I don't have to\n> lookup everyone in my maps,\n\nSince I'm already in, do you still need this?\n\n> o the correct eMail address,\n\[email protected]\n\n> o and most important, the contributions they have made to\n> PostgreSQL.\n\nMaintaining ecpg since version 0.2.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Fri, 15 Oct 1999 15:11:04 +0200", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] The new globe" }, { "msg_contents": ">\n> On Wed, Oct 13, 1999 at 11:20:15PM +0200, Jan Wieck wrote:\n> > Hmmm - the address in the graphic, the one in the text and\n> > the mailto: hyperlink on it are all the same:\n> > <[email protected]> - or do I need new glasses?\n>\n> I get [email protected] if I go over my place in the graphic.\n\n You're right - I need nu ones.\n\n>\n> > o the complete name, maybe title (if they want) and\n> > location (like {in|near} Hamburg, Germany). For\n> > locations usually unknown in the world (like Harsefeld in\n> > my case) we use a bigger town close to that and say near.\n>\n> Dr. Michael Meskes, near Dusseldorf.\n\n I made it D&uuml;sseldorf, or do you insist on Dusseldorf?\n :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 15 Oct 1999 16:43:40 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] The new globe" }, { "msg_contents": "On Fri, Oct 15, 1999 at 04:43:40PM +0200, Jan Wieck wrote:\n> I made it D&uuml;sseldorf, or do you insist on Dusseldorf?\n> :-)\n\nOf course not. But I'd prefer if everyone would know Erkelenz of course. :-)\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Fri, 15 Oct 1999 21:10:22 +0200", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] The new globe" } ]
[ { "msg_contents": "Hello,\n\nI agree with this point of view : the granularity of the authentication is not small enough to allow a good setup of access security to the PG databases.\nI plan to setup a database backed web servers :\n* the databases are stored on one Linux box,\n* the Apache servers are on another,\n* all machines are exposed to all attacks from the Internet (and there are a lot)\n* some databases must be feed via ODBC connections from workstations.\nI can setup :\n* the firewall on Linux to allow rough and low-level security restrictions,\n* the pg_hba.conf can be setup to allow connections from the Apache box only\n* there is still a problem for the access to the database themselves : site 1 should access database 1, and not database 2, but there should have the least password in the calling scripts\n* etc...\n\nI already posted a message concerning security, but nobody seems to be concerned about this. I read the advices at www.cert.org, and since then, I became paranoiac...\nI don't know exactly how it would be better to do, but a KISS solution would be good (I don't want to setup a Kerberos authentications for instance, because it could work badly with simple workstations updating data via ODBC).\n\nNicolas Huillard\n\n-----Message d'origine-----\nDe:\tOleg Bartunov [SMTP:[email protected]]\nDate:\tjeudi 14 octobre 1999 00:11\n�:\tPeter Eisentraut\nCc:\tLincoln Yeoh; [email protected]; [email protected]\nObjet:\tRe: [HACKERS] Re: [GENERAL] How do I activate and change the postgres user's password?\n\nHi,\n\nfollowin this thread, I think\nIt would be useful to allow user to connect to database he owned (created)\nwithout password even if pg_hba.conf is configured with password requirement\nto this database. Or owner of database could maintain list of\nusers/groups whom he granted trusted connection. After user connects\nusual grant priviliges could works. Currently it's a pain to\nwork with authentification system - I have to input my password\nevery time I use psql and moreover I had to specify it in\nperl scripts I developed. Sometimes it's not easy to maintain secure\nfile permissions espec. if several developers share common work.\nAny user (even not postgres user) could use stealed password to connects\nto your database. In my proposal, security is rely on local login\nsecurity. You already passed password control. There are another checks\nlike priviliges. You write your scripts without hardcoded passwords !\nOf course this could be just an option in case you need \"paranoic\" security.\nHaving more granulated privilege types as Mysql does would only make\nmy proposal more secure. You're allowed to connect, but owner of database\ncould restrict you even list of tables, indices et. all.\n\n\tRegards,\n\n \tOleg\n\nPS.\n I didn't find any plans to improve authen. in TODO\n\nOn Wed, 13 Oct 1999, Peter Eisentraut wrote:\n\n> Date: Wed, 13 Oct 1999 21:56:15 +0200 (CEST)\n> From: Peter Eisentraut <[email protected]>\n> To: Lincoln Yeoh <[email protected]>\n> Cc: [email protected], [email protected]\n> Subject: [HACKERS] Re: [GENERAL] How do I activate and change the postgres user's password?\n> \n> On Oct 13, Lincoln Yeoh mentioned:\n> \n> > Then I have problems logging in as ANY user. Couldn't figure out what the\n> > default password for the postgres user was. Only after some messing around\n> > I found that I could log on as the postgres user with the password \\N. 
Not\n> > obvious, at least to me.\n> \n> There is a todo item for the postgres user to have a password by default.\n> I'm not sure though how that would be done. Probably in initdb. (?)\n> \n> > I only guessed it after looking at the pg_pwd file and noticing a \\N there.\n> > Is this where the passwords are stored? By the way should they be stored in\n> > the clear and in a 666 permissions file? How about hashing them with some\n> > salt?\n> \n> I had this on my personal things-to-consider-working-on list but I don't\n> see an official todo item. I am personally not sure why this is not done\n> but authentication and security are not most people's specialty around here.\n> (including me)\n> \n> > 1) There is no obvious way to specify the password for users when you\n> > create a user using the supplied shell script createuser. One has to resort\n> > to psql and stuff.\n> \n> Aah. Another misguided user. Some people are of the opinion that using the\n> createuser scripts is a bad idea because it gives you the wrong impression\n> of how things work. (All createuser does is call psql.) Of course, we\n> could somehow put a password prompt in there, I'll put that on the above\n> mentioned list.\n> \n> > 2) Neither is there an obvious and easy way to change the user's password.\n> \n> alter user joe with password \"foo\";\n> \n> I'm not sure how obvious it is but it's certainly easy.\n> \n> > 3) You can specify a password for a user by using pg_passwd and stick it\n> > into a separate password file, but then there really is no link between\n> > createuser and pg_passwd. \n> \n> This shows how bad the idea of the scripts was in the first place.\n> \n> > I find the bundled scripts and their associated documentation make things\n> > very nonintuitive when one switches from a blind trust postgres to an\n> > authenticated postgres. \n> \n> So that would put your vote in the \"drop altogether\" column? Voting is\n> still in progress!\n> \n> \t-Peter\n> \n> -- \n> Peter Eisentraut Sernanders vaeg 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n> \n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n************\n", "msg_date": "Thu, 14 Oct 1999 10:39:07 +0200", "msg_from": "Nicolas Huillard <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: [GENERAL] How do I activate and change the postgres\n\tuser's password?" }, { "msg_contents": "hi..\n\n> * there is still a problem for the access to the database themselves : site\n> 1 should access database 1, and not database 2, but there should have the\n> least password in the calling scripts \n\na quick thought: if you are really paranoid, set up different installations of\npostgres, even if on the same box... don't run them on the default port, set up\nseperate pg_hba files and it should keep everything QUITE seperate.\n\n> I already posted a message concerning security, but nobody seems to be\n> concerned about this. I read the advices at www.cert.org, and since then, I\n> became paranoiac... \n\nas a side note, CERT sucks. they know security, if only because they know about\nmuch of the cracking activity on the net, via reports. however, they are\nclose-mouthed about it all. 
they don't offer solutions, don't require vendors\nto produce solutions and don't tell the public about the problems until the\nvendor says \"ok, tell 'em now\", which is usually FAR too late. why do you think\nthey lose most of their star players (such as the guy who wrote SATAN?)? A:\nfrustration.\n\nthere are MUCH better security sites/sources than CERT. e.g. security portal.\n\n-- \nAaron J. Seigo\nSys Admin\n", "msg_date": "Thu, 14 Oct 1999 11:11:05 -0600", "msg_from": "\"Aaron J. Seigo\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Re: [GENERAL] How do I activate and change the postgres\n\tuser's password?" } ]
[ { "msg_contents": "I am not that paranoid...\nMaybe the balance between paranoia and \"administrability\" is important too. That's why the security thread is important on this Postgres mailing-lists. And the proposed TODO too.\nSide note : I'll check the security portal...\n\nNH\n\n-----Message d'origine-----\nDe:\tAaron J. Seigo [SMTP:[email protected]]\nDate:\tjeudi 14 octobre 1999 19:11\n�:\tNicolas Huillard; 'Oleg Bartunov'; 'Peter Eisentraut'\nCc:\t'Lincoln Yeoh'; '[email protected]'; '[email protected]'\nObjet:\tRE: [HACKERS] Re: [GENERAL] How do I activate and change the postgres user's password?\n\nhi..\n\n> * there is still a problem for the access to the database themselves : site\n> 1 should access database 1, and not database 2, but there should have the\n> least password in the calling scripts \n\na quick thought: if you are really paranoid, set up different installations of\npostgres, even if on the same box... don't run them on the default port, set up\nseperate pg_hba files and it should keep everything QUITE seperate.\n\n> I already posted a message concerning security, but nobody seems to be\n> concerned about this. I read the advices at www.cert.org, and since then, I\n> became paranoiac... \n\nas a side note, CERT sucks. they know security, if only because they know about\nmuch of the cracking activity on the net, via reports. however, they are\nclose-mouthed about it all. they don't offer solutions, don't require vendors\nto produce solutions and don't tell the public about the problems until the\nvendor says \"ok, tell 'em now\", which is usually FAR too late. why do you think\nthey lose most of their star players (such as the guy who wrote SATAN?)? A:\nfrustration.\n\nthere are MUCH better security sites/sources than CERT. e.g. security portal.\n\n-- \nAaron J. Seigo\nSys Admin\n\n", "msg_date": "Thu, 14 Oct 1999 20:37:20 +0200", "msg_from": "Nicolas Huillard <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: [GENERAL] How do I activate and change the postgres\n\tuser's password?" } ]
[ { "msg_contents": "I can't tab anymore in psql:\n\n\ttest=> CREATE TABLE friends (\n\ttest-> \n\tDisplay all 161 possibilities? (y or n)\n\nWhat is this. Looks like readline is assuming my tab means 'tab\ncompletion'. I don't have a problem with tab completion, but I like to\nuse tab when typing queries to indent my SQL.\n\nWhen did this happen?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 14 Oct 1999 18:03:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "TAB doesn't work in psql" }, { "msg_contents": "On Thu, 14 Oct 1999, Bruce Momjian wrote:\n\n> I can't tab anymore in psql:\n> \n> \ttest=> CREATE TABLE friends (\n> \ttest-> \n> \tDisplay all 161 possibilities? (y or n)\n> \n> What is this. Looks like readline is assuming my tab means 'tab\n> completion'. I don't have a problem with tab completion, but I like to\n> use tab when typing queries to indent my SQL.\n> \n> When did this happen?\n\nLast time you messed with your .inputrc maybe??? I have always known it\nthis way. I have made M-Tab to be insert-tab, it's not configured this way\nby default.\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 15 Oct 1999 11:30:39 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TAB doesn't work in psql" }, { "msg_contents": "> On Thu, 14 Oct 1999, Bruce Momjian wrote:\n> \n> > I can't tab anymore in psql:\n> > \n> > \ttest=> CREATE TABLE friends (\n> > \ttest-> \n> > \tDisplay all 161 possibilities? (y or n)\n> > \n> > What is this. Looks like readline is assuming my tab means 'tab\n> > completion'. I don't have a problem with tab completion, but I like to\n> > use tab when typing queries to indent my SQL.\n> > \n> > When did this happen?\n> \n> Last time you messed with your .inputrc maybe??? I have always known it\n> this way. I have made M-Tab to be insert-tab, it's not configured this way\n> by default.\n\nOh. Got it. I have modified my .inputrc. I guess I just thought it\nused to work.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 15 Oct 1999 10:23:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] TAB doesn't work in psql" } ]
[ { "msg_contents": "Hello Bruce, \n\nI've just submitted a patch to the patches list which\nimplements Oracle's COMMENT statement. It will \ninsert/update/delete the appropriate OID for the \ntable or column targeted for the comment. It should\napply cleanly against current. If it passes your\nscrutiny, I was wondering a couple of things:\n\n1. Might it be possible for psql \n (a.k.a Peter Eisentraut) to display the comments\n associated with tables, views, and columns in \n its \\d output? Or perhaps another \\ command?\n\n2. Should I write up SGML for it (as well as for \n TRUNCATE TABLE)?\n\n3. Should I expand it beyond ORACLE's syntax to \n include functions, types, triggers, rules, etc.?\n\nOn the TODO list its listed as:\n\nAllow pg_descriptions when creating types, tables,\ncolumns, and functions \n\nAnyways, \n\nHope its worth something,\n\nMike Mascari\n([email protected])\n\n\n\n=====\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n", "msg_date": "Thu, 14 Oct 1999 18:31:34 -0700 (PDT)", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "ORACLE COMMENT statement" }, { "msg_contents": "> Hello Bruce, \n> \n> I've just submitted a patch to the patches list which\n> implements Oracle's COMMENT statement. It will \n> insert/update/delete the appropriate OID for the \n> table or column targeted for the comment. It should\n> apply cleanly against current. If it passes your\n> scrutiny, I was wondering a couple of things:\n> \n> 1. Might it be possible for psql \n> (a.k.a Peter Eisentraut) to display the comments\n> associated with tables, views, and columns in \n> its \\d output? Or perhaps another \\ command?\n\nSure. \\dd does that already.\n\n> \n> 2. Should I write up SGML for it (as well as for \n> TRUNCATE TABLE)?\n\nI did that for Truncate. You can see it in the docs. If you want to\nwrite on on this, that would be good. It seems more complex.\n\n> \n> 3. Should I expand it beyond ORACLE's syntax to \n> include functions, types, triggers, rules, etc.?\n\nSure, why not. \\dd already handles it.\n\n> \n> On the TODO list its listed as:\n> \n> Allow pg_descriptions when creating types, tables,\n> columns, and functions \n\nYep. Removed the 'table' entry, and marked it as done.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 14 Oct 1999 21:54:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORACLE COMMENT statement" }, { "msg_contents": "> 2. Should I write up SGML for it (as well as for\n> TRUNCATE TABLE)?\n\nYes (though TRUNCATE TABLE has something already in\nref/truncate.sgml).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 15 Oct 1999 05:49:16 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ORACLE COMMENT statement" }, { "msg_contents": "On Oct 14, Mike Mascari mentioned:\n\n> 1. Might it be possible for psql \n> (a.k.a Peter Eisentraut) to display the comments\n> associated with tables, views, and columns in \n> its \\d output? Or perhaps another \\ command?\n\nI was sort of sitting in the holes for the below TODO to get finished. 
I\nwas thinking about the \\d output as well, perhaps one could switch it on\nand off somewhere. I'll see what I can do.\n\n> Allow pg_descriptions when creating types, tables,\n> columns, and functions \n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Sat, 16 Oct 1999 02:50:16 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORACLE COMMENT statement" }, { "msg_contents": "I have another question regarding this: It seems that you can attach a\ndescription (or comment, as it is) to any oid. (Not with this command, but\nin general). Is this restricted to system oids (like below 16000 or\nwhatever it was)? Otherwise comments on any user tuple could be created.\nPerhaps this should be explicitly prevented or allowed. In the latter case\nperhaps the comment statement could be tweaked. Not sure. Just wondering.\n\n\t-Peter\n\n\nOn Oct 14, Mike Mascari mentioned:\n\n> Hello Bruce, \n> \n> I've just submitted a patch to the patches list which\n> implements Oracle's COMMENT statement. It will \n> insert/update/delete the appropriate OID for the \n> table or column targeted for the comment. It should\n> apply cleanly against current. If it passes your\n> scrutiny, I was wondering a couple of things:\n> \n> 1. Might it be possible for psql \n> (a.k.a Peter Eisentraut) to display the comments\n> associated with tables, views, and columns in \n> its \\d output? Or perhaps another \\ command?\n> \n> 2. Should I write up SGML for it (as well as for \n> TRUNCATE TABLE)?\n> \n> 3. Should I expand it beyond ORACLE's syntax to \n> include functions, types, triggers, rules, etc.?\n> \n> On the TODO list its listed as:\n> \n> Allow pg_descriptions when creating types, tables,\n> columns, and functions \n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 16 Oct 1999 21:37:25 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORACLE COMMENT statement" } ]
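For reference, the Oracle-style syntax the patch described above implements would be used roughly like this (table and column names are invented for the example; only tables, views and columns are covered by the patch as posted):

    COMMENT ON TABLE employee IS 'One row per current staff member';
    COMMENT ON COLUMN employee.salary IS 'Monthly gross salary, in US dollars';

The text ends up as a pg_description entry tied to the object's OID, so psql's \dd command can then show it alongside the other stored descriptions.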
[ { "msg_contents": "Why am I writing a book?\n\nWell, when someone close to an open-source project considers a book deal\nwith a publisher, people have a right to ask.\n\nBasically, I am not doing for the money. If I was interested in only\nmoney, I wouldn't have got involved with PostgreSQL years ago.\n\nI am doing it because I think it will help PostgreSQL, I enjoy writing,\nmy wife and boss think it is a good idea, and because it will be neat to\nget my name on a book. Of course, I will get as many other names as\npossible in there. Marc, Thomas, Jolly Chen, Andrew Yu, and Stonebraker\nare already mentioned in my first chapter. There are many more pages to\ngo.\n\nHowever, the book is going to take time, and because I work on 100%\ncommission, I must make up some of my lost wages while working on the\nbook. A publisher allows me to do that.\n\nI have talked to Marc, Thomas Lockhart, and Tom Lane today by telephone,\nand they all said OK. I haven't made any final decisions yet, but I am\ngetting close. If people have concerns about this, I would like to hear\nthem.\n\nTo summarize, this book is a for first-time database users, so newbies\ncan get started with PostgreSQL. It will have topics on our advanced\nfeatures, but it will be a compliment to our fine documentation Thomas\nLockhart has put together, not a replacement.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 14 Oct 1999 22:37:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Why I am writing a book" } ]
[ { "msg_contents": "(on list; interesting topic)\n\n> Tom, I'm looking at an oracle script and seeing something I haven't seen\n> before. Is this something oracle specific? Can Postgres do it?\n> CREATE OR REPLACE procedure inc_char_for_sort_key (old_char IN OUT CHAR,\n> carry_p OUT INTEGER)\n> It goes on, but it's the CREATE OR REPLACE part that I'm interested in.\n\nThat's interesting. In our case, you would do a \"drop function\" and\nthen the \"create function\" as a two step process. Oracle simplifies it\na bit for you. I'm not sure why we throw an error if you drop a\nfunction which does not exist, since that makes it tough to blindly do\nthe \"drop/create\" pair. Why don't we just signal a warning or notice\ninstead?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 15 Oct 1999 04:43:13 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: can postgres do this?" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I'm not sure why we throw an error if you drop a\n> function which does not exist, since that makes it tough to blindly do\n> the \"drop/create\" pair. Why don't we just signal a warning or notice\n> instead?\n\nIt doesn't matter unless you are inside a transaction --- but I can\nsee the value of replacing a function definition inside a transaction.\n\nPerhaps \"no such <whatever>\" should be downgraded from ERROR to NOTICE\nfor all DROP-type commands. Another TODO item...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Oct 1999 10:13:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: can postgres do this? " } ]
[ { "msg_contents": "Psql \\dT shows int8 as > 18 digits. As far as I can tell, int8 is ~700\ndigits in precision, right?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 15 Oct 1999 00:50:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "int8 type" }, { "msg_contents": "On Oct 15, Bruce Momjian mentioned:\n\n> Psql \\dT shows int8 as > 18 digits. As far as I can tell, int8 is ~700\n> digits in precision, right?\n\nThat should probably be = 18 digits. (unless you consider \"almost 19\" as\n\"> 18\")\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 16 Oct 1999 02:42:55 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] int8 type" } ]
[ { "msg_contents": "Never mind. My use of 'calc' is wrong.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 15 Oct 1999 00:54:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "int8" } ]
[ { "msg_contents": " Hello,\n\n> Did you add your pg libraries path into /etc/ld.so.conf \n> (/usr/local/postgres/libs is mine) and run ldconfig?\n\nI searched the whole tree from / for ld.so.conf but did not find one.\n\nC.\n \n\n-- \nCarsten Huettl - <http://www.ahorn-Net.de>\npgp-key on request\n", "msg_date": "Fri, 15 Oct 1999 06:21:11 +0100", "msg_from": "\"Carsten Huettl\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] ld.so failed" }, { "msg_contents": "On Fri, Oct 15, 1999 at 06:21:11AM +0100, Carsten Huettl wrote:\n> Hello,\n> \n> > Did you add your pg libraries path into /etc/ld.so.conf \n> > (/usr/local/postgres/libs is mine) and run ldconfig?\n> \n> I searched the whole tree from / for ld.so.conf but did not find one.\n\n Add the path in /etc/rc.conf to ldconfig_paths. Then reboot.\n\n-- \n\n Regards,\n\n Sascha Schumann\n Consultant\n", "msg_date": "Fri, 15 Oct 1999 13:10:52 +0200", "msg_from": "Sascha Schumann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] ld.so failed" }, { "msg_contents": "> > Did you add your pg libraries path into /etc/ld.so.conf \n> > (/usr/local/postgres/libs is mine) and run ldconfig?\n> \n> I searched the whole tree from / for ld.so.conf but did not find one.\n\nfreebsd doesn't really use one.\n\nyou can manually add the path with \"ldconfig -R /path\"\n\nyou can make the change permanent by adding the path to the ldconfig parameter\nint /etc/rc.conf.\n\nalternately, you can include the ldconfig -R command in your\n/usr/local/etc/rc.d/pgsql.sh script, if you have one.\n\n-- \n[ Jim Mercer Reptilian Research [email protected] +1 416 410-5633 ]\n[ The telephone, for those of you who have forgotten, was a commonly used ]\n[ communications technology in the days before electronic mail. ]\n[ They're still easy to find in most large cities. -- Nathaniel Borenstein ]\n", "msg_date": "Fri, 15 Oct 1999 09:59:09 -0400 (EDT)", "msg_from": "[email protected] (Jim Mercer)", "msg_from_op": false, "msg_subject": "Re: [GENERAL] ld.so failed" }, { "msg_contents": "From: \[email protected] (Jim Mercer)\nSubject: \tRe: [GENERAL] ld.so failed\nTo: \[email protected] (Carsten Huettl)\nDate sent: \tFri, 15 Oct 1999 09:59:09 -0400 (EDT)\nCopies to: \[email protected], [email protected]\n\n> you can make the change permanent by adding the path to the ldconfig\n> parameter int /etc/rc.conf.\n> \nHow do I make this permanent if I need the \"-aout\" option with \nldconfig ?\n\nC.\n\n\n-- \nCarsten Huettl - <http://www.ahorn-Net.de>\npgp-key on request\n", "msg_date": "Wed, 20 Oct 1999 01:01:24 +0200", "msg_from": "\"Carsten Huettl\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] ld.so failed" }, { "msg_contents": "Hi everyone,\n\nShould inserts be so slow?\n\nI've written a perl script to insert 10 million records for testing\npurposes and it looks like it's going to take a LONG time with postgres.\nMySQL is about 150 times faster! I don't have any indexes on either. 
I am\nusing the DBI and relevant DBD for both.\n\nFor Postgres 6.5.2 it's slow with either of the following table structures.\ncreate table central ( counter serial, number varchar (12), name text,\naddress text );\ncreate table central ( counter serial, number varchar (12), name\nvarchar(80), address varchar(80));\n\nFor MySQL I used:\ncreate table central (counter int not null auto_increment primary key,\nnumber varchar(12), name varchar(80), address varchar(80));\n\nThe relevant perl portion is (same for both):\n\t\t$SQL=<<\"EOT\";\ninsert into central (number,name,address) values (?,?,?)\nEOT\n\t\t$cursor=$dbh->prepare($SQL);\n\n\twhile ($c<10000000) {\n\t\t$number=$c;\n\t\t$name=\"John Doe the number \".$c;\n\t\t$address=\"$c, Jalan SS$c/$c, Petaling Jaya\";\n\t\t$rv=$cursor->execute($number,$name,$address) or die(\"Error executing\ninsert!\",$DBI::errstr);\n\t\tif ($rv==0) {\n\t\t\tdie(\"Error inserting a record with database!\",$DBI::errstr);\n\t\t};\n\t\t$c++;\n\t\t$d++;\n\t\tif ($d>1000) {\n\t\t\tprint \"$c\\n\";\n\t\t\t$d=1;\n\t\t}\n\t}\n\n\n", "msg_date": "Wed, 20 Oct 1999 12:25:50 +0800", "msg_from": "Lincoln Yeoh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgres INSERTs much slower than MySQL?" }, { "msg_contents": "Try turning off Autocommit: MySQL doesn't support transactions, so that\nmight be what's causing the speed boost. Just change the connect line from:\n$pg_con=DBI->connect(\"DBI:Pg:....\nto\n$pg_con=DBI->connect(\"DBI:Pg(AutoCommit=>0):....\n\nand add \n\n$pg_con->commit\n\nbefore you disconnect. I may have the syntax wrong, so double check the\ndocs for the DBI and PG modules (perldoc DBD::Pg and perldoc DBI)\n\nAt 01:25 AM 10/20/99, Lincoln Yeoh wrote:\n>Hi everyone,\n>\n>Should inserts be so slow?\n>\n>I've written a perl script to insert 10 million records for testing\n>purposes and it looks like it's going to take a LONG time with postgres.\n>MySQL is about 150 times faster! I don't have any indexes on either. I am\n>using the DBI and relevant DBD for both.\n>\n>For Postgres 6.5.2 it's slow with either of the following table structures.\n>create table central ( counter serial, number varchar (12), name text,\n>address text );\n>create table central ( counter serial, number varchar (12), name\n>varchar(80), address varchar(80));\n>\n>For MySQL I used:\n>create table central (counter int not null auto_increment primary key,\n>number varchar(12), name varchar(80), address varchar(80));\n>\n>The relevant perl portion is (same for both):\n>\t\t$SQL=<<\"EOT\";\n>insert into central (number,name,address) values (?,?,?)\n>EOT\n>\t\t$cursor=$dbh->prepare($SQL);\n>\n>\twhile ($c<10000000) {\n>\t\t$number=$c;\n>\t\t$name=\"John Doe the number \".$c;\n>\t\t$address=\"$c, Jalan SS$c/$c, Petaling Jaya\";\n>\t\t$rv=$cursor->execute($number,$name,$address) or die(\"Error executing\n>insert!\",$DBI::errstr);\n>\t\tif ($rv==0) {\n>\t\t\tdie(\"Error inserting a record with database!\",$DBI::errstr);\n>\t\t};\n>\t\t$c++;\n>\t\t$d++;\n>\t\tif ($d>1000) {\n>\t\t\tprint \"$c\\n\";\n>\t\t\t$d=1;\n>\t\t}\n>\t}\n>\n>\n>\n>************\n>\n\n", "msg_date": "Wed, 20 Oct 1999 02:56:56 -0300", "msg_from": "Charles Tassell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgres INSERTs much slower than MySQL?" }, { "msg_contents": "Thanks.\n\nIt's now a lot faster. Now only about 5 or so times slower. 
Cool.\n\nBut it wasn't unexpected that I got the following after a while ;).\n\nNOTICE: BufferAlloc: cannot write block 990 for joblist/central\n \nNOTICE: BufferAlloc: cannot write block 991 for joblist/central\nDBD::Pg::st execute failed: NOTICE: BufferAlloc: cannot write block 991\nfor joblist/central\nError executing insert!NOTICE: BufferAlloc: cannot write block 991 for\njoblist/central\nDatabase handle destroyed without explicit disconnect. \n\nI don't mind that. I was actually waiting to see what would happen and\nmy jaw would have dropped if MVCC could handle Multi Versions with\n10,000,000 records! \n\nBut the trouble is postgres seemed to behave strangely after that error.\nThe select count(*) from central took so long that I gave up. I tried drop\ntable central, and so far it hasn't dropped yet. Single record selects\nstill work tho.\n\nWell next time I'll commit after a few thousand inserts. But still things\nshouldn't lock up like that right? It's only inserted a few more thousand\nrecords to the 50000 to 60000 records stage, so it's not a big table I'm\ndealing with.\n\nI cancelled the drop, killed postmaster (nicely), restarted it and tried\nvacuuming. Vacuuming found some errors, but now it has got stuck too:\nNOTICE: Index central_counter_key: pointer to EmptyPage (blk 988 off 52) -\nfixing \nNOTICE: Index central_counter_key: pointer to EmptyPage (blk 988 off 53) -\nfixing\nThen nothing for the past 5 minutes.\n \n\nLooks like I may have to manually clean things up with good ol rm. <sigh>.\nNot an urgent problem since this shouldn't happen in production.\n\nBy the way, the 999,999th record has been inserted into MySQL already. It's\npretty good at the rather limited stuff it does. \n\nBut Postgres' MVCC thing sounds real cool. Not as cool as a 10MegaRecord\nMVCC would be tho <grin>.\n\nMust try screwing up Oracle one of these days. I'm pretty good at messing\nthings up ;). \n\nCheerio,\n\nLink.\n\nAt 02:56 AM 20-10-1999 -0300, Charles Tassell wrote:\n>Try turning off Autocommit: MySQL doesn't support transactions, so that\n>might be what's causing the speed boost. Just change the connect line from:\n>$pg_con=DBI->connect(\"DBI:Pg:....\n>to\n>$pg_con=DBI->connect(\"DBI:Pg(AutoCommit=>0):....\n>\n>and add \n>\n>$pg_con->commit\n>\n>before you disconnect. I may have the syntax wrong, so double check the\n>docs for the DBI and PG modules (perldoc DBD::Pg and perldoc DBI)\n>\n>At 01:25 AM 10/20/99, Lincoln Yeoh wrote:\n>>Hi everyone,\n>>\n>>Should inserts be so slow?\n>>\n>>I've written a perl script to insert 10 million records for testing\n>>purposes and it looks like it's going to take a LONG time with postgres.\n>>MySQL is about 150 times faster! I don't have any indexes on either. 
I am\n>>using the DBI and relevant DBD for both.\n>>\n>>For Postgres 6.5.2 it's slow with either of the following table structures.\n>>create table central ( counter serial, number varchar (12), name text,\n>>address text );\n>>create table central ( counter serial, number varchar (12), name\n>>varchar(80), address varchar(80));\n>>\n>>For MySQL I used:\n>>create table central (counter int not null auto_increment primary key,\n>>number varchar(12), name varchar(80), address varchar(80));\n>>\n>>The relevant perl portion is (same for both):\n>>\t\t$SQL=<<\"EOT\";\n>>insert into central (number,name,address) values (?,?,?)\n>>EOT\n>>\t\t$cursor=$dbh->prepare($SQL);\n>>\n>>\twhile ($c<10000000) {\n>>\t\t$number=$c;\n>>\t\t$name=\"John Doe the number \".$c;\n>>\t\t$address=\"$c, Jalan SS$c/$c, Petaling Jaya\";\n>>\t\t$rv=$cursor->execute($number,$name,$address) or die(\"Error executing\n>>insert!\",$DBI::errstr);\n>>\t\tif ($rv==0) {\n>>\t\t\tdie(\"Error inserting a record with database!\",$DBI::errstr);\n>>\t\t};\n>>\t\t$c++;\n>>\t\t$d++;\n>>\t\tif ($d>1000) {\n>>\t\t\tprint \"$c\\n\";\n>>\t\t\t$d=1;\n>>\t\t}\n>>\t}\n>>\n>>\n>>\n>>************\n>>\n>\n>\n>\n\n", "msg_date": "Wed, 20 Oct 1999 15:38:12 +0800", "msg_from": "Lincoln Yeoh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgres INSERTs much slower than MySQL?" }, { "msg_contents": "Lincoln Yeoh wrote:\n> \n> It's now a lot faster. Now only about 5 or so times slower. Cool.\n> \n> But it wasn't unexpected that I got the following after a while ;).\n> \n> NOTICE: BufferAlloc: cannot write block 990 for joblist/central\n> \n> NOTICE: BufferAlloc: cannot write block 991 for joblist/central\n> DBD::Pg::st execute failed: NOTICE: BufferAlloc: cannot write block 991\n> for joblist/central\n> Error executing insert!NOTICE: BufferAlloc: cannot write block 991 for\n> joblist/central\n> Database handle destroyed without explicit disconnect.\n> \n> I don't mind that. I was actually waiting to see what would happen and\n> my jaw would have dropped if MVCC could handle Multi Versions with\n> 10,000,000 records!\n\nIt doesn't seem as MVCC problem. MVCC uses transaction ids,\nnot tuple ones, and so should work with any number of rows\nmodified by concurrent transaction... In theory... -:))\n\nVadim\n", "msg_date": "Wed, 20 Oct 1999 16:12:50 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgres INSERTs much slower than MySQL?" }, { "msg_contents": "At 04:12 PM 20-10-1999 +0800, Vadim Mikheev wrote:\n>It doesn't seem as MVCC problem. MVCC uses transaction ids,\n>not tuple ones, and so should work with any number of rows\n>modified by concurrent transaction... In theory... -:))\n\nOK. Dunno what I hit then. I wasn't modifying rows, I was inserting rows.\n\nHow many rows (blocks) can I insert before I have to do a commit?\n\nWell anyway the Postgres inserts aren't so much slower if I only commit\nonce in a while. Only about 3 times slower for the first 100,000 records.\nSo the subject line is now inaccurate :). Not bad, I like it. \n\nBut to fix the resulting problem I had to manually rm the files related to\nthe table. I also dropped the database to make sure ;). That's not good.\n\nCheerio,\n\nLink.\n\n", "msg_date": "Wed, 20 Oct 1999 16:33:18 +0800", "msg_from": "Lincoln Yeoh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgres INSERTs much slower than MySQL?" 
}, { "msg_contents": "Lincoln Yeoh wrote:\n> \n> At 04:12 PM 20-10-1999 +0800, Vadim Mikheev wrote:\n> >It doesn't seem as MVCC problem. MVCC uses transaction ids,\n> >not tuple ones, and so should work with any number of rows\n> >modified by concurrent transaction... In theory... -:))\n> \n> OK. Dunno what I hit then. I wasn't modifying rows, I was inserting rows.\n\nYou hit buffer manager/disk manager problems or eat all disk space.\nAs for \"modifying\" - I meant insertion, deletion, update...\n\n> How many rows (blocks) can I insert before I have to do a commit?\n\nEach transaction can have up to 2^32 commands.\n\n> Well anyway the Postgres inserts aren't so much slower if I only commit\n> once in a while. Only about 3 times slower for the first 100,000 records.\n> So the subject line is now inaccurate :). Not bad, I like it.\n\nHope that it will be much faster when WAL will be implemented...\n\nVadim\n", "msg_date": "Wed, 20 Oct 1999 16:38:09 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgres INSERTs much slower than MySQL?" }, { "msg_contents": "At 04:38 PM 20-10-1999 +0800, Vadim Mikheev wrote:\n>You hit buffer manager/disk manager problems or eat all disk space.\n>As for \"modifying\" - I meant insertion, deletion, update...\n\nThere was enough disk space (almost another gig more). So it's probably\nsome buffer manager problem. Is that the postgres buffer manager or is it a\nLinux one?\n\nAre you able to duplicate that problem? All I did was to turn off\nautocommit and start inserting. \n\n>> How many rows (blocks) can I insert before I have to do a commit?\n>Each transaction can have up to 2^32 commands.\n\nWow, that's cool.. Should be enough for everyone. I can't imagine anybody\nmaking 4 billion statements without committing anything, not even politicians!\n\n>> Well anyway the Postgres inserts aren't so much slower if I only commit\n>> once in a while. Only about 3 times slower for the first 100,000 records.\n>> So the subject line is now inaccurate :). Not bad, I like it.\n>\n>Hope that it will be much faster when WAL will be implemented...\n\nWhat's WAL? Is postgres going to be faster than MySQL? That would be pretty\nimpressive- transactions and all. Woohoo!\n\nHope it doesn't stand for Whoops, All's Lost :).\n\nCheerio,\n\nLink.\n\n", "msg_date": "Thu, 21 Oct 1999 09:08:15 +0800", "msg_from": "Lincoln Yeoh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgres INSERTs much slower than MySQL?" }, { "msg_contents": "I've seen WAL mentioned several times, but have yet to see anything\nabout what it is.\n\nHelp! :)\n\nWhat is WAL? Or is it something that is only known by the Illuminati? :)\n\nI did a search in the archives and came up empty, no hits. Not even the\nmessages which only mention it. Nothing, nada, zip, no \"gee I'm banging\nmy head against the wall trying...\" or anything else.\n\nAfter having read some of the messages in the archives today I have a\nconfession.\n\nI am very much pro Linux, *BSDs, et al, but I do most of my web browsing\nat work on a Win95 machine using Netscape. I know it's scary, but I am\nnot trying to be. Please forgive me. If at all possible I will try to\natone by installing RH 6.x on my machine at work, if I can do it where\nmy boss can boot (from a shutdown machine) into windows without knowing\nLinux exists. 
:)\n\nThanks,\n\nJimmie Houchin \n\nLincoln Yeoh wrote:\n> \n> At 04:38 PM 20-10-1999 +0800, Vadim Mikheev wrote:\n> >Hope that it will be much faster when WAL will be implemented...\n> \n> What's WAL? Is postgres going to be faster than MySQL? That would be pretty\n> impressive- transactions and all. Woohoo!\n> \n> Hope it doesn't stand for Whoops, All's Lost :).\n> \n> Cheerio,\n> \n> Link.\n", "msg_date": "Thu, 21 Oct 1999 15:34:15 -0500", "msg_from": "Jimmie Houchin <[email protected]>", "msg_from_op": false, "msg_subject": "What's WAL (wasRe: [GENERAL] Postgres INSERTs much slower than\n MySQL?)" }, { "msg_contents": "Lincoln Yeoh wrote:\n> \n> At 04:38 PM 20-10-1999 +0800, Vadim Mikheev wrote:\n> >You hit buffer manager/disk manager problems or eat all disk space.\n> >As for \"modifying\" - I meant insertion, deletion, update...\n> \n> There was enough disk space (almost another gig more). So it's probably\n> some buffer manager problem. Is that the postgres buffer manager or is it a\n> Linux one?\n> \n> Are you able to duplicate that problem? All I did was to turn off\n> autocommit and start inserting.\n\nI created table with text column and inserted 1000000 rows\nwith '!' x rand(256) without problems on Sun Ultra, 6.5.2\nI run postmaster only with -S flag.\nAnd while inserting I run select count(*) from _table_\nin another session from time to time - wonder what was\nreturned all the time before commit? -:))\n\n> >> Well anyway the Postgres inserts aren't so much slower if I only commit\n> >> once in a while. Only about 3 times slower for the first 100,000 records.\n> >> So the subject line is now inaccurate :). Not bad, I like it.\n> >\n> >Hope that it will be much faster when WAL will be implemented...\n> \n> What's WAL? Is postgres going to be faster than MySQL? That would be pretty\n ^^^^^^^^^^^^^^^^^^^^^^^\n No.\n\n> impressive- transactions and all. Woohoo!\n\nWAL is Write Ahead Log, transaction logging.\nThis will reduce # of fsyncs (among other things) Postgres has\nto perform now.\nTest above took near 38 min without -F flag and 24 min\nwith -F (no fsync at all).\nWith WAL the same test without -F will be near as fast as with\n-F now.\n\nBut what makes me unhappy right now is that with -F COPY FROM takes\nJUST 3 min !!! (And 16 min without -F)\nIsn't parsing/planning overhead toooo big ?!\n\nVadim\n", "msg_date": "Fri, 22 Oct 1999 13:52:40 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgres INSERTs much slower than MySQL?" }, { "msg_contents": "> WAL is Write Ahead Log, transaction logging.\n> This will reduce # of fsyncs (among other things) Postgres has\n> to perform now.\n> Test above took near 38 min without -F flag and 24 min\n> with -F (no fsync at all).\n> With WAL the same test without -F will be near as fast as with\n> -F now.\n> \n> But what makes me unhappy right now is that with -F COPY FROM takes\n> JUST 3 min !!! (And 16 min without -F)\n> Isn't parsing/planning overhead toooo big ?!\n\nYikes. I always thought it would be nice to try and cache query plans\nby comparing parse trees, and if they match cached versions, replace any\nconstants with new ones and use cached query plan. Hard to do right,\nthough.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Oct 1999 02:04:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgres INSERTs much slower than MySQL?" }, { "msg_contents": "Jimmie Houchin wrote:\n\n> What is WAL? Or is it something that is only known by the Illuminati? :)\n\nI understand your fears. I can also not follow all that the\nlinux cracks around me are talking about.\n\nI also was still using a Windows workstation for quite some\ntime, when we had already started our almost-linux-only startup\nbusiness writing Perl apps for PG.\n\nAnd I still have a dual boot laptop allowing me to use Win98\nand IE5.0 when I cannot get internet banking to work under linux\nor I want to watch a DVD movie or do some other multimedia stuff\nthat I don't understand what it does and only want to enjoy.\n\nI'm not one of the guys who enjoy configuring linux for two\ndays to get some device working. No it rather scares me.\nI enjoy to have working solutions though. (I also never looked\nclosely at the motors inside of any of the cars I owned).\n\nBut what I learned is, that still you have to do some learning.\nYou do it involuntarily as a Windows user. Linux gives you the\nfreedom to do it on your free choice - having great support\nfrom a lot of people who really know what they are doing.\n\nSo - get down from envying the \"Illuminati\" - build up a working\nlinux configuration - step by step - slowly. And ... if you are one of\nthe less brighter guy's like me - don't ask for too much at one\ntime.\n\nE.g.\nI still don't use an Office Suite under Linux. So I made a (very\nbasic) installation of Samba and use an old laptop with Win95 and\nmy Office97 Software on the Linux shares.\nNo sweat. And no apologies necessary.\nThere's nothing to be ashamed of to be a Windows user. Being a\nLinux user can sometimes make you a little proud, though. That's\na difference.\n\nSo lets just think that WAL means : use \"Windows And Linux\".\n\nGood Luck !\nChris\n-- \n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nChristian Rudow E-Mail: [email protected]\nThinX networked business services Stahlrain 10, CH-5200 Brugg\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "msg_date": "Fri, 22 Oct 1999 08:25:19 +0200", "msg_from": "Christian Rudow <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's WAL" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> But what makes me unhappy right now is that with -F COPY FROM takes\n>> JUST 3 min !!! (And 16 min without -F)\n>> Isn't parsing/planning overhead toooo big ?!\n\n> Yikes. I always thought it would be nice to try and cache query plans\n> by comparing parse trees, and if they match cached versions, replace any\n> constants with new ones and use cached query plan. Hard to do right,\n> though.\n\nBut INSERT ... VALUES(...) has such a trivial plan that it's hardly\nlikely to be worth caching. We probably ought to do some profiling to\nsee where the time is going, and see if we can't speed things up for\nthis simple case.\n\nIn the meantime, the conventional wisdom is still that you should use\nCOPY, if possible, for bulk data loading. 
(If you need default values\ninserted in some columns then this won't do...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Oct 1999 11:08:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Postgres INSERTs much slower than MySQL? " }, { "msg_contents": "Hello,\n\nThanks for the reply.\n\nActually I don't fear Linux, I embrace it. :)\nI am quite proficient with Windows but have never considered myself a\nWindows user. I don't own a Windows machine and currently have no\nintentions of it. I do however use it a work because my boss wanted a\nmachine which could play DOOM. :(\n\nI own 4 Macs at home, one of which I have LinuxPPC installed. I will be\npurchasing an Athlon PC this winter for my own use. I will install RH\n6.x on it. I have never really put too much thought on installing Linux\non my PC at work. There are only 2 employees here and one computer. My\nboss is not computer proficient. He plays games, primarily solitaire. I\nrun the computer and the business uses. Whatever I do Linux wise, I\nwanted it to be reasonably transparent to his usage, which is minimal.\nI'll install Mandrake Linux and put a games folder in the \"Start\" menu\nand put solitaire inside of it. He might not ever even no I'm no longer\nin Windows. :)\nHowever, when I'm not in the office and the computer is shutdown I\nwanted him to be able to start up the computer and boot straight into\nwindows. Typing \"win\" at a prompt might lose him. :)\n\nI don't envy the Illuminati, only their understanding of what WAL is. :)\n\nI really wasn't apologizing for being a Windows user, only for using\nWindows. :)\nIt is a totally involuntary action.\n\nI am ready for the day when there is enough educational and edutainment\nsoftware to be available that I can use Linux for my children computers.\nWe use computer extensively in their education. But for now it'll be the\nMacOS. Someday if I get permission from my wife I'll install Linux and\neither Sheepshaver or Mac-on-Linux.\n\nThanks for the opportunity for fun off topic banter. Now back to the\nshow. I'll keep an eye on the Illuminati and maybe one of them will slip\nand reveal the meaning of the Great WAL of PostgreSQL. :)\n\nLater,\n\nJimmie Houchin\n\nChristian Rudow wrote:\n> \n> Jimmie Houchin wrote:\n> \n> > What is WAL? Or is it something that is only known by the Illuminati? :)\n> \n> I understand your fears. I can also not follow all that the\n> linux cracks around me are talking about.\n> \n> I also was still using a Windows workstation for quite some\n> time, when we had already started our almost-linux-only startup\n> business writing Perl apps for PG.\n> \n> And I still have a dual boot laptop allowing me to use Win98\n> and IE5.0 when I cannot get internet banking to work under linux\n> or I want to watch a DVD movie or do some other multimedia stuff\n> that I don't understand what it does and only want to enjoy.\n> \n> I'm not one of the guys who enjoy configuring linux for two\n> days to get some device working. No it rather scares me.\n> I enjoy to have working solutions though. (I also never looked\n> closely at the motors inside of any of the cars I owned).\n> \n> But what I learned is, that still you have to do some learning.\n> You do it involuntarily as a Windows user. 
Linux gives you the\n> freedom to do it on your free choice - having great support\n> from a lot of people who really know what they are doing.\n> \n> So - get down from envying the \"Illuminati\" - build up a working\n> linux configuration - step by step - slowly. And ... if you are one of\n> the less brighter guy's like me - don't ask for too much at one\n> time.\n> \n> E.g.\n> I still don't use an Office Suite under Linux. So I made a (very\n> basic) installation of Samba and use an old laptop with Win95 and\n> my Office97 Software on the Linux shares.\n> No sweat. And no apologies necessary.\n> There's nothing to be ashamed of to be a Windows user. Being a\n> Linux user can sometimes make you a little proud, though. That's\n> a difference.\n> \n> So lets just think that WAL means : use \"Windows And Linux\".\n> \n> Good Luck !\n> Chris\n> --\n> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> Christian Rudow E-Mail: [email protected]\n> ThinX networked business services Stahlrain 10, CH-5200 Brugg\n> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> \n> ************\n", "msg_date": "Fri, 22 Oct 1999 10:32:27 -0500", "msg_from": "Jimmie Houchin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re: What's WAL" }, { "msg_contents": "> Thanks for the opportunity for fun off topic banter. Now back to the\n> show. I'll keep an eye on the Illuminati and maybe one of them will slip\n> and reveal the meaning of the Great WAL of PostgreSQL. :)\n> \n\nI pulled this from dejanews as the postgresql list archieve search is\ndown. I have no idea who said it......\n\nWAL is Write Ahead Log, transaction logging. This will reduce # of fsyncs\n(among other things) Postgres has to perform now. \n\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\nJames Thompson 138 Cardwell Hall Manhattan, Ks 66506 785-532-0561 \nKansas State University Department of Mathematics\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\n\n\n", "msg_date": "Fri, 22 Oct 1999 11:05:52 -0500 (CDT)", "msg_from": "James Thompson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re: What's WAL" }, { "msg_contents": "Hello,\n\nThanks,\n\nYes, the Illuminati has spoken. :)\n\nThe quote is from Vadim in the \"Re: [GENERAL] Postgres INSERTs much\nslower than MySQL?\" thread.\nAfter I sent my message and continued reading the new messages in my\nbox, I read the post to which you refer.\n\nNow the intrigue is over. I are educated. :)\n\nI was not aware that DejaNews had the postgresql mailing lists. I'll\nhave to look into this.\n\nThanks.\n\nJimmie Houchin\n\nJames Thompson wrote:\n> \n> > Thanks for the opportunity for fun off topic banter. Now back to the\n> > show. I'll keep an eye on the Illuminati and maybe one of them will slip\n> > and reveal the meaning of the Great WAL of PostgreSQL. :)\n> >\n> \n> I pulled this from dejanews as the postgresql list archieve search is\n> down. I have no idea who said it......\n> \n> WAL is Write Ahead Log, transaction logging. 
This will reduce # of fsyncs\n> (among other things) Postgres has to perform now.\n> \n> ->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\n> James Thompson 138 Cardwell Hall Manhattan, Ks 66506 785-532-0561\n> Kansas State University Department of Mathematics\n> ->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\n", "msg_date": "Fri, 22 Oct 1999 11:26:26 -0500", "msg_from": "Jimmie Houchin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re: What's WAL" }, { "msg_contents": "On Fri, 22 Oct 1999, Christian Rudow wrote:\n\n> So - get down from envying the \"Illuminati\" - build up a working\n> linux configuration - step by step - slowly. And ... if you are one of\n> the less brighter guy's like me - don't ask for too much at one\n> time.\n\nActually, 3 out of 4 Illuminati use *BSD ... \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 22 Oct 1999 21:20:59 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re: What's WAL" }, { "msg_contents": "That's why you don't hear anything about them anymore, they are stuck in \nthe past... :)\n\nAt 09:20 PM 10/22/99, The Hermit Hacker wrote:\n>On Fri, 22 Oct 1999, Christian Rudow wrote:\n>\n> > So - get down from envying the \"Illuminati\" - build up a working\n> > linux configuration - step by step - slowly. And ... if you are one of\n> > the less brighter guy's like me - don't ask for too much at one\n> > time.\n>\n>Actually, 3 out of 4 Illuminati use *BSD ...\n>\n>Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n>Systems Administrator @ hub.org\n>primary: [email protected] secondary: \n>scrappy@{freebsd|postgresql}.org\n>\n>\n>************\n\n", "msg_date": "Sat, 23 Oct 1999 03:29:42 -0300", "msg_from": "Charles Tassell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re: What's WAL" }, { "msg_contents": "\nWhats BSD ?? Hows itcompare to linux ?\n\nNewbie you know\n\nAt 09:20 PM 10/22/1999 -0300, The Hermit Hacker wrote:\n>On Fri, 22 Oct 1999, Christian Rudow wrote:\n>\n>> So - get down from envying the \"Illuminati\" - build up a working\n>> linux configuration - step by step - slowly. And ... if you are one of\n>> the less brighter guy's like me - don't ask for too much at one\n>> time.\n>\n>Actually, 3 out of 4 Illuminati use *BSD ... \n>\n>Marc G. Fournier ICQ#7615664 IRC Nick:\nScrappy\n>Systems Administrator @ hub.org \n>primary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org \n>\n>\n>************\n>\n>\n", "msg_date": "Sat, 23 Oct 1999 08:38:50 +0000", "msg_from": "Samy Elashmawy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re: What's WAL" }, { "msg_contents": "> WAL is Write Ahead Log, transaction logging.\n> This will reduce # of fsyncs (among other things) Postgres has\n> to perform now.\n> Test above took near 38 min without -F flag and 24 min\n> with -F (no fsync at all).\n> With WAL the same test without -F will be near as fast as with\n> -F now.\n\nThis sounds impressive. 
So I did some testings with my pgbench to see\nhow WAL improves the performance without -F using current.\n\n100000 records insertation + vacuum took 1:02 with -F (4:10 without -F)\n\nTPC-B like transactions(mix of insert/update/select) per second:\n\t21 (with -F)\n\t3 (without -F)\n\nI couldn't see any improvement against 6.5.2 so far. Maybe some part\nof WAL is not yet committed to current?\n---\nTatsuo Ishii\n", "msg_date": "Tue, 26 Oct 1999 09:59:49 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Postgres INSERTs much slower than MySQL? " }, { "msg_contents": "Tatsuo Ishii wrote:\n> \n> > With WAL the same test without -F will be near as fast as with\n> > -F now.\n> \n> This sounds impressive. So I did some testings with my pgbench to see\n> how WAL improves the performance without -F using current.\n> \n> 100000 records insertation + vacuum took 1:02 with -F (4:10 without -F)\n> \n> TPC-B like transactions(mix of insert/update/select) per second:\n> 21 (with -F)\n> 3 (without -F)\n> \n> I couldn't see any improvement against 6.5.2 so far. Maybe some part\n> of WAL is not yet committed to current?\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n...is not implemented.\n\nVadim\n", "msg_date": "Tue, 26 Oct 1999 10:00:24 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Postgres INSERTs much slower than MySQL?" }, { "msg_contents": "At 17:08 +0200 on 22/10/1999, Tom Lane wrote:\n\n\n> In the meantime, the conventional wisdom is still that you should use\n> COPY, if possible, for bulk data loading. (If you need default values\n> inserted in some columns then this won't do...)\n\nYes it would - in two steps. COPY to a temp table that only has the\nnon-default columns. Then INSERT ... SELECT ... from that temp table to\nyour \"real\" table.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n", "msg_date": "Wed, 27 Oct 1999 19:21:03 +0200", "msg_from": "Herouth Maoz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Postgres INSERTs much slower than MySQL?" } ]
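A rough sketch of the COPY-based bulk load suggested at the end of this thread, combined with the scratch-table trick for columns that need defaults; the scratch table and file path here are made up, and COPY ... FROM a server-side file needs database superuser rights (psql's \copy is the client-side alternative):

    CREATE TABLE central_load (number varchar(12), name text, address text);
    COPY central_load FROM '/tmp/central.dat';              -- bulk load, no per-row parse/plan cost
    INSERT INTO central (number, name, address)
        SELECT number, name, address FROM central_load;     -- serial column "counter" is filled by its default
    DROP TABLE central_load;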
[ { "msg_contents": "Adult VCDs ! \ncheck out: http://movieclips.home.dhs.org\n\nOver 20 Titles to download. Come Check it out !\nbupxcqrdittsjmoshcrthbkhydwbmytlgvsuotmhdnbfsmbzndnsedxknsfnndjkbgchnuskfqnkjczyyst\n\n", "msg_date": "Fri, 15 Oct 1999 05:28:09 GMT", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Adult VCDs ! Look here! 2997" } ]
[ { "msg_contents": "Hello again,\nI thought I should start making some small contibutions before 7.0.\n\nAttached is a patch to the old problem discussed feverly before 6.5.\nWhat is does:\nfor locale-enabled servers: \n\tuse index if last char before '%' is ascii.\nfor non-locale servers: \n\tdo not use locale if last char is non-ascii since it is wrong anyway.\n\nComments?\t\t \n\nregards,\n-- \n-----------------\nG�ran Thyni\nOn quiet nights you can hear Windows NT reboot!", "msg_date": "Fri, 15 Oct 1999 20:10:43 +0200", "msg_from": "Goran Thyni <[email protected]>", "msg_from_op": true, "msg_subject": "indexable and locale" }, { "msg_contents": "> Hello again,\n> I thought I should start making some small contibutions before 7.0.\n> \n> Attached is a patch to the old problem discussed feverly before 6.5.\n> What is does:\n> for locale-enabled servers: \n> \tuse index if last char before '%' is ascii.\n> for non-locale servers: \n> \tdo not use locale if last char is non-ascii since it is wrong anyway.\n> \n> Comments?\t\t \n\nI tried your patches but it seems malformed:\n\n\tpatch: **** unexpected end of file in patch\n\nSo this is a guess from reading them. I think your pacthes break\nnon-ascii multi-byte character sets data and should be surrounded by\n#ifdef LOCALE rather than replacing current codes surrounded by\n#ifndef LOCALE.\n---\nTatsuo Ishii\n\n", "msg_date": "Sat, 16 Oct 1999 11:00:34 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] indexable and locale " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> Attached is a patch to the old problem discussed feverly before 6.5.\n\n> ... I think your pacthes break\n> non-ascii multi-byte character sets data and should be surrounded by\n> #ifdef LOCALE rather than replacing current codes surrounded by\n> #ifndef LOCALE.\n\nI am worried about this patch too. Under MULTIBYTE could it\ngenerate invalid characters? Also, do all non-ASCII locales sort\ncodes 0-126 in the same order as ASCII? I didn't think they do,\nbut I'm not an expert.\n\nThe approach I was considering for fixing the problem was to use a\nloop that would repeatedly try to generate a string greater than the\nprefix string. The basic loop step would increment the rightmost\nbyte as Goran has done (or, if it's already up to the limit, chop\nit off and increment the next character position). Then test to\nsee whether the '<' operator actually believes the result is\ngreater than the given prefix, and repeat if not. This avoids making\nany strong assumptions about the sort order of different character\ncodes. However, there are two significant issues that would have\nto be surmounted to make it work reliably:\n\n1. In MULTIBYTE mode incrementing the rightmost byte might yield\nan illegal multibyte character. Some way to prevent or detect this\nwould be needed, lest it confuse the comparison operator. I think\nwe have some multibyte routines that could be used to check for\na valid result, but I haven't looked into it.\n\n2. I think there are some locales out there that have context-\nsensitive sorting rules, ie, a given character string may sort\ndifferently than you'd expect from considering the characters in\nisolation. 
For example, in German isn't \"ss\" treated specially?\nIf \"pqrss\" does not sort between \"pqrs\" and \"pqrt\" then the entire\npremise of *both* sides of the LIKE optimization falls apart,\nbecause you can't be sure what will happen when comparing a prefix\nstring like \"pqrs\" against longer strings from the database.\nI do not know if this is really a problem, nor what we could do\nto avoid it if it is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 16 Oct 1999 13:31:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] indexable and locale " }, { "msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> >> Attached is a patch to the old problem discussed feverly before 6.5.\n> \n> > ... I think your pacthes break\n> > non-ascii multi-byte character sets data and should be surrounded by\n> > #ifdef LOCALE rather than replacing current codes surrounded by\n> > #ifndef LOCALE.\n> \n> I am worried about this patch too. Under MULTIBYTE could it\n> generate invalid characters?\n\nI assume you are talking about following code fragment in the pacthes:\n\n\t prefix[prefixlen]++;\n\nThis would not generate invalid characters under MULTIBYTE since it skips the \nmulti-byte characters by:\n\n \tif ((unsigned) prefix[prefixlen] < 126)\n\nThis would not make non-ASCII multi-byte characters indexable,\nhowever.\n\n> Also, do all non-ASCII locales sort\n> codes 0-126 in the same order as ASCII? I didn't think they do,\n> but I'm not an expert.\n\nAs far as I know they do. At least all encodings MULTIBYTE mode can\nhandle have same code point as ASCII in 0-126 range. They have\nfollowing characteristics:\n\no code point 0x00-0x7f are compatible with ASCII.\n\no code point over 0x80 are variable length multi-byte characters. For\n example, ISO-8859-1 (Germany, Fernch etc...) has the multi-byte\n length to always 1, while EUC_JP (Japanese) has 2 to 3.\n\n> The approach I was considering for fixing the problem was to use a\n> loop that would repeatedly try to generate a string greater than the\n> prefix string. The basic loop step would increment the rightmost\n> byte as Goran has done (or, if it's already up to the limit, chop\n> it off and increment the next character position). Then test to\n> see whether the '<' operator actually believes the result is\n> greater than the given prefix, and repeat if not. This avoids making\n> any strong assumptions about the sort order of different character\n> codes. However, there are two significant issues that would have\n> to be surmounted to make it work reliably:\n\nSounds good idea.\n\n> 1. In MULTIBYTE mode incrementing the rightmost byte might yield\n> an illegal multibyte character. Some way to prevent or detect this\n> would be needed, lest it confuse the comparison operator. I think\n> we have some multibyte routines that could be used to check for\n> a valid result, but I haven't looked into it.\n\nI don't think this is an issue as long as locale isn't enabled. For\nmultibyte encodings (Japanese, Chinese etc..) locale is totally\nuseless and usually I don't enable it.\n\n> 2. I think there are some locales out there that have context-\n> sensitive sorting rules, ie, a given character string may sort\n> differently than you'd expect from considering the characters in\n> isolation. 
For example, in German isn't \"ss\" treated specially?\n> If \"pqrss\" does not sort between \"pqrs\" and \"pqrt\" then the entire\n> premise of *both* sides of the LIKE optimization falls apart,\n> because you can't be sure what will happen when comparing a prefix\n> string like \"pqrs\" against longer strings from the database.\n> I do not know if this is really a problem, nor what we could do\n> to avoid it if it is.\n\nI'm not sure about it but I am afraid it could be a problem. I think\nreal soultion would be supporting the standard CREATE COLLATION.\n---\nTatsuo Ishii\n\n", "msg_date": "Tue, 19 Oct 1999 09:55:17 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] indexable and locale " }, { "msg_contents": "\nApplied.\n\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Hello again,\n> I thought I should start making some small contibutions before 7.0.\n> \n> Attached is a patch to the old problem discussed feverly before 6.5.\n> What is does:\n> for locale-enabled servers: \n> \tuse index if last char before '%' is ascii.\n> for non-locale servers: \n> \tdo not use locale if last char is non-ascii since it is wrong anyway.\n> \n> Comments?\t\t \n> \n> regards,\n> -- \n> -----------------\n> G_ran Thyni\n> On quiet nights you can hear Windows NT reboot!\n\n> diff -c pgsql/src/backend/optimizer/path/indxpath.c work/pgsql/src/backend/optimizer/path/indxpath.c\n> *** pgsql/src/backend/optimizer/path/indxpath.c\tWed Oct 6 18:33:57 1999\n> --- work/pgsql/src/backend/optimizer/path/indxpath.c\tFri Oct 15 19:54:34 1999\n> ***************\n> *** 1934,1968 ****\n> \top = makeOper(optup->t_data->t_oid, InvalidOid, BOOLOID, 0, NULL);\n> \texpr = make_opclause(op, leftop, (Var *) con);\n> \tresult = lcons(expr, NIL);\n> - \n> \t/*\n> ! \t * In ASCII locale we say \"x <= prefix\\377\". This does not\n> ! \t * work for non-ASCII collation orders, and it's not really\n> ! \t * right even for ASCII. FIX ME!\n> ! \t * Note we assume the passed prefix string is workspace with\n> ! \t * an extra byte, as created by the xxx_fixed_prefix routines above.\n> \t */\n> ! #ifndef USE_LOCALE\n> ! \tprefixlen = strlen(prefix);\n> ! \tprefix[prefixlen] = '\\377';\n> ! \tprefix[prefixlen+1] = '\\0';\n> ! \n> ! \toptup = SearchSysCacheTuple(OPRNAME,\n> ! \t\t\t\t\t\t\t\tPointerGetDatum(\"<=\"),\n> ! \t\t\t\t\t\t\t\tObjectIdGetDatum(datatype),\n> ! \t\t\t\t\t\t\t\tObjectIdGetDatum(datatype),\n> ! \t\t\t\t\t\t\t\tCharGetDatum('b'));\n> ! \tif (!HeapTupleIsValid(optup))\n> ! \t\telog(ERROR, \"prefix_quals: no <= operator for type %u\", datatype);\n> ! \tconval = (datatype == NAMEOID) ?\n> ! \t\t(void*) namein(prefix) : (void*) textin(prefix);\n> ! \tcon = makeConst(datatype, ((datatype == NAMEOID) ? NAMEDATALEN : -1),\n> ! \t\t\t\t\tPointerGetDatum(conval),\n> ! \t\t\t\t\tfalse, false, false, false);\n> ! \top = makeOper(optup->t_data->t_oid, InvalidOid, BOOLOID, 0, NULL);\n> ! \texpr = make_opclause(op, leftop, (Var *) con);\n> ! \tresult = lappend(result, expr);\n> ! #endif\n> ! \n> \treturn result;\n> }\n> --- 1934,1970 ----\n> \top = makeOper(optup->t_data->t_oid, InvalidOid, BOOLOID, 0, NULL);\n> \texpr = make_opclause(op, leftop, (Var *) con);\n> \tresult = lcons(expr, NIL);\n> \t/*\n> ! \t * If last is in ascii range make it indexable,\n> ! \t * else let it be.\n> ! \t * FIXME: find way to use locate for this to support\n> ! \t * indexing of non-ascii characters.\n> \t */\n> ! \tprefixlen = strlen(prefix) - 1;\n> ! 
\telog(DEBUG, \"XXX1 %s\", prefix);\n> ! \tif ((unsigned) prefix[prefixlen] < 126)\n> ! \t {\n> ! \t prefix[prefixlen]++;\n> ! \t elog(DEBUG, \"XXX2 %s\", prefix);\n> ! \t optup = SearchSysCacheTuple(OPRNAME,\n> ! \t\t\t\t\tPointerGetDatum(\"<=\"),\n> ! \t\t\t\t\tObjectIdGetDatum(datatype),\n> ! \t\t\t\t\tObjectIdGetDatum(datatype),\n> ! \t\t\t\t\tCharGetDatum('b'));\n> ! \t if (!HeapTupleIsValid(optup))\n> ! \t elog(ERROR, \"prefix_quals: no <= operator for type %u\", datatype);\n> ! \t conval = (datatype == NAMEOID) ?\n> ! \t (void*) namein(prefix) : (void*) textin(prefix);\n> ! \t con = makeConst(datatype, ((datatype == NAMEOID) ? NAMEDATALEN : -1),\n> ! \t\t\t PointerGetDatum(conval),\n> ! \t\t\t false, false, false, false);\n> ! \t op = makeOper(optup->t_data->t_oid, InvalidOid, BOOLOID, 0, NULL);\n> ! \t expr = make_opclause(op, leftop, (Var *) con);\n> ! \t result = lappend(result, expr);\n> ! \t }\n> \treturn result;\n> }\n\n[application/x-gzip is not supported, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Nov 1999 20:49:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: indexable and locale" }, { "msg_contents": "> > Hello again,\n> > I thought I should start making some small contibutions before 7.0.\n> > \n> > Attached is a patch to the old problem discussed feverly before 6.5.\n> > What is does:\n> > for locale-enabled servers: \n> > \tuse index if last char before '%' is ascii.\n> > for non-locale servers: \n> > \tdo not use locale if last char is non-ascii since it is wrong anyway.\n> > \n> > Comments?\t\t \n> \n> I tried your patches but it seems malformed:\n> \n> \tpatch: **** unexpected end of file in patch\n\n\nYes, I had to apply it manually.\n\n> So this is a guess from reading them. I think your pacthes break\n> non-ascii multi-byte character sets data and should be surrounded by\n> #ifdef LOCALE rather than replacing current codes surrounded by\n> #ifndef LOCALE.\n\nCan you supply a patch against the current tree? I don't understand\nthis. Thanks.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Nov 1999 20:51:27 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] indexable and locale" }, { "msg_contents": "\nSorry, found messages of people objecting to the patch. 
Patch reversed\nout.\n\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Hello again,\n> I thought I should start making some small contibutions before 7.0.\n> \n> Attached is a patch to the old problem discussed feverly before 6.5.\n> What is does:\n> for locale-enabled servers: \n> \tuse index if last char before '%' is ascii.\n> for non-locale servers: \n> \tdo not use locale if last char is non-ascii since it is wrong anyway.\n> \n> Comments?\t\t \n> \n> regards,\n> -- \n> -----------------\n> G_ran Thyni\n> On quiet nights you can hear Windows NT reboot!\n\n> diff -c pgsql/src/backend/optimizer/path/indxpath.c work/pgsql/src/backend/optimizer/path/indxpath.c\n> *** pgsql/src/backend/optimizer/path/indxpath.c\tWed Oct 6 18:33:57 1999\n> --- work/pgsql/src/backend/optimizer/path/indxpath.c\tFri Oct 15 19:54:34 1999\n> ***************\n> *** 1934,1968 ****\n> \top = makeOper(optup->t_data->t_oid, InvalidOid, BOOLOID, 0, NULL);\n> \texpr = make_opclause(op, leftop, (Var *) con);\n> \tresult = lcons(expr, NIL);\n> - \n> \t/*\n> ! \t * In ASCII locale we say \"x <= prefix\\377\". This does not\n> ! \t * work for non-ASCII collation orders, and it's not really\n> ! \t * right even for ASCII. FIX ME!\n> ! \t * Note we assume the passed prefix string is workspace with\n> ! \t * an extra byte, as created by the xxx_fixed_prefix routines above.\n> \t */\n> ! #ifndef USE_LOCALE\n> ! \tprefixlen = strlen(prefix);\n> ! \tprefix[prefixlen] = '\\377';\n> ! \tprefix[prefixlen+1] = '\\0';\n> ! \n> ! \toptup = SearchSysCacheTuple(OPRNAME,\n> ! \t\t\t\t\t\t\t\tPointerGetDatum(\"<=\"),\n> ! \t\t\t\t\t\t\t\tObjectIdGetDatum(datatype),\n> ! \t\t\t\t\t\t\t\tObjectIdGetDatum(datatype),\n> ! \t\t\t\t\t\t\t\tCharGetDatum('b'));\n> ! \tif (!HeapTupleIsValid(optup))\n> ! \t\telog(ERROR, \"prefix_quals: no <= operator for type %u\", datatype);\n> ! \tconval = (datatype == NAMEOID) ?\n> ! \t\t(void*) namein(prefix) : (void*) textin(prefix);\n> ! \tcon = makeConst(datatype, ((datatype == NAMEOID) ? NAMEDATALEN : -1),\n> ! \t\t\t\t\tPointerGetDatum(conval),\n> ! \t\t\t\t\tfalse, false, false, false);\n> ! \top = makeOper(optup->t_data->t_oid, InvalidOid, BOOLOID, 0, NULL);\n> ! \texpr = make_opclause(op, leftop, (Var *) con);\n> ! \tresult = lappend(result, expr);\n> ! #endif\n> ! \n> \treturn result;\n> }\n> --- 1934,1970 ----\n> \top = makeOper(optup->t_data->t_oid, InvalidOid, BOOLOID, 0, NULL);\n> \texpr = make_opclause(op, leftop, (Var *) con);\n> \tresult = lcons(expr, NIL);\n> \t/*\n> ! \t * If last is in ascii range make it indexable,\n> ! \t * else let it be.\n> ! \t * FIXME: find way to use locate for this to support\n> ! \t * indexing of non-ascii characters.\n> \t */\n> ! \tprefixlen = strlen(prefix) - 1;\n> ! \telog(DEBUG, \"XXX1 %s\", prefix);\n> ! \tif ((unsigned) prefix[prefixlen] < 126)\n> ! \t {\n> ! \t prefix[prefixlen]++;\n> ! \t elog(DEBUG, \"XXX2 %s\", prefix);\n> ! \t optup = SearchSysCacheTuple(OPRNAME,\n> ! \t\t\t\t\tPointerGetDatum(\"<=\"),\n> ! \t\t\t\t\tObjectIdGetDatum(datatype),\n> ! \t\t\t\t\tObjectIdGetDatum(datatype),\n> ! \t\t\t\t\tCharGetDatum('b'));\n> ! \t if (!HeapTupleIsValid(optup))\n> ! \t elog(ERROR, \"prefix_quals: no <= operator for type %u\", datatype);\n> ! \t conval = (datatype == NAMEOID) ?\n> ! \t (void*) namein(prefix) : (void*) textin(prefix);\n> ! \t con = makeConst(datatype, ((datatype == NAMEOID) ? NAMEDATALEN : -1),\n> ! \t\t\t PointerGetDatum(conval),\n> ! \t\t\t false, false, false, false);\n> ! 
\t op = makeOper(optup->t_data->t_oid, InvalidOid, BOOLOID, 0, NULL);\n> ! \t expr = make_opclause(op, leftop, (Var *) con);\n> ! \t result = lappend(result, expr);\n> ! \t }\n> \treturn result;\n> }\n\n[application/x-gzip is not supported, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Nov 1999 20:52:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: indexable and locale" }, { "msg_contents": "\nHere is Tom's comment on the patch.\n\n> Tatsuo Ishii <[email protected]> writes:\n> >> Attached is a patch to the old problem discussed feverly before 6.5.\n> \n> > ... I think your pacthes break\n> > non-ascii multi-byte character sets data and should be surrounded by\n> > #ifdef LOCALE rather than replacing current codes surrounded by\n> > #ifndef LOCALE.\n> \n> I am worried about this patch too. Under MULTIBYTE could it\n> generate invalid characters? Also, do all non-ASCII locales sort\n> codes 0-126 in the same order as ASCII? I didn't think they do,\n> but I'm not an expert.\n> \n> The approach I was considering for fixing the problem was to use a\n> loop that would repeatedly try to generate a string greater than the\n> prefix string. The basic loop step would increment the rightmost\n> byte as Goran has done (or, if it's already up to the limit, chop\n> it off and increment the next character position). Then test to\n> see whether the '<' operator actually believes the result is\n> greater than the given prefix, and repeat if not. This avoids making\n> any strong assumptions about the sort order of different character\n> codes. However, there are two significant issues that would have\n> to be surmounted to make it work reliably:\n> \n> 1. In MULTIBYTE mode incrementing the rightmost byte might yield\n> an illegal multibyte character. Some way to prevent or detect this\n> would be needed, lest it confuse the comparison operator. I think\n> we have some multibyte routines that could be used to check for\n> a valid result, but I haven't looked into it.\n> \n> 2. I think there are some locales out there that have context-\n> sensitive sorting rules, ie, a given character string may sort\n> differently than you'd expect from considering the characters in\n> isolation. For example, in German isn't \"ss\" treated specially?\n> If \"pqrss\" does not sort between \"pqrs\" and \"pqrt\" then the entire\n> premise of *both* sides of the LIKE optimization falls apart,\n> because you can't be sure what will happen when comparing a prefix\n> string like \"pqrs\" against longer strings from the database.\n> I do not know if this is really a problem, nor what we could do\n> to avoid it if it is.\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Nov 1999 20:52:56 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] indexable and locale" } ]
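In SQL terms, the transformation argued about in this thread turns an anchored LIKE into extra range qualifications that an ordinary btree index can use; roughly (illustrative only, and only valid when the incremented prefix really sorts just above the original under the collation in use):

    -- original query
    SELECT * FROM t WHERE name LIKE 'abc%';
    -- what the planner effectively adds when it trusts the sort order
    SELECT * FROM t WHERE name >= 'abc' AND name <= 'abd' AND name LIKE 'abc%';

The whole discussion is about when that last-character increment ('c' to 'd') is safe: always in plain ASCII, only sometimes under locales with context-sensitive sorting, and under MULTIBYTE only if the result is still a valid character.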
[ { "msg_contents": "I've already posted my question about NOTICE message I'm getting\nfrom vacuum but didn't get any response :-(\n\nToday I decided to do some experiments to reproduce my problem.\n\nI run two independent processes:\n\n1. send parallel requests to apache server in loop. On this request server\n does following:\n\n LOCK TABLE hits IN SHARE ROW EXCLUSIVE MODE\n UPDATE hits SET count=count+1,last_access=now() WHERE msg_id=1468\n\n2. vacuum table hits in shell scripts\n\n#!/bin/sh\nwhile true ;do\n/usr/local/pgsql/bin/psql -tq discovery <vacuum_hits.sql\n rc=$?\n i=$((i+1))\n echo Vaccuming: $i, RC=$rc\n sleep 10;\ndone\n\nwhere vacuum_hits.sql:\n\nbegin work;\ndrop index hits_pkey;\ncreate unique index hits_pkey on hits(msg_id);\nend work;\nvacuum analyze hits(msg_id);\n\n\nSometimes I get the message:\n\nNOTICE: Index hits_pkey: NUMBER OF INDEX' TUPLES (173) IS NOT THE SAME AS HEAP' (174)\n\nalso several times I get:\nERROR: Can't create lock file. Is another vacuum cleaner running?\n If not, you may remove the pg_vlock file in the /usr/local/pgsql/data//base/discovery\n directory\n\nI had to remove this file by hand.\n\n\nI understand that experiment is a little bit artificial but I'd like\nto know what I'm doing wrong and what's the safest way to vacuum\ntable which is permanently updating. Actually, I got about\n12 requests/sec on my home P166, 64Mb, Linux 2.2.12 - each request is a\nplenty of database work. I have to vacuum table because at this rate\nI got very quick performance degradation.\n\nThis is 6.5.2, Linux\n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 15 Oct 1999 23:50:25 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "vacuum of permanently updating database " }, { "msg_contents": "Hi\n\nCould you try the current tree ?\n\nAs far as I see,there are 2 possibilities.\n\n1. Relation cache invalidation mechanism is much improved\n by Tom in the current tree.\n In your case,index tuples may be inserted into invalid index\n relation and vanish.\n\n2. If vacuum aborts after the internal commit,the transaction\n status is changed to be ABORT. This causes inconsistency.\n I have changed not to do so in the current tree.\n\n\nIn CURRENT tree,you may have to change vacuum_hits.sql\nas follows.\n\n\tdrop index hits_pkey;\n\tvacuum analyze hits(msg_id);\n\tcreate unique index hits_pkey on hits(msg_id);\n\nProbably DROP INDEX couldn't be executed inside transactions.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n>\n> I've already posted my question about NOTICE message I'm getting\n> from vacuum but didn't get any response :-(\n>\n> Today I decided to do some experiments to reproduce my problem.\n>\n> I run two independent processes:\n>\n> 1. send parallel requests to apache server in loop. On this request server\n> does following:\n>\n> LOCK TABLE hits IN SHARE ROW EXCLUSIVE MODE\n> UPDATE hits SET count=count+1,last_access=now() WHERE msg_id=1468\n>\n> 2. 
vacuum table hits in shell scripts\n>\n> #!/bin/sh\n> while true ;do\n> /usr/local/pgsql/bin/psql -tq discovery <vacuum_hits.sql\n> rc=$?\n> i=$((i+1))\n> echo Vaccuming: $i, RC=$rc\n> sleep 10;\n> done\n>\n> where vacuum_hits.sql:\n>\n> begin work;\n> drop index hits_pkey;\n> create unique index hits_pkey on hits(msg_id);\n> end work;\n> vacuum analyze hits(msg_id);\n>\n>\n> Sometimes I get the message:\n>\n> NOTICE: Index hits_pkey: NUMBER OF INDEX' TUPLES (173) IS NOT\n> THE SAME AS HEAP' (174)\n>\n> also several times I get:\n> ERROR: Can't create lock file. Is another vacuum cleaner running?\n> If not, you may remove the pg_vlock file in the\n> /usr/local/pgsql/data//base/discovery\n> directory\n>\n> I had to remove this file by hand.\n>\n>\n> I understand that experiment is a little bit artificial but I'd like\n> to know what I'm doing wrong and what's the safest way to vacuum\n> table which is permanently updating. Actually, I got about\n> 12 requests/sec on my home P166, 64Mb, Linux 2.2.12 - each request is a\n> plenty of database work. I have to vacuum table because at this rate\n> I got very quick performance degradation.\n>\n> This is 6.5.2, Linux\n>\n> \tRegards,\n>\n> \t\tOleg\n>\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n> ************\n>\n\n", "msg_date": "Sat, 16 Oct 1999 10:06:36 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] vacuum of permanently updating database " }, { "msg_contents": "Hiroshi,\n\nthank you for the message. I'll try current tree but if\nit's a bug (probable ?) why don't try to fix it for 6.5.3 ?\n\n\tRegards,\n\n\t Oleg\n\n\nOn Sat, 16 Oct 1999, Hiroshi Inoue wrote:\n\n> Date: Sat, 16 Oct 1999 10:06:36 +0900\n> From: Hiroshi Inoue <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: RE: [HACKERS] vacuum of permanently updating database \n> \n> Hi\n> \n> Could you try the current tree ?\n> \n> As far as I see,there are 2 possibilities.\n> \n> 1. Relation cache invalidation mechanism is much improved\n> by Tom in the current tree.\n> In your case,index tuples may be inserted into invalid index\n> relation and vanish.\n> \n> 2. If vacuum aborts after the internal commit,the transaction\n> status is changed to be ABORT. This causes inconsistency.\n> I have changed not to do so in the current tree.\n> \n> \n> In CURRENT tree,you may have to change vacuum_hits.sql\n> as follows.\n> \n> \tdrop index hits_pkey;\n> \tvacuum analyze hits(msg_id);\n> \tcreate unique index hits_pkey on hits(msg_id);\n> \n> Probably DROP INDEX couldn't be executed inside transactions.\n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected]\n> \n> >\n> > I've already posted my question about NOTICE message I'm getting\n> > from vacuum but didn't get any response :-(\n> >\n> > Today I decided to do some experiments to reproduce my problem.\n> >\n> > I run two independent processes:\n> >\n> > 1. send parallel requests to apache server in loop. On this request server\n> > does following:\n> >\n> > LOCK TABLE hits IN SHARE ROW EXCLUSIVE MODE\n> > UPDATE hits SET count=count+1,last_access=now() WHERE msg_id=1468\n> >\n> > 2. 
vacuum table hits in shell scripts\n> >\n> > #!/bin/sh\n> > while true ;do\n> > /usr/local/pgsql/bin/psql -tq discovery <vacuum_hits.sql\n> > rc=$?\n> > i=$((i+1))\n> > echo Vaccuming: $i, RC=$rc\n> > sleep 10;\n> > done\n> >\n> > where vacuum_hits.sql:\n> >\n> > begin work;\n> > drop index hits_pkey;\n> > create unique index hits_pkey on hits(msg_id);\n> > end work;\n> > vacuum analyze hits(msg_id);\n> >\n> >\n> > Sometimes I get the message:\n> >\n> > NOTICE: Index hits_pkey: NUMBER OF INDEX' TUPLES (173) IS NOT\n> > THE SAME AS HEAP' (174)\n> >\n> > also several times I get:\n> > ERROR: Can't create lock file. Is another vacuum cleaner running?\n> > If not, you may remove the pg_vlock file in the\n> > /usr/local/pgsql/data//base/discovery\n> > directory\n> >\n> > I had to remove this file by hand.\n> >\n> >\n> > I understand that experiment is a little bit artificial but I'd like\n> > to know what I'm doing wrong and what's the safest way to vacuum\n> > table which is permanently updating. Actually, I got about\n> > 12 requests/sec on my home P166, 64Mb, Linux 2.2.12 - each request is a\n> > plenty of database work. I have to vacuum table because at this rate\n> > I got very quick performance degradation.\n> >\n> > This is 6.5.2, Linux\n> >\n> > \tRegards,\n> >\n> > \t\tOleg\n> >\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: [email protected], http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> >\n> >\n> > ************\n> >\n> \n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sat, 16 Oct 1999 18:26:38 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] vacuum of permanently updating database " }, { "msg_contents": "> -----Original Message-----\n> From: Oleg Bartunov [mailto:[email protected]]\n> Sent: Saturday, October 16, 1999 11:27 PM\n> To: Hiroshi Inoue\n> Cc: [email protected]\n> Subject: RE: [HACKERS] vacuum of permanently updating database \n> \n> \n> Hiroshi,\n> \n> thank you for the message. I'll try current tree but if\n> it's a bug (probable ?) why don't try to fix it for 6.5.3 ?\n>\n\nYes it's a bug.\nBut as for the 1st bug,it requires a lot of changes to fix.\nSeems Bruce and Tom have thought that it's dangerous\nto apply them to REL6_5.\n\nAs for the 2nd bug,it is fixed easily by the following patch.\nIf there's no objection,I would commit it into REL6_5.\n\n[snip] \n\n> > \n> > Could you try the current tree ?\n> > \n> > As far as I see,there are 2 possibilities.\n> > \n> > 1. Relation cache invalidation mechanism is much improved\n> > by Tom in the current tree.\n> > In your case,index tuples may be inserted into invalid index\n> > relation and vanish.\n> > \n> > 2. If vacuum aborts after the internal commit,the transaction\n> > status is changed to be ABORT. 
This causes inconsistency.\n> > I have changed not to do so in the current tree.\n> > \n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n*** xact.c\t1999/09/10 07:57:08\t1.4\n--- xact.c\t1999/09/10 08:25:15\n***************\n*** 736,742 ****\n \t * this transaction id in the pg_log relation. We skip it\n \t * if no one shared buffer was changed by this transaction.\n \t */\n! \tif (SharedBufferChanged)\n \t\tTransactionIdAbort(xid);\n \n \tResetBufferPool();\n--- 736,742 ----\n \t * this transaction id in the pg_log relation. We skip it\n \t * if no one shared buffer was changed by this transaction.\n \t */\n! \tif (SharedBufferChanged && !TransactionIdDidCommit(xid))\n \t\tTransactionIdAbort(xid);\n \n \tResetBufferPool();\n\n", "msg_date": "Mon, 18 Oct 1999 08:59:12 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] vacuum of permanently updating database " } ]
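For what it is worth, the "Can't create lock file" message Oleg ran into is the classic stale-lock-file problem: vacuum takes an exclusive on-disk lock, and if the backend dies before the cleanup step, every later vacuum refuses to start until the file is removed by hand. A minimal sketch of that pattern (the path and wording here are only illustrative, not lifted from the real vacuum code):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    const char *lockpath = "base/discovery/pg_vlock";   /* example path only */
    int         fd = open(lockpath, O_CREAT | O_EXCL | O_WRONLY, 0600);

    if (fd < 0)
    {
        fprintf(stderr,
                "another vacuum seems to be running (or a stale lock file was left behind)\n");
        return 1;
    }

    /* ... the actual vacuuming work would happen here ... */

    close(fd);
    unlink(lockpath);       /* never reached if the process dies before this point */
    return 0;
}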
[ { "msg_contents": "\n-----Message d'origine-----\nDe : St�phane FILLON <[email protected]>\n� : pgsql-general <[email protected]>\nDate : samedi 16 octobre 1999 17:30\nObjet : Functions documentations\n\n\n>Hi !!\n>\n>I am looking for a complete list of functions we have under PostgreSQL.\n>\n>Where can I found this documentation ?\n>\n>If there is no documentation, I would be please to write one with samples.\n>In this case, send me your\n>experience with source for each functions you have try at my direct e-mail.\n>And I will update this\n>documentation weekly.\n>\n>\n>PS: For my personnal use, I have already make a small CookBook of\nPostgreSQL\n>(100 pages) with\n>lots of samples I have found reading the mailing-list and my own\nexperience.\n>\n>\n>Regards,\n>\n>Stephane FILLON.\n>\n\n", "msg_date": "Sat, 16 Oct 1999 17:30:56 +1100", "msg_from": "\"=?iso-8859-1?Q?St=E9phane_FILLON?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Tr: Functions documentations" }, { "msg_contents": "> >I am looking for a complete list of functions we have under PostgreSQL.\n> >Where can I found this documentation ?\n> >If there is no documentation, I would be please to write one with samples.\n> >In this case, send me your\n> >experience with source for each functions you have try at my direct e-mail.\n> >And I will update this documentation weekly.\n\nThere is a chapter on \"Functions\" in the Postgres SGML-based\ndocumentation. It is probably not complete, but does cover most of the\nmajor classes of functions. If you would like to update or modify\nthat, it would then be available in the main docs set.\n\n doc/src/sgml/func.sgml\n\n> >PS: For my personnal use, I have already make a small CookBook of\n> >PostgreSQL (100 pages) with lots of samples I have found reading\n> >the mailing-list and my own experience.\n\nIf you would like to contribute this, I'm sure it could be of help to\nothers. Our Tutorial and \"HowTo\" docs are the least actively\nmaintained and could use an infusion of information. 
Perhaps your docs\ncould be a Tutorial or User's Guide?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 18 Oct 1999 13:42:18 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tr: Functions documentations" }, { "msg_contents": "On Sat, 16 Oct 1999, [iso-8859-1] StО©╫phane FILLON wrote:\n\n> Date: Sat, 16 Oct 1999 17:30:56 +1100\n> From: \"[iso-8859-1] StО©╫phane FILLON\" <[email protected]>\n> To: pgsql-hackers <[email protected]>\n> Subject: [HACKERS] Tr: Functions documentations\n> \n> \n> -----Message d'origine-----\n> De : StО©╫phane FILLON <[email protected]>\n> О©╫ : pgsql-general <[email protected]>\n> Date : samedi 16 octobre 1999 17:30\n> Objet : Functions documentations\n> \n> \n> >Hi !!\n> >\n> >I am looking for a complete list of functions we have under PostgreSQL.\n> >\n> >Where can I found this documentation ?\n> >\n> >If there is no documentation, I would be please to write one with samples.\n> >In this case, send me your\n> >experience with source for each functions you have try at my direct e-mail.\n> >And I will update this\n> >documentation weekly.\n> >\n> >\n> >PS: For my personnal use, I have already make a small CookBook of\n> PostgreSQL\n> >(100 pages) with\n> >lots of samples I have found reading the mailing-list and my own\n> experience.\n\nCool, would be glad to see you cookbook in Postgres documentation.\nDo you have it online ?\n\n\tOleg\n\n> >\n> >\n> >Regards,\n> >\n> >Stephane FILLON.\n> >\n> \n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 18 Oct 1999 18:48:52 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tr: Functions documentations" } ]
[ { "msg_contents": "\n\n Hi hackers,\n\n 1) I programming to_char() routine (inspire with oracle, and with good \n advice from Thomas L.) Current version is on \n ftp://ftp2.zf.jcu.cz/users/zakkr/pg/ora_func.tar.gz, \n\n (In future I want implement more (oracle compatible routines (to_date, \n to_number.. and more)..)\n\n 2) I have comlete imlementation of MD5 routine for PqSQL \n (ftp://ftp2.zf.jcu.cz/users/zakkr/pg/md5.tar.gz). As base for source code\n is used code from Debian md5sum. In Debian is this code distributed \n _without_ some restriction under GPL. Is any problem add this code \n (if we want) to PgSQL contrib? \n\n (If it is not problem I can try make more routines based on md5\n (aggregate func. - md5_count() ...etc)).\n\n I enclose description for the current version of to_char(). Please, send me \n any commets.. \n\n\t\t\t\t\t\t\tZakkr\n\n------------------------------------------------------------------------------\nTO_CHAR(datetime, text)\n \n\t* now is not supported:\n\t\t- spelled-out SP suffix\n\t\t- AM/PM\n\t\t- ...and not supported number to character converting\n\t\tTO_CHAR(number, 'format')\n\n\t* now is supported:\n\n\t\tsuffixes:\n\t\t\tTH ot th\t- ordinal number\t\t\t\t\t\t\t\n\t\t\tFM\t\t- fill mode\n\t\n\t \tHH \t- hour of day (01-12)\n\t \tHH12\t- -- // --\n\t \tHH24\t- hour (00-24)\n\t \tMI\t- minute (00-59)\n\t \tSS\t- socond (00-59)\n\t \tSSSS\t- seconds past midnight (0-86399)\t\n\t \tY,YYY\t- year with comma (full PgSQL datetime range) digits) \n\t \tYYYY\t- year (4 and more (full PgSQL datetime range) digits) \n\t \tYYY\t- last 3 digits of year \n\t \tYY\t- last 2 digits of year \n\t \tY\t- last digit of year \n\t \tMONTH\t- full month name (upper) (9-letter)\n\t \tMonth\t- full month name - first character is upper (9-letter)\n\t \tmonth\t- full month name - all characters is upper (9-letter) \n\t\tMON\t- abbreviated month name (3-letter)\n\t\tMon\t- abbreviated month name (3-letter) - first character is upper \n\t\tmon\t- abbreviated month name (3-letter) - all characters is upper \n\t\tMM\t- month (01-12)\n\t \tDAY\t- full day name (upper) (9-letter)\n\t \tDay\t- full day name - first character is upper (9-letter)\n\t\tday\t- full day name - all characters is upper (9-letter)\n\t \tDY\t- abbreviated day name (3-letter) (upper)\n\t \tDy\t- abbreviated day name (3-letter) - first character is upper \n\t \tDy\t- abbreviated day name (3-letter) - all character is upper \n\t \tDDD\t- day of year (001-366)\n\t \tDD\t- day of month (01-31)\n\t \tD\t- day of week (1-7; SUN=1)\n\t \tWW\t- week number of year\n\t \tCC\t- century (2-digits)\n\t\tQ\t- quarter\n\t\tRM\t- roman numeral month (I=JAN; I-XII)\n\t\tW\t- week of month \n\t\tJ\t- julian day (days since January 1, 4712 BC)\n\n\t* Other:\n\t\t\\\n\t\t\t- must be use as double \\\\\n\t\t\t- if \\\\ is in front \" is \\\\ direct character\n\t\t\tex:\t\n\t\t\t\t\\\\\t-is->\t\\\n\t\t\t\t\\\\\"\t-is->\t\"\n\t\t\n\t\t\" of text \"\t\n\t\t\t- all between \" is output as text (not parsed) \t\t\n\t\t\t\n\t* Note:\n\t\n\t- as base for date and time is used full PostgreSQL DateTime range\n\n* Examples:\n\t\ntemplate1=> select to_char('now', 'HH24:MI:SS, Day, Month, Y,YYY');\nto_char\n-------------------------------------\n16:53:17, Friday , October , 1,999\n\ntemplate1=> select to_char('now', 'HH24:MI:SS, FMDay, FMMonth, Y,YYY');\nto_char\n--------------------------------\n16:55:47, Friday, October, 1,999\n\ntemplate1=> select to_char('now', 'DDDth DDDTH SSth Y,YYYth 
FMSSth');\nto_char\n----------------------------\n288th 288TH 02nd 1,999th 2nd\n\ntemplate1=> select to_char('now', 'Hello HH:MI:SS day');\nto_char\n------------------------\nHello 05:00:12 friday\n\ntemplate1=> select to_char('now', 'Hello \"day\" HH:MI:SS day');\nto_char\n----------------------------\nHello day 05:00:33 friday\n\ntemplate1=> select to_char('now', '\\\\\"Hello \"day\" HH:MI:SS FMday\\\\\"');\nto_char\n---------------------------\n\"Hello day 05:01:15 friday\"\n\ntemplate1=> select to_char('now', 'HH\\\\MI\\\\SS');\nto_char\n--------\n05\\10\\29\n\n---end-to_char()\n\n------------------------------------------------------------------------------\nORDINAL(int4, text)\n\t\n\t* Translate number to ordinal number and return this as text\n\n\n* Examples: \n\ntemplate1=> select ordinal(21212, 'TH');\nordinal\n-------\n21212ND\n\ntemplate1=> select ordinal(21212, 'th');\nordinal\n-------\n21212nd\n\n---end-ordinal()\n\n------------------------------------------------------------------------------\n\n\n \n\n", "msg_date": "Sat, 16 Oct 1999 15:01:02 +0200 (CEST)", "msg_from": "Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "to_char(), md5() (long)" }, { "msg_contents": "Zakkr wrote:\n \n > 2) I have comlete imlementation of MD5 routine for PqSQL \n > (ftp://ftp2.zf.jcu.cz/users/zakkr/pg/md5.tar.gz). As base for source cod\n >e\n > is used code from Debian md5sum. In Debian is this code distributed \n > _without_ some restriction under GPL. Is any problem add this code \n > (if we want) to PgSQL contrib? \n \nJust to clarify this, this text is from the copyright statement of the\nDebain dpkg package, which contains /usr/bin/md5sum:\n\n /usr/bin/md5sum is compiled from md5.[ch] (written by Colin Plumb in\n 1993 and modified by Ian Jackson in 1995) and md5sum.c (written by\n Branko Lankester in 1993 and modified by Colin Plumb in 1993 and Ian\n Jackson in 1995). The sources and the binary are all in the public\n domain.\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"But be ye doers of the word, and not hearers only, \n deceiving your own selves.\" James 1:22 \n\n\n", "msg_date": "Sat, 16 Oct 1999 17:27:40 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] to_char(), md5() (long) " }, { "msg_contents": "\n\nOn Sat, 16 Oct 1999, Oliver Elphick wrote:\n\n> Zakkr wrote:\n> \n> > 2) I have comlete imlementation of MD5 routine for PqSQL \n> > (ftp://ftp2.zf.jcu.cz/users/zakkr/pg/md5.tar.gz). As base for source cod\n> >e\n> > is used code from Debian md5sum. In Debian is this code distributed \n> > _without_ some restriction under GPL. Is any problem add this code \n> > (if we want) to PgSQL contrib? \n> \n> Just to clarify this, this text is from the copyright statement of the\n> Debain dpkg package, which contains /usr/bin/md5sum:\n> \n> /usr/bin/md5sum is compiled from md5.[ch] (written by Colin Plumb in\n> 1993 and modified by Ian Jackson in 1995) and md5sum.c (written by\n> Branko Lankester in 1993 and modified by Colin Plumb in 1993 and Ian\n> Jackson in 1995). The sources and the binary are all in the public\n> domain.\n\n Yes, if I good understand it is not problem (md5sum not has RSA or non-US\nrestriction (instead of other cryp. sofrware) ..or not? 
\n\n\t\t\t\t\t\t\tZakkr\n \n\n", "msg_date": "Mon, 18 Oct 1999 08:37:49 +0200 (CEST)", "msg_from": "Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] to_char(), md5() (long) " }, { "msg_contents": "Zakkr wrote:\n \n >> /usr/bin/md5sum is compiled from md5.[ch] (written by Colin Plumb in\n >> 1993 and modified by Ian Jackson in 1995) and md5sum.c (written by\n >> Branko Lankester in 1993 and modified by Colin Plumb in 1993 and Ian\n >> Jackson in 1995). The sources and the binary are all in the public\n >> domain.\n >\n > Yes, if I good understand it is not problem (md5sum not has RSA or non-US\n >restriction (instead of other cryp. sofrware) ..or not? \n \n`Public domain' means anybody can do whatever they like with it. The\nauthor has abandoned control altogether. This is a matter of copyright. It\nis entirely separate from USA's insane export-control regulations and patents. \n\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Delight thyself also in the LORD; and he shall give \n thee the desires of thine heart.\" Psalms 37:4\n\n\n", "msg_date": "Mon, 18 Oct 1999 12:33:52 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] to_char(), md5() (long) " }, { "msg_contents": "The following is the next-to-last paragraph on our main news page:\n\nRedHat RPMs for v6.5.1 on i386 machines are now available at\nftp://postgresql.org/pub/RPMS/ Please report any questions or problems\nto [email protected]. For details check out the Latest News. \n\nRPMs for v6.5.2 are available, built by Lamar, and we need to get\nthese posted at postgresql.org (they are shipping with the latest RH\nrelease already). Lamar, I dropped the ball on this; where would I\npick up the RPMs?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 19 Oct 1999 14:49:35 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Need refresh on main page..." }, { "msg_contents": "\n\nOn Tue, 19 Oct 1999, Lamar Owen wrote:\n\n> Vince Vielhaber wrote:\n> > Actually that's on our main page, which is just announcements. The news\n> > page (Latest News on the menu bar) gets quite involved. Would someone\n> > care to give me some updated text for that as well?\n> \n> \"RPM's for PostgreSQL 6.5.2 are now available, and shipped with RedHat\n> Linux version 6.1. Included in these RPMS:\n> \n\nHmm RedHat, and for Debian (and people which needn't Hat for comp-work:-) exists \npackage with 6.5.2?\n\n------------------------------------------------------------------------------\n<[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n------------------------------------------------------------------------------\n ...and cathedral dilapidate\n\n", "msg_date": "Tue, 19 Oct 1999 17:13:02 +0200 (CEST)", "msg_from": "Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Need refresh on main page..." 
}, { "msg_contents": "On Tue, 19 Oct 1999, Thomas Lockhart wrote:\n\n> The following is the next-to-last paragraph on our main news page:\n> \n> RedHat RPMs for v6.5.1 on i386 machines are now available at\n> ftp://postgresql.org/pub/RPMS/ Please report any questions or problems\n> to [email protected]. For details check out the Latest News. \n> \n> RPMs for v6.5.2 are available, built by Lamar, and we need to get\n> these posted at postgresql.org (they are shipping with the latest RH\n> release already). Lamar, I dropped the ball on this; where would I\n> pick up the RPMs?\n\nActually that's on our main page, which is just announcements. The news\npage (Latest News on the menu bar) gets quite involved. Would someone\ncare to give me some updated text for that as well?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 19 Oct 1999 11:13:27 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need refresh on main page..." }, { "msg_contents": "Thomas Lockhart wrote:\n \n> RPMs for v6.5.2 are available, built by Lamar, and we need to get\n> these posted at postgresql.org (they are shipping with the latest RH\n> release already). Lamar, I dropped the ball on this; where would I\n> pick up the RPMs?\n\nThere are now many sites -- the best connected of which is rufus.w3.org.\n\nIf you use wget, try\nwget ftp://rufus.w3.org/linux/redhat/redhat-6.1/SRPMS/SRPMS/postgr*\nto get the source rpm.\n\nBinaries for RedHat 6.1 are in \nftp://rufus.w3.org/linux/redhat/redhat-6.1/i386/RedHat/RPMS/postg*\n\nBinaries for RedHat 6.0 and RedHat 5.2 are only available on my site so\nfar:\nwget -r http://www.ramifordistat.net/postgres/RPMS\n\nAll binaries mentioned are Intel. RedHat 6.1, of course, also shipped\nfor sparc and Alpha -- rufus is a good mirror.\n\nI would like to hear from someone who will try the RedHat 6.1 RPMS on a\nRedHat 6._0_ system -- if they work fine, I'll remove the 6.0 rpms, and\nreplace with the 6.1 RPM's. I had already updated my systems, quite\nfoolishly, to 6.1, when I realized that RPMS built on 6.1 _might_ not\nrun on 6.0. I should have run the 6.1 RPMset on 6.0 first, but I\ndidn't. 6.1 is too nice to downgrade from, IMO.\n\nIf, OTOH, they don't work fine, well, I'll cross that bridge when I come\nto it. That bridge will involve me blowing RedHat 6.0 back into one\npartition on my dev box, just like 5.2 has its own partition.\n\nIf any developers who want to try the RPM's, but do not have RedHat and\ndon't want to pay $30 for a set of CD's, check out www.cheapbytes.com --\ncheapest place on the Internet for Linux and *BSD CD's. $1.99 US will\nget you a RedHat 6.1 CD (plus shipping). 
Of course, you can download the\nISO images and burn your own CD....\n\nBTW: Only one bug in the 6.5.2-1 RPMS has been reported to Bugzilla\n(developer.redhat.com/bugzilla) -- and that was the lack of\npostgresql-clients (needless to say, the bug was quickly resolved ;-)).\n\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 19 Oct 1999 11:18:12 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need refresh on main page..." }, { "msg_contents": "Vince Vielhaber wrote:\n> Actually that's on our main page, which is just announcements. The news\n> page (Latest News on the menu bar) gets quite involved. Would someone\n> care to give me some updated text for that as well?\n\n\"RPM's for PostgreSQL 6.5.2 are now available, and shipped with RedHat\nLinux version 6.1. Included in these RPMS:\n\n* Functional pgAccess 0.98\n* Patches to support the Alpha under RedHat Linux\n* The regression tests (postgresql-test*.rpm)\n* Updated man pages from Thomas Lockhart\n* Patches from Thomas Lockhart to gram.y\n* Migration scripts to aid in moving between database file formats\n* The PostgreSQL-HOWTO.\n\nTo install, simply use 'rpm -i' to install each RPM you want. A minimal\nserver installation is postgresql and postgresql-server. The first time\nthe init script is run, the database structure will automatically be\ncreated if one doesn't already exist.\n\nTo upgrade, simply use 'rpm -U' to upgrade each rpm. The\npostgresql-clients rpm found in versions prior to 6.5 has been split up\ninto several different packages, and will be removed in the upgrade. \nAfter performing 'rpm -U', please read the file\n'/usr/doc/postgresql-6.5.2/README.rpm' for instructions on completing\nthe upgrade.\n\nIf you make heavy use of large objects and other structures that are\ndifficult to dump with pg_dump, you will need to make other upgrade\nprovisions.\n\nMore detailed documentation is also available at\nhttp://www.ramifordistat.net\"\n\n???\n\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 19 Oct 1999 11:44:54 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need refresh on main page..." }, { "msg_contents": "On Tue, 19 Oct 1999, Lamar Owen wrote:\n\n> Vince Vielhaber wrote:\n> > Actually that's on our main page, which is just announcements. The news\n> > page (Latest News on the menu bar) gets quite involved. Would someone\n> > care to give me some updated text for that as well?\n> \n> \"RPM's for PostgreSQL 6.5.2 are now available, and shipped with RedHat\n> Linux version 6.1. Included in these RPMS:\n\nPerfect! Thank you!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 19 Oct 1999 11:54:48 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need refresh on main page..." }, { "msg_contents": "> Perfect! 
Thank you!\n\nI've fetched the RPMs from rufus.org and from Lamar's own site to give\na complete set of RPMs at ftp://postgresql.org/pub/{RPMS,SRPMS}.\n\nLamar, there is a slight difference in sizes of srpm files fetched\nusing wget from rufus.org and from your site:\n\n>From ramisartpuyaartelh:\n\n... 7456019 Sep 27 13:52 postgresql-6.5.2-1.src.rpm\n\n>From rufus (via wget):\n\n... 7456234 Sep 26 23:01 postgresql-6.5.2-1.src.rpm\n\nI've put your srpm on postgresql.org, but I'm not sure if it is the\nright choice. Why the difference??\n\n - Thomas\n\nbtw, I didn't know about wget; it solves all the problems I was\nworried about wrt updating postgresql.org from your site. Nice\nutility...\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 19 Oct 1999 16:27:51 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need refresh on main page..." }, { "msg_contents": "On Tue, 19 Oct 1999, Thomas Lockhart wrote:\n\n> > Perfect! Thank you!\n> \n> I've fetched the RPMs from rufus.org and from Lamar's own site to give\n> a complete set of RPMs at ftp://postgresql.org/pub/{RPMS,SRPMS}.\n\nWebsite's updated as well.\n\n[snip]\n\n> \n> btw, I didn't know about wget; it solves all the problems I was\n> worried about wrt updating postgresql.org from your site. Nice\n> utility...\n\nYeah it is. I use it all the time.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 19 Oct 1999 12:39:35 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need refresh on main page..." }, { "msg_contents": "Zakkr wrote:\n> Hmm RedHat, and for Debian (and people which needn't Hat for comp-work:-) exists\n> package with 6.5.2?\n\nI don't know if Oliver has updated the Debian packages (which he\nmaintains) for PostgreSQL to 6.5.2.\n\nAs for RPM's for other than RedHat, I'm working on that problem.\n\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 19 Oct 1999 14:03:15 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Need refresh on main page..." }, { "msg_contents": "Vince Vielhaber wrote:\n> > \"RPM's for PostgreSQL 6.5.2 are now available, and shipped with RedHat\n> > Linux version 6.1. Included in these RPMS:\n> \n> Perfect! Thank you!\n\nGlad to help.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 19 Oct 1999 14:05:48 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Need refresh on main page..." }, { "msg_contents": "Vince Vielhaber wrote:\n> \n> On Tue, 19 Oct 1999, Thomas Lockhart wrote:\n> > I've fetched the RPMs from rufus.org and from Lamar's own site to give\n> > a complete set of RPMs at ftp://postgresql.org/pub/{RPMS,SRPMS}.\n> \n> Website's updated as well.\n\nAnd looks good.\n \n> [snip]\n\n> > btw, I didn't know about wget; it solves all the problems I was\n> > worried about wrt updating postgresql.org from your site. Nice\n> > utility...\n \n> Yeah it is. 
I use it all the time.\n\nRuns great within cron -- the -q switch is your friend ;-). I mirror\nvia cron using wget the update repository on rufus -- another great part\nof wget is the ability to infinitely retry (-t 0). \n\nFor some reason I didn't get the reply from Thomas. I don't _think_ I\nhad a network glitch (he says while firing up a log analyzer...)...\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 19 Oct 1999 14:11:56 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need refresh on main page..." }, { "msg_contents": "Zakkr wrote:\n >\n >\n >On Tue, 19 Oct 1999, Lamar Owen wrote:\n >\n >> Vince Vielhaber wrote:\n >> > Actually that's on our main page, which is just announcements. The news\n >> > page (Latest News on the menu bar) gets quite involved. Would someone\n >> > care to give me some updated text for that as well?\n >> \n >> \"RPM's for PostgreSQL 6.5.2 are now available, and shipped with RedHat\n >> Linux version 6.1. Included in these RPMS:\n >> \n >\n >Hmm RedHat, and for Debian (and people which needn't Hat for comp-work:-) ex\n >ists \n >package with 6.5.2?\n \nThe 6.5.2-2 packages for Debian have just been installed and should be on\nthe Debian mirrors within the next day. They are to be found in the\nunstable (\"potato\") distribution at ftp.debian.org/debian/ (or mirrors)\nas follows:\n\nBinary packages for i386:\ndists/potato/main/binary-i386/misc/postgresql_6.5.2-2.deb\ndists/potato/main/binary-i386/misc/postgresql-client_6.5.2-2.deb\ndists/potato/main/binary-i386/misc/postgresql-contrib_6.5.2-2.deb\ndists/potato/main/binary-i386/libs/ecpg_6.5.2-2.deb\ndists/potato/main/binary-i386/libs/postgresql-pl_6.5.2-2.deb\ndists/potato/main/binary-i386/libs/libpgsql2_6.5.2-2.deb\ndists/potato/main/binary-i386/devel/postgresql-dev_6.5.2-2.deb\ndists/potato/main/binary-all/doc/postgresql-doc_6.5.2-2.deb\ndists/potato/main/binary-all/misc/pgaccess_6.5.2-2.deb\ndists/potato/main/binary-i386/libs/libpgtcl_6.5.2-2.deb\ndists/potato/main/binary-i386/libs/libpgperl_6.5.2-2.deb\ndists/potato/main/binary-i386/libs/python-pygresql_6.5.2-2.deb\ndists/potato/main/binary-i386/libs/odbc-postgresql_6.5.2-2.deb\ndists/potato/main/binary-i386/misc/postgresql-test_6.5.2-2.deb\n\nSource package:\ndists/potato/main/source/misc/postgresql_6.5.2.orig.tar.gz\ndists/potato/main/source/misc/postgresql_6.5.2-2.diff.gz\ndists/potato/main/source/misc/postgresql_6.5.2-2.dsc\n\nBinary packages for other architectures should start to appear shortly;\nsubstitute the appropriate architecture for `i386' in the above paths.\n\nStandard caution:\nNote that these packages are not part of the released stable Debian\ndistribution. They may have dependencies on other unreleased software,\nor other instabilities. Please take care if you wish to install them.\nThe updates will eventually make their way into the next released Debian\ndistribution.\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Commit thy way unto the LORD; trust also in him and \n he shall bring it to pass.\" Psalms 37:5 \n\n\n", "msg_date": "Tue, 19 Oct 1999 23:13:35 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Need refresh on main page... 
" }, { "msg_contents": "\n\n\n\nOn Tue, 19 Oct 1999, Oliver Elphick wrote:\n\n> The 6.5.2-2 packages for Debian have just been installed and should be on\n> the Debian mirrors within the next day. They are to be found in the\n> unstable (\"potato\") distribution at ftp.debian.org/debian/ (or mirrors)\n> as follows:\n\nThank! (I'm installing it now.)\n\n\n------------------------------------------------------------------------------\n<[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n------------------------------------------------------------------------------\n ...and cathedral dilapidate\n\n", "msg_date": "Wed, 20 Oct 1999 09:09:47 +0200 (CEST)", "msg_from": "Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Need refresh on main page... " }, { "msg_contents": "\n\n Hi,\n\n current Debian potato pgaccess package is bad. The pre-remove script\n(in DEBIAN directory) pgaccess.prerm is *empty*, and apt tool return\nerror for this. In this script must be (leastways) '#!/bin/sh -e'.\n\n \t\t\t\t\t\tKarel \n\nPS. sorry, it is really my last bugreport today :-)\n\n----------------------------------------------------------------------\nKarel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nDocs: http://docs.linux.cz (big docs archive)\t\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n-----------------------------------------------------------------------\n\n", "msg_date": "Fri, 17 Dec 1999 15:28:42 +0100 (CET)", "msg_from": "Karel Zak - Zakkr <[email protected]>", "msg_from_op": false, "msg_subject": "bug in Debian's pgaccess package" }, { "msg_contents": "Karel Zak - Zakkr wrote:\n > current Debian potato pgaccess package is bad. The pre-remove script\n >(in DEBIAN directory) pgaccess.prerm is *empty*, and apt tool return\n >error for this. In this script must be (leastways) '#!/bin/sh -e'.\n\nPackaging issues shouldn't go to the hackers list; this should have been\n(and now is) a Debian bug-report.\n\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"For I say, through the grace given unto me, to every \n man that is among you: Do not think of yourself more \n highly than you ought, but rather think of yourself \n with sober judgement, in accordance with the measure \n of faith God has given you.\" Romans 12:3 \n\n\n", "msg_date": "Fri, 17 Dec 1999 16:31:29 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] bug in Debian's pgaccess package " }, { "msg_contents": "> Karel Zak - Zakkr wrote:\n> > current Debian potato pgaccess package is bad. The pre-remove script\n> >(in DEBIAN directory) pgaccess.prerm is *empty*, and apt tool return\n> >error for this. In this script must be (leastways) '#!/bin/sh -e'.\n> \n> Packaging issues shouldn't go to the hackers list; this should have been\n> (and now is) a Debian bug-report.\n\nDebian potato pgaccess -- I don't even want to ask what this is. \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 17 Dec 1999 12:53:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] bug in Debian's pgaccess package" }, { "msg_contents": "\n\n\nHi,\n\nI am thinking about using postgreSQL to manage large geographic databases\nconnected to a GIS.\nI would really appreciate if someone could give me some answers to those 3\nquestions :\n what about performances with postgreSQL and large databases,\n the object size limitation (8192 bytes) is really not acceptable for this\n purpose. Is there a way or any hack to overpass it,\n is the geographic index working well.\n\nThanks.\n\nXavier.\n\n\n\n", "msg_date": "Thu, 20 Jul 2000 09:42:25 +0200", "msg_from": "\"Xavier ZIMMERMANN\" <[email protected]>", "msg_from_op": false, "msg_subject": "8Ko limitation" }, { "msg_contents": "\n> what about performances with postgreSQL and large databases,\n> the object size limitation (8192 bytes) is really not acceptable for this\n\n Now you can change this limit in config.h, the possible range is \n8Kb - 32Kb. \n\n In new 7.1 version will this limit dead forever (see TOAST project).\n\n And what is a \"large database\"? 1, 5 .. 10Gb? If yes, (IMHO) the PostgreSQL \nis good choice.\n\n\t\t\t\t\tKarel\n\n", "msg_date": "Thu, 20 Jul 2000 10:00:17 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 8Ko limitation" }, { "msg_contents": "On Thursday 20 July 2000, at 10 h 0, the keyboard of Karel Zak \n<[email protected]> wrote:\n \n> And what is a \"large database\"? 1, 5 .. 10Gb? If yes, (IMHO) the PostgreSQL \n> is good choice.\n\nEven on Linux? I'm studying a database project where the raw data is 10 to 20 \nGb (it will be in several tables in the same database). Linux has a limit of 2 \nGb for a file (even on 64-bits machine, if I'm correct). A colleague told me \nto use NetBSD instead, because PostgreSQL on a Linux machine cannot host more \nthan 2 Gb per database. Any practical experience? (I'm not interested in \"It \nshould work\".)\n\n\n", "msg_date": "Thu, 20 Jul 2000 10:35:41 +0200", "msg_from": "Stephane Bortzmeyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] 8Ko limitation " }, { "msg_contents": "On Thu, Jul 20, 2000 at 10:35:41AM +0200, Stephane Bortzmeyer wrote:\n> On Thursday 20 July 2000, at 10 h 0, the keyboard of Karel Zak \n> <[email protected]> wrote:\n> \n> > And what is a \"large database\"? 1, 5 .. 10Gb? If yes, (IMHO) the PostgreSQL \n> > is good choice.\n> \n> Even on Linux? I'm studying a database project where the raw data is 10 to 20 \n> Gb (it will be in several tables in the same database). Linux has a limit of 2 \n> Gb for a file (even on 64-bits machine, if I'm correct). A colleague told me \n> to use NetBSD instead, because PostgreSQL on a Linux machine cannot host more \n> than 2 Gb per database. Any practical experience? 
(I'm not interested in \"It \n> should work\".)\n\nPostgres splits large tables into multiple files.\n\nExperience suggests it tends to split at around 1.1G (at least, that's\nwhat it has done on my last project).\n\nFWIW, the 2Gig limit doesn't exist on 64bit linux, AFAIK (at least, not\nwith a 64-bit happy libc; I can't remember if the patches made it into\nthe version we use in Debian).\n\nJules\n\n-- \nJules Bean | Any sufficiently advanced \[email protected] | technology is indistinguishable\[email protected] | from a perl script\n", "msg_date": "Thu, 20 Jul 2000 09:52:01 +0100", "msg_from": "Jules Bean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] 8Ko limitation" }, { "msg_contents": "\nOn Thu, 20 Jul 2000, Stephane Bortzmeyer wrote:\n\n> On Thursday 20 July 2000, at 10 h 0, the keyboard of Karel Zak \n> <[email protected]> wrote:\n> \n> > And what is a \"large database\"? 1, 5 .. 10Gb? If yes, (IMHO) the PostgreSQL \n> > is good choice.\n> \n> Even on Linux? I'm studying a database project where the raw data is 10 to 20 \n> Gb (it will be in several tables in the same database). Linux has a limit of 2 \n> Gb for a file (even on 64-bits machine, if I'm correct). A colleague told me \n> to use NetBSD instead, because PostgreSQL on a Linux machine cannot host more \n> than 2 Gb per database. Any practical experience? (I'm not interested in \"It \n> should work\".)\n\nI must again say: \"The PostgreSQL is good choice\" :-)\n\nThe postgres chunks DB files, not exist 2Gb limit here...\n\n\t\t\t\t\t\tKarel\n\n", "msg_date": "Thu, 20 Jul 2000 11:02:50 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] 8Ko limitation " }, { "msg_contents": "Jules Bean <[email protected]> writes:\n>> A colleague told me to use NetBSD instead, because PostgreSQL on a\n>> Linux machine cannot host more than 2 Gb per database. Any practical\n>> experience? (I'm not interested in \"It should work\".)\n\n> Postgres splits large tables into multiple files.\n\nSegmenting into multiple files used to have some bugs, but that was a\nfew versions back --- I think your colleague's experience is obsolete.\nThere are lots of people using multi-gig tables now.\n\nIt's presently still painful to manage a database that spans multiple\ndisks, however. (You can do it if you're willing to move files around\nand establish symlinks by hand ... but it's painful.) There are plans\nto make this better, but for now you might want to say that the\npractical limit is the size of disk you can buy. Alternatively, if\nyour OS can make logical filesystems that span multiple disks, you\ncan get around the problem that way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Jul 2000 10:52:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] 8Ko limitation " }, { "msg_contents": "\"Justin Hickey\" <[email protected]> writes:\n> Will the geometric data types be TOASTable for 7.1?\n\nProbably ... if I get around to it ... 
or someone else does\n(yes, that's a hint).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Jul 2000 11:17:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 8Ko limitation " }, { "msg_contents": "Hello Karel\n\nOn Jul 20, 10:00am, Karel Zak wrote:\n> Subject: Re: [HACKERS] 8Ko limitation\n>\n> > what about performances with postgreSQL and large databases,\n> > the object size limitation (8192 bytes) is really not acceptable for\nthis\n>\n> Now you can change this limit in config.h, the possible range is\n> 8Kb - 32Kb.\n>\n> In new 7.1 version will this limit dead forever (see TOAST project).\n\nWe use Postgres to store polygons and I asked this same question before. The\nreply I got was that the geometric data types were not guaranteed to be\nconverted to TOASTable data types (they were at the bottom of the list of types\nto convert). They hinted that I could help them with this but I have no time.\nHas this changed now? Will the geometric data types be TOASTable for 7.1?\n\n\n-- \nSincerely,\n\nJazzman (a.k.a. Justin Hickey) e-mail: [email protected]\nHigh Performance Computing Center\nNational Electronics and Computer Technology Center (NECTEC)\nBangkok, Thailand\n==================================================================\nPeople who think they know everything are very irritating to those\nof us who do. ---Anonymous\n\nJazz and Trek Rule!!!\n==================================================================\n", "msg_date": "Thu, 20 Jul 2000 16:06:12 +0000", "msg_from": "\"Justin Hickey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8Ko limitation" }, { "msg_contents": " Even on Linux? I'm studying a database project where the raw data is 10 to 20 \n Gb (it will be in several tables in the same database). Linux has a limit of 2 \n Gb for a file (even on 64-bits machine, if I'm correct). A colleague told me \n to use NetBSD instead, because PostgreSQL on a Linux machine cannot host more \n than 2 Gb per database. Any practical experience? (I'm not interested in \"It \n should work\".)\n\nPostgresql and NetBSD work fine together. NetBSD has not had a 2GB\nfile limit for _many_ years and has raidframe for configuring huge\ndisks from many small ones (as well as for normal raid stuff).\n\nCheers,\nBrook\n\n\n", "msg_date": "Thu, 20 Jul 2000 12:01:30 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] 8Ko limitation" }, { "msg_contents": "> Even on Linux? I'm studying a database project where the raw data is 10 to 20 \n> Gb (it will be in several tables in the same database). Linux has a limit of 2 \n> Gb for a file (even on 64-bits machine, if I'm correct). A colleague told me \n\nQuoi?\n\nOn my RedHat6.2 system:\n\n/dev/md0 14111856 257828 13137168 2% /raid\n\n> to use NetBSD instead, because PostgreSQL on a Linux machine cannot host more \n> than 2 Gb per database. Any practical experience? 
(I'm not interested in \"It \n> should work\".)\n\nFor a heavy-duty server, I would probably pick OpenBSD over Linux, but\nboth will work fine, and both can have filesystems far larger than\n2gb.\n\ne\n", "msg_date": "Thu, 20 Jul 2000 14:05:39 -0700 (PDT)", "msg_from": "Erich <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] 8Ko limitation" }, { "msg_contents": "On Thursday 20 July 2000, at 14 h 5, the keyboard of Erich <[email protected]> \nwrote:\n\n> > Linux has a limit of 2 \n> > Gb for a file (even on 64-bits machine, if I'm correct). \n...\n> Quoi?\n> \n> On my RedHat6.2 system:\n> \n> /dev/md0 14111856 257828 13137168 2% /raid\n...\n> and both can have filesystems far larger than 2gb.\n\nRead the message before replying: I wrote FILE and not FILESYSTEM.\n\n\n", "msg_date": "Fri, 21 Jul 2000 09:39:34 +0200", "msg_from": "Stephane Bortzmeyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] 8Ko limitation " }, { "msg_contents": "Xavier ZIMMERMANN wrote:\n> \n> Hi,\n> \n> I am thinking about using postgreSQL to manage large geographic databases\n> connected to a GIS.\n...\n> is the geographic index working well.\n\nAFAIK r-trees ar used for planar geometry. \nperhaps there something in contrib for geographic (spherical)\ncoordinates.\n\n--------\nHannu\n", "msg_date": "Fri, 21 Jul 2000 14:56:33 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 8Ko limitation" } ]
[ { "msg_contents": "I'm working on psql printing routines here and want to do alignment by\ndatatype. Two questions arose:\n\n1) What is the difference between/merit of \"line\" vs. \"_line\", \"cidr\" vs.\n\"_cidr\", etc.? Do I have to worry about them?\n\n2) Can I assume that the Oids for the datatypes are always the same\n(barring a developer changing them, of course)? Where are they defined?\nWhat would be the best way to digest the output of libpq's PQftype()?\n(Perhaps a char * PQftypetext() would be of general use?)\n\n(The issue here is that I do _not_ want to have to query pg_type for that\ninformation, since psql has no business contacting the database server\nwhen not asked to do so.)\n\nThanks,\n\tPeter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 16 Oct 1999 16:48:56 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "pg_type questions" }, { "msg_contents": "> I'm working on psql printing routines here and want to do alignment by\n> datatype. Two questions arose:\n> \n> 1) What is the difference between/merit of \"line\" vs. \"_line\", \"cidr\" vs.\n> \"_cidr\", etc.? Do I have to worry about them?\n\n_line is for line arrays.\n\n> \n> 2) Can I assume that the Oids for the datatypes are always the same\n> (barring a developer changing them, of course)? Where are they defined?\n> What would be the best way to digest the output of libpq's PQftype()?\n> (Perhaps a char * PQftypetext() would be of general use?)\n\nsrc/include/catalog. There is an unused_oid script in there too.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 16 Oct 1999 21:48:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_type questions" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> What would be the best way to digest the output of libpq's PQftype()?\n> (Perhaps a char * PQftypetext() would be of general use?)\n\n> (The issue here is that I do _not_ want to have to query pg_type for that\n> information, since psql has no business contacting the database server\n> when not asked to do so.)\n\nYou can't have it both ways: either you look up the OID in pg_type,\nor the info you provide is incomplete/unreliable.\n\nFor the standard system types like int4, text, etc, it's probably OK\nfor client code to assume that particular numeric OIDs correspond to\nthose types --- use the #defines that are in catalog/pg_type.h, such as\nBOOLOID, to refer to those types. (I think there are one or two places\nin libpq and/or psql that do this already, eg, to decide whether a\ncolumn is \"numeric\".) The backend does this all over the place, but\nit's a little shakier for frontend code to do it, because a frontend\nmight possibly be used with other database versions than the one it was\ncompiled for. 
Still, I think you could get away with it for standard\ntypes --- AFAIK no one has any intention of renumbering those.\n\n(Thomas has been muttering dire things about the date/time types, so\nyou might be well advised not to assume anything about those ;-).)\n\nIt would not be an unreasonable idea for libpq to provide a general\npurpose type-OID-to-type-name mapper, with the understanding that this\nmapper *would* query the backend at need. (Of course it should know\nautomatically about the most common standard types, and it should cache\nthe results of previous lookups so any one OID is queried at most once\nper connection.) We have heard from a number of people who have built\nexactly that facility for their applications, so it's obviously useful.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Oct 1999 11:55:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_type questions " }, { "msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > What would be the best way to digest the output of libpq's PQftype()?\n> > (Perhaps a char * PQftypetext() would be of general use?)\n> \n> > (The issue here is that I do _not_ want to have to query pg_type for that\n> > information, since psql has no business contacting the database server\n> > when not asked to do so.)\n> \n> You can't have it both ways: either you look up the OID in pg_type,\n> or the info you provide is incomplete/unreliable.\n> \n> For the standard system types like int4, text, etc, it's probably OK\n> for client code to assume that particular numeric OIDs correspond to\n> those types --- use the #defines that are in catalog/pg_type.h, such as\n> BOOLOID, to refer to those types. (I think there are one or two places\n> in libpq and/or psql that do this already, eg, to decide whether a\n> column is \"numeric\".) The backend does this all over the place, but\n> it's a little shakier for frontend code to do it, because a frontend\n> might possibly be used with other database versions than the one it was\n> compiled for. Still, I think you could get away with it for standard\n> types --- AFAIK no one has any intention of renumbering those.\n> \n> (Thomas has been muttering dire things about the date/time types, so\n> you might be well advised not to assume anything about those ;-).)\n> \n> It would not be an unreasonable idea for libpq to provide a general\n> purpose type-OID-to-type-name mapper, with the understanding that this\n> mapper *would* query the backend at need. (Of course it should know\n> automatically about the most common standard types, and it should cache\n> the results of previous lookups so any one OID is queried at most once\n> per connection.) We have heard from a number of people who have built\n> exactly that facility for their applications, so it's obviously useful.\n\nYes, Tom is exactly right on all these points. My suggestion to look in\ninclude/catalog was just my quick answer.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 17 Oct 1999 12:50:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_type questions" }, { "msg_contents": "On Oct 17, Tom Lane mentioned:\n\n> For the standard system types like int4, text, etc, it's probably OK\n> for client code to assume that particular numeric OIDs correspond to\n> those types --- use the #defines that are in catalog/pg_type.h, such as\n> BOOLOID, to refer to those types. (I think there are one or two places\n> in libpq and/or psql that do this already, eg, to decide whether a\n> column is \"numeric\".) The backend does this all over the place, but\n> it's a little shakier for frontend code to do it, because a frontend\n> might possibly be used with other database versions than the one it was\n> compiled for. Still, I think you could get away with it for standard\n> types --- AFAIK no one has any intention of renumbering those.\n> \n> (Thomas has been muttering dire things about the date/time types, so\n> you might be well advised not to assume anything about those ;-).)\n\nThe previous alignment \"algorithm\" in psql in essence checked for\n[^-+0-9\\.eE] or sth like that as far as I could tell.\n\nRight now I am just aligning int[248], float[48], and numeric but the\ndate/time types might be nice as well. But the idea was to straighten this\nout a bit so that alignment for other datatypes could easily be added or\nremoved. I think I might have accomplished that ;)\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 17 Oct 1999 22:29:50 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_type questions " } ]
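The mapper Tom describes is straightforward to prototype on top of libpq. A minimal sketch, assuming a handful of hard-wired OIDs (the values match catalog/pg_type.h) and a small fixed-size cache; anything not recognized costs exactly one pg_type lookup per connection, which is the behaviour suggested above:

#include <stdio.h>
#include <string.h>
#include "libpq-fe.h"

static const char *
oid_to_typname(PGconn *conn, unsigned int typoid)
{
    static struct { unsigned int oid; char name[32]; } cache[256];
    static int  ncached = 0;
    char        query[96];
    PGresult   *res;
    int         i;

    switch (typoid)                 /* well-known built-ins: no query at all */
    {
        case 16:  return "bool";
        case 23:  return "int4";
        case 25:  return "text";
        case 701: return "float8";
    }

    for (i = 0; i < ncached; i++)   /* already looked up on this connection? */
        if (cache[i].oid == typoid)
            return cache[i].name;

    sprintf(query, "SELECT typname FROM pg_type WHERE oid = %u", typoid);
    res = PQexec(conn, query);
    if (res == NULL || PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) != 1)
    {
        if (res)
            PQclear(res);
        return "???";               /* lookup failed or no such type */
    }
    if (ncached < 256)
    {
        cache[ncached].oid = typoid;
        strncpy(cache[ncached].name, PQgetvalue(res, 0, 0), 31);
        cache[ncached].name[31] = '\0';
        PQclear(res);
        return cache[ncached++].name;
    }
    PQclear(res);
    return "???";                   /* cache full; a real version would grow it */
}

A caller such as psql would invoke this once per result column, so the extra round trips stay bounded by the number of distinct unusual types seen on the connection.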
[ { "msg_contents": "Does anyone have an archive of Marc's request to gather developers for\nPostgres95 development? It would in the May-June, 1996 time period.\n\nI had it for a long time, but accidentally deleted it. Isn't there tar\nfile of all the Postgres95 mailing list messages somewhere? I can find\nit if I can get access to an archive.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 16 Oct 1999 23:01:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Marc's initial request to start Postgres95 development" }, { "msg_contents": "\nOn 17-Oct-99 Bruce Momjian wrote:\n> Does anyone have an archive of Marc's request to gather developers for\n> Postgres95 development? It would in the May-June, 1996 time period.\n> \n> I had it for a long time, but accidentally deleted it. Isn't there tar\n> file of all the Postgres95 mailing list messages somewhere? I can find\n> it if I can get access to an archive.\n\nMarc doesn't keep archives of his own posts? How established was the \nmailing list then? Was it being gated at the time? Could it be on\none of the main archive sites?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Sat, 16 Oct 1999 23:15:59 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Marc's initial request to start Postgres95 develop" }, { "msg_contents": "> \n> On 17-Oct-99 Bruce Momjian wrote:\n> > Does anyone have an archive of Marc's request to gather developers for\n> > Postgres95 development? It would in the May-June, 1996 time period.\n> > \n> > I had it for a long time, but accidentally deleted it. Isn't there tar\n> > file of all the Postgres95 mailing list messages somewhere? I can find\n> > it if I can get access to an archive.\n> \n> Marc doesn't keep archives of his own posts? How established was the \n> mailing list then? Was it being gated at the time? Could it be on\n> one of the main archive sites?\n\nIt was posted to the old postgres95 mailing list at the time Jolly ran\nit somewhere at [email protected]. It was before we moved\nit over to his site. The issue is that after we switched over to a new\nmailing list, I was going through the old postgres95 bug reports and\nfinding any valuable patches. When I was done, I deleted the file,\nforgetting that the founding messages where in the same folder.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 16 Oct 1999 23:21:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Marc's initial request to start Postgres95 develop" }, { "msg_contents": "\nOn 17-Oct-99 Bruce Momjian wrote:\n>> \n>> On 17-Oct-99 Bruce Momjian wrote:\n>> > Does anyone have an archive of Marc's request to gather developers for\n>> > Postgres95 development? It would in the May-June, 1996 time period.\n>> > \n>> > I had it for a long time, but accidentally deleted it. Isn't there tar\n>> > file of all the Postgres95 mailing list messages somewhere? I can find\n>> > it if I can get access to an archive.\n>> \n>> Marc doesn't keep archives of his own posts? How established was the \n>> mailing list then? Was it being gated at the time? Could it be on\n>> one of the main archive sites?\n> \n> It was posted to the old postgres95 mailing list at the time Jolly ran\n> it somewhere at [email protected]. It was before we moved\n> it over to his site. The issue is that after we switched over to a new\n> mailing list, I was going through the old postgres95 bug reports and\n> finding any valuable patches. When I was done, I deleted the file,\n> forgetting that the founding messages where in the same folder.\n\nAhhh, I get it. My first of the group of questions wondered about Marc\nkeeping archives of his own posts. I'm now guessing he doesn't. You \n(and I'm guessing many) people may think it strange but I keep archives\nof everything I've posted for the last 3-5 years (except for that hard\ndrive incident I try to forget :)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Sat, 16 Oct 1999 23:26:07 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Marc's initial request to start Postgres95 develop" }, { "msg_contents": "> >> > I had it for a long time, but accidentally deleted it. Isn't there tar\n> >> > file of all the Postgres95 mailing list messages somewhere? I can find\n> >> > it if I can get access to an archive.\n> >> \n> >> Marc doesn't keep archives of his own posts? How established was the \n> >> mailing list then? Was it being gated at the time? Could it be on\n> >> one of the main archive sites?\n> > \n> > It was posted to the old postgres95 mailing list at the time Jolly ran\n> > it somewhere at [email protected]. It was before we moved\n> > it over to his site. The issue is that after we switched over to a new\n> > mailing list, I was going through the old postgres95 bug reports and\n> > finding any valuable patches. When I was done, I deleted the file,\n> > forgetting that the founding messages where in the same folder.\n> \n> Ahhh, I get it. My first of the group of questions wondered about Marc\n> keeping archives of his own posts. I'm now guessing he doesn't. 
You \n> (and I'm guessing many) people may think it strange but I keep archives\n> of everything I've posted for the last 3-5 years (except for that hard\n> drive incident I try to forget :)\n\nYes, I normally do, and should have retrieved it off tape at the time,\nbut I figured I was never going to need it. I was wrong. Would be\ninteresting to see at this point. The berkeley postgres95 site says to\njust see our site, so it seems they don't have it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 16 Oct 1999 23:28:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Marc's initial request to start Postgres95 develop" }, { "msg_contents": "On Sat, 16 Oct 1999, Vince Vielhaber wrote:\n\n> \n> On 17-Oct-99 Bruce Momjian wrote:\n> >> \n> >> On 17-Oct-99 Bruce Momjian wrote:\n> >> > Does anyone have an archive of Marc's request to gather developers for\n> >> > Postgres95 development? It would in the May-June, 1996 time period.\n> >> > \n> >> > I had it for a long time, but accidentally deleted it. Isn't there tar\n> >> > file of all the Postgres95 mailing list messages somewhere? I can find\n> >> > it if I can get access to an archive.\n> >> \n> >> Marc doesn't keep archives of his own posts? How established was the \n> >> mailing list then? Was it being gated at the time? Could it be on\n> >> one of the main archive sites?\n> > \n> > It was posted to the old postgres95 mailing list at the time Jolly ran\n> > it somewhere at [email protected]. It was before we moved\n> > it over to his site. The issue is that after we switched over to a new\n> > mailing list, I was going through the old postgres95 bug reports and\n> > finding any valuable patches. When I was done, I deleted the file,\n> > forgetting that the founding messages where in the same folder.\n> \n> Ahhh, I get it. My first of the group of questions wondered about Marc\n> keeping archives of his own posts. I'm now guessing he doesn't. You \n> (and I'm guessing many) people may think it strange but I keep archives\n> of everything I've posted for the last 3-5 years (except for that hard\n> drive incident I try to forget :)\n\nActually, I do too, except that mine appear to end around 2 years ago :(\nI think that's about the time I got really into saving my messages...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 17 Oct 1999 01:57:11 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Marc's initial request to start Postgres95 develop" } ]
[ { "msg_contents": "The intent of the COMMENT ON statement was to \nallow for users to provide comments on user tables, \nviews, and the fields which compose them as in\nORACLE. I'll expand the syntax beyond ORACLE's to \ninclude rules, triggers, and functions. Of course,\nall of these are the non-system OIDs (although the \nPostgreSQL super-user could create/drop comments on \nsystem oid-related objects). The implication is that\nI'll have to modify pg_dump to generate COMMENT ON \ncommands as well for all user tables, views,\nfunctions,\ntriggers, and rules. The patch I submitted uses\nORACLE's syntax which requires you to specify the \nschema object type as well as its name, such as:\n\nCOMMENT ON TABLE employees IS 'Employee Records';\nCOMMENT ON COLUMN employees.employee IS 'Employee ID';\n\nso I'll just add:\n\nCOMMENT ON RULE...\nCOMMENT ON TRIGGER...\nCOMMENT ON FUNCTION...\n\nHopefully, the Win32 ODBC driver is smart enough to\nfetch the comments from pg_description in response\nto a call to ::SQLTables or ::SQLColumns, so ODBC\napplications can see the user comments supplied\n(I'll check this). I don't know about JDBC.\n\nThere's currently nothing to stop a user from\nperforming an INSERT on pg_description using any OID\nthey please. Perhaps that should be restricted, but\nwho knows what applications are out there now which,\nnot having a COMMENT ON statement, are storing \nuser comments already in pg_description. \n\nHopefully, I'll have the other forms done in the next\nfew days. As Bruce pointed out, \\dd already displays\ncomments for any type. I was hoping for a single \npsql '\\' command to display the table, its comments,\nits column definitions, and any comments associated \nwith the columns...(an outer join SURE would be \nnice for that -- altough one could always do a \nSELECT ... WHERE join UNION SELECT WHERE NOT EXISTS..)\n\nAnyways,\n\nHope that helps,\n\nMike Mascari\n([email protected])\n\n--- Peter Eisentraut <[email protected]> wrote:\n> I have another question regarding this: It seems\n> that you can attach a\n> description (or comment, as it is) to any oid. (Not\n> with this command, but\n> in general). Is this restricted to system oids (like\n> below 16000 or\n> whatever it was)? Otherwise comments on any user\n> tuple could be created.\n> Perhaps this should be explicitly prevented or\n> allowed. In the latter case\n> perhaps the comment statement could be tweaked. Not\n> sure. Just wondering.\n> \n> \t-Peter\n>\n> Peter Eisentraut Sernanders vaeg\n> 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n\n\n=====\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n", "msg_date": "Sat, 16 Oct 1999 20:57:21 -0700 (PDT)", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ORACLE COMMENT statement" }, { "msg_contents": "On Oct 16, Mike Mascari mentioned:\n\n> Hopefully, I'll have the other forms done in the next\n> few days. As Bruce pointed out, \\dd already displays\n> comments for any type. I was hoping for a single \n> psql '\\' command to display the table, its comments,\n> its column definitions, and any comments associated \n> with the columns...(an outer join SURE would be \n> nice for that -- altough one could always do a \n> SELECT ... WHERE join UNION SELECT WHERE NOT EXISTS..)\n\nI tell you, outer joins will be nice for a lot of things in psql. 
At this\npoint I'm not sure if I should break backwards compatibility like that,\nbut then psql is supposed to be sort of the example application, so the\nlatest technology ought to be used.\n\nAnyway, the \\dd command was kind of odd in that it only displayed comments\nbut not what the comments went with.\n\nThe way I currently have implemented the comments is like this:\n(Ignoring the actual output format, which is currently under _heavy_\ndevelopment.)\n\npeter@localhost:5432 play=> \\d foobar\nTable \"foobar\"\n \nAttribute | Type | Info\n----------+--------------+---------\na | numeric(9,2) | not null\nb | varchar(5) |\nc | char(10) |\nd | char(1) |\n\npeter@localhost:5432 play=> \\set description \"\"\npeter@localhost:5432 play=> \\d foobar\nTable \"foobar\"\n \nAttribute | Type | Info | Description\n----------+--------------+----------+------------\na | numeric(9,2) | not null |\nb | varchar(5) | |\nc | char(10) | |\nd | char(1) | |\n\npeter@localhost:5432 play=> \\l\nList of databases\n \nDatabase | Owner | Encoding | Description\n----------+----------+----------+---------------------------------------\nplay | postgres | 0 |\npwdb | peter | 0 |\ntemplate1 | postgres | 0 |\ntwig | httpd | 0 | This is for that Twig mailer under PHP\n \n(4 rows)\n\npeter@localhost:5432 play=> \\unset description\npeter@localhost:5432 play=> \\l\nList of databases\n \nDatabase | Owner | Encoding\n----------+----------+---------\nplay | postgres | 0\npwdb | peter | 0\ntemplate1 | postgres | 0\ntwig | httpd | 0\n \n(4 rows)\n\npeter@localhost:5432 play=> \\dd\nObject descriptions\n\n Name | What | Description\n-------------------+----------+------------------------------------------\n! | operator | fraction\n!! | operator | fraction\n!!= | operator | not in\n!~ | operator | does not match regex., case-sensitive\n!~* | operator | does not match regex., case-insensitive\n!~~ | operator | does not match LIKE expression\n# | operator | intersection point\n--<snip>--\nvarcharne | function | not equal\nvarcharoctetlen | function | octet length\nversion | function | PostgreSQL version string\nwidth | function | box width\nxid | type | transaction id\nxideq | function | equal\n| | operator | start of interval\n|/ | operator | square root\n|| | operator | concatenate\n||/ | operator | cube root\n~ | operator | contains\n~ | operator | matches regex., case-sensitive\n~ | operator | path contains point?\n~ | operator | polygon contains point?\n~* | operator | matches regex., case-insensitive\n~= | operator | same as\n~~ | operator | matches LIKE expression\n \n(973 rows)\n\npeter@localhost:5432 play=> \\dd version\nObject descriptions\n \n Name | What | Description\n--------+----------+--------------------------\nversion | function | PostgreSQL version string\n \n(1 row)\n\n\nNow if we just put a description on every pre-installed entity (in\nparticular system tables), this is almost like a built-in quick reference!\n\nThe \\dd doesn't do rules yet, I think. But I'll put that in soon.\n\nSo do you see that as a feasible solution?\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 17 Oct 1999 22:24:13 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORACLE COMMENT statement" } ]
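Tying the pieces together: the Description columns in the \d and \l output above are pg_description rows keyed by object OID, and the COMMENT ON statement from Mike's patch is what creates them. A short sketch against Peter's foobar table, assuming the plain objoid/description layout of pg_description discussed in this thread:

COMMENT ON TABLE foobar IS 'Scratch table used for the psql output tests';
COMMENT ON COLUMN foobar.a IS 'Amount, two decimal places';

-- roughly what psql has to run to fetch a table-level comment
SELECT d.description
FROM pg_description d, pg_class c
WHERE d.objoid = c.oid
  AND c.relname = 'foobar';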
[ { "msg_contents": "Subject says all.\nIn general I want to limit output from\n\nselect ......\n intersect\nselect ......\n\nCurrent implementation of LIMIT doesn't support this.\nAre there any solutions ?\n\n\n\n\tRegards,\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 17 Oct 1999 20:40:32 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "is it possible to use LIMIT and INTERSECT ?" }, { "msg_contents": "The only workaround I can think of is to do an INSERT.. SELECT or SELECT\n... INTO TABLE with the INSERSECT, and use LIMIT on the resulting table.\n\n\n> Subject says all.\n> In general I want to limit output from\n> \n> select ......\n> intersect\n> select ......\n> \n> Current implementation of LIMIT doesn't support this.\n> Are there any solutions ?\n> \n> \n> \n> \tRegards,\n> \t\tOleg\n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 17 Oct 1999 13:25:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] is it possible to use LIMIT and INTERSECT ?" }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> select ......\n> intersect\n> select ......\n> Current implementation of LIMIT doesn't support this.\n> Are there any solutions ?\n\nThe problem seems to be right about where I suspected it was...\n\nTry the attached (line numbers are for current, probably are way off\nfor 6.5.*, but the code in that routine hasn't changed much).\n\n\t\t\tregards, tom lane\n\n\n\n*** src/backend/rewrite/rewriteHandler.c.orig\tThu Oct 7 00:23:15 1999\n--- src/backend/rewrite/rewriteHandler.c\tSun Oct 17 19:18:01 1999\n***************\n*** 1806,1811 ****\n--- 1806,1813 ----\n \tbool\t\tisBinary,\n \t\t\t\tisPortal,\n \t\t\t\tisTemp;\n+ \tNode\t *limitOffset,\n+ \t\t\t *limitCount;\n \tCmdType\t\tcommandType = CMD_SELECT;\n \tList\t *rtable_insert = NIL;\n \n***************\n*** 1856,1861 ****\n--- 1858,1865 ----\n \tisBinary = parsetree->isBinary;\n \tisPortal = parsetree->isPortal;\n \tisTemp = parsetree->isTemp;\n+ \tlimitOffset = parsetree->limitOffset;\n+ \tlimitCount = parsetree->limitCount;\n \n \t/*\n \t * The operator tree attached to parsetree->intersectClause is still\n***************\n*** 2057,2062 ****\n--- 2061,2068 ----\n \tresult->isPortal = isPortal;\n \tresult->isBinary = isBinary;\n \tresult->isTemp = isTemp;\n+ \tresult->limitOffset = limitOffset;\n+ \tresult->limitCount = limitCount;\n \n \t/*\n \t * The relation to insert into is attached to the range table of the\n", "msg_date": "Sun, 17 Oct 1999 19:31:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] is it possible to use LIMIT and INTERSECT ? 
" }, { "msg_contents": "Tom,\n\npatch was applied smoothly to 6.5.2\nWhat's the syntax ?\n\nselect a.msg_id, c.status_set_date, c.title\n from Message_Keyword_map a, messages c, keywords d\n where c.status_id =1 and d.name ~* 'moon' and a.key_id=d.key_id\n and c.msg_id=a.msg_id\nintersect\n select a.msg_id, a.status_set_date, a.title from messages a \n where a.status_id = 1 and a.title ~* 'moon' limit 5;\n\nproduces (10 rows)\n\nselect a.msg_id, c.status_set_date, c.title\n from Message_Keyword_map a, messages c, keywords d\n where c.status_id =1 and d.name ~* 'moon' and a.key_id=d.key_id\n and c.msg_id=a.msg_id limit 5\nintersect\n select a.msg_id, a.status_set_date, a.title from messages a \n where a.status_id = 1 and a.title ~* 'moon' limit 5;\n\nERROR: parser: parse error at or near \"intersect\"\n\n\tOleg\n\nOn Sun, 17 Oct 1999, Tom Lane wrote:\n\n> Date: Sun, 17 Oct 1999 19:31:09 -0400\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] is it possible to use LIMIT and INTERSECT ? \n> \n> Oleg Bartunov <[email protected]> writes:\n> > select ......\n> > intersect\n> > select ......\n> > Current implementation of LIMIT doesn't support this.\n> > Are there any solutions ?\n> \n> The problem seems to be right about where I suspected it was...\n> \n> Try the attached (line numbers are for current, probably are way off\n> for 6.5.*, but the code in that routine hasn't changed much).\n> \n> \t\t\tregards, tom lane\n> \n> \n> \n> *** src/backend/rewrite/rewriteHandler.c.orig\tThu Oct 7 00:23:15 1999\n> --- src/backend/rewrite/rewriteHandler.c\tSun Oct 17 19:18:01 1999\n> ***************\n> *** 1806,1811 ****\n> --- 1806,1813 ----\n> \tbool\t\tisBinary,\n> \t\t\t\tisPortal,\n> \t\t\t\tisTemp;\n> + \tNode\t *limitOffset,\n> + \t\t\t *limitCount;\n> \tCmdType\t\tcommandType = CMD_SELECT;\n> \tList\t *rtable_insert = NIL;\n> \n> ***************\n> *** 1856,1861 ****\n> --- 1858,1865 ----\n> \tisBinary = parsetree->isBinary;\n> \tisPortal = parsetree->isPortal;\n> \tisTemp = parsetree->isTemp;\n> + \tlimitOffset = parsetree->limitOffset;\n> + \tlimitCount = parsetree->limitCount;\n> \n> \t/*\n> \t * The operator tree attached to parsetree->intersectClause is still\n> ***************\n> *** 2057,2062 ****\n> --- 2061,2068 ----\n> \tresult->isPortal = isPortal;\n> \tresult->isBinary = isBinary;\n> \tresult->isTemp = isTemp;\n> + \tresult->limitOffset = limitOffset;\n> + \tresult->limitCount = limitCount;\n> \n> \t/*\n> \t * The relation to insert into is attached to the range table of the\n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 18 Oct 1999 10:20:28 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] is it possible to use LIMIT and INTERSECT ? 
" }, { "msg_contents": "Oleg Bartunov wrote:\n> \n> Tom,\n> \n> patch was applied smoothly to 6.5.2\n> What's the syntax ?\n> \n> select a.msg_id, c.status_set_date, c.title\n> from Message_Keyword_map a, messages c, keywords d\n> where c.status_id =1 and d.name ~* 'moon' and a.key_id=d.key_id\n> and c.msg_id=a.msg_id\n> intersect\n> select a.msg_id, a.status_set_date, a.title from messages a\n> where a.status_id = 1 and a.title ~* 'moon' limit 5;\n> \n> produces (10 rows)\n> \n> select a.msg_id, c.status_set_date, c.title\n> from Message_Keyword_map a, messages c, keywords d\n> where c.status_id =1 and d.name ~* 'moon' and a.key_id=d.key_id\n> and c.msg_id=a.msg_id limit 5\n> intersect\n> select a.msg_id, a.status_set_date, a.title from messages a\n> where a.status_id = 1 and a.title ~* 'moon' limit 5;\n> \n\nAs the limit is applied to the final result, I guess you can have only one \nLIMIT per query.\n\nSo try removing the limit 5 before intersect .\n\n-----------\nHannu\n", "msg_date": "Mon, 18 Oct 1999 08:22:31 +0000", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] is it possible to use LIMIT and INTERSECT ?" }, { "msg_contents": "On Mon, 18 Oct 1999, Hannu Krosing wrote:\n\n> Date: Mon, 18 Oct 1999 08:22:31 +0000\n> From: Hannu Krosing <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: Tom Lane <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] is it possible to use LIMIT and INTERSECT ?\n> \n> Oleg Bartunov wrote:\n> > \n> > Tom,\n> > \n> > patch was applied smoothly to 6.5.2\n> > What's the syntax ?\n> > \n> > select a.msg_id, c.status_set_date, c.title\n> > from Message_Keyword_map a, messages c, keywords d\n> > where c.status_id =1 and d.name ~* 'moon' and a.key_id=d.key_id\n> > and c.msg_id=a.msg_id\n> > intersect\n> > select a.msg_id, a.status_set_date, a.title from messages a\n> > where a.status_id = 1 and a.title ~* 'moon' limit 5;\n> > \n> > produces (10 rows)\n> > \n> > select a.msg_id, c.status_set_date, c.title\n> > from Message_Keyword_map a, messages c, keywords d\n> > where c.status_id =1 and d.name ~* 'moon' and a.key_id=d.key_id\n> > and c.msg_id=a.msg_id limit 5\n> > intersect\n> > select a.msg_id, a.status_set_date, a.title from messages a\n> > where a.status_id = 1 and a.title ~* 'moon' limit 5;\n> > \n> \n> As the limit is applied to the final result, I guess you can have only one \n> LIMIT per query.\n> \n> So try removing the limit 5 before intersect .\n> \n\nThis was my first try (look above). It works but produces 10 rows instead of 5.\n\n\tOleg\n\n> -----------\n> Hannu\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 18 Oct 1999 12:28:32 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] is it possible to use LIMIT and INTERSECT ?" 
}, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> patch was applied smoothly to 6.5.2\n\n> select a.msg_id, c.status_set_date, c.title\n> from Message_Keyword_map a, messages c, keywords d\n> where c.status_id =1 and d.name ~* 'moon' and a.key_id=d.key_id\n> and c.msg_id=a.msg_id\n> intersect\n> select a.msg_id, a.status_set_date, a.title from messages a \n> where a.status_id = 1 and a.title ~* 'moon' limit 5;\n\n> produces (10 rows)\n\nHmm. It seemed to work as expected in current --- maybe there is\nanother bug still lurking in 6.5.*. I'll look when I get a chance.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Oct 1999 10:37:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] is it possible to use LIMIT and INTERSECT ? " }, { "msg_contents": "I wrote:\n> Hmm. It seemed to work as expected in current --- maybe there is\n> another bug still lurking in 6.5.*. I'll look when I get a chance.\n\nYup, this change that was already in current is also needed:\n\n*** src/backend/parser/gram.y.orig\tMon Oct 18 23:59:35 1999\n--- src/backend/parser/gram.y\tMon Oct 18 23:55:18 1999\n***************\n*** 2768,2773 ****\n--- 2768,2775 ----\n \t\t\t\t /* finally attach the sort clause */\n \t\t\t\t first_select->sortClause = $2;\n \t\t\t\t first_select->forUpdate = $3;\n+ \t\t\t\t first_select->limitOffset = nth(0, $4);\n+ \t\t\t\t first_select->limitCount = nth(1, $4);\n \t\t\t\t $$ = (Node *)first_select;\n \t\t\t\t}\t\t\n \t\t\t\tif (((SelectStmt *)$$)->forUpdate != NULL && QueryIsRule)\n\n\n\nI have updated both current and REL6_5 branches.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Oct 1999 00:42:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] is it possible to use LIMIT and INTERSECT ? " }, { "msg_contents": "Thanks Tom,\n\nI synced REL6_5 tree, compile, install but still query\n\nselect a.msg_id, c.status_set_date, c.title\n from Message_Keyword_map a, messages c, keywords d\n where c.status_id =1 and d.name ~* 'sun' and a.key_id=d.key_id\n and c.msg_id=a.msg_id\n intersect\nselect a.msg_id, a.status_set_date, a.title\n from messages a where a.status_id = 1 and a.title ~* 'sun' limit 10;\n\nproduces more than 10 rows !\n\n\tRegards,\n\t\n\t\tOleg\n\n\n\nOn Tue, 19 Oct 1999, Tom Lane wrote:\n\n> Date: Tue, 19 Oct 1999 00:42:08 -0400\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] is it possible to use LIMIT and INTERSECT ? \n> \n> I wrote:\n> > Hmm. It seemed to work as expected in current --- maybe there is\n> > another bug still lurking in 6.5.*. 
I'll look when I get a chance.\n> \n> Yup, this change that was already in current is also needed:\n> \n> *** src/backend/parser/gram.y.orig\tMon Oct 18 23:59:35 1999\n> --- src/backend/parser/gram.y\tMon Oct 18 23:55:18 1999\n> ***************\n> *** 2768,2773 ****\n> --- 2768,2775 ----\n> \t\t\t\t /* finally attach the sort clause */\n> \t\t\t\t first_select->sortClause = $2;\n> \t\t\t\t first_select->forUpdate = $3;\n> + \t\t\t\t first_select->limitOffset = nth(0, $4);\n> + \t\t\t\t first_select->limitCount = nth(1, $4);\n> \t\t\t\t $$ = (Node *)first_select;\n> \t\t\t\t}\t\t\n> \t\t\t\tif (((SelectStmt *)$$)->forUpdate != NULL && QueryIsRule)\n> \n> \n> \n> I have updated both current and REL6_5 branches.\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 19 Oct 1999 09:28:55 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] is it possible to use LIMIT and INTERSECT ? " }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> Thanks Tom,\n> I synced REL6_5 tree, compile, install but still query\n\n> select a.msg_id, c.status_set_date, c.title\n> from Message_Keyword_map a, messages c, keywords d\n> where c.status_id =1 and d.name ~* 'sun' and a.key_id=d.key_id\n> and c.msg_id=a.msg_id\n> intersect\n> select a.msg_id, a.status_set_date, a.title\n> from messages a where a.status_id = 1 and a.title ~* 'sun' limit 10;\n\n> produces more than 10 rows !\n\nIf you'd care to provide a reproducible stand-alone test case I'll look\ninto it further. I do not feel like trying to reverse-engineer your\ntable declarations.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Oct 1999 01:33:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] is it possible to use LIMIT and INTERSECT ? " } ]
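For anyone following along, the intended behavior after the two patches above is that LIMIT may appear only once, after the last arm, and it then caps the combined INTERSECT result; a LIMIT inside an earlier arm stays a syntax error. A sketch using Oleg's own tables (messages, keywords and message_keyword_map are his schema, not anything standard):

SELECT a.msg_id, c.status_set_date, c.title
FROM message_keyword_map a, messages c, keywords d
WHERE c.status_id = 1 AND d.name ~* 'moon'
  AND a.key_id = d.key_id AND c.msg_id = a.msg_id
INTERSECT
SELECT a.msg_id, a.status_set_date, a.title
FROM messages a
WHERE a.status_id = 1 AND a.title ~* 'moon'
LIMIT 5;

As the last two messages show, Oleg still saw more than the requested rows on REL6_5 even after the gram.y change, so this single-trailing-LIMIT form is the thing to retest. On an unpatched server Bruce's workaround applies: SELECT the intersection INTO a scratch table (or INSERT ... SELECT it), then run the LIMIT against that table.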
[ { "msg_contents": "... yeah, me neither.\n\nHi all, I have an interesting one for you today. I'm writing a new \\dd\ncommand (one that actually works), and I have come across the following\nsituation:\n\nSELECT DISTINCT a.aggname as \"Name\" FROM pg_aggregate a\n UNION ALL\nSELECT DISTINCT p.proname as \"Name\" FROM pg_proc p\n UNION ALL\nSELECT DISTINCT o.oprname as \"Name\" FROM pg_operator o\n UNION ALL\nSELECT DISTINCT t.typname as \"Name\" FROM pg_type t\n UNION ALL\nSELECT DISTINCT c.relname as \"Name\" FROM pg_class c\n;\n\n(It doesn't make much sense as it stands, but I have picked out the\noffending parts.)\n\nI get\nNOTICE: equal: don't know whether nodes of type 719 are equal\n\nActually, I get several of these. Depending on the number of select\nclauses, I get 1 for the third, 2 for the 4th, 3 for the 5th, etc. So the\nabove query gives me 6 notices. A query with only two select clauses gives\nme none.\n\nWithout the DISTINCTs everything goes fine.\n\nNow this seems to have something to do with a lack of an equal operator\nfor the type \"name\", right? Interestingly enough, the type name has oid\n19, whereas type 719 is \"_circle\", or what does the 719 refer to?\n\nThanks,\n\tPeter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Sun, 17 Oct 1999 21:44:32 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "don't know whether nodes of type 719 are equal" }, { "msg_contents": "I think someone changed the database schema. Try cvs update then\ninitdb. Could it be that the row of type circle is causing it?\n\n\nI don't get that here, and 719 is certainly a strange number to be\ngetting\n\n> ... yeah, me neither.\n> \n> Hi all, I have an interesting one for you today. I'm writing a new \\dd\n> command (one that actually works), and I have come across the following\n> situation:\n> \n> SELECT DISTINCT a.aggname as \"Name\" FROM pg_aggregate a\n> UNION ALL\n> SELECT DISTINCT p.proname as \"Name\" FROM pg_proc p\n> UNION ALL\n> SELECT DISTINCT o.oprname as \"Name\" FROM pg_operator o\n> UNION ALL\n> SELECT DISTINCT t.typname as \"Name\" FROM pg_type t\n> UNION ALL\n> SELECT DISTINCT c.relname as \"Name\" FROM pg_class c\n> ;\n> \n> (It doesn't make much sense as it stands, but I have picked out the\n> offending parts.)\n> \n> I get\n> NOTICE: equal: don't know whether nodes of type 719 are equal\n> \n> Actually, I get several of these. Depending on the number of select\n> clauses, I get 1 for the third, 2 for the 4th, 3 for the 5th, etc. So the\n> above query gives me 6 notices. A query with only two select clauses gives\n> me none.\n> \n> Without the DISTINCTs everything goes fine.\n> \n> Now this seems to have something to do with a lack of an equal operator\n> for the type \"name\", right? Interestingly enough, the type name has oid\n> 19, whereas type 719 is \"_circle\", or what does the 719 refer to?\n> \n> Thanks,\n> \tPeter\n> \n> -- \n> Peter Eisentraut Sernanders vaeg 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n> \n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 17 Oct 1999 15:57:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] don't know whether nodes of type 719 are equal" }, { "msg_contents": "I also got this message with UNION and distinct. \nI've tested 6.5.2 and 6.5.3. Current (6.6 or 7.0 ?) works fine\n\nselect distinct a.msg_id, c.status_set_date, c.title\n from Message_Keyword_map a, messages c, keywords d\n where c.status_id =1 and d.name ~* 'sun' and a.key_id=d.key_id\n and c.msg_id=a.msg_id\n union\n select distinct a.msg_id, a.status_set_date, a.title\n from messages a where a.status_id = 1 and a.title ~* 'sun';\n\n\nNOTICE: equal: don't know whether nodes of type 719 are equal\n\n\tOleg\nThis is with postgres 6.5.3\nOn Sun, 17 Oct 1999, Bruce Momjian wrote:\n\n> Date: Sun, 17 Oct 1999 15:57:52 -0400 (EDT)\n> From: Bruce Momjian <[email protected]>\n> To: Peter Eisentraut <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] don't know whether nodes of type 719 are equal\n> \n> I think someone changed the database schema. Try cvs update then\n> initdb. Could it be that the row of type circle is causing it?\n> \n> \n> I don't get that here, and 719 is certainly a strange number to be\n> getting\n> \n> > ... yeah, me neither.\n> > \n> > Hi all, I have an interesting one for you today. I'm writing a new \\dd\n> > command (one that actually works), and I have come across the following\n> > situation:\n> > \n> > SELECT DISTINCT a.aggname as \"Name\" FROM pg_aggregate a\n> > UNION ALL\n> > SELECT DISTINCT p.proname as \"Name\" FROM pg_proc p\n> > UNION ALL\n> > SELECT DISTINCT o.oprname as \"Name\" FROM pg_operator o\n> > UNION ALL\n> > SELECT DISTINCT t.typname as \"Name\" FROM pg_type t\n> > UNION ALL\n> > SELECT DISTINCT c.relname as \"Name\" FROM pg_class c\n> > ;\n> > \n> > (It doesn't make much sense as it stands, but I have picked out the\n> > offending parts.)\n> > \n> > I get\n> > NOTICE: equal: don't know whether nodes of type 719 are equal\n> > \n> > Actually, I get several of these. Depending on the number of select\n> > clauses, I get 1 for the third, 2 for the 4th, 3 for the 5th, etc. So the\n> > above query gives me 6 notices. A query with only two select clauses gives\n> > me none.\n> > \n> > Without the DISTINCTs everything goes fine.\n> > \n> > Now this seems to have something to do with a lack of an equal operator\n> > for the type \"name\", right? Interestingly enough, the type name has oid\n> > 19, whereas type 719 is \"_circle\", or what does the 719 refer to?\n> > \n> > Thanks,\n> > \tPeter\n> > \n> > -- \n> > Peter Eisentraut Sernanders vaeg 10:115\n> > [email protected] 75262 Uppsala\n> > http://yi.org/peter-e/ Sweden\n> > \n> > \n> > \n> > ************\n> > \n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 18 Oct 1999 00:21:48 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] don't know whether nodes of type 719 are equal" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> SELECT DISTINCT t.typname as \"Name\" FROM pg_type t\n> UNION ALL\n> SELECT DISTINCT c.relname as \"Name\" FROM pg_class c\n> ;\n> (It doesn't make much sense as it stands, but I have picked out the\n> offending parts.)\n\n> I get\n> NOTICE: equal: don't know whether nodes of type 719 are equal\n\n(consults include/nodes/nodes.h ... hmm, \"SortClause\" ...)\n\nThis is probably happening because UNION/INTERSECT processing tries\nto simplify the node tree using cnfify(), which is really designed\nto work on expressions not whole queries. Ordinarily you can't get a\nsort clause into a subclause of a UNION ... but I guess with DISTINCT\nyou can. (I bet UNIONing things containing GROUP BY fails too,\nsince equal() doesn't know about GroupClause nodes either.)\n\nA quick-fix answer is to extend equal(), of course, but I've been\nwondering for a while why we are cnfify'ing UNION/INTERSECT trees\nat all. The odds of being able to simplify the tree that way seem\nsmall, and what's worse is that UNION does *not* have the same\nsemantics as OR (eg, foo UNION foo should *not* be simplified to foo)\nbut cnfify doesn't know that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Oct 1999 17:47:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] don't know whether nodes of type 719 are equal " }, { "msg_contents": "> (consults include/nodes/nodes.h ... hmm, \"SortClause\" ...)\n> \n> This is probably happening because UNION/INTERSECT processing tries\n> to simplify the node tree using cnfify(), which is really designed\n> to work on expressions not whole queries. Ordinarily you can't get a\n> sort clause into a subclause of a UNION ... but I guess with DISTINCT\n> you can. (I bet UNIONing things containing GROUP BY fails too,\n> since equal() doesn't know about GroupClause nodes either.)\n> \n> A quick-fix answer is to extend equal(), of course, but I've been\n> wondering for a while why we are cnfify'ing UNION/INTERSECT trees\n> at all. The odds of being able to simplify the tree that way seem\n> small, and what's worse is that UNION does *not* have the same\n> semantics as OR (eg, foo UNION foo should *not* be simplified to foo)\n> but cnfify doesn't know that.\n\nMy recollection is that cnfify is not called to simplify, but was\nrequired at one point so you got the right output. That may no longer\nbe the case, but I know it was at some point. Before installed kqso,\nthe author tried to just skip cnfify, and the query with OR's didn't\nwork. Of course, none of us understood cnfify(), so just scratched our\nheads.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 17 Oct 1999 20:49:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] don't know whether nodes of type 719 are equal" }, { "msg_contents": "> I also got this message with UNION and distinct. \n> I've tested 6.5.2 and 6.5.3. Current (6.6 or 7.0 ?) works fine\n\nMe too. Current works fine, but 6.5.2 not.\n---\nTatsuo Ishii\n", "msg_date": "Mon, 18 Oct 1999 10:22:27 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] don't know whether nodes of type 719 are equal " }, { "msg_contents": "> > I also got this message with UNION and distinct. \n> > I've tested 6.5.2 and 6.5.3. Current (6.6 or 7.0 ?) works fine\n> \n> Me too. Current works fine, but 6.5.2 not.\n\nBetter than 6.5.* working and current failing. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 17 Oct 1999 22:03:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] don't know whether nodes of type 719 are equal" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> My recollection is that cnfify is not called to simplify, but was\n> required at one point so you got the right output. That may no longer\n> be the case, but I know it was at some point.\n\nFor ordinary qual expressions, the only thing cnfify does that is\nactually *necessary* for downstream processing is that it changes\nthe top-level boolean condition into an implicitly-ANDed list of\nclauses. That is, (AND A B ...) becomes (A B ...), anything else\nbecomes a singleton list ((X)). So you could replace cnfify with\nmake_ands_implicit() and things would still work. (I believe\nPeter Andrews is presently getting useful work done with cnfify\nlobotomized in more or less that fashion --- he's using queries\nthat expand unpleasantly with normal cnfify.)\n\nI am not sure whether this is true for UNION/INTERSECT processing\nthough. There are some really ugly kluges in UNION/INTERSECT, and\nI don't think I understand all of its dependencies.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Oct 1999 00:16:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] don't know whether nodes of type 719 are equal " }, { "msg_contents": "> I am not sure whether this is true for UNION/INTERSECT processing\n> though. There are some really ugly kluges in UNION/INTERSECT, and\n> I don't think I understand all of its dependencies.\n\nYes, that code was not our finest hour. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 18 Oct 1999 00:25:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] don't know whether nodes of type 719 are equal" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> I also got this message with UNION and distinct. \n>> I've tested 6.5.2 and 6.5.3. Current (6.6 or 7.0 ?) works fine\n\n> Me too. 
Current works fine, but 6.5.2 not.\n\nNo, it's still there in current:\n\nregression=> explain select distinct * from tenk1\nregression-> union select distinct * from tenk1;\nNOTICE: equal: don't know whether nodes of type 719 are equal\nNOTICE: QUERY PLAN:\n... etc ...\n\nIt might be a little harder to get in current. I think that in\na fit of code beautification I rearranged _equalQuery so that the\nsort/group clauses are tested later than they used to be. You\nwon't see this notice if _equalQuery discovers that the query\nnodes are non-identical before it gets to the sort specification.\nThus:\n\nregression=> explain select distinct * from tenk1 t1 \nregression-> union select distinct * from tenk1 t2; \nNOTICE: QUERY PLAN:\n... etc ...\n\nThis entirely equivalent query has different refnames in the rangetables\nof the two subselects, which means equal() considers the nodes\nnon-identical; and the rangetable is checked by equalQuery before it\ngets to the sort clause. So the sort clauses are never compared.\nBingo, no message.\n\nBeing harder to get doesn't make it any less a bug, of course.\nBut I'm not especially concerned about it --- the query works,\nthe message is just noise; so I think we can live with it until\nwe get around to doing the major querytree redesign that we need\nto do for subselects in FROM as well as some less pressing problems\nlike this one...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Oct 1999 01:10:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] don't know whether nodes of type 719 are equal " } ]
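A practical footnote to the notice itself: it is harmless, and in the common case it can be avoided, because a per-arm DISTINCT combined with plain UNION (as in Oleg's query) is redundant; UNION already removes duplicate rows, only UNION ALL keeps them. So

SELECT DISTINCT t.typname FROM pg_type t
UNION
SELECT DISTINCT c.relname FROM pg_class c;

returns the same set of rows as

SELECT t.typname FROM pg_type t
UNION
SELECT c.relname FROM pg_class c;

and the second form gives equal() no SortClause nodes to chew on. (Peter's original, which combines the arms with UNION ALL, is different: there the DISTINCT inside each arm really does affect the result.)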
[ { "msg_contents": "what about this:\n( it would be nice to have it working, specially for copying values from files into table with default fields, having the default fields doing their job or initialising tables using reduced set of columns )\n\nmydb=> create sequence MYSEQ;\nCREATE\nmydb=> create table MYTAB ( ID int4 default nextval('MYSEQ'), NAME text );\nCREATE\nmydb=> create view MYVIEW as select name from MYTAB;\nCREATE\nmydb=> copy MYVIEW from stdin;\nEnter info followed by a newline\nEnd with a backslash and a period on a line by itself.\n>> jim\n>> john\n>> jack\n>> \\.\nmydb=> select MYVIEW.*;\nname\n----\n(0 rows)\n\n\n--\ndan peder\[email protected]\n\n", "msg_date": "Sun, 17 Oct 1999 21:56:45 +0100", "msg_from": "=?iso-8859-2?Q?Daniel_P=E9der?= <[email protected]>", "msg_from_op": true, "msg_subject": "insertable views - not copy-able ?" }, { "msg_contents": ">\n> what about this:\n> ( it would be nice to have it working, specially for copying values from files into table with default fields, having the default fields doing their job or initialising tables using reduced set of columns )\n>\n> mydb=> create sequence MYSEQ;\n> CREATE\n> mydb=> create table MYTAB ( ID int4 default nextval('MYSEQ'), NAME text );\n> CREATE\n> mydb=> create view MYVIEW as select name from MYTAB;\n> CREATE\n> mydb=> copy MYVIEW from stdin;\n\n First this setup wouldn't work with INSERT too. The INSTEAD\n rule for INSERT is missing. Second COPY isn't a rewritable\n statement, and it will not become such since only commands\n that have a rangetable and a targetlist can be handled by the\n rewriter at all.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 18 Oct 1999 10:53:00 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] insertable views - not copy-able ?" }, { "msg_contents": "At 22:56 +0200 on 17/10/1999, =?iso-8859-2?Q?Daniel_P=E9der?= wrote:\n\n\n> what about this:\n> ( it would be nice to have it working, specially for copying values from\n>files into table with default fields, having the default fields doing\n>their job or initialising tables using reduced set of columns )\n>\n> mydb=> create sequence MYSEQ;\n> CREATE\n> mydb=> create table MYTAB ( ID int4 default nextval('MYSEQ'), NAME text );\n> CREATE\n> mydb=> create view MYVIEW as select name from MYTAB;\n> CREATE\n> mydb=> copy MYVIEW from stdin;\n\nSeems this view is neither insertable nor copyable. To make it insertable,\nyou have to define a rule, you know.\n\nIn any case, I don't think it would work for copy - the rule I mean.\n\nIMO, if you want to copy data and have defaults work, you copy the data\ninto a temporary table with only the necessary fields, and then issue an\ninsert:\n\nCREATE TEMP TABLE tmp_tab ( name text );\nCOPY tmp_tab FROM stdin;\njim\njohn\njack\n\\.\nINSERT INTO mytab (name) SELECT name FROM tmp_tab;\nDROP TABLE tmp_tab;\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n", "msg_date": "Tue, 19 Oct 1999 15:25:19 +0200", "msg_from": "Herouth Maoz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] insertable views - not copy-able ?" } ]
[ { "msg_contents": "--- Peter Eisentraut <[email protected]> wrote:\n> Anyway, the \\dd command was kind of odd in that it\n> only displayed comments\n> but not what the comments went with.\n> \n> The way I currently have implemented the comments is\n> like this:\n> (Ignoring the actual output format, which is\n> currently under _heavy_\n> development.)\n> \n> peter@localhost:5432 play=> \\d foobar\n> Table \"foobar\"\n> \n> Attribute | Type | Info\n> ----------+--------------+---------\n> a | numeric(9,2) | not null\n> b | varchar(5) |\n> c | char(10) |\n> d | char(1) |\n> \n> peter@localhost:5432 play=> \\set description \"\"\n> peter@localhost:5432 play=> \\d foobar\n> Table \"foobar\"\n> \n> Attribute | Type | Info | Description\n> ----------+--------------+----------+------------\n> a | numeric(9,2) | not null |\n> b | varchar(5) | |\n> c | char(10) | |\n> d | char(1) | |\n> \n> peter@localhost:5432 play=> \\l\n> List of databases\n> \n> Database | Owner | Encoding | \n> Description\n>\n----------+----------+----------+---------------------------------------\n> play | postgres | 0 |\n> pwdb | peter | 0 |\n> template1 | postgres | 0 |\n> twig | httpd | 0 | This is for that\n> Twig mailer under PHP\n> \n> (4 rows)\n> \n> peter@localhost:5432 play=> \\unset description\n> peter@localhost:5432 play=> \\l\n> List of databases\n> \n> Database | Owner | Encoding\n> ----------+----------+---------\n> play | postgres | 0\n> pwdb | peter | 0\n> template1 | postgres | 0\n> twig | httpd | 0\n> \n> (4 rows)\n> \n> peter@localhost:5432 play=> \\dd\n> Object descriptions\n> \n> Name | What | Description\n>\n-------------------+----------+------------------------------------------\n> ! | operator | fraction\n> !! | operator | fraction\n> !!= | operator | not in\n> !~ | operator | does not match\n> regex., case-sensitive\n> !~* | operator | does not match\n> regex., case-insensitive\n> !~~ | operator | does not match LIKE\n> expression\n> # | operator | intersection point\n> --<snip>--\n> varcharne | function | not equal\n> varcharoctetlen | function | octet length\n> version | function | PostgreSQL version\n> string\n> width | function | box width\n> xid | type | transaction id\n> xideq | function | equal\n> | | operator | start of interval\n> |/ | operator | square root\n> || | operator | concatenate\n> ||/ | operator | cube root\n> ~ | operator | contains\n> ~ | operator | matches regex.,\n> case-sensitive\n> ~ | operator | path contains point?\n> ~ | operator | polygon contains\n> point?\n> ~* | operator | matches regex.,\n> case-insensitive\n> ~= | operator | same as\n> ~~ | operator | matches LIKE\n> expression\n> \n> (973 rows)\n> \n> peter@localhost:5432 play=> \\dd version\n> Object descriptions\n> \n> Name | What | Description\n> --------+----------+--------------------------\n> version | function | PostgreSQL version string\n> \n> (1 row)\n> \n> \n> Now if we just put a description on every\n> pre-installed entity (in\n> particular system tables), this is almost like a\n> built-in quick reference!\n> \n> The \\dd doesn't do rules yet, I think. 
But I'll put\n> that in soon.\n> \n> So do you see that as a feasible solution?\n> \n> \t-Peter\n\nI can't speak for others, but I sure like it.\n\nMike Mascari\n([email protected])\n\n\n=====\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n", "msg_date": "Sun, 17 Oct 1999 16:00:12 -0700 (PDT)", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: ORACLE COMMENT statement" }, { "msg_contents": "\nWow, that is nice.\n\n\n\n> --- Peter Eisentraut <[email protected]> wrote:\n> > Anyway, the \\dd command was kind of odd in that it\n> > only displayed comments\n> > but not what the comments went with.\n> > \n> > The way I currently have implemented the comments is\n> > like this:\n> > (Ignoring the actual output format, which is\n> > currently under _heavy_\n> > development.)\n> > \n> > peter@localhost:5432 play=> \\d foobar\n> > Table \"foobar\"\n> > \n> > Attribute | Type | Info\n> > ----------+--------------+---------\n> > a | numeric(9,2) | not null\n> > b | varchar(5) |\n> > c | char(10) |\n> > d | char(1) |\n> > \n> > peter@localhost:5432 play=> \\set description \"\"\n> > peter@localhost:5432 play=> \\d foobar\n> > Table \"foobar\"\n> > \n> > Attribute | Type | Info | Description\n> > ----------+--------------+----------+------------\n> > a | numeric(9,2) | not null |\n> > b | varchar(5) | |\n> > c | char(10) | |\n> > d | char(1) | |\n> > \n> > peter@localhost:5432 play=> \\l\n> > List of databases\n> > \n> > Database | Owner | Encoding | \n> > Description\n> >\n> ----------+----------+----------+---------------------------------------\n> > play | postgres | 0 |\n> > pwdb | peter | 0 |\n> > template1 | postgres | 0 |\n> > twig | httpd | 0 | This is for that\n> > Twig mailer under PHP\n> > \n> > (4 rows)\n> > \n> > peter@localhost:5432 play=> \\unset description\n> > peter@localhost:5432 play=> \\l\n> > List of databases\n> > \n> > Database | Owner | Encoding\n> > ----------+----------+---------\n> > play | postgres | 0\n> > pwdb | peter | 0\n> > template1 | postgres | 0\n> > twig | httpd | 0\n> > \n> > (4 rows)\n> > \n> > peter@localhost:5432 play=> \\dd\n> > Object descriptions\n> > \n> > Name | What | Description\n> >\n> -------------------+----------+------------------------------------------\n> > ! | operator | fraction\n> > !! 
| operator | fraction\n> > !!= | operator | not in\n> > !~ | operator | does not match\n> > regex., case-sensitive\n> > !~* | operator | does not match\n> > regex., case-insensitive\n> > !~~ | operator | does not match LIKE\n> > expression\n> > # | operator | intersection point\n> > --<snip>--\n> > varcharne | function | not equal\n> > varcharoctetlen | function | octet length\n> > version | function | PostgreSQL version\n> > string\n> > width | function | box width\n> > xid | type | transaction id\n> > xideq | function | equal\n> > | | operator | start of interval\n> > |/ | operator | square root\n> > || | operator | concatenate\n> > ||/ | operator | cube root\n> > ~ | operator | contains\n> > ~ | operator | matches regex.,\n> > case-sensitive\n> > ~ | operator | path contains point?\n> > ~ | operator | polygon contains\n> > point?\n> > ~* | operator | matches regex.,\n> > case-insensitive\n> > ~= | operator | same as\n> > ~~ | operator | matches LIKE\n> > expression\n> > \n> > (973 rows)\n> > \n> > peter@localhost:5432 play=> \\dd version\n> > Object descriptions\n> > \n> > Name | What | Description\n> > --------+----------+--------------------------\n> > version | function | PostgreSQL version string\n> > \n> > (1 row)\n> > \n> > \n> > Now if we just put a description on every\n> > pre-installed entity (in\n> > particular system tables), this is almost like a\n> > built-in quick reference!\n> > \n> > The \\dd doesn't do rules yet, I think. But I'll put\n> > that in soon.\n> > \n> > So do you see that as a feasible solution?\n> > \n> > \t-Peter\n> \n> I can't speak for others, but I sure like it.\n> \n> Mike Mascari\n> ([email protected])\n> \n> \n> =====\n> \n> __________________________________________________\n> Do You Yahoo!?\n> Bid and sell for free at http://auctions.yahoo.com\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 17 Oct 1999 20:50:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: ORACLE COMMENT statement" } ]
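For readers wondering what sits behind a listing like the \dd output above: it is essentially pg_description joined against each catalog that can carry a comment, with the arms glued together by UNION ALL. One possible shape, shown here purely as an illustration (psql's real query may differ in detail):

SELECT p.proname AS "Name", 'function' AS "What", d.description AS "Description"
FROM pg_proc p, pg_description d
WHERE d.objoid = p.oid
UNION ALL
SELECT o.oprname, 'operator', d.description
FROM pg_operator o, pg_description d
WHERE d.objoid = o.oid
UNION ALL
SELECT t.typname, 'type', d.description
FROM pg_type t, pg_description d
WHERE d.objoid = t.oid
ORDER BY 1;

Aggregates, tables and, eventually, rules are added the same way, one more arm per catalog.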
[ { "msg_contents": "Hi all,\nmy problem is simple but very frustrating,\n\nI wrote some program under Windows 95 (MS VC++ 4.0).\nThis program uses MFC ODBC classes (CRecordset and CDatabase) to\ncooperate with my PostgreSQL 6.3 (installed on RedHat 5.1 linux) via\npsqlodbc driver.\nI can connect to my databse, I can scroll, update, insert or delete records.\nEverything's fine.\nProblem starts when more than one user has access to the database.\n\nExample:\n\nPerson A runs my program, connects to PostgreSQL database and opens\ntables. Everything works fine.\nPerson B runs my program,connects to PostgreSQL database and opens\ntables. Also everything works fine.\nBoth persons can scroll records and read their contents without any problem.\n\nNow, let's say, Person A wants to make some modifications (or even add new\nrecord)\nto some records in some table. My program's execution stops and waits until\nPerson B exits application (!!!!!!).\n\nps ax command on my Linux server shows that there are two postgres processes\nrunning with\nsome command line options (it's OK since there're 2 persons logged in).\nAnd of course, there's also postmaster process running (with -i command line\noption specified).\nWhat's the reason of such a strange behavior? It seems, that postgres locks\nentire database\nand makes any updates impossible. Could any1 help me with that?\nI'd appreciate any help...\n\nyacol\n\n\n", "msg_date": "Mon, 18 Oct 1999 16:25:53 +0200", "msg_from": "\"Jacek Witczak\" <[email protected]>", "msg_from_op": true, "msg_subject": "problem with PostgreSQL and LAN" } ]
[ { "msg_contents": "+----------------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | Phone: 609 737 6383 |\n| President, Congenomics, Inc. | Fax: 609 737 7528 |\n| 114 W. Franklin Ave, Suite K1,10 | email: [email protected] |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+----------------------------------+------------------------------------+\n\nOn Tue, 23 Apr 1996, Chad Robinson wrote:\n\n> On Tue, 23 Apr 1996, Jolly Chen wrote:\n> \n> > I've posted a TODO list on the postgres95 web site \n> > (http://s2k-ftp.CS.Berkeley.EDU:8000/postgres95/www/todo.html)\n> > I've casually sorted the list by priority and I have some editorial\n> > comments on some of them. \n> > \n> > If all the items on the TODO list were completed, postgres95 would be\n> > much improved, and would really be a viable replacement for commercial\n> > RDBMSs in some settings. Some of the items require quite a bit of work\n> > and deep knowledge of postgres95 internals, though. We would need a few\n> > contributors with quite a lot of volunteer hours to make this happen\n> > anytime soon. (A large number of contributors each with only a little\n> > bit of time to contribute would not be equivalent)\n> \n> Some of these things were on my own list to do also. I'd like to start\n> working on some of them, but the thing is, I'd also like to see a better\n> distribution form. The last update was several months ago, even though\n> there are several known `good' patches that need to be applied to fix\n> various bugs. What are we missing? :-) Do we need a maintainer? We only\n> have a 28.8 link right now (T1 in a few months) but I'd be happy to provide\n> at least a basic FTP server with space, and some time to process patches,\n> updates, and so forth. I can at least be a mirror...\n>\n\n\tIf it helps, I'd be willing to setup a cvs database, including\nappropriate accounts for a core few developers that patches can go through.\n>From there, it wouldn't be too hard to do a weekly \"distribution\" that is\nftpable.\n\n\tI don't know enough about the server backend to offer much more\nthen that :(\n\nMarc G. Fournier [email protected]\nSystems Administrator @ ki.net [email protected]", "msg_date": "Mon, 18 Oct 1999 17:01:20 -0400 (EDT)", "msg_from": "[email protected] (Robert E. Bruccoleri)", "msg_from_op": true, "msg_subject": "Historical post of Marc to support Postgres95 development" } ]
[ { "msg_contents": "+----------------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | Phone: 609 737 6383 |\n| President, Congenomics, Inc. | Fax: 609 737 7528 |\n| 114 W. Franklin Ave, Suite K1,10 | email: [email protected] |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+----------------------------------+------------------------------------+\n\nHi...\n\n\tAwhile back, there was talk of a TODO list and development\nmoving forward on Postgres95...at which point in time I volunteered\nto put up a cvs archive and sup server so that making updates (and getting\nat the \"newest source code\") was easier to do...\n\n\tSo far as I can see on the list itself, this has fallen to the wayside,\nwith everyone posting about database corruptions and whatnot...but few\nsolutions being brought up :( right now, I'm in the middle of cursing over\nthe fact that I can't seem to get a project I'm working on to be stable on \neither a Solaris box or a FreeBSD box (the FreeBSD box is th emore stable of\nthe two, mind you)...I've just rebuilt the FreeBSD server...\n\n\tPersonally, I think that both Jolly and Andrew have done a fantastic\njob of bringing it to its currently level, but they, like most of the ppl on\nthis list, have full time jobs that sap alot of their time...\n\n\t...so, unless someone out there has already done this, and \nunless Jolly/Andrew tell me I can't (guys?)...I'm going to go ahead with\nwhat I wanted to do a few months ago...setup a development site similar to \nwhat is done with FreeBSD...\n\n\tFirst stage will be to get a cvs archive of postgres 1.01 online\ntonight, with a sup server so that everyone has access to the source code.\n\n\tIf anyone has any patches they wish to submit based off of 1.01,\nplease send them to [email protected] and I'll commit those in as soon as cvs\nis up and running.\n\n\tUnless there are any disaggremenets with this (or someone else has\ndone this that I missed in mail...sorry if I did...)...I'll send out further\ndata on this as soon as its up and running...\n\nMarc G. Fournier [email protected]\nSystems Administrator @ ki.net [email protected]", "msg_date": "Mon, 18 Oct 1999 17:04:10 -0400 (EDT)", "msg_from": "[email protected] (Robert E. Bruccoleri)", "msg_from_op": true, "msg_subject": "Another historical message from the early days of PostgreSQL\n\tdevelopment" }, { "msg_contents": "\"Robert E. Bruccoleri\" wrote:\n> \n> Subject: [PG95]: Developers interested in improving PG95?\n> Date: Mon, 08 Jul 1996 22:12:19 -0400 (EDT)\n ^^^^^^^^^^^\nLet's consider this as birthday of our project?\n-:)\n\nVadim\n", "msg_date": "Tue, 19 Oct 1999 12:29:40 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another historical message from the early days of\n\tPostgreSQL development" } ]
[ { "msg_contents": "Does this look strange to anyone:\n\ntest=> create table kk1 (born date);\nCREATE\ntest=> select * from kk1;\nborn\n----\n(0 rows)\n\ntest=> insert into kk1 values ('1/1/1990');\nINSERT 18588 1\ntest=> select * from kk1;\n born\n----------\n01-01-1990\n(1 row)\n\nLook how 'born' is right-shifted in the column. Any idea why?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 19 Oct 1999 00:13:31 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "funny psql output" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> test=> insert into kk1 values ('1/1/1990');\n> INSERT 18588 1\n> test=> select * from kk1;\n> born\n> ----------\n> 01-01-1990\n> (1 row)\n\n> Look how 'born' is right-shifted in the column. Any idea why?\n\nI think libpq's print routine is deciding that the column is numeric.\n(all digits and minus signs ... and IIRC it's not very picky about\nwhere the minus signs are...)\n\nPerhaps Peter will fix this during his massive rewrite.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Oct 1999 01:05:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] funny psql output " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > test=> insert into kk1 values ('1/1/1990');\n> > INSERT 18588 1\n> > test=> select * from kk1;\n> > born\n> > ----------\n> > 01-01-1990\n> > (1 row)\n> \n> > Look how 'born' is right-shifted in the column. Any idea why?\n> \n> I think libpq's print routine is deciding that the column is numeric.\n> (all digits and minus signs ... and IIRC it's not very picky about\n> where the minus signs are...)\n> \n> Perhaps Peter will fix this during his massive rewrite.\n\nOh, that's fine. I just never realized it did that, but I see it now in\nall my numeric columns. Interesting.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 19 Oct 1999 05:00:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] funny psql output" } ]
[ { "msg_contents": "\n------- Forwarded Message\n\nDate: Tue, 19 Oct 1999 07:45:23 +1300\nFrom: Andrew McMillan <[email protected]>\nTo: Oliver Elphick <[email protected]>\nSubject: Postgres problems with 6.4 / 6.5\n\nHi,\n\nI have a couple of problems with Postgres 6.5 and I'm not sure where to\nput them (who to tell?).\n\nDo you know if there is a place to notify bugs to for Postgres? I am\nusing the Debian packages, so I can enter them there if necessary. \nAnyway, here's a brief description of the bugs I'm experiencing:\n\n1)\tDoing a pg_dump and psql -f on a database I get lots of errors saying\n\"query buffer max length of 16384 exceeded\" and then (eventually) I get\na segmentation fault. The load lines don't seem to be that large (the\nfull insert statement, including error, is maybe 220 bytes. It seems\nthat if I split the dumped file into 40-line chunks and do a vacuum\nafter each one, I can get the whole thing to load without the errors.\n\nI have only tested this on Version 6.5.1.\n\n\n2)\tI have a table with around 85 fields in it, and a cron job running\nevery 20 minutes which did a \"SELECT INTO ...\" from that table, did some\nprocessing and then DROPped the new table. After a few days I found\nthat my database was around 13MB, which seemed odd. A couple of days\nlater it was around 17MB, and only a couple of records had been added.\n\nFurther investigation reveals that if I do a VACUUM immediately after\nthe DROP TABLE that things are OK, but otherwise the pg_attribute* files\nin the database directory just get bigger and bigger. This is even the\ncase when I do a VACUUM after every second 'DROP TABLE' - for the space\nto be recovered, I have to VACUUM immediately after a DROP TABLE, which\ndoesn't seem right somehow.\n\nThe same behaviour seems to happen on both version 6.5.1 and 6.4.3 .\n\n\n\nIf you can pass these bugs on to an appropriate person I would\nappreciate it. In our company we are just starting to use Postgres and\nI would like to see it becoming an important part of our repertoire.\n\nMany thanks,\n\t\t\t\t\tAndrew McMillan.\n\n_____________________________________________________________________\n Andrew McMillan, e-mail: [email protected]\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n\n\n------- End of Forwarded Message\n\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Commit thy way unto the LORD; trust also in him and \n he shall bring it to pass.\" Psalms 37:5 \n\n\n", "msg_date": "Tue, 19 Oct 1999 06:45:43 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres problems with 6.4 / 6.5 (fwd)" }, { "msg_contents": "Hi Andrew,\n\n> 1)\tDoing a pg_dump and psql -f on a database I get lots of errors saying\n> \"query buffer max length of 16384 exceeded\" and then (eventually) I get\n> a segmentation fault. The load lines don't seem to be that large (the\n> full insert statement, including error, is maybe 220 bytes. 
It seems\n> that if I split the dumped file into 40-line chunks and do a vacuum\n> after each one, I can get the whole thing to load without the errors.\n\nI think there must be some specific peculiarity in your data that's\ncausing this; certainly lots of people rely on pg_dump for backup\nwithout problems. Can you provide a sample script that triggers the\nproblem?\n\n> Further investigation reveals that if I do a VACUUM immediately after\n> the DROP TABLE that things are OK, but otherwise the pg_attribute* files\n> in the database directory just get bigger and bigger. This is even the\n> case when I do a VACUUM after every second 'DROP TABLE' - for the space\n> to be recovered, I have to VACUUM immediately after a DROP TABLE, which\n> doesn't seem right somehow.\n\nThat does seem odd. If you just create and drop tables like mad then\nI'd expect pg_class, pg_attribute, etc to grow --- the rows in them\nthat describe your dropped tables don't get recycled until you vacuum.\nBut vacuum should reclaim the space.\n\nActually, wait a minute. Is it pg_attribute itself that fails to shrink\nafter vacuum, or is it the indexes on pg_attribute? IIRC we have a known\nproblem with vacuum failing to reclaim space in indexes. There is a\npatch available that improves the behavior for 6.5.*, and I believe that\nimproving it further is on the TODO list for 7.0.\n\nI think you can find that patch in the patch mailing list archives at\nwww.postgresql.org, or it may already be in 6.5.2 (or failing that,\nin the upcoming 6.5.3). [Anyone know for sure?]\n\nFor user tables it's possible to work around the problem by dropping and\nrebuilding indexes every so often, but DO NOT try that on pg_attribute.\nAs a stopgap solution you might consider not dropping and recreating\nyour temp table; leave it around and just delete all the rows in it\nbetween uses.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Oct 1999 11:01:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Postgres problems with 6.4 / 6.5 (fwd) " } ]
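A short sketch of the two stopgaps Tom suggests for the pg_attribute growth, with a made-up work-table name; the point is only the ordering of the statements, not the schema:

-- option 1: reclaim the dropped table's catalog rows right away
DROP TABLE report_work;
VACUUM;

-- option 2: keep the table and just empty it between runs instead of dropping it
DELETE FROM report_work;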
[ { "msg_contents": "unsubscribe\n\n\n", "msg_date": "Tue, 19 Oct 1999 06:17:15 +0000 (GMT)", "msg_from": "\"Victoria W.\" <[email protected]>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "Jan, I like it.\n\nThere's an oldish pic of me on http://www.retep.org.uk/contact/ - it's\nabout 6 months old, so I could do a new one if you want.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Peter Eisentraut [mailto:[email protected]]\nSent: 19 October 1999 11:54\nTo: Jan Wieck\nCc: [email protected]\nSubject: [HACKERS] Re: New developer globe\n\n\nI think that we have by far the coolest developers page of any\nopen-source'ish project out there. What I miss is something like a line\nwhat people do in their Real Life. From what I gather most people around\nhere are probably one of programmer, system admin, or in academia. But\nit\nwould put a refreshing realistic context to the whole thing.\n\n*** puts on asbestos suit ***\n\nAlso I think putting the photos below the globe (at least in addition)\nmight be better because for people digging around in Europe or the North\nAmerican east coast it obscures too much and it's also hard to get an\noverview.\n\n\t-Peter\n\n\nOn Mon, 18 Oct 1999, Jan Wieck wrote:\n\n> Yepp, you're right. Do a reload and look at my pin (on the\n> final page I'll be shaved a little better, it's just a quick\n> grab from my camera taken 20 minutes ago).\n> \n> I really like that very much, now this is MY entry and not\n> just some information about me. Thank you very much, Matthew!\n> \n> It's an 80x80 JPG and only 1931 bytes. A good size to be\n> recognizable and not overloading the popup.\n> \n> So would anyone please send me some picture (or just a note\n> that he doesn't want one). If it's not actually 80x80 ready\n> to stick in, please send something at least 3x the size so I\n> can crop and downscale it without much quality loss.\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n************\n", "msg_date": "Tue, 19 Oct 1999 12:10:19 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: New developer globe" }, { "msg_contents": "Peter Mount wrote:\n\n> Jan, I like it.\n\n Tnx.\n\n> There's an oldish pic of me on http://www.retep.org.uk/contact/ - it's\n> about 6 months old, so I could do a new one if you want.\n\n If I want? It's you to decide!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 19 Oct 1999 13:42:01 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New developer globe" } ]
[ { "msg_contents": "Use the one that's currently online.\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]\nSent: 19 October 1999 12:42\nTo: [email protected]\nCc: [email protected]; [email protected]\nSubject: Re: [HACKERS] Re: New developer globe\n\n\nPeter Mount wrote:\n\n> Jan, I like it.\n\n Tnx.\n\n> There's an oldish pic of me on http://www.retep.org.uk/contact/ - it's\n> about 6 months old, so I could do a new one if you want.\n\n If I want? It's you to decide!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n", "msg_date": "Tue, 19 Oct 1999 12:46:38 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: New developer globe" } ]
[ { "msg_contents": "Here is something I read as part of the Alladin Ghostscript 6.0 beta\nrelease. I must admit I don't understand the logic of the issue. It\nseems the issue is that you can link non-GPL to GPL libraries, but you\ncan't distribute the result. Maybe it doesn't apply to us because we\ndon't copyright our code.\n\nIt seems to suggest that we could be prevented from distributing\nreadline in the future. Not sure though.\n\nIt sounds like the old US crypto restriction where you couldn't\ndistribute software that had hooks in it to add crypto.\n\nRemoval of readline would certainly affect psql users.\n\nThe actual file is gs5.94/doc/Make.htm.1\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n GNU readline\n \n Aladdin Ghostscript does not include an interface to GNU readline.\n Even though the GNU License (GPL) allows linking GPL'ed code (such as\n the GNU readline library package) with non-GPL'ed code (such as all\n the rest of Ghostscript) if one doesn't distribute the result, the\n Free Software Foundation, creators of the GPL, have told us that in\n their opinion, the GPL forbids distributing non-GPL'ed code that is\n merely intended to be linked with GPL'ed code. We understand that FSF\n takes this position in order to prevent the construction of software\n that is partly GPL'ed and partly not GPL'ed, even though the GPL does\n not in fact literally forbid this (it only forbids distribution of\n such software). We think that FSF's position is both legally\n questionable and harmful to users, but we do not have the resources to\n challenge it, especially since FSF's attorney apparently supports it.\n Therefore, even though we added a user-contributed interface to GNU\n readline in internal Aladdin Ghostscript version 5.71 and had it\n working in version 5.93 (the next-to-last beta version before the 6.0\n release), we have removed it from the Aladdin Ghostscript 6.0\n distribution.\n \n GNU Ghostscript distributions will include support for GNU readline.\n As with other GNU Ghostscript components that are not included in\n Aladdin Ghostscript, Aladdin will not attempt to run, link, or even\n compile this code, or keep it current across changes in the rest of\n Ghostscript. We will, however, welcome bug fixes or updates, and\n distribute them with subsequent releases of GNU Ghostscript.\n \n The first GNU Ghostscript distribution that will include GNU readline\n support will be GNU Ghostscript 6.0, currently scheduled for release\n in the third quarter of 2000. Before that time, we may return the\n copyright of Ghostscript's GNU readline interface module, which the\n original author assigned to Aladdin Enterprises, to the author, so\n that users of GNU Ghostscript will have have access to it. However,\n since it requires internal changes that are not and will not be\n available in any released GNU Ghostscript version before 6.0, any user\n who gets this code and links it with Aladdin Ghostscript 6.0 will,\n according to FSF, be violating the intent (though not the letter) of\n the GPL.\n \n We have, in fact, put considerable work into making it possible for\n Ghostscript to use GNU readline, including the creation and/or\n adjustment of internal software interfaces specifically to serve this\n purpose. 
In principle, we should have undone this work in Aladdin\n Ghostscript as well, lest FSF object to it too as intended to\n facilitate linking Aladdin Ghostscript with GNU readline (as the U.S.\n government has been said to do for code that merely provides APIs\n where encryption may be added). However, we are willing to take this\n risk rather than spend the time to undo the interface changes.\n \n If you have comments or questions about this situation, please feel\n free to contact the Free Software Foundation, authors of the GPL and\n copyright holders of GNU readline, at [email protected], and/or Aladdin\n Enterprises, author and copyright holder of Ghostscript, at\n [email protected].", "msg_date": "Tue, 19 Oct 1999 09:41:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Readline use in trouble?" }, { "msg_contents": "> Removal of readline would certainly affect psql users.\n\nafaik the Alladin product is not in the same licensing category as\nPostgres (there are restrictions that, for example, prohibit RedHat\nfrom distributing a recent version of gs with their package).\n\nNot to worry...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 19 Oct 1999 14:32:23 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": "Hello!\n\nOn Tue, 19 Oct 1999, Bruce Momjian wrote:\n> Here is something I read as part of the Alladin Ghostscript 6.0 beta\n> release. I must admit I don't understand the logic of the issue. It\n> seems the issue is that you can link non-GPL to GPL libraries, but you\n> can't distribute the result. Maybe it doesn't apply to us because we\n> don't copyright our code.\n> \n> It seems to suggest that we could be prevented from distributing\n> readline in the future. Not sure though.\n\n It is second or third time I see this, so I think I understand. This is\nthe way FSF protects GNU-licensed code - you can link with GNU code, but\nyou cannot distribute non-GNU code in binary form linked with GNU code.\n If you want to distribute non-GNU code in binary form only, either you\nmust NOT to link it with GNU code; or link it with GNU code and provide a\nway to user to relink to other versions of GNU code; or just publish your\nsources.\n The second way means - publish your *.o for all platforms. The way\nnumber 3 means \"give all users a way to compile and link it as they want,\nwith or without GNU code\". I think this applied to PostgreSQL - we have\nsource code published, so I do not expect problems with readline.\n Binary-only programs are in GNUtroubles, really. Somewhere on\nwww.gnu.org I saw a story about a company that made a program, linked it\nwith libreadline and distributed it in binary-only form. FSF contacted the\ncompany asked to remove libreadline. The company instead published the\nwhole sources. FSF considered it as a Big Win!\n BTW, readline is a special case here - it protected by GNU GPL, which is\nvery restrictive. Most free/opensource libs are protected with GNU LGPL,\nwhich is less restrictive. 
GNU readline is the way FSF forces people to\npublsih sources!\n\n Sorry, my English is far from perfect, if you do not understand my\nexplanations - we may raise a discussion here, and I'll try to find a\nbetter words...\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n\n", "msg_date": "Tue, 19 Oct 1999 14:33:43 +0000 (GMT)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": "Bruce Momjian wrote:\n\n> Here is something I read as part of the Alladin Ghostscript 6.0 beta\n> release. I must admit I don't understand the logic of the issue. It\n> seems the issue is that you can link non-GPL to GPL libraries, but you\n> can't distribute the result. Maybe it doesn't apply to us because we\n> don't copyright our code.\n>\n> It seems to suggest that we could be prevented from distributing\n> readline in the future. Not sure though.\n>\n> It sounds like the old US crypto restriction where you couldn't\n> distribute software that had hooks in it to add crypto.\n>\n> Removal of readline would certainly affect psql users.\n\n Now the time has come that the FSF has grown that big that\n they try to redefine the meaning of \"Free\". Next they claim\n \"Free\" is their trademark :-(\n\n I think readline isn't our biggest problem. What about if\n they notice that our parser can only be compiled when using\n bison, and that we ship the generated output for the case\n someone doesn't has bison installed?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 19 Oct 1999 16:35:22 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Here is something I read as part of the Alladin Ghostscript 6.0 beta\n> release. I must admit I don't understand the logic of the issue. It\n> seems the issue is that you can link non-GPL to GPL libraries, but you\n> can't distribute the result. Maybe it doesn't apply to us because we\n> don't copyright our code.\n\nHuh? 
We certainly do --- or have you missed that\n * Copyright (c) 1994, Regents of the University of California\nthat's plastered across all the source files?\n\nThe GPL does restrict the conditions under which GPL'd code can be\ndistributed; in particular it can't be distributed as part of a program\nthat is not all GPL'd (more or less --- I have not read the terms lately).\nSo, because we use BSD license rather than GNU, we cannot *include in\nour distribution* any library that is under GPL.\n\nAny end user who does not intend to redistribute the result can\ncertainly obtain our distribution and readline and build them together.\nSo it's no issue for source distributions, but I wonder about RPMs.\nOur RPMs do not include the actual libreadline file, do they?\n\n> Even though the GNU License (GPL) allows linking GPL'ed code (such as\n> the GNU readline library package) with non-GPL'ed code (such as all\n> the rest of Ghostscript) if one doesn't distribute the result, the\n> Free Software Foundation, creators of the GPL, have told us that in\n> their opinion, the GPL forbids distributing non-GPL'ed code that is\n> merely intended to be linked with GPL'ed code.\n\nAs stated, this is ridiculous on its face. The FSF has no possible\nright to prevent the distribution of software that they didn't write\nand that doesn't fall under the GPL.\n\nAlthough I haven't been paying close attention to the Ghostscript\nsituation, I suspect that the real story is either that the readline\ninterface code that someone contributed to Ghostscript was contributed\nwith GPL terms already attached to it, or that Aladdin is concerned\nabout being able to distribute full-featured precompiled binaries of\nGhostscript. (BTW, Peter Deutsch has a history of forcing the issue\nwhen he thinks that someone else is being unreasonable, and I suspect\nthat he's deliberately overreacting in hopes of making FSF change\ntheir position.)\n\nAnyway, this sort of thing is why it's a bad idea to accept any GPL'd\ncode into Postgres --- the GPL does not play nice with other licenses.\nI think the FSF is not doing the free software movement any service\nwith this foolishness, but they're entitled to distribute their code\nwith any terms they want, of course.\n\nMy inclination is to ignore the issue until and unless we hear a\ncomplaint from the libreadline authors --- and if we do, we yank all\ntrace of readline support from psql. End of story.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Oct 1999 10:44:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble? " }, { "msg_contents": "> I think readline isn't our biggest problem. What about if\n> they notice that our parser can only be compiled when using\n> bison, and that we ship the generated output for the case\n> someone doesn't has bison installed?\n\nafaik this is explicitly covered as \"conforming behavior\" in the GNU\nlicense for bison. It was not always so, but the license for bison was\nrecently updated to allow distributing generated code.\n\nI should point out that rms himself is on speaking terms with us; he\nrecently referred someone here to ask about Postgres vis a vis Oracle\ncompatibility. I'm pretty sure we are one of \"the good guys\" in Open\nSource. 
;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 19 Oct 1999 14:55:46 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": "On Tue, 19 Oct 1999, Jan Wieck wrote:\n> I think readline isn't our biggest problem. What about if\n> they notice that our parser can only be compiled when using\n> bison, and that we ship the generated output for the case\n> someone doesn't has bison installed?\n\n Until they make a significant change in their license we don't need to\nworry. GPL specifically states that the RESULTS of GNU-protected programs\nare not covered at all. These results can be used in any way you want,\nincluding commercial ways. Only program's code matter.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Tue, 19 Oct 1999 14:56:42 +0000 (GMT)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> I think readline isn't our biggest problem. What about if\n> they notice that our parser can only be compiled when using\n> bison, and that we ship the generated output for the case\n> someone doesn't has bison installed?\n\nNon problem. From the Bison manual:\n\nFile: bison.info, Node: Conditions, Next: Copying, Prev: Introduction, Up: Top\n\nConditions for Using Bison\n**************************\n\n As of Bison version 1.24, we have changed the distribution terms for\n`yyparse' to permit using Bison's output in non-free programs.\nFormerly, Bison parsers could be used only in programs that were free\nsoftware.\n\n The other GNU programming tools, such as the GNU C compiler, have\nnever had such a requirement. They could always be used for non-free\nsoftware. The reason Bison was different was not due to a special\npolicy decision; it resulted from applying the usual General Public\nLicense to all of the Bison source code.\n\n The output of the Bison utility--the Bison parser file--contains a\nverbatim copy of a sizable piece of Bison, which is the code for the\n`yyparse' function. (The actions from your grammar are inserted into\nthis function at one point, but the rest of the function is not\nchanged.) When we applied the GPL terms to the code for `yyparse', the\neffect was to restrict the use of Bison output to free software.\n\n We didn't change the terms because of sympathy for people who want to\nmake software proprietary. *Software should be free.* But we\nconcluded that limiting Bison's use to free software was doing little to\nencourage people to make other software free. So we decided to make the\npractical conditions for using Bison match the practical conditions for\nusing the other GNU tools.\n\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Oct 1999 11:11:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble? 
" }, { "msg_contents": "Tom Lane wrote:\n> So it's no issue for source distributions, but I wonder about RPMs.\n> Our RPMs do not include the actual libreadline file, do they?\n\nNo.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 19 Oct 1999 11:22:44 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": ">From GPL, section 0:\n\n\"Activities other than copying, distribution and modification are not\ncovered by this License; they are outside its scope.\" \n\nWe are not copying, distributing, or modifying readline.\n\n\"The act of running the Program is not restricted, ...\"\n\nWriting\nchar * foo = readline(\"\");\nis the same as writing\nint bar = system(\"/bin/gzip\");\njust that they chose to create their product in a way that you have to use\nthe former method rather than the latter.\n\n\"... and the output from the Program is covered only if its contents\nconstitute a work based on the Program\" \n\nI always thunk that the output of readline is a work based on the user.\n\nAnyway, when the BSD folks get their libedit act together, we can easily\nreplace one with the other. That was one of the requests by the folks out\nthere, and I got it done.\n\n\t-Peter\n\n\n\nOn Tue, 19 Oct 1999, Bruce Momjian wrote:\n\n> Here is something I read as part of the Alladin Ghostscript 6.0 beta\n> release. I must admit I don't understand the logic of the issue. It\n> seems the issue is that you can link non-GPL to GPL libraries, but you\n> can't distribute the result. Maybe it doesn't apply to us because we\n> don't copyright our code.\n> \n> It seems to suggest that we could be prevented from distributing\n> readline in the future. Not sure though.\n> \n> It sounds like the old US crypto restriction where you couldn't\n> distribute software that had hooks in it to add crypto.\n> \n> Removal of readline would certainly affect psql users.\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 19 Oct 1999 18:24:24 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": "On Tue, 19 Oct 1999, Tom Lane wrote:\n\n> Huh? We certainly do --- or have you missed that\n> * Copyright (c) 1994, Regents of the University of California\n> that's plastered across all the source files?\n\nRegarding which I have a question: at other locations I see (c) 1994-7\nUniv. of California, or even (c) 1996-9 PostgreSQL Global Development\nTeam.\n\nI am not an expert in any of this, but I'm just wondering: when did the\ninvolvement of the U of C end, when was the Global Development Team (tm)\nformed and do both copyrights exits in parallel? What if someone\ncontributes something really major and fairly independent (say like\npg_access) and wants to keep his own copyright (with compatible license of\ncourse)?\n\nAnd is the PostgreSQL Global Development Team any real entity that could\ntheoretically enforce that copyright or is it just an alias for \"whoever\ncontributed\"?\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 19 Oct 1999 18:34:32 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble? 
" }, { "msg_contents": "> > Removal of readline would certainly affect psql users.\n> \n> Now the time has come that the FSF has grown that big that\n> they try to redefine the meaning of \"Free\". Next they claim\n> \"Free\" is their trademark :-(\n> \n> I think readline isn't our biggest problem. What about if\n> they notice that our parser can only be compiled when using\n> bison, and that we ship the generated output for the case\n> someone doesn't has bison installed?\n> \n\nI always thought that was OK because we distribute full source code,\nright?\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 19 Oct 1999 17:32:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Here is something I read as part of the Alladin Ghostscript 6.0 beta\n> > release. I must admit I don't understand the logic of the issue. It\n> > seems the issue is that you can link non-GPL to GPL libraries, but you\n> > can't distribute the result. Maybe it doesn't apply to us because we\n> > don't copyright our code.\n> \n> Huh? We certainly do --- or have you missed that\n> * Copyright (c) 1994, Regents of the University of California\n> that's plastered across all the source files?\n\nOh. I remember that now. :-)\n> \n> The GPL does restrict the conditions under which GPL'd code can be\n> distributed; in particular it can't be distributed as part of a program\n> that is not all GPL'd (more or less --- I have not read the terms lately).\n> So, because we use BSD license rather than GNU, we cannot *include in\n> our distribution* any library that is under GPL.\n\nBut Alladin wasn't doing that either. They were just distributing\nsource code that could use readline, like we do.\n\n> \n> Any end user who does not intend to redistribute the result can\n> certainly obtain our distribution and readline and build them together.\n> So it's no issue for source distributions, but I wonder about RPMs.\n> Our RPMs do not include the actual libreadline file, do they?\n\nI think we dynamically load libreadline, which is OK, maybe.\n\n> \n> > Even though the GNU License (GPL) allows linking GPL'ed code (such as\n> > the GNU readline library package) with non-GPL'ed code (such as all\n> > the rest of Ghostscript) if one doesn't distribute the result, the\n> > Free Software Foundation, creators of the GPL, have told us that in\n> > their opinion, the GPL forbids distributing non-GPL'ed code that is\n> > merely intended to be linked with GPL'ed code.\n> \n> As stated, this is ridiculous on its face. The FSF has no possible\n> right to prevent the distribution of software that they didn't write\n> and that doesn't fall under the GPL.\n\nTotally true, as far I an can figure. The US government stupidly tries\nto do this under an existing export law. Don't know of any FSF laws.\n\n> Although I haven't been paying close attention to the Ghostscript\n> situation, I suspect that the real story is either that the readline\n> interface code that someone contributed to Ghostscript was contributed\n> with GPL terms already attached to it, or that Aladdin is concerned\n\nOh, that is an interesting issue that I never considered. 
Reminds us we\ncan't use GPL code.\n\n> about being able to distribute full-featured precompiled binaries of\n> Ghostscript. (BTW, Peter Deutsch has a history of forcing the issue\n> when he thinks that someone else is being unreasonable, and I suspect\n> that he's deliberately overreacting in hopes of making FSF change\n> their position.)\n\nGood for him.\n\n> My inclination is to ignore the issue until and unless we hear a\n> complaint from the libreadline authors --- and if we do, we yank all\n> trace of readline support from psql. End of story.\n\nAgreed.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 19 Oct 1999 17:38:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": "> > I think readline isn't our biggest problem. What about if\n> > they notice that our parser can only be compiled when using\n> > bison, and that we ship the generated output for the case\n> > someone doesn't has bison installed?\n> \n> afaik this is explicitly covered as \"conforming behavior\" in the GNU\n> license for bison. It was not always so, but the license for bison was\n> recently updated to allow distributing generated code.\n> \n> I should point out that rms himself is on speaking terms with us; he\n> recently referred someone here to ask about Postgres vis a vis Oracle\n> compatibility. I'm pretty sure we are one of \"the good guys\" in Open\n> Source. ;)\n\nWait until you read my preface. It makes us sound like heros. Maybe we\nare.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 19 Oct 1999 17:39:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": "\nYes, I head hear this. They basically said, \"Our leverage isn't working.\"\n\n\n> As of Bison version 1.24, we have changed the distribution terms for\n> `yyparse' to permit using Bison's output in non-free programs.\n> Formerly, Bison parsers could be used only in programs that were free\n> software.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 19 Oct 1999 17:42:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": "> On Tue, 19 Oct 1999, Tom Lane wrote:\n> \n> > Huh? We certainly do --- or have you missed that\n> > * Copyright (c) 1994, Regents of the University of California\n> > that's plastered across all the source files?\n> \n> Regarding which I have a question: at other locations I see (c) 1994-7\n> Univ. of California, or even (c) 1996-9 PostgreSQL Global Development\n> Team.\n> \n> I am not an expert in any of this, but I'm just wondering: when did the\n> involvement of the U of C end, when was the Global Development Team (tm)\n> formed and do both copyrights exits in parallel? 
What if someone\n> contributes something really major and fairly independent (say like\n> pg_access) and wants to keep his own copyright (with compatible license of\n> course)?\n> \n> And is the PostgreSQL Global Development Team any real entity that could\n> theoretically enforce that copyright or is it just an alias for \"whoever\n> contributed\"?\n\nNow there's a good question. How long does the BSD imprint remain. I\nassume forever. It is still on BSD/OS files. Only the ones they right\nfrom scrath get a BSDI imprint.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 19 Oct 1999 17:45:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": "> > > And is the PostgreSQL Global Development Team any real entity that could\n> > > theoretically enforce that copyright or is it just an alias for \"whoever\n> > > contributed\"?\n> > \n> > Now there's a good question. How long does the BSD imprint remain. I\n> > assume forever. It is still on BSD/OS files. Only the ones they right\n> > from scrath get a BSDI imprint.\n> \n> So, should we be extending the Date of the BSD license? Like, is there no\n> copyright *after* 1997? Or, can we do something like:\n> \n> * Copyright (c) 1997-9\n> * PostgreSQL Global Development Team\n> * Copyright (c) 1994-7\n> * The Regents of the University of California. All rights reserved.\n\nBeats me. BSDI stdio.h has:\n\n/*-\n * Copyright (c) 1990, 1993\n * The Regents of the University of California. All rights reserved.\n *\n * This code is derived from software contributed to Berkeley by\n * Chris Torek.\n \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 19 Oct 1999 20:35:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": "On Tue, 19 Oct 1999, Bruce Momjian wrote:\n\n> > On Tue, 19 Oct 1999, Tom Lane wrote:\n> > \n> > > Huh? We certainly do --- or have you missed that\n> > > * Copyright (c) 1994, Regents of the University of California\n> > > that's plastered across all the source files?\n> > \n> > Regarding which I have a question: at other locations I see (c) 1994-7\n> > Univ. of California, or even (c) 1996-9 PostgreSQL Global Development\n> > Team.\n> > \n> > I am not an expert in any of this, but I'm just wondering: when did the\n> > involvement of the U of C end, when was the Global Development Team (tm)\n> > formed and do both copyrights exits in parallel? What if someone\n> > contributes something really major and fairly independent (say like\n> > pg_access) and wants to keep his own copyright (with compatible license of\n> > course)?\n> > \n> > And is the PostgreSQL Global Development Team any real entity that could\n> > theoretically enforce that copyright or is it just an alias for \"whoever\n> > contributed\"?\n> \n> Now there's a good question. How long does the BSD imprint remain. I\n> assume forever. It is still on BSD/OS files. Only the ones they right\n> from scrath get a BSDI imprint.\n\nSo, should we be extending the Date of the BSD license? 
Like, is there no\ncopyright *after* 1997? Or, can we do something like:\n\n * Copyright (c) 1997-9\n * PostgreSQL Global Development Team\n * Copyright (c) 1994-7\n * The Regents of the University of California. All rights reserved.\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 19 Oct 1999 21:35:56 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": "> > > > Huh? We certainly do --- or have you missed that\n> > > > * Copyright (c) 1994, Regents of the University of California\n> > > > that's plastered across all the source files?\n> > > Regarding which I have a question: at other locations I see (c) 1994-7\n> > > Univ. of California, or even (c) 1996-9 PostgreSQL Global Development\n> > > Team.\n> > > I am not an expert in any of this, but I'm just wondering: when did the\n> > > involvement of the U of C end, when was the Global Development Team (tm)\n> > > formed and do both copyrights exits in parallel? What if someone\n> > > contributes something really major and fairly independent (say like\n> > > pg_access) and wants to keep his own copyright (with compatible license of\n> > > course)?\n\nI'm the one who started slapping new copyright notices around (in the\ndocs, mostly). imho UCB's involvement ended when they released source\ncode, with copyright provisions designed to retain acknowledgement of\ntheir work while releasing them from any liability resulting from any\nand all uses of the code.\n\nWe certainly *can't* simply extend UCB's copyright dates, since they\nare not involved in that process. imo we *should* apply a new\ncopyright which enforces UCB's original provisions, which are designed\nto keep the code in play while deflecting liability.\n\n> > > And is the PostgreSQL Global Development Team any real entity that could\n> > > theoretically enforce that copyright or is it just an alias for \"whoever\n> > > contributed\"?\n> > Now there's a good question. How long does the BSD imprint remain. I\n> > assume forever. It is still on BSD/OS files. Only the ones they right\n> > from scrath get a BSDI imprint.\n\nWe'll decide if PostgreSQL Global Development Team is a real entity\nfor copyright purposes when we need to ;) Really, the BSD-style\nlicense has *no* restrictions, so what are we going to enforce? But we\n*should* put some mention of things in the code to avoid cases like\nthe idiot American yahoo who tried to lay claim to \"linux\".\n\n> So, should we be extending the Date of the BSD license? Like, is there no\n> copyright *after* 1997? Or, can we do something like:\n> * Copyright (c) 1997-9\n> * PostgreSQL Global Development Team\n> * Copyright (c) 1994-7\n> * The Regents of the University of California. All rights reserved.\n\nI'm not a lawyer, and I don't even play one on TV. But y'all can't\nextend a UCB copyright arbitrarily. We're a work derived from the\noriginal Berkeley distribution, and we are complying with the UCB\ncopyright in all respects afaik. What happens after that is up to us,\nnot UCB...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 20 Oct 1999 05:27:29 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" 
}, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Regarding which I have a question: at other locations I see (c) 1994-7\n> Univ. of California, or even (c) 1996-9 PostgreSQL Global Development\n> Team.\n\n> I am not an expert in any of this, but I'm just wondering: when did the\n> involvement of the U of C end, when was the Global Development Team (tm)\n> formed and do both copyrights exits in parallel?\n\nJudging from the historical messages recently posted, Berkeley had\ncontrol of the code up to about '96. I suppose '94 was the last major\nrelease made from Berkeley. (Another Berkeley project that I've been\ninvolved with, Ptolemy, has always made a point of updating its\ncopyright boilerplate to current year just before each major release.\nPerhaps the Postgres guys were less punctilious about copyright dates,\nbut anyway it's clearly been several years since Berkeley was in\ncharge.)\n\nIf we were really doing this with full legal care, we'd probably have\nsomething like this in every source file:\n\n * Copyright (c) 1986-1994\n * The Regents of the University of California. All rights reserved.\n * Copyright (c) 1996-1999\n * PostgreSQL Global Development Team\n\n(or whatever the exact date ranges should be). The Berkeley copyright\nwill never lapse as long as there is visible Berkeley heritage in the\ncode, but the Postgres group can also claim copyright on the\nmodifications and additions we've made. As long as we are happy with\ndistributing our work under the BSD license terms, there's no conflict.\n\nNow a lawyer would immediately point out that the \"PostgreSQL Global\nDevelopment Team\" is not a legally existent entity and so has no ability\nto sue anyone for copyright violation. If we thought we might have to\nenforce our wishes legally, we'd need to form an actual corporation.\n(Perhaps the core team has already quietly done that, but I sure don't\nknow about it...)\n\n> What if someone contributes something really major and fairly\n> independent (say like pg_access) and wants to keep his own copyright\n> (with compatible license of course)?\n\nI've noticed that Jan and a couple of other people have put copyright\nnotices in their own names on files that they've created from scratch,\nbut I feel uncomfortable with that practice. The Ghostscript/readline\nfiasco illustrates the potential problems you can get into with\ndivergent copyrights on chunks of code that need to be distributed\ntogether. My personal feeling is that if you're a member of the team,\nstick the team copyright on it; don't open a can of legal worms.\n\n(If we were building in a green field it might be profitable to debate\nwhat that team copyright should be --- but unless we want to start from\nscratch, BSD is it, for better or worse.)\n\nDisclaimer: I'm not a lawyer, I don't play one on TV, yadda yadda...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Oct 1999 01:47:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble? " }, { "msg_contents": ">>>>> \"TL\" == Tom Lane <[email protected]> writes:\n\n TL> The GPL does restrict the conditions under which GPL'd code can\n TL> be distributed; in particular it can't be distributed as part of\n TL> a program that is not all GPL'd (more or less --- I have not\n TL> read the terms lately). So, because we use BSD license rather\n TL> than GNU, we cannot *include in our distribution* any library\n TL> that is under GPL.\n\n[All IMHO, I'm not a lawyer etc. 
too.]\n\nI think that from the point of GPL there is basically no problem with\nPostgreSQL license, since it contains no restriction incompatible with\nGPL.\n\nThe situation with Aladdin Ghostscript is quite different, it is under\nnon-free license, its license is in conflict with GPL and so it clearly\ncan't use GPLed code.\n\nHowever, including GPLed code into PostgreSQL, though I think it's fully\nlegal, means that third party can't take the PostgreSQL as a whole and\ndistribute it under license violating GPL, e.g. as a proprietary product\nwithout available sources. If it is important for you to support *more*\nrestrictive licensing than GPL, then you should avoid inclusion of GPLed\ncode into PostgreSQL.\n\nMilan Zamazal\n", "msg_date": "20 Oct 1999 09:47:17 +0200", "msg_from": "Milan Zamazal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": ">>>>> \"TL\" == Tom Lane <[email protected]> writes:\n\n TL> The GPL does restrict the conditions under which GPL'd code can\n TL> be distributed; in particular it can't be distributed as part of\n TL> a program that is not all GPL'd (more or less --- I have not\n TL> read the terms lately). So, because we use BSD license rather\n TL> than GNU, we cannot *include in our distribution* any library\n TL> that is under GPL.\n\n[All IMHO, I'm not a lawyer etc. too.]\n\nI think that from the point of GPL there is basically no problem with\nPostgreSQL license, since it contains no restriction incompatible with\nGPL.\n\nThe situation with Aladdin Ghostscript is quite different, it is under\nnon-free license, its license is in conflict with GPL and so it clearly\ncan't use GPLed code.\n\nHowever, including GPLed code into PostgreSQL, though I think it's fully\nlegal, means that third party can't take the PostgreSQL as a whole and\ndistribute it under license violating GPL, e.g. as a proprietary product\nwithout available sources. If it is important for you to support *more*\nrestrictive licensing than GPL, then you should avoid inclusion of GPLed\ncode into PostgreSQL.\n\nMilan Zamazal\n", "msg_date": "20 Oct 1999 10:03:17 +0200", "msg_from": "Milan Zamazal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": "On Wed, 20 Oct 1999, Tom Lane wrote:\n\n> If we were really doing this with full legal care, we'd probably have\n> something like this in every source file:\n> \n> * Copyright (c) 1986-1994\n> * The Regents of the University of California. All rights reserved.\n> * Copyright (c) 1996-1999\n> * PostgreSQL Global Development Team\n\nThat's what I thought. Perhaps one should also add to the actual copyright\nnotice, that, besides that U of C, no member of the PostgreSQL Global\nDevelopment Team will assume any liability for nuttin', etc.\n\n> Now a lawyer would immediately point out that the \"PostgreSQL Global\n> Development Team\" is not a legally existent entity and so has no ability\n> to sue anyone for copyright violation. If we thought we might have to\n> enforce our wishes legally, we'd need to form an actual corporation.\n> (Perhaps the core team has already quietly done that, but I sure don't\n> know about it...)\n\nWhat about Pgsql, Inc.? Perhaps they should trademark the product name and\nact as the legal guardian. Isn't that sort of what the Apache Software\nFoundation does? \n\nOf course, I don't recall the project being in legal trouble lately, but\nwho knows how fast it can happen. 
The FSF could get anal, or [commercial\ndb vendor] files senseless claims, or [joe idiot] trademarks \"PostgreSQL\",\netc. Once you're in the spotlight, the weirdos come out. And we want to be\nin the spotlight, don't we?\n\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Wed, 20 Oct 1999 12:26:30 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble? " }, { "msg_contents": "On Wed, 20 Oct 1999, Thomas Lockhart wrote:\n\n> We'll decide if PostgreSQL Global Development Team is a real entity\n> for copyright purposes when we need to ;) Really, the BSD-style\n\nAt least the Global Development Team was entity enough to register the\ndomain postgresql.org:\n\nRegistrant:\nPostgreSQL Global Development Group (POSTGRESQL-DOM)\n 203-112 Highland Ave\n Wolfville, Nova Scotia B0P 1X0\n CA\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 20 Oct 1999 13:10:38 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": "On Wed, 20 Oct 1999, Tom Lane wrote:\n\n> Now a lawyer would immediately point out that the \"PostgreSQL Global\n> Development Team\" is not a legally existent entity and so has no ability\n> to sue anyone for copyright violation. If we thought we might have to\n> enforce our wishes legally, we'd need to form an actual corporation.\n> (Perhaps the core team has already quietly done that, but I sure don't\n> know about it...)\n\nI don't know how things work in Canada - where Marc is - but in the US\na simple DBA would be all that's necessary for an entity to copyright\nin their own name. Many clubs and user groups do this regularly. A \nform that's filled out every five years is all that's needed to maintain\nthe business name. (DBA = Doing Business As)\n\n> Disclaimer: I'm not a lawyer, I don't play one on TV, yadda yadda...\n\nSame here, but I have filed and received a copyright for one of the \nabove unmentioned clubs when I was a member.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 20 Oct 1999 07:15:30 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble? " }, { "msg_contents": "> > something like this in every source file:\n> > * Copyright (c) 1986-1994\n> > * The Regents of the University of California. All rights reserved.\n> > * Copyright (c) 1996-1999\n> > * PostgreSQL Global Development Team\n> That's what I thought. 
Perhaps one should also add to the actual copyright\n> notice, that, besides that U of C, no member of the PostgreSQL Global\n> Development Team will assume any liability for nuttin', etc.\n\nHere is what we already have in the docs:\n\n(from http://www.postgresql.org/docs/postgres/copyright.htm)\n\nCopyrights and Trademarks\n\nPostgreSQL is Copyright � 1996-9 by the PostgreSQL Global Development\nGroup, and is distributed under the terms of the Berkeley license. \n\nPostgres95 is Copyright � 1994-5 by the Regents of the University of\nCalifornia. Permission to use, copy, modify, and distribute this\nsoftware and its documentation for any purpose, without fee, and\nwithout a written agreement is hereby granted, provided that the above\ncopyright notice and this paragraph and the following two paragraphs\nappear in all copies. \n\nIn no event shall the University of California be liable to any party\nfor direct, indirect, special, incidental, or consequential damages,\nincluding lost profits, arising out of the use of this software and\nits documentation, even if the University of California has been\nadvised of the possibility of such damage. \n\nThe University of California specifically disclaims any warranties,\nincluding, but not limited to, the implied warranties of\nmerchantability and fitness for a particular purpose. The software\nprovided hereunder is on an \"as-is\" basis, and the University of\nCalifornia has no obligations to provide maintainance, support,\nupdates, enhancements, or modifications. \n\n\nI wrote the minimalist \"me too\" copyright notice as the first\nparagraph above to avoid making claims or statements that the group\nwould find a problem. But istm that our copyright notice should\nprobably be a bit more wordy, perhaps mimicing the whole UCB copyright\nstatement. Comments?\n\n> > Now a lawyer would immediately point out that the \"PostgreSQL Global\n> > Development Team\" is not a legally existent entity and so has no ability\n> > to sue anyone for copyright violation. If we thought we might have to\n> > enforce our wishes legally, we'd need to form an actual corporation.\n> > (Perhaps the core team has already quietly done that, but I sure don't\n> > know about it...)\n\nistm that we \"operate\" in more countries than most any real company,\nand it would be prohibitive to preemptively file for legal status\neverywhere it might matter. The copyright is intended to keep the code\navailable *and* to deflect liability; we only need to invoke it if\nsomeone comes after us (maybe we'll all end up living with Marc on\nsome island in Canada, hiding from the US lawyers :) Perhaps it is\nbetter to do The Right Thing developing software with an appropriate\ncopyright and leave the rest...\n\nbtw, Marc has already run into domain name pirates/speculators, who\nsnagged postgresql.com. They would be happy to sell the name back to\nus :(((\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 20 Oct 1999 13:45:48 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": "\n\nMilan Zamazal wrote:\n\n> >>>>> \"TL\" == Tom Lane <[email protected]> writes:\n>\n> TL> The GPL does restrict the conditions under which GPL'd code can\n> TL> be distributed; in particular it can't be distributed as part of\n> TL> a program that is not all GPL'd (more or less --- I have not\n> TL> read the terms lately). 
So, because we use BSD license rather\n> TL> than GNU, we cannot *include in our distribution* any library\n> TL> that is under GPL.\n>\n> [All IMHO, I'm not a lawyer etc. too.]\n>\n> I think that from the point of GPL there is basically no problem with\n> PostgreSQL license, since it contains no restriction incompatible with\n> GPL.\n>\n> The situation with Aladdin Ghostscript is quite different, it is under\n> non-free license, its license is in conflict with GPL and so it clearly\n> can't use GPLed code.\n>\n> However, including GPLed code into PostgreSQL, though I think it's fully\n> legal, means that third party can't take the PostgreSQL as a whole and\n> distribute it under license violating GPL, e.g. as a proprietary product\n> without available sources. If it is important for you to support *more*\n> restrictive licensing than GPL, then you should avoid inclusion of GPLed\n> code into PostgreSQL.\n\nPlease clear this up one way or another.\nI was looking to include PostgreSQL into my companies proprietary software\npackage.\nI was going to make all of the arrangements to follow the PostreSQL\nCopyright.\nI can not include source code of my companies software, since it is to\ncopyrighted to my company.\n\nPlease list the steps i would need to comply with PostgreSQL's Copyright.\n(I was under the impression that the Copyright from the Regents of U of CA\nwas it)\nList any modules that is would need to include/execlude to avoid voilating\nother Copyrights used by PostgreSQL.\n\nYour scaring me guys.\n\nP.S.\n\nYou guys have a great product from what i have seen so far. I am realy\nlooking forward to release 7.\n\nJohn Ingram\nSenior Software Engineer\n\n>\n>\n> Milan Zamazal\n\n\n\n", "msg_date": "Wed, 20 Oct 1999 09:59:23 -0400", "msg_from": "Intrac Systems Inc <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": "Milan Zamazal <[email protected]> writes:\n>>>>>> \"TL\" == Tom Lane <[email protected]> writes:\nTL> The GPL does restrict the conditions under which GPL'd code can\nTL> be distributed; in particular it can't be distributed as part of\nTL> a program that is not all GPL'd (more or less --- I have not\nTL> read the terms lately). So, because we use BSD license rather\nTL> than GNU, we cannot *include in our distribution* any library\nTL> that is under GPL.\n\n> I think that from the point of GPL there is basically no problem with\n> PostgreSQL license, since it contains no restriction incompatible with\n> GPL.\n\nActually it's the other way around: BSD-type license doesn't care about\nGPL'd stuff in the same distribution ... but GPL license does. The GPL\ninsists that all its terms, including its restrictions, apply exactly\nto the whole of any program containing any GPL'd code. So we'd be\nviolating the GPL if we had parts of Postgres under GPL and parts under\nBSD, because BSD is *less* restrictive than GPL (it puts fewer\nrequirements on a recipient of the code than GPL does). And we can't\njust arbitrarily change the Berkeley-derived code from BSD to GPL.\n\nIn practice this is probably all just nit-picking; the Postgres group\nitself isn't doing anything with Postgres that doesn't fall within the\nterms of the GPL. But from a legalistic point of view the two licenses\nare not compatible.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Oct 1999 10:48:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble? 
" }, { "msg_contents": "On Tue, 19 Oct 1999, Bruce Momjian wrote:\n> Removal of readline would certainly affect psql users.\n> \n> The actual file is gs5.94/doc/Make.htm.1\n\nOne could always switch to libedit, which is BSD licensed. Its not yet\nported to as many platforms as readline though...\n\n-- \n| Matthew N. Dodd | '78 Datsun 280Z | '75 Volvo 164E | FreeBSD/NetBSD |\n| [email protected] | 2 x '84 Volvo 245DL | ix86,sparc,pmax |\n| http://www.jurai.net/~winter | This Space For Rent | ISO8802.5 4ever |\n\n", "msg_date": "Fri, 22 Oct 1999 03:12:11 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" } ]
[ { "msg_contents": "And, of course, in the next release (or in current),\nyou'll be able to do a:\n\nTRUNCATE TABLE frequentlyusedtable;\nINSERT INTO frequentlyusedtable SELECT...;\n\nand not have to worry about ever-growing indexes,\ngrants, etc.\n\n;-)\n\nMike Mascari\n([email protected])\n\n--- Tom Lane <[email protected]> wrote:\n> Hi Andrew,\n> \n> > 1)\tDoing a pg_dump and psql -f on a database I get\n> lots of errors saying\n> > \"query buffer max length of 16384 exceeded\" and\n> then (eventually) I get\n> > a segmentation fault. The load lines don't seem\n> to be that large (the\n> > full insert statement, including error, is maybe\n> 220 bytes. It seems\n> > that if I split the dumped file into 40-line\n> chunks and do a vacuum\n> > after each one, I can get the whole thing to load\n> without the errors.\n> \n> I think there must be some specific peculiarity in\n> your data that's\n> causing this; certainly lots of people rely on\n> pg_dump for backup\n> without problems. Can you provide a sample script\n> that triggers the\n> problem?\n> \n> > Further investigation reveals that if I do a\n> VACUUM immediately after\n> > the DROP TABLE that things are OK, but otherwise\n> the pg_attribute* files\n> > in the database directory just get bigger and\n> bigger. This is even the\n> > case when I do a VACUUM after every second 'DROP\n> TABLE' - for the space\n> > to be recovered, I have to VACUUM immediately\n> after a DROP TABLE, which\n> > doesn't seem right somehow.\n> \n> That does seem odd. If you just create and drop\n> tables like mad then\n> I'd expect pg_class, pg_attribute, etc to grow ---\n> the rows in them\n> that describe your dropped tables don't get recycled\n> until you vacuum.\n> But vacuum should reclaim the space.\n> \n> Actually, wait a minute. Is it pg_attribute itself\n> that fails to shrink\n> after vacuum, or is it the indexes on pg_attribute? \n> IIRC we have a known\n> problem with vacuum failing to reclaim space in\n> indexes. There is a\n> patch available that improves the behavior for\n> 6.5.*, and I believe that\n> improving it further is on the TODO list for 7.0.\n> \n> I think you can find that patch in the patch mailing\n> list archives at\n> www.postgresql.org, or it may already be in 6.5.2\n> (or failing that,\n> in the upcoming 6.5.3). [Anyone know for sure?]\n> \n> For user tables it's possible to work around the\n> problem by dropping and\n> rebuilding indexes every so often, but DO NOT try\n> that on pg_attribute.\n> As a stopgap solution you might consider not\n> dropping and recreating\n> your temp table; leave it around and just delete all\n> the rows in it\n> between uses.\n> \n> \t\t\tregards, tom lane\n\n\n=====\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n", "msg_date": "Tue, 19 Oct 1999 12:18:35 -0700 (PDT)", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [BUGS] Postgres problems with 6.4 / 6.5 (fwd) " } ]
[ { "msg_contents": "OK, yesterday I signed contracts from Addison, Wesley, Longman for a\nbook on PostgreSQL. I got approval from Marc, Thomas, Vadim, and Tom\nLane. I also sent them a resume which was approved by the core group.\n\nI have redesigned the outline because some of the chapters were too\nlong, and have written about 30 pages so far. With the missing\nchapters, it comes to 64 pages. I am waiting for some text from the\npublisher, just stating that the book is being written under contract\nfor them, and I will make the book available in PDF and HTML formats on\nthe web as soon as possible.\n\nI plan to add the PostgreSQL manual pages as an appendix to the book.\n\nYes, I am being payed for the book, at some future date. I realize\nother developers are using PostgreSQL in their work or in consulting,\nbut this is a much more public involvement. All I can say is that I\nhave always been ready to throw money into the PostgreSQL project, and\nwill continue to do so when possible.\n\nCan't wait for everyone to see it. Shouldn't be too long now. I will\npost when it is ready.\n\n---------------------------------------------------------------------------\n\nBTW, where do people want the PDF and HTML files? My idea is to update\nthem every night.\n\nFYI, I just cleaned up my PDF version. Seems that ps2pdf requires me to\nuse Latin1 font encoding, or all my fonts in the PDF file are bitmapped\nfonts and not spline/curved fonts. This is mentioned in the ps2pdf\nmanual pages. I just never realized the LyX's default encoding is ASCII\nand not Latin1. \n\nBitmapped fonts don't matter if you print out the PDF, but if you view\nit on the screen, Adobe Acrobat Reader showed the text looking terrible.\nGhostview was much better at rendering the bitmapped fonts, but I am\nsure there are many people who want to view the PDF file with Acrobat,\nso I have that fixed. xpdf couldn't render the bitmapped fonts either. \nI have switched to Palatino, and upgraded to Alladin Ghostscript\n5.94beta, so the PDF problems people reported with the previous PDF\nshould be fixed now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 19 Oct 1999 18:20:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "book status" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Yes, I am being payed for the book, at some future date. I realize\n> other developers are using PostgreSQL in their work or in consulting,\n> but this is a much more public involvement. All I can say is that I\n> have always been ready to throw money into the PostgreSQL project, and\n> will continue to do so when possible.\n\nSince you'll be doing the bulk of the work that is specific to the book,\nI can't see any reason to object to you being the one getting paid for\nit. Many (most?) of us are using Postgres for work purposes, or\notherwise deriving some kind of personal/corporate/proprietary benefit\nfrom it. A book project based on Postgres seems no different to me\nfrom the money my company hopes to make from running a Postgres-based\napplication. 
Indeed, this book project is considerably more likely\nto return tangible benefit to the Postgres group (in the form of new\nusers/contributors attracted to the project) than most other ways\npeople might be using Postgres to make money.\n\nIn short, you needn't offer the slightest apology for collecting the\nbook royalties personally. From what I've heard of the book-writing\nbiz, you're unlikely to get rich off it anyway :-(\n\nSince I didn't see any howls of outrage on the mailing list, I imagine\neveryone else thinks the same anyway, but if you need some reassurance\nthese are my two cents.\n\n> BTW, where do people want the PDF and HTML files? My idea is to update\n> them every night.\n\nStick 'em on the website/ftpsite somewhere under the docs page. I'd\nprobably gripe if I found them getting pulled as part of the regular CVS\nsource module --- those updates are slow enough already --- but if you\nwant to keep them in CVS I suppose a separate CVS module could be set up.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Oct 1999 01:11:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] book status " }, { "msg_contents": "On Wed, 20 Oct 1999, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > Yes, I am being payed for the book, at some future date. I realize\n> > other developers are using PostgreSQL in their work or in consulting,\n> > but this is a much more public involvement. All I can say is that I\n> > have always been ready to throw money into the PostgreSQL project, and\n> > will continue to do so when possible.\n> \n> Since you'll be doing the bulk of the work that is specific to the book,\n> I can't see any reason to object to you being the one getting paid for\n> it. Many (most?) of us are using Postgres for work purposes, or\n> otherwise deriving some kind of personal/corporate/proprietary benefit\n> from it. A book project based on Postgres seems no different to me\n> from the money my company hopes to make from running a Postgres-based\n> application. Indeed, this book project is considerably more likely\n> to return tangible benefit to the Postgres group (in the form of new\n> users/contributors attracted to the project) than most other ways\n> people might be using Postgres to make money.\n> \n> In short, you needn't offer the slightest apology for collecting the\n> book royalties personally. From what I've heard of the book-writing\n> biz, you're unlikely to get rich off it anyway :-(\n> \n> Since I didn't see any howls of outrage on the mailing list, I imagine\n> everyone else thinks the same anyway, but if you need some reassurance\n> these are my two cents.\n\nWell put Tom. Couldn't have said it better.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 20 Oct 1999 06:55:32 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] book status " }, { "msg_contents": "I will have to agree whole heartily with the sentiments here. This a\nwork that your are doing and should rightly be appropriately\ncompensated. 
All within the PostgreSQL community will benefit.\n\nThe community will benefit with increased documentation, increased\nexposure and visibility and increasing credibility. Philosophically, I\nbelieve all who use PostgreSQL benefit economically simply by keeping\nuntold numbers of dollars (or other currency) in their pocket. Like you\nI think those who do use PostgreSQL and can contribute should. But it is\nnice to know that a quality option is available for those who can't\ncontribute financially.\n\nI don't think you should feel obligated to contribute any of the\nearnings to PostgreSQL. Let your giving be by choice and joy not\nobligation due to pressure from any part of the community.\n\nAbsolutely no apology required. Your contribution is a blessing to the\ncommunity, not a burden. :)\n\nPersonally, I am ready to buy. Where can I get my copy. :)\n\nI also hope have more books for PostgreSQL cooking on the back burner.\n\nJimmie Houchin\n\n\nTom Lane wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> > Yes, I am being payed for the book, at some future date. I realize\n> > other developers are using PostgreSQL in their work or in consulting,\n> > but this is a much more public involvement. All I can say is that I\n> > have always been ready to throw money into the PostgreSQL project, and\n> > will continue to do so when possible.\n> \n> Since you'll be doing the bulk of the work that is specific to the book,\n> I can't see any reason to object to you being the one getting paid for\n> it. Many (most?) of us are using Postgres for work purposes, or\n> otherwise deriving some kind of personal/corporate/proprietary benefit\n> from it. A book project based on Postgres seems no different to me\n> from the money my company hopes to make from running a Postgres-based\n> application. Indeed, this book project is considerably more likely\n> to return tangible benefit to the Postgres group (in the form of new\n> users/contributors attracted to the project) than most other ways\n> people might be using Postgres to make money.\n> \n> In short, you needn't offer the slightest apology for collecting the\n> book royalties personally. From what I've heard of the book-writing\n> biz, you're unlikely to get rich off it anyway :-(\n> \n> Since I didn't see any howls of outrage on the mailing list, I imagine\n> everyone else thinks the same anyway, but if you need some reassurance\n> these are my two cents.\n> \n> > BTW, where do people want the PDF and HTML files? My idea is to update\n> > them every night.\n> \n> Stick 'em on the website/ftpsite somewhere under the docs page. I'd\n> probably gripe if I found them getting pulled as part of the regular CVS\n> source module --- those updates are slow enough already --- but if you\n> want to keep them in CVS I suppose a separate CVS module could be set up.\n> \n> regards, tom lane\n> \n> ************\n", "msg_date": "Wed, 20 Oct 1999 13:36:55 -0500", "msg_from": "Jimmie Houchin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] book status" } ]
[ { "msg_contents": "OK, I have the needed text, and it is ready to go in both PDF and HTML.\n\nWhere do people want it?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 19 Oct 1999 19:17:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "book" }, { "msg_contents": "On Tue, 19 Oct 1999, Bruce Momjian wrote:\n\n> OK, I have the needed text, and it is ready to go in both PDF and HTML.\n> \n> Where do people want it?\n\nUpcoming PostgreSQL Book section in our documentation? \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 19 Oct 1999 21:37:23 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] book" }, { "msg_contents": "> On Tue, 19 Oct 1999, Bruce Momjian wrote:\n> \n> > OK, I have the needed text, and it is ready to go in both PDF and HTML.\n> > \n> > Where do people want it?\n> \n> Upcoming PostgreSQL Book section in our documentation? \n\nSounds good. Just under our existing docs in html and pdf format as a\nnew item in the list, or do you want a new section?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 19 Oct 1999 20:37:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] book" }, { "msg_contents": "On Tue, 19 Oct 1999, Bruce Momjian wrote:\n\n> > On Tue, 19 Oct 1999, Bruce Momjian wrote:\n> > \n> > > OK, I have the needed text, and it is ready to go in both PDF and HTML.\n> > > \n> > > Where do people want it?\n> > \n> > Upcoming PostgreSQL Book section in our documentation? \n> \n> Sounds good. Just under our existing docs in html and pdf format as a\n> new item in the list, or do you want a new section?\n\nFormer sounds good to me :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 19 Oct 1999 22:29:16 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] book" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Hiroshi Inoue [mailto:[email protected]]\n> Sent: Tuesday, October 19, 1999 6:45 PM\n> To: Tom Lane\n> Cc: [email protected]\n> Subject: RE: [HACKERS] mdnblocks is an amazing time sink in huge\n> relations \n> \n> \n> > \n> > \"Hiroshi Inoue\" <[email protected]> writes:\n> \n> [snip]\n> \n> > \n> > > Deletion is necessary only not to consume disk space.\n> > >\n> > > For example vacuum could remove not deleted files.\n> > \n> > Hmm ... interesting idea ... but I can hear the complaints\n> > from users already...\n> >\n> \n> My idea is only an analogy of PostgreSQL's simple recovery\n> mechanism of tuples.\n> \n> And my main point is\n> \t\"delete fails after commit\" doesn't harm the database\n> \texcept that not deleted files consume disk space.\n> \n> Of cource,it's preferable to delete relation files immediately\n> after(or just when) commit.\n> Useless files are visible though useless tuples are invisible.\n>\n\nAnyway I don't need \"DROP TABLE inside transactions\" now\nand my idea is originally for that issue.\n\nAfter a thought,I propose the following solution.\n\n1. mdcreate() couldn't create existent relation files.\n If the existent file is of length zero,we would overwrite\n the file.(seems the comment in md.c says so but the\n code doesn't do so). \n If the file is an Index relation file,we would overwrite\n the file.\n\n2. mdunlink() couldn't unlink non-existent relation files.\n mdunlink() doesn't call elog(ERROR) even if the file\n doesn't exist,though I couldn't find where to change\n now.\n mdopen() doesn't call elog(ERROR) even if the file\n doesn't exist and leaves the relation as CLOSED. \n\nComments ?\n\nRegards. \n\nHiroshi Inoue\[email protected]\n", "msg_date": "Wed, 20 Oct 1999 10:09:13 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] mdnblocks is an amazing time sink in huge relations " }, { "msg_contents": "> \n> After a thought,I propose the following solution.\n> \n> 1. mdcreate() couldn't create existent relation files.\n> If the existent file is of length zero,we would overwrite\n> the file.(seems the comment in md.c says so but the\n> code doesn't do so). \n> If the file is an Index relation file,we would overwrite\n> the file.\n>\n\nThis may allow to CREATE TABLE simultaneously for the\nsame table name. I would change to check the existence\nof the same table name correctly in heap_create_with_ca\ntalog().\n \n> 2. mdunlink() couldn't unlink non-existent relation files.\n> mdunlink() doesn't call elog(ERROR) even if the file\n> doesn't exist,though I couldn't find where to change\n> now.\n\n_mdfd_getrelnfd(),mdnblocks() doesn't call elog().\nReturn code will be checked.\n\n> mdopen() doesn't call elog(ERROR) even if the file\n> doesn't exist and leaves the relation as CLOSED. \n> \n> Comments ?\n>\n\nRecently I saw 2 postings about this in pgsql MLs.\nSo I want to change as above.\n\n2. was changed by Tom(mdunlink/mdopen) and\nTatsuo(mdopen) recently.\nAny Problems ?\n\nRegards.\n\nHiroshi Inoue\[email protected] \n", "msg_date": "Sat, 23 Oct 1999 09:40:21 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] mdnblocks is an amazing time sink in huge relations " }, { "msg_contents": "> > \n> > After a thought,I propose the following solution.\n> > \n> > 1. 
mdcreate() couldn't create existent relation files.\n> > If the existent file is of length zero,we would overwrite\n> > the file.(seems the comment in md.c says so but the\n> > code doesn't do so). \n> > If the file is an Index relation file,we would overwrite\n> > the file.\n> >\n> \n> This may allow to CREATE TABLE simultaneously for the\n> same table name. I would change to check the existence\n\nAs I was afraid,2 tables of a same name could be made.\nAfter a short investigating,I found that system indexes are\nnever unique indexes.\nWhy ?\nWithout duplicate index check,it's very difficult to prevent\nobjects from having same name.\n\nComments ?\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Sun, 24 Oct 1999 08:48:38 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "System indexes are never unique indexes( was RE: [HACKERS] mdnblocks\n\tis an amazing time sink in huge relations)" }, { "msg_contents": "> As I was afraid,2 tables of a same name could be made.\n> After a short investigating,I found that system indexes are\n> never unique indexes.\n> Why ?\n> Without duplicate index check,it's very difficult to prevent\n> objects from having same name.\n\nThey certainly should be unique.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 25 Oct 1999 23:58:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System indexes are never unique indexes( was RE: [HACKERS]\n\tmdnblocks is an amazing time sink in huge relations)" }, { "msg_contents": "> \n> > As I was afraid,2 tables of a same name could be made.\n> > After a short investigating,I found that system indexes are\n> > never unique indexes.\n> > Why ?\n> > Without duplicate index check,it's very difficult to prevent\n> > objects from having same name.\n> \n> They certainly should be unique.\n>\n\nAll should be unique ?\nI don't know system indexes well.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n", "msg_date": "Tue, 26 Oct 1999 13:27:22 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: System indexes are never unique indexes( was RE: [HACKERS]\n\tmdnblocksis an amazing time sink in huge relations)" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > \n> > > As I was afraid,2 tables of a same name could be made.\n> > > After a short investigating,I found that system indexes are\n> > > never unique indexes.\n> > > Why ?\n> > > Without duplicate index check,it's very difficult to prevent\n> > > objects from having same name.\n> > \n> > They certainly should be unique.\n> >\n> \n> All should be unique ?\n> I don't know system indexes well.\n\nNot sure. I don't remember which ones. I can take a look when I add\nmore indexes for 7.0.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Oct 1999 01:10:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System indexes are never unique indexes( was RE: [HACKERS]\n\tmdnblocksis an amazing time sink in huge relations)" }, { "msg_contents": ">\n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > >\n> > > > As I was afraid,2 tables of a same name could be made.\n> > > > After a short investigating,I found that system indexes are\n> > > > never unique indexes.\n> > > > Why ?\n> > > > Without duplicate index check,it's very difficult to prevent\n> > > > objects from having same name.\n> > >\n> > > They certainly should be unique.\n> > >\n> >\n> > All should be unique ?\n> > I don't know system indexes well.\n>\n> Not sure. I don't remember which ones. I can take a look when I add\n> more indexes for 7.0.\n\n Don't remember if really or what, but wasn't there some\n problem with cached system relations, unique indices and\n concurrency?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 26 Oct 1999 10:11:46 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: System indexes are never unique indexes( was RE: [HACKERS]\n\tmdnblocksis" }, { "msg_contents": "> > Not sure. I don't remember which ones. I can take a look when I add\n> > more indexes for 7.0.\n> \n> Don't remember if really or what, but wasn't there some\n> problem with cached system relations, unique indices and\n> concurrency?\n> \n\nI don't remember anything about that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Oct 1999 12:25:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System indexes are never unique indexes( was RE: [HACKERS]\n\tmdnblocksis" }, { "msg_contents": ">\n> > > Not sure. I don't remember which ones. I can take a look when I add\n> > > more indexes for 7.0.\n> >\n> > Don't remember if really or what, but wasn't there some\n> > problem with cached system relations, unique indices and\n> > concurrency?\n> >\n>\n> I don't remember anything about that.\n>\n\nI don't know old PostgreSQL at all.\nOnly one thing I could suppose is the following.\n\nBefore MVCC it was unnecessary to read dirty(uncommited) tuples\nto check uniqueness because a table level exclusive lock was acquired\nautomatically. As for user tuples,the consistency was perserved because\nthe lock was held until transaction end. As for system tuples,the\nconsistency\ncould be broken if the lock was a short term lock.\n\nAfter MVCC,dirty(uncommitted) tuples are taken into account to check\nuniqueness and any lock is no longer needed.\n\nAFAIK,there are no other means to check(lock ?) 
(logically) non-existent\nrows now(Referencial Integrity would provide the second one).\nSo probably PostgreSQL couldn't guarantee the uniquness of system\ntuples in many cases.\n\nAnyway,I want to change the implementation of mdcreate() to reuse\nexistent files but the uniqueness of table name is preserved by the\ncurrent implementation narrowly.\n\nFirst of all,I would change pg_type,pg_class.\nIt's OK ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Wed, 27 Oct 1999 09:52:55 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: System indexes are never unique indexes( was RE: [HACKERS]\n\tmdnblocksis" }, { "msg_contents": "> I don't know old PostgreSQL at all.\n> Only one thing I could suppose is the following.\n> \n> Before MVCC it was unnecessary to read dirty(uncommited) tuples\n> to check uniqueness because a table level exclusive lock was acquired\n> automatically. As for user tuples,the consistency was perserved because\n> the lock was held until transaction end. As for system tuples,the\n> consistency\n> could be broken if the lock was a short term lock.\n> \n> After MVCC,dirty(uncommitted) tuples are taken into account to check\n> uniqueness and any lock is no longer needed.\n> \n> AFAIK,there are no other means to check(lock ?) (logically) non-existent\n> rows now(Referencial Integrity would provide the second one).\n> So probably PostgreSQL couldn't guarantee the uniquness of system\n> tuples in many cases.\n> \n> Anyway,I want to change the implementation of mdcreate() to reuse\n> existent files but the uniqueness of table name is preserved by the\n> current implementation narrowly.\n> \n> First of all,I would change pg_type,pg_class.\n> It's OK ?\n\nSure.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Oct 1999 21:24:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System indexes are never unique indexes( was RE: [HACKERS]\n\tmdnblocksis" }, { "msg_contents": "> >\n> > Anyway,I want to change the implementation of mdcreate() to reuse\n> > existent files but the uniqueness of table name is preserved by the\n> > current implementation narrowly.\n> >\n> > First of all,I would change pg_type,pg_class.\n> > It's OK ?\n>\n> Sure.\n>\n\nI made a patch.\nBut I'm not sure my solution is right.\nIs there a better way ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n*** ../../head/pgcurrent/backend/bootstrap/bootscanner.l\tTue Sep 14\n12:17:34 1999\n--- backend/bootstrap/bootscanner.l\tTue Oct 26 23:36:08 1999\n***************\n*** 90,95 ****\n--- 90,96 ----\n \"declare\"\t\t{ return(XDECLARE); }\n \"build\"\t\t\t{ return(XBUILD); }\n \"indices\"\t\t{ return(INDICES); }\n+ \"unique\"\t\t{ return(UNIQUE); }\n \"index\"\t\t\t{ return(INDEX); }\n \"on\"\t\t\t{ return(ON); }\n \"using\"\t\t\t{ return(USING); }\n*** ../../head/pgcurrent/backend/bootstrap/bootparse.y\tMon Jul 26 12:44:44\n1999\n--- backend/bootstrap/bootparse.y\tTue Oct 26 23:47:20 1999\n***************\n*** 80,86 ****\n %token <ival> CONST ID\n %token OPEN XCLOSE XCREATE INSERT_TUPLE\n %token STRING XDEFINE\n! 
%token XDECLARE INDEX ON USING XBUILD INDICES\n %token COMMA EQUALS LPAREN RPAREN\n %token OBJ_ID XBOOTSTRAP NULLVAL\n %start TopLevel\n--- 80,86 ----\n %token <ival> CONST ID\n %token OPEN XCLOSE XCREATE INSERT_TUPLE\n %token STRING XDEFINE\n! %token XDECLARE INDEX ON USING XBUILD INDICES UNIQUE\n %token COMMA EQUALS LPAREN RPAREN\n %token OBJ_ID XBOOTSTRAP NULLVAL\n %start TopLevel\n***************\n*** 106,111 ****\n--- 106,112 ----\n \t\t| Boot_CreateStmt\n \t\t| Boot_InsertStmt\n \t\t| Boot_DeclareIndexStmt\n+ \t\t| Boot_DeclareUniqueIndexStmt\n \t\t| Boot_BuildIndsStmt\n \t\t;\n\n***************\n*** 226,231 ****\n--- 227,245 ----\n \t\t\t\t\t\t\t\tLexIDStr($3),\n \t\t\t\t\t\t\t\tLexIDStr($7),\n \t\t\t\t\t\t\t\t$9, NIL, 0, 0, 0, NIL);\n+ \t\t\t\t\tDO_END;\n+ \t\t\t\t}\n+ \t\t;\n+\n+ Boot_DeclareUniqueIndexStmt:\n+ \t\t XDECLARE UNIQUE INDEX boot_ident ON boot_ident USING boot_ident\nLPAREN boot_index_params RPAREN\n+ \t\t\t\t{\n+ \t\t\t\t\tDO_START;\n+\n+ \t\t\t\t\tDefineIndex(LexIDStr($6),\n+ \t\t\t\t\t\t\t\tLexIDStr($4),\n+ \t\t\t\t\t\t\t\tLexIDStr($8),\n+ \t\t\t\t\t\t\t\t$10, NIL, 1, 0, 0, NIL);\n \t\t\t\t\tDO_END;\n \t\t\t\t}\n \t\t;\n*** ../../head/pgcurrent/backend/catalog/genbki.sh.in\tMon Jul 26 12:44:44\n1999\n--- backend/catalog/genbki.sh.in\tTue Oct 26 22:03:43 1999\n***************\n*** 164,169 ****\n--- 164,183 ----\n \tprint \"declare index \" data\n }\n\n+ /^DECLARE_UNIQUE_INDEX\\(/ {\n+ # ----\n+ # end any prior catalog data insertions before starting a define unique\nindex\n+ # ----\n+ \tif (reln_open == 1) {\n+ #\t\tprint \"show\";\n+ \t\tprint \"close \" catalog;\n+ \t\treln_open = 0;\n+ \t}\n+\n+ \tdata = substr($0, 22, length($0) - 22);\n+ \tprint \"declare unique index \" data\n+ }\n+\n /^BUILD_INDICES/\t{ print \"build indices\"; }\n\n # ----------------\n*** ../../head/pgcurrent/include/postgres.h\tMon Oct 25 22:13:13 1999\n--- include/postgres.h\tTue Oct 26 21:45:27 1999\n***************\n*** 138,143 ****\n--- 138,144 ----\n #define DATA(x) extern int errno\n #define DESCR(x) extern int errno\n #define DECLARE_INDEX(x) extern int errno\n+ #define DECLARE_UNIQUE_INDEX(x) extern int errno\n\n #define BUILD_INDICES\n #define BOOTSTRAP\n*** ../../head/pgcurrent/include/catalog/indexing.h\tMon Oct 4 14:25:34\n1999\n--- include/catalog/indexing.h\tTue Oct 26 21:47:24 1999\n***************\n*** 102,112 ****\n DECLARE_INDEX(pg_proc_oid_index on pg_proc using btree(oid oid_ops));\n DECLARE_INDEX(pg_proc_proname_narg_type_index on pg_proc using\nbtree(proname name_ops, pronargs int2_ops, proargtypes oid8_ops));\n\n! DECLARE_INDEX(pg_type_oid_index on pg_type using btree(oid oid_ops));\n! DECLARE_INDEX(pg_type_typname_index on pg_type using btree(typname\nname_ops));\n\n! DECLARE_INDEX(pg_class_oid_index on pg_class using btree(oid oid_ops));\n! DECLARE_INDEX(pg_class_relname_index on pg_class using btree(relname\nname_ops));\n\n DECLARE_INDEX(pg_attrdef_adrelid_index on pg_attrdef using btree(adrelid\noid_ops));\n\n--- 102,112 ----\n DECLARE_INDEX(pg_proc_oid_index on pg_proc using btree(oid oid_ops));\n DECLARE_INDEX(pg_proc_proname_narg_type_index on pg_proc using\nbtree(proname name_ops, pronargs int2_ops, proargtypes oid8_ops));\n\n! DECLARE_UNIQUE_INDEX(pg_type_oid_index on pg_type using btree(oid\noid_ops));\n! DECLARE_UNIQUE_INDEX(pg_type_typname_index on pg_type using btree(typname\nname_ops));\n\n! DECLARE_UNIQUE_INDEX(pg_class_oid_index on pg_class using btree(oid\noid_ops));\n! 
DECLARE_UNIQUE_INDEX(pg_class_relname_index on pg_class using\nbtree(relname name_ops));\n\n DECLARE_INDEX(pg_attrdef_adrelid_index on pg_attrdef using btree(adrelid\noid_ops));\n\n\n", "msg_date": "Wed, 27 Oct 1999 18:45:49 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: System indexes are never unique indexes( was RE: [HACKERS]\n\tmdnblocksis" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > >\n> > > Anyway,I want to change the implementation of mdcreate() to reuse\n> > > existent files but the uniqueness of table name is preserved by the\n> > > current implementation narrowly.\n> > >\n> > > First of all,I would change pg_type,pg_class.\n> > > It's OK ?\n> >\n> > Sure.\n> >\n> \n> I made a patch.\n> But I'm not sure my solution is right.\n> Is there a better way ?\n\nLooks perfect to me.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 27 Oct 1999 12:28:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System indexes are never unique indexes( was RE: [HACKERS]\n\tmdnblocksis" }, { "msg_contents": "> \n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > > >\n> > > > Anyway,I want to change the implementation of mdcreate() to reuse\n> > > > existent files but the uniqueness of table name is preserved by the\n> > > > current implementation narrowly.\n> > > >\n> > > > First of all,I would change pg_type,pg_class.\n> > > > It's OK ?\n> > >\n> > > Sure.\n> > >\n> > \n> > I made a patch.\n> > But I'm not sure my solution is right.\n> > Is there a better way ?\n> \n> Looks perfect to me.\n>\n\nThanks.\nI would commit it with some other changes.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n", "msg_date": "Thu, 28 Oct 1999 08:15:25 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: System indexes are never unique indexes( was RE: [HACKERS]\n\tmdnblocksis" }, { "msg_contents": "> > Looks perfect to me.\n> >\n> \n> Thanks.\n> I would commit it with some other changes.\n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected] \n> \n\nI want to add some system indexes to cache lookups are faster, but am\nhaving problems with the bootup code. Let me know if you are interested\nin looking at it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 27 Oct 1999 20:06:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System indexes are never unique indexes( was RE: [HACKERS]\n\tmdnblocksis" }, { "msg_contents": "Hiroshi, there are two things I want to do for 7.0.\n\nFirst, I want to make more of the system indexes unique. Are you aware\nof any reasons not to do that? I see you have done some of them\nalready. I talked to Tom Lane, and he thinks that it will not cause\nproblems because unique insertions/updates wait for transactions to\ncommit before doing a conflicting change to the index, right?\n \nSecond, I want to add more system indexes to match all caches. 
Anything\nthat could cause problems there?\n\nAre you working on any of this?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Nov 1999 23:37:56 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Unique indexes on system tables" }, { "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Tuesday, November 16, 1999 1:38 PM\n> To: Hiroshi Inoue\n> Cc: PostgreSQL-development\n> Subject: Unique indexes on system tables\n> \n> \n> Hiroshi, there are two things I want to do for 7.0.\n> \n> First, I want to make more of the system indexes unique. Are you aware\n> of any reasons not to do that?\n\nNo.\nIt should be done to guarantee the uniqueness of system tuples.\n\n> I see you have done some of them\n> already. I talked to Tom Lane, and he thinks that it will not cause\n> problems because unique insertions/updates wait for transactions to\n> commit before doing a conflicting change to the index, right?\n>\n\nYes.\n \n> Second, I want to add more system indexes to match all caches. Anything\n> that could cause problems there?\n>\n\nI am only afraid of index corruption.\nThe more we have system indexes,the more index corruption would happen.\n\nHow could we recover from the state ?\nTom suggested rebuilding indexes in vacuum.\nAccording to Jan,there was a utility called reindexdb.\nWAL by Vadim may be able to recover indexes completely in case of crash.\n\n> Are you working on any of this?\n>\n\nNo.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n \n\n", "msg_date": "Tue, 16 Nov 1999 14:25:37 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Unique indexes on system tables" }, { "msg_contents": "On Tue, 16 Nov 1999, Hiroshi Inoue wrote:\n\n> I am only afraid of index corruption.\n> The more we have system indexes,the more index corruption would happen.\n\nJust a concerned user question: Why does index corruption seem to happen\nso often or is a genuine concern? Wouldn't the next thing be table\ncorruption? Or are indices optimized for speed rather than correctness\nbecause they don't contain important data?\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 17 Nov 1999 12:11:39 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: Unique indexes on system tables" }, { "msg_contents": ">\n> On Tue, 16 Nov 1999, Hiroshi Inoue wrote:\n>\n> > I am only afraid of index corruption.\n> > The more we have system indexes,the more index corruption would happen.\n>\n> Just a concerned user question: Why does index corruption seem to happen\n> so often or is a genuine concern? Wouldn't the next thing be table\n> corruption? Or are indices optimized for speed rather than correctness\n> because they don't contain important data?\n\n There are more complicated concurrency issues on indices than\n for regular tables. That's where the corrupt indices but not\n tables come from.\n\n For a user index, this isn't very critical, because a\n drop/create index sequence will recover to consistent data.\n\n For system catalog indices, this is a desaster, because you\n cannot drop and recreate indices on system tables. 
At least\n we need to tackle this problem by reincarnating reindexdb.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 17 Nov 1999 13:39:34 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: Unique indexes on system tables" } ]
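For readers following the patch in the thread above, its net effect is easy to trace. A catalog header line such as

    DECLARE_UNIQUE_INDEX(pg_class_relname_index on pg_class using btree(relname name_ops));

is now emitted by genbki.sh into the bootstrap script as a command of the form

    declare unique index pg_class_relname_index on pg_class using btree(relname name_ops)

and the new Boot_DeclareUniqueIndexStmt grammar rule hands that to DefineIndex() with what is evidently the unique flag set to 1 (where the plain DECLARE_INDEX path passes 0). The intended result, per the discussion, is that a duplicate relname in pg_class or duplicate type name in pg_type is rejected at index-insertion time instead of slipping in as a second row.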
[ { "msg_contents": "\n\n\n\n\nBENCHMARK SUPPLY\n7540 BRIDGEGATE COURT\nATLANTA GA 30350\n\n***LASER PRINTER TONER CARTRIDGES***\n***FAX AND COPIER TONER***\n \n CHECK OUT OUR NEW CARTRIDGE PRICES :\n \n\nAPPLE \n \n LASER WRITER PRO 600 OR 16/600 $69\n LASER WRITER SELECT 300,310.360 $69\n LASER WRITER 300, 320 $54 \n LASER WRITER LS,NT,2NTX,2F,2G & 2SC $54 \n LASER WRITER 12/640 $79 \n\nHEWLETT PACKARD \n\n LASERJET SERIES 2,3 & 3D (95A) $49 \n LASERJET SERIES 2P AND 3P (75A) $54 \n LASERJET SERIES 3SI AND 4SI (91A) $75 \n LASERJET SERIES 4L AND 4P $49 \n LASERJET SERIES 4, 4M, 5, 5M, 4+ (98A) $59 \n LASERJET SERIES 4000 HIGH YIELD (27X) $99 \n LASERJET SERIES 4V $95 \n LASERJET SERIES 5SI , 8000 $95 \n LASERJET SERIES 5L AND 6L $49 \n LASERJET SERIES 5P, 5MP, 6P, 6MP $59 \n LASERJET SERIES 5000 (29A) $135\n LASERJET SERIES 1100 (92A) $49 \n LASERJET SERIES 2100 (96A) $89\n LASERJET SERIES 8100 (82X)\t\t $145\n\n\nHP LASERFAX \n\n LASERFAX 500, 700, FX1, $59 \n LASERFAX 5000, 7000, FX2, $59 \n LASERFAX FX3 $69 \n LASERFAX FX4 $79 \n \n\nLEXMARK \n\n OPTRA 4019, 4029 HIGH YIELD $135 \n OPTRA R, 4039, 4049 HIGH YIELD $135 \n OPTRA S 4059 HIGH YIELD $135 \n OPTRA E $59 \n OPTRA N $115 \n \n\nEPSON \n\n EPL-7000, 8000 $105 \n EPL-1000, 1500 $105 \n \n\nCANON \n\n LBP-430 $49 \n LBP-460, 465 $59 \n LBP-8 II $54 \n LBP-LX $54 \n LBP-MX $95 \n LBP-AX $49 \n LBP-EX $59 \n LBP-SX $49 \n LBP-BX $95 \n LBP-PX $49 \n LBP-WX $95 \n LBP-VX $59 \n CANON FAX L700 THRU L790 FX1 $59 \n CANONFAX L5000 L70000 FX2 $59 \n \n\nCANON COPIERS \n\n PC 20, 25 ETC.... $89 \n PC 3, 6RE, 7, 11 (A30) $69 \n PC 320 THRU 780 (E40) $89 \n \n\nNEC \n\n SERIES 2 LASER MODEL 90,95 $105\n\n\nPLEASE NOTE:\n\n1) ALL OUR CARTRIDGES ARE GENUINE OEM CARTRIDGES.\n2) WE DO NOT SEND OUT CATALOGS OR PRICE LISTS \n3) WE DO NOT FAX QUOTES OR PRICE LISTS. \n4) WE DO NOT SELL TO RESELLERS OR BUY FROM DISTRIBUTERS\n5) WE DO NOT CARRY: BROTHER-MINOLTA-KYOSERA-PANASONIC PRODUCTS\n6) WE DO NOT CARRY: XEROX-FUJITSU-OKIDATA OR SHARP PRODUCTS\n7) WE DO NOT CARRY ANY COLOR PRINTER SUPPLIES \n8) WE DO NOT CARRY DESKJET/INKJET OR BUBBLEJET SUPPLIES\n9) WE DO NOT BUY FROM OR SELL TO RECYCLERS OR REMANUFACTURERS\n\n WE ACCEPT GOVERNMENT, SCHOOL & UNIVERSITY PURCHASE ORDERS\n JUST LEAVE YOUR PO # WITH CORRECT BILLING & SHIPPING ADDRESS\n\n \n\n****OUR ORDER LINE IS 770-399-0953 ****\n\n****OUR CUSTOMER SERVICE LINE IS 800-586-0540****\n****OUR E-MAIL REMOVAL AND COMPLAINT LINE IS 888-532-7170****\n\n****PLACE YOUR ORDER AS FOLLOWS**** :\n\nBY PHONE 770-399-0953 \n\nBY FAX: 770-698-9700 \nBY MAIL: BENCHMARK PRINT SUPPLY\n 7540 BRIDGEGATE COURT\n, ATLANTA GA 30350\n\nMAKE SURE YOU INCLUDE THE FOLLOWING INFORMATION IN YOUR ORDER: \n\n 1) YOUR PHONE NUMBER \n 2) COMPANY NAME \n 3) SHIPPING ADDRESS \n 4) YOUR NAME \n 5) ITEMS NEEDED WITH QUANTITIES \n 6) METHOD OF PAYMENT. (COD OR CREDIT CARD) \n 7) CREDIT CARD NUMBER WITH EXPIRATION DATE \n\n \n1) WE SHIP UPS GROUND. ADD $4.5 FOR SHIPPING AND HANDLING.\n2) COD CHECK ORDERS ADD $3.5 TO YOUR SHIPPING COST.\n2) WE ACCEPT ALL MAJOR CREDIT CARD OR \"COD\" ORDERS.\n3) OUR STANDARD MERCHANDISE REFUND POLICY IS NET 30 DAYS\n4) OUR STANDARD MERCHANDISE REPLCAMENT POLICY IS NET 90 DAYS. \n\n\nNOTE NUMBER (1): \n\nPLEASE DO NOT CALL OUR ORDER LINE TO REMOVE YOUR E-MAIL \nADDRESS OR COMPLAIN. 
OUR ORDER LINE IS NOT SETUP TO FORWARD \nYOUR E-MAIL ADDRESS REMOVAL REQUESTS OR PROCESS YOUR \nCOMPLAINTS..IT WOULD BE A WASTED PHONE CALL.YOUR ADDRESS \nWOULD NOT BE REMOVED AND YOUR COMPLAINTS WOULD NOT BE \nHANDLED.PLEASE CALL OUR TOLL FREE E-MAIL REMOVAL AND \nCOMPLAINT LINE TO DO THAT.\n\nNOTE NUMBER (2):\n\nOUR E-MAIL RETURN ADDRESS IS NOT SETUP TO ANSWER ANY \nQUESTIONS YOU MIGHT HAVE REGARDING OUR PRODUCTS. OUR E-MAIL \nRETURN ADDRESS IS ALSO NOT SETUP TO TAKE ANY ORDERS AT \nTHIS TIME. PLEASE CALL THE ORDER LINE TO PLACE YOUR ORDER\n OR HAVE ANY QUESTIONS ANSWERED. OTHERWISE PLEASE CALL OUR \nCUSTOMER SERCICE LINE.\n\n\nNOTE NUMBER (3):\n\nOWNERS OF ANY OF THE DOMAINS THAT APPEAR IN THE HEADER OF \nTHIS MESSAGE,ARE IN NO WAY ASSOCIATED WITH, PROMOTING, \nDISTRIBUTING OR ENDORSING ANY OF THE PRODUCTS ADVERTISED \nHEREIN AND ARE NOT LIABLE TO ANY CLAIMS THAT MAY ARISE \nTHEREOF. \n \n\n\n\n\n\n \n \n \n \n \n \n \n \n", "msg_date": "Wed, 20 Oct 1999 01:23:46", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "laser printer toner advertisement" } ]
[ { "msg_contents": "\nNow I thought this was discussed recently and this:\n\ncreate table foo(\nx int,\ny datetime default current_time);\n\nwould put the current date and time into y whenever a new record was\ninserted. It appears to give the date and time the stupid table was\ncreated. Is it me or is something broke?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Tue, 19 Oct 1999 22:05:33 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "current_time?" }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n> Now I thought this was discussed recently and this:\n> create table foo(\n> x int,\n> y datetime default current_time);\n> would put the current date and time into y whenever a new record was\n> inserted. It appears to give the date and time the stupid table was\n> created. Is it me or is something broke?\n\nThe behavior for this was changed very recently. Since current sources\nrefuse the above:\n\nregression=> create table foo(\nregression-> x int,\nregression-> y datetime default current_time);\nERROR: Attribute 'y' is of type 'datetime' but default expression is of type 'time'\n You will need to rewrite or cast the expression\n\nI am guessing you are trying it with 6.5.*, where indeed you will likely\nget the time of table creation. Recommended approach is\n\ty datetime default now()\nwhich works the way you want in all Postgres versions AFAIK.\n\nNext question is whether current sources are broken to refuse the above.\nSince I get\n\nregression=> create table zz (x datetime);\nCREATE\nregression=> insert into zz values(current_time);\nERROR: Attribute 'x' is of type 'datetime' but expression is of type 'time'\n You will need to rewrite or cast the expression\n\nit seems I managed to make default-expression handling consistent\nwith the rest of the system, but that doesn't necessarily mean this\nbehavior is desirable... Thomas, what say you?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Oct 1999 00:52:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] current_time? " }, { "msg_contents": "On Wed, 20 Oct 1999, Tom Lane wrote:\n\n> Vince Vielhaber <[email protected]> writes:\n> > Now I thought this was discussed recently and this:\n> > create table foo(\n> > x int,\n> > y datetime default current_time);\n> > would put the current date and time into y whenever a new record was\n> > inserted. It appears to give the date and time the stupid table was\n> > created. Is it me or is something broke?\n> \n> The behavior for this was changed very recently. Since current sources\n> refuse the above:\n> \n> regression=> create table foo(\n> regression-> x int,\n> regression-> y datetime default current_time);\n> ERROR: Attribute 'y' is of type 'datetime' but default expression is of type 'time'\n> You will need to rewrite or cast the expression\n> \n> I am guessing you are trying it with 6.5.*, where indeed you will likely\n> get the time of table creation. 
Recommended approach is\n> \ty datetime default now()\n> which works the way you want in all Postgres versions AFAIK.\n\nThis works. I had tried something earlier (during the thread a couple\nweeks back) DEFAULT TEXT 'now' which didn't work at all for me. A\nlittle playing and I just now figured out why it didn't work.. When I\ntried that I had y as a text field - which only put a 'now' in it so I\nwas avoiding using now under the assumption that it wouldn't work.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 20 Oct 1999 06:47:36 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] current_time? " } ]
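A minimal SQL sketch of the approach recommended in the thread above; the table and column names are hypothetical, and datetime is the type name used in the thread (later releases call the equivalent type timestamp):

    CREATE TABLE log_entry (
        x int,
        y datetime DEFAULT now()   -- evaluated at INSERT time, not at CREATE TABLE time
    );

    INSERT INTO log_entry (x) VALUES (1);
    INSERT INTO log_entry (x) VALUES (2);
    SELECT * FROM log_entry;       -- each row should carry its own insertion time

As the thread notes, DEFAULT 'now' on a text column just stores the literal string 'now'; the function call now() is what gives per-insert evaluation.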
[ { "msg_contents": "Since no one replied in pg-general, I am reposting to hackers.\n\nQ1:\nLet's say I want to create a '+' operator for <my own type> + int4. Do I\nreally have to\ndefine two '+' operators, one\n\n<my own type> + int\n\nand the other\n\nint + <my own type>\n\nQ2:\n\nCan I create an operator '::', such as <my own type>::double ?\n\nGene Sokolov.\n\n\n\n", "msg_date": "Wed, 20 Oct 1999 10:09:58 +0400", "msg_from": "\"Gene Sokolov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Creating operators" }, { "msg_contents": "\"Gene Sokolov\" <[email protected]> writes:\n> Q1:\n> Let's say I want to create a '+' operator for <my own type> + int4. Do I\n> really have to\n> define two '+' operators, one\n> <my own type> + int\n> and the other\n> int + <my own type>\n\nYes. There's nothing compelling them to behave the same, after all\n(consider '-' instead of '+'). If they do behave the same you\nshould indicate this with \"commutator\" links --- see the discussion\nin the manual.\n\n> Can I create an operator '::', such as <my own type>::double ?\n\nYou can't redefine the meaning of the typecast construct '::',\nif that's what you meant. But perhaps what you really meant\nwas that you want to provide a conversion from your type to\ndouble. For that you just make a function named 'double',\nyielding double, and taking your type as input. The typecast\ncode will use it automatically.\n\n(Of course \"double\" is spelled \"float8\" in Postgres-land,\nbut you knew that...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Oct 1999 02:51:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Creating operators " }, { "msg_contents": "> > Can I create an operator '::', such as <my own type>::double ?\n\nThis brings up something: I see mention in Date and Darwen of an SQL3\nenumerated type. Nice feature, but they show the syntax being:\n\n <type>::<value>\n\nwhich reuses the \"::\" operator in a way which may be incompatible with\nPostgres' usage (seems to me to have the fields reversed). btw, I\ncould imagine implementing enumerated types as Postgres arrays or as a\nseparate table per type.\n\nHas anyone come across this SQL3 feature yet? Any opinions??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 20 Oct 1999 12:51:14 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Creating operators" } ]
[ { "msg_contents": "> application. Indeed, this book project is considerably more likely\n> to return tangible benefit to the Postgres group (in the form of new\n> users/contributors attracted to the project) than most other ways\n> people might be using Postgres to make money.\n\nExactly! No need to apologize for attracting attention to PostgreSQL.\n\n> In short, you needn't offer the slightest apology for collecting the\n> book royalties personally. From what I've heard of the book-writing\n> biz, you're unlikely to get rich off it anyway :-(\n\nAnd if you do get rich, you can retire and spend all your time on\nenhancements to PostgreSQL :-)\n\n", "msg_date": "Wed, 20 Oct 1999 08:17:37 +0200", "msg_from": "Kaare Rasmussen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] book status " }, { "msg_contents": "I think a bit of explanation is required for this story:\n\nhttp://www.newsalert.com/bin/story?StoryId=CozDUWbKbytiXnZy&FQ=Linux&Nav=na-search-&StoryTitle=Linux\n\nUp until now, the MySQL people have been boasting performance as the\nproduct's great advantage. Now this contradicts thi sfor the first time. I\nbelieve it has to do with the test. Perhaps MySQL is faster when you just\ndo one simple SELECT * FROM table, and that it has never really been\ntested in a real-life (or as close as possible) environment?\n\n", "msg_date": "Tue, 15 Aug 2000 09:26:35 +0200", "msg_from": "Kaare Rasmussen <[email protected]>", "msg_from_op": true, "msg_subject": "Open Source Database Routs Competition in New Benchmark Tests" }, { "msg_contents": "\n\nKaare Rasmussen <[email protected]> wrote:\n\n> I think a bit of explanation is required for this story:\n> \n> http://www.newsalert.com/bin/story?StoryId=CozDUWbKbytiXnZy&FQ=Linux&Nav=na-search-&StoryTitle=Linux\n> \n> Up until now, the MySQL people have been boasting performance as the\n> product's great advantage. Now this contradicts thi sfor the first time. I\n> believe it has to do with the test. Perhaps MySQL is faster when you just\n> do one simple SELECT * FROM table, and that it has never really been\n> tested in a real-life (or as close as possible) environment?\n\nI wouldn't say that this is exactly the first time we've heard \nabout problems with MySQL's famed \"speed\". Take the Tim Perdue \narticle that came out a while back: \n\nhttp://www.phpbuilder.com/columns/tim20000705.php3?page=1\n\n The most interesting thing about my test results was to\n see how much of a load Postgres could withstand before\n giving any errors. In fact, Postgres seemed to scale 3\n times higher than MySQL before giving any errors at\n all. MySQL begins collapsing at about 40-50 concurrent\n connections, whereas Postgres handily scaled to 120\n before balking. My guess is, that Postgres could have\n gone far past 120 connections with enough memory and CPU.\n\n On the surface, this can appear to be a huge win for\n Postgres, but if you look at the results in more detail,\n you'll see that Postgres took up to 2-3 times longer to\n generate each page, so it needs to scale 2-3 times higher\n just to break even with MySQL. So in terms of max numbers\n of pages generated concurrently without giving errors,\n it's pretty much a dead heat between the two\n databases. In terms of generating one page at a time,\n MySQL does it up to 2-3 times faster.\n\nAs written, this not exactly slanted toward postgresql, but\nyou could easily rephrase this as \"MySQL is fast, but not\nunder heavy load. 
When heavily loaded, it degrades much\nfaster than Postgresql, and they're both roughly the same\nspeed, despite the fact that postgresql is doing more\n(transaction processing, etc.).\"\n\nThis story has made slashdot: \n\nhttp://slashdot.org/article.pl?sid=00/08/14/2128237&amp;mode=nested\n\nSome of the comments are interesting. One MySQL defender\nclaims that the bottle neck in the benchmarks Great Bridge\nused is the ODBC drivers. It's possible that all the test\nreally shows is that MySQL has a poor ODBC driver.\n\n", "msg_date": "Tue, 15 Aug 2000 01:24:59 -0700", "msg_from": "Joe Brenner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open Source Database Routs Competition in New Benchmark Tests " }, { "msg_contents": "At 09:26 AM 8/15/00 +0200, Kaare Rasmussen wrote:\n>I think a bit of explanation is required for this story:\n>\n>http://www.newsalert.com/bin/story?StoryId=CozDUWbKbytiXnZy&FQ=Linux&Nav=na\n-search-&StoryTitle=Linux\n>\n>Up until now, the MySQL people have been boasting performance as the\n>product's great advantage. Now this contradicts thi sfor the first time. I\n>believe it has to do with the test. Perhaps MySQL is faster when you just\n>do one simple SELECT * FROM table, and that it has never really been\n>tested in a real-life (or as close as possible) environment?\n\nIt's no secret that MySQL falls apart under load when there are inserts\nand updates in the mix. They do table-level locking. If you read \nvarious threads about \"hints and tricks\" in MySQL-land concerning \nperformance in high-concurrency (i.e. web site) situations, there are\nall sorts of suggestions about periodically caching copies of tables for\nreading so readers don't get blocked, etc.\n\nThe sickness lies in the fact that the folks writing these complex workarounds\nare still convinced that MySQL is the fastest, most efficient DB tool\navailable,\nthat the lack of transactions is making their system faster, and that the\nconcurrency problems they see are no worse than are seen with \"real\" a RDBMS\nlike Oracle or Postgres. \n\nThe level of ignorance in the MySQL world is just stunning at times, mostly\ndue to a lot of openly dishonest (IMO) claims and advocacy by the authors\nof MySQL, in their documentation, for instance. A significant percentage\nof MySQL users seem to take these statements as gospel and are offended when\nyou suggest, for instance, that table-level locking isn't such a hot idea\nfor a DB used to drive a popular website.\n\nAt least now when they all shout \"Slashdot's popular, and they use MySQL\" \nwe can answer, \"yeah, but the Slashdot folks are the ones who paid \nfor the integration of MySQL with the SleepyCat backend, and guess why?\"\nAnd the Slashdot folks have been openly talking about rewriting their\ncode to be more DB agnostic (I refuse to call MySQL an RDBMS) and about\nperhaps switching to Oracle in the future. Maybe tests like this and more\nuser advocacy will convince them to consider Postgres!\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 15 Aug 2000 06:02:35 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open Source Database Routs Competition in New\n Benchmark Tests" }, { "msg_contents": "> Some of the comments are interesting. One MySQL defender\n> claims that the bottle neck in the benchmarks Great Bridge\n> used is the ODBC drivers. 
It's possible that all the test\n> really shows is that MySQL has a poor ODBC driver.\n\nNope. If it were due to the ODBC driver, then MySQL and PostgreSQL would\nnot have had comparable performance in the 1-2 user case. The ODBC\ndriver is a per-client interface so would have no role as the number of\nusers goes up.\n\nThe Postgres core group were as suprised as anyone with the test\nresults. There was no effort to \"cook the books\" on the testing: afaik\nGB did the testing as part of *their* evaluation of whether PostgreSQL\nwould be a viable product for their company. I believe that the tests\nwere all run on the same system, and, especially given the results, they\ndid go through and verify that the settings for each DB were reasonable.\n\nThe AS3AP test is a *read only test*, which should have been MySQL's\nbread and butter according to their marketing literature. The shape of\nthat curve shows that MySQL started wheezing at about 4 users, and\ntailed off rapidly after that point. The other guys barely made it out\nof the starting gate :)\n\nThe thing that was the most fun about this (the PostgreSQL steering\ncommittee got a sneak preview of the results a couple of months ago) was\nthat we have never made an effort to benchmark Postgres against other\ndatabases, so we had no quantitative measurement on how we were doing.\nAnd we are doing pretty good!\n\n - Thomas\n", "msg_date": "Tue, 15 Aug 2000 13:59:00 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open Source Database Routs Competition in New Benchmark\n Tests" }, { "msg_contents": "On Tue, 15 Aug 2000, Thomas Lockhart wrote:\n\n> The AS3AP test is a *read only test*, which should have been MySQL's\n> bread and butter according to their marketing literature. The shape of\n> that curve shows that MySQL started wheezing at about 4 users, and\n> tailed off rapidly after that point. The other guys barely made it out\n> of the starting gate :)\n\nAh, cool, that answers one of my previous questions ... and scary that we\nbeat \"the best database for read only apps\" *grin*\n\n\n", "msg_date": "Tue, 15 Aug 2000 11:08:48 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open Source Database Routs Competition in New Benchmark\n Tests" }, { "msg_contents": "On Tue, 15 Aug 2000, Don Baccus wrote:\n\n> It's no secret that MySQL falls apart under load when there are\n> inserts and updates in the mix. They do table-level locking. If you\n> read various threads about \"hints and tricks\" in MySQL-land concerning\n> performance in high-concurrency (i.e. web site) situations, there are\n> all sorts of suggestions about periodically caching copies of tables\n> for reading so readers don't get blocked, etc.\n\nHere's one you might like. I am aware of a site (not one I\nrun, and I shouldn't give its name) which has a share feed\n(or several). This means that, every 15 minutes, they have\nto get a bunch of rows into a few tables in a real hurry.\n\nMySQL's table level locking causes them such trouble that\nthey run two instances. No big surprises there, but here's\nthe fun bit: they both point at the same datafiles.\n\nTheir web code accesses a mysqld which was started with\ntheir --readonly and --no-locking flags, so that it never\nwrites to the datafiles. 
And the share feed goes through\na separate, writable database.\n\nEvery now and then a query fails with an error like \"Eek!\nThe table changed under us.\" so they modified (or wrapped -\nI'm not sure) the DBI driver to retry a couple of times under\nsuch circumstances.\n\nThe result: it works. An actually quite well (ie. a lot\nbetter than before). I believe (hope!) that they are using\nthe breathing space to investigate alternative solutions.\n\nMatthew.\n\n", "msg_date": "Tue, 15 Aug 2000 16:28:08 +0100 (BST)", "msg_from": "Matthew Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open Source Database Routs Competition in New Benchmark\n Tests" } ]
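The locking point made above can be shown with a small two-session sketch; the table and values are hypothetical, and the behavior relied on is PostgreSQL's MVCC rule that a plain reader is not blocked by an open writing transaction, which is what the table-caching workarounds mentioned above try to imitate:

    -- session 1: writer holds an open transaction
    BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    -- not yet committed

    -- session 2: a concurrent reader is not blocked; it sees the last
    -- committed version of the row
    SELECT balance FROM accounts WHERE id = 1;

    -- session 1:
    COMMIT;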
[ { "msg_contents": "\nTom wrote:\n\n[snip]\n\n> I've noticed that Jan and a couple of other people have put copyright\n> notices in their own names on files that they've created from scratch,\n> but I feel uncomfortable with that practice. The Ghostscript/readline\n> fiasco illustrates the potential problems you can get into with\n> divergent copyrights on chunks of code that need to be distributed\n> together. My personal feeling is that if you're a member of the team,\n> stick the team copyright on it; don't open a can of legal worms.\n\nI agree with you. I'm not sure if there are anything like this in the\njdbc or contrib/lo stuff saying my copyright, but if there is, feel free\nto change it to the team's copyright - I tend to run in autopilot when\nit comes to adding copyrights.\n\nSaying that, I'm doubtful that all of the JDBC source has a copyright\nnotice of any kind in there. I'll trawl my copy of the source to see if\nthis is the case.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n", "msg_date": "Wed, 20 Oct 1999 08:08:15 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Readline use in trouble? " }, { "msg_contents": "Peter Mount wrote:\n\n> Tom wrote:\n>\n> [snip]\n>\n> > I've noticed that Jan and a couple of other people have put copyright\n> > notices in their own names on files that they've created from scratch,\n> > but I feel uncomfortable with that practice. The Ghostscript/readline\n> > fiasco illustrates the potential problems you can get into with\n> > divergent copyrights on chunks of code that need to be distributed\n> > together. My personal feeling is that if you're a member of the team,\n> > stick the team copyright on it; don't open a can of legal worms.\n>\n> I agree with you. I'm not sure if there are anything like this in the\n> jdbc or contrib/lo stuff saying my copyright, but if there is, feel free\n> to change it to the team's copyright - I tend to run in autopilot when\n> it comes to adding copyrights.\n>\n> Saying that, I'm doubtful that all of the JDBC source has a copyright\n> notice of any kind in there. I'll trawl my copy of the source to see if\n> this is the case.\n\n Agreed. I consider anything in the CVS actually owned by the\n team. I don't have the time, so if someone else would like\n to, change all my copyright notices to the team.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 20 Oct 1999 11:56:01 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" } ]
[ { "msg_contents": "\nBruce wrote:\n\n[snip]\n\n> > Although I haven't been paying close attention to the Ghostscript\n> > situation, I suspect that the real story is either that the readline\n> > interface code that someone contributed to Ghostscript was\ncontributed\n> > with GPL terms already attached to it, or that Aladdin is concerned\n> \n> Oh, that is an interesting issue that I never considered. Reminds us\nwe\n> can't use GPL code.\n\nThat was why when I merged Adrian's and my JDBC drivers for inclusion, I\nused Adrians as the core, and re-wrote the additions I made to mine to\nit, as mine was GPL'ed, and his wasn't.\n\nJust a thought: How does this affect anything placed in the contrib\ndirectory? If someone writes a tool under the GPL, can it be included\nunder the src/contrib directory, or would we fall foul just because it's\nincluded with our source?\n\nI don't think we have a problem with the CD distribution, as they are\nclearly separate, but contrib is not that clear cut.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n", "msg_date": "Wed, 20 Oct 1999 08:12:23 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Readline use in trouble?" }, { "msg_contents": "Peter Mount <[email protected]> writes:\n> Just a thought: How does this affect anything placed in the contrib\n> directory? If someone writes a tool under the GPL, can it be included\n> under the src/contrib directory, or would we fall foul just because it's\n> included with our source?\n\nGood question. The GPL contains a clause to the effect that \"mere\naggregation\" of a GPL'd piece of code in a source distribution with\nunrelated pieces of code is OK, even if those other pieces of code\nare not GPL'd. But the contrib directory is not exactly unrelated\nto the main Postgres distribution, so I'm not sure that we can point\nto this clause to justify putting a GPL'd program in contrib. It'd\nbe a gray area...\n\nI'd be inclined to say \"if you want to put your tool under GPL, fine,\nbut then distributing it is up to you\". We don't need to be taking\nany legal risks on this point. A safe policy is that everything\ndistributed by the Postgres group has to carry the same BSD license.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Oct 1999 10:34:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble? " }, { "msg_contents": "On Wed, 20 Oct 1999, Tom Lane wrote:\n\n> Peter Mount <[email protected]> writes:\n> > Just a thought: How does this affect anything placed in the contrib\n> > directory? If someone writes a tool under the GPL, can it be included\n> > under the src/contrib directory, or would we fall foul just because it's\n> > included with our source?\n> \n> Good question. The GPL contains a clause to the effect that \"mere\n> aggregation\" of a GPL'd piece of code in a source distribution with\n> unrelated pieces of code is OK, even if those other pieces of code\n> are not GPL'd. But the contrib directory is not exactly unrelated\n> to the main Postgres distribution, so I'm not sure that we can point\n> to this clause to justify putting a GPL'd program in contrib. It'd\n> be a gray area...\n\nItems in the contrib section aren't required for the use of PostgreSQL,\nhowever PostgreSQL *is* required to use those items. 
So shouldn't the\nitems in contrib have to change to a Berkeley style license? :)\n\nI mean it's only fair!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 20 Oct 1999 10:52:14 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble? " }, { "msg_contents": " Good question. The GPL contains a clause to the effect that \"mere\n aggregation\" of a GPL'd piece of code in a source distribution with\n unrelated pieces of code is OK, even if those other pieces of code\n are not GPL'd. But the contrib directory is not exactly unrelated\n to the main Postgres distribution, so I'm not sure that we can point\n to this clause to justify putting a GPL'd program in contrib. It'd\n be a gray area...\n\nThe problem only comes if I, for example, want to distribute all of\npostgresql (contrib included) in a non-source (i.e., proprietary) way.\nThat is fine if contrib includes no GPL code; if it does, I need to\ndistribute the code for that portion only. Thus, if we want to\nmaintain as broad a potential as possible (including non-source\ndistributions) we need to encourage adoption of the BSD license for\nall source.\n\nTo make it easier for those distributing postgresql to keep track of\nthis stuff, perhaps we need a gnu (or gpl) directory (like or under\ncontrib) in which would go GPL code. Then it would be crystal clear\nwhich portion of the code has which restrictions. It would also be\nclear that this is an aggregation. This is the mechanism used by\nNetBSD for their code tree, which does include some gnu software.\n\nStill, encouraging non-GPL contrib stuff is a good thing in order to\nmaintain future options, because GPL contrib code _cannot_ be added to\nthe main tree without affecting the distribution of the entire thing.\n\nCheers,\nBrook\n", "msg_date": "Wed, 20 Oct 1999 09:37:49 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": "> Good question. The GPL contains a clause to the effect that \"mere\n> aggregation\" of a GPL'd piece of code in a source distribution with\n> unrelated pieces of code is OK, even if those other pieces of code\n> are not GPL'd. But the contrib directory is not exactly unrelated\n> to the main Postgres distribution, so I'm not sure that we can point\n> to this clause to justify putting a GPL'd program in contrib. It'd\n> be a gray area...\n> \n> The problem only comes if I, for example, want to distribute all of\n> postgresql (contrib included) in a non-source (i.e., proprietary) way.\n> That is fine if contrib includes no GPL code; if it does, I need to\n> distribute the code for that portion only. Thus, if we want to\n> maintain as broad a potential as possible (including non-source\n> distributions) we need to encourage adoption of the BSD license for\n> all source.\n\nBut Alladin Ghostscript is distributed in source form. 
This GPL legal\nstuff is a terrible hassle.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 20 Oct 1999 11:42:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> That is fine if contrib includes no GPL code; if it does, I need to\n>> distribute the code for that portion only. Thus, if we want to\n>> maintain as broad a potential as possible (including non-source\n>> distributions) we need to encourage adoption of the BSD license for\n>> all source.\n\n> But Alladin Ghostscript is distributed in source form.\n\nBut not *only* in source form. Aladdin make their living by selling\nGhostscript to printer manufacturers and so forth. The printer makers\nare not about to ship out printers with copies of source code, nor\neven with notices explaining where to get the printer source code.\nIf they obtained Ghostscript under GPL then they'd have to make not\nonly the PS interpreter source available, but probably the entire\nfirmware for the printer (it's a derived work, no?) and they are\ncertainly not about to do that. So they pay Aladdin for the rights\nto use Ghostscript with a commercial license instead of GPL.\n\nIn the same way, if we distributed Postgres under GPL, it would not be\npossible to sell proprietary systems that use Postgres as a component.\nThat is, in fact, exactly what the GPL is designed to prevent. But it\ndoesn't strike me as something we want for Postgres. We'd be cutting\noff too much of the potential \"market\" of Postgres users. (Not only\nwould we lose companies who had an immediate interest in selling\nDBMS-based code, but also those who had any thought of possibly doing\nso in the future; that could be a lot of people.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Oct 1999 11:57:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble? " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> That is fine if contrib includes no GPL code; if it does, I need to\n> >> distribute the code for that portion only. Thus, if we want to\n> >> maintain as broad a potential as possible (including non-source\n> >> distributions) we need to encourage adoption of the BSD license for\n> >> all source.\n> \n> > But Alladin Ghostscript is distributed in source form.\n> \n> But not *only* in source form. Aladdin make their living by selling\n> Ghostscript to printer manufacturers and so forth. The printer makers\n> are not about to ship out printers with copies of source code, nor\n> even with notices explaining where to get the printer source code.\n> If they obtained Ghostscript under GPL then they'd have to make not\n> only the PS interpreter source available, but probably the entire\n> firmware for the printer (it's a derived work, no?) and they are\n> certainly not about to do that. 
So they pay Aladdin for the rights\n> to use Ghostscript with a commercial license instead of GPL.\n\nOh, I didn't realize they had a binary-only distribution that was\n_different_ from the source distribution.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 20 Oct 1999 12:09:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?]" }, { "msg_contents": " Oh, I didn't realize they had a binary-only distribution that was\n _different_ from the source distribution.\n\n DIFFERENT is not relevant. I could today ship a binary version of\n postgresql in its present form as a proprietary product with no source\n code. No license problems arise from doing so. Allowing GPL code\n into the base system causes the problem. 
\n\nDoes GPL require the source to be included, or just available for free?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 20 Oct 1999 13:41:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?]" }, { "msg_contents": " Does GPL require the source to be included, or just available for free?\n\nPretty sure just a pointer to location is good enough. The catch is\nthat it has to be the exact version shipped and there is a time limit\n(3 years?) for availability, so pointing to the gnu ftp site probably\ndoesn't work.\n\nCheers,\nBrook\n", "msg_date": "Wed, 20 Oct 1999 11:58:50 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?]" }, { "msg_contents": "On Wed, 20 Oct 1999, Bruce Momjian wrote:\n> Does GPL require the source to be included, or just available for free?\n\n Require sources to be available. It is enough to distribute a binary and\nprovide a pointer to sources. (There are some obscured words that the\npointer should be available for general public...)\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Thu, 21 Oct 1999 07:33:12 +0000 (GMT)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?]" }, { "msg_contents": "Lamar Owen wrote:\n> \n> Vince Vielhaber wrote:\n> > Items in the contrib section aren't required for the use of PostgreSQL,\n> > however PostgreSQL *is* required to use those items. So shouldn't the\n> > items in contrib have to change to a Berkeley style license? :)\n> >\n> > I mean it's only fair!\n> \n> I know of at least two items in contrib that are required to run the\n> regression tests -- which, arguably, make PostgreSQL require those two\n> components (autoinc and refint).\n\nWell, I made them... so you know what copyright is... -:)\n\nVadim\n", "msg_date": "Fri, 22 Oct 1999 12:59:54 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" } ]
[ { "msg_contents": "unsubscribe\n", "msg_date": "Wed, 20 Oct 1999 09:36:55 +0200 (CEST)", "msg_from": "Gerd Thielemann AEK <[email protected]>", "msg_from_op": true, "msg_subject": "" } ]
[ { "msg_contents": "What's the status of patch which enables using indices in \nORDER BY .../ desc ? I use it with 6.5.2 without any problem.\nWill it come to upcoming 6.5.3 ? \nIt seems that it's already applied to current.\n\n\n\tRegards,\n\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 20 Oct 1999 13:11:00 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "enabling using index in ORDER BY ... desc" } ]
[ { "msg_contents": "Is there anyone knows how to hack 'Chris Jones' Encryption algorithmn or,\nwhether there are softwares for hacking the key/algorithmn of 'Chris Jones\nEncryption'?\n\nThanks.\n\nPeter\n\n\n\n\n", "msg_date": "Wed, 20 Oct 1999 17:27:43 +0800", "msg_from": "\"HSBC\" <@hkstar.com>", "msg_from_op": true, "msg_subject": "How to hack Chris Jones Encryption Algorithmn" } ]
[ { "msg_contents": "> I have downloaded and installed your packaged RPM for the Pg.pm module for\n> PostgreSQL 6.5.1 on RedHat 6.0 i386 linux and cannot be found by perl.\n> Is my problem unique? When I run a perl script with 'use Pg;' in it I get:\n> Can't locate Pg.pm in @INC (@INC contains: /usr/lib/perl5/5.00503/i386-linux /usr/lib/perl5/5.00503 /usr/lib/perl5/site_perl/5.005/i386-linux /usr/lib/perl5/site_perl/5.005 .) at ../src/Perl/pg.pl line 1.\n> BEGIN failed--compilation aborted at ../src/Perl/pg.pl line 1.\n> The Pg.pm has been installed into:\n> /usr/lib/perl5/site_perl\n\nI haven't had the chance to use much perl in the last year or two, but\nit may be that the default @INC does not include the directories we\nused for the installation. Also, the RPMs were generated on a RH5.2\nsystem, which may have a slightly different configuration.\n\nOn my RH5.2 system, I have the following INC paths:\n\n> perl -e 'foreach $i (@INC) { printf \"$i\\n\" }'\n/usr/lib/perl5/i386-linux/5.00405\n/usr/lib/perl5\n/usr/lib/perl5/site_perl/i386-linux\n/usr/lib/perl5/site_perl\n.\n\nAnd the RPM puts files in:\n\n[root@golem i386]# rpm -qlp postgresql-perl-6.5.1-2.i386.rpm\n/usr/lib/perl5/man/man3/Pg.3\n/usr/lib/perl5/site_perl/Pg.pm\n/usr/lib/perl5/site_perl/auto/Pg\n/usr/lib/perl5/site_perl/auto/Pg/autosplit.ix\n/usr/lib/perl5/site_perl/i386-linux/auto\n/usr/lib/perl5/site_perl/i386-linux/auto/Pg\n/usr/lib/perl5/site_perl/i386-linux/auto/Pg/Pg.bs\n/usr/lib/perl5/site_perl/i386-linux/auto/Pg/Pg.so\n\nwhich seems to be consistant. One thing you can try is to add\n/usr/lib/perl5/site_perl to INC at the top of your program and see if\nthat helps. Try:\n\n unshift(@INC, \"/usr/lib/perl5/site_perl\");\n ...\n\nOne of the problems with generating RPMs is that unless you have more\nthan one machine you can't *really* test how they behave, but need\nothers to try them out. \n\nbtw, it would probably be better to focus on new RPMs posted on\nftp://postgresql.org/pub/RPMS/ (and elsewhere), generated by Lamar\nOwens who has taken over the RPM development and maintenance. These\ninclude RPMs generated specifically for RH6.0, but Lamar had just put\nout a request for someone to test the RH6.1-generated RPMs on a RH6.0\nmachine to see if they are compatible. If you have time, I'm sure he\nwould appreciate trying that out.\n\nGood luck.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 20 Oct 1999 13:10:03 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL Perl Module" }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> > I have downloaded and installed your packaged RPM for the Pg.pm module for\n> > PostgreSQL 6.5.1 on RedHat 6.0 i386 linux and cannot be found by perl.\n> > Is my problem unique? When I run a perl script with 'use Pg;' in it I get:\n> > Can't locate Pg.pm in @INC (@INC contains: /usr/lib/perl5/5.00503/i386-linux /usr/lib/perl5/5.00503 /usr/lib/perl5/site_perl/5.005/i386-linux /usr/lib/perl5/site_perl/5.005 .) at ../src/Perl/pg.pl line 1.\n> > BEGIN failed--compilation aborted at ../src/Perl/pg.pl line 1.\n> > The Pg.pm has been installed into:\n> > /usr/lib/perl5/site_perl\n> \n> I haven't had the chance to use much perl in the last year or two, but\n> it may be that the default @INC does not include the directories we\n> used for the installation. 
Also, the RPMs were generated on a RH5.2\n> system, which may have a slightly different configuration.\n[snip]\n> which seems to be consistant. One thing you can try is to add\n> /usr/lib/perl5/site_perl to INC at the top of your program and see if\n> that helps. Try:\n\nWon't help. RedHat 5.2's perl is much older than RedHat 6.0's, and\nthere are other major incompatibilities for compiled modules. Not to\nmention the directory differences in the install. I recommend that\nanyone who wants to use the RPM's with non-standard perl versions\nrebuild from the source RPM, since the perl client is highly sensitive\nto versioning problems.\n \n> include RPMs generated specifically for RH6.0, but Lamar had just put\n> out a request for someone to test the RH6.1-generated RPMs on a RH6.0\n> machine to see if they are compatible. If you have time, I'm sure he\n> would appreciate trying that out.\n\nYes, that would be NICE. The perl version is the same for 6.1 as for\n6.0.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 20 Oct 1999 15:21:06 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Perl Module" } ]
[ { "msg_contents": "AFAICT, the last remaining textual length limits in the backend are a\ncouple of fixed-size buffers in rewriteDefine.c and array support.\nIn the next few days (probably this weekend) I intend to fix that code\nand then remove all #define constants that have anything to do with\nstring length limits. (Tentative hit list is attached.)\n\nAlthough the backend is in fairly good shape, there are parts of\ninterfaces/ and bin/ that are not. In particular:\n\npg_dump has a lot of uses of MAX_QUERY_SIZE. I believe Michael Ansley\nis working on revising pg_dump to use expansible string buffers, so for\nnow I will leave that code alone (I'll just temporarily stick a\ndefinition of MAX_QUERY_SIZE into pg_dump.h).\n\necpg's lexer needs the same fixes recently applied to parser/scan.l to\neliminate a fixed-size literal buffer and allow flex's input buffer to\ngrow as needed. Michael Meskes usually handles ecpg --- Michael, do\nyou want to deal with this issue or shall I take a cut at it?\n\nThe odbc, python, and cli interfaces will all break because they contain\nreferences to symbols I propose to remove. Since I don't use any of\nthese, and they aren't built by default, I can face this prospect\nwithout flinching ;-). This is a call for whoever maintains these\nmodules to get busy.\n\nODBC will probably need some actual thought, since what it seems to be\ndoing with these symbols is making their values available to client\nprograms on request. Does ODBC's API for this function have a concept\nof \"no specific upper limit\"?\n\n\t\t\tregards, tom lane\n\nSay goodnight to:\n\nMAX_PARSE_BUFFER\nMAX_QUERY_SIZE\nERROR_MSG_LENGTH\nMAX_LINES\nMAX_STRING_LENGTH\nREMARK_LENGTH\nELOG_MAXLEN\nMAX_BUFF_SIZE\nPQ_BUFFER_SIZE\nMAXPGPATH\nMAX_STRING_LENGTH\nYY_BUF_SIZE\t\tremove hacking in makefiles\nYY_READ_BUF_SIZE\tno need to alter default\nMaxHeapTupleSize\t(not used)\nMaxAttributeSize\t(not used)\nMaxAttrSize\t\t(where used for buffer sizing)\n\nSuspicious-looking symbols found only in various interfaces/ dirs;\nI don't plan to remove these but someone should:\n\nSQL_PACKET_SIZE\t\todbc\nMAX_MESSAGE_LEN\t\tinterfaces/cli, odbc\nMAX_STATEMENT_LEN\todbc\nTEXT_FIELD_SIZE\t\todbc\nMAX_VARCHAR_SIZE\todbc\nDRV_VARCHAR_SIZE\todbc\nDRV_LONGVARCHAR_SIZE\todbc\nMAX_CONNECT_STRING\todbc\nMAX_BUFFER_SIZE\t\tinterfaces/python\nMAX_FIELDS\t\todbc (does this relate to anything real?)\n\n", "msg_date": "Wed, 20 Oct 1999 11:38:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Planning final assault on query length limits" }, { "msg_contents": " AFAICT, the last remaining textual length limits in the backend are a\n couple of fixed-size buffers in rewriteDefine.c and array support.\n In the next few days (probably this weekend) I intend to fix that code\n and then remove all #define constants that have anything to do with\n string length limits. 
(Tentative hit list is attached.)\n\nGreat!\n\nJan, does this mean that we can also lose the \"rewrite string too big\"\nproblem with rules?\n\nThat would be a huge win.\n\nCheers,\nBrook\n", "msg_date": "Wed, 20 Oct 1999 11:18:08 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Planning final assault on query length limits" }, { "msg_contents": "> AFAICT, the last remaining textual length limits in the backend are a\n> couple of fixed-size buffers in rewriteDefine.c and array support.\n> In the next few days (probably this weekend) I intend to fix that code\n> and then remove all #define constants that have anything to do with\n> string length limits. (Tentative hit list is attached.)\n> \n> Great!\n> \n> Jan, does this mean that we can also lose the \"rewrite string too big\"\n> problem with rules?\n> \n> That would be a huge win.\n\n\nNo. We have to have long tuples.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 20 Oct 1999 13:41:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Planning final assault on query length limits" }, { "msg_contents": " > Jan, does this mean that we can also lose the \"rewrite string too big\"\n > problem with rules?\n\n No. We have to have long tuples.\n\nDarn. Oh well, I guess this is a major step in that direction.\n\nCheers,\nBrook\n", "msg_date": "Wed, 20 Oct 1999 11:59:58 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Planning final assault on query length limits" }, { "msg_contents": "Brook Milligan <[email protected]> writes:\n>> Jan, does this mean that we can also lose the \"rewrite string too big\"\n>> problem with rules?\n\n> No. We have to have long tuples.\n\n> Darn. Oh well, I guess this is a major step in that direction.\n\nI'm hoping that once this is done, someone who knows the guts of the\nstorage managers better than I will feel motivated to work on letting\nstored tuples cross block boundaries. (Paging Vadim...) That seems\nto be the last piece of the puzzle.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Oct 1999 14:47:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Planning final assault on query length limits " }, { "msg_contents": "On Wed, Oct 20, 1999 at 11:38:00AM -0400, Tom Lane wrote:\n> ecpg's lexer needs the same fixes recently applied to parser/scan.l to\n> eliminate a fixed-size literal buffer and allow flex's input buffer to\n\nI see.\n\n> grow as needed. Michael Meskes usually handles ecpg --- Michael, do\n> you want to deal with this issue or shall I take a cut at it?\n\nI'm currently very busy, so I guess it will take some some until I can\ntackle this problem. So if you have some spare time left I have no problem\nif you make these changes. Otherwise I will once I find time.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 
61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Thu, 21 Oct 1999 15:16:57 +0200", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning final assault on query length limits" }, { "msg_contents": "Tom Lane wrote:\n> \n> Brook Milligan <[email protected]> writes:\n> >> Jan, does this mean that we can also lose the \"rewrite string too big\"\n> >> problem with rules?\n> \n> > No. We have to have long tuples.\n> \n> > Darn. Oh well, I guess this is a major step in that direction.\n> \n> I'm hoping that once this is done, someone who knows the guts of the\n> storage managers better than I will feel motivated to work on letting\n> stored tuples cross block boundaries. (Paging Vadim...) That seems\n> to be the last piece of the puzzle.\n\nYou know that I'm busy with WAL...\nAnd I already made some step in big tuples dirrection\nwhen made memory/disk tuple presentations different -:)\n\ntypedef struct HeapTupleData\n{\n uint32 t_len; /* length of *t_data */\n ItemPointerData t_self; /* SelfItemPointer */\n HeapTupleHeader t_data; /* */\n ^^^^^^^^^^^^^^^^^^^^^^\n On-disk data\n\n} HeapTupleData;\n\nI hope that something could be added here for tuple chunks...\nTupleTableSlot.ttc_buffer (and ttc_shouldFree?) is good candidate\nto be moved here from TupleTableSlot. \n\nAs for smgr part - it's not hard at all.\n\nVadim\n", "msg_date": "Fri, 22 Oct 1999 10:17:21 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Planning final assault on query length limits" }, { "msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Tom Lane wrote:\n>> I'm hoping that once this is done, someone who knows the guts of the\n>> storage managers better than I will feel motivated to work on letting\n>> stored tuples cross block boundaries. (Paging Vadim...) That seems\n>> to be the last piece of the puzzle.\n\n> You know that I'm busy with WAL...\n\nWell, Vadim doesn't want to do it, and I don't want to do it (I've\nreally got to spend some more time on the planner/optimizer, because\nthere's too much stuff half-fixed in there).\n\nAny volunteers out there? It'd be a shame to not have this problem\narea completely licked for 7.0.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Oct 1999 23:42:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Planning final assault on query length limits " }, { "msg_contents": "> Vadim Mikheev <[email protected]> writes:\n> > Tom Lane wrote:\n> >> I'm hoping that once this is done, someone who knows the guts of the\n> >> storage managers better than I will feel motivated to work on letting\n> >> stored tuples cross block boundaries. (Paging Vadim...) That seems\n> >> to be the last piece of the puzzle.\n> \n> > You know that I'm busy with WAL...\n> \n> Well, Vadim doesn't want to do it, and I don't want to do it (I've\n> really got to spend some more time on the planner/optimizer, because\n> there's too much stuff half-fixed in there).\n> \n> Any volunteers out there? It'd be a shame to not have this problem\n> area completely licked for 7.0.\n\nWelcome to the small club, Tom. For the first 2 & 1/2 years, the only\nperson who could tackle those big jobs was Vadim. Now you are in the\nclub too.\n\nThe problem is that there are no more. 
I can't imagine anyone is going\nto be able to jump out of the woodwork and take on a job like that. We\nwill just have to do the best job we can, and maybe save something for\n7.1.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Oct 1999 00:12:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Planning final assault on query length limits" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > > You know that I'm busy with WAL...\n> >\n> > Well, Vadim doesn't want to do it, and I don't want to do it (I've\n> > really got to spend some more time on the planner/optimizer, because\n> > there's too much stuff half-fixed in there).\n> >\n> > Any volunteers out there? It'd be a shame to not have this problem\n> > area completely licked for 7.0.\n> \n> Welcome to the small club, Tom. For the first 2 & 1/2 years, the only\n> person who could tackle those big jobs was Vadim. Now you are in the\n> club too.\n> \n> The problem is that there are no more. I can't imagine anyone is going\n> to be able to jump out of the woodwork and take on a job like that. We\n> will just have to do the best job we can, and maybe save something for\n> 7.1.\n\nThere is Jan!...\nBut he's busy too -:)\n\nLet's wait for 7.0 beta - \"big tuples\" seems as work for 2 weeks...\n\nVadim\n", "msg_date": "Fri, 22 Oct 1999 12:54:28 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Planning final assault on query length limits" }, { "msg_contents": "> > The problem is that there are no more. I can't imagine anyone is going\n> > to be able to jump out of the woodwork and take on a job like that. We\n> > will just have to do the best job we can, and maybe save something for\n> > 7.1.\n> \n> There is Jan!...\n> But he's busy too -:)\n> \n> Let's wait for 7.0 beta - \"big tuples\" seems as work for 2 weeks...\n\nOops, forgot about him. Sorry Jan.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Oct 1999 01:22:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Planning final assault on query length limits" }, { "msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Bruce Momjian wrote:\n>>>> Any volunteers out there? It'd be a shame to not have this problem\n>>>> area completely licked for 7.0.\n>> \n>> Welcome to the small club, Tom. For the first 2 & 1/2 years, the only\n>> person who could tackle those big jobs was Vadim. Now you are in the\n>> club too.\n>> \n>> The problem is that there are no more. I can't imagine anyone is going\n>> to be able to jump out of the woodwork and take on a job like that. We\n>> will just have to do the best job we can, and maybe save something for\n>> 7.1.\n\n> There is Jan!...\n> But he's busy too -:)\n\n> Let's wait for 7.0 beta - \"big tuples\" seems as work for 2 weeks...\n\nThing is, if Vadim could do it in two weeks (sounds about right), then\nmaybe I could do it in three or four (I'd have to spend time studying\nparts of the backend that Vadim already knows, but I don't). 
It seems\nto me that some aspiring hacker who's already a little bit familiar\nwith backend coding could do it in a month or two, with suitable study,\nand would in the process make great strides towards gurudom. This is\na fairly localized task, if I'm not greatly mistaken about it. And\nthere's plenty of time left before 7.0. So this seems like a perfect\nproject for someone who wants to learn more about the backend and has\nsome time to spend doing so.\n\nA year ago I didn't know a darn thing about the backend, so I'm a bit\nbemused to find myself being called a member of \"the small club\".\nProgramming skills don't come out of nowhere, they come out of study\nand practice. (See http://www.tuxedo.org/~esr/faqs/loginataka.html)\n\nIn short, I'd like to see the club get bigger...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Oct 1999 02:04:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Planning final assault on query length limits " }, { "msg_contents": "Tom Lane wrote:\n> \n> > Let's wait for 7.0 beta - \"big tuples\" seems as work for 2 weeks...\n> \n> Thing is, if Vadim could do it in two weeks (sounds about right), then\n> maybe I could do it in three or four (I'd have to spend time studying\n> parts of the backend that Vadim already knows, but I don't). It seems\n> to me that some aspiring hacker who's already a little bit familiar\n> with backend coding could do it in a month or two, with suitable study,\n> and would in the process make great strides towards gurudom. This is\n> a fairly localized task, if I'm not greatly mistaken about it. And\n ^^^^^^^^^^^^^^\nI'm not sure. Seems that we'll have to change heap_getattr:\nif a column crosses page boundary then we'll have to re-construct\nit in memory and pfree it after using...\n\n> there's plenty of time left before 7.0. So this seems like a perfect\n> project for someone who wants to learn more about the backend and has\n> some time to spend doing so.\n\nAnd we always ready to help...\n\n> A year ago I didn't know a darn thing about the backend, so I'm a bit\n> bemused to find myself being called a member of \"the small club\".\n> Programming skills don't come out of nowhere, they come out of study\n> and practice. (See http://www.tuxedo.org/~esr/faqs/loginataka.html)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n-:)))\n\nVadim\n", "msg_date": "Fri, 22 Oct 1999 14:45:50 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Planning final assault on query length limits" }, { "msg_contents": "Vadim Mikheev <[email protected]> writes:\n>> a fairly localized task, if I'm not greatly mistaken about it. And\n> ^^^^^^^^^^^^^^\n> I'm not sure. Seems that we'll have to change heap_getattr:\n> if a column crosses page boundary then we'll have to re-construct\n> it in memory and pfree it after using...\n\nI was thinking more along the lines of reconstructing the whole tuple\nin palloc'd memory and leaving heap_getattr as-is. Otherwise we have\nproblems with the system relations whose tuples are accessed as C\nstructs. You'd need to somehow guarantee that those tuples are never\nsplit if you do it as above.\n\nOf course, that just moves the palloc/pfree bookkeeping problem down\na level; it's still going to be tricky to avoid storage leaks.\nWe might be able to get some win from storing reassembled tuples in\nTupleTableSlots, though.\n\n>> there's plenty of time left before 7.0. 
So this seems like a perfect\n>> project for someone who wants to learn more about the backend and has\n>> some time to spend doing so.\n\n> And we always ready to help...\n\nRight. Questions can be answered.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Oct 1999 11:15:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Planning final assault on query length limits " } ]
[ { "msg_contents": "OK, the book in on our web site now in HTML and PDF formats. It will be\nupdated automatically very night.\n\nGo to:\n\n\thttp://www.postgresql.org/docs\n\nUnder documentation, you will see the entry \"Published Book\".\n\n>From our main web site, it is under Info Central/Documentation.\n\nComments welcomed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 20 Oct 1999 13:57:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Book on web site" } ]
[ { "msg_contents": "Should we be checking it for any semantic errors?\nIf we find one, do we win a prize? ;-) On node24, \n\"Creating Tables\" you have:\n\nThe words CREATE TABLE have special meaning to the \ndatabase server. They indicate that the next request \nfrom the user is to create a database.\n ^^^^^^^^\nMike Mascari\n([email protected])\n\n\n--- Bruce Momjian <[email protected]> wrote:\n> OK, the book in on our web site now in HTML and PDF\n> formats. It will be\n> updated automatically very night.\n> \n> Go to:\n> \n> \thttp://www.postgresql.org/docs\n> \n> Under documentation, you will see the entry\n> \"Published Book\".\n> \n> From our main web site, it is under Info\n> Central/Documentation.\n> \n> Comments welcomed.\n> \n> -- \n> Bruce Momjian | \n> http://www.op.net/~candle\n> [email protected] | (610)\n> 853-3000\n> + If your life is a hard drive, | 830 Blythe\n> Avenue\n> + Christ can be your backup. | Drexel\n> Hill, Pennsylvania 19026\n\n\n=====\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n", "msg_date": "Wed, 20 Oct 1999 11:23:35 -0700 (PDT)", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Book on web site" }, { "msg_contents": "> Should we be checking it for any semantic errors?\n> If we find one, do we win a prize? ;-) On node24, \n> \"Creating Tables\" you have:\n> \n> The words CREATE TABLE have special meaning to the \n> database server. They indicate that the next request \n> from the user is to create a database.\n> ^^^^^^^^\n\nYou get your money back on your PostgreSQL purchase. :-)\n\nThanks. Fix will appear tomorrow.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 20 Oct 1999 14:25:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Book on web site" } ]
[ { "msg_contents": "Hello all,\n\nI was looking at the translate function and I think that it does not\nbehave quite right. I modified the translate function in\noracle_compat.c (included below) to make work more like its Oracle\ncounterpart. It seems to work but it returns the following message:\n\tNOTICE: PortalHeapMemoryFree: 0x8241fcc not in alloc set!\n\nBelow are the Oracle and Postgres session transcripts. \n\nselect translate('edwin', 'wi', 'af') from dual;\nORACLE:\nTRANS\n-----\nedafn\n1 row selected.\n\nPOSTGRES\ntranslate\n---------\nedain \n(1 row)\n\nselect translate('edwin', 'wi', 'a') from dual;\nORACLE\nTRAN\n----\nedan\n1 row selected.\n\nPOSTGRES\ntranslate\n---------\nedain \n(1 row)\n\nselect length(translate('edwin', 'wi', 'a')) from dual;\nORACLE\nLENGTH(TRA\n----------\n 4\n1 row selected.\n\nPOSTGRES\nlength\n------\n 5\n(1 row)\n\n\n-----------------------NEW\nFUNCTION--------------------------------------\ntext *\ntranslate(text *string, text *from, text *to)\n{\n\ttext\t *ret;\n\tchar\t *ptr_ret, *from_ptr, *to_ptr, *source, *target, *temp, rep;\n\tint\t m, fromlen, tolen, retlen, i;\n\n\tif ((string == (text *) NULL) ||\n\t\t((m = VARSIZE(string) - VARHDRSZ) <= 0))\n\t\treturn string;\n\n\ttarget = (char *) palloc(VARSIZE(string) - VARHDRSZ);\n\tsource = VARDATA(string);\n\ttemp = target;\n\n\tfromlen = VARSIZE(from) - VARHDRSZ;\n\tfrom_ptr = VARDATA(from);\n\ttolen = VARSIZE(to) - VARHDRSZ;\n\tto_ptr = VARDATA(to);\n\tretlen = 0;\n\twhile (m--)\n\t{\n\t rep = *source;\n\t for(i=0;i<fromlen;i++) {\n\t if(from_ptr[i] == *source) {\n\t if(i < tolen) {\n\t\trep = to_ptr[i];\n\t } else {\n\t\trep = 0;\n\t }\n\t break;\n\t }\n\t }\n\t if(rep != 0) {\n\t *target++ = rep;\n\t retlen++;\n\t }\n\t source++;\n\t}\n\n\tret = (text *) palloc(retlen + VARHDRSZ);\n\tVARSIZE(ret) = retlen + VARHDRSZ;\n\tptr_ret = VARDATA(ret);\n\tfor(i=0;i<retlen;i++) {\n\t *ptr_ret++ = temp[i];\n\t}\n\tpfree(target);\n\treturn ret;\n}\n\n\nThanks,\nEdwin S. Ramirez\n", "msg_date": "Wed, 20 Oct 1999 14:26:56 -0400", "msg_from": "Edwin Ramirez <[email protected]>", "msg_from_op": true, "msg_subject": "translate function (BUG?)" }, { "msg_contents": "You're trying to pfree(target) after having altered the target\npointer inside the main loop of the function...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Oct 1999 21:15:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] translate function (BUG?) " } ]
[ { "msg_contents": "\nIs this the way distinct is supposed to work? My intent is to give\nonly one for each different value of x - like it does in the first\ndistinct example. But when order by is added for the date/time sort\nI get what you see in the second distinct example.\n\npop4=> select * from foo;\nx|y \n-+----------------------------\n1|Wed Oct 20 06:29:41 1999 EDT\n1|Wed Oct 20 06:29:42 1999 EDT\n1|Wed Oct 20 06:29:43 1999 EDT\n1|Wed Oct 20 06:29:48 1999 EDT\n(4 rows)\n\npop4=> select distinct x from foo;\nx\n-\n1\n(1 row)\n\npop4=> select distinct x from foo order by y;\nx\n-\n1\n1\n1\n1\n(4 rows)\n\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Wed, 20 Oct 1999 19:56:49 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "distinct. Is this the correct behaviour?" }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n> Is this the way distinct is supposed to work? My intent is to give\n> only one for each different value of x - like it does in the first\n> distinct example. But when order by is added for the date/time sort\n> I get what you see in the second distinct example.\n\nYeah, I think it's a bug too. It's not quite clear what to change,\nthough.\n\nThe \"problem\" is that nodeUnique is doing a bitwise compare across the\nwhole tuple, including the hidden ('junk') y column that is needed to do\nthe sorting. So, because you have four different y values, you get four\nrows out.\n\nHowever, if we fix nodeUnique to ignore junk columns, then the result\nbecomes nondeterministic. Consider\n\n\tx\ty\n\n\t1\t3\n\t1\t5\n\t2\t4\n\nIf we do \"select distinct x from foo order by y\" on this data, then the\norder of the result depends on which of the two tuples with x=1 happens\nto get chosen by the Unique filter. This is not good.\n\nSQL92 gets around this by allowing ORDER BY only on columns of the\ntargetlist, so that you are not allowed to specify this query in the\nfirst place.\n\nI think it is useful to allow ORDER BY on hidden columns, but maybe we\nneed to forbid it when DISTINCT is present. If we do that then the\nimplementation of nodeUnique is OK as it stands, and the bug is that\nthe parser accepts an invalid query.\n\nThis is pretty closely related to the semantic problems of DISTINCT ON,\nonce you see that the trouble is having columns in the query that aren't\nbeing used for (or aren't supposed to be used for) the DISTINCT check.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Oct 1999 21:10:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] distinct. Is this the correct behaviour? " }, { "msg_contents": "\nOn 21-Oct-99 Tom Lane wrote:\n> Vince Vielhaber <[email protected]> writes:\n>\n> If we do \"select distinct x from foo order by y\" on this data, then the\n> order of the result depends on which of the two tuples with x=1 happens\n> to get chosen by the Unique filter. This is not good.\n\nWhat seems logical to me tho is that it should first select all of the\nx cols that are equal, put them in y order, then pick the first one for\nthe distinct. 
At least this is the behaviour that I'm looking for; perhaps\nI'm going to need to make a more complex call. I also wonder how other\nRDBMS are handling it... I'll hafta see what sybase does (if I remember\n*and* get a chance).\n \n> SQL92 gets around this by allowing ORDER BY only on columns of the\n> targetlist, so that you are not allowed to specify this query in the\n> first place.\n\nI can understand the reason, yet also fail to understand.\n\n> I think it is useful to allow ORDER BY on hidden columns, but maybe we\n> need to forbid it when DISTINCT is present. If we do that then the\n> implementation of nodeUnique is OK as it stands, and the bug is that\n> the parser accepts an invalid query.\n> \n> This is pretty closely related to the semantic problems of DISTINCT ON,\n> once you see that the trouble is having columns in the query that aren't\n> being used for (or aren't supposed to be used for) the DISTINCT check.\n\nOk, well what I'm trying to do is write a web-based discussion forum. I\nwanted to list the subjects in any particular forum, but also want them\nto be in the order in which they were first posted. So if I have 10 \ncomments on one subject which first started last month and the subject\nbegun with a 'z', and another that was started today with the subject\nbeginning with an 'A', I want the end result to be:\n\nzebras have stripes\nAlways cross at the light\n\nas opposed to 10 lines about the zebras and only one on Always. It\nseems elementary, but at the same time it seems complex. Must mean\nit's time for bed.\n \nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Wed, 20 Oct 1999 21:47:52 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] distinct. Is this the correct behaviour?" }, { "msg_contents": "\nThis seems to generally work in postgres for the simple cases I tried:\n select x from foo group by x order by min(y)\n\nNow I don't know if there are any hidden gotchas in that (or wierdness\nwith the spec), but it also feels better to me than using distinct in this\ncase as well, because it seems to explicitly describe how you want y\nordered.\n\n>Ok, well what I'm trying to do is write a web-based discussion forum. I\n>wanted to list the subjects in any particular forum, but also want them\n>to be in the order in which they were first posted. So if I have 10 \n>comments on one subject which first started last month and the subject\n>begun with a 'z', and another that was started today with the subject\n>beginning with an 'A', I want the end result to be:\n>\n>zebras have stripes\n>Always cross at the light\n>\n>as opposed to 10 lines about the zebras and only one on Always. It\n>seems elementary, but at the same time it seems complex. Must mean\n>it's time for bed.\n", "msg_date": "Wed, 20 Oct 1999 22:40:00 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] distinct. Is this the correct behaviour? 
" }, { "msg_contents": "On Wed, 20 Oct 1999 [email protected] wrote:\n\n> \n> This seems to generally work in postgres for the simple cases I tried:\n> select x from foo group by x order by min(y)\n> \n> Now I don't know if there are any hidden gotchas in that (or wierdness\n> with the spec), but it also feels better to me than using distinct in this\n> case as well, because it seems to explicitly describe how you want y\n> ordered.\n\nHey! That works! Thanks! Just checked sybase and the original query\nacts identical to ours.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 21 Oct 1999 05:52:55 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] distinct. Is this the correct behaviour? " }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n> Just checked sybase and the original query\n> acts identical to ours.\n\nHmph, so sybase hasn't thought through the implications of ORDER BY on\na hidden column vs. DISTINCT either. Can anyone try it on some other\nDBMSes?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Oct 1999 09:32:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] distinct. Is this the correct behaviour? " }, { "msg_contents": "[email protected] writes:\n> This seems to generally work in postgres for the simple cases I tried:\n> select x from foo group by x order by min(y)\n\n> Now I don't know if there are any hidden gotchas in that (or wierdness\n> with the spec), but it also feels better to me than using distinct in this\n> case as well, because it seems to explicitly describe how you want y\n> ordered.\n\nYes, I like that better too.\n\nI wonder if we could/should rewrite all uses of DISTINCT into GROUP\nBY under-the-hood...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Oct 1999 09:36:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] distinct. Is this the correct behaviour? " } ]
[ { "msg_contents": "psql now shows new text on startup. The old one just looked bad.\nHope the psql upgrader can merge these changes in. I know we weren't\nsupposed to touch psql, but I suspect he is not touching the banner.\n\n---------------------------------------------------------------------------\n\nOLD:\n\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n\nNEW:\n\nWelcome to the PostgreSQL interactive terminal.\n(Please read the copyright file for legal issues.)\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 20 Oct 1999 20:42:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "New psql startup banner" }, { "msg_contents": "On Oct 20, Bruce Momjian mentioned:\n\n> psql now shows new text on startup. The old one just looked bad.\n> Hope the psql upgrader can merge these changes in. I know we weren't\n> supposed to touch psql, but I suspect he is not touching the banner.\n\nHah!\n\n$ ./psql template1\nWelcome to psql, the PostgreSQL interactive query shell.\n(Please type \\copyright to see the distribution terms of PostgreSQL.)\nPostgreSQL 6.5.2 on i586-pc-linux-gnu, compiled by egcs\n \nType \\h for help with SQL commands,\n \\? for help on internal slash commands,\n \\q to quit,\n \\g or terminate with semicolon to execute query.\ntemplate1=> \\copyright\nmemory clobbered before allocated blockAborted (core dumped)\n\nOops! :)\n\nOkay, I guess the motivation behind this was the question \"Where is that\ndamn COPYRIGHT file?\", or maybe I've just been reading the appendix to the\nGPL too often.\n\nAnyway, I guess I'll let the balance of the suggestions apply.\n\n\t-Peter\n\n> \n> ---------------------------------------------------------------------------\n> \n> OLD:\n> \n> Welcome to the POSTGRESQL interactive sql monitor:\n> Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> \n> NEW:\n> \n> Welcome to the PostgreSQL interactive terminal.\n> (Please read the copyright file for legal issues.)\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 22 Oct 1999 02:36:50 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New psql startup banner" }, { "msg_contents": "> On Oct 20, Bruce Momjian mentioned:\n> \n> > psql now shows new text on startup. The old one just looked bad.\n> > Hope the psql upgrader can merge these changes in. I know we weren't\n> > supposed to touch psql, but I suspect he is not touching the banner.\n> \n> Hah!\n> \n> $ ./psql template1\n> Welcome to psql, the PostgreSQL interactive query shell.\n> (Please type \\copyright to see the distribution terms of PostgreSQL.)\n> PostgreSQL 6.5.2 on i586-pc-linux-gnu, compiled by egcs\n> \n> Type \\h for help with SQL commands,\n> \\? for help on internal slash commands,\n> \\q to quit,\n> \\g or terminate with semicolon to execute query.\n> template1=> \\copyright\n> memory clobbered before allocated blockAborted (core dumped)\n> \n> Oops! 
:)\n> \n> Okay, I guess the motivation behind this was the question \"Where is that\n> damn COPYRIGHT file?\", or maybe I've just been reading the appendix to the\n> GPL too often.\n> \n> Anyway, I guess I'll let the balance of the suggestions apply.\n\n\nI like your version better. Let's go with that. I will back out my\npatch.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Oct 1999 21:24:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] New psql startup banner" } ]
[ { "msg_contents": "I've just found this on the cygwin list - may be of interest with this\nthread.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n-----Original Message-----\nFrom: Earnie Boyd [mailto:[email protected]]\nSent: 20 October 1999 20:39\nTo: [email protected]; [email protected]\nSubject: Re: Licensing question\n\n\n--- [email protected] wrote:\n> I have a question regarding the statement, \"if you intend to port a\n> commercial\n> (non-GPL'd) application using Cygwin, you will need the commercial\nlicense to\n> Cygwin that comes with the supported native Win32 GNUPro product\".\nWhat\n> licensing restrictions apply if you plan on using the Unix utilities\nonly,\n> and\n> will not be developing applications that use the cygwin? The Unix\nutilities\n> would be used in a commercial application to enable the customer to\n> transition\n> from a UNIX system to a Windows NT system.\n\nNotice: This is _not_ Legal Advice. I shall not be held liable for\ndamages\ncaused by this response.\n\nIMO, your use of a GPL binary does not cause your commercial package\nproblems. \nIf you include a GPL library in your commercial package such as\nlibreadline.a\nthen your package must then also be covered by the GPL.\n\nIf you distribute a GPL binary then YOU must agree to have the source\navailable\nupon request for three years after the distribution for the version of\nthe\nbinary distributed or also distribute the source when you distribute the\nbinary.\n\nPlease, refer to the COPYING document at the http://www.fsf.org site or\nshould\nbe found with any GPL distribution. It _is_ the legal document for the\nGPL.\n\n\n=====\nEarnie Boyd <mailto:[email protected]>\n\nNewbies, please visit\n<http://www.freeyellow.com/members5/gw32/index.html>\n\n(If you respond to the list, then please don't cc me)\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n\n--\nWant to unsubscribe from this list?\nSend a message to [email protected]\n", "msg_date": "Thu, 21 Oct 1999 07:50:46 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Readline use in trouble?]" } ]
[ { "msg_contents": "After sending this, I noticed that here they are not mentioning the LGPL\n(used be called the Library GPL, now the Lesser GPL), which covers\nlinking GPL'ed libraries into non-GPL'ed binaries.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Peter Mount [mailto:[email protected]]\nSent: 21 October 1999 07:51\nTo: [email protected]\nSubject: Re: [HACKERS] Readline use in trouble?]\n\n\nI've just found this on the cygwin list - may be of interest with this\nthread.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n-----Original Message-----\nFrom: Earnie Boyd [mailto:[email protected]]\nSent: 20 October 1999 20:39\nTo: [email protected]; [email protected]\nSubject: Re: Licensing question\n\n\n--- [email protected] wrote:\n> I have a question regarding the statement, \"if you intend to port a\n> commercial\n> (non-GPL'd) application using Cygwin, you will need the commercial\nlicense to\n> Cygwin that comes with the supported native Win32 GNUPro product\".\nWhat\n> licensing restrictions apply if you plan on using the Unix utilities\nonly,\n> and\n> will not be developing applications that use the cygwin? The Unix\nutilities\n> would be used in a commercial application to enable the customer to\n> transition\n> from a UNIX system to a Windows NT system.\n\nNotice: This is _not_ Legal Advice. I shall not be held liable for\ndamages\ncaused by this response.\n\nIMO, your use of a GPL binary does not cause your commercial package\nproblems. \nIf you include a GPL library in your commercial package such as\nlibreadline.a\nthen your package must then also be covered by the GPL.\n\nIf you distribute a GPL binary then YOU must agree to have the source\navailable\nupon request for three years after the distribution for the version of\nthe\nbinary distributed or also distribute the source when you distribute the\nbinary.\n\nPlease, refer to the COPYING document at the http://www.fsf.org site or\nshould\nbe found with any GPL distribution. It _is_ the legal document for the\nGPL.\n\n\n=====\nEarnie Boyd <mailto:[email protected]>\n\nNewbies, please visit\n<http://www.freeyellow.com/members5/gw32/index.html>\n\n(If you respond to the list, then please don't cc me)\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n\n--\nWant to unsubscribe from this list?\nSend a message to [email protected]\n\n************\n", "msg_date": "Thu, 21 Oct 1999 08:12:48 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Readline use in trouble?]" } ]
[ { "msg_contents": "Afternoon, all\n\nI just tried this on Oracle, and it wouldn't accept the query. It seems\nthat once you mention DISTINCT, it won't expand the target list. The actual\nmessage was:\n\n--------------------------------------------------------\nSQL> select distinct x from mikea_test order by y;\nselect distinct x from mikea_test order by y\n *\nERROR at line 1:\nORA-01791: not a SELECTed expression\n--------------------------------------------------------\n\nSo, there we have it. I think this is probably the best solution, because\nit means that when you do something where the result is not what you would\nexpect, it forces you to do it another way.\n\nMikeA\n\n\n>> -----Original Message-----\n>> From: Tom Lane [mailto:[email protected]]\n>> Sent: Thursday, October 21, 1999 3:37 PM\n>> To: [email protected]\n>> Cc: Vince Vielhaber; [email protected]\n>> Subject: Re: [HACKERS] distinct. Is this the correct behaviour? \n>> \n>> \n>> [email protected] writes:\n>> > This seems to generally work in postgres for the simple \n>> cases I tried:\n>> > select x from foo group by x order by min(y)\n>> \n>> > Now I don't know if there are any hidden gotchas in that \n>> (or wierdness\n>> > with the spec), but it also feels better to me than using \n>> distinct in this\n>> > case as well, because it seems to explicitly describe how \n>> you want y\n>> > ordered.\n>> \n>> Yes, I like that better too.\n>> \n>> I wonder if we could/should rewrite all uses of DISTINCT into GROUP\n>> BY under-the-hood...\n>> \n>> \t\t\tregards, tom lane\n>> \n>> ************\n>> \n", "msg_date": "Thu, 21 Oct 1999 15:44:40 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] distinct. Is this the correct behaviour? " } ]
[ { "msg_contents": "> select distinct f1 from t1 order by f2;\n> \n> Returned the following message under Oracle8 on NT:\n> ORA-01791: not a SELECTed expression\n\nInformix: 309: ORDER BY column (f2) must be in SELECT list. \n\nAndreas\n", "msg_date": "Thu, 21 Oct 1999 16:48:38 +0200", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] distinct. Is this the correct behaviour?" }, { "msg_contents": "On Thu, 21 Oct 1999, Zeugswetter Andreas IZ5 wrote:\n\n> > select distinct f1 from t1 order by f2;\n> > \n> > Returned the following message under Oracle8 on NT:\n> > ORA-01791: not a SELECTed expression\n> \n> Informix: 309: ORDER BY column (f2) must be in SELECT list. \n\nThe version of sybase I tried earlier was 4.9.2. I just tried it\nagain on 11.5 -- same thing.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 21 Oct 1999 11:09:50 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] distinct. Is this the correct behaviour?" } ]
[ { "msg_contents": "\nThis was posted on the cygwin list (and cc'd to me) about the GPL and\nCygWin:\n\n<quote>\nIf you use cygwin1.dll in your application then your\napplication must be made available as open source. The way to avoid\nthis is to compile your program using the -mno-cygwin option.\n\nIf you are only releasing parts of a cygwin release such as cygwin1.dll,\nls.exe, gcc.exe, etc. and you are not compiling any of your own\napplications using the DLL then those parts must be made available under\nthe GPL.\n</quote>\n\nHow would this affect our Win32 port?\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n", "msg_date": "Thu, 21 Oct 1999 16:04:20 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "GPL vs BSD licencing - a new twist" }, { "msg_contents": "On Thu, 21 Oct 1999, Peter Mount wrote:\n\n> If you use cygwin1.dll in your application then your\n> application must be made available as open source. The way to avoid\n> this is to compile your program using the -mno-cygwin option.\n\nOut of curiousity, is cygwin open source?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 21 Oct 1999 11:31:56 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] GPL vs BSD licencing - a new twist" }, { "msg_contents": "On Thu, 21 Oct 1999, Vince Vielhaber wrote:\n\n> On Thu, 21 Oct 1999, Peter Mount wrote:\n> \n> > If you use cygwin1.dll in your application then your\n> > application must be made available as open source. The way to avoid\n> > this is to compile your program using the -mno-cygwin option.\n> \n> Out of curiousity, is cygwin open source?\n\nEarlier today I found out that yes, cygwin is open source. However, they\nhave some similarities with Aladin where cygwin is used in embeded\napplications as well.\n\nPeter\n\n--\n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Fri, 22 Oct 1999 18:15:13 +0100 (GMT)", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] GPL vs BSD licencing - a new twist" } ]
[ { "msg_contents": "Most of it, as I've downloaded the source. I think there are some parts\nthey don't release, but I'm not sure.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n-----Original Message-----\nFrom: Vince Vielhaber [mailto:[email protected]]\nSent: 21 October 1999 16:32\nTo: Peter Mount\nCc: Pgsql-Hackers (E-mail)\nSubject: Re: [HACKERS] GPL vs BSD licencing - a new twist\n\n\nOn Thu, 21 Oct 1999, Peter Mount wrote:\n\n> If you use cygwin1.dll in your application then your\n> application must be made available as open source. The way to avoid\n> this is to compile your program using the -mno-cygwin option.\n\nOut of curiousity, is cygwin open source?\n\nVince.\n-- \n========================================================================\n==\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail:\n/dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n========================================================================\n==\n\n\n", "msg_date": "Thu, 21 Oct 1999 16:32:47 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] GPL vs BSD licencing - a new twist" } ]
[ { "msg_contents": "\"Damond Walker\" <[email protected]> writes:\n> Returned the following message under MS SQL Server 7.0:\n> ORDER BY items must appear in the select list if SELECT DISTINCT is\n> specified.\n\nSure looks like that is the consensus answer to the semantics problem...\nguess we should do the same.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Oct 1999 11:36:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] distinct. Is this the correct behaviour? " }, { "msg_contents": "----- Original Message -----\n> Hmph, so sybase hasn't thought through the implications of ORDER BY on\n> a hidden column vs. DISTINCT either. Can anyone try it on some other\n> DBMSes?\n>\n\nUsing the following script...\n\n create table t1(f1 int, f2 int);\n insert into t1(f1, f2) values(1,1);\n insert into t1(f1, f2) values(1,2);\n insert into t1(f1, f2) values(1,3);\n insert into t1(f1, f2) values(2,4);\n select distinct f1 from t1 order by f2;\n\n\nReturned the following message under Oracle8 on NT:\n ORA-01791: not a SELECTed expression\n\nReturned the following message under MS SQL Server 7.0:\n ORDER BY items must appear in the select list if SELECT DISTINCT is\nspecified.\n\nI could try it on Oracle8i but I suspect the result will be the same.\n\n", "msg_date": "Thu, 21 Oct 1999 09:57:16 -0700", "msg_from": "\"Damond Walker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] distinct. Is this the correct behaviour?" }, { "msg_contents": "> \"Damond Walker\" <[email protected]> writes:\n> > Returned the following message under MS SQL Server 7.0:\n> > ORDER BY items must appear in the select list if SELECT DISTINCT is\n> > specified.\n> \n> Sure looks like that is the consensus answer to the semantics problem...\n> guess we should do the same.\n\nAdded to TODO:\n\n\t* require SELECT DISTINCT target list to have all ORDER BY columns\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 21 Oct 1999 12:58:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] distinct. Is this the correct behaviour?" } ]
[ { "msg_contents": "Hello:\n\t(This is a repost from a mail to -general, as nobody could \nanswer me, and acording to the Mailing Lists home page, I'm sending it \nhere.)\n\tMaybe somebody could give some clue about what is happening:\n\n\tI have 6.5.0 running over Solaris 2.5.1 SPARC. I have a \ndatabase with 5 tables, 3 of them < 100 regs. and 2 (\"usuarios\" and \n\"passwd\") with >10000. When querying for:\n\nSELECT u.nombre_cuenta, per.nombre, pas.clave_cifrada, \npas.clave_plana, u.estado FROM usuarios u, perfiles per, passwd pas \nWHERE (u.perfil=per.id_perfil) and (u.id_usr=pas.id_usr) and \n(u.activa) \\g \n\n\tpostmaster starts eating a lot of CPU and it doesn't finish to \nprocess the query in +20 minutes.\n\n\tPostmaster shows:\n[3]ns2:/>su - pgsql\nSun Microsystems Inc. SunOS 5.5.1 Generic May 1996\n[1]ns2:/usr/local/pgsql>bin/postmaster -i -d 2 -N 8 -B 16 -D \n/usr/local/pgsql\n/data\nFindExec: found \"/usr/local/pgsql/bin/postgres\" using argv[0]\nbinding ShmemCreate(key=52e2c1, size=359424)\nbin/postmaster: ServerLoop: handling reading 5\nbin/postmaster: ServerLoop: handling reading 5\nbin/postmaster: ServerLoop: handling writing 5\nbin/postmaster child[2934]: starting with \n(/usr/local/pgsql/bin/postgres -d2\n-B 16 -v131072 -p operaciones )\nFindExec: found \"/usr/local/pgsql/bin/postgres\" using argv[0]\ndebug info:\n User = admin\n RemoteHost = localhost\n RemotePort = 0\n DatabaseName = operaciones\n Verbose = 2\n Noversion = f\n timings = f\n dates = Normal\n bufsize = 16\n sortmem = 512\n query echo = f\nInitPostgres \nStartTransactionCommand\nbin/postmaster: BackendStartup: pid 2934 user admin db operaciones \nsocket 5\nquery: select version();\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nquery: SELECT u.nombre_cuenta, per.nombre, pas.clave_cifrada, \npas.clave_plana, u.estado FROM usuarios u, perfiles per, passwd pas WHERE \n(u.perfil=per.id_perfil) and (u.id_usr=pas.id_usr) and (u.activa)\nProcessQuery\nbin/postmaster: dumpstatus:\n sock 5\nbin/postmaster: dumpstatus:\n sock 5 \n\n\n\tAny hints?\n\n\tTIA and kind regards.\n\nFernando P. Schapachnik\nAdministraci�n de la red\nVIA Net Works Argentina SA\nDiagonal Roque S�enz Pe�a 971, 4� y 5� piso.\n1035 - Capital Federal, Argentina. \n(54-11) 4323-3333\nhttp://www.via-net-works.net.ar\n", "msg_date": "Thu, 21 Oct 1999 12:57:11 -0300 (GMT)", "msg_from": "Fernando Schapachnik <[email protected]>", "msg_from_op": true, "msg_subject": "[GENERAL] Neverending query on 6.5.0 over Solaris 2.5.1" } ]
[ { "msg_contents": "Hello:\n\t(This is a repost from a mail to -general, as nobody could \nanswer me, and acording to the Mailing Lists home page, I'm sending it \nhere.)\n\tMaybe somebody could give some clue about what is happening:\n\n\tI have 6.5.0 running over Solaris 2.5.1 SPARC. I have a \ndatabase with 5 tables, 3 of them < 100 regs. and 2 (\"usuarios\" and \n\"passwd\") with >10000. When querying for:\n\nSELECT u.nombre_cuenta, per.nombre, pas.clave_cifrada, \npas.clave_plana, u.estado FROM usuarios u, perfiles per, passwd pas \nWHERE (u.perfil=per.id_perfil) and (u.id_usr=pas.id_usr) and \n(u.activa) \\g \n\n\tpostmaster starts eating a lot of CPU and it doesn't finish to \nprocess the query in +20 minutes.\n\n\tPostmaster shows:\n[3]ns2:/>su - pgsql\nSun Microsystems Inc. SunOS 5.5.1 Generic May 1996\n[1]ns2:/usr/local/pgsql>bin/postmaster -i -d 2 -N 8 -B 16 -D \n/usr/local/pgsql\n/data\nFindExec: found \"/usr/local/pgsql/bin/postgres\" using argv[0]\nbinding ShmemCreate(key=52e2c1, size=359424)\nbin/postmaster: ServerLoop: handling reading 5\nbin/postmaster: ServerLoop: handling reading 5\nbin/postmaster: ServerLoop: handling writing 5\nbin/postmaster child[2934]: starting with \n(/usr/local/pgsql/bin/postgres -d2\n-B 16 -v131072 -p operaciones )\nFindExec: found \"/usr/local/pgsql/bin/postgres\" using argv[0]\ndebug info:\n User = admin\n RemoteHost = localhost\n RemotePort = 0\n DatabaseName = operaciones\n Verbose = 2\n Noversion = f\n timings = f\n dates = Normal\n bufsize = 16\n sortmem = 512\n query echo = f\nInitPostgres \nStartTransactionCommand\nbin/postmaster: BackendStartup: pid 2934 user admin db operaciones \nsocket 5\nquery: select version();\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nquery: SELECT u.nombre_cuenta, per.nombre, pas.clave_cifrada, \npas.clave_plana, u.estado FROM usuarios u, perfiles per, passwd pas WHERE \n(u.perfil=per.id_perfil) and (u.id_usr=pas.id_usr) and (u.activa)\nProcessQuery\nbin/postmaster: dumpstatus:\n sock 5\nbin/postmaster: dumpstatus:\n sock 5 \n\n\n\tAny hints?\n\n\tTIA and kind regards.\n\nFernando P. Schapachnik\nAdministraci�n de la red\nVIA Net Works Argentina SA\nDiagonal Roque S�enz Pe�a 971, 4� y 5� piso.\n1035 - Capital Federal, Argentina. \n(54-11) 4323-3333\nhttp://www.via-net-works.net.ar\n", "msg_date": "Thu, 21 Oct 1999 13:13:37 -0300 (GMT)", "msg_from": "Fernando Schapachnik <[email protected]>", "msg_from_op": true, "msg_subject": "Neverending query on 6.5.2 over Solaris 2.5.1" }, { "msg_contents": "Fernando Schapachnik <[email protected]> writes:\n> \tI have 6.5.0 running over Solaris 2.5.1 SPARC. I have a \n> database with 5 tables, 3 of them < 100 regs. and 2 (\"usuarios\" and \n> \"passwd\") with >10000. When querying for:\n\n> SELECT u.nombre_cuenta, per.nombre, pas.clave_cifrada, \n> pas.clave_plana, u.estado FROM usuarios u, perfiles per, passwd pas \n> WHERE (u.perfil=per.id_perfil) and (u.id_usr=pas.id_usr) and \n> (u.activa) \\g \n\n> \tpostmaster starts eating a lot of CPU and it doesn't finish to \n> process the query in +20 minutes.\n\nHave you vacuumed the database lately? What does \"explain ...\" show\nfor the query plan being used?\n\nYou might be well advised to create indexes on usarios.id_usr and\npasswd.id_usr, if you don't have them already. I'd expect this\nquery to run reasonably quickly using a mergejoin, but mergejoin\nneeds indexes on the fields being joined. 
(The system will also\nconsider doing an explicit sort and then a mergejoin, but obviously\nthe sort step takes extra time.)\n\nIf you haven't vacuumed since filling the tables then the optimizer\nmay believe that the tables only contain a few rows, in which case\nit's likely to use a plain nested-loop join (ie, compare every usarios\nrow to every passwd row to find matching id_usr fields). That's nice\nand fast for little tables, but a big loser on big ones...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Oct 1999 15:31:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Neverending query on 6.5.2 over Solaris 2.5.1 " }, { "msg_contents": "En un mensaje anterior, Tom Lane escribi�:\n> Fernando Schapachnik <[email protected]> writes:\n> > \tI have 6.5.0 running over Solaris 2.5.1 SPARC. I have a \n> > database with 5 tables, 3 of them < 100 regs. and 2 (\"usuarios\" and \n> > \"passwd\") with >10000. When querying for:\n> \n> > SELECT u.nombre_cuenta, per.nombre, pas.clave_cifrada, \n> > pas.clave_plana, u.estado FROM usuarios u, perfiles per, passwd pas \n> > WHERE (u.perfil=per.id_perfil) and (u.id_usr=pas.id_usr) and \n> > (u.activa) \\g \n> \n> > \tpostmaster starts eating a lot of CPU and it doesn't finish to \n> > process the query in +20 minutes.\n> \n> Have you vacuumed the database lately? What does \"explain ...\" show\n\nI did this today. I also installed Postgres on a FreeBSD machine \n(comparable -and low- load averages) and updated the version to 6.5.2.\n\nAfter vacuum:\nOn the Sun: 1 minute.\nOn the FreeBSD: 12 seconds.\n\nExplain shows (on both machines):\n\noperaciones=> explain SELECT u.nombre_cuenta, per.nombre, \npas.clave_cifrada, pas.clave_plana, u.estado FROM usuarios u, perfiles per,\npasswd pas WHERE (u.activa) and (u.perfil=per.id_perfil) and \n(u.id_usr=pas.id_usr) \\g\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=503.74 rows=1 width=74)\n -> Nested Loop (cost=500.89 rows=1 width=58)\n -> Seq Scan on usuarios u (cost=498.84 rows=1 width=30)\n -> Index Scan using passwd_id_usr_key on passwd pas \n(cost=2.05 rows=10571 width=28)\n -> Seq Scan on perfiles per (cost=2.85 rows=56 width=16)\n\nEXPLAIN \n> You might be well advised to create indexes on usarios.id_usr and\n> passwd.id_usr, if you don't have them already. I'd expect this\n\nAs usuarios.id_usr and passwd.id_usr are both serial, they have \nindexes automatically created (I double checked that). PgAccess shows \nthat usuarios has no primary key (I don't know why) and that \nusuarios_id_usr_key is an unique, no clustered index. Same on passwd.\n\nI'm running postmaster -N 8 -B 16 because whitout these postmaster \nwouldn't get all the shared memory it needed and won't start. Do you \nthink that this may be in some way related?\n\nThanks for your help!\n\nFernando P. Schapachnik\nAdministraci�n de la red\nVIA Net Works Argentina SA\nDiagonal Roque S�enz Pe�a 971, 4� y 5� piso.\n1035 - Capital Federal, Argentina. \n(54-11) 4323-3333\nhttp://www.via-net-works.net.ar\n", "msg_date": "Fri, 22 Oct 1999 09:38:35 -0300 (GMT)", "msg_from": "Fernando Schapachnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Neverending query on 6.5.2 over Solaris 2.5.1" }, { "msg_contents": "Fernando Schapachnik <[email protected]> writes:\n>>>> postmaster starts eating a lot of CPU and it doesn't finish to \n>>>> process the query in +20 minutes.\n>> \n>> Have you vacuumed the database lately? 
What does \"explain ...\" show\n\n> After vacuum:\n> On the Sun: 1 minute.\n> On the FreeBSD: 12 seconds.\n\nThat's a little better, anyway ...\n\n> Explain shows (on both machines):\n\n> Nested Loop (cost=503.74 rows=1 width=74)\n> -> Nested Loop (cost=500.89 rows=1 width=58)\n> -> Seq Scan on usuarios u (cost=498.84 rows=1 width=30)\n> -> Index Scan using passwd_id_usr_key on passwd pas \n> (cost=2.05 rows=10571 width=28)\n> -> Seq Scan on perfiles per (cost=2.85 rows=56 width=16)\n\nOK, that still looks a little bogus. It's estimating it will only\nfind one row in usarios that needs to be joined against the other\ntwo tables. If that were true, then this plan is pretty reasonable,\nbut I bet it's not true. The only WHERE clause that can be used to\neliminate usarios rows in advance of the join is (u.activa), and I'll\nbet you have more than one active user.\n\nDoes the plan change if you do VACUUM ANALYZE instead of just a plain\nvacuum?\n\n> As usuarios.id_usr and passwd.id_usr are both serial, they have \n> indexes automatically created (I double checked that). PgAccess shows \n> that usuarios has no primary key (I don't know why) and that \n> usuarios_id_usr_key is an unique, no clustered index. Same on passwd.\n\nOK, so it *could* make a mergejoin plan without sorting. I think the\nproblem is the unreasonably low estimate for number of matching usarios\nrows; that makes the nested-loop plan look cheap because of its lower\nstartup overhead. But if there's a lot of usarios rows to process then\nit's not so cheap anymore.\n\nAs an experiment you could try forbidding nestloop plans (start psql\nwith environment variable PGOPTIONS=\"-fn\") and see what sort of plan\nyou get then and how long it really takes in comparison to the nestloop.\nThis isn't a good long-term solution, because you might get poor plans\nfor smaller queries, but it would help us see whether and how the\nplanner is making the wrong choice. (I've been trying to collect\nexamples of poor planning so that I can improve the planner --- so\nI'm quite interested in the details of your situation.)\n\n> I'm running postmaster -N 8 -B 16 because whitout these postmaster \n> wouldn't get all the shared memory it needed and won't start. Do you \n> think that this may be in some way related?\n\nWell, that's certainly costing you performance; 16 disk pages is not\nenough buffer space to avoid thrashing. You need to increase your\nkernel's max-shared-memory-block-size (SHMMAX, I think) parameter\nso that you can run with a more reasonable -B setting. A lot of\nkernels ship with SHMMAX settings that are ridiculously small for\nany modern machine.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Oct 1999 10:48:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Neverending query on 6.5.2 over Solaris 2.5.1 " }, { "msg_contents": "En un mensaje anterior, Tom Lane escribi�:\n> > Explain shows (on both machines):\n> \n> > Nested Loop (cost=503.74 rows=1 width=74)\n> > -> Nested Loop (cost=500.89 rows=1 width=58)\n> > -> Seq Scan on usuarios u (cost=498.84 rows=1 width=30)\n> > -> Index Scan using passwd_id_usr_key on passwd pas \n> > (cost=2.05 rows=10571 width=28)\n> > -> Seq Scan on perfiles per (cost=2.85 rows=56 width=16)\n> \n> OK, that still looks a little bogus. It's estimating it will only\n> find one row in usarios that needs to be joined against the other\n> two tables. If that were true, then this plan is pretty reasonable,\n> but I bet it's not true. 
The only WHERE clause that can be used to\n> eliminate usarios rows in advance of the join is (u.activa), and I'll\n> bet you have more than one active user.\n\nThat's right!\n\n> \n> Does the plan change if you do VACUUM ANALYZE instead of just a plain\n> vacuum?\n\nSorry for not being clear enough, but that was what I did.\n\n> \n> As an experiment you could try forbidding nestloop plans (start psql\n> with environment variable PGOPTIONS=\"-fn\") and see what sort of plan\n> you get then and how long it really takes in comparison to the nestloop.\n\nI took 30 seconds on the Sun, and explain shows:\n\nNOTICE: QUERY PLAN:\n\nMerge Join (cost=1314.02 rows=1 width=74)\n -> Seq Scan (cost=1297.56 rows=1 width=58)\n -> Sort (cost=1297.56 rows=1 width=58)\n -> Hash Join (cost=1296.56 rows=1 width=58)\n -> Seq Scan on passwd pas (cost=447.84 \nrows=10571 width=28)\n -> Hash (cost=498.84 rows=1 width=30)\n -> Seq Scan on usuarios u (cost=498.84 \nrows=1 width=30)\n -> Seq Scan (cost=14.58 rows=56 width=16)\n -> Sort (cost=14.58 rows=56 width=16)\n -> Seq Scan on perfiles per (cost=2.85 rows=56 width=16)\n\nEXPLAIN\n\n> > I'm running postmaster -N 8 -B 16 because whitout these postmaster \n> > wouldn't get all the shared memory it needed and won't start. Do you \n> > think that this may be in some way related?\n> \n> Well, that's certainly costing you performance; 16 disk pages is not\n> enough buffer space to avoid thrashing. You need to increase your\n> kernel's max-shared-memory-block-size (SHMMAX, I think) parameter\n> so that you can run with a more reasonable -B setting. A lot of\n> kernels ship with SHMMAX settings that are ridiculously small for\n> any modern machine.\n\nOk, I'll try to increase it.\n\nRegards.\n\n\n\nFernando P. Schapachnik\nAdministraci�n de la red\nVIA Net Works Argentina SA\nDiagonal Roque S�enz Pe�a 971, 4� y 5� piso.\n1035 - Capital Federal, Argentina. \n(54-11) 4323-3333\nhttp://www.via-net-works.net.ar\n", "msg_date": "Fri, 22 Oct 1999 12:16:23 -0300 (GMT)", "msg_from": "Fernando Schapachnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Neverending query on 6.5.2 over Solaris 2.5.1" }, { "msg_contents": "En un mensaje anterior, Tom Lane escribi�:\n> > I'm running postmaster -N 8 -B 16 because whitout these postmaster \n> > wouldn't get all the shared memory it needed and won't start. Do you \n> > think that this may be in some way related?\n> \n> Well, that's certainly costing you performance; 16 disk pages is not\n> enough buffer space to avoid thrashing. You need to increase your\n> kernel's max-shared-memory-block-size (SHMMAX, I think) parameter\n> so that you can run with a more reasonable -B setting. A lot of\n> kernels ship with SHMMAX settings that are ridiculously small for\n> any modern machine.\n\nWhat value would you advise for shmmax?\n\nRegards.\n\n\nFernando P. Schapachnik\nAdministraci�n de la red\nVIA Net Works Argentina SA\nDiagonal Roque S�enz Pe�a 971, 4� y 5� piso.\n1035 - Capital Federal, Argentina. \n(54-11) 4323-3333\nhttp://www.via-net-works.net.ar\n", "msg_date": "Fri, 22 Oct 1999 12:18:30 -0300 (GMT)", "msg_from": "Fernando Schapachnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Neverending query on 6.5.2 over Solaris 2.5.1" }, { "msg_contents": "Fernando Schapachnik <[email protected]> writes:\n>> enough buffer space to avoid thrashing. 
You need to increase your\n>> kernel's max-shared-memory-block-size (SHMMAX, I think) parameter\n>> so that you can run with a more reasonable -B setting. A lot of\n>> kernels ship with SHMMAX settings that are ridiculously small for\n>> any modern machine.\n\n> What value would you advise for shmmax?\n\nWell, with the default number of buffers (64) Postgres requires about\na megabyte (I think a tad over 1Mb, in 6.5.*). Extra buffers are 8K\nplus a little overhead apiece. If you are running with more than a\ncouple of active backends at a time then you probably want to use\nmore than the default number of buffers. But I have no advice on\nhow many is appropriate for what size of installation --- can anyone\nelse help?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Oct 1999 11:33:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Neverending query on 6.5.2 over Solaris 2.5.1 " }, { "msg_contents": "Then <[email protected]> spoke up and said:\n> Fernando Schapachnik <[email protected]> writes:\n> > What value would you advise for shmmax?\n> \n> more than the default number of buffers. But I have no advice on\n> how many is appropriate for what size of installation --- can anyone\n> else help?\n\nUnless you are severely resource constrained, think big. Generally\nspeaking, the kilobytes of memory you'll lose to kernel structures are\nirrelevant. Performance is generally not an issue, either.\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================", "msg_date": "22 Oct 1999 11:57:11 -0400", "msg_from": "Brian E Gallew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Neverending query on 6.5.2 over Solaris 2.5.1 " }, { "msg_contents": "Fernando Schapachnik <[email protected]> writes:\n>> As an experiment you could try forbidding nestloop plans (start psql\n>> with environment variable PGOPTIONS=\"-fn\") and see what sort of plan\n>> you get then and how long it really takes in comparison to the nestloop.\n\n> I took 30 seconds on the Sun, and explain shows:\n\nBetter, but still not good.\n\n> NOTICE: QUERY PLAN:\n>\n> Merge Join (cost=1314.02 rows=1 width=74)\n> -> Seq Scan (cost=1297.56 rows=1 width=58)\n> -> Sort (cost=1297.56 rows=1 width=58)\n> -> Hash Join (cost=1296.56 rows=1 width=58)\n> -> Seq Scan on passwd pas (cost=447.84 rows=10571 width=28)\n> -> Hash (cost=498.84 rows=1 width=30)\n> -> Seq Scan on usuarios u (cost=498.84 rows=1 width=30)\n> -> Seq Scan (cost=14.58 rows=56 width=16)\n> -> Sort (cost=14.58 rows=56 width=16)\n> -> Seq Scan on perfiles per (cost=2.85 rows=56 width=16)\n\nIt's still convinced it's only going to get one row out of usuarios.\nWeird. I assume that your 'activa' field is 'bool'? I've been trying\nto duplicate this misbehavior here, and as near as I can tell the system\nhandles selectivity estimates for boolean fields just fine. Whatever\npercentage of 't' values was seen by the last VACUUM ANALYZE is exactly\nwhat it uses.\n\nI am using 6.5.2 and current sources, though, and in your original\nmessage you said you were on 6.5.0. 
If that's right, seems like the\nfirst thing to try is for you to update to 6.5.2, run another VACUUM\nANALYZE, and then see if you still get the same bogus row estimates.\n\nThe other odd thing about the above plan is that it's doing an\nexplicit sort on perfiles. Didn't you say that you had an index on\nperfiles.id_perfil? It should be scanning that instead of doing\na sort, I'd think. (However, if there really are only 56 rows in\nperfiles, it probably doesn't matter.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Oct 1999 12:20:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Neverending query on 6.5.2 over Solaris 2.5.1 " }, { "msg_contents": "I wrote:\n> Weird. I assume that your 'activa' field is 'bool'? I've been trying\n> to duplicate this misbehavior here, and as near as I can tell the system\n> handles selectivity estimates for boolean fields just fine. Whatever\n> percentage of 't' values was seen by the last VACUUM ANALYZE is exactly\n> what it uses.\n\nOn second thought: 6.5.* can get confused if the column contains more\nNULLs than anything else. Dunno if you have a lot of nulls in activa,\nbut if so you might try changing them all to explicit 'f' and then\nredoing the VACUUM ANALYZE. Next release will be smarter about keeping\nstats in the presence of many nulls.\n\nIt'd be useful to double-check my theory that the system is\nmisestimating the selectivity of the WHERE (u.activa) clause.\nYou could try this:\n\tSELECT count(*) FROM usarios WHERE activa;\n\tEXPLAIN SELECT count(*) FROM usarios WHERE activa;\nand see how far off the row count estimate in the EXPLAIN is\nfrom reality.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Oct 1999 13:15:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Neverending query on 6.5.2 over Solaris 2.5.1 " }, { "msg_contents": "En un mensaje anterior, Tom Lane escribi�:\n> It's still convinced it's only going to get one row out of usuarios.\n> Weird. I assume that your 'activa' field is 'bool'? I've been trying\n> to duplicate this misbehavior here, and as near as I can tell the system\n> handles selectivity estimates for boolean fields just fine. Whatever\n> percentage of 't' values was seen by the last VACUUM ANALYZE is exactly\n> what it uses.\n> \n> I am using 6.5.2 and current sources, though, and in your original\n> message you said you were on 6.5.0. If that's right, seems like the\n> first thing to try is for you to update to 6.5.2, run another VACUUM\n> ANALYZE, and then see if you still get the same bogus row estimates.\n\nI was using 6.5.0 on my first post, then I upgraded and all the vacuum \nand explain commands where from 6.5.2. 
Here is my complete database \ndefinition:\n\n\nCREATE TABLE usuarios\n\t(id_usr serial,\n\trazon_social text NOT NULL,\n\tnombre_cuenta text NOT NULL,\n\tgrupo int2 NOT NULL, \n\tperfil int2 NOT NULL, \n\testado char(1) NOT NULL DEFAULT 'H' CHECK ((estado='H') or (estado='D')), \n\tid_madre int4 NOT NULL,\n\tfecha_creacion datetime DEFAULT CURRENT_DATE,\n\tfecha_baja datetime,\n\tgratuita bool DEFAULT 'f',\n\tactiva bool DEFAULT 't',\n\tobservaciones text\n\t) \\g\n\nCREATE TABLE passwd\n\t(id_usr serial,\n\tclave_plana text NOT NULL, \n\tclave_cifrada text NOT NULL\n\t) \\g\n\nCREATE TABLE perfiles\n\t(id_perfil serial,\n\tnombre text NOT NULL,\n\tdescripcion text\n\t) \\g\n\nCREATE TABLE grupos\n\t(id_grupo serial,\n\tnombre text NOT NULL,\n\tdescripcion text\n\t) \\g\n\nCREATE TABLE cronometradas\n\t(id_usr serial,\n\tfecha_comienzo_cronometrado datetime DEFAULT CURRENT_DATE,\n\ttipo_cronometrado int2,\n\tmax_segs_vida int4, \n\tmax_segs_consumo int4\n\t) \\g\n\nCREATE TABLE tipos_cronometrado\n\t(id_tipo_cronometrado serial,\n\tnombre text NOT NULL,\n\tdescripcion text\n\t) \\g\n\n\n> \n> The other odd thing about the above plan is that it's doing an\n> explicit sort on perfiles. Didn't you say that you had an index on\n> perfiles.id_perfil? It should be scanning that instead of doing\n\nIt should, as it is serial. What does it mean when PgAccess says a table \ndoesn't has a primary key? Would it impact?\n\nAgain, thanks!\n\n\n\nFernando P. Schapachnik\nAdministraci�n de la red\nVIA Net Works Argentina SA\nDiagonal Roque S�enz Pe�a 971, 4� y 5� piso.\n1035 - Capital Federal, Argentina. \n(54-11) 4323-3333\nhttp://www.via-net-works.net.ar\n", "msg_date": "Sat, 23 Oct 1999 15:25:25 -0300 (GMT)", "msg_from": "Fernando Schapachnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Neverending query on 6.5.2 over Solaris 2.5.1" }, { "msg_contents": "En un mensaje anterior, Tom Lane escribi�:\n> I wrote:\n> > Weird. I assume that your 'activa' field is 'bool'? I've been trying\n> > to duplicate this misbehavior here, and as near as I can tell the system\n> > handles selectivity estimates for boolean fields just fine. Whatever\n> > percentage of 't' values was seen by the last VACUUM ANALYZE is exactly\n> > what it uses.\n> \n> On second thought: 6.5.* can get confused if the column contains more\n> NULLs than anything else. Dunno if you have a lot of nulls in activa,\n> but if so you might try changing them all to explicit 'f' and then\n> redoing the VACUUM ANALYZE. Next release will be smarter about keeping\n> stats in the presence of many nulls.\n> \n> It'd be useful to double-check my theory that the system is\n> misestimating the selectivity of the WHERE (u.activa) clause.\n> You could try this:\n> \tSELECT count(*) FROM usarios WHERE activa;\n\n10571\n\n\n> \tEXPLAIN SELECT count(*) FROM usarios WHERE activa;\n> and see how far off the row count estimate in the EXPLAIN is\n> from reality.\n\nNOTICE: QUERY PLAN:\n\nAggregate (cost=498.84 rows=1 width=4)\n -> Seq Scan on usuarios (cost=498.84 rows=1 width=4)\n\nEXPLAIN\n\nDon't hesitate in asking any other info/test you may consider useful.\n\nRegards!\n\n\n\nFernando P. Schapachnik\nAdministraci�n de la red\nVIA Net Works Argentina SA\nDiagonal Roque S�enz Pe�a 971, 4� y 5� piso.\n1035 - Capital Federal, Argentina. 
\n(54-11) 4323-3333\nhttp://www.via-net-works.net.ar\n", "msg_date": "Sat, 23 Oct 1999 15:29:01 -0300 (GMT)", "msg_from": "Fernando Schapachnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Neverending query on 6.5.2 over Solaris 2.5.1" }, { "msg_contents": "Fernando Schapachnik <[email protected]> writes:\n>> It'd be useful to double-check my theory that the system is\n>> misestimating the selectivity of the WHERE (u.activa) clause.\n>> You could try this:\n>> SELECT count(*) FROM usarios WHERE activa;\n\n> 10571\n\n>> EXPLAIN SELECT count(*) FROM usarios WHERE activa;\n>> and see how far off the row count estimate in the EXPLAIN is\n>> from reality.\n\n> NOTICE: QUERY PLAN:\n\n> Aggregate (cost=498.84 rows=1 width=4)\n> -> Seq Scan on usuarios (cost=498.84 rows=1 width=4)\n\nWell, it's sure confused about the selectivity of WHERE activa,\nall right.\n\nI tried to duplicate this here, by duplicating the table definition you\nsent and filling it with some junk data --- about 1800 rows, 1500 of\nwhich had activa = 't'. I found that after loading the table and\nrunning a plain \"vacuum\", the system indeed estimated one row out, just\nas you show above. But after \"vacuum analyze\", it estimated 1360 rows\nout, which is a lot closer to reality (and would make a big difference\nin the plan selected for a join).\n\nNow I know you said you did a \"vacuum analyze\" on the table, but\nI am wondering if maybe you got confused about what you did.\nPlease try it again just to make sure.\n\nThe only other explanation I can think of is that I am not running this\ntest on a pristine 6.5.2 release, but on a recent CVS update from the\nREL6_5 branch. I don't see any indication that anything has been\nchanged in the selectivity code since 6.5 in that branch, but maybe I\nmissed something. You might need to update to almost-6.5.3. (I am not\nsure if there is a beta-test tarball for 6.5.3 or not; if not, you could\npull the sources from the CVS server, or wait for 6.5.3 which should be\nout very soon.)\n\nBTW, current sources (7.0-to-be) get the estimate spot-on after \"vacuum\nanalyze\", though without it they are not much better than 6.5. The\ncurrent system is estimating 1% of the rows will match, because it's\ntreating the WHERE condition like \"WHERE activa = 't'\" and the default\nestimate for \"=\" selectivity is 1% in the absence of VACUUM ANALYZE\nstatistics. Probably we ought to special-case boolean columns to\ndefault to a 50% estimate if no statistics are available...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Oct 1999 19:07:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Neverending query on 6.5.2 over Solaris 2.5.1 " }, { "msg_contents": "En un mensaje anterior, Tom Lane escribió:\n> Well, it's sure confused about the selectivity of WHERE activa,\n> all right.\n> \n> I tried to duplicate this here, by duplicating the table definition you\n> sent and filling it with some junk data --- about 1800 rows, 1500 of\n> which had activa = 't'. I found that after loading the table and\n> running a plain \"vacuum\", the system indeed estimated one row out, just\n> as you show above.
But after \"vacuum analyze\", it estimated 1360 rows\n> out, which is a lot closer to reality (and would make a big difference\n> in the plan selected for a join).\n> \n> Now I know you said you did a \"vacuum analyze\" on the table, but\n> I am wondering if maybe you got confused about what you did.\n> Please try it again just to make sure.\n\nI tried again and now it's working better. I think the first problem \nwas due to low shared memory available and a special factor between my \nkeyboard and my chair ;-)\n\nThanks for all you help!\n\n\nFernando P. Schapachnik\nAdministraci�n de la red\nVIA Net Works Argentina SA\nDiagonal Roque S�enz Pe�a 971, 4� y 5� piso.\n1035 - Capital Federal, Argentina. \n(54-11) 4323-3333\nhttp://www.via-net-works.net.ar\n", "msg_date": "Tue, 26 Oct 1999 09:47:11 -0300 (GMT)", "msg_from": "Fernando Schapachnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Neverending query on 6.5.2 over Solaris 2.5.1" } ]
[ { "msg_contents": "\nThis discussion really scares me in two ways:\n1. U of C licences aren't to be removed or extended (they are still a \ncopyrighted part themselves)\n2. GNU readline is a optional (though convenient) part of postgreSQL\n(If someone wants to make money with postgreSQL he simply must not link with \nreadline)\nBest Postgresql Develgroup could do is to mention this fact in INSTALL as well\nas README and a configure option called --without-readline (already there)\n \n\n\nregards,\nChristoph\n\nPS: It really hurts to see UCB and GPL Licences so misunderstood.\nJust take them as what they are:\nAgreements of the sort: you get something (here: readline) you give sth. \n(sourceaccess(GPL)/mention us(UCB/BSD/GPL))\nIf some people can't agree to some clauses/options it's their problem.\n\nDon't try to find problems when solutions are already implemented\n[Stonebraker late 80's]\n\n-- \nChristoph Hoegl / [email protected]\nComputer Science and Principle Research (ASNS: [email protected])\nSiedlungsstr. 18 93128 Regenstauf Germany\n12a Tellerrd. 94720 Berkeley CA USA\n=> OOOOOOOOOOOOOO = NOW in the \"TUNNEL IN NO TIME\"-team = OOOOOOOOOOOOOOO =>\n\n\n\n", "msg_date": "Thu, 21 Oct 1999 19:26:21 +0200", "msg_from": "Christoph Hoegl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Readline use in trouble?" }, { "msg_contents": ">>>>> \"CH\" == Christoph Hoegl <[email protected]> writes:\n\n CH> 2. GNU readline is a optional (though convenient) part of\n CH> postgreSQL (If someone wants to make money with postgreSQL he\n CH> simply must not link with readline)\n\nJust for exactness: readline doesn't prevent you to make money with\nanything, it only states licensing conditions (these are two quite\ndifferent things).\n\nMilan Zamazal\n\n", "msg_date": "22 Oct 1999 10:15:07 +0200", "msg_from": "Milan Zamazal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Readline use in trouble?" } ]