threads
|
---|
[
{
"msg_contents": "Marc (et al),\n\nThere is a new community web site at:\n\nhttp://advogato.org/\n\nA goal is to find common ground between projects.\nThe easy way for many projects has been to\nuse a not-really-a-RDBMS-engine.\nIt would be a good thing for pgsql-core to get \ninvolved to get more support for our engine \nin various free software projects.\n\nregards,\nGöran\n",
"msg_date": "Sun, 21 Nov 1999 03:38:37 +0100",
"msg_from": "Goran Thyni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Advogato"
}
] |
[
{
"msg_contents": "I have been investigating Oleg's report of backend crashes induced by\ndrop index/vacuum/create index in parallel with table updates. The\nproblem is pretty simple: there's not enough of an interlock between\nDROP INDEX and normal activities on tables. In particular, it's\nentirely possible for DROP INDEX to drop an index that someone else\nis busy scanning or adding entries to :-(\n\nI find that things get a *lot* more stable if DROP INDEX grabs an\nexclusive lock on the parent table, not just on the target index.\nThat prevents the DROP from occurring while someone is actively\nscanning or updating the index as part of a query on the parent\ntable. However, I still get infrequent complaints like\nERROR: IndexSelectivity: index 292801 not found\nThe reason for that is that the parser and planner don't have any\nkind of lock on the tables they are working with --- we don't grab\nlocks until the executor starts. So, the planner can look around\nfor indexes for the tables in the query, find some, and then discover\nthat the indexes have gone away when it tries to do something with them.\n\nI was thinking about solving this by moving lock-acquisition out of\nthe executor and up to the start of the planning process; if we have\nsome kind of lock on every table mentioned in the query, we can be\nsure that DROP INDEX won't be able to remove any indexes that we are\nworking with in the planner.\n\nHowever, that doesn't completely eliminate this class of problems,\nbecause the parser and rewriter are still exposed. They don't care\nabout indexes, but they do care about table definitions. For example,\n\"SELECT * FROM table\" could generate an obsolete expansion for \"*\"\nif an ALTER TABLE ADD/DROP/RENAME COLUMN commits after parsing starts.\n\nGrabbing locks in the parser doesn't seem like a hot idea, mainly\nbecause the parser doesn't have full information --- it has no idea what\nwill happen downstream in the rewriter. 
If the parser grabs a read-type\nlock on some table, and then a rewrite rule requires us to get a\nstronger lock on that table, then we've just created a potential for\ndeadlock against some other backend. So I'm not sure there is any cure\nthat's not worse than the disease for the parser. (Most of the bad\nscenarios here require ALTER TABLE functionality that we don't have\nanyway.)\n\nI have no idea what can go wrong in the rewriter if tables are being\naltered underneath it, nor whether it's practical to grab locks to\nprevent that. Jan, at what point can we be sure we know the type\nof lock needed for each table used in a query? The planner has\ncomplete info when it starts, but can the rewriter know this before\nit's too late?\n\nI'm going to commit the change to make DROP INDEX lock the parent table,\nbut I wanted to see if anyone has any comments or better ideas about\nobtaining execution locks in the planner instead of the executor...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 21 Nov 1999 14:47:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "locking needed for parsing & planning"
}
] |
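The deadlock scenario Tom sketches above (parser takes a read lock, a rewrite rule later demands a stronger lock, and two backends doing this collide) can be illustrated with a toy model. This is a hypothetical sketch, not PostgreSQL's lock manager: the class, method names, and granting rule here are invented for illustration.

```python
# Toy model of the lock-upgrade deadlock described above: two backends each
# take a shared lock on the same table (as a parser might), then both try to
# upgrade to exclusive (as a rewrite rule might require). Neither upgrade can
# be granted while the other backend's shared lock is still held.

class TableLock:
    def __init__(self):
        self.shared = set()       # backends holding shared locks
        self.exclusive = None     # backend holding the exclusive lock, if any

    def acquire_shared(self, backend):
        if self.exclusive not in (None, backend):
            return False
        self.shared.add(backend)
        return True

    def try_upgrade(self, backend):
        # an upgrade can be granted only if no *other* backend holds a
        # shared lock and no one else holds the exclusive lock
        others = self.shared - {backend}
        if others or self.exclusive not in (None, backend):
            return False
        self.exclusive = backend
        return True

lock = TableLock()
assert lock.acquire_shared("backend A")
assert lock.acquire_shared("backend B")
# Both now want to upgrade; each is blocked by the other's shared lock,
# and neither can proceed -- the classic upgrade deadlock.
print(lock.try_upgrade("backend A"))  # False
print(lock.try_upgrade("backend B"))  # False
```

This is why acquiring the *final* lock strength once, as early as the needed strength is known (Tom's proposal of locking at plan start rather than in the executor), avoids the upgrade path entirely.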
[
{
"msg_contents": "I have devised a simple manual way of reproducing that peculiar VACUUM\nnotice that Oleg has been complaining about, but didn't have a reliable\nway of triggering on-demand. It looks like it is caused by some sort of\nbug in the transaction commit logic --- or maybe just VACUUM's piece of\nit, but anyway there is something mucho bad going on here.\n\nSetup:\n\ncreate table hits (msg_id int, nhits int);\ncreate index hits_pkey on hits(msg_id);\ninsert into hits values(42,0);\ninsert into hits values(43,0);\n\nGiven this setup, you can do\n\ndrop index hits_pkey;\nupdate hits set nhits = nhits+1 where msg_id = 42;\ncreate index hits_pkey on hits(msg_id);\nvacuum analyze hits;\n\nall day with no problem.\n\nBUT: start up another psql, and in that other psql begin a transaction\nblock and touch anything at all --- doesn't have to be the table under\ntest:\n\nbegin;\nselect * from int4_tbl;\n\nNow, *without committing* that other transaction, go back to the first\npsql and try again:\n\ndrop index hits_pkey;\nupdate hits set nhits = nhits+1 where msg_id = 42;\ncreate index hits_pkey on hits(msg_id);\nvacuum analyze hits;\nNOTICE: Index hits_pkey: NUMBER OF INDEX' TUPLES (2) IS NOT THE SAME AS HEAP' (3).\nTry recreating the index.\n\nYou can repeat the vacuum (with or without analyze) as often as you want\nand you'll get the same notice each time. If you do more UPDATEs, the\nreported number of heap tuples increases --- rather odd, considering\nthere are obviously only two committed tuples in the table (as can be\nconfirmed by a SELECT).\n\nAs soon as you commit or abort the other transaction, everything goes\nback to normal.\n\nThere are variants of this sequence that also cause the problem. The\ncritical factor seems to be that both the index itself and at least one\ntuple in the table have to be younger than the oldest uncommitted\ntransaction.\n\nAt this point I decided that I was in over my head, so I'm tossing the\nwhole mess in Vadim's direction. 
I can't tell whether VACUUM itself\nis confused or the transaction logic in general is, but it sure looks\nlike something is looking at the wrong xact to decide whether tuples\nhave been committed or not. This could be a symptom of a fairly serious\nlogic error down inside tuple time qual checks...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 21 Nov 1999 18:00:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Reproducible vacuum complaint!"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: Monday, November 22, 1999 8:00 AM\n> To: [email protected]\n> Subject: [HACKERS] Reproducible vacuum complaint!\n> \n> \n> I have devised a simple manual way of reproducing that peculiar VACUUM\n> notice that Oleg has been complaining about, but didn't have a reliable\n> way of triggering on-demand. It looks like it is caused by some sort of\n> bug in the transaction commit logic --- or maybe just VACUUM's piece of\n> it, but anyway there is something mucho bad going on here.\n> \n> Setup:\n> \n> create table hits (msg_id int, nhits int);\n> create index hits_pkey on hits(msg_id);\n> insert into hits values(42,0);\n> insert into hits values(43,0);\n> \n> Given this setup, you can do\n> \n> drop index hits_pkey;\n> update hits set nhits = nhits+1 where msg_id = 42;\n> create index hits_pkey on hits(msg_id);\n> vacuum analyze hits;\n> \n> all day with no problem.\n> \n> BUT: start up another psql, and in that other psql begin a transaction\n> block and touch anything at all --- doesn't have to be the table under\n> test:\n> \n> begin;\n> select * from int4_tbl;\n> \n> Now, *without committing* that other transaction, go back to the first\n> psql and try again:\n> \n> drop index hits_pkey;\n> update hits set nhits = nhits+1 where msg_id = 42;\n> create index hits_pkey on hits(msg_id);\n> vacuum analyze hits;\n> NOTICE: Index hits_pkey: NUMBER OF INDEX' TUPLES (2) IS NOT THE \n> SAME AS HEAP' (3).\n> Try recreating the index.\n>\n\nHmm,if \"select * ..\" runs in SERIALIZABLE isolation level,the transaction\nwould see an old \"msg_id=42\" tuple(not new one). So vacuum doesn't\nvanish the old \"msg_id=42\" tuple. Vacuum takes all running transactions\ninto account. 
But AFAIK,there's no other such stuff.\nCREATE INDEX may be another one which should take all running \ntransactions into account.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Mon, 22 Nov 1999 09:29:17 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Reproducible vacuum complaint!"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Hmm,if \"select * ..\" runs in SERIALIZABLE isolation level,the transaction\n> would see an old \"msg_id=42\" tuple(not new one). So vacuum doesn't\n> vanish the old \"msg_id=42\" tuple. Vacuum takes all running transactions\n> into account. But AFAIK,there's no other such stuff.\n> CREATE INDEX may be another one which should take all running \n> transactions into account.\n\nOh, I think I see --- you mean that CREATE INDEX needs to make index\nentries for tuples that are committed dead but might still be visible\nto some running transaction somewhere. Yes, that seems to fit what\nI was seeing. VACUUM always complained that there were too few\nindex entries, never too many.\n\nIt looks like btbuild() only indexes tuples that satisfy SnapshotNow,\nso this is definitely a potential problem for btree indexes. The other\nindex types are likely broken in the same way...\n\nComments anyone? What time qual should btbuild and friends be using,\nif not that?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 21 Nov 1999 23:11:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Reproducible vacuum complaint! "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Hmm,if \"select * ..\" runs in SERIALIZABLE isolation level,the transaction\n> > would see an old \"msg_id=42\" tuple(not new one). So vacuum doesn't\n> > vanish the old \"msg_id=42\" tuple. Vacuum takes all running transactions\n> > into account. But AFAIK,there's no other such stuff.\n> > CREATE INDEX may be another one which should take all running\n> > transactions into account.\n...\n> It looks like btbuild() only indexes tuples that satisfy SnapshotNow,\n> so this is definitely a potential problem for btree indexes. The other\n> index types are likely broken in the same way...\n> \n> Comments anyone? What time qual should btbuild and friends be using,\n> if not that?\n\nSeems that we need in new \n\n#define SnapshotAny\t\t((Snapshot) 0x2)\n\nand new HeapTupleSatisfiesAny() returning TRUE for any tuple\nwith valid and committed (or current xact id) t_xmin.\n\n-:(\n\nSorry, I missed CREATE INDEX case.\n\nVadim\nP.S. I'll comment about indices and vacuum latter...\n",
"msg_date": "Mon, 22 Nov 1999 12:19:11 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Reproducible vacuum complaint!"
},
{
"msg_contents": "When I vacuum the database (PostgreSQL 6.5.3 on SuSE 6.3 Linux, 2.2 kernel), I get the\nfollowing error message:\n\nERROR: HEAP_MOVED_IN was not expected.\nvacuumdb: database vacuum failed on ntis\n\nThis error only seems to occur after I have used the trim function to clean up one of\nthe rows in the msg table of a database called ntis:\n\n\nntis=>update msg set description = trim(description);\nUPDATE 12069\nntis=>\n\n\nTo try and track down the problem, I wrote a C program (using ecpg) that trimmed the\ntable one row at a time and vacuumed between each row operation. I was hoping that\nthis program would reveal a problem with the data in one of my records. Unfortunately\nthe one row at a time approach did not reveal the problem and each vacuum operated\nwithout error.\n\nCan anyone tell me what a HEAP_MOVED_IN error is - I checked the source but was not\nfamiliar enough to understand it? Any ideas on why trim() may have cause it?\n\n\n\n\n\n",
"msg_date": "Thu, 30 Dec 1999 08:43:56 -0800",
"msg_from": "Stephen Birch <[email protected]>",
"msg_from_op": false,
"msg_subject": "HEAP_MOVED_IN error during vacuum?"
},
{
"msg_contents": "Stephen Birch <[email protected]> writes:\n> When I vacuum the database (PostgreSQL 6.5.3 on SuSE 6.3 Linux, 2.2\n> kernel), I get the following error message:\n\n> ERROR: HEAP_MOVED_IN was not expected.\n\n> Can anyone tell me what a HEAP_MOVED_IN error is - I checked the\n> source but was not familiar enough to understand it? Any ideas on why\n> trim() may have cause it?\n\nWhen VACUUM moves a tuple from one disk page to another (to compact the\ntable), the original tuple is marked HEAP_MOVED_OFF and the copy is\nmarked HEAP_MOVED_IN temporarily, until the VACUUM is ready to commit.\nThis is supposed to ensure that a failure partway through VACUUM won't\ncorrupt your table by leaving you with two copies of the same tuple.\n(The HEAP_MOVED_OFF copy is valid until VACUUM commits, and the\nHEAP_MOVED_IN copy is valid afterwards.)\n\nI haven't heard of other reports of this error message, so I suspect\nyou have found some hard-to-hit boundary condition error in VACUUM's\ndata-shuffling logic. I guess that the reason you don't see the error\nafter a single trim() is that not very much data-shuffling is needed to\ncompact the table after just one tuple update.\n\nWhat we need is a reproducible test case so we can chase down the bug\n--- any chance you can provide one?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 30 Dec 1999 21:05:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] HEAP_MOVED_IN error during vacuum? "
}
] |
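The visibility rules at the heart of this thread can be sketched in miniature. This is a simplified, hypothetical model (not PostgreSQL's actual tqual code): `satisfies_now` mimics the SnapshotNow rule that btbuild() was using, and `satisfies_any` mimics the "SnapshotAny"-style rule Vadim proposes, which must also keep tuples deleted by a committed transaction that may still be visible to an older running transaction.

```python
# Simplified sketch of the tuple-visibility rules discussed above.
# Invented names and rules, for illustration only.

from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class HeapTuple:
    xmin: int                   # inserting transaction id
    xmax: Optional[int] = None  # deleting transaction id, if any

def satisfies_now(t: HeapTuple, committed: Set[int]) -> bool:
    """SnapshotNow-style rule: the inserter committed and no committed
    transaction has deleted the tuple."""
    if t.xmin not in committed:
        return False
    return t.xmax is None or t.xmax not in committed

def satisfies_any(t: HeapTuple, committed: Set[int], running: Set[int]) -> bool:
    """Index-build rule sketched in the thread: additionally keep tuples
    whose deletion might still be invisible to some running transaction."""
    if t.xmin not in committed:
        return False
    if t.xmax is None:
        return True
    # committed-dead tuple: it must still be indexed if some transaction
    # older than the deleting one is still running
    return any(xid < t.xmax for xid in running)

# The old version of the updated "msg_id=42" row (deleted by xact 20) is
# invisible to SnapshotNow, so btbuild() skips it -- but VACUUM still counts
# it in the heap while xact 15 runs, hence the tuple-count mismatch notice.
committed = {10, 20}
old_version = HeapTuple(xmin=10, xmax=20)
print(satisfies_now(old_version, committed))        # False
print(satisfies_any(old_version, committed, {15}))  # True
```

Under this model, an index built with the SnapshotNow-style rule has fewer entries than the heap has tuples kept alive for the old transaction, matching the "NUMBER OF INDEX' TUPLES (2) IS NOT THE SAME AS HEAP' (3)" notice.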
[
{
"msg_contents": "Can someone explain why there are two network_ops in the pg_opclass\ntable? I am trying to make pg_opclass unique. We have a cache on\npg_opclass.opcname, so we clearly have a problem here. \n\nAlso, is it safe to set opcdeftype to a non-zero value so I can make\nthat index unique too?\n\nThis stuff confusing.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Nov 1999 01:33:55 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "cache question"
},
{
"msg_contents": "> Can someone explain why there are two network_ops in the pg_opclass\n> table? I am trying to make pg_opclass unique. We have a cache on\n> pg_opclass.opcname, so we clearly have a problem here. \n> \n> Also, is it safe to set opcdeftype to a non-zero value so I can make\n> that index unique too?\n> \n> This stuff confusing.\n\nLooks like I fixed it. I made on inet_ops and the other cidr_ops. \nThose names should be unique in there anyway. They are just used when\nspecifying the ops on an index create. The zero entries I just set to\ndummy values for int24 and int42 because there are no type that match\nthem. I set their typedef to be the same as the ops oid. Hope the\nsanity check doesn't complain.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Nov 1999 01:50:50 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] cache question"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Can someone explain why there are two network_ops in the pg_opclass\n> table?\n\nLooks like one is for type inet and the other for type cidr. I'm\nstill confused about the difference between the two types (hey,\nI ain't Paul Vixie) but I suspect we don't really need two entries\nin pg_opclass for them --- the types are binary-equivalent according\nto parse_coerce.h. If we do need two entries, they should be given\ndifferent names.\n\n> This stuff confusing.\n\nIt's 1:30AM EST ... way past time to be doing serious work, at least\nin this time zone ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Nov 1999 02:02:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cache question "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Can someone explain why there are two network_ops in the pg_opclass\n> > table?\n> \n> Looks like one is for type inet and the other for type cidr. I'm\n> still confused about the difference between the two types (hey,\n> I ain't Paul Vixie) but I suspect we don't really need two entries\n> in pg_opclass for them --- the types are binary-equivalent according\n> to parse_coerce.h. If we do need two entries, they should be given\n> different names.\n\nDone. That's how I did it.\n\n> > This stuff confusing.\n> \n> It's 1:30AM EST ... way past time to be doing serious work, at least\n> in this time zone ...\n\nYes, I wanted to get it working. Seems like it works now. Will commit\nsoon.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Nov 1999 10:17:19 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] cache question"
}
] |
[
{
"msg_contents": "The following TODO items seem to be taken care of in current sources,\nbut aren't marked done in TODO:\n\nPARSER\n\n* INSERT INTO ... SELECT with AS columns matching result columns problem\n* UNION with LIMIT fails\n* CREATE TABLE x AS SELECT 1 UNION SELECT 2 fails\n* CREATE TABLE test(col char(2) DEFAULT user) fails in length restriction\n* mismatched types in CREATE TABLE ... DEFAULT causes problems [default]\n* SELECT COUNT('asdf') FROM pg_class WHERE oid=12 crashes\n\nURGENT\n\n* Eliminate limits on query length\n\n(Not quite done, but close enough to put a dash on it...)\n\nTYPES\n\n* Allow compression of large fields or a compressed field type\n\nJan just created a compressed text type, so this is partly done.\nWe speculated a little about adding a lower-level mechanism that would\ncompress whole tuples, so maybe this item should be replaced by one\nmentioning that idea (it wouldn't belong in the TYPES section though).\n\nMISC\n\n* Allow subqueries in target list\n\n* Overhaul mdmgr/smgr to fix double unlinking and double opens, cleanup\n\nIs this one done, or do we still have issues there? I think it's a lot\nbetter than it used to be, anyway...\n\nSOURCE CODE\n\n* Make configure --enable-debug add -g on compile line\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Nov 1999 11:00:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "TODO updates"
},
{
"msg_contents": "> The following TODO items seem to be taken care of in current sources,\n> but aren't marked done in TODO:\n\nThanks. I have trouble figuring out which fixes I hear about go with\nwhich items. I also forget they are on there.\n\n\n> \n> PARSER\n> \n> * INSERT INTO ... SELECT with AS columns matching result columns problem\n> * UNION with LIMIT fails\n> * CREATE TABLE x AS SELECT 1 UNION SELECT 2 fails\n> * CREATE TABLE test(col char(2) DEFAULT user) fails in length restriction\n> * mismatched types in CREATE TABLE ... DEFAULT causes problems [default]\n> * SELECT COUNT('asdf') FROM pg_class WHERE oid=12 crashes\n\nThanks. Done.\n\n> \n> URGENT\n> \n> * Eliminate limits on query length\n> \n> (Not quite done, but close enough to put a dash on it...)\n\nMarked.\n\n> \n> TYPES\n> \n> * Allow compression of large fields or a compressed field type\n> \n> Jan just created a compressed text type, so this is partly done.\n> We speculated a little about adding a lower-level mechanism that would\n> compress whole tuples, so maybe this item should be replaced by one\n> mentioning that idea (it wouldn't belong in the TYPES section though).\n\nMarked. Let someone ask for tuple compression.\n\n> \n> MISC\n> \n> * Allow subqueries in target list\n\nMarked.\n\n> \n> * Overhaul mdmgr/smgr to fix double unlinking and double opens, cleanup\n> \n> Is this one done, or do we still have issues there? I think it's a lot\n> better than it used to be, anyway...\n\nMarked.\n\n> \n> SOURCE CODE\n> \n> * Make configure --enable-debug add -g on compile line\n\nMarked.\n\nThanks. Updated.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Nov 1999 13:03:39 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TODO updates"
},
{
"msg_contents": "\none of the things I remember seeing discussed, and wonder about current\nstatus on:\n\n\tremoval of the whole pg_vlock requirement on vacuum?\n\nlast I recall, MVCC eliminated that requirement, and also made it easier\nto do a vacuum on 'live tables'?\n\nOn Mon, 22 Nov 1999, Bruce Momjian wrote:\n\n> > The following TODO items seem to be taken care of in current sources,\n> > but aren't marked done in TODO:\n> \n> Thanks. I have trouble figuring out which fixes I hear about go with\n> which items. I also forget they are on there.\n> \n> \n> > \n> > PARSER\n> > \n> > * INSERT INTO ... SELECT with AS columns matching result columns problem\n> > * UNION with LIMIT fails\n> > * CREATE TABLE x AS SELECT 1 UNION SELECT 2 fails\n> > * CREATE TABLE test(col char(2) DEFAULT user) fails in length restriction\n> > * mismatched types in CREATE TABLE ... DEFAULT causes problems [default]\n> > * SELECT COUNT('asdf') FROM pg_class WHERE oid=12 crashes\n> \n> Thanks. Done.\n> \n> > \n> > URGENT\n> > \n> > * Eliminate limits on query length\n> > \n> > (Not quite done, but close enough to put a dash on it...)\n> \n> Marked.\n> \n> > \n> > TYPES\n> > \n> > * Allow compression of large fields or a compressed field type\n> > \n> > Jan just created a compressed text type, so this is partly done.\n> > We speculated a little about adding a lower-level mechanism that would\n> > compress whole tuples, so maybe this item should be replaced by one\n> > mentioning that idea (it wouldn't belong in the TYPES section though).\n> \n> Marked. Let someone ask for tuple compression.\n> \n> > \n> > MISC\n> > \n> > * Allow subqueries in target list\n> \n> Marked.\n> \n> > \n> > * Overhaul mdmgr/smgr to fix double unlinking and double opens, cleanup\n> > \n> > Is this one done, or do we still have issues there? 
I think it's a lot\n> > better than it used to be, anyway...\n> \n> Marked.\n> \n> > \n> > SOURCE CODE\n> > \n> > * Make configure --enable-debug add -g on compile line\n> \n> Marked.\n> \n> Thanks. Updated.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 22 Nov 1999 21:45:21 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: TODO updates"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> one of the things I remember seeing discussed, and wonder about current\n> status on:\n> \tremoval of the whole pg_vlock requirement on vacuum?\n\nI have that on my to-do list; as far as I know it's a trivial code\nchange, but I just haven't gotten to it. Maybe I'll try it tonight.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Nov 1999 20:51:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: TODO updates "
},
{
"msg_contents": "On Mon, 22 Nov 1999, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > one of the things I remember seeing discussed, and wonder about current\n> > status on:\n> > \tremoval of the whole pg_vlock requirement on vacuum?\n> \n> I have that on my to-do list; as far as I know it's a trivial code\n> change, but I just haven't gotten to it. Maybe I'll try it tonight.\n\nis this something that could safely be back-patched into v6.5.x's tree?\nhave at least one project that could really use the ability to vacuum a\ndatabase without the tables being locked :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 22 Nov 1999 22:51:42 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: TODO updates "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n>>>> removal of the whole pg_vlock requirement on vacuum?\n>> \n>> I have that on my to-do list; as far as I know it's a trivial code\n>> change, but I just haven't gotten to it. Maybe I'll try it tonight.\n\n> is this something that could safely be back-patched into v6.5.x's tree?\n\nWell, mumble. We could probably back-patch the ability to run more than\none vacuum at a time, since that'd be local to vacuum.c. But I think\nthat'd encourage people to run vacuum in parallel with *other* database\nactivities, something we know is not very safe in 6.5! That whole\nset of changes I made to enforce more careful locking was mainly\nmotivated by Oleg's examples of crashes when system table changes were\nmade in parallel with vacuuming of the system tables.\n\nIn short, I think it'd be a risky thing to do. I'm not even 100%\nconfident that it will work reliably in current sources; I'll check\nit out before I commit it, but I won't be really comfortable until\nwe've had a beta-test cycle on it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Nov 1999 23:58:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: TODO updates "
}
] |
[
{
"msg_contents": "\n> > * Since no one has picked up on my idea to run the tests \n> directly on the\n> > backend, I will keep reiterating this idea until someone shuts me up\n> > :*) The idea would be to have a target \"check\" in the top \n> level makefile\n> > like this (conceptually):\n> \n> Running the backend standalone does not use locking with \n> other backends,\n> so it is dangerous.\n> \n\nWell, he is talking about a newly created database that resides in the build\n\ndirectory tree, so this is not of concern, since it is nowhere that any\nother backend \ncan connect to.\n\nMy concern would rather be, that you would not be testing the locking, exev\n....\nIs it possible to start one backend in the same way postmaster would start\nit ?\n\nAndreas\n",
"msg_date": "Mon, 22 Nov 1999 17:07:14 +0100",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] psql & regress tests"
}
] |
[
{
"msg_contents": "\n> > once inserted, a row keeps its oid. so, when performing complex\n> > selects, i'll often grab the oid too... do some tests on \n> the returned\n> > values, and if an action is appropriate on that row, i \n> reference it by\n> > its oid. the only chance of this failing is if the \n> database is dumped\n> > then restored between the select and the update (not gonna \n> happen, as\n> > the program requires the database available for execution)... using\n> > the oid this way, its often simpler and faster to update a \n> known row,\n> > especially when the initial select involved many fields.\n> \n> Yes, I use 'em the same way. I think an OID is kind of like a pointer\n> in a C program: good for fast, unique access to an object within the\n> context of the execution of a particular application (and maybe not\n> even that long). You don't write pointers into files to be used again\n> by other programs, though, and in the same way an OID isn't a good\n> candidate for a long-lasting reference from one table to another.\n> \n> \t\t\tregards, tom lane\n> \n\nI thought this special case is where the new xid access method would come\nin.\nIt is actually a lot faster than oid access, since it marks the physical\nposition\nthat would otherwise need to be looked up in the oid index. This same oid\nindex\nwould also add unneeded overhead to the inserts and updates.\n\nIs someone still working on the xid access ?\n\nAndreas\n",
"msg_date": "Mon, 22 Nov 1999 17:28:11 +0100",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Getting OID in psql of recent insert "
},
{
"msg_contents": "Zeugswetter Andreas SEV <[email protected]> writes:\n>> Yes, I use 'em the same way. I think an OID is kind of like a pointer\n>> in a C program: good for fast, unique access to an object within the\n>> context of the execution of a particular application (and maybe not\n>> even that long). You don't write pointers into files to be used again\n>> by other programs, though, and in the same way an OID isn't a good\n>> candidate for a long-lasting reference from one table to another.\n\n> I thought this special case is where the new xid access method would come\n> in.\n\nGood point, but (AFAIK) you could only use it for tables that you were\nsure no other client was updating in parallel. Otherwise you might be\nupdating a just-obsoleted tuple. Or is there a solution for that?\n\n> Is someone still working on the xid access ?\n\nI think we have the ability to refer to CTID in WHERE now, but not yet an\naccess method that actually makes it fast...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Nov 1999 14:10:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Getting OID in psql of recent insert "
},
{
"msg_contents": "> Zeugswetter Andreas SEV <[email protected]> writes:\n> >> Yes, I use 'em the same way. I think an OID is kind of like a pointer\n> >> in a C program: good for fast, unique access to an object within the\n> >> context of the execution of a particular application (and maybe not\n> >> even that long). You don't write pointers into files to be used again\n> >> by other programs, though, and in the same way an OID isn't a good\n> >> candidate for a long-lasting reference from one table to another.\n> \n> > I thought this special case is where the new xid access method would come\n> > in.\n> \n> Good point, but (AFAIK) you could only use it for tables that you were\n> sure no other client was updating in parallel. Otherwise you might be\n> updating a just-obsoleted tuple. Or is there a solution for that?\n> \n> > Is someone still working on the xid access ?\n> \n> I think we have the ability to refer to CTID in WHERE now, but not yet an\n> access method that actually makes it fast...\n\nHiroshi supplied a patch to allow it in the executor, and I applied it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Nov 1999 14:37:54 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Getting OID in psql of recent insert"
}
] |
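The caveat Tom raises about CTID-style access (you might address a just-obsoleted tuple) follows directly from MVCC's no-overwrite storage: an UPDATE writes a new tuple version at a new physical location, so a physical pointer captured earlier now names the old version. A hypothetical toy model, with all names invented for illustration:

```python
# Toy no-overwrite heap: each update creates a new tuple version at a new
# slot, leaving the old version in place but marked obsolete -- so a saved
# physical pointer (ctid-like) goes stale, as discussed in the thread.

heap = {}       # physical slot -> (row data, is_current)
next_slot = 0

def heap_insert(row):
    global next_slot
    heap[next_slot] = (row, True)
    next_slot += 1
    return next_slot - 1

def heap_update(slot, new_row):
    """Mark the old version obsolete and write the new version elsewhere."""
    data, _ = heap[slot]
    heap[slot] = (data, False)     # old version kept, no longer current
    return heap_insert(new_row)

ptr = heap_insert({"msg_id": 42, "nhits": 0})
new_ptr = heap_update(ptr, {"msg_id": 42, "nhits": 1})
print(heap[ptr])      # the saved pointer now addresses the stale version
print(heap[new_ptr])  # the current version lives at a different slot
```

An OID, by contrast, is a *logical* identifier carried along to every new version, which is why it survives updates (though not a dump/restore), while a physical pointer is only safe as long as no concurrent update moves the row.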
[
{
"msg_contents": "[I've been working with Jef Peeraer on building the RPM's on SuSE Linux,\nand ran across what I think is an autoconf issue. The configure script\ncomplained that it couldn't find the tk config script\n(/usr/lib/tkConfig.sh on most tk installs), which botched the whole\nbuild in an obscure place.]\n\n>Subject : postgres RPM build on a Suse 6.2 machine.\n>Tk is installed, but it is maybe the location that matters :\n>/usr/X11R6/lib/tkConfig.sh. \n>Is a part of this path hard coded somewhere ? ( auto-configure ). \n\nBruce, I seem to recall that the PATH_TO_WISH issue with pgaccess\nprompted some autoconf stuff -- is this related? \n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 22 Nov 1999 11:40:17 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres RPM build on Suse linux 6.2"
},
{
"msg_contents": "> [I've been working with Jef Peeraer on building the RPM's on SuSE Linux,\n> and ran across what I think is an autoconf issue. The configure script\n> complained that it couldn't find the tk config script\n> (/usr/lib/tkConfig.sh on most tk installs), which botched the whole\n> build in an obscure place.]\n> \n> >Subject : postgres RPM build on a Suse 6.2 machine.\n> >Tk is installed, but it is maybe the location that matters :\n> >/usr/X11R6/lib/tkConfig.sh. \n> >Is a part of this path hard coded somewhere ? ( auto-configure ). \n> \n> Bruce, I seem to recall that the PATH_TO_WISH issue with pgaccess\n> prompted some autoconf stuff -- is this related? \n\nNot related, I think. PATH_TO_WISH is just set to @WISH@ in the actual\nexecutable. I never touched the tkConfig stuff.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Nov 1999 12:47:26 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: postgres RPM build on Suse linux 6.2"
},
{
"msg_contents": "Bruce Momjian wrote:\n> > Bruce, I seem to recall that the PATH_TO_WISH issue with pgaccess\n> > prompted some autoconf stuff -- is this related?\n> \n> Not related, I think. PATH_TO_WISH is just set to @WISH@ in the actual\n> executable. I never touched the tkConfig stuff.\n\nOk. I'm going to have to do some digging -- there are a multitude of\nother X11-related configure shenanigans I'm going to have to take care\nof for building the RPMs on SuSE. I eventually hope to have it where\npeople can rebuild the RPM set with a simple 'rpm --rebuild' instead of\nwhat some are having to do now. On RedHat Intel, Sparc, or Alpha, the\n--rebuild is enough -- but RedHat is not the only RPM-based distribution\n(nor is linux the only OS that can have RPM installed....). Time to buy\nCheapBytes' Mondo CD pack (five linux distributions on CD)....\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 22 Nov 1999 13:37:39 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: postgres RPM build on Suse linux 6.2"
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> [I've been working with Jef Peeraer on building the RPM's on SuSE Linux,\n> and ran across what I think is an autoconf issue. The configure script\n> complained that it couldn't find the tk config script\n> (/usr/lib/tkConfig.sh on most tk installs), which botched the whole\n> build in an obscure place.]\n\n--with-tclconfig and/or --with-tkconfig should fix this. I forget\nwhether those are in 6.5.* though.\n\nSince we also look for \"wish\" in the PATH, it would be nice if we could\nask wish where the tcl/tk config files are, instead of having to search\nfor them ourselves. But AFAIK you can't fire up a wish without having\nan X display for it to connect to ... and configure can't assume that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Nov 1999 14:15:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: postgres RPM build on Suse linux 6.2 "
},
{
"msg_contents": "\nOn 22-Nov-99 Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n>> [I've been working with Jef Peeraer on building the RPM's on SuSE Linux,\n>> and ran across what I think is an autoconf issue. The configure script\n>> complained that it couldn't find the tk config script\n>> (/usr/lib/tkConfig.sh on most tk installs), which botched the whole\n>> build in an obscure place.]\n> \n> --with-tclconfig and/or --with-tkconfig should fix this. I forget\n> whether those are in 6.5.* though.\n> \n> Since we also look for \"wish\" in the PATH, it would be nice if we could\n> ask wish where the tcl/tk config files are, instead of having to search\n> for them ourselves. But AFAIK you can't fire up a wish without having\n> an X display for it to connect to ... and configure can't assume that.\n\nI added --with-tkconfig somewhere around March 4, 1999. That's the date\non my patch file anyway. Does that help any?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Mon, 22 Nov 1999 16:11:34 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: postgres RPM build on Suse linux 6.2"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> > --with-tclconfig and/or --with-tkconfig should fix this. I forget\n> > whether those are in 6.5.* though.\n \n> I added --with-tkconfig somewhere around March 4, 1999. That's the date\n> on my patch file anyway. Does that help any?\n\nWell, I told the guy to symlink /usr/X11/lib/tkConfig.sh to\n/usr/lib/tkConfig.sh to assist in troubleshooting the build -- and\nthat's when we run up against the X stuff. He is confirming that he\nindeed has the X11 development headers and libs installed.\n\nThe long term solution is to add distribution-specific configure lines\nto the spec file, selected with RPM directives. We just have to figure\nout the configure options and other shenanigans for each distribution,\nthen put the appropriate directives in place. Once we get his system\nbuilding the RPM's reliably, then we'll do the --with-tkconfig etc\ndirectives in the spec file, with the selector directives inline. At\nthat point, whether you are on RedHat or SuSE, you just type 'rpm\n--rebuild postgresql-6.5.3-x.src.rpm' and the entire build, install, and\npackaging operations are executed in a fully automatic fashion,\nproducing installable binary RPM's. At least that's my goal ;-).\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 22 Nov 1999 16:28:33 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: postgres RPM build on Suse linux 6.2"
},
{
"msg_contents": "> Ok. I'm going to have to do some digging -- there are a multitude of\n> other X11-related configure shenanigans I'm going to have to take care\n> of for building the RPMs on SuSE. I eventually hope to have it where\n> people can rebuild the RPM set with a simple 'rpm --rebuild' instead of\n> what some are having to do now. On RedHat Intel, Sparc, or Alpha, the\n> --rebuild is enough -- but RedHat is not the only RPM-based distribution\n> (nor is linux the only OS that can have RPM installed....). Time to buy\n> CheapBytes' Mondo CD pack (five linux distributions on CD)....\n\nI've sent off mail to the Mandrake folks regarding the Postgres RPMs;\nwill let you know what I find out. Current problems:\n\n1) they don't have the latest release. I asked whether they had a\nmechanism for releasing updates to packages over and above the limited\nnumber I see on their site.\n\n2) they seem to have omitted the .src.rpm from their distro, so I\ncan't see how they build their i586-specific packages.\n\nbtw, they show the same /usr/lib/pgsql permissions problem.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 23 Nov 1999 02:51:12 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Mandrake RPMs (was RPM build on Suse linux 6.2)"
},
{
"msg_contents": "On Mon, 22 Nov 1999, Thomas Lockhart wrote:\n> > --rebuild is enough -- but RedHat is not the only RPM-based distribution\n> > (nor is linux the only OS that can have RPM installed....). Time to buy\n> > CheapBytes' Mondo CD pack (five linux distributions on CD)....\n \n> I've sent off mail to the Mandrake folks regarding the Postgres RPMs;\n> will let you know what I find out. Current problems:\n \n> 1) they don't have the latest release. I asked whether they had a\n> mechanism for releasing updates to packages over and above the limited\n> number I see on their site.\n\nThe last Mandrake release is for the Cooker development setup (go to\nrpmfind.net, Mandrake Cooker, pull up a RPM list by name, and go to the P's.). \nThey last put in 6.5.2-1 in Cooker. And Cooker has the src.rpm.\n\n> btw, they show the same /usr/lib/pgsql permissions problem.\n\nYeah, that one is going to bite us one day. There was never a documented need\nto change it before. It is changed in my local copy of the spec file already,\nso it will go into the next build (as the gruesome HOWTO gets canned at the\nsame time.... I didn't want to look _too_ eager to find an excuse to release\nanother RPM set so soon after our HOWTO discussion and resolution....). \n\nDo you have a preference as to doing a quick bugfix RPM release versus waiting\na little to get other bugs squashed at the same time?\n\nCan you forward Mandrake's reply to me??\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 22 Nov 1999 22:58:02 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Mandrake RPMs (was RPM build on Suse linux 6.2)"
},
{
"msg_contents": "> > > --rebuild is enough -- but RedHat is not the only RPM-based distribution\n> > > (nor is linux the only OS that can have RPM installed....). Time to buy\n> > > CheapBytes' Mondo CD pack (five linux distributions on CD)....\n> > I've sent off mail to the Mandrake folks regarding the Postgres RPMs;\n> > will let you know what I find out. Current problems:\n> > 1) they don't have the latest release. I asked whether they had a\n> > mechanism for releasing updates to packages over and above the limited\n> > number I see on their site.\n> The last Mandrake release is for the Cooker development setup (go to\n> rpmfind.net, Mandrake Cooker, pull up a RPM list by name, and go to the P's.).\n> They last put in 6.5.2-1 in Cooker. And Cooker has the src.rpm.\n\nI'm just using yours for now. Interesting: on my new laptop it builds\n*i686* rpms, since the Mandrake /usr/lib/rpm/rpmrc is very aggressive\nabout matching the build machine. I have created a /root/.rpmrc to\noverride this, but I'm planning on posting some optimized RPMs at\npostgresql.org.\n\nWe might also want to consider building some non-locale-enabled RPMs\nso folks can get the speed boost if they aren't using non-ascii\nEnglish.\n\nI've changed a couple of lines in the spec file; diffs included below.\n\n> Can you forward Mandrake's reply to me??\n\nSure. Haven't heard from them yet...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California",
"msg_date": "Tue, 23 Nov 1999 15:26:30 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mandrake RPMs (was RPM build on Suse linux 6.2)"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> > They last put in 6.5.2-1 in Cooker. And Cooker has the src.rpm.\n> \n> I'm just using yours for now. Interesting: on my new laptop it builds\n> *i686* rpms, since the Mandrake /usr/lib/rpm/rpmrc is very aggressive\n> about matching the build machine. I have created a /root/.rpmrc to\n> override this, but I'm planning on posting some optimized RPMs at\n> postgresql.org.\n\nSounds good. Yeah, Mandrake has been quite aggressive on the CPU\noptimization front -- however, I still have some 486's running here, so\nthe i386 binaries work for me. If we want to distribute i586 and i686\nversions, I have no complaints.\n\nI looked at the Mandrake Cooker spec file -- and there are differences\nall over the place -- most noticeably in the truncation of the\nChangelog. The most notable difference is in the use of bzip2 to\ncompress the man pages.\n\n> We might also want to consider building some non-locale-enabled RPMs\n> so folks can get the speed boost if they aren't using non-ascii\n> English.\n\nOn my TODO list for the next RPM release (which is looking to be sooner\nthan I had expected....).\n\n> I've changed a couple of lines in the spec file; diffs included below.\n\nLooks familiar -- fixing the permissions bug, and bumping the version.\n\n> > Can you forward Mandrake's reply to me??\n \n> Sure. Haven't heard from them yet...\n\nThanks. I had e-mailed Axalon Bloodstone directly, as he did the\npackaging for the Cooker RPM. \n\nAlso, Jef and I got the RPM's built, installed, and running under SuSE\n6.2. I'm going to put up a page on ramifordistat detailing the steps and\npackages needed. Man, SuSE is far different from RedHat!\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 23 Nov 1999 14:42:30 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Mandrake RPMs (was RPM build on Suse linux 6.2)"
}
] |
[
{
"msg_contents": "\n> testdb=> \\set singlestep on\n> testdb=> \\set sql_interpol '#'\n> testdb=> \\set foo 'pg_class'\n> testdb=> select * from #foo#;\n\nThis is great, but may I object to the syntax ?\nThe standard sql way to use host variables seems to be:\nselect * from :foo where id = :id\n\nThere will always be the problem with conflicting operators,\nand this one syntax, already needed by ecpg, it is hard enough.\n\nAndreas\n",
"msg_date": "Mon, 22 Nov 1999 18:06:46 +0100",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Getting OID in psql of recent insert "
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> \n> > testdb=> \\set singlestep on\n> > testdb=> \\set sql_interpol '#'\n> > testdb=> \\set foo 'pg_class'\n> > testdb=> select * from #foo#;\n> \n> This is great, but may I object to the syntax ?\n> The standard sql way to use host variables seems to be:\n> select * from :foo where id = :id\n> \n> There will always be the problem with conflicting operators,\n> and this one syntax, already needed by ecpg, it is hard enough.\n\nAgreed.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Nov 1999 12:47:47 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Getting OID in psql of recent insert"
},
{
"msg_contents": "Hi, \n\n> > testdb=> \\set singlestep on\n> > testdb=> \\set sql_interpol '#'\n> > testdb=> \\set foo 'pg_class'\n> > testdb=> select * from #foo#;\n\n In order to solve these problems, I have adopted a approach \nwhich is different from psql. It is 'pgbash'. The pgbash is \nthe system which offers the direct SQL/embedded SQL interface \nfor PostgreSQL by being included in the BASH(Version 2) shell. \nPlease refer to the next URL.\n http://www.psn.co.jp/PostgreSQL/pgbash/index-e.html\n\n ex.)\npgbash> exec_sql \"insert into test values(111,'aaa','bbb')\"\npgabsh> rOID=$SQLOID <---- OID of recent insert\npgbash> exec_sql \"begin\"\npgbash> exec_sql \"declare cur cursor fr select * from test \n> where oid >= $rOID2 and oid <= $rOID\"\npgbash> exec_sql \"fetch in cur into :NUM1, :NAME1\"\npgbash> exec_sql \"fetch in cur into :NUM2, :NAME2\" \npgbash> NUM=$(( $NUM1+$NUM2 ))\npgbash> echo $NUM, $NAME1, $NAME2\npgbash> exec_sql \"end\"\n\n Now, pgbash version is 1.2.3 and this version needs 'exec_sql'\nto execute SQL. However, I have changed a parser of BASH-2.03, \nand pgbash becomes BASH itself in the next version. It does not \nneed to describe 'exec_sql' in order to execute SQL.\n\nex.)\npgbash> insert into test values(111,'aaa','bbb'); \npgbash> rOID = $SQLOID\npgbash> select * from test where oid=$rOID; &> /tmp/work.dat\n\n 'SQL;' becomes one command of BASH shell. Therefore, it is \npossible to use ridirection/pipe with SQL. By this, pgbash has \nthe operability equal to psql and it will also have many functions \nwhich are higher than psql. \n\n I think this approach useful. Comments?\n\n--\nRegards,\nSAKAIDA Masaaki <[email protected]>\nPersonal Software, Inc. Osaka, Japan\n",
"msg_date": "Tue, 23 Nov 1999 18:24:52 +0900",
"msg_from": "SAKAIDA Masaaki <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Getting OID in psql of recent insert "
},
{
"msg_contents": "On 1999-11-22, Zeugswetter Andreas SEV mentioned:\n\n> > testdb=> \\set singlestep on\n> > testdb=> \\set sql_interpol '#'\n> > testdb=> \\set foo 'pg_class'\n> > testdb=> select * from #foo#;\n> \n> This is great, but may I object to the syntax ?\n> The standard sql way to use host variables seems to be:\n> select * from :foo where id = :id\n> \n> There will always be the problem with conflicting operators,\n> and this one syntax, already needed by ecpg, it is hard enough.\n\nI just pulled that syntax out of my hat, since it was the most\nnon-interfering way to go for now, but thanks for the tip, I'll be on the\ntask in a second. Of course, since the SQL standard is such a widely\navailable document, I should have found that myself ;)\n\nIs there also a rule on what those variables can contain? I mean currently\nthey act like C macros, they can contain unbalances quotes, incomplete\nbackslash commands, everything. Or should they be restricted to SQL?\n\nAlso, looking for possible conflicts here, it seems that there is an\noperator ':' for exponentiation, which is of course extremely mnemonic.\nThis reminds me of the ';' operator for logarithms, which I also use all\nthe time in mathematical writing. Are those operators actually standard or\njust somebody's personal idea?\n\nFor example, what does this mean:\nplay=> select value, :value from test;\n value | ?column?\n-------+----------------------\n 5 | 148.413159102577\n 6 | 403.428793492735\n 99 | 9.88903031934695e+42\n(3 rows)\n\nSimilarly for the ';' operator, where there are obvious problems.\nRationale anybody? Are they from PostQUEL times?\n\nUm, okay, this goes even further. There seem to be functions called log\nand dlog1 as well as exp and dexp with essentially the same functionality;\nthe one with the 'd' goes with float8 arguments, the other one with\nnumeric. 
Is this making sense to anybody?\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Wed, 24 Nov 1999 01:56:43 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Getting OID in psql of recent insert "
},
{
"msg_contents": "> Is there also a rule on what those variables can contain? I mean currently\n> they act like C macros, they can contain unbalances quotes, incomplete\n> backslash commands, everything. Or should they be restricted to SQL?\n> \n> Also, looking for possible conflicts here, it seems that there is an\n> operator ':' for exponentiation, which is of course extremely mnemonic.\n> This reminds me of the ';' operator for logarithms, which I also use all\n> the time in mathematical writing. Are those operators actually standard or\n> just somebody's personal idea?\n\nI recommend we rename : and ; to something else. : should be used for\nvariables, and ; is too much like the trailing semicolon.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Nov 1999 20:31:45 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Getting OID in psql of recent insert"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Also, looking for possible conflicts here, it seems that there is an\n> operator ':' for exponentiation, which is of course extremely mnemonic.\n> This reminds me of the ';' operator for logarithms, which I also use all\n> the time in mathematical writing. Are those operators actually standard or\n> just somebody's personal idea?\n\nMy guess is that someone back in the Berkeley days put them in as a tour\nde force in parser-writing. An impressive show of skill it is, too;\nI'm continually astonished that we are not having severe grammar ambiguity\nproblems from the fact that ';' can be an operator as well as a\nstatement terminator. ':' is only marginally safer. Yet, by golly,\nthey parse, and have continued to parse despite extensive later changes\nto the pgsql grammar. Whoever that was knew his stuff.\n\nSooner or later, however, they'll probably break.\n\nI'd not object if we removed these operators and instead provided\nfunctions with the standard names log() and exp() for all the\nnon-integral numeric types. Comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Nov 1999 01:29:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Getting OID in psql of recent insert "
},
{
"msg_contents": "> I'd not object if we removed these operators and instead provided\n> functions with the standard names log() and exp() for all the\n> non-integral numeric types. Comments?\n\nI have no great fondness for \";\" and \":\" as operators, but would like\nto see some operators assigned to these functions. Certainly the carat\n\"^\" could work for exp() (or maybe \"**\" to make those old Fortran\nprogrammers feel better about themselves ;), and perhaps \"!^\" for\nlog()? Any other ideas for appropriate symbols for these operators??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 29 Nov 1999 08:21:09 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Getting OID in psql of recent insert"
},
{
"msg_contents": "On Mon, 29 Nov 1999, Thomas Lockhart wrote:\n\n> > I'd not object if we removed these operators and instead provided\n> > functions with the standard names log() and exp() for all the\n> > non-integral numeric types. Comments?\n> \n> I have no great fondness for \";\" and \":\" as operators, but would like\n> to see some operators assigned to these functions. Certainly the carat\n> \"^\" could work for exp() (or maybe \"**\" to make those old Fortran\n> programmers feel better about themselves ;), and perhaps \"!^\" for\n> log()? Any other ideas for appropriate symbols for these operators??\n\nI wasn't aware of any Obfuscated SQL Contest ...\n\nI personally think that the greatest possible benefit can only be derived\nif all of this looks as much as possible like actual mathematical writing.\nThus I could agree with a power operator '^' and perhaps even a unary '^'\nexponential operator, although that's already questionable. But by\ninventing non-standard operators for every function under the sun, just to\nhave one, you're not doing anyone (including yourself) a favour.\n\nThat's just me though. As you said yourself, good ideas will stand the\ntest of an extended discussion ;)\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 29 Nov 1999 12:58:22 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Getting OID in psql of recent insert"
}
] |
[
{
"msg_contents": "I have made most system indexes unique.\n\nI have added indexes to match all system caches.\n\nI have renamed some cache names to be clearer.\n\nI have re-ordered the cache entries to be alphabetical.\n\nI have renamed the inheritance *rel columns to be *relid.\n\nI have added a large comment to syscache.c listing steps needed to\ninstall a new system index.\n\nI saw no speed improvement from my changes, but I can imagine cases\nwhere this would be a speedup.\n\nThe only thing missing is that I can't seem to get pg_shadow to use an\nindex from the cache. When I try it, initdb runs really slow, and the\nresulting installation is unusable. Any ideas anyone? You can see my\ncommented-out code in indexing.c and syscache.c. My guess is that the\nstrange way we issue pg_exec_query_dest() calls is the cause. I have no\ncall to CatalogIndexInsert() for the pg_shadow because of this. Anyone\nwant to rewrite user.c to use heap_insert() instead.\n\ninitdb required.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Nov 1999 12:59:25 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "System cache index cleanup"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have made most system indexes unique.\n\nBruce, I think you need to revert the following changes to pg_opclass.h:\n\n*** src/include/catalog/pg_opclass.h\t1999/09/29 21:13:30\t1.20\n--- src/include/catalog/pg_opclass.h\t1999/11/22 17:36:15\n***************\n*** 68,76 ****\n DESCR(\"\");\n DATA(insert OID = 423 (\tfloat8_ops\t\t701 ));\n DESCR(\"\");\n! DATA(insert OID = 424 (\tint24_ops\t\t 0 ));\n DESCR(\"\");\n! DATA(insert OID = 425 (\tint42_ops\t\t 0 ));\n DESCR(\"\");\n DATA(insert OID = 426 (\tint4_ops\t\t 23 ));\n DESCR(\"\");\n--- 68,78 ----\n DESCR(\"\");\n DATA(insert OID = 423 (\tfloat8_ops\t\t701 ));\n DESCR(\"\");\n! /* Technically, deftype is wrong, but it must be unique for index, bjm */\n! DATA(insert OID = 424 (\tint24_ops\t\t424 ));\n DESCR(\"\");\n! /* Technically, deftype is wrong, but it must be unique for index, bjm */\n! DATA(insert OID = 425 (\tint42_ops\t\t425 ));\n DESCR(\"\");\n DATA(insert OID = 426 (\tint4_ops\t\t 23 ));\n DESCR(\"\");\n***************\n*** 85,91 ****\n DESCR(\"\");\n DATA(insert OID = 432 (\tabstime_ops\t\t702 ));\n DESCR(\"\");\n! DATA(insert OID = 433 (\tbigbox_ops\t\t603 ));\n DESCR(\"\");\n DATA(insert OID = 434 (\tpoly_ops\t\t604 ));\n DESCR(\"\");\n--- 87,94 ----\n DESCR(\"\");\n DATA(insert OID = 432 (\tabstime_ops\t\t702 ));\n DESCR(\"\");\n! /* Technically, deftype is wrong, but it must be unique for index, bjm */\n! DATA(insert OID = 433 (\tbigbox_ops\t\t433 ));\n DESCR(\"\");\n DATA(insert OID = 434 (\tpoly_ops\t\t604 ));\n DESCR(\"\");\n\nand make the corresponding index non-unique.\n\n(a) It is not supposed to be a unique column --- we'd not need the\nconcept of index opclasses at all if there were only one possible\noperator set for any given column type!\n\n(b) The above changes are making the oidjoins and opr_sanity regress\ntests fail, as indeed they should...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Nov 1999 23:35:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] System cache index cleanup "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I have made most system indexes unique.\n> \n> Bruce, I think you need to revert the following changes to pg_opclass.h:\n> \n> *** src/include/catalog/pg_opclass.h\t1999/09/29 21:13:30\t1.20\n> --- src/include/catalog/pg_opclass.h\t1999/11/22 17:36:15\n> ***************\n> *** 68,76 ****\n> DESCR(\"\");\n> DATA(insert OID = 423 (\tfloat8_ops\t\t701 ));\n> DESCR(\"\");\n> ! DATA(insert OID = 424 (\tint24_ops\t\t 0 ));\n> DESCR(\"\");\n> ! DATA(insert OID = 425 (\tint42_ops\t\t 0 ));\n> DESCR(\"\");\n> DATA(insert OID = 426 (\tint4_ops\t\t 23 ));\n> DESCR(\"\");\n> --- 68,78 ----\n> DESCR(\"\");\n> DATA(insert OID = 423 (\tfloat8_ops\t\t701 ));\n> DESCR(\"\");\n> ! /* Technically, deftype is wrong, but it must be unique for index, bjm */\n> ! DATA(insert OID = 424 (\tint24_ops\t\t424 ));\n> DESCR(\"\");\n> ! /* Technically, deftype is wrong, but it must be unique for index, bjm */\n> ! DATA(insert OID = 425 (\tint42_ops\t\t425 ));\n> DESCR(\"\");\n> DATA(insert OID = 426 (\tint4_ops\t\t 23 ));\n> DESCR(\"\");\n> ***************\n> *** 85,91 ****\n> DESCR(\"\");\n> DATA(insert OID = 432 (\tabstime_ops\t\t702 ));\n> DESCR(\"\");\n> ! DATA(insert OID = 433 (\tbigbox_ops\t\t603 ));\n> DESCR(\"\");\n> DATA(insert OID = 434 (\tpoly_ops\t\t604 ));\n> DESCR(\"\");\n> --- 87,94 ----\n> DESCR(\"\");\n> DATA(insert OID = 432 (\tabstime_ops\t\t702 ));\n> DESCR(\"\");\n> ! /* Technically, deftype is wrong, but it must be unique for index, bjm */\n> ! 
DATA(insert OID = 433 (\tbigbox_ops\t\t433 ));\n> DESCR(\"\");\n> DATA(insert OID = 434 (\tpoly_ops\t\t604 ));\n> DESCR(\"\");\n> \n> and make the corresponding index non-unique.\n> \n> (a) It is not supposed to be a unique column --- we'd not need the\n> concept of index opclasses at all if there were only one possible\n> operator set for any given column type!\n> \n> (b) The above changes are making the oidjoins and opr_sanity regress\n> tests fail, as indeed they should...\n\nPatch reverse applied, and index no longer unique. I saw these errors\ntoo but was unsure of the cause and whether it was significant.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Nov 1999 23:48:16 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] System cache index cleanup"
}
] |
[
{
"msg_contents": "Could anyone tell me why the following does not work without a cast?\n\nmm=> select * from test;\n f|i|a\n------+-+---------------------\n404.90|1|{0,1,2,3,4,5,6,7,8,9}\n(1 row)\n\nmm=> select i from test where f=404.90\nmm-> ;\nERROR: Unable to identify an operator '=' for types 'numeric' and 'float8'\n You will have to retype this query using an explicit cast\nmm=> select i from test where f::float=404.90;\ni\n-\n1\n(1 row)\n\nShouldn't there be an implicit cast between numeric and float?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Mon, 22 Nov 1999 22:06:16 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Float/Numeric?"
},
{
"msg_contents": "> Shouldn't there be an implicit cast between numeric and float?\n\nYes there should. I'll add it to my list for v7.0 unless someone beats\nme to it...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 23 Nov 1999 14:20:37 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Float/Numeric?"
}
] |
[
{
"msg_contents": "Well, I diked out the code in vacuum.c that creates/deletes the pg_vlock\nlockfile, and tried it out. Turns out it's not quite such a no-brainer\nas I'd hoped. Several problems emerged:\n\n1. You can run concurrent \"VACUUM\" this way, but concurrent \"VACUUM\nANALYZE\" blows up. The problem seems to be that \"VACUUM ANALYZE\"'s\nfirst move is to delete all available rows in pg_statistic. That\ngenerates a conflict against other vacuums that might be inserting new\nrows in pg_statistic. The newly started VACUUM will almost always hang\nup on a not-yet-committed pg_statistic row, waiting to see whether that\nrow commits so that it can delete it. Even more interesting, the older\nVACUUM generally hangs up shortly after that; I'm not perfectly clear on\nwhat *it's* waiting on, but it's obviously a mutual deadlock situation.\nThe two VACUUMs don't fail with a nice \"deadlock detected\" message,\neither ... it's more like a 60-second spinlock timeout, followed by\nabort() coredumps in both backends, followed by the postmaster killing\nevery other backend in sight. That's clearly not acceptable behavior\nfor production databases.\n\nI find this really disturbing whether we allow concurrent VACUUMs or\nnot, because now I'm afraid that other sorts of system-table updates\ncan show the same ungraceful response to deadlock situations. I have\na vague recollection that Vadim said something about interlocks between\nmultiple writers only being done properly for user tables not system\ntables ... if that's what this is, I think it's a must-fix problem.\n\n2. I was able to avoid the deadlock by removing the code that tries to\ndelete every pg_statistic tuple in sight. The remaining code deletes\n(and then recreates) pg_statistics tuples for each table it processes,\nwhile it's processing the table and holding an exclusive lock on the\ntable. So, there's no danger of cross-VACUUM deadlocks. 
The trouble is\nthat pg_statistics tuples for deleted tables won't ever go away, since\nVACUUM will never consider them. I suppose this could be fixed by\nmodifying DROP TABLE to delete pg_statistics tuples applying to the\ntarget table.\n\n3. I tried running VACUUMs in parallel with the regress tests, and saw\na lot of messages like\nNOTICE: Rel tenk1: TID 1/31: InsertTransactionInProgress 29737 - can't shrink relation\nLooking at the code, this is normal behavior for VACUUM when it sees\nnot-yet-committed tuples, and has nothing to do with whether there's\nanother VACUUM going on elsewhere. BUT: why the heck are we getting\nthese at all, especially on user tables? VACUUM's grabbed an exclusive\nlock on the target table; shouldn't that mean that all write\ntransactions on the target have committed? This looks like it could\nbe a symptom of a locking bug.\n\nDo we want to press ahead with fixing these problems, or should I just\ndiscard my changes uncommitted? Two of the three points look like\nthings we need to worry about whether VACUUM is concurrent or not,\nbut maybe I'm misinterpreting what I see. Comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 Nov 1999 01:41:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Concurrent VACUUM: first results"
},
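The per-table approach described above (delete and recreate a table's pg_statistic rows only while holding that table's exclusive lock) can be sketched in a small in-memory model. This is a hypothetical illustration, not the vacuum.c code; the names `stats`, `table_locks`, and `vacuum_table` are invented for the sketch. Because each worker touches only the rows of the table it has locked, two concurrent vacuums of different tables never contend on each other's statistics entries, which is what removes the deadlock that a global delete invites:

```python
import threading

# Hypothetical in-memory stand-in for pg_statistic, keyed by table name.
stats = {"t1": {"reltuples": 10}, "t2": {"reltuples": 20}}
table_locks = {"t1": threading.Lock(), "t2": threading.Lock()}

def vacuum_table(name, new_stats):
    # Delete and recreate stats only for the table we hold an exclusive
    # lock on; concurrent vacuums of other tables are unaffected.
    with table_locks[name]:
        stats.pop(name, None)       # old entry goes away
        stats[name] = new_stats     # fresh entry inserted

a = threading.Thread(target=vacuum_table, args=("t1", {"reltuples": 11}))
b = threading.Thread(target=vacuum_table, args=("t2", {"reltuples": 21}))
a.start(); b.start(); a.join(); b.join()
print(stats["t1"]["reltuples"], stats["t2"]["reltuples"])  # 11 21
```

The leftover-stats problem from point 2 remains visible in the model too: nothing ever removes an entry for a dropped table unless DROP TABLE does it.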
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: Tuesday, November 23, 1999 3:41 PM\n> To: [email protected]\n> Subject: [HACKERS] Concurrent VACUUM: first results\n> \n> \n> Well, I diked out the code in vacuum.c that creates/deletes the pg_vlock\n> lockfile, and tried it out. Turns out it's not quite such a no-brainer\n> as I'd hoped. Several problems emerged:\n> \n> 3. I tried running VACUUMs in parallel with the regress tests, and saw\n> a lot of messages like\n> NOTICE: Rel tenk1: TID 1/31: InsertTransactionInProgress 29737 - \n> can't shrink relation\n> Looking at the code, this is normal behavior for VACUUM when it sees\n> not-yet-committed tuples, and has nothing to do with whether there's\n> another VACUUM going on elsewhere. BUT: why the heck are we getting\n> these at all, especially on user tables? VACUUM's grabbed an exclusive\n> lock on the target table; shouldn't that mean that all write\n> transactions on the target have committed? This looks like it could\n> be a symptom of a locking bug.\n>\n\nDoesn't DoCopy() in copy.c unlock the target relation too early\nby heap_close() ?\n \nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 23 Nov 1999 18:23:03 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Concurrent VACUUM: first results"
},
{
"msg_contents": "> \n> Well, I diked out the code in vacuum.c that creates/deletes the pg_vlock\n> lockfile, and tried it out. Turns out it's not quite such a no-brainer\n> as I'd hoped. Several problems emerged:\n> \n> 1. You can run concurrent \"VACUUM\" this way, but concurrent \"VACUUM\n> ANALYZE\" blows up. The problem seems to be that \"VACUUM ANALYZE\"'s\n> first move is to delete all available rows in pg_statistic. That\n> generates a conflict against other vacuums that might be inserting new\n> rows in pg_statistic. The newly started VACUUM will almost always hang\n> up on a not-yet-committed pg_statistic row, waiting to see whether that\n> row commits so that it can delete it. Even more interesting, the older\n> VACUUM generally hangs up shortly after that; I'm not perfectly clear on\n> what *it's* waiting on, but it's obviously a mutual deadlock situation.\n> The two VACUUMs don't fail with a nice \"deadlock detected\" message,\n> either ... it's more like a 60-second spinlock timeout, followed by\n> abort() coredumps in both backends, followed by the postmaster killing\n> every other backend in sight. That's clearly not acceptable behavior\n> for production databases.\n>\n\nThe following stuff in vc_vacuum() may be a cause.\n\n /* get list of relations */\n vrl = vc_getrels(VacRelP);\n\n if (analyze && VacRelP == NULL && vrl != NULL)\n vc_delhilowstats(InvalidOid, 0, NULL);\n\n /* vacuum each heap relation */ \n\n\nCommitTransactionCommand() is executed at the end of vc_getrels()\nand vc_delhilowstats() is executed without StartTransactionCommand().\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Thu, 25 Nov 1999 15:09:33 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Concurrent VACUUM: first results"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> CommitTransactionCommand() is executed at the end of vc_getrels()\n> and vc_delhilowstats() is executed without StartTransactionCommand().\n\nOooh, good eye! I wonder how long that bug's been there?\n\nI'm still inclined to remove that call to vc_delhilowstats, because it\nseems like a global delete of statistics can't help but be a problem.\nBut if we keep it, you're dead right: it has to be inside a transaction.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Nov 1999 01:17:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Concurrent VACUUM: first results "
},
{
"msg_contents": "> \"Hiroshi Inoue\" <[email protected]> writes:\n> > CommitTransactionCommand() is executed at the end of vc_getrels()\n> > and vc_delhilowstats() is executed without StartTransactionCommand().\n> \n> Oooh, good eye! I wonder how long that bug's been there?\n> \n> I'm still inclined to remove that call to vc_delhilowstats, because it\n> seems like a global delete of statistics can't help but be a problem.\n> But if we keep it, you're dead right: it has to be inside a transaction.\n> \n\nWith the new pg_statistic cache, you can efficiently do a heap_insert or\nheap_replace depending on whether an entry already exists. In the old\ncode, that was hard because you had to scan the entire table looking for a\nmatch, so I just did a delete, and then I knew to do an insert.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 Nov 1999 13:19:37 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Concurrent VACUUM: first results"
},
{
"msg_contents": ">\n> > \"Hiroshi Inoue\" <[email protected]> writes:\n> > > CommitTransactionCommand() is executed at the end of vc_getrels()\n> > > and vc_delhilowstats() is executed without StartTransactionCommand().\n> >\n> > Oooh, good eye! I wonder how long that bug's been there?\n> >\n> > I'm still inclined to remove that call to vc_delhilowstats, because it\n> > seems like a global delete of statistics can't help but be a problem.\n> > But if we keep it, you're dead right: it has to be inside a transaction.\n> >\n>\n> With the new pg_statistic cache, you can efficiently do a heap_insert or\n> heap_replace dending on whether an entry already exists. In the old\n> code, that was hard because you had to scan entire table looking for a\n> match so I just did a delete, and they I knew to do an insert.\n>\n\nI have a concern about the index of pg_statistic (and maybe pg_statistic\nitself too?).\n\nvc_updstats() may be called in immediately-committed mode.\nvacuum calls TransactionIdCommit() after moving tuples in order\nto delete index tuples and truncate the relation safely.\n\nThat's necessary, but the state is outside PostgreSQL's recovery\nmechanism.\nheap_insert() is immediately committed. If index_insert() fails,\nthere remains a heap tuple which doesn't have a corresponding\nindex entry.\n\nMoreover, the duplicate index check is useless. The check is done\nafter heap tuples are inserted (and committed).\n\nShould vc_updstats() be moved before TransactionIdCommit()?\nI'm not sure.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Fri, 26 Nov 1999 10:08:24 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Concurrent VACUUM: first results"
},
{
"msg_contents": "> I have a anxiety about the index of pg_statistic(pg_statistic itself also\n> ?).\n> \n> vc_updstats() may be called in immediately committed mode.\n> vacuum calls TransactionIdCommit() after moving tuples in order\n> to delete index tuples and truncate the relation safely.\n> \n> It's necessary but the state is out of PostgreSQL's recovery\n> mechanism.\n> heap_insert() is imediately committed. If index_insert() fails\n> there remains a heap tuple which doesn't have a corresponding\n> index entry.\n\n\nHuh. Heap_insert writes to disk, but the tuple is not used unless the\ntransaction gets committed, right?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 Nov 1999 23:11:25 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Concurrent VACUUM: first results"
},
{
"msg_contents": ">\n> > I have a anxiety about the index of pg_statistic(pg_statistic\n> itself also\n> > ?).\n> >\n> > vc_updstats() may be called in immediately committed mode.\n> > vacuum calls TransactionIdCommit() after moving tuples in order\n> > to delete index tuples and truncate the relation safely.\n> >\n> > It's necessary but the state is out of PostgreSQL's recovery\n> > mechanism.\n> > heap_insert() is imediately committed. If index_insert() fails\n> > there remains a heap tuple which doesn't have a corresponding\n> > index entry.\n>\n>\n> Huh. Heap_insert writes to disk, but there it is not used unless the\n> transaction gets committed, right?\n>\n\nThis could occur only in vacuum.\nThere's a quick hack in vc_rpfheap().\n\n if (num_moved > 0)\n {\n\n /*\n * We have to commit our tuple' movings before we'll\ntruncate\n * relation, but we shouldn't lose our locks. And so - quick\nhac\nk:\n * flush buffers and record status of current transaction as\n * committed, and continue. - vadim 11/13/96\n */\n FlushBufferPool(!TransactionFlushEnabled());\n TransactionIdCommit(myXID);\n\t ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n FlushBufferPool(!TransactionFlushEnabled());\n }\n\nvc_updstats() may be called in the already committed transaction.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Fri, 26 Nov 1999 13:51:29 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Concurrent VACUUM: first results"
},
{
"msg_contents": "> > Huh. Heap_insert writes to disk, but there it is not used unless the\n> > transaction gets committed, right?\n> >\n> \n> This could occur only in vacuum.\n> There's a quick hack in vc_rpfheap().\n> \n> if (num_moved > 0)\n> {\n> \n> /*\n> * We have to commit our tuple' movings before we'll truncate\n> * relation, but we shouldn't lose our locks. And so - quick hack:\n> * flush buffers and record status of current transaction as\n> * committed, and continue. - vadim 11/13/96\n> */\n> FlushBufferPool(!TransactionFlushEnabled());\n> TransactionIdCommit(myXID);\n> \t ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> FlushBufferPool(!TransactionFlushEnabled());\n> }\n> \n> vc_updstats() may be called in the already committed transaction.\n\nOh, that is tricky: they have committed the transaction and continue\nworking in an already-committed one. Yikes. Any idea why we have to commit\nit early?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 26 Nov 1999 00:09:25 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Concurrent VACUUM: first results"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > /*\n> > * We have to commit our tuple' movings before we'll truncate\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > * relation, but we shouldn't lose our locks. And so - quick hack:\n ^^^^^^^^\n\n... or moved tuples may be lost in the case of DB/OS crash etc\n that may occur after truncation but before commit...\n\n> > * flush buffers and record status of current transaction as\n> > * committed, and continue. - vadim 11/13/96\n> > */\n> > FlushBufferPool(!TransactionFlushEnabled());\n> > TransactionIdCommit(myXID);\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > FlushBufferPool(!TransactionFlushEnabled());\n> > }\n> >\n> > vc_updstats() may be called in the already committed transaction.\n> \n> Oh, that is tricky that they have committed the transaction and continue\n> working in an already committed. Yikes. Any idea why we have to commit\n> it early?\n\nVadim\n",
"msg_date": "Fri, 26 Nov 1999 12:32:31 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Concurrent VACUUM: first results"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n>>>> * We have to commit our tuple' movings before we'll truncate\n>>>> * relation, but we shouldn't lose our locks. And so - quick hack:\n\n> ... or moved tuples may be lost in the case of DB/OS crash etc\n> that may occur after truncation but before commit...\n\nI wonder whether there isn't a cleaner way to do this. What about\nremoving this early commit, and doing everything else the way that\nVACUUM does it, except that the physical truncate of the relation\nfile happens *after* the commit at the end of vc_vacone?\n\nWhile I'm asking silly questions: why does VACUUM relabel tuples\nwith its own xact ID anyway? I suppose that's intended to improve\nrobustness in case of a crash --- but if there's a crash partway\nthrough VACUUM, it seems like data corruption is inevitable. How\ncan you pack tuples into the minimum number of pages without creating\nduplicate or missing tuples, if you are unlucky enough to crash before\ndeleting the tuples from their original pages?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 Nov 1999 00:55:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Concurrent VACUUM: first results "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Friday, November 26, 1999 2:56 PM\n> To: Vadim Mikheev\n> Cc: Bruce Momjian; Hiroshi Inoue; [email protected]\n> Subject: Re: [HACKERS] Concurrent VACUUM: first results \n> \n> \n> Vadim Mikheev <[email protected]> writes:\n> >>>> * We have to commit our tuple' movings before we'll truncate\n> >>>> * relation, but we shouldn't lose our locks. And so - quick hack:\n> \n> > ... or moved tuples may be lost in the case of DB/OS crash etc\n> > that may occur after truncation but before commit...\n> \n> I wonder whether there isn't a cleaner way to do this. What about\n> removing this early commit, and doing everything else the way that\n> VACUUM does it, except that the physical truncate of the relation\n> file happens *after* the commit at the end of vc_vacone?\n>\n\nI think there exists another reason.\nWe couldn't delete index tuples for deleted but not yet committed\nheap tuples.\n\nIf we could start another transaction without releasing the exclusive\nlock on the target relation, it would be better.\n\nRegards.\n \nHiroshi Inoue\[email protected]\n",
"msg_date": "Fri, 26 Nov 1999 15:24:47 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Concurrent VACUUM: first results "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> I wonder whether there isn't a cleaner way to do this.\n\n> I think there exists another reason.\n> We couldn't delete index tuples for deleted but not yet committed\n> heap tuples.\n\nMy first thought was \"Good point\". But my second was \"why should\nvacuum need to deal with that case?\". If vacuum grabs an exclusive\nlock on a relation, it should *not* ever see tuples with uncertain\ncommit status, no?\n\n> If we could start another transaction without releasing exclusive\n> lock for the target relation,it would be better.\n\nSeems like that might be doable, if we really do need it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 Nov 1999 01:36:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Concurrent VACUUM: first results "
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> > Vadim Mikheev <[email protected]> writes:\n> > >>>> * We have to commit our tuple' movings before we'll truncate\n> > >>>> * relation, but we shouldn't lose our locks. And so - quick hack:\n> >\n> > > ... or moved tuples may be lost in the case of DB/OS crash etc\n> > > that may occur after truncation but before commit...\n> >\n> > I wonder whether there isn't a cleaner way to do this. What about\n> > removing this early commit, and doing everything else the way that\n> > VACUUM does it, except that the physical truncate of the relation\n> > file happens *after* the commit at the end of vc_vacone?\n> >\n> \n> I think there exists another reason.\n> We couldn't delete index tuples for deleted but not yet committed\n> heap tuples.\n\nYou're right!\nI just don't remember all reasons why I did as it's done -:))\n\n> If we could start another transaction without releasing exclusive\n> lock for the target relation,it would be better.\n\nSo. What's problem?! Start it! Commit \"moving\" xid, get new xid and go!\n\nVadim\n",
"msg_date": "Fri, 26 Nov 1999 13:38:32 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Concurrent VACUUM: first results"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> I wonder whether there isn't a cleaner way to do this.\n> \n> > I think there exists another reason.\n> > We couldn't delete index tuples for deleted but not yet committed\n> > heap tuples.\n> \n> My first thought was \"Good point\". But my second was \"why should\n> vacuum need to deal with that case?\". If vacuum grabs an exclusive\n> lock on a relation, it should *not* ever see tuples with uncertain\n> commit status, no?\n\nWhat if vacuum will crash after deleting index tuples pointing\nto heap tuples in old places but before commit? Index will\nbe broken.\n\nVadim\n",
"msg_date": "Fri, 26 Nov 1999 13:42:07 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Concurrent VACUUM: first results"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> While I'm asking silly questions: why does VACUUM relabel tuples\n> with its own xact ID anyway? I suppose that's intended to improve\n> robustness in case of a crash --- but if there's a crash partway\n> through VACUUM, it seems like data corruption is inevitable. How\n> can you pack tuples into the minimum number of pages without creating\n> duplicate or missing tuples, if you are unlucky enough to crash before\n> deleting the tuples from their original pages?\n\nVACUUM:\n1. has to preserve t_xmin/t_xmax in moved tuples\n (or MVCC will be broken) and so stores xid in t_cmin.\n2. turns HEAP_XMIN_COMMITTED off in both tuple versions \n (in old and new places).\n3. sets HEAP_MOVED_IN in tuples in new places and\n HEAP_MOVED_OFF in tuples in old places.\n\nSeeing HEAP_MOVED_IN/HEAP_MOVED_OFF (this is tested only for\ntuples with HEAP_XMIN_COMMITTED off, to avoid testing in all cases)\nthe tqual.c funcs will check whether tuple->t_cmin is committed or\nnot - ie whether VACUUM succeeded in moving or not.\nAnd so, a single vacuum xid commit ensures that there will be\nneither duplicates nor lost tuples.\n\nSorry, I should have described this half a year ago, but...\n\nVadim\n",
"msg_date": "Fri, 26 Nov 1999 13:58:31 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Concurrent VACUUM: first results"
},
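Vadim's rule above can be modeled in a few lines. This is a hypothetical sketch of the visibility decision he attributes to the tqual.c funcs, with invented names (`tuple_is_visible`, `committed_xids`); the point it illustrates is that the single vacuum-xid commit atomically flips which of the two copies of a moved tuple counts, so there is never a moment with both copies visible (a duplicate) or neither (a lost tuple):

```python
# Hypothetical model: a moved tuple exists in two places, the old copy
# flagged HEAP_MOVED_OFF and the new copy flagged HEAP_MOVED_IN, both
# carrying the vacuum xid (stored in t_cmin in the real code).
HEAP_MOVED_IN, HEAP_MOVED_OFF = "moved_in", "moved_off"
committed_xids = set()   # stand-in for the transaction commit log

def tuple_is_visible(flag, vacuum_xid):
    moved_committed = vacuum_xid in committed_xids
    if flag == HEAP_MOVED_IN:      # new-place copy
        return moved_committed     # visible only if the move committed
    if flag == HEAP_MOVED_OFF:     # old-place copy
        return not moved_committed # visible only if the move did not
    return True                    # ordinary tuple: normal rules apply

# Before the vacuum xid commits, only the old copy is visible:
assert tuple_is_visible(HEAP_MOVED_OFF, 42) and not tuple_is_visible(HEAP_MOVED_IN, 42)
committed_xids.add(42)
# After commit, only the new copy is visible -- never both, never neither.
assert tuple_is_visible(HEAP_MOVED_IN, 42) and not tuple_is_visible(HEAP_MOVED_OFF, 42)
```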
{
"msg_contents": ">\n> Hiroshi Inoue wrote:\n> >\n> > > Vadim Mikheev <[email protected]> writes:\n> > > >>>> * We have to commit our tuple' movings before we'll truncate\n> > > >>>> * relation, but we shouldn't lose our locks. And so - quick hack:\n> > >\n> > > > ... or moved tuples may be lost in the case of DB/OS crash etc\n> > > > that may occur after truncation but before commit...\n> > >\n> > > I wonder whether there isn't a cleaner way to do this. What about\n> > > removing this early commit, and doing everything else the way that\n> > > VACUUM does it, except that the physical truncate of the relation\n> > > file happens *after* the commit at the end of vc_vacone?\n> > >\n> >\n> > I think there exists another reason.\n> > We couldn't delete index tuples for deleted but not yet committed\n> > heap tuples.\n>\n> You're right!\n> I just don't remember all reasons why I did as it's done -:))\n>\n> > If we could start another transaction without releasing exclusive\n> > lock for the target relation,it would be better.\n>\n> So. What's problem?! Start it! Commit \"moving\" xid, get new xid and go!\n>\n\nAfter some thought, I remembered that members of xidHash hold xids.\nMust I replace them with the new xid?\nThat doesn't seem cleaner than the current way.\n\nThere's another way.\nWe could commit and start a transaction after truncation and\nbefore vc_updstats().\nOne of the problems is that cache invalidation is issued in vc_updstats().\nIt is preferable that cache invalidation is called immediately\nafter/before truncation, isn't it?\nAny other problems?\n\n\nBTW, how (or what) would WAL do when vacuum fails after\nTransactionIdCommit()?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Sat, 27 Nov 1999 07:48:05 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Concurrent VACUUM: first results"
},
{
"msg_contents": "I have committed the code change to remove pg_vlock locking from VACUUM.\nIt turns out the problems I was seeing initially were all due to minor\nbugs in the lock manager and vacuum itself.\n\n> 1. You can run concurrent \"VACUUM\" this way, but concurrent \"VACUUM\n> ANALYZE\" blows up. The problem seems to be that \"VACUUM ANALYZE\"'s\n> first move is to delete all available rows in pg_statistic.\n\nThe real problem was that VACUUM ANALYZE tried to delete those rows\n*while it was outside of any transaction*. If there was a concurrent\nVACUUM inserting tuples into pg_statistic, the new VACUUM would end up\ncalling XactLockTableWait() with an invalid XID, which caused a failure\ninside lock.c --- and the failure path neglected to release the spinlock\non the lock table. This was compounded by lmgr.c not bothering to check\nthe return code from LockAcquire(). So the lock apparently succeeded,\nand then all the backends would die with \"stuck spinlock\" next time they\ntried to do any lockmanager operations.\n\nI have fixed the simpler aspects of the problem by adding missing\nSpinRelease() calls to lock.c, making lmgr.c test for failure, and\naltering VACUUM to not do the bogus row deletion. But I suspect that\nthere is more to this that I don't understand. Why does calling\nXactLockTableWait() with an already-committed XID cause the following\ncode in lock.c to trigger? Is this evidence of a logic bug in lock.c,\nor at least of inadequate checks for bogus input?\n\n /*\n * Check the xid entry status, in case something in the ipc\n * communication doesn't work correctly.\n */\n if (!((result->nHolding > 0) && (result->holders[lockmode] > 0)))\n {\n XID_PRINT_AUX(\"LockAcquire: INCONSISTENT \", result);\n LOCK_PRINT_AUX(\"LockAcquire: INCONSISTENT \", lock, lockmode);\n /* Should we retry ? */\n SpinRelease(masterLock); <<<<<<<<<<<< just added by me\n return FALSE;\n }\n\n> 3. 
I tried running VACUUMs in parallel with the regress tests, and saw\n> a lot of messages like\n> NOTICE: Rel tenk1: TID 1/31: InsertTransactionInProgress 29737 - can't shrink relation\n\nHiroshi pointed out that this was probably due to copy.c releasing the\nlock prematurely on the table that is the destination of a COPY. With\nthat fixed, I get many fewer of these messages, and they're all for\nsystem relations --- which is to be expected. Since we don't hold locks\nfor system relations until xact commit, it's possible for VACUUM to see\nuncommitted tuples when it is vacuuming a system relation. So I think\nan occasional message like the above is OK as long as it mentions a\nsystem relation.\n\nI have been running multiple concurrent vacuums in parallel with the\nregress tests, and things seem to mostly work. Quite a few regress\ntests erratically \"fail\" under this load because they emit results in\ndifferent orders than the expected output shows --- not too surprising\nif a VACUUM has come by and reordered a table.\n\nI am still seeing occasional glitches; for example, one vacuum failed\nwith\n\nNOTICE: FlushRelationBuffers(onek, 24): block 40 is referenced (private 0, global 1)\nFATAL 1: VACUUM (vc_rpfheap): FlushRelationBuffers returned -2\npqReadData() -- backend closed the channel unexpectedly.\n\nI believe that these errors are not specifically caused by concurrent\nvacuums, but are symptoms of locking-related bugs that we still have\nto flush out and fix (cf. discussions on pg_hackers around 9/19/99).\nSo I've gone ahead and committed the change to allow concurrent\nvacuums.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 Nov 1999 22:04:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Concurrent VACUUM: first results "
},
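The failure-path rule behind the SpinRelease() fix described above (every exit from the critical section, including error returns, must release the spinlock, or later lockers die on a "stuck spinlock" timeout) can be sketched with a plain mutex. This is a hypothetical illustration with invented names; PostgreSQL's spinlocks are not Python locks, and the real fix is an explicit `SpinRelease(masterLock)` before the error `return`, which a `try`/`finally` expresses in one place:

```python
import threading

master_lock = threading.Lock()   # stands in for the lock-table spinlock

def lock_acquire(request_ok):
    # Model of the fixed control flow: the lock is taken, work happens,
    # and BOTH the success return and the error return pass through the
    # release. Dropping the release on the error path is the bug that
    # left every other backend spinning until timeout.
    master_lock.acquire()
    try:
        if not request_ok:
            return False         # error exit; finally still releases
        return True
    finally:
        master_lock.release()

assert lock_acquire(False) is False
assert not master_lock.locked()  # released even on the failure path
assert lock_acquire(True) is True
assert not master_lock.locked()
```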
{
"msg_contents": "> \n> I have committed the code change to remove pg_vlock locking from VACUUM.\n> It turns out the problems I was seeing initially were all due to minor\n> bugs in the lock manager and vacuum itself.\n> \n> > 1. You can run concurrent \"VACUUM\" this way, but concurrent \"VACUUM\n> > ANALYZE\" blows up. The problem seems to be that \"VACUUM ANALYZE\"'s\n> > first move is to delete all available rows in pg_statistic.\n> \n> The real problem was that VACUUM ANALYZE tried to delete those rows\n> *while it was outside of any transaction*. If there was a concurrent\n> VACUUM inserting tuples into pg_statistic, the new VACUUM would end up\n> calling XactLockTableWait() with an invalid XID, which caused a failure\n\nHmm, what I could see here was always LockRelation(.., RowExclusiveLock).\nBut the cause may be the same.\nWe couldn't get the xids of no-longer-running *transaction*s because\nproc->xid is set to 0 (InvalidTransactionId). So the blocking transaction\ncouldn't find an xidLookupEnt in xidTable corresponding to the\nno-longer-running *transaction* when it tries LockResolveConflicts() in\nLockReleaseAll(), and couldn't GrantLock() to the XidLookupEnt\ncorresponding to that *transaction*. In the end, LockAcquire() from a\nno-longer-running *transaction* always fails once it is blocked.\n\n> I have fixed the simpler aspects of the problem by adding missing\n> SpinRelease() calls to lock.c, making lmgr.c test for failure, and\n> altering VACUUM to not do the bogus row deletion. But I suspect that\n> there is more to this that I don't understand. Why does calling\n> XactLockTableWait() with an already-committed XID cause the following\n\nThat seems strange. Isn't it waiting for a tuple being deleted by\nvc_updstats() in vc_vacone()?\n\n> code in lock.c to trigger? Is this evidence of a logic bug in lock.c,\n> or at least of inadequate checks for bogus input?\n> \n> /*\n> * Check the xid entry status, in case something in the ipc\n> * communication doesn't work correctly.\n> */\n> if (!((result->nHolding > 0) && (result->holders[lockmode] > 0)))\n> {\n> XID_PRINT_AUX(\"LockAcquire: INCONSISTENT \", result);\n> LOCK_PRINT_AUX(\"LockAcquire: INCONSISTENT \", lock, lockmode);\n> /* Should we retry ? */\n> SpinRelease(masterLock); <<<<<<<<<<<< just added by me\n> return FALSE;\n> }\n>\n\nThis is the third time I have come across this spot, and it was always\ncaused by other bugs. \n\nRegards,\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Mon, 29 Nov 1999 09:32:56 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: Concurrent VACUUM: first results "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> We couldn't get xids of not running *transaction*s because its proc->xid\n> is set to 0(InvalidTransactionId). So blocking transaction couldn' find an\n> xidLookupEnt in xidTable corresponding to the not running *transaction*\n> when it tries to LockResolveConflicts() in LockReleaseAll() and couldn't\n> GrantLock() to XidLookupEnt corresponding to the not running *transac\n> tion*. After all LockAcquire() from not running *transaction* always fails\n> once it is blocked.\n\nOK, I can believe that ... I assumed that proc->xid still had the ID of\nthe last transaction, but if it's set to 0 during xact cleanup then this\nbehavior makes sense. Still, it seems like lock.c should detect the\nmissing table entry and fail sooner than it does ...\n\n>> I suspect that\n>> there is more to this that I don't understand. Why does calling\n>> XactLockTableWait() with an already-committed XID cause the following\n>> code in lock.c to trigger? Is this evidence of a logic bug in lock.c,\n>> or at least of inadequate checks for bogus input?\n\n> It's seems strange. Isn't it waiting for a being deleted tuple by vc_upd\n> stats() in vc_vacone() ?\n\nNo --- the process that reaches the \"INCONSISTENT\" exit is the one that\nis trying to do the deletion of pg_statistic rows (during VACUUM\nstartup). Presumably, it's found a row that is stored but not yet\ncommitted by another VACUUM, and is trying to wait for that transaction\nto commit.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Nov 1999 00:13:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Concurrent VACUUM: first results "
}
] |
[
{
"msg_contents": "Whilst playing around with concurrent VACUUMs (see previous message),\nI looked at some other issues that were on my TODO list. I think\nthese deserve a separate thread:\n\n1. VACUUM does internal start/commit-transaction calls around the\nprocessing of each table it handles. That's good, because it ensures\nthat the results of vacuuming each table are committed down to disk\nbefore we move on to vacuuming the next one (and take the ever-present\nchance of failure). But if VACUUM is invoked inside a BEGIN/END\ntransaction block, those start/commit-transaction calls don't actually\ncommit anything; they reduce to plain statement boundaries within\nthe transaction. This is Bad, Very Bad, on two grounds:\n (a) abort while vacuuming a later table imperils the results for all\n prior tables, since they won't get committed;\n (b) by the time we approach the end of vacuuming a whole database,\n we will hold exclusive locks on just about everything, which will\n not be good for the progress of any actual work being done by\n other backends. Not to mention the very strong possibility of\n hitting a deadlock against locks held by other backends.\n\nI had previously suggested banning VACUUM inside a transaction block\nto forestall these problems. I now see that the problems really only\napply to the case of VACUUMing a whole database --- the variant of\nVACUUM that processes only a single table could be allowed inside\na BEGIN/END block, and it wouldn't create any problems worse than\nany other command that grabs exclusive access on a table. (OK,\nyou could BEGIN and then VACUUM a lot of tables one by one, but\nthen you deserve any problems you get...)\n\nSo: should VACUUM refuse to run inside BEGIN at all, or should it\nrefuse only the whole-database variant? The former would be more\nconsistent and easier to explain, but the latter might be more useful.\n\n2. It is a serious security breach that any random user can VACUUM\nany table. Particularly in the current code, where VACUUM can be\ndone inside a transaction, because that means the user can sit on\nthe locks acquired by VACUUM. I can't do \"lock table pg_shadow\"\nas an unprivileged user --- but I can do \"begin; vacuum pg_shadow;\n<twiddle thumbs>\". Guess what happens when subsequent users try\nto connect.\n\nEven if we disallow all forms of VACUUM inside transactions, one\ncould still mount a moderately effective denial-of-service attack\nby issuing a continuous stream of \"vacuum pg_shadow\" commands,\nor perhaps repeated vacuums of some large-and-mission-critical\nuser table.\n\nI think a reasonable answer to this is to restrict VACUUM on any\ntable to be allowed only to the table owner and Postgres superuser.\nDoes anyone have an objection or better idea?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 Nov 1999 02:10:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "VACUUM as a denial-of-service attack"
},
{
"msg_contents": "On Tue, 23 Nov 1999, Tom Lane wrote:\n\n> I think a reasonable answer to this is to restrict VACUUM on any\n> table to be allowed only to the table owner and Postgres superuser.\n> Does anyone have an objection or better idea?\n\nTo me, this sounds perfectly reasonable and sane...\n\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 29 Nov 1999 16:55:03 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VACUUM as a denial-of-service attack"
}
] |
[
{
"msg_contents": "\n> >> Yes, I use 'em the same way. I think an OID is kind of \n> like a pointer\n> >> in a C program: good for fast, unique access to an object \n> within the\n> >> context of the execution of a particular application (and maybe not\n> >> even that long). You don't write pointers into files to \n> be used again\n> >> by other programs, though, and in the same way an OID isn't a good\n> >> candidate for a long-lasting reference from one table to another.\n> \n> > I thought this special case is where the new xid access \n> method would come\n> > in.\n> \n> Good point, but (AFAIK) you could only use it for tables that you were\n> sure no other client was updating in parallel. Otherwise you might be\n> updating a just-obsoleted tuple. Or is there a solution for that?\n\nOk, the fact, that the row changed is known, because we can check the \nsnapshot. We also know, that the new row must be near the physical end \nof the table, so maybe we could do a backward scan ?\nMaybe we could also simply bail out, like Oracle with a \"snapshot too old\" \nerror message ?\n(I know that this is not the same situation as the stated Oracle error)\n\n> \n> > Is someone still working on the xid access ?\n> \n> I think we have the ability to refer to CTID in WHERE now, \n\nDo we use the sql syntax 'where rowid = :xxx' for it, \nor do we say 'where ctid = :xxx'.\nI would like the rowid naming, because Informix, Oracle (and DB/2 ?) use it.\n\n> but not yet an access method that actually makes it fast...\n\nWell that is of course only half the fun :-(\nCould it be done like an index access, \nwhere the first part of the work is skipped, or tunneled through ?\n\nAndreas\n",
"msg_date": "Tue, 23 Nov 1999 09:58:08 +0100",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: [HACKERS] Getting OID in psql of recent insert "
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Zeugswetter\n> Andreas SEV\n> Sent: Tuesday, November 23, 1999 5:58 PM\n> To: '[email protected]'\n> Subject: AW: AW: [HACKERS] Getting OID in psql of recent insert \n> \n> > \n> > > Is someone still working on the xid access ?\n> > \n> > I think we have the ability to refer to CTID in WHERE now, \n> \n> Do we use the sql syntax 'where rowid = :xxx' for it, \n> or do we say 'where ctid = :xxx'.\n> I would like the rowid naming, because Informix, Oracle (and DB/2 \n> ?) use it.\n>\n\nYou could say 'where ctid= ...' in current tree.\nIt has been rejected due to the lack of equal operator for type TID. \nThe syntax itself has been allowed by parser.\n \n> > but not yet an access method that actually makes it fast...\n> \n> Well that is of course only half the fun :-(\n> Could it be done like an index access, \n> where the first part of the work is skipped, or tunneled through ?\n\nI would commit the implementation of direct scan by tuple id soon.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 23 Nov 1999 18:44:51 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: AW: [HACKERS] Getting OID in psql of recent insert "
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> \n> > >> Yes, I use 'em the same way. I think an OID is kind of \n> > like a pointer\n> > >> in a C program: good for fast, unique access to an object \n> > within the\n> > >> context of the execution of a particular application (and maybe not\n> > >> even that long). You don't write pointers into files to \n> > be used again\n> > >> by other programs, though, and in the same way an OID isn't a good\n> > >> candidate for a long-lasting reference from one table to another.\n> > \n> > > I thought this special case is where the new xid access \n> > method would come\n> > > in.\n> > \n> > Good point, but (AFAIK) you could only use it for tables that you were\n> > sure no other client was updating in parallel. Otherwise you might be\n> > updating a just-obsoleted tuple. Or is there a solution for that?\n> \n> Ok, the fact, that the row changed is known, because we can check the \n> snapshot. We also know, that the new row must be near the physical end \n> of the table, so maybe we could do a backward scan ?\n> Maybe we could also simply bail out, like Oracle with a \"snapshot too old\" \n> error message ?\n> (I know that this is not the same situation as the stated Oracle error)\n\nThat is too strange. If the tuple is superseded, not sure how to\nhandle that. My guess is that we just let them access it. How do we\nknow if it is still a valid tuple in their own transaction? I am unsure\nof this, though. Maybe there is a way to know.\n\n> \n> > \n> > > Is someone still working on the xid access ?\n> > \n> > I think we have the ability to refer to CTID in WHERE now, \n> \n> Do we use the sql syntax 'where rowid = :xxx' for it, \n> or do we say 'where ctid = :xxx'.\n> I would like the rowid naming, because Informix, Oracle (and DB/2 ?) use it.\n\nIs Informix rowid an actual physical row location? If so, it would be\nnice to auto-rename the column references to ctid on input. Good idea.\n\n> \n> > but not yet an access method that actually makes it fast...\n> \n> Well that is of course only half the fun :-(\n> Could it be done like an index access, \n> where the first part of the work is skipped, or tunneled through ?\n\nThey get the location they ask for, or a failure. Hunting around for\nthe new tuple seems like a real waste, and if someone vacuums, it is\ngone, no?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Nov 1999 13:01:02 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: [HACKERS] Getting OID in psql of recent insert"
}
] |
[
{
"msg_contents": "I see everybody using the following PostgreSQL statements:\n\n\"begin\" instead of \"begin work\"\n\"end\" instead of \"commit work\"\n\nThis is really bad, because it is not standard, and can easily be taken for\na statement block, which it is definitely not ! It is a transaction block.\n\nI vote for issuing a NOTICE for these in V7 and remove them in V8,\nat least the single \"end\"\n\nBruce, please don't use \"begin\" and \"end\" in your book.\n\nAndreas\n",
"msg_date": "Tue, 23 Nov 1999 11:31:04 +0100",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "SQL statements: begin and end"
},
{
"msg_contents": "Whatever happened to \nBEGIN TRANSACTION;\n ...\nCOMMIT;\n\nI never liked END to begin with, since it doesn't really imply that you\nare committing anything. Or what is the non-committing counterpart of END?\n\nBut I think we should go strictly with the SQL standard, even if that\ncontradicts what I just said. (?)\n\n\t-Peter\n\nOn Tue, 23 Nov 1999, Zeugswetter Andreas SEV wrote:\n\n> I see everybody using the following PostgreSQL statements:\n> \n> \"begin\" instead of \"begin work\"\n> \"end\" instead of \"commit work\"\n> \n> This is really bad, because it is not standard, and can easily be taken for\n> a statement block, which it is definitely not ! It is a transaction block.\n> \n> I vote for issuing a NOTICE for these in V7 and remove them in V8,\n> at least the single \"end\"\n> \n> Bruce, please don't use \"begin\" and \"end\" in your book.\n> \n> Andreas\n> \n> ************\n> \n> \n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 23 Nov 1999 12:52:16 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SQL statements: begin and end"
},
{
"msg_contents": "Zeugswetter Andreas SEV <[email protected]> writes:\n> I see everybody using the following PostgreSQL statements:\n> \"begin\" instead of \"begin work\"\n> \"end\" instead of \"commit work\"\n> This is really bad, because it is not standard,\n\nI went looking in the SQL spec to confirm this, and was rather\nstartled to discover that BEGIN is not SQL at all! The SQL spec\nseems to envision the always-in-a-transaction-block model of operation.\nThey have\n <commit statement> ::=\n COMMIT [ WORK ]\nwhich is defined to commit the current transaction; but a new xact is\nimplicitly started by the next SQL operation (cf. sec. 4.28 in SQL92).\n\nIf we wanted to be completely standards-conformant, I think we'd have to\nabandon the begin/end model entirely. I wouldn't support that ---\nauto commit of standalone statements is too convenient.\n\nBottom line: pointing at the spec is a very weak argument for telling\npeople how to spell their begin/end statements.\n\n> I vote for issuing a NOTICE for these in V7 and remove them in V8,\n> at least the single \"end\"\n\nMy feeling is that application authors have already decided whether\nthey prefer \"BEGIN\" or \"BEGIN TRANSACTION\" or \"BEGIN WORK\", and trying\nto enforce a single standard now is just going to irritate people and\nbreak existing applications. I vote for leaving well enough alone.\n\n> Bruce, please don't use \"begin\" and \"end\" in your book.\n\nSure, it makes sense for the book to consistently use \"BEGIN WORK\"\nand \"COMMIT WORK\", which are probably the least likely to confuse\nnovices. But I think actually removing the other variants would be\njust an exercise in causing trouble.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 Nov 1999 09:59:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SQL statements: begin and end "
},
{
"msg_contents": "> I went looking in the SQL spec to confirm this, and was rather\n> startled to discover that BEGIN is not SQL at all!\n\nRight- me too! I glanced through the SQL98 docs I have (mostly from\n'94) and there is no addition afaict. BEGIN/END *are* defined in SQL,\nbut only in the context of embedded SQL.\n\n> > Bruce, please don't use \"begin\" and \"end\" in your book.\n> Sure, it makes sense for the book to consistently use \"BEGIN WORK\"\n> and \"COMMIT WORK\", which are probably the least likely to confuse\n> novices. But I think actually removing the other variants would be\n> just an exercise in causing trouble.\n\nWORK is an optional noise word in SQL92. BEGIN WORK is not defined at\nall, but I agree with Tom that the extension is essential.\n\nI'm pretty sure that Andrea's biggest objection was to the acceptance\nand use of END, which has no connection in official SQL to transaction\ncompletion but only to block delimiting for cursor loops. It is almost\ncertainly a holdover from PostQuel.\n\nAny thoughts on whether AZ's suggestion for dropping END in this\ncontext should be done for 7.0? We certainly could make an effort to\nat least purge it from our examples in the docs...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 23 Nov 1999 15:16:58 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SQL statements: begin and end"
},
{
"msg_contents": "At 09:59 AM 11/23/99 -0500, Tom Lane wrote:\n\n>I went looking in the SQL spec to confirm this, and was rather\n>startled to discover that BEGIN is not SQL at all! The SQL spec\n>seems to envision the always-in-a-transaction-block model of operation.\n>They have\n> <commit statement> ::=\n> COMMIT [ WORK ]\n>which is defined to commit the current transaction; but a new xact is\n>implicitly started by the next SQL operation (cf. sec. 4.28 in SQL92).\n\nThis is how Oracle's SQL*Plus works.\n\n>If we wanted to be completely standards-conformant, I think we'd have to\n>abandon the begin/end model entirely. I wouldn't support that ---\n>auto commit of standalone statements is too convenient.\n\nOracle supports two modes, AFAIK (my Oracle experience is limited, but\nnot entirely non-existent). You can set it to autocommit mode. The\nTcl API I'm familiar with (for the web server AOLserver) works in \nautocommit mode. You feed it a (guess what?) \"BEGIN\" dml statement\nto switch off autocommit. Then you feed it a \"COMMIT\", it\ncommits the transaction, and tells Oracle to go back to autocommit mode.\n\nJust like PostgreSQL...\n\nThe fact that SQL*Plus defaults to NOT auto-commit isn't\nnecessarily a bad thing, I might add - if you boo-boo when typing\nin deletes and updates, forgetting an \"and\" clause perhaps, you\ncan type \"abort\". In psql, I always do a \"begin\" before doing any\ndeletes or updates to the database which backs my website, watching\nto make sure that the number of rows changed or deleted jives with\nmy expectation before committing.\n\nI don't mind the way Postgres does stuff, though for someone used\nto Oracle the fact that psql is autocommitting might come as an\nunpleasant surprise.\n\n>Bottom line: pointing at the spec is a very weak argument for telling\n>people how to spell their begin/end statements.\n\nFolks who do this should probably at least read the standard first.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 23 Nov 1999 07:29:12 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SQL statements: begin and end "
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I'm pretty sure that Andrea's biggest objection was to the acceptance\n> and use of END, which has no connection in official SQL to transaction\n> completion but only to block delimiting for cursor loops. It is almost\n> certainly a holdover from PostQuel.\n\n> Any thoughts on whether AZ's suggestion for dropping END in this\n> context should be done for 7.0? We certainly could make an effort to\n> at least purge it from our examples in the docs...\n\nEven AZ wasn't suggesting dropping it for 7.0!\n\nWe ought to check what other RDMSs are doing in this area before making\nany decisions. The fact that we've got so many ways to spell \"BEGIN\"\nsuggests to me that some of them were tacked on for compatibility with\nother products...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 Nov 1999 10:36:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SQL statements: begin and end "
},
{
"msg_contents": "On Tue, 23 Nov 1999, Tom Lane wrote:\n\n> My feeling is that application authors have already decided whether\n> they prefer \"BEGIN\" or \"BEGIN TRANSACTION\" or \"BEGIN WORK\", and trying\n> to enforce a single standard now is just going to irritate people and\n> break existing applications. I vote for leaving well enough alone.\n> \n> > Bruce, please don't use \"begin\" and \"end\" in your book.\n> \n> Sure, it makes sense for the book to consistently use \"BEGIN WORK\"\n> and \"COMMIT WORK\", which are probably the least likely to confuse\n> novices. But I think actually removing the other variants would be\n> just an exercise in causing trouble.\n\nI don't know how Oracle or most everyone else is doing it, but Sybase\nuses:\n\nbegin transaction [transaction name] \nand \ncommit transaction [transaction name]\n\nI don't see an end transaction in the quick ref, but they do have a:\n\nbegin \n statement block\nend\n\nin there. \n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 23 Nov 1999 11:15:21 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SQL statements: begin and end "
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> I see everybody using the following PostgreSQL statements:\n> \n> \"begin\" instead of \"begin work\"\n> \"end\" instead of \"commit work\"\n> \n> This is really bad, because it is not standard, and can easily be taken for\n> a statement block, which it is definitely not ! It is a transaction block.\n\n> \n> I vote for issuing a NOTICE for these in V7 and remove them in V8,\n> at least the single \"end\"\n\nNot sure on this one. Why not let them use it?\n\n> \n> Bruce, please don't use \"begin\" and \"end\" in your book.\n\nOK.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Nov 1999 13:04:14 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL statements: begin and end"
}
] |
[
{
"msg_contents": "\nForwarded from webmaster.\n\n\n---------- Forwarded message ----------\n Date: Mon, 22 Nov 1999 22:17:14 -0600\n From: Brian D. Woodruff <[email protected]>\n Subject: RE: file name error\n\nooh - sorry to be so vague! It's in a frame set, linked from Administration\nin the Documentation section. The URL is:\n\nhttp://postgresql.org/docs/admin/install761.htm\n\nsteps in the list which have problems are:\n\n4, 6, 10, and 26\n\ndue to the \"v\" in the filename which is not in the filename on the FTP server.\n\n\nother parts of the install process have incorrect paths, for instance, in\nstep 11, there is a change directory issued to:\n\n/usr/src/pgsql/src/\n\nwhich should really go to:\n\n/usr/src/pgsql/postgresql-6.5.3/src/\n\nThis occurs in several places in the doc.\n\nApparently the path above is where the tar file puts things.\n\nOther than that, the installation went well! This installation is on my\nthird server. Having fun with it :-)\n\nYou can see it in action on the order page of www.IncWay.com :-)\n\nAgain, I hope I'm being helpful! Thanks!\n\nBDW\n\n\nAt 07:09 PM 11/22/99 -0500, you wrote:\n>\n>On 21-Nov-99 Brian D. Woodruff wrote:\n>> \n>> please pass this on to the appropriate parties ...\n>> \n>> the file name referenced in the installation guide is \n>> \n>> postgresql-v6.5.3.tar.gz\n>> \n>> whereas the real file name has no \"v\" before the six on the FTP site.\n>> \n>> I suppose a soft link would be a good \"quick fix\"!\n>> \n>> no big deal, but it means c/p installation doesn't work!\n>> \n>\n>Which installation guide? \n>\n>Vince.\n>-- \n>==========================================================================\n>Vince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n> # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n>==========================================================================\n>\n>\n>\n>\n\n\n",
"msg_date": "Tue, 23 Nov 1999 05:34:08 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: file name error (fwd)"
},
{
"msg_contents": "> http://postgresql.org/docs/admin/install761.htm\n> steps in the list which have problems are:\n> 4, 6, 10, and 26\n> due to the \"v\" in the filename which is not in the filename on the FTP server.\n> other parts of the install process have incorrect paths\n\nWas someone going to rewrite the installation procedure? I was looking\nat it myself and agree with the thread from a month or two ago that it\nis *much* too complicated and verbose. Several steps should be moved\nto subsequent sections as reference or backup material (e.g. flex,\nregression tests), and the whole thing could be substantially\ncondensed.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 23 Nov 1999 14:28:51 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] RE: file name error (fwd)"
},
{
"msg_contents": "On Tue, 23 Nov 1999, Thomas Lockhart wrote:\n\n> > http://postgresql.org/docs/admin/install761.htm\n> > steps in the list which have problems are:\n> > 4, 6, 10, and 26\n> > due to the \"v\" in the filename which is not in the filename on the FTP server.\n> > other parts of the install process have incorrect paths\n> \n> Was someone going to rewrite the installation procedure? I was looking\n> at it myself and agree with the thread from a month or two ago that it\n> is *much* too complicated and verbose. Several steps should be moved\n> to subsequent sections as reference or backup material (e.g. flex,\n> regression tests), and the whole thing could be substantially\n> condensed.\n\nI was working on something that takes it down to about 5 steps before\nthe initdb phase of the installation - redirecting gmake's output was\noptional. I'll see if it's still around.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 23 Nov 1999 09:58:04 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] RE: file name error (fwd)"
},
{
"msg_contents": "> \n> Forwarded from webmaster.\n> \n> \n> ---------- Forwarded message ----------\n> Date: Mon, 22 Nov 1999 22:17:14 -0600\n> From: Brian D. Woodruff <[email protected]>\n> Subject: RE: file name error\n> \n> ooh - sorry to be so vague! It's in a frame set, linked from Administration\n> in the Documentation section. The URL is:\n> \n> http://postgresql.org/docs/admin/install761.htm\n> \n> steps in the list which have problems are:\n> \n> 4, 6, 10, and 26\n> \n> due to the \"v\" in the filename which is not in the filename on the FTP server.\n> \n> \n> other parts of the install process have incorrect paths, for instance, in\n> step 11, there is a change directory issued to:\n> \n> /usr/src/pgsql/src/\n> \n> which should really go to:\n> \n> /usr/src/pgsql/postgresql-6.5.3/src/\n> \n> This occurs in several places in the doc.\n> \n> Apparently the path above is where the tar file puts things.\n> \n> Other than that, the installation went well! This installation is on my\n> third server. Having fun with it :-)\n> \n> You can see it in action on the order page of www.IncWay.com :-)\n> \n> Again, I hope I'm being helpful! Thanks!\n\nThanks for the tips. The paths clearly needed fixing in many places. \nThe new version should appear on the website tomorrow.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Nov 1999 13:24:46 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] RE: file name error (fwd)"
},
{
"msg_contents": "On 1999-11-23, Thomas Lockhart mentioned:\n\n> Was someone going to rewrite the installation procedure? I was looking\n> at it myself and agree with the thread from a month or two ago that it\n> is *much* too complicated and verbose. Several steps should be moved\n> to subsequent sections as reference or backup material (e.g. flex,\n> regression tests), and the whole thing could be substantially\n> condensed.\n\nAlong the way, I am currently collecting evidence for an\ninstallation/makefile/autoconf/source tree cleanup. The documentation\nupdate should probably be done along with this. When I get some time (not\nbefore the end of this year) I would be willing to work on this, perhaps\ntogether with Vince, as he had some good ideas a while back.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Wed, 24 Nov 1999 18:28:23 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [DOCS] RE: file name error (fwd)"
},
{
"msg_contents": "On Wed, 24 Nov 1999, Peter Eisentraut wrote:\n\n> On 1999-11-23, Thomas Lockhart mentioned:\n> \n> > Was someone going to rewrite the installation procedure? I was looking\n> > at it myself and agree with the thread from a month or two ago that it\n> > is *much* too complicated and verbose. Several steps should be moved\n> > to subsequent sections as reference or backup material (e.g. flex,\n> > regression tests), and the whole thing could be substantially\n> > condensed.\n> \n> Along the way, I am currently collecting evidence for a\n> installation/makefile/autoconf/source tree cleanup. The documentation\n> update should probably be done along with this. When I get some time (not\n> before the end of this year) I would be willing to work on this, perhaps\n> together with Vince, as he had some good ideas a while back.\n\nI looked around on three different machines for the document I was working\non when I finally realized why it's nowhere to be found. I deleted it\n'cuze it became clear that configure needed to be changed and there's a\ngood chance it's beyond me to do it. I was thinking along the lines of\nan additional --with for the postgres user and using that for install.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 24 Nov 1999 13:30:18 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [DOCS] RE: file name error (fwd)"
}
] |
[
{
"msg_contents": "\n> > > but not yet an access method that actually makes it fast...\n> > \n> > Well that is of course only half the fun :-(\n> > Could it be done like an index access, \n> > where the first part of the work is skipped, or tunneled through ?\n> \n> I would commit the implementation of direct scan by tuple id soon.\n\nSounds like it is going to be a fantastic christmas :-)\n\nThank you\nAndreas\n",
"msg_date": "Tue, 23 Nov 1999 11:37:25 +0100",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: [HACKERS] Getting OID in psql of recent insert "
}
] |
[
{
"msg_contents": "\n> >Bottom line: pointing at the spec is a very weak argument for telling\n> >people how to spell their begin/end statements.\n> \n> Folks who do this should probably at least read the standard first.\n\nOk, as I stated, I dislike the single \"end\" most. You are right that I\nshould have \nchecked the spec first. By standard I meant the common practice of other\nDBMS's,\nwhich is of course bad wording, Sorry.\n\nBottom line the \"end\" is evil, even in respect to SQL92 SQL98 ...\n\nAndreas\n",
"msg_date": "Tue, 23 Nov 1999 16:42:29 +0100",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] SQL statements: begin and end "
}
] |
[
{
"msg_contents": "> \n> I don't see an end transaction in the quick ref, but they do have a:\n> \n> begin \n> statement block\n> end\n\nWhich of course has nothing to do with transactions in Sybase :-)\n\nAndreas\n",
"msg_date": "Tue, 23 Nov 1999 17:40:49 +0100",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] SQL statements: begin and end "
},
{
"msg_contents": "\nOn 23-Nov-99 Zeugswetter Andreas SEV wrote:\n>> \n>> I don't see an end transaction in the quick ref, but they do have a:\n>> \n>> begin \n>> statement block\n>> end\n> \n> Which of course has nothing to do with transactions in Sybase :-)\n\nstatement block != transactions I didn't think I even implied that. \nWhat I was pointing out was that Sybase ... why bother explaining.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Tue, 23 Nov 1999 14:57:05 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: AW: [HACKERS] SQL statements: begin and end"
}
] |
[
{
"msg_contents": "We have a table with ~30 columns, named 'people' and we were going to\ncreate a view for all records with 'relationship' equal to 1. The\ndatabase complains when using the '*' placeholder:\n\nalbourne=> CREATE VIEW employees AS SELECT * FROM people WHERE\nrelationship = 1; \nERROR: DefineQueryRewrite: rule plan string too big.\n\nbut accepts the same 30 columns on the command:\n\nalbourne=> create view employees as select\nid,title,first_name,middle_name,last_name,suffix,company,job_title,address,city,zipcode,country,home_phone,home_fax,mobile,bus_phone,bus_fax,other_phone,e_mail_1,e_mail_2,url,birthday,christmas,brochure,golf,croquet,comment\nfrom people where relationship=1;\nCREATE\n\n'*' is SQL92 (I think) so is this a bug or a known limitation?\n\nThe system is PostgreSQL 6.5.2 on alphaev6-dec-osf4.0f, compiled by cc.\n\nThanks\nAlessio\n\n-- \nAlessio F. Bragadini\t\[email protected]\nAPL Financial Services\t\thttp://www.sevenseas.org/~alessio\nNicosia, Cyprus\t\t \tphone: +357-2-750652\n\nYou are welcome, sir, to Cyprus. -- Shakespeare's \"Othello\"\n",
"msg_date": "Tue, 23 Nov 1999 20:03:48 +0200",
"msg_from": "Alessio Bragadini <[email protected]>",
"msg_from_op": true,
"msg_subject": "A bug or a feature?"
},
{
"msg_contents": "Alessio F. Bragadini wrote:\n\n> We have a table with ~30 columns, named 'people' and we were going to\n> create a view for all record with 'relationship' equal to 1. The\n> database complains where using the '*' placeholder:\n>\n> albourne=> CREATE VIEW employees AS SELECT * FROM people WHERE\n> relationship = 1;\n> ERROR: DefineQueryRewrite: rule plan string too big.\n>\n> but accepts the same 30 columns on the command:\n>\n> [...]\n>\n> '*' is SQL92 (I think) so is this a bug or a known limitation?\n>\n> The system is PostgreSQL 6.5.2 on alphaev6-dec-osf4.0f, compiled by cc.\n\n It's a well known limitation in versions up to 6.5.*.\n\n I've lowered the problem in the 7.0 tree by compressing the\n rule plan string, using a new data type. A 'SELECT *' view\n from a table with 54 fields uses only about 25% of the\n available space then.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 24 Nov 1999 03:36:17 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A bug or a feature?"
},
{
"msg_contents": "Alessio Bragadini <[email protected]> writes:\n> We have a table with ~30 columns, named 'people' and we were going to\n> create a view for all record with 'relationship' equal to 1. The\n> database complains where using the '*' placeholder:\n> albourne=> CREATE VIEW employees AS SELECT * FROM people WHERE\n> relationship = 1; \n> ERROR: DefineQueryRewrite: rule plan string too big.\n> but accepts the same 30 columns on the command:\n\nThere is a limit on the length of rule plans :-(. Jan has implemented\ncompression of rule plan strings as a partial workaround for 7.0, and\nthe final solution will come when we eliminate tuple length limits.\n\nIn the meantime, the interesting question is why two apparently\nequivalent queries yield rule plans of different lengths. As far as\nI can tell, '*' and explicitly listing the fields *do* yield exactly\nthe same results. My guess is that your query is right at the hairy\nedge of the length limit, such that one or two characters more or less\nmake the difference. The rule plan does include a couple of instances\nof the name of the view, so if you used a longer view name in one case\nthan the other, that could explain why one worked and the other didn't.\n\nIf that's not it then I'm baffled...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Nov 1999 01:07:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A bug or a feature? "
}
] |
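A note on reproducing Tom's explanation above: the stored rule plan string includes the view name, so when a `SELECT *` view sits right at the length limit, the name itself can tip it over. A hypothetical psql sketch (names invented, behavior as described for 6.5.x):

```sql
-- Hypothetical 6.5.x session: same table, same '*' expansion.
-- A long view name can push the rule plan string past the limit ...
CREATE VIEW employees_relationship_one AS
    SELECT * FROM people WHERE relationship = 1;
-- ERROR:  DefineQueryRewrite: rule plan string too big.

-- ... while a shorter name for the same definition may just fit.
CREATE VIEW emp1 AS
    SELECT * FROM people WHERE relationship = 1;
```

This is only a stopgap for the hairy-edge case Tom describes; the real fix is the plan-string compression in the 7.0 tree.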
[
{
"msg_contents": "\n> > Ok, the fact, that the row changed is known, because we can \n> check the \n> > snapshot. We also know, that the new row must be near the \n> physical end \n> > of the table, so maybe we could do a backward scan ?\n> > Maybe we could also simply bail out, like Oracle with a \n> \"snapshot too old\" \n> > error message ?\n> > (I know that this is not the same situation as the stated \n> Oracle error)\n> \n> That is too strange. If the tuple is superceeded, not sure how to\n> handle that. My guess is that we just let them access it. How do we\n> know if it is still a valid tuple in their own transaction? \n> I am unsure\n> of this, though. Maybe there is a way to know.\n\nI think we do know, since when doing any seq scan we also have to know.\n\n> > Do we use the sql syntax 'where rowid = :xxx' for it, \n> > or do we say 'where ctid = :xxx'.\n> > I would like the rowid naming, because Informix, Oracle \n> (and DB/2 ?) use it.\n> \n> Is Informix rowid an actual physical row location. \n\nYes, it consists of page number and slot id, and is one integer.\n\nBasically the same thing in Oracle.\nHow the value is printed is imho irrelevant, since it is not for the eye.\n\n> If so, it would be\n> nice to auto-rename the column references to ctid on input. \n> Good idea.\n\nAndreas\n",
"msg_date": "Tue, 23 Nov 1999 20:12:49 +0100",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: [HACKERS] Getting OID in psql of recent insert"
}
] |
[
{
"msg_contents": "> > I see everybody using the following PostgreSQL statements:\n> > \n> > \"begin\" instead of \"begin work\"\n> > \"end\" instead of \"commit work\"\n> > \n> > This is really bad, because it is not standard, and can \n> easily be taken for\n> > a statement block, which it is definitely not ! It is a \n> transaction block.\n> \n> > \n> > I vote for issuing a NOTICE for these in V7 and remove them in V8,\n> > at least the single \"end\"\n> \n> Not sure on this one. Why not let them use it?\n\n1. It is actually already forbidden inside plpgsql, since there it is\na statement block, no ?\n2. Somebody stated that it has a different meaning in the embedded SQL spec ?\n(I did it again, have not checked :-()\n\nI actually don't have strong feelings on this, just thought I'd bring it up,\nbecause it personally always misleads me.\n\nAndreas\n",
"msg_date": "Tue, 23 Nov 1999 20:20:32 +0100",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Re: SQL statements: begin and end"
}
] |
[
{
"msg_contents": "\n> They get the location they ask for, or a failure. Hunting around for\n> the new tuple seems like a real waste, and if someone vacuums, it is\n> gone, no?\n\nIt is probably worse, since they might even get the wrong row, \nbut that's the same in Informix and Oracle.\n\nIn Informix:\n\na: selects rowid\nb: updates row, row grows, does not fit in page, is relocated\nc: inserts rows, physical location of rowid is reused\na: selects row by rowid, gets different row --> bummer\n\nAndreas\n",
"msg_date": "Tue, 23 Nov 1999 20:29:11 +0100",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: [HACKERS] Getting OID in psql of recent insert"
},
{
"msg_contents": "> \n> > They get the location they ask for, or a failure. Hunting around for\n> > the new tuple seems like a real waste, and if someone vacuums, it is\n> > gone, no?\n> \n> It is probably worse, since they might even get the wrong row, \n> but that's the same in Informix and Oracle.\n> \n> In Informix:\n> \n> a: selects rowid\n> b: updates row, row grows, does not fit in page, is relocated\n> c: inserts rows, physical location of rowid is reused\n> a: selects row by rowid, gets different row --> bummer\n>\n\nIn my implementation, a scan by an old (updated) tuple id fails.\nFor example,\n\n\t=> create table t (dt int4);\n\tCREATE\n\t=> insert into t values (1);\n\tINSERT 18601 1\n\t=> select ctid,* from t;\n\tctid |dt\n\t-----+--\n\t(0,1)| 1\n\t(1 row)\n\n\t=> select * from t where ctid='(0,1)';\n\tdt\n\t--\n\t 1\n\t(1 row)\n\n\t=> update t set dt=2;\n\tUPDATE 1\n\t=> select * from t where ctid='(0,1)';\n\tdt\n\t--\n\t(0 rows)\n\n\nIn order to get new tids, I provided the functions currtid() and currtid2().\n\n\t=> select currtid2('t','(0,1)');\n\tcurrtid2\n\t--------\n\t(0,2)\n\t(1 row)\n\n\t=> select * from t where ctid='(0,2)';\n\tdt\n\t--\n\t 2\n\t(1 row)\n\nOf course, this function is not effective after a vacuum.\nIf you want to detect the change by vacuum, keep oids together with tids.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n\n",
"msg_date": "Wed, 24 Nov 1999 16:36:02 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: AW: AW: [HACKERS] Getting OID in psql of recent insert"
}
] |
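Following the hazard Andreas describes, a tid-based update can be guarded so that a reused slot affects zero rows instead of the wrong row, in the spirit of Hiroshi's "keep oids together with tids" suggestion. A sketch in the style of Hiroshi's session (the ctid and oid values shown are invented):

```sql
-- Fetch the row's current tid together with its oid.
SELECT ctid, oid, dt FROM t WHERE dt = 1;
-- Suppose this returned ctid '(0,1)' and oid 18601.

-- Updating by tid alone can hit an unrelated row if the slot was
-- reused; also matching the oid makes a stale tid match nothing.
UPDATE t SET dt = 2 WHERE ctid = '(0,1)' AND oid = 18601;
```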
[
{
"msg_contents": "Hi,\n\nI'm reposting it here, since I haven't received any help after posting it\nat the general\nmailing-list, and sending in a bug-report at the bugs mailing-list.\n\nI'm willing to forward the bug-report to any email address I'm asked to\nsend it to,\nbut since it's a long one (contains a full script to reproduce this error)\nI'm a bit reluctant \nto do so.\n\nJoost Roeleveld\n\n=========== Forwarded Message follows ==========\nHi,\n\nI have found what appears to be a bug in PostgreSQL, or is this a feature?\nI don't think it's a feature, because the PostgreSQL documentation states it\nshould work.\n\nWhen creating delete-rules for views, I have found that only the first\nexpression is being executed, when\nusing multiple expressions.\n\nI have managed to do this for Insert, and I think for Update as well...\nalthough I haven't gotten around to testing that yet.\n\nThe following is the RULE-definition I use, it's for a View (\nbedrijven_view ) which consists of\ntwo tables ( adressen_table and bedrijven_table ).\nWhen I change the sequence (e.g. put the delete from bedrijven_table first)\nit still only does the first statement.\n\nCREATE RULE delete_bedrijven_view AS ON DELETE\n\tTO bedrijven_view\n\tDO INSTEAD (\n\t\tDELETE FROM adressen_table\n\t\t\tWHERE adres_id = get_adres_nummer(straatnaam,\n\t\t\t\thuisnummer,postcode,land);\n\t\tDELETE FROM bedrijven_table\n\t\t\tWHERE firma_id = firma_id;\n\t);\n\n\nIf you require more information, for instance a full list of statements\nthat can reproduce the problem,\nplease let me know, and I'll forward the bug-report I sent in yesterday\nmorning.\n\nwith kind regards,\n\nJoost Roeleveld\n\n",
"msg_date": "Tue, 23 Nov 1999 20:57:40 +0100",
"msg_from": "\"J. Roeleveld\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: [GENERAL] Problem with CREATE RULE <something> ON DELETE\n\t(PostgreSQL only executes the first expression)"
}
] |
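Separate from the multi-statement rule bug being reported, the rule as posted has a second problem: the unqualified `WHERE firma_id = firma_id` is a tautology that matches every row. A sketch of the same rule with OLD-qualified references (same hypothetical schema as in the report):

```sql
CREATE RULE delete_bedrijven_view AS ON DELETE
    TO bedrijven_view
    DO INSTEAD (
        -- OLD refers to the view row being deleted, so each
        -- DELETE is restricted to the matching base-table rows.
        DELETE FROM adressen_table
            WHERE adres_id = get_adres_nummer(OLD.straatnaam,
                OLD.huisnummer, OLD.postcode, OLD.land);
        DELETE FROM bedrijven_table
            WHERE firma_id = OLD.firma_id;
    );
```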
[
{
"msg_contents": "> > > Good point, but (AFAIK) you could only use it for tables that you were\n> > > sure no other client was updating in parallel. Otherwise you might be\n> > > updating a just-obsoleted tuple. Or is there a solution for that?\n> > >\n> > > > Is someone still working on the xid access ?\n> > >\n> > > I think we have the ability to refer to CTID in WHERE now, but not yet an\n> > > access method that actually makes it fast...\n> > \n> > Hiroshi supplied a patch to allow it in the executor, and I applied it.\n> >\n> \n> Bruce,could you apply my attached patch ?\n> I have to add 3 new files but couldn't do 'cvs add'\n> the files on my machine.\n> Am I mistaken ?\n> I couldn't understand the reason now.\n\nApplied. No idea why the add didn't work there. It worked here.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Nov 1999 14:58:47 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: [HACKERS] Getting OID in psql of recent insert"
},
{
"msg_contents": "> \n> > > > Good point, but (AFAIK) you could only use it for tables \n> that you were\n> > > > sure no other client was updating in parallel. Otherwise \n> you might be\n> > > > updating a just-obsoleted tuple. Or is there a solution for that?\n> > > >\n> > > > > Is someone still working on the xid access ?\n> > > >\n> > > > I think we have the ability to refer to CTID in WHERE now, \n> but not yet an\n> > > > access method that actually makes it fast...\n> > > \n> > > Hiroshi supplied a patch to allow it in the executor, and I \n> applied it.\n> > >\n> > \n> > Bruce,could you apply my attached patch ?\n> > I have to add 3 new files but couldn't do 'cvs add'\n> > the files on my machine.\n> > Am I mistaken ?\n> > I couldn't understand the reason now.\n> \n> Applied. No idea why the add didn't work there. It worked here.\n>\n\nThanks a lot.\n\nI don't know CVS well.\nCould someone teach me ?\n\nIt seems that 'cvs add' on my current machine connects to\npostgresql.org.\nIs it right ?\nIsn't 'cvs add' local ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Wed, 24 Nov 1999 14:00:26 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: AW: [HACKERS] Getting OID in psql of recent insert"
},
{
"msg_contents": "> > > the files on my machine.\n> > > Am I mistaken ?\n> > > I couldn't understand the reason now.\n> > \n> > Applied. No idea why the add didn't work there. It worked here.\n> >\n> \n> Thanks a lot.\n> \n> I don't know CVS well.\n> Could someone teach me ?\n> \n> It seems that 'cvs add' on my current machine connects to\n> postgresql.org.\n> Is it right ?\n> Isn't 'cvs add' local ?\n> \n\nIt is local, but has to register it with the server.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Nov 1999 00:07:35 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: [HACKERS] Getting OID in psql of recent insert"
}
] |
[
{
"msg_contents": ">> > Which of course has nothing to do with transactions in Sybase :-)\n>>\n>> statement block != transactions I didn't think I even implied that. \n>> What I was pointing out was that Sybase ... why bother explaining.\n>> \n>> Vince.\nVince, I suspect that Andreas knew exactly what you meant ;-)\n",
"msg_date": "Tue, 23 Nov 1999 22:57:29 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: AW: [HACKERS] SQL statements: begin and end"
}
] |
[
{
"msg_contents": ">From: Tom Lane <[email protected]>\n>I think a reasonable answer to this is to restrict VACUUM on any\n>table to be allowed only to the table owner and Postgres superuser.\n>Does anyone have an objection or better idea?\n\nIn the dim and distant past I produced a patch that put vacuum\ninto the list of things that you could GRANT on a per-table\nbasis. I don't know what effort it would take to rework that\nfor the current tree or if it would be worth it.\n\nI think your suggestion above would be perfect if you never\nneed to allow anyone else to vacuum a table.\n\nI've attached the old patch below.\n\nKeith.",
"msg_date": "Tue, 23 Nov 1999 22:18:53 +0000 (GMT)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] VACUUM as a denial-of-service attack"
},
{
"msg_contents": "Keith Parks <[email protected]> writes:\n>> From: Tom Lane <[email protected]>\n>> I think a reasonable answer to this is to restrict VACUUM on any\n>> table to be allowed only to the table owner and Postgres superuser.\n>> Does anyone have an objection or better idea?\n\n> In the dim and distant past I produced a patch that put vacuum\n> into the list of things that you could GRANT on a per-table\n> basis. I don't know what effort it would take to rework that\n> for current or if it would be worth it.\n\nThanks for the code, but for now I just threw in a quick pg_ownercheck\ncall: VACUUM will now vacuum all tables if you are the superuser, else\njust the tables you own, skipping the rest with a NOTICE. What you had\nlooked like more infrastructure than I thought the problem was worth...\nI suspect most people will run VACUUMs from the superuser account\nanyway...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 28 Nov 1999 23:49:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VACUUM as a denial-of-service attack "
}
] |
[
{
"msg_contents": "\nTom, let me know what caches you want on pg_statistics, or if you would\nprefer to do it yourself, instructions are at the top of syscache.c.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Nov 1999 19:44:02 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cache on pg_statistics"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, let me know what caches you want on pg_statistics, or if you would\n> prefer to do it yourself, instructions are at the top of syscache.c.\n\nAFAIK the only index we need is one on starelid + staattnum. If you\nhave the time, please put it in and teach getattstatistics() in\nbackend/utils/adt/selfuncs.c to use it. That seems to be the only\nplace that reads pg_statistic.\n\nIf you wanted to include staop as a third column, then the index could\nbe UNIQUE. (We don't actually use staop at the moment, but I think it\nshould be left in place for possible future use.)\n\nvacuum.c may also need to be taught to fill the index...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Nov 1999 01:16:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cache on pg_statistics "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Tom, let me know what caches you want on pg_statistics, or if you would\n> > prefer to do it yourself, instructions are at the top of syscache.c.\n> \n> AFAIK the only index we need is one on starelid + staattnum. If you\n> have the time, please put it in and teach getattstatistics() in\n> backend/utils/adt/selfuncs.c to use it. That seems to be the only\n> place that reads pg_statistic.\n> \n> If you wanted to include staop as a third column, then the index could\n> be UNIQUE. (We don't actually use staop at the moment, but I think it\n> should be left in place for possible future use.)\n> \n> vacuum.c may also need to be taught to fill the index...\n\nDone. Let me know how it works. I had to add selop param to the\nfunction getattstatistics() because I needed it to find the cache entry\nbecause that field is in the cache.\n\nNot sure how to test if it is working.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Nov 1999 19:21:07 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Cache on pg_statistics"
}
] |
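On "not sure how to test if it is working": one hypothetical smoke test (table name invented) exercises both halves of the change — VACUUM ANALYZE writes pg_statistic rows, and planning a range query reads them back through getattstatistics(), now via the syscache:

```sql
CREATE TABLE stat_smoke (v int4);
INSERT INTO stat_smoke VALUES (1);
INSERT INTO stat_smoke VALUES (2);
INSERT INTO stat_smoke VALUES (3);
-- Fills pg_statistic; with the change, vacuum must also
-- maintain the new starelid/staattnum/staop index.
VACUUM ANALYZE stat_smoke;
-- Selectivity estimation for the range clause calls
-- getattstatistics(), which now does a cache lookup.
EXPLAIN SELECT * FROM stat_smoke WHERE v < 2;
```

If the cache path is broken, the EXPLAIN would be expected to fail or fall back to default selectivities rather than the ANALYZE-derived ones.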
[
{
"msg_contents": "I'm going to commit changes to make lztextlen() aware of\nmulti-byte. While doing the work, I found that no POSITION() or\nSUBSTRING() for lztext has been implemented in the file.\n\nBTW, is anybody working on making lztext indexable? If not, I will take\ncare of it with the above additions.\n--\nTatsuo Ishii\n\n",
"msg_date": "Wed, 24 Nov 1999 10:15:09 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "lztext.c"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n\n> I'm going to commit changes to make lztextlen() aware of\n> multi-byte. While doing the work, I found that no POSITION() or\n> SUBSTRING() for lztext has been implemented in the file.\n\n Thanks for that. I usually don't have multi-byte support\n compiled in, and it's surely better if you do the extension\n and tests.\n\n I know that a lot of functions are missing so far, especially\n comparison and the mentioned ones. I thought I'd get back to\n it after the multi-byte support is in.\n\n> BTW, is anybody working on making lztext indexable? If not, I will take\n> care of it with the above additions.\n\n IMHO something questionable.\n\n A compressed data type is preferred for storing large amounts of\n data. Indexing large fields OTOH is something to prevent by\n database design. The new type at hand offers reasonable\n compression rates only above some size of input.\n\n OTOOH, it might get someone around the btree split problems\n some of us encountered, which I was able to trigger with\n field contents above 2K already. In such a case it can be a\n last resort.\n\n I'd like to know what others think.\n\n Don't spend much effort on comparison and the SUBSTRING()\n things right now. I already have an additional, generalized\n decompressor in mind that can be used in the comparison, for\n example to decompress two values on the fly and stop\n comparison at the first difference, which usually happens\n early in two random datums.\n\n Tell me when you have the multi-byte (and maybe cyrillic?)\n stuff committed and I'll take my hands back on the code.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 24 Nov 1999 03:26:21 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] lztext.c"
},
{
"msg_contents": "> Don't spend much effort on comparison and the SUBSTRING()\n> things right now. I already have an additional, generalized\n> decompressor in mind that can be used in the comparison, for\n> example to decompress two values on the fly and stop\n> comparison at the first difference, which usually happens\n> early in two random datums.\n\nOk.\n\n> Tell me when you have the multi-byte (and maybe cyrillic?)\n> stuff committed and I'll take my hands back on the code.\n\nI have committed the changes just now, though cyrillic support is not\nincluded. I vaguely recall the discussion about the usefulness of\nthe cyrillic support.\n--\nTatsuo Ishii\n\n",
"msg_date": "Wed, 24 Nov 1999 12:52:53 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] lztext.c "
},
{
"msg_contents": "On Wed, 24 Nov 1999, Tatsuo Ishii wrote:\n\n> Date: Wed, 24 Nov 1999 12:52:53 +0900\n> From: Tatsuo Ishii <[email protected]>\n> To: Jan Wieck <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] lztext.c \n> \n> > Don't spend much efford for comparision and the SUBSTRING()\n> > things right now. I already have an additional, generalized\n> > decompressor in mind, that can be used in the comparision for\n> > example to decompress two values on the fly and stop\n> > comparision at the first difference, which usually happens\n> > early in two random datums.\n> \n> Ok.\n> \n> > Tell me when you have the multi-byte (and maybe cyrillic?)\n> > stuff committed and I'll take my hands back on the code.\n> \n> I have committed the changes just now, though cyrillic support is not\n> included. I vaguely recall the discussion about the usefullness of\n> the cyrillic support.\n\nIf you mean --recode you-re right.\n\n> --\n> Tatsuo Ishii\n> \n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 24 Nov 1999 10:09:04 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] lztext.c "
},
{
"msg_contents": "Tatsuo Ishii wrote:\n\n> > Don't spend much effort on comparison and the SUBSTRING()\n> > things right now. I already have an additional, generalized\n> > decompressor in mind that can be used in the comparison, for\n> > example to decompress two values on the fly and stop\n> > comparison at the first difference, which usually happens\n> > early in two random datums.\n>\n> Ok.\n>\n> > Tell me when you have the multi-byte (and maybe cyrillic?)\n> > stuff committed and I'll take my hands back on the code.\n>\n> I have committed the changes just now, though cyrillic support is not\n> included. I vaguely recall the discussion about the usefulness of\n> the cyrillic support.\n\n I added the comparison functions, operators and the default\n nbtree operator class for indexing.\n\n For the SUBSTR() and STRPOS(), I just checked the current\n setup and it automatically casts an lztext argument in these\n functions to text. I assume lztext can now be used in every\n place where text is allowed. Is it really worth blowing up\n the catalogs with rarely used functions that only gain some\n saved decompressed portion?\n\n Remember, the algorithm is optimized for decompression speed.\n It might save some time to do this for a comparison function\n used inside of index scans or btree operations, where it's\n likely to hit a difference early. But for something like\n STRPOS(), using the default cast and changing the STRPOS()\n match search itself into a KMP algorithm (instead of walking\n through the text and comparing each position against the\n pattern using strncmp) would outperform it in any case. With\n the byte-by-byte strncmp() method, we definitely implemented\n the slowest and most readable possibility.\n\n I think we should better spend our time adding an lzbpchar\n type, or working on compressed tables and tuple splitting to\n remove the size limits altogether.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 25 Nov 1999 03:23:52 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] lztext.c"
}
] |
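Jan's on-the-fly comparison idea can be sketched outside the backend. The sketch below uses Python and zlib purely for illustration — lztext's actual format is a different, decompression-optimized scheme and the real code would live in the backend in C — but it shows the early-exit shape: decompress both values incrementally and stop at the first differing byte.

```python
import zlib
from itertools import zip_longest

def stream_bytes(compressed: bytes, chunk: int = 64):
    """Yield decompressed bytes one at a time, feeding the
    decompressor small slices so the whole value is never inflated."""
    d = zlib.decompressobj()
    for i in range(0, len(compressed), chunk):
        yield from d.decompress(compressed[i:i + chunk])
    yield from d.flush()

def compare_compressed(a: bytes, b: bytes) -> int:
    """strcmp-style compare of two compressed values, stopping at the
    first difference instead of fully decompressing both datums."""
    for x, y in zip_longest(stream_bytes(a), stream_bytes(b)):
        if x != y:
            if x is None:      # a ran out first: a is a prefix of b
                return -1
            if y is None:
                return 1
            return -1 if x < y else 1
    return 0
```

As Jan notes, two random datums usually differ early, so the loop typically terminates after inflating only a small prefix of each value.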
[
{
"msg_contents": "(from www.linuxdoc.org)\n\nLatest News: \n Latest Document Updates: \nNovember 21, 1999 We are now accepting Docbook. We are NOT abandoning\nLinuxdoc, however with the increased usage of Docbook in DP projects\n(KDE, Postgres, Gnome) we are slowly moving towards Docbook. There is\na preliminary Docbook resource page available here. \n\nHmm. Tim Bynum, the LDP HowTo coordinator, had expressed interest in\nhaving more Postgres docs available from the LDP, and their web site\nmentions us explicitly in the note above. Hopefully we'll get some\nresolution of the docs issue sometime soon by moving to DocBook (our\nnative SGML source format), though I haven't heard back on anything\nspecific yet :(\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 24 Nov 1999 03:18:24 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "From the LDP web site..."
}
] |
[
{
"msg_contents": "Hi,\n\nIt would be nice if postmaster has its own pid file to send signals to\nit. I think the pid file could be placed under $PGDATA. Opinions?\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 24 Nov 1999 15:11:59 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "pid file for postmaster?"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> It would be nice if postmaster has its own pid file to send signals to\n> it. I think the pid file could be placed under $PGDATA. Opinions?\n\nYes, that's been discussed before, and I think it's even got an entry\non the TODO list. If you've got time to tackle it now, great!\n\n$PGDATA seems like the right place to put the file, since we can only\nhave one active postmaster at a time in a database directory.\n\nI assume you'll also create a script that sends SIGTERM or other\nrequested signal to the postmaster, using this file?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Nov 1999 01:33:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pid file for postmaster? "
},
{
"msg_contents": "From: Tom Lane <[email protected]>\n> Tatsuo Ishii <[email protected]> writes:\n> > It would be nice if postmaster has its own pid file to send signals to\n> > it. I think the pid file could be placed under $PGDATA. Opinions?\n> \n> Yes, that's been discussed before, and I think it's even got an entry\n> on the TODO list. If you've got time to tackle it now, great!\n> \n> $PGDATA seems like the right place to put the file, since we can only\n> have one active postmaster at a time in a database directory.\n\nRight.\n\n> I assume you'll also create a script that sends SIGTERM or other\n> requested signal to the postmaster, using this file?\n\nOf course:-) I'm thinking about to make an apachectl-like script.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 24 Nov 1999 15:38:31 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pid file for postmaster? "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> I assume you'll also create a script that sends SIGTERM or other\n>> requested signal to the postmaster, using this file?\n\n> Of course:-) I'm thinking about to make an apachectl-like script.\n\nIf you think Apache has a good user interface for a signal-sending\nscript, then by all means steal their design ;-). If not that, we\nshould steal someone else's. No reason to reinvent this wheel,\nnor to make sysadmins learn yet another variant on the theme.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Nov 1999 01:49:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pid file for postmaster? "
},
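An apachectl-style wrapper could look like the sketch below. Everything in it is an assumption rather than a committed interface: the pid-file name, its location under $PGDATA, and the postmaster invocation are invented for illustration.

```shell
#!/bin/sh
# Sketch of an apachectl-like control function; the pid-file path
# and postmaster options are assumptions, not a decided interface.
pg_ctl_sketch () {
    PGDATA=${PGDATA:-/usr/local/pgsql/data}
    PIDFILE="$PGDATA/postmaster.pid"
    case "$1" in
        start)
            # Assumed invocation; real flags depend on the setup.
            postmaster -D "$PGDATA" &
            ;;
        stop)
            kill -TERM "$(cat "$PIDFILE")"
            ;;
        status)
            kill -0 "$(cat "$PIDFILE")" && echo "postmaster running"
            ;;
        *)
            echo "usage: pg_ctl_sketch {start|stop|status}"
            ;;
    esac
}
```

One postmaster per $PGDATA means one pid file per data directory, which is what lets several postmasters coexist on one machine.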
{
"msg_contents": "On Wed, Nov 24, 1999 at 01:33:37AM -0500, Tom Lane wrote:\n> > it. I think the pid file could be placed under $PGDATA. Opinions?\n> ... \n> $PGDATA seems like the right place to put the file, since we can only\n> have one active postmaster at a time in a database directory.\n\nEhem, I think the correct place would be /var/run. At least that's what the\nfilesystem standard says IIRC.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Thu, 25 Nov 1999 08:20:33 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pid file for postmaster?"
},
{
"msg_contents": "Red Hat ALREADY creates a file \"postmaster.pid\" in the /var/lock directory.\nI don't know how far the /var/lock convention goes across different platforms,\nbut I recommend using IT or its equivalent, I still have scars from my OS/2\ndays where every product put its goodies in a different place and you had to\nguess where, how, and in what format.\n\n Regards,\n Tim Holloway\n\nTatsuo Ishii wrote:\n> \n> From: Tom Lane <[email protected]>\n> > Tatsuo Ishii <[email protected]> writes:\n> > > It would be nice if postmaster has its own pid file to send signals to\n> > > it. I think the pid file could be placed under $PGDATA. Opinions?\n> >\n> > Yes, that's been discussed before, and I think it's even got an entry\n> > on the TODO list. If you've got time to tackle it now, great!\n> >\n> > $PGDATA seems like the right place to put the file, since we can only\n> > have one active postmaster at a time in a database directory.\n> \n> Right.\n> \n> > I assume you'll also create a script that sends SIGTERM or other\n> > requested signal to the postmaster, using this file?\n> \n> Of course:-) I'm thinking about to make an apachectl-like script.\n> --\n> Tatsuo Ishii\n> \n> ************\n",
"msg_date": "Thu, 25 Nov 1999 08:19:45 -0500",
"msg_from": "Tim Holloway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pid file for postmaster?"
},
{
"msg_contents": "Tim Holloway <[email protected]> writes:\n> Red Hat ALREADY creates a file \"postmaster.pid\" in the /var/lock directory.\n\nIf they did it just like that, then they broke the ability to run more\nthan one postmaster on the same machine. Also, there is the question\nof what the permissions are on /var/lock. If they're tight then postgres\ncan't be an ordinary unprivileged user, which is bad. If they're loose\nthen anyone can come along and cause trouble by fiddling with the lock\nfiles.\n\nThere was considerable discussion of this whole area last year in\npg-hackers (check the thread \"flock patch breaks things here\" and\nrelated threads starting in late Aug. 1998). We were focusing mostly\non the use of lockfiles to ensure that one didn't accidentally start\ntwo postmasters in the same database dir and/or with the same port\nnumber; but if the lockfiles contain PIDs then of course they can also\nserve as a contact point for a signal-sender.\n\nTatsuo, if you have forgotten that discussion you may want to go back\nand re-read it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Nov 1999 10:59:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pid file for postmaster? "
},
{
"msg_contents": "You are quite correct. They assume that there will be one and only one\npostmaster, which may be started or stopped at runlevel switch or\nmanually via /etc/rc.d/init.d/postmaster stop|start|restart\n\nSimilar systems have made PIDfiles like:\n\n/var/run/postgres/5432\n\nWhich would get around the single-postmaster limitation and allow you to\nmake postgres own the PID directory. Whether this has traversal-rights\nissues or not, I don't know. Red Hat control starts the postmaster as an\n'su' process from root, and they may do the WRITING of the PIDfile from\nthat account.\n\nTom Lane wrote:\n> \n> Tim Holloway <[email protected]> writes:\n> > Red Hat ALREADY creates a file \"postmaster.pid\" in the /var/lock directory.\n> \n> If they did it just like that, then they broke the ability to run more\n> than one postmaster on the same machine. Also, there is the question\n> of what the permissions are on /var/lock. If they're tight then postgres\n> can't be an ordinary unprivileged user, which is bad. If they're loose\n> then anyone can come along and cause trouble by fiddling with the lock\n> files.\n> \n> There was considerable discussion of this whole area last year in\n> pg-hackers (check the thread \"flock patch breaks things here\" and\n> related threads starting in late Aug. 1998). We were focusing mostly\n> on the use of lockfiles to ensure that one didn't accidentally start\n> two postmasters in the same database dir and/or with the same port\n> number; but if the lockfiles contain PIDs then of course they can also\n> serve as a contact point for a signal-sender.\n> \n> Tatsuo, if you have forgotten that discussion you may want to go back\n> and re-read it.\n> \n> regards, tom lane\n",
"msg_date": "Thu, 25 Nov 1999 11:32:23 -0500",
"msg_from": "Tim Holloway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pid file for postmaster?"
},
{
"msg_contents": "> Red Hat ALREADY creates a file \"postmaster.pid\" in the /var/lock directory.\n> I don't know how far the /var/lock convention goes across different platforms,\n> but I recommend using IT or its equivalent, I still have scars from my OS/2\n> days where every product put its goodies in a different place and you had to\n> guess where, how, and in what format.\n\nProblem with that it is going to require root permission to create that\nfile or directory if it doesn't exist. I think we have to put it in the\npgsql/data directory. Sorry.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 Nov 1999 13:35:14 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pid file for postmaster?"
},
{
"msg_contents": "Michael Meskes wrote:\n> \n> On Wed, Nov 24, 1999 at 01:33:37AM -0500, Tom Lane wrote:\n> > > it. I think the pid file could be placed under $PGDATA. Opinions?\n> > ...\n> > $PGDATA seems like the right place to put the file, since we can only\n> > have one active postmaster at a time in a database directory.\n> \n> Ehem, I think the correct place would be /var/run. At least that's what the\n> filesystem standard says IIRC.\n\nBut that forces us to distinguish between several running backends, with the \nmain aim of _not_ allowing two distinct backends to be run from the same \n$PGDATA.\n\nwe could of course start naming them like /var/run/pgsql.pid.for.var.lib.pgsql\n\n------------\nHannu\n",
"msg_date": "Thu, 25 Nov 1999 21:38:09 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pid file for postmaster?"
},
{
"msg_contents": "On Thu, Nov 25, 1999 at 09:38:09PM +0200, Hannu Krosing wrote:\n> But that forces us to distinguish between several running backends, with the \n> main aim of _not_ allowing two distinct backends to be run from the same \n> $PGDATA.\n\nOops. It seems I did not completely read that mail.\n\n> we could of course start naming them like /var/run/pgsql.pid.for.var.lib.pgsql\n\nDoes not make sense IMO.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Thu, 25 Nov 1999 21:05:46 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pid file for postmaster?"
},
{
"msg_contents": "\nOn 25-Nov-99 Michael Meskes wrote:\n> On Wed, Nov 24, 1999 at 01:33:37AM -0500, Tom Lane wrote:\n>> > it. I think the pid file could be placed under $PGDATA. Opinions?\n>> ... \n>> $PGDATA seems like the right place to put the file, since we can only\n>> have one active postmaster at a time in a database directory.\n> \n> Ehem, I think the correct place would be /var/run. At least that's what the\n> filesystem standard says IIRC.\n\nI agree ...\n\n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Thu, 25 Nov 1999 23:13:06 +0300 (MSK)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pid file for postmaster?"
},
{
"msg_contents": "Just playing Devil's Advocate.\n\nHaving to have root persmission to set up a system is more the rule than the exception,\nthough. Under Windows NT, for example, you can't create the registry keys except as\nan administrator, even though unprivileged users will be setting the values.\nConsidering what having major DBMS installed could do to the system load, one could\neven argue that ONLY an administrator should be allowed to install it!\n\nBut that's another issue entirely.....\n\nBruce Momjian wrote:\n> \n> > Red Hat ALREADY creates a file \"postmaster.pid\" in the /var/lock directory.\n> > I don't know how far the /var/lock convention goes across different platforms,\n> > but I recommend using IT or its equivalent, I still have scars from my OS/2\n> > days where every product put its goodies in a different place and you had to\n> > guess where, how, and in what format.\n> \n> Problem with that it is going to require root permission to create that\n> file or directory if it doesn't exist. I think we have to put it in the\n> pgsql/data directory. Sorry.\n> \n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 Nov 1999 16:51:58 -0500",
"msg_from": "Tim Holloway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pid file for postmaster?"
},
{
"msg_contents": "[Charset KOI8-R unsupported, filtering to ASCII...]\n> \n> On 25-Nov-99 Michael Meskes wrote:\n> > On Wed, Nov 24, 1999 at 01:33:37AM -0500, Tom Lane wrote:\n> >> > it. I think the pid file could be placed under $PGDATA. Opinions?\n> >> ... \n> >> $PGDATA seems like the right place to put the file, since we can only\n> >> have one active postmaster at a time in a database directory.\n> > \n> > Ehem, I think the correct place would be /var/run. At least that's what the\n> > filesystem standard says IIRC.\n> \n> I agree ...\n> \n\nI think the idea was to put the file in /data, and symlink to /tmp or\n/var/run.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 Nov 1999 23:02:57 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pid file for postmaster?"
}
] |
[
{
"msg_contents": "\n> Sooner or later, however, they'll probably break.\n\nOr at least cause substantial headaches.\n \n> I'd not object if we removed these operators and instead provided\n> functions with the standard names log() and exp() for all the\n> non-integral numeric types. Comments?\n\nSounds very reasonable, and also improves readability and \nportability of the user's sql statements.\n\nAndreas\n",
"msg_date": "Wed, 24 Nov 1999 10:24:15 +0100",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: [HACKERS] Getting OID in psql of recent insert (; and : o\n\tperators)"
}
] |
[
{
"msg_contents": "> Hi,\n> \n> you committed a patch from Hiroshi about Tidscan access\n> yesterday. Now the sources miss the Tidscan.h, so I assume\n> it wasn't a unified diff or at least patch left it somewhere\n> else.\n\nMan, I am terrible. Take that keyboard away from me.\n\nFile added. I am in the middle of adding pg_statistic index, so initdb\neveryone. That change is coming in too, though there are no hooks to\nupdate or use the cache yet.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Nov 1999 11:52:27 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: missing Tidscan.h"
}
] |
[
{
"msg_contents": "For some reason I can never properly install a build from the development\nsources. I have always avoided it, but now I must. I don't have the very\nlatest sources, but it seems I am doing something fundamentally wrong here\nbecause all hell usually breaks loose when I try. So perhaps someone that\ndoes this all the time can give me a few hints here.\n\nI have a 6.5.* installation in /usr/local/pgsql (from /usr/src/pgsql).\n/usr/local/pgsql is in the path, /usr/local/pgsql/lib is in ld.config.\n(Linux system). The development sources I have in /home/peter/pgsql and I\nwant to install into /home/peter/postgres-cur. That's the setup.\n\n~/pgsql/src$ ./configure --prefix=/home/peter/postgres-cur\nworks fine.\n\n~/pgsql/src$ make\nbombs out somewhere in backend/optimizer with internal compiler error (gcc\n2.8.1) and/or weird make file complaints (GNU make 3.76.1). Try again with\negcs 2.91.66 works.\n\n~/pgsql/src$ make install\n\n~/postgres-cur/bin$ ./initdb --pglib=/home/peter/postgres-cur/lib \\\n--pgdata=/home/peter/postgres-cur/data\n\nWe are initializing the database system with username peter (uid=500).\nThis user will own all the files and must also own the server process.\n\nCreating Postgres database system directory /home/peter/postgres-cur/data\n\nCreating Postgres database system directory\n/home/peter/postgres-cur/data/base\n\nCreating Postgres database XLOG directory\n/home/peter/postgres-cur/data/pg_xlog\n\nCreating template database in /home/peter/postgres-cur/data/base/template1\n-boot: invalid option -- x\nUsage: postgres -boot [-d] [-C] [-F] [-O] [-Q] [-P portno] [dbName]\n d: debug mode\n C: disable version checking\n F: turn off fsync\n O: set BootstrapProcessing mode\n P portno: specify port number\ninitdb: could not create template database\n\nOkay, that's the first problem. So I removed that -x flag out of initdb\n(look for shell variable FIRSTRUN). 
Next try:\n\n~/postgres-cur/bin$ ./initdb --pglib=/home/peter/postgres-cur/lib \\\n--pgdata=/home/peter/postgres-cur/data\n\nWe are initializing the database system with username peter (uid=500).\nThis user will own all the files and must also own the server process.\n\nCreating template database in /home/peter/postgres-cur/data/base/template1\n syntax error 2370 : parse error\nCreating global classes in /home/peter/postgres-cur/data/base\n\nAdding template1 database to pg_database...\n\nVacuuming template1\nCreating public pg_user view\nCreating view pg_rules\nCreating view pg_views\nCreating view pg_tables\nCreating view pg_indexes\n\nDespite the error things seem to have been completed fine.\n\n~/postgres-cur/bin$ ./postmaster -p 6543 -D /home/peter/postgres-cur/data/\nDatabase system in directory /home/peter/postgres-cur/data/ is not\ncompatible with this version of Postgres, or we are unable to read the\nPG_VERSION file. Explanation from ValidatePgVersion: Version number in\nfile '/home/peter/postgres-cur/data//PG_VERSION' should be 7.0, not 6.5.\n\nNo data directory -- can't proceed.\n\nOkay, so I change all the 6.5's to 7.0's by hand and try again.\n\n~/postgres-cur/bin$ ./postmaster -p 6543 -D /home/peter/postgres-cur/data\nDEBUG: Data Base System is starting up at Wed Nov 24 17:58:40 1999\nFATAL 2: Open(cntlfile) failed: 2\nFATAL 2: Open(cntlfile) failed: 2\nStartup failed - abort\n\nAt this point I thought I'd better stop messing around and ask what's\ngoing on here.\n\nProbably the paths of my \"production\" installation and the development\ninstallation interfere with each other, but there must be a way I can do\nthis in a simple fashion and without affecting my running database.\n\nThanks,\n\tPeter\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Wed, 24 Nov 1999 18:28:54 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Development installation fails"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> For some reason I can never properly install a build from the development\n\nIt looks to me like you have a conflict against a production\ninstallation on the same machine. It is *not sufficient* to install\ninto an alternate directory; you must also select an alternate port\nnumber. See the new 'run_check.sh' regress test driver for a complete\nexample.\n\nJan reported recently that there were places that didn't pay attention\nto the -D switch, which is a bug that will likely bite you if you are\ntrying to install into a directory other than the configured-in one.\nHowever that doesn't seem to be the issue here.\n\nPersonally I build test versions with\n\n./configure --with-pgport=5440 --prefix=/users/postgres/testversion\n\n(adapt port and prefix to taste, of course) and then I don't have to\nworry about remembering to do the install specially.\n\nI dunno about the other problems you are seeing. I run a full build\nand install from scratch at least a couple times a week, just to be\nsure that current sources are not broken. They weren't as of ... hmm\n... 20:55 EST 22-Nov was my last CVS pull. Anyway:\n\n> ~/pgsql/src$ make\n> bombs out somewhere in backend/optimizer with internal compiler error (gcc\n> 2.8.1) and/or weird make file complaints (GNU make 3.76.1). Try again with\n> egcs 2.91.66 works.\n\nHmm. I have been using make 3.76.1 right along with no problems. What\nwere the make complaints again? Also, I'm still running gcc 2.7.2.2,\nso I'm a bit surprised to hear that 2.8.1 crashes. 
(Or maybe not ...\nthere's a reason I never updated to 2.8.* ...)\n\n> ~/postgres-cur/bin$ ./initdb --pglib=/home/peter/postgres-cur/lib \\\n> --pgdata=/home/peter/postgres-cur/data\n\nYou may need to have env variables PGDATA and PGLIB set during initdb\n(at least, the install docs recommended that last I looked) and you\ndefinitely need to have /home/peter/postgres-cur/bin in your PATH.\n\nIt looks to me like initdb.sh may need to do \"export PGDATA PGLIB\" to\nmake sure that whatever values it gets from command line switches are\npassed down to the programs it invokes... the PATH is just something\nyou have to get right...\n\n> Creating template database in /home/peter/postgres-cur/data/base/template1\n> -boot: invalid option -- x\n\nI think you were already invoking the wrong version of postgres here;\nthe current sources do support -x (although bootstrap.c's usage()\nneglects to list it). If it wasn't listed in your PATH ahead of the\n6.5 version, that's your problem right there.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Nov 1999 14:39:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Development installation fails "
},
{
"msg_contents": "On 1999-11-24, Tom Lane mentioned:\n\n> > ~/postgres-cur/bin$ ./initdb --pglib=/home/peter/postgres-cur/lib \\\n> > --pgdata=/home/peter/postgres-cur/data\n> \n> You may need to have env variables PGDATA and PGLIB set during initdb\n> (at least, the install docs recommended that last I looked) and you\n> definitely need to have /home/peter/postgres-cur/bin in your PATH.\n\nOkay, thanks for the tip. I thought about this for a while and came to the\nconclusion, that this is not only the most likely reason for the oft\ncriticized installation (non-)simplicity, it is also cumbersome and not\nacceptable to have to set environment variables (especially PATH) during\ninstallation. This would also mean that I would have to yank the path away\nfrom my production installation.\n\nSo I thunk that initdb could very well find out itself where it is located\nand then call ${mydir}/postgres explicitly. That solves the path problem. \nSecondly, it could also make educated guesses where the PGLIB is at\n(namely ${mydir}/../lib, or ${mydir}/../lib/pgsql as in the RPMs). That\nsolves the other problem.\n\nI included a patch that does exactly that, so now I can do:\n$ ./configure --prefix=/any/path/here\n$ make\n$ make install\n$ /any/path/here/bin/initdb -D /some/other/path #(*)\n$ /any/path/here/bin/postmaster -D /some/other/path\nand I'm up and running. (No PG* env var or special PATH is set.)\n\nI'm not kidding, this is exactly what I did (well, different path names)\nand I'm in business. Okay, there are some other glitches with make install\nand initdb being very reluctant to creating directories themselves, but\nthat could be fixed in another round of changes.\n\nSo please examine that patch. It was a very quick hack, so it's not very\nrefined. Let me know if this sounds good, then I'll put the finishing\ntouches on it.\n\n\t-Peter\n\n\n(*) - Notice that I changed the option to -D (formerly -r). 
This goes more\nnicely with the equivalent postmaster option, and also with \"_D_at's where\nI want my data to be stored\" :)\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden",
"msg_date": "Sun, 28 Nov 1999 16:15:09 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Development installation fails "
},
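The "educated guess" for PGLIB that Peter describes above can be written down concretely. A rough sketch under stated assumptions: the helper name guess_pglib is invented, and the two layouts probed (../lib, and the RPM-style ../lib/pgsql) are the ones named in the message.

```shell
# Sketch: derive a PGLIB default from the directory initdb lives in,
# preferring the RPM layout, then the plain source-install layout.
guess_pglib () {
    # $1 = the bin directory the script was invoked from
    if [ -d "$1/../lib/pgsql" ]; then      # RPM-style layout
        echo "$1/../lib/pgsql"
    elif [ -d "$1/../lib" ]; then          # default source-install layout
        echo "$1/../lib"
    else
        return 1                           # caller falls back to --pglib
    fi
}
```

With something like this in place, `./configure --prefix=... && make install` followed by a bare `initdb -D ...` needs no PGLIB in the environment, which is exactly the simplification being argued for.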
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 1999-11-24, Tom Lane mentioned:\n>> You may need to have env variables PGDATA and PGLIB set during initdb\n>> (at least, the install docs recommended that last I looked) and you\n>> definitely need to have /home/peter/postgres-cur/bin in your PATH.\n\n> Okay, thanks for the tip. I thought about this for a while and came to the\n> conclusion, that this is not only the most likely reason for the oft\n> criticized installation (non-)simplicity, it is also cumbersome and not\n> acceptable to have to set environment variables (especially PATH) during\n> installation.\n\nI never much cared for it either. If you can get rid of it, great!\n\n> So I thunk that initdb could very well find out itself where it is located\n> and then call ${mydir}/postgres explicitly. That solves the path problem. \n> Secondly, it could also make educated guesses where the PGLIB is at\n> (namely ${mydir}/../lib, or ${mydir}/../lib/pgsql as in the RPMs).\n\nReasonable, though you should provide a command line switch to override\nthe guess about PGLIB. Possibly initdb should emit a message like\n\"Assuming --pglib=/foo/bar/baz\" if it's not given a switch.\n\n> So please examine that patch. It was a very quick hack, so it's not very\n> refined. Let me know if this sounds good, then I'll put the finishing\n> touches on it.\n\nI'm not convinced your \"which $0\" implementation for finding BINDIR is\nportable/reliable. On my system, man which says that it determines\naliases and path by reading ~/.cshrc, which has got obvious problems if\nI'm not a csh user. My references say that \"which\" isn't available on\nall systems anyway. 
It'd probably be a good idea to verify that\n$BINDIR/postgres exists after you think you have the value.\n\nBTW, seems like doing PATH=$BINDIR:$PATH in the script would be a lot\neasier than hacking all the references to programs --- and it covers\nthe possibility that one of the invoked programs tries to call another.\n\nThe other bit of environment state that initdb currently depends on is\nUSER. This is a big risk factor IMHO, since it won't be right if you\nare su'd to the postgres account. Can you add code to verify that it\nis set (or set it if not) and that it matches the actual ownership of\nthe process?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 28 Nov 1999 12:44:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Development installation fails "
},
{
"msg_contents": "On Sun, 28 Nov 1999, Tom Lane wrote:\n\n> I'm not convinced your \"which $0\" implementation for finding BINDIR is\n> portable/reliable. On my system, man which says that it determines\n> aliases and path by reading ~/.cshrc, which has got obvious problems if\n> I'm not a csh user. My references say that \"which\" isn't available on\n> all systems anyway. It'd probably be a good idea to verify that\n> $BINDIR/postgres exists after you think you have the value.\n\nI did a little \"which\" survey, it seems you're right. On GNU/Linux systems\n\"which\" is a binary which does the seemingly right thing. Under bash\n\"which\" is also often aliased to \"type -path\". That led me to believe that\nthe sh built-in \"type\" might do, but its output format is rather\nunpredicable. On FreeBSD \"which\" is a Perl script, which seems to work\nfine.\n\nOn Solaris and SGI the problems you pointed out seem to be present, as\n\"which\" is actually implemented as a csh script. However, on the\nparticular systems I surveyed, the sysadmins were smart enough to provide\nworking versions (either the GNU or the FreeBSD variant).\n\nTo make a long story short, using the implementation I suggested with the\nchecks you suggested would probably benefit 90% of our users.\n\n> BTW, seems like doing PATH=$BINDIR:$PATH in the script would be a lot\n> easier than hacking all the references to programs --- and it covers\n> the possibility that one of the invoked programs tries to call another.\n\nI'm really hesitant to changing the path, even if only temporarily. Will\nponder.\n\n> The other bit of environment state that initdb currently depends on is\n> USER. This is a big risk factor IMHO, since it won't be right if you\n> are su'd to the postgres account. Can you add code to verify that it\n> is set (or set it if not) and that it matches the actual ownership of\n> the process?\n\nYes, I noticed that too. Again, I really don't think that the script\nshould set USER. 
If the user chose to unset it, they might have had a\nreason for it. The same happened in the createdb, etc. scripts and in\nthose cases there wasn't even a point for knowing the user at all, AFAI\nrecall. \n\nSeems like cleaning out the logic of initdb in general is a good idea, so\nI'll work on that.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 29 Nov 1999 15:54:04 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Development installation fails "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On Sun, 28 Nov 1999, Tom Lane wrote:\n>> I'm not convinced your \"which $0\" implementation for finding BINDIR is\n>> portable/reliable.\n\n> To make a long story short, using the implementation I suggested with the\n> checks you suggested would probably benefit 90% of our users.\n\nAnd fail entirely for the other 10%? Not good enough if so :-( ... the\nidea is to make install easier not harder. How much code would it take\nto emulate as much of \"which\" as we need, do you think? What's our\nfallback position if it doesn't work?\n\n>> The other bit of environment state that initdb currently depends on is\n>> USER.\n\n> Yes, I noticed that too. Again, I really don't think that the script\n> should set USER.\n\nAfter thinking about it for a while, I think that there shouldn't be any\ndependency on USER, period. initdb (and anything else that cares) ought\nto get the name of the user they are executing as, and use that. I\ncan't see any good reason why the name inserted into the databases\nshould be potentially different from the ownership of the files.\n\nIs 'whoami' a portable way of getting the current user id, or not?\nThe only reference I have about portable shell programming says that\nthe POSIX-approved command for this is 'id -u -n', and that 'whoami'\nis a BSD-ism. I've got doubts that either one is universal ... we might\nhave to try both. Grumble.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Nov 1999 10:44:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Development installation fails "
},
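The "try both" approach Tom grumbles about above is short enough to write out. A minimal sketch, assuming only that one of the two spellings works on the target system; the variable name EffectiveUser is made up here for illustration.

```shell
# Determine the name of the effective user without trusting $USER:
# POSIX spells it `id -u -n`, while older BSDs only ship `whoami`.
EffectiveUser=`id -n -u 2>/dev/null || whoami 2>/dev/null`
if [ -z "$EffectiveUser" ]; then
    echo "initdb: cannot determine the name of the current user" >&2
    exit 1
fi
```

Because the value comes from the effective uid rather than the environment, it stays correct even after `su postgres`, which is precisely the failure mode described above.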
{
"msg_contents": "On 1999-11-29, Tom Lane mentioned:\n\n> > To make a long story short, using the implementation I suggested with the\n> > checks you suggested would probably benefit 90% of our users.\n> \n> And fail entirely for the other 10%? Not good enough if so :-( ... the\n> idea is to make install easier not harder. How much code would it take\n> to emulate as much of \"which\" as we need, do you think? What's our\n> fallback position if it doesn't work?\n\nOkay, now you're putting words into my mouth :) The fallback would be not\nneeding \"which\" because initdb is called with an explicit path (which is\nfairly likely during installation, unless you adjust your path ahead of\ntime) and also behaviour as it is now (always a good idea).\n\nTo reproduce \"which\" one would need to do some sort of split on PATH,\nwhich would probably require awk and we're trying to avoid that. (And I\ndon't know awk, but that's just an excuse, no reason.)\n\n> After thinking about it for a while, I think that there shouldn't be any\n> dependency on USER, period. initdb (and anything else that cares) ought\n> to get the name of the user they are executing as, and use that. I\n> can't see any good reason why the name inserted into the databases\n> should be potentially different from the ownership of the files.\n\nOne potential goal (which I personally share) of simplifying the\ninstallation process would be to not have to su as postgres but do\neverything as root or a user of your choice. Together with Vince's idea of\nadding -o and -g options to the install command and a similar option to\ninitdb, we can do that and it would not be hard to understand from an end\nuser's point of view. What I don't like is that certain scripts don't find\nthe USER variable set and then set it themselves. 
The psql wrapper scripts\ndo that, but I'm cleaning that up.\n\n> Is 'whoami' a portable way of getting the current user id, or not?\n> The only reference I have about portable shell programming says that\n> the POSIX-approved command for this is 'id -u -n', and that 'whoami'\n> is a BSD-ism. I've got doubts that either one is universal ... we might\n> have to try both. Grumble.\n\n(The psql wrapper scripts use \"whoami\".)\n\nOnce you start wondering in this direction you might soon start noticing\nthat chances are that shell scripts cannot reliably use *any* external\nprogram. The exceptions might be the handful listed in the GNU makefile\nstandard, because as we are using autoconf we could assume that they are\npresent.\n\nWe have already taken one step into the direction of providing our own sh\nutils collection by way of pg_id (which is also a subset of what \"id\"\ndoes). We could extend that and provide pg_which and whatever else we need\n(of course pg_which will create a chicken and egg problem). But that could\nget out of hand and won't help developers either.\n\nOne thing I am missing from this project that could really help is a goal\nof what sort of operating system it is we are trying to support. In the\nbeginning the answer was clearly \"BSD\". Nowadays I believe everyone would\nagree that it should be POSIX. In practice it is probably safer to say\nPOSIX to the extend to which it is supported by the most popular operating\nsystems.\n\nThat doesn't mean others should be locked out, but then *they* should have\nto make the adjustments. That means if all reasonable systems have \"id\"\nthen the others will have to get a substitute. This also applies to C\nsource code: If all reasonable systems have CONSTANT defined, then others\nwill have to define it themselves (possibly assisted via autoconf), but\nyou should not expect your developers to remember every single constant\nyou invented yourself because of some obscure conflict. 
(Yes, I'm also\nalluding to PATH_MAX, but it goes further to C functions etc. Recall the\npostmaster SSL switch.) But most importantly of all you should not reject\nuseful extensions because of strange incompatibilities.\n\nWe can easily steal the relevant working utilities from some *BSD* code\nand provide them as an extra tar ball. \"To use PostgreSQL, you need\ncertain basic 'shell utilility' programs, which you do not seem to have\ninstalled. Please grab the package pgsql-shutils.tar.gz from the\nPostgreSQL ftp server. You'll be eternally grateful.\" Or even include them\nin the main tarball, if they aren't too large.\n\nI think the best we could possibly do at this point is to do a survey of\noperating systems we support and which are mostly used and see what's\nreally on there and what POSIX says and then use that as a frame of\nreference. Those OS' should probably include *BSD, BSDI, Linux (various\ndistros), Solaris, SGI, HPUX, Cygwin, (shoot me if I left yours off). I\nthink the developer circle does have access to all of those. If not, then\nit's potentially pointless to *try* to account for them, if we really\ncan't anyway. This whole portability discussion needs to have a reference.\nWe can't just always say \"I think it's not portable.\" We need to have a\nway to look it up.\n\n\nHaving said that, I'll continue to mess around with initdb on the\nassumption that nothing works and we'll see what comes out of it. Thanks\nfor reading all the way down to here anyway.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Tue, 30 Nov 1999 01:05:00 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Portability (was Re: [HACKERS] Development installation fails)"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> To reproduce \"which\" one would need to do some sort of split on PATH,\n> which would probably require awk and we're trying to avoid that.\n\nNo, that part is easy:\n\nfor dir in `echo $PATH | sed 's/:/ /g'` ; do\n if [ -x $dir/initdb ]; then\n ...\n\nI think the harder part is manipulating $0 to see if it contains a path\npart or not --- not all systems have dirname, so you need to fake it\nwith a sed pattern. See the configure script for the standard way.\n(BTW, I see the new run_check.sh driver depends on dirname ... boo hiss.)\n\n> One potential goal (which I personally share) of simplifying the\n> installation process would be to not have to su as postgres but do\n> everything as root or a user of your choice. Together with Vince's idea of\n> adding -o and -g options to the install command and a similar option to\n> initdb, we can do that and it would not be hard to understand from an end\n> user's point of view.\n\nHuh? The user of your choice *is* postgres, or whoever you are su'd as.\n-o and -g are useless unless you are executing the install as root,\nwhich really isn't necessary --- in fact I think we ought to discourage\nit to prevent people from accidentally installing the postgres files\nwith root ownership. (initdb ought to refuse to run at all as root...)\n\n> What I don't like is that certain scripts don't find\n> the USER variable set and then set it themselves. The psql wrapper scripts\n> do that, but I'm cleaning that up.\n\nWell, if you don't like setting USER I don't have a problem with using\na different variable name instead. My point is that there is no good\nreason to allow the value to be different from the actual effective UID,\nso we should be working from that and not environment variables at all.\n\n> One thing I am missing from this project that could really help is a goal\n> of what sort of operating system it is we are trying to support. 
In the\n> beginning the answer was clearly \"BSD\". Nowadays I believe everyone would\n> agree that it should be POSIX.\n\nPOSIX, BSD, and anything else that anyone in the developer + beta tester\npopulation is willing to test/support. In practice, as long as we are\npretty conservative about what we assume is in POSIX or BSD (eg, use\nPOSIX.1 but not POSIX.2 stuff), I don't think it's a big deal.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 30 Nov 1999 11:18:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Portability (was Re: [HACKERS] Development installation fails) "
},
{
"msg_contents": "On Tue, 30 Nov 1999, Tom Lane wrote:\n\n> No, that part is easy:\n> \n> for dir in `echo $PATH | sed 's/:/ /g'` ; do\n> if [ -x $dir/initdb ]; then\n> ...\n\nCool.\n\n> I think the harder part is manipulating $0 to see if it contains a path\n> part or not --- not all systems have dirname, so you need to fake it\n> with a sed pattern. See the configure script for the standard way.\n> (BTW, I see the new run_check.sh driver depends on dirname ... boo hiss.)\n\nYou see, it just gets worse and worse as you start digging ... :(\nNo, seriously: *which* system doesn't have dirname? Does it have basename?\n\n> Huh? The user of your choice *is* postgres, or whoever you are su'd as.\n> -o and -g are useless unless you are executing the install as root,\n> which really isn't necessary --- in fact I think we ought to discourage\n> it to prevent people from accidentally installing the postgres files\n> with root ownership. (initdb ought to refuse to run at all as root...)\n\nThere is a very good reason for running the installation process as any\nuser you choose, because the usual sequence\n\n./configure\nmake\nmake install\n\nwill fail to create the installation directory (/usr/local/pgsql). That\nblows. Then you have to su as root and create it and then go back and\ncontinue. Then you notice you made the directory but forgot to change the\nownership to postgres. Next time you forget that altogether and then all\nyour files are owned by root. This is exactly what makes the INSTALL so\nlong. I have always resorted to installing everything as root and then\ndoing a chown --recursive (talk about non-portable ;) ), that was much\nmore convenient.\n\nI'm not sure about the terminally convenient and flexible way yet either,\nthough. But at least we're agreed that all this environment variable stuff\nneeds to go away.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 30 Nov 1999 17:44:39 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Portability (was Re: [HACKERS] Development installation fails)"
},
{
"msg_contents": "On Tue, 30 Nov 1999, Tom Lane wrote:\n\n> Peter Eisentraut <[email protected]> writes:\n> > One potential goal (which I personally share) of simplifying the\n> > installation process would be to not have to su as postgres but do\n> > everything as root or a user of your choice. Together with Vince's idea of\n> > adding -o and -g options to the install command and a similar option to\n> > initdb, we can do that and it would not be hard to understand from an end\n> > user's point of view.\n> \n> Huh? The user of your choice *is* postgres, or whoever you are su'd as.\n> -o and -g are useless unless you are executing the install as root,\n> which really isn't necessary --- in fact I think we ought to discourage\n> it to prevent people from accidentally installing the postgres files\n> with root ownership. (initdb ought to refuse to run at all as root...)\n\nPerhaps the user of choice for running PostgreSQL is postgres, but that's\nnot necessarily the same choice for the installing user. If you happen\nto install it as postgres then the -o and -g will have no effect on you,\nbut if root is installing it then you want the -o and -g in there. No?\nI do agree, tho, that initdb should only be able to run as the database\nsuperuser as stated at configuration time.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 30 Nov 1999 11:49:27 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Portability (was Re: [HACKERS] Development installation fails)"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> There is a very good reason for running the installation process as any\n> user you choose,\n\nNot \"as any user you choose\", but as *root* --- the\ncan't-make-top-directory problem can only be solved by root, and on many\nsystems only root can chown files to another account anyway.\n\nThe difficulty with encouraging people to su to root for install is that\nit's so easy to make the files root-owned and thereby create a security\nproblem. Perhaps the right compromise is to add a --owner switch to\n\"make install\", and to have it refuse to install if the (given or\ndefaulted) ownership is \"root\" ?\n\nActually, something I'm not real clear on right at the moment is whether\nit is safe to make the dirs/files created by install be root-owned or not.\nThe ownership of the data directory and everything below it must be\npostgres (s/postgres/your unprivileged user of choice/g) because the\nrunning postgres process must be able to write in those dirs. But\noffhand I can't think of any reason that any postgres-owned processes\nneed to be able to write in the bin, lib, or include hierarchies. Can\nanyone else think of one?\n\nMaybe the proper approach is \"OK, do make install as root if you feel\nlike it; we don't care whether that stuff is root-owned\". Only initdb\nwould need to have a --owner switch and take care that the files/dirs\nit makes are not left root-owned. Then the install sequence could\nlook like\n\n./configure\nmake\nsu root\nmake install\ninitdb --owner=postgres --pgdata=whatever\nexit from su\nstart postmaster\n\nBTW, do we have a check in the postmaster to refuse to start if its euid\nis root? Shouldn't we?\n\n\t\t\tregards, tom lane\n\nPS:\n\n> No, seriously: *which* system doesn't have dirname? Does it have basename?\n\nHorton says that dirname was originally SysV and was standardized in\nPOSIX.1. 
I'd expect it to be present on most systems these days, but\nprobably there are still some old BSD machines that ain't got it. He\nrates basename as fully portable though.\n\nBasically my take on this stuff is to conform to existing GNU portability\npractices except where we have very good reason to do otherwise. It's\nnot that hard to use the GNU workaround for dirname, so why not do it...\n",
"msg_date": "Tue, 30 Nov 1999 12:33:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Ownership/protection (was Re: [HACKERS] Portability)"
},
{
"msg_contents": "On Tue, 30 Nov 1999, Tom Lane wrote:\n\n> ./configure\n> make\n> su root\n> make install\n> initdb --owner=postgres --pgdata=whatever\n> exit from su\n> start postmaster\n\n./configure --with-pguser=postgres\nmake\nsudo make install\n\nis what I've been pushing for. That way when you do the installation\nit'll happen something like\n\n/usr/bin/install -c -d -g postgres -o postgres FILE-TO-BE-INSTALLED\n\nthe -c copies, the -d creates the missing directories, different install\nprograms do things differently. The -g and -o come from the configure\nswitch --with-pguser and maybe even --with-pggroup. The defaults should\nbe postgres for both. initdb can then have those values built in, yet\noverridable.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 30 Nov 1999 12:55:21 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ownership/protection (was Re: [HACKERS] Portability)"
},
{
"msg_contents": "On Tue, 30 Nov 1999, Tom Lane wrote:\n\n> The difficulty with encouraging people to su to root for install is that\n> it's so easy to make the files root-owned and thereby create a security\n> problem. Perhaps the right compromise is to add a --owner switch to\n> \"make install\", and to have it refuse to install if the (given or\n> defaulted) ownership is \"root\" ?\n\nSee Vince's email about the configure switch to be used in install. That\nis what I was shooting for. I am not sure to what extend initdb should use\nthose settings (recall: autoconf is not for configuring run time stuff)\nbut if you *insist* on running initdb as root (too lazy to su, forgot to,\netc.) there should be an option, as there is now.\n\n> offhand I can't think of any reason that any postgres-owned processes\n> need to be able to write in the bin, lib, or include hierarchies. Can\n> anyone else think of one?\n\nThey better not write there. That would certainly be a major bug.\n\n> BTW, do we have a check in the postmaster to refuse to start if its euid\n> is root? Shouldn't we?\n\nThere is a check and it refuses to start.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 30 Nov 1999 20:36:01 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ownership/protection (was Re: [HACKERS] Portability)"
}
] |
[
{
"msg_contents": "Hi all,\n\nI'm getting an error in run_check.sh on Solaris.\n\n\nMULTIBYTE=;export MULTIBYTE; \\\n/bin/sh ./run_check.sh solaris_sparc\nawk: syntax error near line 1\nawk: illegal statement near line 1\nawk: syntax error near line 2\nawk: illegal statement near line 2\n=============== Create ./tmp_check directory ================\n.\n.\n\n\nThis is due to 2 problems.\n\n1) The awk script is broken over 2 lines.\n2) Solaris's awk does not seem to understand REs in split(). (nawk's OK)\n\n1 is easy to fix, 2 is tricky - to remain portable.\n\nKeith.\n\n\n",
"msg_date": "Wed, 24 Nov 1999 23:37:43 +0000 (GMT)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "run_check problem"
},
{
"msg_contents": ">\n> Hi all,\n>\n> I'm getting an error in run_check.sh on Solaris.\n>\n>\n> MULTIBYTE=;export MULTIBYTE; \\\n> /bin/sh ./run_check.sh solaris_sparc\n> awk: syntax error near line 1\n> .\n> .\n>\n> 1) The awk script is broken over 2 lines.\n> 2) Solaris's awk does not seem to understand REs in split(). (nawk's OK)\n\n Argh - now I remember why we tried to avoid awk as much as\n possible. Why do the sun guy's ship such a crippled version\n for a good old UNIX V7 tool? I rememver that MINIX already\n had a better one! DAMNED BLOODY MOTHERF...\n\n And I discovered another problem with it too. The path\n replacements in various scripts (especially those loading\n shared objects at runtime) need to use the ones from the temp\n installation. But that's surely hackable.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 25 Nov 1999 02:47:09 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] run_check problem"
},
{
"msg_contents": "Keith Parks <[email protected]> writes:\n> This is due to 2 problems.\n> 1) The awk script is broken over 2 lines.\n> 2) Solaris's awk does not seem to understand REs in split(). (nawk's OK)\n\nI don't think so --- the exact same split() construct is there in the\nold regress.sh test driver, same as it ever was. I can believe the line\nbreak is a problem, but if there's another problem then you need to\nkeep digging.\n\nComparing the two scripts, I wonder if Jan broke it by adding '/bin/sh'\nto the invocation of config.guess. Doesn't seem very likely, but...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Nov 1999 00:43:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] run_check problem "
}
] |
[
{
"msg_contents": "Hi,\n\n I'm hacking in the operator functions for the lztext type and\n have a little question.\n\n With the generic per-byte decompressor I added, it would be\n very easy to produce functions like\n\n bool lztext_text_eq(lztext, text)\n bool text_lztext_eq(text, lztext)\n\n too. Comparision between lztext and text does already work\n because there are lztext->text and vice versa functions\n available and the parser automatically typecasts.\n\n So would it be a win or a dead end street to provide those\n functions? Does it look for a direct comparision function\n allways first? Then it would be, because it would never\n choose to compress the text item and then compare two\n lztext's (what would be terrible).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 25 Nov 1999 01:09:53 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "lztext and parser"
},
{
"msg_contents": "\nOn Thu, 25 Nov 1999, Jan Wieck wrote:\n\n> Hi,\n> \n> I'm hacking in the operator functions for the lztext type and\n> have a little question.\n\nHi,\n\nI see your the lztext/pg_compress in CVS and it is *very* nice, \nand I have a little comment. \n\n> With the generic per-byte decompressor I added, it would be\n> very easy to produce functions like\n\nIt's very good if compression routines support data stream. Suppose\nyou add the pre-byte compressor? \n\nReasoning: What is fastly - (1) a standard data I/O reading or \n (2) a compressed data I/O reading and decompress it\n (fast CPU vs. slowly I/O)?\n \n \t IMHO if (2) is fastly is prabably good for something\n data use second way. But I don't know where use this\n way in PgSQL :-)\n\n\n> too. Comparision between lztext and text does already work\n> because there are lztext->text and vice versa functions\n> available and the parser automatically typecasts.\n> \n> So would it be a win or a dead end street to provide those\n> functions? Does it look for a direct comparision function\n> allways first? Then it would be, because it would never\n> choose to compress the text item and then compare two\n> lztext's (what would be terrible).\n> \n\nIn the lztext_eq(lztext *lz1, lztext *lz2) you use lztext_cmp().\nI'm not sure if it is good, because you must decompress fistly, but \nyou don't need information about '<' or '>', you need '=' only. Why \nyou not use memcmp() (or other method) for this, and comparate this \nwithout decompression? Two equal string is equal in a compressed form, \nor not? 
\n\nIMHO in some routines (datetime) in PgSQL I saw internal cache, what \nadd this to the lztext_cmp(), and not decompress all in each time?\n\nAll in this letter is comments and suggestions, you can always remove \nthis letter to /dev/null :-)\n\n\t\t\t\t\t\t\tKarel\n\n------------------------------------------------------------------------------\nKarel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nDocs: http://docs.linux.cz (big docs archive)\t\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n------------------------------------------------------------------------------\n\n",
"msg_date": "Thu, 25 Nov 1999 13:27:22 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] lztext and parser"
},
{
"msg_contents": "Karel wrote:\n\n> In the lztext_eq(lztext *lz1, lztext *lz2) you use lztext_cmp().\n> I'm not sure if it is good, because you must decompress fistly, but\n> you don't need information about '<' or '>', you need '=' only. Why\n> you not use memcmp() (or other method) for this, and comparate this\n> without decompression? Two equal string is equal in a compressed form,\n> or not?\n\n For now, yes. But the current lztext implementation is only a\n starting poing (still). The final version might have some\n runtime customizable parameters for the compression algorithm\n (good match size and minimum compression rate). Then, if some\n data is stored and the parameters change after, this wouldn't\n be true any more.\n\n> All in this letter is comments and suggestions, you can always remove\n> this letter to /dev/null :-)\n\n Would never do so. All suggestions are welcome and might\n trigger another idea in someone elses head.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 25 Nov 1999 13:45:50 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] lztext and parser"
}
] |
[
{
"msg_contents": "\nWelcom.\n\nI have that message every time when I do vacuum;\n\nvacuum;\nNOTICE: Index tb_klienci_id_klienci_key: NUMBER OF INDEX' TUPLES (8776)\nIS NOT\nTHE SAME AS HEAP' (8774)\nNOTICE: Index tb_klienci_id_klienci_key: NUMBER OF INDEX' TUPLES (8776)\nIS NOT\nTHE SAME AS HEAP' (8774)\nVACUUM\n\nI dont know it is very bad message if that,\nI have a problem could you help me.\n\n\nSorry for may english\n",
"msg_date": "Thu, 25 Nov 1999 09:26:50 +0100",
"msg_from": "Grzegorz =?iso-8859-2?Q?Prze=BCdziecki?=\n\t<[email protected]>",
"msg_from_op": true,
"msg_subject": "Vacuum"
},
{
"msg_contents": "Grzegorz =?iso-8859-2?Q?Prze=BCdziecki?= <[email protected]> writes:\n> vacuum;\n> NOTICE: Index tb_klienci_id_klienci_key: NUMBER OF INDEX' TUPLES (8776)\n> IS NOT THE SAME AS HEAP' (8774)\n\nThere are some bugs here that we are looking into fixing for 7.0.\nIn the meantime, you should be able to get rid of the message by\ndropping and recreating that index.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Nov 1999 10:17:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Vacuum "
},
{
"msg_contents": "> Grzegorz =?iso-8859-2?Q?Prze=BCdziecki?= <[email protected]> writes:\n> > vacuum;\n> > NOTICE: Index tb_klienci_id_klienci_key: NUMBER OF INDEX' TUPLES (8776)\n> > IS NOT THE SAME AS HEAP' (8774)\n> \n> There are some bugs here that we are looking into fixing for 7.0.\n> In the meantime, you should be able to get rid of the message by\n> dropping and recreating that index.\n> \n\nYes, and 7.0 message will suggest exactly that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 Nov 1999 13:36:09 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Vacuum"
},
{
"msg_contents": "\nThanks\n\n\nTom Lane wrote:\n> \n> Grzegorz =?iso-8859-2?Q?Prze=BCdziecki?= <[email protected]> writes:\n> > vacuum;\n> > NOTICE: Index tb_klienci_id_klienci_key: NUMBER OF INDEX' TUPLES (8776)\n> > IS NOT THE SAME AS HEAP' (8774)\n> \n> There are some bugs here that we are looking into fixing for 7.0.\n> In the meantime, you should be able to get rid of the message by\n> dropping and recreating that index.\n> \n> regards, tom lane\n> \n> ************\n\n-- \n\n/*****************************/\nvoid main(void)\n{\n Co to zrobic();\n zeby dobrze zrobic();\n i sie nie narobic();\n return(TRUE);\n}\n/*****************************/\n",
"msg_date": "Fri, 26 Nov 1999 15:30:03 +0100",
"msg_from": "Grzegorz =?iso-8859-2?Q?Prze=BCdziecki?=\n\t<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Vacuum"
}
] |
[
{
"msg_contents": "On Wed, 24 Nov 1999, Tom Lane wrote:\n\n> Peter Eisentraut <[email protected]> writes:\n> > This patch removes the ';' (logarithm) and ':' (exponentiation) operators,\n> > as was generally agreed upon.\n> \n> This is a tad premature IMHO. In the first place, we haven't got the\n> replacement functions --- at least not with user-friendly names.\n> In the second place, I think we oughta deprecate these things for a\n> release or two before we actually remove 'em.\n\nDeprecation is not going to work in this case because those two operators\ninterfere with their blessed SQL meaning. And now is the time to remove\nthem. The reason I wanted them removed now is that it is a pain to account\nfor them, or even disambiguate them, in psql. I guess for now I will no\nlonger bother with them.\n\nAt the risk of taking on more work and/or provoking a holy war, I think\nthe mathematical operators and function names need some cleaning up in\ngeneral. From the previous conversation on this topic I gathered that a\nlot of these things are from PostQUEL times. Also, there is some confusion\nbetween float and numeric operators and functions and sometimes they only\nwork on one, etc.\n\nI don't have the SQL standards around here, but they should be the\nreference, so if someone could fill me in. Barring anything that's in\nthere, I think that there should be a standard set of functions, such as\nthis:\nexp()\nlog()\npow()\nsin(), cos(), ...\nabs()\nsgn()\nfactorial()\nsqrt()\nsurd()\nfloor()\nceil()\ntrunc()\nround()\nas well as anything else that's easily thrown in because it's already in\nmath.h.\n\nAll of these functions should be overloaded for float4, float8, and\nnumeric, as well as int* where appropriate. Some might argue that things\nsuch as sin() or exp() do not make sense for numeric and you should cast\nit to float. I'm not sure myself.\n\nOperators should only be assigned if they are in the standard and/or they\nmake sense to a math person. 
To me (being a math person), this would\nparticularly disqualify these operators:\n!! -- factorial left operator\n% -- truncation left operator (as opposed to % modulo)\n: -- exponentiation\n; -- logarithm\n@ -- absolute value\n\n(Tom speculated that someone might have had some fun writing the parser\nand hence threw in these things.)\n\nRationale:\n* Compatibility: break it now or be stuck with it forever\n* If anyone actually used the above operators in those meanings, I will\npersonally congratulate them.\n* Someone will have to do it eventually.\n\nThis is not something I could do tomorrow anyway, so we can have an\nextended discussion. I'm looking forward to it ...\n\n> BTW: a patch that removes user-visible features and breaks regress\n> tests is incomplete without doc and test updates...\n\nWhen will I ever learn ...\nSorry.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 25 Nov 1999 14:08:41 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] ':' and ';' operators "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n>>>> This patch removes the ';' (logarithm) and ':' (exponentiation) operators,\n>>>> as was generally agreed upon.\n>> \n>> This is a tad premature IMHO. In the first place, we haven't got the\n>> replacement functions --- at least not with user-friendly names.\n>> In the second place, I think we oughta deprecate these things for a\n>> release or two before we actually remove 'em.\n\n> Deprecation is not going to work in this case because those two operators\n> interfere with their blessed SQL meaning. And now is the time to remove\n> them. The reason I wanted them removed now is that it is a pain to account\n> for them, or even disambiguate them, in psql.\n\nIt's always been true (or at least been true for a long time) that ';'\nused as an operator has to appear inside parens, like \"SELECT (;2)\",\nto keep psql from thinking that it's a statement terminator. Can't you\nmaintain that behavior?\n\n':' is a little trickier --- if you want to use it to reference\npsql-provided variables, then there's probably no way you can support it\nas an SQL operator too. Of course ecpg must have the same problem.\n\n> At the risk of taking on more work and/or provoking a holy war, I think\n> the mathematical operators and function names need some cleaning up in\n> general. From the previous conversation on this topic I gathered that a\n> lot of these things are from PostQUEL times. Also, there is some confusion\n> between float and numeric operators and functions and sometimes they only\n> work on one, etc.\n\nI think most of the confusion in this area comes from the fact that\nwhile operators have always (?) been overloadable across types, up till\n6.5 built-in functions had to have the same names in SQL as in the C\ncode, and therefore the function names for different data types had to\nbe different. The only way to overload a function name was to make an\nugly (and slow) SQL function wrapper. 
So there was a strong notational\nadvantage to using operator rather than function syntax for anything\nthat was useful for more than one datatype. That problem's gone now.\n\nI'm not in favor of removing the operators, except maybe for ';' and ':',\nbut I agree it'd be a fine idea to provide standardized function names\nacross all the numeric data types, and to make sure that the operators\nthat are there work on NUMERIC too.\n\n> I don't have the SQL standards around here, but they should be the\n> reference,\n\nSQL doesn't address math functions at all, AFAICS. Given that lack,\nthe names used in C's <math.h> look OK to me, with the exception that\nI'd rather see abs() than fabs().\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Nov 1999 10:47:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] ':' and ';' operators "
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> All of these functions should be overloaded for float4, float8, and\n> numeric, as well as int* where appropriate. Some might argue that things\n> such as sin() or exp() do not make sense for numeric and you should cast\n> it to float. I'm not sure myself.\n\n They make sense for numeric, because you can get the sine of\n a value with hundreds of SIGNIFICANT digits. Would be damned\n slow, but if needed...\n\n NUMERIC has a higher precision than float8. The problem\n arising at this point is to ensure that an expression with\n mixed attribute types (numeric, floatN and/or integer) must\n allways cast anything to numeric if at least one of the\n arguments is.\n\n The trigonometric functions are missing for numeric up to\n now, but there are Taylor and McLaurin definitions available.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 25 Nov 1999 17:02:57 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] ':' and ';' operators"
},
{
"msg_contents": "Tom Lane wrote:\n\n>\n> > I don't have the SQL standards around here, but they should be the\n> > reference,\n>\n> SQL doesn't address math functions at all, AFAICS. Given that lack,\n> the names used in C's <math.h> look OK to me, with the exception that\n> I'd rather see abs() than fabs().\n>\n> regards, tom lane\n\nFor what its worth (I'll probably get brow-beaten for even mentioning this :-) ),\nthe ODBC 2.0 specification allows clients to test ODBC data sources to determine\nif the data source has implemented the following:\n\nABS(numeric),\nACOS(float),\nASIN(float),\nATAN(float),\nATAN2(float1, float2),\nCEILING(numeric),\nCOS(float),\nCOT(float),\nEXP(float),\nFLOOR(numeric),\nLOG(float),\nMOD(integer1, integer2),\nSIGN(numeric),\nSIN(float),\nSQRT(float),\nTAN(float),\nPI(),\nRAND([integer]),\nDEGREES(numeric),\nLOG10(float),\nPOWER(numeric, integer),\nRADIANS(numeric),\nROUND(numeric, integer),\nTRUNCATE(numeric, integer)\n\nAnways, thats the list for ODBC 2.0 -- I'm not sure what ODBC 3.0 has....\n\nMike\n\n\n\n\n\n",
"msg_date": "Thu, 25 Nov 1999 12:12:44 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] ':' and ';' operators"
},
{
"msg_contents": "\nWe have some major initdb breakage from recent initdb patches. I am\ngoing to try and clean it up as best I can, but we will need Peter\ninvolved in fixing this in a portable way.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 17 Dec 1999 21:42:24 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "initdb messed up"
},
{
"msg_contents": "> \n> We have some major initdb breakage from recent initdb patches. I am\n> going to try and clean it up as best I can, but we will need Peter\n> involved in fixing this in a portable way.\n\nWith my cleanups, I now get. At least it runs now:\n\n---------------------------------------------------------------------------\n\n\nThis database system will be initialized with username \"postgres\".\nThis user will own all the data files and must also own the server process.\n\ntrap: Illegal number: SIGINT\nCreating database system directory /u/pg/data\nCreating database system directory /u/pg/data/base\nCreating database XLOG directory /u/pg/data/pg_xlog\nCreating template database in /u/pg/data/base/template1\nERROR: pg_atoi: error in \"f\": can't parse \"f\"\nERROR: pg_atoi: error in \"f\": can't parse \"f\"\nCreating global relations in /u/pg/data/base\nERROR: pg_atoi: error in \"t\": can't parse \"t\"\nERROR: pg_atoi: error in \"t\": can't parse \"t\"\nAdding template1 database to pg_database\nTRAP: Failed Assertion(\"!(reldesc):\", File: \"bootstrap.c\", Line: 464)\n\n!(reldesc) (0) [No such file or directory]\nAbort trap\n\ninitdb failed.\nRemoving /u/pg/data.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 17 Dec 1999 21:56:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] initdb messed up"
}
] |
[
{
"msg_contents": "\nHi,\n\nin PgAccess's the create table dialog is small bug. If I define a new table\nas interits of other table and I not define any column (as 'field' named this \ndialog) - PgAccess return ERROR message \"Your table has not field!\". But\nPgSQL allow define table as:\n\n\tCREATE TABLE xxx () INHERITS(yyy); \n\n...all colunms is from 'yyy'. \n\nNOTE: Why a attribute (column) is in the PgAccsess named 'field'? It is \nabnormal in SQL speech... \n\n\t\t\t\t\t\t\tKarel\n\n------------------------------------------------------------------------------\nKarel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nDocs: http://docs.linux.cz (big docs archive)\t\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n------------------------------------------------------------------------------\n\n",
"msg_date": "Thu, 25 Nov 1999 14:43:20 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "PgAccess - small bug?"
},
{
"msg_contents": "I assume this is fixed?\n\n\n> \n> Hi,\n> \n> in PgAccess's the create table dialog is small bug. If I define a new table\n> as interits of other table and I not define any column (as 'field' named this \n> dialog) - PgAccess return ERROR message \"Your table has not field!\". But\n> PgSQL allow define table as:\n> \n> \tCREATE TABLE xxx () INHERITS(yyy); \n> \n> ...all colunms is from 'yyy'. \n> \n> NOTE: Why a attribute (column) is in the PgAccsess named 'field'? It is \n> abnormal in SQL speech... \n> \n> \t\t\t\t\t\t\tKarel\n> \n> ------------------------------------------------------------------------------\n> Karel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n> \n> Docs: http://docs.linux.cz (big docs archive)\t\n> Kim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\n> FTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n> ------------------------------------------------------------------------------\n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 May 2000 20:04:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PgAccess - small bug?"
},
{
"msg_contents": "\n\n\n\nOn Wed, 31 May 2000, Bruce Momjian wrote:\n\n> I assume this is fixed?\n\n Oh, it is really old letter from me. I total forget... \n\n Hmm, I haven't here last version of CVS, but pgaccess in my comp has this\nbug still..\n\n Bruce, thanks for answer. I not had hope that anyone advert to this.\n\n\t\t\t\t\t\tKarel\n\n> \n> > \n> > Hi,\n> > \n> > in PgAccess's the create table dialog is small bug. If I define a new table\n> > as interits of other table and I not define any column (as 'field' named this \n> > dialog) - PgAccess return ERROR message \"Your table has not field!\". But\n> > PgSQL allow define table as:\n> > \n> > \tCREATE TABLE xxx () INHERITS(yyy); \n> > \n> > ...all colunms is from 'yyy'. \n> > \n> > NOTE: Why a attribute (column) is in the PgAccsess named 'field'? It is \n> > abnormal in SQL speech... \n> > \n> > \t\t\t\t\t\t\tKarel\n> > \n> > ------------------------------------------------------------------------------\n> > Karel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n> > \n> > Docs: http://docs.linux.cz (big docs archive)\t\n> > Kim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\n> > FTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n> > ------------------------------------------------------------------------------\n> > \n> > \n> > ************\n> > \n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n",
"msg_date": "Thu, 1 Jun 2000 10:07:33 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PgAccess - small bug?"
},
{
"msg_contents": "Constantin, can you comment on this?\n\n> \n> \n> \n> \n> On Wed, 31 May 2000, Bruce Momjian wrote:\n> \n> > I assume this is fixed?\n> \n> Oh, it is really old letter from me. I total forget... \n> \n> Hmm, I haven't here last version of CVS, but pgaccess in my comp has this\n> bug still..\n> \n> Bruce, thanks for answer. I not had hope that anyone advert to this.\n> \n> \t\t\t\t\t\tKarel\n> \n> > \n> > > \n> > > Hi,\n> > > \n> > > in PgAccess's the create table dialog is small bug. If I define a new table\n> > > as interits of other table and I not define any column (as 'field' named this \n> > > dialog) - PgAccess return ERROR message \"Your table has not field!\". But\n> > > PgSQL allow define table as:\n> > > \n> > > \tCREATE TABLE xxx () INHERITS(yyy); \n> > > \n> > > ...all colunms is from 'yyy'. \n> > > \n> > > NOTE: Why a attribute (column) is in the PgAccsess named 'field'? It is \n> > > abnormal in SQL speech... \n> > > \n> > > \t\t\t\t\t\t\tKarel\n> > > \n> > > ------------------------------------------------------------------------------\n> > > Karel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n> > > \n> > > Docs: http://docs.linux.cz (big docs archive)\t\n> > > Kim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\n> > > FTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n> > > ------------------------------------------------------------------------------\n> > > \n> > > \n> > > ************\n> > > \n> > \n> > \n> > -- \n> > Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> > \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 2 Oct 2000 00:22:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PgAccess - small bug?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Constantin, can you comment on this?\n\nSorry for my delayed response!\nYes, I will fix the table creation for no columns but inherited from\nother tables!\n\n> > > > in PgAccess's the create table dialog is small bug. If I define a new table\n> > > > as interits of other table and I not define any column (as 'field' named this\n> > > > dialog) - PgAccess return ERROR message \"Your table has not field!\". But\n> > > > PgSQL allow define table as:\n> > > >\n> > > > CREATE TABLE xxx () INHERITS(yyy);\n> > > >\n> > > > ...all colunms is from 'yyy'.\n> > > >\n> > > > NOTE: Why a attribute (column) is in the PgAccsess named 'field'? It is\n> > > > abnormal in SQL speech...\n\nI will also replace all over the place 'field' with 'column'!\n\nHope I will be able to deliver soon a minor upgrade (mainly bug fixes)\nbut we are in Romania in the fever of general elections (chambers and\npresident) :-)\n\nConstatin Teodorescu\nBraila, ROMANIA\n",
"msg_date": "Fri, 13 Oct 2000 08:12:49 +0300",
"msg_from": "Constantin Teodorescu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PgAccess - small bug?"
},
{
"msg_contents": "\nHas this been dealt with?\n\n> \n> \n> \n> \n> On Wed, 31 May 2000, Bruce Momjian wrote:\n> \n> > I assume this is fixed?\n> \n> Oh, it is really old letter from me. I total forget... \n> \n> Hmm, I haven't here last version of CVS, but pgaccess in my comp has this\n> bug still..\n> \n> Bruce, thanks for answer. I not had hope that anyone advert to this.\n> \n> \t\t\t\t\t\tKarel\n> \n> > \n> > > \n> > > Hi,\n> > > \n> > > in PgAccess's the create table dialog is small bug. If I define a new table\n> > > as interits of other table and I not define any column (as 'field' named this \n> > > dialog) - PgAccess return ERROR message \"Your table has not field!\". But\n> > > PgSQL allow define table as:\n> > > \n> > > \tCREATE TABLE xxx () INHERITS(yyy); \n> > > \n> > > ...all colunms is from 'yyy'. \n> > > \n> > > NOTE: Why a attribute (column) is in the PgAccsess named 'field'? It is \n> > > abnormal in SQL speech... \n> > > \n> > > \t\t\t\t\t\t\tKarel\n> > > \n> > > ------------------------------------------------------------------------------\n> > > Karel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n> > > \n> > > Docs: http://docs.linux.cz (big docs archive)\t\n> > > Kim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\n> > > FTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n> > > ------------------------------------------------------------------------------\n> > > \n> > > \n> > > ************\n> > > \n> > \n> > \n> > -- \n> > Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> > \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Jan 2001 08:41:38 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PgAccess - small bug?"
},
{
"msg_contents": "\nThis is fixed in the pgaccess version in CVS. \n\n> \n> \n> \n> \n> On Wed, 31 May 2000, Bruce Momjian wrote:\n> \n> > I assume this is fixed?\n> \n> Oh, it is really old letter from me. I total forget... \n> \n> Hmm, I haven't here last version of CVS, but pgaccess in my comp has this\n> bug still..\n> \n> Bruce, thanks for answer. I not had hope that anyone advert to this.\n> \n> \t\t\t\t\t\tKarel\n> \n> > \n> > > \n> > > Hi,\n> > > \n> > > in PgAccess's the create table dialog is small bug. If I define a new table\n> > > as interits of other table and I not define any column (as 'field' named this \n> > > dialog) - PgAccess return ERROR message \"Your table has not field!\". But\n> > > PgSQL allow define table as:\n> > > \n> > > \tCREATE TABLE xxx () INHERITS(yyy); \n> > > \n> > > ...all colunms is from 'yyy'. \n> > > \n> > > NOTE: Why a attribute (column) is in the PgAccsess named 'field'? It is \n> > > abnormal in SQL speech... \n> > > \n> > > \t\t\t\t\t\t\tKarel\n> > > \n> > > ------------------------------------------------------------------------------\n> > > Karel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n> > > \n> > > Docs: http://docs.linux.cz (big docs archive)\t\n> > > Kim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\n> > > FTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n> > > ------------------------------------------------------------------------------\n> > > \n> > > \n> > > ************\n> > > \n> > \n> > \n> > -- \n> > Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> > \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 27 Jan 2001 13:46:43 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PgAccess - small bug?"
}
] |
[
{
"msg_contents": "\nHi,\n\nI need in the SELECT query extract substring 'cccc' from string \n'aaa.bbbbb.cccc.dd.eee' (extract third field from string if \ndelimiter is '.').\n\nIt is easy if I know where is begin/end of 'cccc' and I can \nuse the substring() function:\n\nselect substring('aaa.bbbbb.cccc.dd.eee' from 11 for 4);\nsubstr\n------\ncccc\n\nBut how extract it if I don't know where is position of the second \nand third '.'? \n\nYes, I know the function position() or textpos(), but this return first \na position of the substring...\n\nFor this exist nice UN*X command \"cut -f3 -d.\" , but how make it in \nSQL?\n\nI ask about it, because I write for me this as new function in C, but \nI'm not sure if not exist other (better) way for it.\n\n\t\t\t\t\t\tKarel\n\n\n------------------------------------------------------------------------------\nKarel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nDocs: http://docs.linux.cz (big docs archive)\t\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n------------------------------------------------------------------------------\n\n",
"msg_date": "Thu, 25 Nov 1999 19:38:31 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "substring extraction"
},
{
"msg_contents": "Try this:\n\n--returns the $2 field delimited by $3\ndrop function field(text,int,text);\ncreate function field(text,int,text) returns text as\n'declare\n string text;\n pos int2:= 0;\n pos1 int2:= 0;\n times int2:= 0;\n totpos int2:= 0;\nbegin\n times:= $2 - 1;\n string:= $1;\n while totpos < times loop\n string:= substr(string,pos+1);\n pos:= strpos(string,$3);\n totpos:= totpos + 1;\n end loop;\n string:= substr(string,pos+1);\n pos1:= strpos(string,$3);\n return substr(string,1,pos1 - 1);\n end;\n' language 'plpgsql';\n\nselect field('primo.secondo.terzo',1,'.');\nfield\n-----\nprimo\n(1 row)\n\nselect field('primo.secondo.terzo',2,'.');\nfield\n-------\nsecondo\n(1 row)\n\nselect field('primo.secondo.terzo',3,'.');\nfield\n-----\nterzo\n(1 row)\n\n\nJos�\n\n\n\nKarel Zak - Zakkr ha scritto:\n\n> Hi,\n>\n> I need in the SELECT query extract substring 'cccc' from string\n> 'aaa.bbbbb.cccc.dd.eee' (extract third field from string if\n> delimiter is '.').\n>\n> It is easy if I know where is begin/end of 'cccc' and I can\n> use the substring() function:\n>\n> select substring('aaa.bbbbb.cccc.dd.eee' from 11 for 4);\n> substr\n> ------\n> cccc\n>\n> But how extract it if I don't know where is position of the second\n> and third '.'?\n>\n> Yes, I know the function position() or textpos(), but this return first\n> a position of the substring...\n>\n> For this exist nice UN*X command \"cut -f3 -d.\" , but how make it in\n> SQL?\n>\n> I ask about it, because I write for me this as new function in C, but\n> I'm not sure if not exist other (better) way for it.\n>\n> Karel\n>\n> ------------------------------------------------------------------------------\n> Karel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n>\n> Docs: http://docs.linux.cz (big docs archive)\n> Kim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\n> FTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n> 
------------------------------------------------------------------------------\n>\n> ************\n\n",
"msg_date": "Fri, 26 Nov 1999 15:25:24 +0100",
"msg_from": "jose soares <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] substring extraction"
},
{
"msg_contents": "\n\nOn Fri, 26 Nov 1999, jose soares wrote:\n\n> Try this:\n> \n> --returns the $2 field delimited by $3\n> drop function field(text,int,text);\n> create function field(text,int,text) returns text as\n> 'declare\n> string text;\n> pos int2:= 0;\n> pos1 int2:= 0;\n> times int2:= 0;\n> totpos int2:= 0;\n> begin\n> times:= $2 - 1;\n> string:= $1;\n> while totpos < times loop\n> string:= substr(string,pos+1);\n> pos:= strpos(string,$3);\n> totpos:= totpos + 1;\n> end loop;\n> string:= substr(string,pos+1);\n> pos1:= strpos(string,$3);\n> return substr(string,1,pos1 - 1);\n> end;\n> ' language 'plpgsql';\n> \n\n Oh, it is great! But my implementation in C for this is \na little longer (only) :-) \n\nI send this question to the hacker list because \"extract delimited \nsubstring\" is not a abnormal uses's request, and (IMHO) will very \ngood if this will in PgSQL. How much uses known write this in \nC or any PL? \n\n'C' implementafion \"extract delimited substring\":\n-----------------------------------------------\ntext\n*strcut( text *string, char *d, int field )\n{\n\tchar\t*ptr \t= NULL, \n\t\t*p \t= NULL,\n\t\t*pe \t= NULL;\n\ttext\t*result = NULL;\n\tint\tsiz;\n\t\n\tptr = VARDATA(string);\n\t*(ptr+ (VARSIZE(string) - VARHDRSZ)) = '\\0';\n\t\n\tfor(p = ptr; *p != '\\0'; p++) {\n\t\tif (field == 1)\n\t\t\tbreak;\n\t\tif (*p == (int) d)\n\t\t\t--field;\n\t}\t\n\tif (!*p)\n\t\treturn textin(\"\");\n\t\t\n\tfor(pe = p; *pe != '\\0'; pe++) {\n\t\tif (*pe == (int) d)\n\t\t\tbreak;\n\t}\t\n\t\n\tresult = (text *) palloc(sizeof(text) * (siz = pe - p) + VARHDRSZ);\n\tstrncpy(VARDATA(result), p, siz);\n\t\n\t*(VARDATA(result) + siz) = '\\0'; \t\n\tVARSIZE(result) = siz + VARHDRSZ;\n\treturn result;\n}\n\nCREATE FUNCTION strcut(text, char, int)\n RETURNS text\n AS '@module_dir@'\n LANGUAGE 'c';\n \n\ntemplate1=> select strcut('aaa.bbb.ccc', '.', 2);\nstrcut\n------\nbbb\n\n\n\t\t\t\t\t\tKarel\n\n",
"msg_date": "Fri, 26 Nov 1999 15:36:45 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] substring extraction"
},
{
"msg_contents": "Karel Zak wrote:\n\n> On Fri, 26 Nov 1999, jose soares wrote:\n>\n> > Try this:\n> >\n> > --returns the $2 field delimited by $3\n> > drop function field(text,int,text);\n> > create function field(text,int,text) returns text as\n> > 'declare\n> > string text;\n> > pos int2:= 0;\n> > pos1 int2:= 0;\n> > times int2:= 0;\n> > totpos int2:= 0;\n> > begin\n> > times:= $2 - 1;\n> > string:= $1;\n> > while totpos < times loop\n> > string:= substr(string,pos+1);\n> > pos:= strpos(string,$3);\n> > totpos:= totpos + 1;\n> > end loop;\n> > string:= substr(string,pos+1);\n> > pos1:= strpos(string,$3);\n> > return substr(string,1,pos1 - 1);\n> > end;\n> > ' language 'plpgsql';\n> >\n>\n> Oh, it is great! But my implementation in C for this is\n> a little longer (only) :-)\n>\n> I send this question to the hacker list because \"extract delimited\n> substring\" is not a abnormal uses's request, and (IMHO) will very\n> good if this will in PgSQL. How much uses known write this in\n> C or any PL?\n\n What about this one:\n\n create function field(text,int,text) returns text as '\n return [lindex [split $1 $3] $2]\n ' language 'pltcl';\n\n It does all the work as long as the third argument is a\n single character. For multibyte delimiters it will be\n slightly bigger, but not much. Now you might imagine why\n PL/Tcl was the first language I created.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 27 Nov 1999 00:22:12 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] substring extraction"
}
] |
[
{
"msg_contents": "Anyone out there who could help this guy? Sorry I don't have the time to\ncheck it right now and I don't know it form the top of my headh.\n\nMichael\n\n----- Forwarded message from Abhijit Chatterjee <[email protected]> -----\n\nDate: Thu, 25 Nov 1999 20:49:18 +0530\nFrom: Abhijit Chatterjee <[email protected]>\nTo: [email protected]\nSubject: A query...\n\nHi,.\n I am having a problem and seeking your help to resolve the same.\n\nThe problem assumes the following:\n\n- Assume postmaster is running as user 'A'.\n- Assume that I give access to a user 'B ' by using 'createdb'.\n- User 'A' has a database 'ADB'.\n- User 'B' has a database 'BDB'.\n\nNow, can I have the following restrictions:\n- 'A 'should be able to access 'ADB' and 'BDB'.\n- 'B' should be able to access 'BDB', but not 'ADB'.\n\n Hoping you can resolve this problem.\n\n\nThanking you in advance.\n\n\nregards\nAbhijit\n\n\n\n\n\nContent-Description: Card for abhijit chatterjee\n\n\n----- End forwarded message -----\n\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Thu, 25 Nov 1999 20:25:21 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "[[email protected]: A query...]"
}
] |
[
{
"msg_contents": "Hi,\n\nEvery now and then I get the following error:\n\n cannot write block 0 of tablename [username] blind\n\nIf this happens, all my database connections get this error when trying\nto access the database and I need to restart postgresql. The problem\ncausing this error needs to be something like this:\n\n create table table2 (col1 text);\n insert into table2 (col1) values ('some data');\n begin work;\n drop table table1;\n alter table table2 rename to table1;\n commit;\n\nI've been playing with some statements to repeat the error, but I\nhaven't\nbeen able to succesfully repeat the error (only once, but doing the\nsame didn't cause the error again). Maybe it has something to do with\nusing multiple connections.\n\nTrying some statements I found an other error, using these statements:\n\n create table table1 (col1 text);\n begin work;\n drop table table1;\n alter table table2 rename to table1;\n create table table2 (col1 text);\n commit;\n select * from table1;\n\nCaused:\n\n couldn't open table1: No such file or directory\n\nI'm using postgresql-6.5.2-1.i386.rpm.\n\nIs the posgresql development team aware of these errors and will\nthey be fixed?\n\nGr. Jaco\n",
"msg_date": "Thu, 25 Nov 1999 21:36:01 +0100",
"msg_from": "Jaco de Groot <[email protected]>",
"msg_from_op": true,
"msg_subject": "drop/rename table and transactions"
},
{
"msg_contents": "Lincoln Yeoh wrote:\n\n> >If this happens, all my database connections get this error when trying\n> >to access the database and I need to restart postgresql. The problem\n> >causing this error needs to be something like this:\n> >\n> > create table table2 (col1 text);\n> > insert into table2 (col1) values ('some data');\n> > begin work;\n> > drop table table1;\n> > alter table table2 rename to table1;\n> > commit;\n>\n> I did what I did coz I was curious. But creating/Dropping tables in a\n> transaction does not appear to be a \"good thing\", and that's not just for\n> PostgreSQL. I believe doing data definition stuff in transactions is not\n> recommended.\n>\n> Is it possible to achieve your goals by using things like\n> \"delete * from table1 where id!=stuffIwant\" instead of dropping it?\n>\n> Cheerio,\n>\n> Link.\n\nThis is one of the few areas that I disagree with the development trend in\nPostgreSQL. Every release contains different bugs related to DDL statements in\ntransactions. The developers appear to want to make them work (i.e., have the\nability to rollback a DROP TABLE, ALTER TABLE ADD COLUMN, etc.). This, in my\nopinion, goes far above and beyond the call of duty for a RDBMS. Oracle issues\nan implicit COMMIT whenever a DDL statement is found. In fact, one could argue\nthat those who are porting Oracle apps to PostgreSQL would assume,\nincorrectly, than a DROP TABLE in a transaction committed any work done\npreviously.\n\nI personally believe that PostgreSQL should do the same as Oracle and greatly\nsimplify the implementation of DDL statements in the backed by issuing an\nimplicit COMMIT....\n\nJust my opinion, though\n\nMike Mascari\n\n\n",
"msg_date": "Thu, 25 Nov 1999 19:25:11 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": ">If this happens, all my database connections get this error when trying\n>to access the database and I need to restart postgresql. The problem\n>causing this error needs to be something like this:\n>\n> create table table2 (col1 text);\n> insert into table2 (col1) values ('some data');\n> begin work;\n> drop table table1;\n> alter table table2 rename to table1;\n> commit;\n\nWhat happens if two different connections try to \"drop table table1\"? Try\ndoing it step by step in two different connections?\n\nI did a vaguely similar thing.\n\nI did a BEGIN, DROP TABLE, ROLLBACK. And yes I know you're not supposed to\nbe able to rollback a drop table, but I could not recreate the table- I had\nto do a manual file system delete of the table. If a connection gets broken\nduring a transaction, and you get a rollback, you could encounter the same\nproblem.\n\nI did what I did coz I was curious. But creating/Dropping tables in a\ntransaction does not appear to be a \"good thing\", and that's not just for\nPostgreSQL. I believe doing data definition stuff in transactions is not\nrecommended. \n\nIs it possible to achieve your goals by using things like \n\"delete * from table1 where id!=stuffIwant\" instead of dropping it?\n\nCheerio,\n\nLink.\n\n",
"msg_date": "Fri, 26 Nov 1999 09:08:04 +0800",
"msg_from": "Lincoln Yeoh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Mike Mascari wrote:\n> >> This is one of the few areas that I disagree with the development\n> >> trend in PostgreSQL. Every release contains different bugs related to\n> >> DDL statements in transactions. The developers appear to want to make\n> >> them work (i.e., have the ability to rollback a DROP TABLE, ALTER\n> >> TABLE ADD COLUMN, etc.). This, in my opinion, goes far above and\n> >> beyond the call of duty for a RDBMS. Oracle issues an implicit COMMIT\n> >> whenever a DDL statement is found.\n>\n> So, the limits of our ambition should be to be as good as Oracle?\n> (Only one-half :-) here.)\n>\n> I've seen quite a few discussions on the mailing lists about\n> applications that could really use rollback-able DDL commands.\n>\n> Personally, I certainly wouldn't give up any reliability for this,\n> and darn little performance; but within those constraints I think\n> we should do what we can.\n>\n> regards, tom lane\n>\n\nWell, I agree that it would be GREAT to be able to rollback DDL\nstatements. However, at the moment, failures during a transaction while\nDDL statements occur usually require direct intervention by the user (in\nthe case of having to drop/recreate indexes) and often require the services\nof the DBA, if filesystem intervention is necessary (i.e., getting rid of\npartially dropped/created tables and their associated fileystem files). I\nguess I'm worried by the current state of ambiguity with respect to which\nDDL statements can be safely rolled back and which can't. I know you added\nNOTICEs in current, but it seems less than robust to ask the user not to\ntrigger a bug. 
And of course, something like the following can always\nhappen:\n\ntest=# CREATE TABLE example(value text);\nCREATE\ntest=# BEGIN;\nBEGIN\ntest=# DROP TABLE example;\nNOTICE: Caution: DROP TABLE cannot be rolled back, so don't abort now\nDROP\n\n-- someone just yanked the RJ45 cable from the hub in the T-COM closet --\n\n(which, ludicrous as it might seem, happens)\n\n>From an otherwise EXTREMELY happy user :-) (full smile...), I see 3\nscenarios:\n\n(1) Disallow DDL statements in transactions\n(2) Send NOTICE's asking for the user to not trigger the bug until the bugs\ncan be fixed -or-\n(3) Have all DDL statements implicity commit any running transactions.\n\n1, of course, stinks. 2 is the current state and would probably take\nseveral releases before all DDL statement rollback bugs could be crushed\n(look how many times it took to get segmented files right -- and are we\nREALLY sure it is?). 3, it seems to me, could be implemented in a day's\ntime, would prevent the various forms of data corruption people often post\nto this list (GENERAL) about, and still allows 2 to happen in the future as\na configure-time or run-time option.\n\nJust some ramblings,\n\nMike Mascari\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Thu, 25 Nov 1999 23:26:02 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "Mike Mascari wrote:\n> \n> This is one of the few areas that I disagree with the development trend in\n> PostgreSQL. Every release contains different bugs related to DDL statements in\n> transactions. The developers appear to want to make them work (i.e., have the\n> ability to rollback a DROP TABLE, ALTER TABLE ADD COLUMN, etc.). This, in my\n> opinion, goes far above and beyond the call of duty for a RDBMS. Oracle issues\n> an implicit COMMIT whenever a DDL statement is found. In fact, one could argue\n> that those who are porting Oracle apps to PostgreSQL would assume,\n> incorrectly, than a DROP TABLE in a transaction committed any work done\n> previously.\n> \n> I personally believe that PostgreSQL should do the same as Oracle and greatly\n> simplify the implementation of DDL statements in the backed by issuing an\n> implicit COMMIT....\n> \n> Just my opinion, though\n\nAnd I agreed with this.\nBut I would like to preserve ability to CREATE TABLE, mostly\nbecause I think that SELECT ... INTO TABLE ... is very usefull\nthing.\n\nVadim\n",
"msg_date": "Fri, 26 Nov 1999 12:46:33 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "Mike Mascari wrote:\n>> This is one of the few areas that I disagree with the development\n>> trend in PostgreSQL. Every release contains different bugs related to\n>> DDL statements in transactions. The developers appear to want to make\n>> them work (i.e., have the ability to rollback a DROP TABLE, ALTER\n>> TABLE ADD COLUMN, etc.). This, in my opinion, goes far above and\n>> beyond the call of duty for a RDBMS. Oracle issues an implicit COMMIT\n>> whenever a DDL statement is found.\n\nSo, the limits of our ambition should be to be as good as Oracle?\n(Only one-half :-) here.)\n\nI've seen quite a few discussions on the mailing lists about\napplications that could really use rollback-able DDL commands.\n\nPersonally, I certainly wouldn't give up any reliability for this,\nand darn little performance; but within those constraints I think\nwe should do what we can.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 Nov 1999 01:52:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions "
},
{
"msg_contents": "Hi,\n\nI'd like to prevent duplicate ids from being inserted into a table. I can\nlet the database enforce it by using UNIQUE or PRIMARY KEY. But assuming I\nprefer to catch such things with the application, what would be the best\nway of doing it?\n\nThe only way I figured to do it was to use:\nbegin;\nlock table accounts;\nselect count(*) from accounts where id=$number;\n if count=0, insert into accounts (id,etc) values ($number,$etc)\ncommit;\n\nIs this a good idea? Or is it much better and faster to let the database\ncatch things?\n\nIs it faster to use \"select count(*) from accounts\" or \"select id from\naccounts\"? \n\nApparently count(*) has some speed optimizations in MySQL. So wondering if\nthere are similar things in Postgres.\n\nThanks,\n\nLink.\n\n",
"msg_date": "Fri, 26 Nov 1999 17:04:09 +0800",
"msg_from": "Lincoln Yeoh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] locking/insert into table and transactions"
},
{
"msg_contents": "Mike Mascari <[email protected]> writes:\n> Well, I agree that it would be GREAT to be able to rollback DDL\n> statements. However, at the moment, failures during a transaction while\n> DDL statements occur usually require direct intervention by the user (in\n> the case of having to drop/recreate indexes) and often require the services\n> of the DBA, if filesystem intervention is necessary (i.e., getting rid of\n> partially dropped/created tables and their associated fileystem\n> files).\n\nAnd forced commit after the DDL statement completes will improve that\nhow?\n\n> I see 3 scenarios:\n\n> (1) Disallow DDL statements in transactions\n> (2) Send NOTICE's asking for the user to not trigger the bug until the bugs\n> can be fixed -or-\n> (3) Have all DDL statements implicity commit any running transactions.\n\n> 1, of course, stinks. 2 is the current state and would probably take\n> several releases before all DDL statement rollback bugs could be crushed\n\nIt's not an overnight project, for sure.\n\n> 3, it seems to me, could be implemented in a day's\n> time, would prevent the various forms of data corruption people often post\n> to this list (GENERAL) about,\n\nI don't believe either of those assumptions. We've had problems with\nVACUUM's internal commit, and I don't think it'd be either quick or\ninherently more reliable to apply the same model to all DDL commands.\n\n\nA more significant point is that implicit commit is not a transparent\nchange; it will break applications. People use transaction blocks for\ntwo reasons: (1) to define where to roll back to after an error, (2) to\nensure that the results of logically related updates become visible to\nother backends atomically. Implicit commit destroys both of those\nguarantees, even though only the first one is really related to the\nimplementation problem we are trying to solve.\n\nAs a user I'd be pretty unhappy if \"SELECT ... INTO\" suddenly became\n\"COMMIT; SELECT; BEGIN\". 
Not only would that mean that updates made\nby my transaction would become visible prematurely, but it might also\nmean that the SELECT retrieves results it should not (ie, results from\nxacts that were not committed when my xact started). Both of these\nthings could make my application logic fail in hard-to-find, hard-to-\nreproduce-except-under-load ways.\n\nSo, although implicit commit might look like a convenient workaround at\nthe level of Postgres itself, it'd be a horrible loss of reliability\nat the application level. I'd rather go with #1 (hard error) than\nrisk introducing transactional bugs into applications that use Postgres.\n\n\n> Since ORACLE has 70% of the RDBMS market, it is the de facto standard\n\nYes, and Windows is the de facto standard operating system. I don't use\nWindows, and I'm not willing to follow Oracle's lead when they make a\nbad decision...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 Nov 1999 11:13:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions "
},
{
"msg_contents": "> Is it possible to achieve your goals by using things like\n> \"delete * from table1 where id!=stuffIwant\" instead of dropping it?\n\nYes, I think I better use delete statements instead of drop statements\nknowing PostgreSQL can't always handle drop/rename statements in\ntransactions correctly.\n\nJaco de Groot\n",
"msg_date": "Sun, 28 Nov 1999 12:37:30 +0100",
"msg_from": "Jaco de Groot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "Mike Mascari wrote:\n> \n> >From an otherwise EXTREMELY happy user :-) (full smile...), I see 3\n> scenarios:\n>\n> (1) Disallow DDL statements in transactions\n> (2) Send NOTICE's asking for the user to not trigger the bug until the bugs\n> can be fixed -or-\n> (3) Have all DDL statements implicity commit any running transactions.\n\nI think 1 is the best solution as long as there are bugs concerning DDL\nstatements in transactions. It will prevent people from getting in\ntrouble. I've been in this trouble for months :-(. I'm using Java\nServlets to connect to PostgreSQL and I'm having DDL statements within\ntransactions which sometimes cause an error. This error is hard to find\nand solve if you don't know PostgreSQL has problems with DDL statements\nin transactions. And if the error occurs it doesn't simply crash 1\ntransaction or connection but it crashes all connections, which prevents\nmy website from running correctly until I've manually fixed the problem\n(mostly restarting PostgreSQL). To prevent others from getting in the\nsame trouble I'd like to propose that the next release of PostgreSQL\nwill disallow DDL statements in transactions and notify the user that this\nis a feature which is currently in development.\n\nJaco de Groot\n",
"msg_date": "Sun, 28 Nov 1999 12:44:45 +0100",
"msg_from": "Jaco de Groot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
}
] |
[
{
"msg_contents": "Hi all,\n\nHere is the first draft for the spec of postmaster starting/stopping\ntool. I have named it as \"pg_ctl.\"\n\no pg_ctl [-w] start\n\nstart up postmaster. If -w is specified, it will wait for the database\nup and running. Options for postmaster should be placed in\n$PGDATA/postmaster.conf.\n\nNote: pg_ctl assumes that postmaster writes its pid into\n$PGDATA/postmaster.pid.\n\no pg_ctl [-w][-s|-f|-i] stop\n\nstop postmaster. If -w is specified, it will wait for the database\nshutting down.\n\nShutdown mode can be one of:\n\n\t-s: smart shutdown (default)\n\t-f: fast shutdown\n\t-i: immediate shutdown\n\no pg_ctl [-w][-s|-f|-i] restart\n\njust stop and start postmaster.\n\no pg_ctl status\n\nreport the status of postmaster. It would be nice if it could report,\nfor example, the number of backends running (and their pids) etc. For\nthis purpose I propose followings:\n\n(1) Add another protocol STATUS_REQUEST_CODE to libpq/pqcomm.h.\n\n(2) Add code to process the protocol in\npostmaster/postmaster.c:readStartupPacket().\n\nComments, suggestions are welcome.\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 26 Nov 1999 11:12:57 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "None"
},
{
"msg_contents": "> Hi all,\n> \n> Here is the first draft for the spec of postmaster starting/stopping\n> tool. I have named it as \"pg_ctl.\"\n> \n> o pg_ctl [-w] start\n> \n> start up postmaster. If -w is specified, it will wait for the database\n> up and running. Options for postmaster should be placed in\n> $PGDATA/postmaster.conf.\n> \n> Note: pg_ctl assumes that postmaster writes its pid into\n> $PGDATA/postmaster.pid.\n> \n> o pg_ctl [-w][-s|-f|-i] stop\n\nLooks good to me.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 Nov 1999 23:39:51 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: your mail"
},
{
"msg_contents": "How about adding pg_ctl [-w] restart?\n\nIt will automatically stop and start the database again.\n\nChai\n\n\nTatsuo Ishii wrote:\n> \n> Hi all,\n> \n> Here is the first draft for the spec of postmaster starting/stopping\n> tool. I have named it as \"pg_ctl.\"\n> \n> o pg_ctl [-w] start\n> \n> start up postmaster. If -w is specified, it will wait for the database\n> up and running. Options for postmaster should be placed in\n> $PGDATA/postmaster.conf.\n> \n> Note: pg_ctl assumes that postmaster writes its pid into\n> $PGDATA/postmaster.pid.\n> \n> o pg_ctl [-w][-s|-f|-i] stop\n> \n> stop postmaster. If -w is specified, it will wait for the database\n> shutting down.\n> \n> Shutdown mode can be one of:\n> \n> -s: smart shutdown (default)\n> -f: fast shutdown\n> -i: immediate shutdown\n> \n> o pg_ctl [-w][-s|-f|-i] restart\n> \n> just stop and start postmaster.\n> \n> o pg_ctl status\n> \n> report the status of postmaster. It would be nice if it could report,\n> for example, the number of backends running (and their pids) etc. For\n> this purpose I propose followings:\n> \n> (1) Add another protocol STATUS_REQUEST_CODE to libpq/pqcomm.h.\n> \n> (2) Add code to process the protocol in\n> postmaster/postmaster.c:readStartupPacket().\n> \n> Comments, suggestions are welcome.\n> --\n> Tatsuo Ishii\n> \n> ************\n",
"msg_date": "Fri, 26 Nov 1999 13:55:13 +0700",
"msg_from": "Chairudin Sentosa Harjo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> Here is the first draft for the spec of postmaster starting/stopping\n> tool. I have named it as \"pg_ctl.\"\n> o pg_ctl [-w] start\n> start up postmaster.\n\nHow will pg_ctl know what to start --- where do the database directory,\nport number, and path (to locate the postmaster executable) come from?\n\n> Options for postmaster should be placed in\n> $PGDATA/postmaster.conf.\n\nPort number could reasonably be kept there, but I'm less sure about\npath, and for sure there must be another way for pg_ctl to find PGDATA\nin the first place...\n\n> It would be nice if it could report,\n> for example, the number of backends running (and their pids) etc. For\n> this purpose I propose followings:\n\n> (1) Add another protocol STATUS_REQUEST_CODE to libpq/pqcomm.h.\n\n> (2) Add code to process the protocol in\n> postmaster/postmaster.c:readStartupPacket().\n\nSecurity issues may be a factor here. Do you want just anyone anywhere\non the net to be able to extract the postmaster status? If not, how\nshall we restrict it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 Nov 1999 10:38:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl"
},
{
"msg_contents": "On Thu, 25 Nov 1999, Tatsuo Ishii wrote:\n> Here is the first draft for the spec of postmaster starting/stopping\n> tool. I have named it as \"pg_ctl.\"\n\nTatsuo, are you implementing this? If so, feel free to get the startup script\nfrom the RedHat RPM set and cannibalize. This pg_ctl command is going to\ngreatly simplify startup scripts.\n\nThe RPM startup script is /etc/rc.d/init.d/postgresql, and can also get fetched\nfrom my site at\nhttp://www.ramifordistat.net/postgres/unpacked/non-beta/postgresql.init.6.5.3\n\nIf you're not implementing pg_ctl, I can take a stab at it.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 26 Nov 1999 11:04:10 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re:"
},
{
"msg_contents": "> Tatsuo, are you implementing this?\n\nYes.\n\n>If so, feel free to get the startup script\n> from the RedHat RPM set and cannibalize. This pg_ctl command is going to\n> greatly simplify startup scripts.\n\nThanks. I know that the script is very convenient since I've been\nusing the script for a while:-) This is one of the reasons why I started\nto implement pg_ctl.\n--\nTatsuo Ishii\n\n",
"msg_date": "Sat, 27 Nov 1999 11:05:31 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re:"
},
{
"msg_contents": "From: Tom Lane <[email protected]>\nSubject: Re: pg_ctl\nDate: Fri, 26 Nov 1999 10:38:06 -0500\n\n> Tatsuo Ishii <[email protected]> writes:\n> > Here is the first draft for the spec of postmaster starting/stopping\n> > tool. I have named it as \"pg_ctl.\"\n> > o pg_ctl [-w] start\n> > start up postmaster.\n> \n> How will pg_ctl know what to start --- where do the database directory,\n> port number, and path (to locate the postmaster executable) come from?\n\nVery good question. What I'm thinking now is:\n\nthe database directory:\n\t1) $PGDATA (as an environment variable)\n\t2) hard coded in the pg_ctl script\n\nthe path to postmaster:\n\t1) in the command search path\n\t2) hard coded in the pg_ctl script\n\t3) assume that postmaster lives in the same directory where pg_ctl\n\t has been put\n\n> > It would be nice if it could report,\n> > for example, the number of backends running (and their pids) etc. For\n> > this purpose I propose followings:\n> \n> > (1) Add another protocol STATUS_REQUEST_CODE to libpq/pqcomm.h.\n> \n> > (2) Add code to process the protocol in\n> > postmaster/postmaster.c:readStartupPacket().\n> \n> Security issues may be a factor here. Do you want just anyone anywhere\n> on the net to be able to extract the postmaster status? If not, how\n> shall we restrict it?\n\nI think a reasonable restriction would be to let anyone on the same\nmachine that postmaster is running on issue the protocol.\n\nAnother idea might be using our host based authentication. What about\nhaving a \"virtual database\" used only for the status request protocol?\nFor example, with the setting below, any authenticated user on the net\n192.168.0.0 could send the protocol. \"statusdb\" indicates that backend\nshould treat it in a special way.\n\nhost statusdb 192.168.0.0 255.255.255.0 crypt\n--\nTatsuo Ishii\n\n",
"msg_date": "Sat, 27 Nov 1999 11:08:51 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_ctl"
},
{
"msg_contents": "On Fri, 26 Nov 1999, Tatsuo Ishii wrote:\n\n> >If so, feel free to get the startup script\n> > from the RedHat RPM set and cannibalize. This pg_ctl command is going to\n> > greatly simplify startup scripts.\n> \n> Thanks. I know that the script is very convenient since I've been\n> using the script for a while:-) This is one of the reason why I start\n> to implemnt pg_ctl.\n\nThe script can become spoiling -- its biggest problem is the need to run it as\nroot.\n\nOk, just a few suggestions:\n\n1.)\tAllow either environment variables or command line switches to specify\nPGDATA, PGLIB, postmaster location, port#, etc. \n2.)\tFallback to builtin defaults if no envvars or switches specified. \n3.)\tAllow a mix of envvars and switches.\n\nThe locations needed:\nPGDATA\nPGLIB\nPATH_TO_POSTMASTER\nPGPORT\nPATH_TO_PID (could need to be /var/run/pgsql for FHS compliance) \n\nFor the PID files, maybe use a format of postmaster.PGPORT (ie,\npostmaster.5432). The PID file's content, of course, needs to just be the\nprocess identifier in ASCII followed by newline.\n\nAlso, options for logging could be passed -- maybe provide a switch to pass\noptions on to postmaster? This may need to wait for subsequent versions --\ngetting basic functionality first is a good idea.\n\nIt would be nice if a status report from postmaster could include the\nenvvars it was invoked with, the command line invoked with, and the other\nthings you already mentioned. 
This way, I can\nfire up an old postmaster (using an old backend) to dump a database, stop it,\nand fire up a new postmaster (and backend) to restore.\n\nI like this command.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n\n\n\n\n> --\n> Tatsuo Ishii\n> \n> \n> ************\n",
"msg_date": "Fri, 26 Nov 1999 21:31:30 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pg_ctl"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> Security issues may be a factor here. Do you want just anyone anywhere\n>> on the net to be able to extract the postmaster status? If not, how\n>> shall we restrict it?\n\n> I think a resonable restriction would be let anyone on the same\n> machine that postmaster is running could issue the protocol.\n\nGrumble. That's both too restrictive and not restrictive enough.\nIn an intranet-LAN kind of situation, you'd like to be able to check\nthe Postgres status without having to log into the specific machine\nthat's running the postmaster; while if the postmaster is running on\na large multiuser system, the very last thing that you want to do is\ngrant access to everyone else on the system.\n\n> Another idea might be using our host based authentication. What about\n> having a \"virtual database\" used only for the status request protocol?\n\nThat could be workable. But I think I may have a better idea.\n\nThis morning after I sent my previous comments, I was thinking that the\nreally right way to do it would be to make the status info be a \"virtual\ntable\": you log into Postgres in the normal way, and issue a query\nagainst some special table name, and if you've got the required access\nrights then you get an answer. The postgres superuser would always get\nan answer, of course, and could grant/deny permissions to other users.\n\nSee, the advantage of doing it that way is that we build on top of the\nexisting Postgres access control and permission mechanisms, instead of\ninventing a new set of mechanisms on the spur of the moment. 
Compare\nthe Linux \"/proc filesystem\" for accessing system and process status\ninformation --- /proc isn't a normal filesystem in any sense of the\nword, but by making it look like one, the Linux folk managed to reuse\na lot of existing, well-tested lookup and permission-check mechanisms.\n\nOffhand I don't see any reason to think that making system status look\nlike one or more virtual tables would be much harder to implement than\nmaking it available via special-purpose postmaster protocols. It seems\nworth looking into, anyway.\n\nIf it doesn't work, then your idea is definitely the next thing to try:\nrecycle the pg_hba mechanisms to control access to a postmaster status\nprotocol.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 Nov 1999 23:59:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl "
},
{
"msg_contents": "> > I think a resonable restriction would be let anyone on the same\n> > machine that postmaster is running could issue the protocol.\n> \n> Grumble. That's both too restrictive and not restrictive enough.\n> In an intranet-LAN kind of situation, you'd like to be able to check\n> the Postgres status without having to log into the specific machine\n> that's running the postmaster; while if the postmaster is running on\n> a large multiuser system, the very last thing that you want to do is\n> grant access to everyone else on the system.\n\nOk, let's regard the functionality to report the status of postmaster\nand/or backends be separate from pg_ctl.\n\n> > Another idea might be using our host based authentication. What about\n> > having a \"virtual database\" used only for the status request protocol?\n> \n> That could be workable. But I think I may have a better idea.\n> \n> This morning after I sent my previous comments, I was thinking that the\n> really right way to do it would be to make the status info be a \"virtual\n> table\": you log into Postgres in the normal way, and issue a query\n> against some special table name, and if you've got the required access\n> rights then you get an answer. The postgres superuser would always get\n> an answer, of course, and could grant/deny permissions to other users.\n\nOracle already has the concept of \"virtual table\" and I like the idea\ntoo.\n\n> Offhand I don't see any reason to think that making system status look\n> like one or more virtual tables would be much harder to implement than\n> making it available via special-purpose postmaster protocols. It seems\n> worth looking into, anyway.\n\nTom, I remember you are going to enhance the function manager to allow\nfunctions to return set of rows. If this is possible, it should be\nvery easy to implement the virtual tables. Is that what is in your\nmind?\n--\nTatsuo Ishii\n\n",
"msg_date": "Sat, 27 Nov 1999 15:35:28 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: pg_ctl "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> Offhand I don't see any reason to think that making system status look\n>> like one or more virtual tables would be much harder to implement than\n>> making it available via special-purpose postmaster protocols. It seems\n>> worth looking into, anyway.\n\n> Tom, I remember you are going to enhance the function manager to allow\n> functions to return set of rows.\n\nMoi? I don't recall saying any such thing. Jan sounded like he had\nsome ideas about how to do it, but my ambitions for fmgr don't go\nfurther than cleaning up NULL handling and fixing its portability\nproblems. I have too many other things to do...\n\n> If this is possible, it should be very easy to implement the virtual\n> tables.\n\nIt would indeed provide a nice way of defining virtual tables --- just\nmake a function that returns the current table contents on-demand.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 Nov 1999 12:10:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pg_ctl "
},
{
"msg_contents": "I have a logging subsystem running - just waiting for some aid on an OS-related\nbug - but it supports processing an arbitrarily complex options file (both log and \nnon-log options) and display/logging of the environment options and other\nparameters of interest.\n\n regards,\n\n Tim Holloway\n\nLamar Owen wrote:\n> \n> On Fri, 26 Nov 1999, Tatsuo Ishii wrote:\n> \n> > >If so, feel free to get the startup script\n> > > from the RedHat RPM set and cannibalize. This pg_ctl command is going to\n> > > greatly simplify startup scripts.\n> >\n> > Thanks. I know that the script is very convenient since I've been\n> > using the script for a while:-) This is one of the reason why I start\n> > to implemnt pg_ctl.\n> \n> The script can become spoiling -- it's biggest problem is the need to run it as\n> root.\n> \n> Ok, just a few suggestions:\n> \n> 1.) Allow either environment variables or command line switches to specify\n> PGDATA, PGLIB, postmaster location, port#, etc.\n> 2.) Fallback to builtin defaults if no envvars or switches specified.\n> 3.) Allow a mix of envvars and switches.\n> \n> The locations needed:\n> PGDATA\n> PGLIB\n> PATH_TO_POSTMASTER\n> PGPORT\n> PATH_TO_PID (could need to be /var/run/pgsql for FHS compliance)\n> \n> For the PID files, maybe use a format of postmaster.PGPORT (ie,\n> postmaster.5432). The PID files content, of course, needs to just be the\n> process identifier in ASCII followed by newline.\n> \n> Also, options for logging could be passed -- maybe provide a switch to pass\n> options on to postmaster? This may need to wait for subsequent versions --\n> getting basic functionality first is a good idea.\n> \n> It would be nice if a status report from postmaster could include the\n> envvars it was invoked with, the command line invoked with, and the other\n> things you already mentioned. 
(subject to security policy, of course).\n> \n> For subsquent versions (not to complicate an initial version), being able to\n> run any version backend using the appropriate version libraries would be nice.\n> This would involve only one more option -- PATH_TO_POSTGRES. This way, I can\n> fire up an old postmaster (using an old backend) to dump a database, stop it,\n> and fire up a new postmaster (and backend) to restore.\n> \n> I like this command.\n> \n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n> \n> > --\n> > Tatsuo Ishii\n> >\n> >\n> > ************\n> \n> ************\n",
"msg_date": "Sun, 28 Nov 1999 15:02:57 -0500",
"msg_from": "Tim Holloway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pg_ctl"
},
{
"msg_contents": "> > Tatsuo, are you implementing this?\n> \n> Yes.\n> \n> >If so, feel free to get the startup script\n> > from the RedHat RPM set and cannibalize. This pg_ctl command is going to\n> > greatly simplify startup scripts.\n> \n> Thanks. I know that the script is very convenient since I've been\n> using the script for a while:-) This is one of the reason why I start\n> to implemnt pg_ctl.\n\nIs there a reason it is called pg_ctl and not pg_control? I find I\nabbreviate control as ctrl, cntrl, cntl so I usually spell it out.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Nov 1999 00:20:13 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re:"
},
{
"msg_contents": "> Is there a reason it is called pg_ctl and not pg_control? I find I\n> abbreviate control as ctrl, cntrl, cntl so I usually spell it out.\n\nI just got the idea from \"apachectl\" or a famous system call \"ioctl.\" \nHowever, if it's not natural for English speakers, I'm glad to change\nit to more appropriate one.\n--\nTatsuo Ishii\n\n",
"msg_date": "Mon, 29 Nov 1999 14:45:47 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re:"
},
{
"msg_contents": "> > Tom, I remember you are going to enhance the function manager to allow\n> > functions to return set of rows.\n> \n> Moi? I don't recall saying any such thing. Jan sounded like he had\n> some ideas about how to do it, but my ambitions for fmgr don't go\n> further than cleaning up NULL handling and fixing its portability\n> problems. I have too many other things to do...\n\nSorry for the confusion.\n\n> > If this is possible, it should be very easy to implement the virtual\n> > tables.\n> \n> It would indeed provide a nice way of defining virtual tables --- just\n> make a function that returns the current table contents on-demand.\n\nAnyway, it seems you already have an idea how to implement virtual\ntables without modifying the fmgr. Can you tell me about that?\n--\nTatsuo Ishii\n\n",
"msg_date": "Tue, 30 Nov 1999 10:21:22 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: pg_ctl "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> It would indeed provide a nice way of defining virtual tables --- just\n>> make a function that returns the current table contents on-demand.\n\n> Anyway, it seems you already have an idea how to implement virtual\n> tables without modifying the fmgr. Can you tell me about that?\n\nNo, I don't have any ideas --- I was just agreeing that it'd be a nice\nthing if we could do it.\n\nI am planning to add a \"hook\" field into the fmgr interface to allow\ndealing with function results that can't be returned as a single Datum\n(see previous discussions on hackers list). But I'm not going to try\nto write any code that makes use of that hook, at least not for 7.0.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Nov 1999 23:03:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pg_ctl "
}
] |
[
{
"msg_contents": "Zeugswetter Andreas SEV wrote:\n\n> Vadim wrote:\n> > > The developers appear to want to make them\n> > > work (i.e., have the\n> > > ability to rollback a DROP TABLE, ALTER TABLE ADD COLUMN,\n> > > etc.). This, in my\n> > > opinion, goes far above and beyond the call of duty for a\n> > > RDBMS. Oracle issues\n> > > an implicit COMMIT whenever a DDL statement is found.\n> >\n> > And I agreed with this.\n>\n> And I strongly disagree.\n> This sounds like pushing the flush button in the toilet,\n> and instead of the toilet flushing you get a shower.\n>\n> How could anybody come to the idea that a DDL statement\n> also does a commit work if inside a transaction ?\n>\n> Now this sound so absurd, that I even doubt Oracle would do this.\n>\n> Andreas\n\nI hate to disappoint your faith in Oracle, but....\n(from the Oracle 7 SQL Language Reference Manual):\n\n--------------\nTransactions\n\nA transaction (or a logical unit of work) is a sequence of SQL\nstatements that ORACLE treats as a single unit. A transaction\nbegins with the first executable SQL statement after a COMMIT,\nROLLBACK or connection to the database. A transaction ends with\na COMMIT, ROLLBACK or disconnection (intentional or unintentional)\nfrom the database. Note that ORACLE issues an implicit COMMIT\nbefore and after any Data Definition Language statement.\n\nYou can also use a COMMIT or ROLLBACK statement to terminate\na read only transaction begun by a SET TRANSACTION statement.\n--------------\n\nSince ORACLE has 70% of the RDBMS market, it is the de facto\nstandard that the RDBMS will issue an implicit COMMIT when\nprocessing a DDL statement. Like I said before, I would LOVE to\nhave working support for ROLLBACKs of DDL statements. But I\nwould prefer to have implicit COMMITs over corrupted indexes,\ntables, and mandatory DBA intervention.\n\nMike Mascari\n\n\n\n",
"msg_date": "Fri, 26 Nov 1999 00:12:31 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "Vadim wrote:\n> > The developers appear to want to make them \n> > work (i.e., have the\n> > ability to rollback a DROP TABLE, ALTER TABLE ADD COLUMN, \n> > etc.). This, in my\n> > opinion, goes far above and beyond the call of duty for a \n> > RDBMS. Oracle issues\n> > an implicit COMMIT whenever a DDL statement is found. \n>\n> And I agreed with this.\n\nAnd I strongly disagree. \nThis sounds like pushing the flush button in the toilet,\nand instead of the toilet flushing you get a shower.\n\nHow could anybody come to the idea that a DDL statement \nalso does a commit work if inside a transaction ?\n\nNow this sound so absurd, that I even doubt Oracle would do this.\n\nAndreas\n \n",
"msg_date": "Fri, 26 Nov 1999 10:38:15 +0100",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": false,
"msg_subject": "AW: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "Zeugswetter Andreas SEV wrote:\n> \n> > > RDBMS. Oracle issues\n> > > an implicit COMMIT whenever a DDL statement is found.\n> >\n> > And I agreed with this.\n> \n> And I strongly disagree.\n> This sounds like pushing the flush button in the toilet,\n> and instead of the toilet flushing you get a shower.\n> \n> How could anybody come to the idea that a DDL statement\n> also does a commit work if inside a transaction ?\n> \n> Now this sound so absurd, that I even doubt Oracle would do this.\n\nStandard says (4.41 SQL-transactions):\n\n It is implementation-defined whether or not the non-dynamic or\n dynamic execution of an SQL-data statement or the execution of\n ^^^^^^^^^^^^^^^^^^\n an <SQL dynamic data statement> is permitted to occur within the\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n same SQL-transaction as the non-dynamic or dynamic execution of\n ^^^^^^^^^^^^^^^^^^^^^^^\n an SQL-schema statement. If it does occur, then the effect on any\n ^^^^^^^^^^^^^^^^^^^^^^^\n\nSo, you see that this idea came not to Oracle only...\n\nI don't object against DDLs inside BEGIN/END.\nI just mean that it's not required by standard.\nIf someone is ready to fix this area - welcome.\n\nVadim\nP.S. Is DROP TABLE rollback-able in Informix, Andreas?\n",
"msg_date": "Fri, 26 Nov 1999 17:12:01 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
}
] |
[
{
"msg_contents": "> So, you see that this idea came not to Oracle only...\n> \n> I don't object against DDLs inside BEGIN/END.\n\nYes, I know. All I object against is, that a DDL statement commits \nmy previous update/insert/delete statements.\n\n> I just mean that it's not required by standard.\n> If someone is ready to fix this area - welcome.\n\nimho it is ok, to disallow ddl inside transaction, \nuntil somebody fixes rollback.\n\n> \n> Vadim\n> P.S. Is DROP TABLE rollback-able in Informix, Andreas?\n\nYes.\n\nAndreas \n",
"msg_date": "Fri, 26 Nov 1999 11:29:44 +0100",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
}
] |
[
{
"msg_contents": "Peter, is there a way from pgsql to show if an index is unique? It\nwould be nice.\n\n\ttest=# create table x(y int);\n\tCREATE\n\ttest=# create unique index xx on x(y);\n\tCREATE\n\ttest=# \\d xx\n\t Table \"xx\"\n\t Attribute | Type | Info \n\t-----------+------+------\n\t y | int4 | \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 26 Nov 1999 12:46:36 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] A bag of psql goodies"
},
{
"msg_contents": "On 1999-11-26, Bruce Momjian mentioned:\n\n> Peter, is there a way from pgsql to show if an index is unique? It\n> would be nice.\n\nWorks here:\n(I think this was part of the last patch.)\n\nplay=> \\d baaz\n Table \"baaz\"\n Attribute | Type | Extra \n-----------+------+----------\n a | int4 | not null\nIndex: baaz_pkey\nRule: baaz_rule\n\nplay=> \\d baaz_pkey\nIndex \"baaz_pkey\"\n Attribute | Type \n-----------+------\n a | int4\nunique btree (primary key)\n\nplay=> \\d bar \n Table \"bar\"\n Attribute | Type | Extra \n-----------+------+-------\n a | int4 | \n b | text | \nIndices: barindex,\n barunique\nConstraints: a > 0\n b IN ( 'yes' , 'no' )\n\nplay=> \\d barindex\nIndex \"barindex\"\n Attribute | Type \n-----------+------\n a | int4\nbtree\n\nplay=> \\d barunique\nIndex \"barunique\"\n Attribute | Type \n-----------+------\n a | int4\n b | text\nunique btree\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sun, 28 Nov 1999 16:14:19 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] A bag of psql goodies"
},
{
"msg_contents": "\nThanks. I see it now.\n\n\n[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> On 1999-11-26, Bruce Momjian mentioned:\n> \n> > Peter, is there a way from pgsql to show if an index is unique? It\n> > would be nice.\n> \n> Works here:\n> (I think this was part of the last patch.)\n> \n> play=> \\d baaz\n> Table \"baaz\"\n> Attribute | Type | Extra \n> -----------+------+----------\n> a | int4 | not null\n> Index: baaz_pkey\n> Rule: baaz_rule\n> \n> play=> \\d baaz_pkey\n> Index \"baaz_pkey\"\n> Attribute | Type \n> -----------+------\n> a | int4\n> unique btree (primary key)\n> \n> play=> \\d bar \n> Table \"bar\"\n> Attribute | Type | Extra \n> -----------+------+-------\n> a | int4 | \n> b | text | \n> Indices: barindex,\n> barunique\n> Constraints: a > 0\n> b IN ( 'yes' , 'no' )\n> \n> play=> \\d barindex\n> Index \"barindex\"\n> Attribute | Type \n> -----------+------\n> a | int4\n> btree\n> \n> play=> \\d barunique\n> Index \"barunique\"\n> Attribute | Type \n> -----------+------\n> a | int4\n> b | text\n> unique btree\n> \n> -- \n> Peter Eisentraut Sernanders v_g 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n> \n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Nov 1999 22:28:45 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] A bag of psql goodies"
}
] |
[
{
"msg_contents": "\n>Tom Lane <[email protected]>\n>\n>Keith Parks <[email protected]> writes:\n>> This is due to 2 problems.\n>> 1) The awk script is broken over 2 lines.\n>> 2) Solaris's awk does not seem to understand REs in split(). (nawk's OK)\n>\n>I don't think so --- the exact same split() construct is there in the\n>old regress.sh test driver, same as it ever was. I can believe the line\n>break is a problem, but if there's another problem then you need to\n>keep digging.\n>\n>Comparing the two scripts, I wonder if Jan broke it by adding '/bin/sh'\n>to the invocation of config.guess. Doesn't seem very likely, but...\n>\n\nUnfortunately, the old script was broken too. It's just that\nI'd not noticed, as I'd included a patch to change awk to nawk in\nmy auto build script.\n\nI tried different permutations of quotes, brackets n'all last \nnight and came to the conclusion that it couldn't be done with\nsolaris awk. (I'm not very patient though ;-( )\n\nanyone for sed and tr ?\n\n",
"msg_date": "Fri, 26 Nov 1999 18:16:12 +0000 (GMT)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] run_check problem "
},
{
"msg_contents": "> I tried different permutations of quotes, brackets n'all last \n> night and came to the conclusion that it couldn't be done with\n> solaris awk. (I'm not very patient though ;-( )\n> \n> anyone for sed and tr ?\n\nSure. Just don't use any non-standard tr character classes like :alpha:.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 26 Nov 1999 13:49:48 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] run_check problem"
}
] |
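For what it's worth, the split-on-regexp that classic Solaris awk chokes on can be emulated with sed and tr alone, honoring Bruce's caveat about non-standard tr classes. A minimal sketch, not the actual run_check code; the triplet value is just an invented example:

```shell
# Sketch: pull the OS name and version out of a config.guess-style
# triplet using only sed and tr with explicit ranges (no :alpha:),
# avoiding awk's split(s, a, /regexp/), which classic Solaris awk lacks.
triplet="sparc-sun-solaris2.6"

osfield=`echo "$triplet" | sed 's/.*-//'`   # last field: "solaris2.6"
osname=`echo "$osfield" | tr -d '0-9.'`     # letters only: "solaris"
osver=`echo "$osfield" | tr -d 'a-z'`       # digits/dots only: "2.6"

echo "$osname $osver"
```

With the example triplet this prints `solaris 2.6`; backticks rather than `$( )` keep it palatable to the old Bourne shells the regression driver had to run under.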
[
{
"msg_contents": "> Tom Lane <[email protected]> writes:\n> Mike Mascari <[email protected]> writes:\n> > Well, I agree that it would be GREAT to be able to rollback DDL\n> > statements. However, at the moment, failures during a transaction\nwhile\n> > DDL statements occur usually require direct intervention by the user\n(in\n> > the case of having to drop/recreate indexes) and often require the\nservices\n> > of the DBA, if filesystem intervention is necessary (i.e., getting rid\nof\n> > partially dropped/created tables and their associated filesystem\n> > files).\n> \n> And forced commit after the DDL statement completes will improve that\n> how?\n\nBecause 99% of the instances of index and data corruption I've seen\nhave come from rolled-back DDL statements - usually because the on\ndisk representation no longer matches the system catalogue. A forced \ncommit on DDL changes against tables and indexes with access \nexclusive locks will make that operation as close to \"atomic\" as\npossible...\n\n> As a user I'd be pretty unhappy if \"SELECT ... INTO\" suddenly became\n> \"COMMIT; SELECT; BEGIN\". Not only would that mean that updates made\n> by my transaction would become visible prematurely, but it might also\n> mean that the SELECT retrieves results it should not (ie, results from\n> xacts that were not committed when my xact started). Both of these\n> things could make my application logic fail in hard-to-find, hard-to-\n> reproduce-except-under-load ways.\n\nWhat does ORACLE do here?\n\n> \n> So, although implicit commit might look like a convenient workaround at\n> the level of Postgres itself, it'd be a horrible loss of reliability\n> at the application level. I'd rather go with #1 (hard error) than\n> risk introducing transactional bugs into applications that use Postgres.\n> \n> \n> > Since ORACLE has 70% of the RDBMS market, it is the de facto standard\n> \n> Yes, and Windows is the de facto standard operating system. I don't use\n> Windows, and I'm not willing to follow Oracle's lead when they make a\n> bad decision...\n> \n> \t\t\tregards, tom lane\n\nSo I guess I should file away my other suggestion to use DCOM as \nthe object technology of choice instead of CORBA? ;-)\n\n\n\n\n\n",
"msg_date": "Fri, 26 Nov 1999 14:48:20 -0500",
"msg_from": "\"Mike Mascari\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions "
},
{
"msg_contents": "On Fri, 26 Nov 1999, Mike Mascari wrote:\n> > Tom Lane <[email protected]> writes:\n\n> What does ORACLE do here?\n\n> > > Since ORACLE has 70% of the RDBMS market, it is the de facto standard\n> > \n> > Yes, and Windows is the de facto standard operating system. I don't use\n> > Windows, and I'm not willing to follow Oracle's lead when they make a\n> > bad decision...\n\n> So I guess I should file away my other suggestion to use DCOM as \n> the object technology of choice instead of CORBA? ;-)\n\nThis is a Free Software project -- PostgreSQL is not bound by the decisions of\nthe 'market leader' any more than Linux is bound by the standards of Microsoft.\n\nHaving said that, at the same time, a run-time option to mimic Oracle's\nbehavior might be useful to all of those folk who are trying to port Oracle\napplications over to PostgreSQL -- particularly if the SQL is compiled in.\n\nHowever, someone who is interested in such an option will probably have to\nimplement it as well, as it certainly appears to not be a priority issue at\nthis point.\n\nIn cases where Oracle diverges from the SQL-92 or SQL3 standards, should we go\n'standard' -- or go 'non-standard' -- the choice should be clear.\n\nWe are not competing directly against Oracle, AFAICT -- we serve a different\nrole altogether.\n\nAnd I say that while I want an Oracle-specific application to run under\nPostgreSQL.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 26 Nov 1999 15:04:20 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
}
] |
[
{
"msg_contents": "> From: Lamar Owen <[email protected]>\n> On Fri, 26 Nov 1999, Mike Mascari wrote:\n> > > Tom Lane <[email protected]> writes:\n> \n> > What does ORACLE do here?\n> \n> > > > Since ORACLE has 70% of the RDBMS market, it is the de facto\nstandard\n> > > \n> > > Yes, and Windows is the de facto standard operating system. I don't\nuse\n> > > Windows, and I'm not willing to follow Oracle's lead when they make a\n> > > bad decision...\n> \n> > So I guess I should file away my other suggestion to use DCOM as \n> > the object technology of choice instead of CORBA? ;-)\n> \n> This is a Free Software project -- PostgreSQL is not bound by the\ndecisions of\n> the 'market leader' any more than Linux is bound by the standards of\nMicrosoft.\n\nThe DCOM remark was just a joke ;-). My remark concerning ORACLE was in\nresponse to Andreas' comment that implicit COMMITs of DDL statements was\nabsurd. I wanted to simply point out that, since ORACLE has 70% market\nshare,\nmost corporate database developers EXPECT their DDL statements to commit\ntheir transactions (if they've RTFM). I also pointed out that it would be\nGREAT \nif PostgreSQL could successfully rollback DDL statements sanely (and thus \ndiverge from ORACLE). I guess I don't expect that to happen successfully\nuntil \nsomething the equivalent of TABLESPACES is implemented and there is a \ndisassociation between table names, index names and their filesystem \ncounterparts and to be able to \"undo\" filesystem operations. That, it seems\nto \nme, will be a major undertaking and not going to happen any time soon...\n\nI'll stop swinging at windmills now...\n\nMike Mascari\n\n\n",
"msg_date": "Fri, 26 Nov 1999 15:32:04 -0500",
"msg_from": "\"Mike Mascari\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "On Fri, 26 Nov 1999, Mike Mascari wrote:\n> The DCOM remark was just a joke ;-). My remark concerning ORACLE was in\n> response to Andreas' comment that implicit COMMITs of DDL statements was\n> absurd. I wanted to simply point out that, since ORACLE has 70% market\n> share,\n\nI did not see the response to Andreas, nor did I see Andreas' assertion that it\nwas absurd. My apologies.\n\n> counterparts and to be able to \"undo\" filesystem operations. That, it seems\n> to \n> me, will be a major undertaking and not going to happen any time soon...\n\nYes, that is true. As long as the storage manager relies on the filesystem for\ntable names, this will be a problem, unless filesystem deletions are delayed\nuntil COMMIT, and filesystem creates are undone at a ROLLBACK.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 26 Nov 1999 16:51:55 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "> if PostgreSQL could successfully rollback DDL statements sanely (and thus \n> diverge from ORACLE). I guess I don't expect that to happen successfully\n> until \n> something the equivalent of TABLESPACES is implemented and there is a \n> disassociation between table names, index names and their filesystem \n> counterparts and to be able to \"undo\" filesystem operations. That, it seems\n> to \n> me, will be a major undertaking and not going to happen any time soon...\n\nIngres has table names that don't match on-disk file names, and it is a\npain to administer because you can't figure out what is going on at the\nfile system level. Table files have names like AAAHFGE.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Nov 1999 00:13:59 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > if PostgreSQL could successfully rollback DDL statements sanely (and thus\n> > diverge from ORACLE). I guess I don't expect that to happen successfully\n> > until\n> > something the equivalent of TABLESPACES is implemented and there is a\n> > disassociation between table names, index names and their filesystem\n> > counterparts and to be able to \"undo\" filesystem operations. That, it seems\n> > to\n> > me, will be a major undertaking and not going to happen any time soon...\n> \n> Ingres has table names that don't match on-disk file names, and it is a\n> pain to administer because you can't figure out what is going on at the\n> file system level. Table files have names like AAAHFGE.\n\nI have to say that I'm going to change on-disk database/table/index \nfile names to _OID_! This is required by WAL because of inside of \nlog records there will be just database/table/index oids, not names, \nand after crash recovery will not be able to read pg_class to get \ndatabase/table/index name using oid ...\n\nVadim\n",
"msg_date": "Mon, 29 Nov 1999 12:59:22 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "> I have to say that I'm going to change on-disk database/table/index \n> file names to _OID_! This is required by WAL because of inside of \n> log records there will be just database/table/index oids, not names, \n> and after crash recovery will not be able to read pg_class to get \n> database/table/index name using oid ...\n\nWow, that is a major pain. Anyone else think so?\n\nUsing oid's instead of names may give us some ability to fix some other\nbugs, though.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Nov 1999 01:09:41 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > I have to say that I'm going to change on-disk database/table/index\n> > file names to _OID_! This is required by WAL because of inside of\n> > log records there will be just database/table/index oids, not names,\n> > and after crash recovery will not be able to read pg_class to get\n> > database/table/index name using oid ...\n> \n> Wow, that is a major pain. Anyone else think so?\n\nWhy it's so painful? \nWe can write utility to construct database dir with table names\nsymlinked to real table files -:)\nActually, I don't understand \nfor what would you need to know what is what, (c) -:)\n\n> Using oid's instead of names may give us some ability to fix some other\n> bugs, though.\n\nYes.\n\nVadim\n",
"msg_date": "Mon, 29 Nov 1999 13:52:25 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > > I have to say that I'm going to change on-disk database/table/index\n> > > file names to _OID_! This is required by WAL because of inside of\n> > > log records there will be just database/table/index oids, not names,\n> > > and after crash recovery will not be able to read pg_class to get\n> > > database/table/index name using oid ...\n> > \n> > Wow, that is a major pain. Anyone else think so?\n> \n> Why it's so painful? \n> We can write utility to construct database dir with table names\n> symlinked to real table files -:)\n> Actually, I don't understand \n> for what would you need to know what is what, (c) -:)\n\nWith Ingres, you can't just look at a file and know the table name, and\nif you need to reload just one file from a tape, it is a royal pain to\nknow which file to bring back. I have said Ingres make things 100 times\nharder for adminstrators by doing this.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Nov 1999 02:11:57 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I have to say that I'm going to change on-disk database/table/index \n>> file names to _OID_! This is required by WAL because of inside of \n>> log records there will be just database/table/index oids, not names, \n>> and after crash recovery will not be able to read pg_class to get \n>> database/table/index name using oid ...\n\n> Wow, that is a major pain. Anyone else think so?\n> Using oid's instead of names may give us some ability to fix some other\n> bugs, though.\n\nYes, and yes. I've been trying to nerve myself to propose that, because\nit seems the only reasonable way to make rollback of RENAME TABLE and\nDROP TABLE work safely. It'll be a pain in the neck for debugging and\nadmin purposes though.\n\nCan we make some sort of usually-correct-but-not-guaranteed-correct\ndump that shows which corresponds to what? Maybe something similar\nto the textfile dump of pg_shadow that the postmaster uses for password\nauthentication? Then at least you'd have some shot at figuring out\nwhich file was what in extremis...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Nov 1999 02:33:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions "
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > >\n> > > Wow, that is a major pain. Anyone else think so?\n> >\n> > Why it's so painful?\n> > We can write utility to construct database dir with table names\n> > symlinked to real table files -:)\n> > Actually, I don't understand\n> > for what would you need to know what is what, (c) -:)\n> \n> With Ingres, you can't just look at a file and know the table name, and\n> if you need to reload just one file from a tape, it is a royal pain to\n> know which file to bring back. I have said Ingres make things 100 times\n> harder for adminstrators by doing this.\n\nMoving table file to/off database dir separately is not right way for\nbackup/restore...\n\nOn-line/off-line full backup utility will copy _all_ database files to\n_somewhere_ (tape etc) as well as on-line transaction logs\nand pg_control (to know when was the last checkpoint made).\nAnd to restore things after disk failure administrator will\nhave to copy _all_ files + logs (+logs made as incremental backup)\n+ pg_control back and start postmaster.\n\nVadim\n",
"msg_date": "Mon, 29 Nov 1999 14:55:15 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> >> I have to say that I'm going to change on-disk database/table/index\n> >> file names to _OID_! This is required by WAL because of inside of\n> >> log records there will be just database/table/index oids, not names,\n> >> and after crash recovery will not be able to read pg_class to get\n> >> database/table/index name using oid ...\n> \n> > Wow, that is a major pain. Anyone else think so?\n> > Using oid's instead of names may give us some ability to fix some other\n> > bugs, though.\n> \n> Yes, and yes. I've been trying to nerve myself to propose that, because\n> it seems the only reasonable way to make rollback of RENAME TABLE and\n> DROP TABLE work safely. It'll be a pain in the neck for debugging and\n> admin purposes though.\n\nSo, no more nerves needed, Tom, yeh? -:)\nIt would be nice if someone else, not me, implement this...\n\n> Can we make some sort of usually-correct-but-not-guaranteed-correct\n> dump that shows which corresponds to what? Maybe something similar\n> to the textfile dump of pg_shadow that the postmaster uses for password\n> authentication? Then at least you'd have some shot at figuring out\n> which file was what in extremis...\n\nAs it was proposed - utility to create dir with database name\n(in addition to dir with database oid where data really live)\nand symlinks there: table_name --> ../db_oid/table_oid\n\nVadim\n",
"msg_date": "Mon, 29 Nov 1999 15:00:44 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
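The symlink directory Vadim sketches could be built by a small external helper; the following is only an illustration of the idea, with made-up paths, a made-up oid, and an invented "oid name" map format standing in for whatever would actually be dumped from pg_class:

```shell
# Sketch of the proposed helper: alongside the real per-oid database
# directory (db_oid/table_oid), build a directory named after the
# database whose entries are table-name symlinks to the per-oid files.
# All paths, oids, and the map file format here are hypothetical.
datadir="./pgdata"
db_oid=18721
mkdir -p "$datadir/$db_oid" "$datadir/mydb"

echo "table data" > "$datadir/$db_oid/18745"   # stand-in table file
echo "18745 mytable" > "$datadir/mydb.map"     # one "oid name" per line

while read oid name; do
    ln -sf "../$db_oid/$oid" "$datadir/mydb/$name"
done < "$datadir/mydb.map"

cat "$datadir/mydb/mytable"                    # reads through the symlink
```

As Tom notes later in the thread, such a mapping is only trustworthy at steady state; the helper would have to be re-run after any DDL to stay accurate.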
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n> So, no more nerves needed, Tom, yeh? -:)\n> It would be nice if someone else, not me, implement this...\n\nUm, I've got more than enough on my plate already...\n\n>> Can we make some sort of usually-correct-but-not-guaranteed-correct\n>> dump that shows which corresponds to what? Maybe something similar\n>> to the textfile dump of pg_shadow that the postmaster uses for password\n>> authentication? Then at least you'd have some shot at figuring out\n>> which file was what in extremis...\n\n> As it was proposed - utility to create dir with database name\n> (in addition to dir with database oid where data really live)\n> and symlinks there: table_name --> ../db_oid/table_oid\n\nI saw your message about that after sending mine. Yes, that'd be\na cool way of displaying the relationship. But the main thing to\nremember is that it'd only be correct at steady-state when nothing\nis being changed. If we tried to guarantee the mapping was correct\n100% of the time, we'd be back to square one. Of course, that\nmakes the whole thing somewhat less useful for debugging purposes,\nsince Murphy's Law says that the times you really need to know\nwhat's what are just when the system crashed in the middle of\na table rename ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Nov 1999 03:10:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions "
},
{
"msg_contents": "> > Wow, that is a major pain. Anyone else think so?\n> > Using oid's instead of names may give us some ability to fix some other\n> > bugs, though.\n> \n> Yes, and yes. I've been trying to nerve myself to propose that, because\n> it seems the only reasonable way to make rollback of RENAME TABLE and\n> DROP TABLE work safely. It'll be a pain in the neck for debugging and\n> admin purposes though.\n\nI look at this and question the value of allowing such fancy things vs.\nthe ability to look at the directory and know exactly what table is\nwhich file. Maybe we can use file names like 23423_mytable where 23423\nis the table oid and mytable is the table name. That way, we can know\nthe table, and they are unique too to allow RENAME TABLE to work.\n\nThis doesn't solve Vadim's problem. His additional work would be to\nwrite a line to the log file for each table create/delete saying I\ndeleted this table with this oid, and when reading back the log, he has\nto record the oid_tablename combination and use that to translate his log\noids into actual filenames.\n\nIn fact, doesn't that information already appear in the WAL log as part\nof pg_class changes? Or is the problem that the table changes happen\nbefore the pg_class is committed?\n\n\n> Can we make some sort of usually-correct-but-not-guaranteed-correct\n> dump that shows which corresponds to what? Maybe something similar\n> to the textfile dump of pg_shadow that the postmaster uses for password\n> authentication? Then at least you'd have some shot at figuring out\n> which file was what in extremis...\n\nThat is OK, and a possible workaround if the above idea is not good.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Nov 1999 14:08:41 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "> Moving table file to/off database dir separately is not right way for\n> backup/restore...\n> \n> On-line/off-line full backup utility will copy _all_ database files to\n> _somewhere_ (tape etc) as well as on-line transaction logs\n> and pg_control (to know when was the last checkpoint made).\n> And to restore things after disk failure administrator will\n> have to copy _all_ files + logs (+logs made as incremental backup)\n> + pg_control back and start postmaster.\n\nNo, I am talking about restoring a single table without doing the entire\ndatabase. If you recreate the table empty with the same structure,\nshutdown db, mv the restored table file into the data directory and restart,\nthe table now has its old contents.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Nov 1999 14:10:20 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "> > Can we make some sort of usually-correct-but-not-guaranteed-correct\n> > dump that shows which corresponds to what? Maybe something similar\n> > to the textfile dump of pg_shadow that the postmaster uses for password\n> > authentication? Then at least you'd have some shot at figuring out\n> > which file was what in extremis...\n> \n> As it was proposed - utility to create dir with database name\n> (in addition to dir with database oid where data really live)\n> and symlinks there: table_name --> ../db_oid/table_oid\n\nThat's interesting, but I am concerned about the extra overhead of\ncreating two links for every file.\n\nThe other issue is if the table is accidentally dropped, how do you use\nthat utility to know the oid of the table that was removed?\n\nI guess I like the OID_tablename idea.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Nov 1999 14:12:04 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "> As it was proposed - utility to create dir with database name\n> (in addition to dir with database oid where data really live)\n> and symlinks there: table_name --> ../db_oid/table_oid\n\nIn fact, let me change what I suggested. Instead of 3434_mytable, I\nsuggest mytable_3434 so that the tables even appear in alphabetical\norder in the directory. The _ may be the wrong character to separate\ntablename from oid. Not sure, but we may need to use something that\ncan't be used in sql like mytable+234.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Nov 1999 14:13:36 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
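Either half of such a `name_oid` file name is trivially recoverable in shell, even when the table name itself contains the separator, because the oid is always the part after the *last* underscore. A sketch with invented values:

```shell
# Sketch: split a "<table>_<oid>" file name back into its parts.
# Splitting on the last underscore keeps table names containing
# underscores (e.g. my_table) unambiguous, since oids are numeric.
f="my_table_3434"
oid=${f##*_}     # everything after the last "_"  -> the oid
name=${f%_*}     # everything before it           -> the table name

echo "$name $oid"
```

With the example value this prints `my_table 3434`, which suggests a plain `_` separator would already be workable for scripts, while still leaving the directory listing human-readable.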
{
"msg_contents": "Hi all,\n\nI propose here that we stop the release of lock before end of transaction.\nI have been suffering from the early release of lock.\n\nComments ?\n\nIf we don't allow DDL commands inside transaction blocks, we won't need\nthe release before end of transaction.\nIf we allow DDL commands inside transaction blocks, it may be a problem.\nBut are there any other principles which could guarantee consistency ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 30 Nov 1999 09:34:03 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> I propose here that we stop the release of lock before end of transaction.\n> I have been suffering from the early release of lock.\n\nNow that read and write locks don't interfere with each other, this may\nnot be as big a performance loss as it sounds.\n\nBut, do you intend to apply this principle to system tables as well as\nuser tables? I am concerned that we will have deadlock problems if we\ntry to do it for system tables, because practically all transactions\nwill start out with system-table accesses, which implies grabbing a read\nlock on those system tables if you want to take a hard line about it.\nIf you later need to upgrade to a higher-grade lock on any of those\nsystem tables, you've got trouble.\n\nThere is another issue I've been thinking about that seems to require\nsome amount of lock-releasing within a transaction, too. Specifically,\nI'd like to see the parser grab a minimal lock (AccessShareLock) on each\ntable referenced in a query as soon as it recognizes the table name.\nThe rewriter would also have to lock each table that it adds into the\nquery due to rules. This would prevent problems that we have now with\nALTER TABLE running in parallel with parsing/planning of a query.\n\nBut, many queries require more than AccessShareLock on their tables.\nIf we simply try to grab the higher-grade locks without releasing\nAccessShareLock, we will certainly suffer deadlock.\n\nIf anyone's having a hard time seeing why lock upgrade is dangerous,\nconsider two backends trying at about the same time to do\n\tBEGIN; LOCK TABLE foo; etc etc\nsince this can happen:\n\tBackend A's parser recognizes 'foo', grabs AccessShareLock on foo\n\tBackend B's parser recognizes 'foo', grabs AccessShareLock on foo\n\tBackend A's executor tries to get AccessExclusiveLock on foo,\n\t\tmust wait for B\n\tBackend B's executor tries to get AccessExclusiveLock on foo,\n\t\tmust wait for A\n\nSo I think the real solution must go something like this:\n\n* Parser and rewriter grab AccessShareLock on each table as it is added\nto the query.\n* At start of planner, all tables and required access rights are known.\nRelease AccessShareLocks, then grab required lock levels on each table.\nWe probably want to error out if any DDL alteration has actually occurred\nto any of the tables by the time we re-acquire its lock.\n\nAn easy improvement on this is to avoid the drop/grab if AccessShareLock\nis the only thing needed on each table (as in a SELECT). We could\nfurther try to extend the parser so that it grabs a sufficient lock\non each table initially --- that's probably easy enough for INSERT\ntarget tables and so forth, but we cannot guarantee that it will be\npossible in every case. (Consider rule rewrites that add actions\nnot present in the initial query.)\n\nCan you see a way to solve this problem without dropping/grabbing locks?\n\n> If we don't allow DDL command inside transaction block,we won't need\n> the release before end of transaction.\n> If we allow DDL command inside transaction block,it may be a problem.\n> But are there any other principles which could guarantee consistency ?\n\nI certainly do not wish to give up the goal of supporting DDL statements\ninside transactions. What problems do you foresee?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Nov 1999 22:59:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions "
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > I propose here that we stop the release of lock before end of \n> transaction.\n> > I have been suffering from the early release of lock.\n> \n> Now that read and write locks don't interfere with each other, this may\n> not be as big a performance loss as it sounds.\n> \n> But, do you intend to apply this principle to system tables as well as\n> user tables? \n\nYes, I meant only system tables.\nIsn't an early release of lock for user tables already a bug? (except\nAccessShareLock). \n\n> I am concerned that we will have deadlock problems if we\n> try to do it for system tables, because practically all transactions\n> will start out with system-table accesses, which implies grabbing a read\n> lock on those system tables if you want to take a hard line about it.\n> If you later need to upgrade to a higher-grade lock on any of those\n> system tables, you've got trouble.\n>\n\nSorry, my target is only the executor stage this time and AccessShareLock \nis an exception.\n\nAs for the parser/planner stage, it needs further consideration and I don't\nhave any solution yet. SPI already has an ability to prepare plans and\nthe executor could use them in other transactions. We have to draw a\nclear line between executor and parser/planner. Probably a plan invalidation\nmechanism will be needed and we may have to put plans on shared\nmemory to realize it.\n\n> \n> > If we don't allow DDL command inside transaction block,we won't need\n> > the release before end of transaction.\n> > If we allow DDL command inside transaction block,it may be a problem.\n> > But are there any other principles which could guarantee consistency ?\n> \n> I certainly do not wish to give up the goal of supporting DDL statements\n> inside transactions. 
What problems do you foresee?\n>\n\nI may be the first person who raised a question about DDL commands inside\ntransactions. I'm still suspicious about the possibility.\nI have thought about the following. I think they should be considered \neven in case of DDL commands *outside* transactions.\n\n1) The biggest obstacle for me is this early release of lock (including \n   parser/planner handling). Without a solution for this I couldn't see\n   any consistency for system tuples.\n2) The implementation of row level share locking.\n3) The naming of relation files which has been discussed in this thread.\n4) The lack of read consistency in DDL statements though I couldn't \n   tell concretely what's wrong with it. \n\nRegards. \n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 30 Nov 1999 17:17:59 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: [GENERAL] drop/rename table and transactions "
},
{
"msg_contents": "> I propose here that we stop the release of lock before end of transaction.\n> I have been suffering from the early release of lock.\n> \n> Comments ?\n\nIf there's no objection,I would change UnlockRelation() to not release \nthe specified lock except AccessShareLock.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Wed, 1 Dec 1999 18:57:42 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> > I propose here that we stop the release of lock before end of transaction.\n> > I have been suffering from the early release of lock.\n> >\n> > Comments ?\n> \n> If there's no objection,I would change UnlockRelation() to not release\n> the specified lock except AccessShareLock.\n\nWhy don't remove this call from improper places?\nI would try to find all calls and understand why\nthey made...\n\nVadim\n",
"msg_date": "Wed, 01 Dec 1999 16:59:58 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "\n\nVadim Mikheev wrote:\n\n> Hiroshi Inoue wrote:\n> >\n> > > I propose here that we stop the release of lock before end of transaction.\n> > > I have been suffering from the early release of lock.\n> > >\n> > > Comments ?\n> >\n> > If there's no objection,I would change UnlockRelation() to not release\n> > the specified lock except AccessShareLock.\n>\n> Why don't remove this call from improper places?\n> I would try to find all calls and understand why\n> they made...\n>\n\nI think UnlockRelation() is unnecessary\n\nOracle doesn't have\n\n>\n> Vadim\n\n",
"msg_date": "Wed, 01 Dec 1999 19:11:25 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> > Why don't remove this call from improper places?\n> > I would try to find all calls and understand why\n> > they made...\n> >\n> \n> I think UnlockRelation() is unnecessary\n> \n> Oracle doesn't have\n\nAnd we don't have an UNLOCK TABLE _command_ either -:)\nBut a func call is an internal thing and I don't know\nOracle internals.\nIf this call is unnecessary - get rid of it at all...\n\nVadim\n",
"msg_date": "Wed, 01 Dec 1999 17:28:36 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "Oops, sorry, I sent a draft by mistake.\n\nVadim Mikheev wrote:\n\n> Hiroshi Inoue wrote:\n> >\n> > > I propose here that we stop the release of lock before end of transaction.\n> > > I have been suffering from the early release of lock.\n> > >\n> > > Comments ?\n> >\n> > If there's no objection,I would change UnlockRelation() to not release\n> > the specified lock except AccessShareLock.\n>\n> Why don't remove this call from improper places?\n> I would try to find all calls and understand why\n> they made...\n>\n\nI was surprised that few people really want DDL commands inside transactions.\nAre there any reasons to release locks before end of transaction except\nthat a long term lock for system tuples is not preferable ?\n\nI think that UnlockRelation() is fundamentally unnecessary.\nMine is the simplest way to achieve this.\nIf there's no problem, I am glad to remove UnlockRelation() calls.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n\n",
"msg_date": "Wed, 01 Dec 1999 19:37:18 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> > >\n> > > If there's no objection,I would change UnlockRelation() to not release\n> > > the specified lock except AccessShareLock.\n> >\n> > Why don't remove this call from improper places?\n> > I would try to find all calls and understand why\n> > they made...\n> >\n> \n> I was surprized that few people really want DDL commands inside transactions.\n> Are there any reasons to releasing lock before end of transaction except\n> that long term lock for system tuples is not preferable ?\n> \n> I think that UnlockRelation() is unnecessary fundamentally.\n> Mine is the simplest way to achieve this.\n> If there's no problem,I am glad to remove UnlockRelation() calls.\n\nThere are! I finally found where I used UnlockRelation() -\nin execUtils.c:ExecCloseIndices(). Please read comments in\nExecOpenIndices() where LockRelation() is called...\n\nVadim\n",
"msg_date": "Wed, 01 Dec 1999 17:49:47 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "Vadim Mikheev wrote:\n\n> Hiroshi Inoue wrote:\n> >\n> > > >\n> > > > If there's no objection,I would change UnlockRelation() to not release\n> > > > the specified lock except AccessShareLock.\n> > >\n> > > Why don't remove this call from improper places?\n> > > I would try to find all calls and understand why\n> > > they made...\n> > >\n> >\n> > I was surprized that few people really want DDL commands inside transactions.\n> > Are there any reasons to releasing lock before end of transaction except\n> > that long term lock for system tuples is not preferable ?\n> >\n> > I think that UnlockRelation() is unnecessary fundamentally.\n> > Mine is the simplest way to achieve this.\n> > If there's no problem,I am glad to remove UnlockRelation() calls.\n>\n> There are! I finally found where I used UnlockRelation() -\n> in execUtils.c:ExecCloseIndices(). Please read comments in\n> ExecOpenIndices() where LockRelation() is called...\n\nI see it now.\n\nHmm,index itself doesn't have its time qualification and is out of\ntransaction control(at least now).\n\nOK,I would examine it one by one.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Wed, 01 Dec 1999 21:24:29 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> I propose here that we stop the release of lock before end of transaction.\n>> I have been suffering from the early release of lock.\n\n> If there's no objection,I would change UnlockRelation() to not release \n> the specified lock except AccessShareLock.\n\nAfter thinking about this some more, I am not convinced that it will buy\nanything --- and it might possibly break things that work now. The\nreason I'm not convinced about it is that we cannot apply the \"don't\nrelease locks till end of transaction\" rule perfectly uniformly. You\nalready propose not to treat AccessShareLock that way, and Vadim seems\nto think there will be other cases where we need to break the rule.\nSo we won't have a theoretically-clean situation anyway, and will have\nto look at things case by case.\n\nCan you give specific examples of cases that will be fixed?\n\nFor the most part I believe that we effectively protect updates to\nsystem-table rows by holding AccessExclusiveLock on the associated user\nrelation. Locking the system table is just a means of preventing VACUUM\nfrom running concurrently on the system table (and possibly moving the\ntuple we want to update/delete). So I think releasing the system-table\nlock is OK as long as we hold the user table lock till end of\ntransaction. VACUUM works fine with uncommitted tuples --- maybe we\nshould turn off its warning about them, at least in system relations?\n\nThere might be places where we are failing to hold user table locks long\nenough, but those are just localized bugs and ought to be treated that way.\n\nIn any case, I do not think it's a good idea to put such a quick hack\nin UnlockRelation(). UnlockRelation() should do what it's told. If we\nwant to do this, we should go around and change the heap_close() calls\nto specify NoLock instead of whatever locks they specify now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Dec 1999 11:15:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions "
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> I propose here that we stop the release of lock before end of \n> transaction.\n> >> I have been suffering from the early release of lock.\n> \n> > If there's no objection,I would change UnlockRelation() to not release \n> > the specified lock except AccessShareLock.\n> \n> After thinking about this some more, I am not convinced that it will buy\n> anything --- and it might possibly break things that work now. The\n> reason I'm not convinced about it is that we cannot apply the \"don't\n> release locks till end of transaction\" rule perfectly uniformly. You\n\nWhy are we allowed to break 2 phase locking for system tables ?\nIsn't 2 phase locking a fundamental principle to preserve consistency ?\nIf there's another principle to rely on, please teach me.\n\n> already propose not to treat AccessShareLock that way, and Vadim seems\n\nIt seems to me that AccessShare(Exclusive)Lock is essentially for\nVACUUM. There's a possibility to remove AccessExclusiveLock\nexcept for VACUUM. Oracle has neither.\nIf AccessExclusiveLock is limited to VACUUM, AccessShareLock \ncould be held until end of transaction.\n\n> to think there will be other cases where we need to break the rule.\n> So we won't have a theoretically-clean situation anyway, and will have\n> to look at things case by case.\n>\n\nOK, case by case. I will be glad to check them one by one.\nIn fact, I have already excluded LockPage() because it is not \nused for transaction control.\n\nBut I have thought that the main purpose of early release of lock\nis to avoid long term lock for system tables. 
\nHave I misunderstood until now ?\n \n> Can you give specific examples of cases that will be fixed?\n>\n\nUnfortunately no example now.\n \n> For the most part I believe that we effectively protect updates to\n> system-table rows by holding AccessExclusiveLock on the associated user\n\nThere are system-table rows which don't have the associated user relations\nand there are many DDL commands except VACUUM.\nWe have to preserve consistency for system tuples among themselves,\ndon't we ?\n\n> relation. Locking the system table is just a means of preventing VACUUM\n> from running concurrently on the system table (and possibly moving the\n> tuple we want to update/delete). So I think releasing the system-table\n> lock is OK as long as we hold the user table lock till end of\n\nThis is needed for DDL command inside transactions,isn't it ?\nBut isn't walking on such a tightrope wasteful in order to realize DDL\ncommand inside transactions either ?\n\nAnyway,I want a decision here.\nI have already done a wasteful work in current spec about \"can neither\ndrop nor create\" bug. \n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Thu, 2 Dec 1999 09:56:19 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: [GENERAL] drop/rename table and transactions "
},
{
"msg_contents": "> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Tom Lane\n> >\n> > \"Hiroshi Inoue\" <[email protected]> writes:\n> > >> I propose here that we stop the release of lock before end of\n> > transaction.\n> > >> I have been suffering from the early release of lock.\n> >\n> > > If there's no objection,I would change UnlockRelation() to\n> not release\n> > > the specified lock except AccessShareLock.\n> >\n> > After thinking about this some more, I am not convinced that it will buy\n> > anything --- and it might possibly break things that work now. The\n> > reason I'm not convinced about it is that we cannot apply the \"don't\n> > release locks till end of transaction\" rule perfectly uniformly. You\n>\n> Why are we allowed to break 2 phase locking for system tables ?\n> Isn't 2 phase locking a fundamental principle to preserve consistency ?\n> If there's another principle to rely on,please teach me.\n>\n> > already propose not to treat AccessShareLock that way, and Vadim seems\n> > to think there will be other cases where we need to break the rule.\n> > So we won't have a theoretically-clean situation anyway, and will have\n> > to look at things case by case.\n> >\n>\n> OK case by case. I will be glad to check them one by one.\n\nI'm checking them for AccessExclusiveLock now.\n\nAs for RowExclusiveLock,it would be much effective to remove\nthe release of lock unconditionally.\nRowExclusiveLock isn't so intensive a lock. For example it\ndoesn't conflict relatively. 
Therefore it would be hard to find\nand resolve the bugs which are caused by the early release\nof RowExclusiveLock, deadlock etc ...\nConversely, holding the lock a little longer won't be so harmful.\n\nComments ?\n\nAs for AccessExclusiveLock I found the following so far.\n\n1) commands/user.c(CREATE/DROP/ALTER USER)\n   I could create the same 2 users easily.\n   The lock should be held till end of transaction.\n\n2) commands/cluster.c(CLUSTER)\n   It isn't properly implemented. It seems almost impossible\n   to implement CLUSTER command properly in current spec.\n\n3) commands/dbcommands.c(DROP DATABASE)\n   elog(ERROR) follows immediately. The release should\n   be removed.\n\n4) commands/sequence.c(CREATE SEQUENCE)\n   The lock is for the relation (sequence) being created.\n   Holding the lock till end of transaction has no problem.\n\n5) commands/vacuum.c(VACUUM)\n   The release is caused by a new security check. Probably\n   the check could be done before acquiring\n   AccessExclusiveLock.\n\n6) commands/commands.c(ALTER TABLE)\n   ALTER TABLE doesn't release AccessExclusiveLock till\n   end of transaction. I couldn't find any reason to allow\n   the release for inheritors of a relation class. The release\n   should be removed.\n\n7) commands/async.c(LISTEN/UNLISTEN)\n   This case seems dangerous too but unfortunately I couldn't\n   point out a concrete flaw now.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n\n\n\n\n",
"msg_date": "Sat, 4 Dec 1999 00:24:28 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: [GENERAL] drop/rename table and transactions "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> OK case by case. I will be glad to check them one by one.\n\n> I'm checking them for AccessExclusiveLock now.\n\n> As for RowExclusiveLock,it would be much effective to remove\n> the release of lock unconditionally.\n> RowExclusiveLock isn't so intensive a lock. For example it\n> doesn't conflict relatively. Therefore it would be hard to find\n> and resolve the bugs which are caused by the early release\n> of RowExclusiveLock,deadlock etc ...\n> Holding the lock a little longer won't be so harmful inversely.\n\nWe could try it and see, certainly. You are probably right that it\nwould not be harmful to hold it.\n\n> As for AccessExclusiveLock I found followings now.\n\n> 3) commands/dbcommands.c(DROP DATABASE)\n> elog(ERROR) follows immediately. The release should\n> be removed.\n\n> 5) commands/vacuum.c(VACUUM)\n> The release is caused by new security check. Probably\n> the check could be done before acquiring AccessExcl-\n> usiveLock.\n\nIn both of these cases, we are closing the system table unmodified,\nand AFAICT the point is just to release the lock a tad sooner than\nwe otherwise would. (I coded the VACUUM code just like code I'd seen\nelsewhere.) It's probably harmless either way.\n\n> 6) commands/commands.c(ALTER TABLE)\n> ALTER TABLE doesn't release AccessExclusiveLock till\n> end of transaction. I couldn't find any reason to allow\n> the release for inheritors of a relation class. The release\n> should be removed.\n\nYes, the recursive subroutine will grab the same lock and not release\nit. 
I'm not sure why it's coded the way it is, but certainly there's\nno benefit to releasing the lock earlier at the outer level.\n\n> 7) commands/async.c(LISTEN/UNLISTEN)\n> This case seems dangerous too but unfortunately I couldn't\n> point out a concrete flaw now.\n\nHolding the lock on pg_listener longer than absolutely necessary strikes\nme as very risky, since any other backend might need to grab the lock\nbefore it can complete its own transaction (in order to send or receive\nnotifies). Deadlock could ensue depending on what other locks are held.\n\nI think the locking logic for pg_listener was last revised for 6.4 (by me).\nIt seems to work fine as-is, but it doesn't know anything about MVCC.\nIt is possible that we could downgrade the locks to RowExclusiveLock and\nrely on MVCC semantics instead of a hard lock. This would require\ncareful study however, in particular to be sure that cross-backend\nnotifies couldn't be missed because of not-yet-committed tuples.\nI haven't had time to think about it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Dec 1999 18:33:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions "
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> OK case by case. I will be glad to check them one by one.\n> \n> > I'm checking them for AccessExclusiveLock now.\n> \n> > 7) commands/async.c(LISTEN/UNLISTEN)\n> > This case seems dangerous too but unfortunately I couldn't\n> > point out a concrete flaw now.\n> \n> Holding the lock on pg_listener longer than absolutely necessary strikes\n> me as very risky, since any other backend might need to grab the lock\n> before it can complete its own transaction (in order to send or receive\n> notifies). Deadlock could ensue depending on what other locks are held.\n>\n\nIt's difficult for me to find a flaw for this case.\nThere aren't so many conflicts. For example,LISTEN/UNLISTEN never\nconflict relatively because they could be issued only for its own backend.\nAnd as you say,it's very bad to hold the lock till end of transaction in\nLISTEN/UNLISTEN.\n \nRow level locking in MVCC may allow another(RowExclusiveLock?)\nlock instead of AccessExclusiveLock.\nI'm not sure now.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n \n",
"msg_date": "Sun, 5 Dec 1999 08:45:30 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: [GENERAL] drop/rename table and transactions "
},
{
"msg_contents": "\n\nIs there a PostgreSQL for win98 or win95?\n\n\n",
"msg_date": "Tue, 7 Dec 1999 12:16:20 +0800 (PST)",
"msg_from": "Chris Ian Capon Fiel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Postgresql in win9x"
},
{
"msg_contents": "Chris Ian Capon Fiel wrote:\n> \n> is there a PostgreSQL in win98 or win95?\n\nhttp://www.postgresql.org/docs/pgsql/doc/README.NT\n\nThe NT port will, AFAIK, run on any Win32 implementation, as long as you\nhave the Cygwin stuff loaded (talked about in the README.NT file....).\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 07 Dec 1999 16:00:39 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql in win9x"
}
] |
[
{
"msg_contents": "Just curious, is anyone working on referential integrity (foreign keys),\nor is it dead?\n\nSteve\n\n\n",
"msg_date": "Sat, 27 Nov 1999 10:25:39 -0800",
"msg_from": "Stephen Birch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Referential Integrity"
},
{
"msg_contents": ">\n> Just curious, is anyone working on referential integrity (foreign keys),\n> or is it dead?\n\n    Slow, but I'm still working on it.\n\n    Some people offered help, but no one picked up a little piece\n    up to now. Might turn out that I have to do it all myself.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 29 Nov 1999 14:11:12 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Referential Integrity"
}
] |
[
{
"msg_contents": "By popular demand, I have built and released a new RPM set. Here's the\nblurb:\n-------------\nAnnouncing the PostgreSQL 6.5.3-2 RPM set.\n\nThis RPM set is primarily a bugfix RPM set that fixes the following\nproblems and adds the following features:\n\n* Insecure permissions on /var/lib/pgsql (PGDATA) -- was set for 755,\nnow is 700; \n\n* Insecure permissions on /usr/lib/pgsql/backup -- was set for 755, now\nis 700 -- while this is nowhere near as severe a hole as the one with\nPGDATA, nonetheless any local user could access the backup copy of your\nPGDATA tree if you upgraded from a previous major version of PostgreSQL. \n\n* The PostgreSQL-HOWTO was mispackaged in the 6.5.3-1 RPM set. The\n6.5.3-2 RPM set omits this HOWTO altogether due to inaccuracies and\nnonsense in the HOWTO (read it for yourself on linuxdoc.org to see what\nI mean); \n\n* By popular demand, two versions of the RPMs are now packaged -- one\nwith locale support enabled, and one without locale support. The\nnon-locale RPMs have a 'nl' after the 6.5.3-2. These nl-RPMs improve\nperformance of the backend under certain circumstances; \n\n* Further improvements to the README.rpm documentation have been made,\nthanks to user feedback. \n------------------------\n\nhttp://www.ramifordistat.net/postgres , as always. I expect they will\nbe up on ftp.postgresql.org in a few days.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sat, 27 Nov 1999 17:09:39 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bugfix RPM release available for immediate download"
}
] |
[
{
"msg_contents": "Is it possible to programmatically retrieve the OID of a just-inserted\nrecord in a PL/PGSQL function? Apparently, it is not currently\npossible in psql, but I'm hoping C programming is not required for\nthis.\n\nIf so, can someone please demonstrate how this is done?\n\nIf not, can someone in the know definitively state the means by which it\nis currently possible to do this?\n\nWhy would someone want to do this? Because it is the only way I know\nof to definitively retrieve a newly-generated serial value for use as\nthe primary/foreign key (a *very* common RDBMS practice). Other\nsuggested approaches to skinning this cat are welcome. If PL/PGSQL\ncan't do this, it seems rather severely limited in its usefulness for\nnon-trivial databases. In this post,\n\n\nhttp://www.postgresql.org/mhonarc/pgsql-general/1998-07/msg00203.html\n\nBruce Momjian says it's possible for things using libpq \"directly\" to\nretrieve the oid. Does PL/PGSQL use libpq directly?\n\nThis question has been asked in one form or another in a number of\nposts in pgsql-general and pgsql-sql, but without any definitive\nanswers. I have experimented, scoured the mailing list archives, the\npostgresql PL/pgSQL documentation, and deja.com to no avail, thus my\npost here.\n\nSo, is it possible with PL/pgSQL? How? Thanks in advance...\n\nCheers,\nEd\n\n",
"msg_date": "Sun, 28 Nov 1999 13:45:19 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": true,
"msg_subject": "[HACKERS] How to get OID from INSERT in PL/PGSQL?"
},
{
"msg_contents": "Ed Loehr <[email protected]> writes:\n> Is it possible to programmatically retrieve the OID of a just-inserted\n> record in a PL/PGSQL function?\n\nIt seems to me that an AFTER INSERT ROW trigger, as well as any kind of\nUPDATE or DELETE ROW trigger, ought to have access to the OID of the\nrow it is fired for. But if it's there in PL/PGSQL, I'm missing it.\n\nI think you could get at the OID from a C-coded trigger procedure, but\nI agree that that's more trouble than it's worth.\n\n> Why would someone want to do this? Because it is the only way I know\n> of to definitively retrieve a newly-generated serial value for use as\n> the primary/foreign key (a *very* common RDBMS practice).\n\nActually, using OID as a key is deprecated, because dumping and\nreloading a DB that contains references to rows by their OIDs is a\nrisky proposition. I'd suggest using a SERIAL column instead.\nSERIAL is basically syntactic sugar for an int4 column with\n\tDEFAULT nextval('associatedSequenceObject')\nand this operation generates serial IDs just fine. Or, if you want to\nprevent the user from trying to insert a key at random, don't use the\nnextval() as a default; instead generate the key value inside the\nBEFORE INSERT trigger procedure, overriding whatever the user might\nhave tried to supply:\n\n\tnew.keycol = select nextval('sequenceObject');\n\tinsert into otherTable values(new.keycol, ...);\n\nAnyway, the point is that nextval() is considerably more flexible than\nrelying solely on the OID sequence generator.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 28 Nov 1999 18:30:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to get OID from INSERT in PL/PGSQL? "
},
{
"msg_contents": "Tom Lane wrote:\n\n> > Why would someone want to do this? Because it is the only way I know\n> > of to definitively retrieve a newly-generated serial value for use as\n> > the primary/foreign key (a *very* common RDBMS practice).\n>\n> Actually, using OID as a key is deprecated, because dumping and\n> reloading a DB that contains references to rows by their OIDs is a\n> risky proposition. I'd suggest using a SERIAL column instead.\n> SERIAL is basically syntactic sugar for an int4 column with\n> DEFAULT nextval('associatedSequenceObject')\n> and this operation generates serial IDs just fine. Or, if you want to\n> prevent the user from trying to insert a key at random, don't use the\n> nextval() as a default; instead generate the key value inside the\n> BEFORE INSERT trigger procedure, overriding whatever the user might\n> have tried to supply:\n>\n> new.keycol = select nextval('sequenceObject');\n> insert into otherTable values(new.keycol, ...);\n>\n\nThe scenario I unsuccessfully attempted to communicate is one in which the\nOID is used not as a key but rather as the intermediate link to get to the\nnewly generated SERIAL value, which *is* a primary/foreign key. In other\nwords, the OID is used to identify the newly-inserted row so that I can\nquery it to find out the newly generated SERIAL value just after an insert.\n\n newOID = insert into tableWithSerialPrimaryKey(...);\n newKey = select serialKey from tableWithSerialPrimaryKey where oid =\nnewOID;\n\nI'm told I can safely retrieve the last SERIAL value via currval() on the\nimplicit primary key serial sequence if done within the same \"session\". In\norder to guarantee the same \"session\", I'm under the impression that I have\nto do this either within a PL/pgSQL function for each SERIAL insert, or\nmaintain persistent client connections between the insert and the select on\nthe sequence. 
I think that'll work, even if it is a bit of hassle compared\nto a serial insert returning the new serial value.\n\nThanks,\nEd Loehr\n\n",
"msg_date": "Sun, 28 Nov 1999 23:22:03 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] How to get OID from INSERT in PL/PGSQL?"
},
{
"msg_contents": "Ed Loehr <[email protected]> writes:\n> The scenario I unsuccessfully attempted to communicate is one in which the\n> OID is used not as a key but rather as the intermediate link to get to the\n> newly generated SERIAL value, which *is* a primary/foreign key. In other\n> words, the OID is used to identify the newly-inserted row so that I can\n> query it to find out the newly generated SERIAL value just after an insert.\n\nbut ... but ... if you are using a trigger procedure then you can just\nread the SERIAL column's value out of the new tuple! Why bother with\na select on OID?\n\n> newOID = insert into tableWithSerialPrimaryKey(...);\n> newKey = select serialKey from tableWithSerialPrimaryKey where oid =\n> newOID;\n\nIf you need to do it like that (ie, not inside a trigger procedure for\ntableWithSerialPrimaryKey), consider doing\n\tnewKey = nextval('sequenceObjectForTableWithSerialPrimaryKey');\n\tinsert into tableWithSerialPrimaryKey(newKey, other-fields);\nie, do the nextval() explicitly and then insert the value, rather than\nrelying on the default-value expression for the key column.\n\n> I'm told I can safely retrieve the last SERIAL value via currval() on\n> the implicit primary key serial sequence if done within the same\n> \"session\".\n\nI don't trust currval a whole lot either... it's OK in simple cases, but\nif you have trigger procedures and rules firing all over the place then\nyou can't always be sure that only one row has gotten inserted... so the\ncurrval might not correspond to the row you were interested in.\n\nnextval() *will* give you a distinct value for each time you call it,\nand then you just have to propagate that value to the places it should\ngo.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Nov 1999 00:41:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to get OID from INSERT in PL/PGSQL? "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Ed Loehr <[email protected]> writes:\n> > The scenario I unsuccessfully attempted to communicate is one in which the\n> > OID is used not as a key but rather as the intermediate link to get to the\n> > newly generated SERIAL value, which *is* a primary/foreign key. In other\n> > words, the OID is used to identify the newly-inserted row so that I can\n> > query it to find out the newly generated SERIAL value just after an insert.\n>\n> but ... but ... if you are using a trigger procedure then you can just\n> read the SERIAL column's value out of the new tuple! Why bother with\n> a select on OID?\n\nBecause it's not inside a trigger proc, but rather a simple PL/pgSQL function,\nso NEW is not available.\n\n> > newOID = insert into tableWithSerialPrimaryKey(...);\n> > newKey = select serialKey from tableWithSerialPrimaryKey where oid =\n> > newOID;\n>\n> If you need to do it like that (ie, not inside a trigger procedure for\n> tableWithSerialPrimaryKey), consider doing\n> newKey = nextval('sequenceObjectForTableWithSerialPrimaryKey');\n> insert into tableWithSerialPrimaryKey(newKey, other-fields);\n> ie, do the nextval() explicitly and then insert the value, rather than\n> relying on the default-value expression for the key column.\n\nThat is what I ended up doing, and it works (not too painful). Thanks.\n\nCheers,\nEd Loehr\n\n\n",
"msg_date": "Mon, 29 Nov 1999 00:45:17 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] How to get OID from INSERT in PL/PGSQL?"
},
{
    "msg_contents": "On 1999-11-28, Ed Loehr mentioned:\n\n> Is it possible to programmatically retrieve the OID of a just-inserted\n> record in a PL/PGSQL function? Apparently, it is not currently\n> possible in psql, but I'm hoping C programming is not required for\n> this.\n\nFor what it's worth, psql will be able to do this in the next release. It\nwill look like this:\n\n=> insert into foo values (...);\n=> insert into bar values (:LastOid, ...);\n\nwhich is even marginally SQL compliant as I understand. If you are daring\nyou can get the current snapshot and try it, but I wouldn't sign my life\naway on it quite yet.\n\n> Bruce Momjian says its possible for things using libpq \"directly\" to\n> retrieve the oid. Does PL/PGSQL use libpq directly?\n\nEverything(?) uses libpq more or less directly. It's just a matter of\ninterfacing your application to the OidStatus function. The above psql does\njust that.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Tue, 30 Nov 1999 01:03:50 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to get OID from INSERT in PL/PGSQL?"
}
] |
[
{
"msg_contents": "In 6.5.3, it seems that UNION is not allowed inside a sub-select:\n\nbray=> select p.id, p.name, a.town from person* as p, address as a\nbray=> where p.id in\nbray-> (select id from customer union select id from supplier);\nERROR: parser: parse error at or near \"union\"\n\nThe same applies to EXCEPT and INTERSECT.\n\nIs this a permanent feature, an oversight, or something already on the TODO\nlist?\n\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"The earth is the LORD'S, and the fullness thereof; the\n world, and they that dwell therein.\" Psalms 24:1\n\n\n",
"msg_date": "Sun, 28 Nov 1999 19:50:55 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "UNION not allowed in sub-selects?"
},
{
"msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> In 6.5.3, it seems that UNION is not allowed inside a sub-select:\n> Is this a permanent feature, an oversight, or something already on the TODO\n> list?\n\nThe latter, as a moment's investigation would have shown you:\n\n* Support UNION/INTERSECT/EXCEPT in sub-selects\n\nChanging the grammar to allow it would be the work of a moment,\nbut the rewriter and other stages need more work. I've been putting\nit off until we do the much-discussed, little-implemented querytree\nrepresentation redesign. It might be possible to fix this within the\ncurrent representation, but Except_Intersect_Rewrite() is so\nugly/grotty/broken that I don't really want to touch it until I can\ndiscard it and rewrite from the ground up...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 28 Nov 1999 17:43:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] UNION not allowed in sub-selects? "
},
{
"msg_contents": "\nThis is already on TODO list.\n\n> In 6.5.3, it seems that UNION is not allowed inside a sub-select:\n> \n> bray=> select p.id, p.name, a.town from person* as p, address as a\n> bray=> where p.id in\n> bray-> (select id from customer union select id from supplier);\n> ERROR: parser: parse error at or near \"union\"\n> \n> The same applies to EXCEPT and INTERSECT.\n> \n> Is this a permanent feature, an oversight, or something already on the TODO\n> list?\n> \n> \n> -- \n> Vote against SPAM: http://www.politik-digital.de/spam/\n> ========================================\n> Oliver Elphick [email protected]\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP key from public servers; key ID 32B8FAA1\n> ========================================\n> \"The earth is the LORD'S, and the fullness thereof; the\n> world, and they that dwell therein.\" Psalms 24:1\n> \n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Nov 1999 22:54:37 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] UNION not allowed in sub-selects?"
}
] |
[
{
    "msg_contents": "\nThis one was sent to webmaster for obvious reasons.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n---------- Forwarded message ----------\nDate: Sat, 27 Nov 1999 18:58:57 -0800\nFrom: Joe Brenner <[email protected]>\nTo: [email protected]\nSubject: Re: BOUNCE [email protected]: Non-member submission from [Joe\n Brenner <[email protected]>] \n\n\nOn http://www.postgresql.org, it says:\n\n> RedHat RPMs for v6.5.2 on i386 machines are now available at\n> ftp://postgresql.org/pub/RPMS/ Please report any questions\n> or problems to [email protected].\n\nBut when I took the trouble to do this I got a: \n\n> BOUNCE [email protected]: Non-member submission from [Joe Brenner <[email protected]>] \n\nIf anyone really cares, this is the problem I was trying to\nreport: \n\nStop me if you've heard this one:\n\nOn a RedHat 6.1 box, I've been having trouble getting the\nperl DBD-Pg package working, so I decided to try installing\nall of the latest 6.5.3 RPMs you have up on your ftp site. \n\nAfterwards, I still get the same problem though:\n\n rpm -Uhv perl-DBD-Pg-0.91-1.i386.rpm\n error: failed dependencies:\n \tlibpq.so.1 is needed by perl-DBD-Pg-0.91-1\n\nI gather that libpq.so.1 was once supplied with the \n\"postgresql-lib\" RPM, which has now been split up. \n\nDid someone forget a piece, or is it just that the DBD-Pg\nrpm is now badly in need of an update?\n\n\n",
"msg_date": "Sun, 28 Nov 1999 22:05:28 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BOUNCE [email protected]: Non-member submission from\n\t[Joe Brenner <[email protected]>] (fwd)"
},
{
"msg_contents": "Joe Brenner <[email protected]> writes:\n> Stop me if you've heard this one:\n> rpm -Uhv perl-DBD-Pg-0.91-1.i386.rpm\n> error: failed dependencies:\n> \tlibpq.so.1 is needed by perl-DBD-Pg-0.91-1\n\nlibpq has been rev 2 (ie, libpq.so.2) since Postgres release 6.4, over\na year ago. It seems your perl module was compiled against a 6.3 or\neven older Postgres library. You could try \"ln -s libpq.so.2 libpq.so.1\"\nand see if it works ... but I'd recommend getting a more recent build of\nthe perl module.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Nov 1999 00:03:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: BOUNCE [email protected]: Non-member\n\tsubmission from [Joe Brenner <[email protected]>] (fwd)"
},
{
    "msg_contents": "> On a RedHat 6.1 box, I've been having trouble getting the\n> perl DBD-Pg package working, wo I decided to try installing\n> all of the latest 6.5.3 RPMs you have up on your ftp site.\n> Afterwards, I still get the same problem though:\n> rpm -Uhv perl-DBD-Pg-0.91-1.i386.rpm\n> error: failed dependencies:\n> libpq.so.1 is needed by perl-DBD-Pg-0.91-1\n> I gather that libpq.so.1 was once supplied with the\n> \"postgresql-lib\" RPM, which has now been split up.\n> Did someone forget a piece, or is it just that the DBD-Pg\n> rpm now badly in need of an update?\n\nApparently, perl-DBD-Pg is in need of an update. Didn't the RH 6.1 box\ncome with v6.5.x of Postgres and with DBD-Pg? (I've forgotten...) If\nso, DBD-Pg should have been built by RH to be consistent with the\ndistro so I'm not sure why you are seeing a problem. If it shipped\nwith some earlier release, then they might be in conflict and we/Lamar\nmight consider releasing a new version of the perl-DBD-Pg package.\n\nLamar?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 29 Nov 1999 16:21:52 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: BOUNCE [email protected]: Non-member\n\tsubmission from[Joe Brenner <[email protected]>] (fwd)"
},
{
"msg_contents": "[I am cc:'ing Ian MacDonald, the person who packaged perl-DBD]\nThomas Lockhart wrote:\n\n> > On a RedHat 6.1 box, I've been having trouble getting the\n> > perl DBD-Pg package working, wo I decided to try installing\n> > all of the latest 6.5.3 RPMs you have up on your ftp site.\n> > Afterwards, I still get the same problem though:\n> > rpm -Uhv perl-DBD-Pg-0.91-1.i386.rpm\n> > error: failed dependencies:\n> > libpq.so.1 is needed by perl-DBD-Pg-0.91-1\n> > I gather that libpq.so.1 was once supplied with the\n> > \"postgresql-lib\" RPM, which has now been split up.\n> > Did someone forget a piece, or is it just that the DBD-Pg\n> > rpm now badly in need of an update?\n \n> Apparently, perl-DBD-Pg is in need of an update. Didn't the HR6.1 box\n> come with v6.5.x of Postgres and with DBD-Pg? (I've forgotten...) If\n\n[snip] \n> Lamar?\n\nI was afraid you'd ask me that question :-).\n\nperl-DBD is not shipped with RedHat 6.1, according to a browse of my RH\n6.1 CD and confirmation through rpmfind.net. According to rpmfind.net,\nthis rpm is in the libc6 contribs for RedHat, and was last built in\nMarch of 1999. However, due to the dependency on libpq.so.1, it must\nhave been built prior to the release of RedHat 6.0, which was the first\nRedHat release that included libpq.so.2 (as part of PostgreSQL 6.4.2) --\nRedHat 5.2 shipped with 6.3.2, which of course included libpq.so.1. \nYes, the RPM's for PostgreSQL were that far out of sync (my primary\nmotivation for maintaining the RPM's!). (RedHat 6.0's RPM's are dated\nApril 19, 1999, according to rpmfind.net, which supports my hypothesis).\n\nThis RPM needs rebuilding for RedHat 6.1 anyway. The corresponding DBI\nrpm will also need to be rebuilt for the same reason -- the perl module\nstructure changed dramatically from RH 5.x to 6.x.\n\nHTH\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 29 Nov 1999 11:59:59 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] perl-DBD-Pg (was Re: BOUNCE [email protected]:\n\tNon-member submission from[Joe Brenner <[email protected]>] (fwd))"
},
{
"msg_contents": "On Mon 29 Nov 1999 at 11:59:59 -0500, you wrote:\n\n> perl-DBD is not shipped with RedHat 6.1, according to a browse of my RH\n> 6.1 CD and confirmation through rpmfind.net.\n\nCorrect.\n\n> According to rpmfind.net, this rpm is in the libc6 contribs for\n> RedHat, and was last built in March of 1999. However, due to the\n> dependency on libpq.so.1, it must have been built prior to the\n> release of RedHat 6.0,\n\nThat's true, it was.\n\n> This RPM needs rebuilding for RedHat 6.1 anyway. The corresponding DBI\n> rpm will also need to be rebuilt for the same reason -- the perl module\n> structure changed dramatically from RH 5.x to 6.x.\n\nRather than using the old contrib RPM, look in the PowerTools\ndirectory on a Red Hat mirror site and get hold of perl-DBD-Pg from\nthere. You'll find it in the powertools/CPAN/CPAN_rev.2/i386\ndirectory. This version was built with libpq.so.2.\n\nThe old contrib version was built and uploaded by me at a time when\nthe version shipped with the Powertools 5.2 CD was either badly out of\ndate or not even on the CD. Now it's no longer needed.\n\nBest wishes,\n\nIan\n-- \nIan Macdonald | Death before dishonor. But neither before \nRed Hat Certified Engineer | breakfast. \nhttp://www.caliban.org/ | \nLinux 2.2.13 on an i686 | \n | \n",
"msg_date": "Wed, 1 Dec 1999 01:03:57 +0100",
"msg_from": "Ian Macdonald <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] perl-DBD-Pg (was Re: BOUNCE [email protected]:\n\tNon-member submission from[Joe Brenner\n\t<[email protected]>] (fwd))"
},
{
    "msg_contents": "Ian Macdonald wrote:\n> > This RPM needs rebuilding for RedHat 6.1 anyway. The corresponding DBI\n> > rpm will also need to be rebuilt for the same reason -- the perl module\n> > structure changed dramatically from RH 5.x to 6.x.\n> \n> Rather than using the old contrib RPM, look in the PowerTools\n> directory on a Red Hat mirror site and get hold of perl-DBD-Pg from\n> there. You'll find it in the powertools/CPAN/CPAN_rev.2/i386\n> directory. This version was built with libpq.so.2.\n> \n> The old contrib version was built and uploaded by me at a time when\n> the version shipped with the Powertools 5.2 CD was either badly out of\n> date or not even on the CD. Now it's no longer needed.\n> \n> Best wishes,\n\nThank you Ian for the clarification. HOWEVER, this does not show up on\nthe rpmfind.net web tool under powertools -- yet it is there under the\nrufus.w3.org mirror (they're the same machine, of course). Of course,\nthis is not your problem, Ian. Uploading the updated package to the\ncontrib area is not the ideal solution either, because then those that\nhaven't updated from 6.3.2 are going to gripe. But then again, maybe\nthey need a little push to go to 6.5... :->. Un-uploading the rpm on\ncontrib is of course not possible -- but uploading the updated one???\n\nI have taken the liberty of replying to the bug report on Bugzilla on\nthis issue.\n\nThe RedHat libc6 contribs are in desperate need of reorganization, IMO.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 30 Nov 1999 21:29:04 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] perl-DBD-Pg (was Re: BOUNCE [email protected]:\n\tNon-member submission from[Joe Brenner <[email protected]>] (fwd))"
},
{
    "msg_contents": "\nLamar Owen <[email protected]> wrote:\n\n> Ian Macdonald <[email protected]> wrote:\n\n> > directory on a Red Hat mirror site and get hold of perl-DBD-Pg from\n> > there. You'll find it in the powertools/CPAN/CPAN_rev.2/i386\n> > directory. This version was built with libpq.so.2.\n\n> Thank you Ian for the clarification. HOWEVER, this does not show up on\n> the rpmfind.net web toold under powertools -- yet is there under the\n> rufus.w3.org mirror (they're the same machine, of course). \n\nYes, from one point of view, I guess that's the root of the\nproblem I was having. All the ways I could think of\nsearching for an RPM (rpmfind, google, etc.) turned up\nnothing later than the 0.91-1 version.\n\nI just went to\n ftp://ftp.labs.redhat.com/pub/redhat/powertools/CPAN/CPAN_rev.2/i386/\n\nAnd grabbed this one: \n perl-DBD-Pg-0.91-2.i386.rpm\n\nAnd now my CGI scripts are functional again. \n\nI would hope that at some point in the future, the DBI/DBD\ncomponents will be folded into the postgresql-perl RPM (say,\nby the time that it finally makes it to version 1.0?). The\nuse of the older Pg.pm is now strongly discouraged, and the\nDBI interface is already much better documented (e.g. in\n_The Perl Cookbook_ and _Advanced Perl Programming_).\n\nIn any case, thanks much for everyone's help. Sorry for\ntaking this up on the \"pgsql-hackers\" list. I will now go\nand spread the wisdom on redhat-talk, linux.postgres, and\ncomp.lang.perl.modules, where no one seems to have had a\nclue about this. \n\n\n\n\n\n\n",
"msg_date": "Thu, 02 Dec 1999 03:41:46 -0800",
"msg_from": "Joe Brenner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] perl-DBD-Pg (was Re: BOUNCE [email protected]:\n\tNon-member submission from[Joe Brenner\n\t<[email protected]>] (fwd))"
},
{
"msg_contents": "Joe Brenner wrote:\n> Lamar Owen <[email protected]> wrote:\n> > Thank you Ian for the clarification. HOWEVER, this does not show up on\n> > the rpmfind.net web toold under powertools -- yet is there under the\n> > rufus.w3.org mirror (they're the same machine, of course).\n> \n> Yes, from one point of view, I guess that's the root of the\n> problem I was having. All the ways I could think of\n> searching for an RPM (rpmfind, google, etc.) turned up\n> nothing later that the 0.91-1 version.\n\nThis is something that the rpmfind.net maintainer needs to know about.\nSo, I've cc:'d him (Daniel Veillard). Daniel, the problem is that the\nCPAN archive under the redhat powertools is not indexed or searched\nunder rpmfind.net's RPM database. This has caused some confusion\nrelating to the perl-DBD-Pg RPM package, which is in the CPAN part of\npowertools.\n\n> perl-DBD-Pg-0.91-2.i386.rpm\n> \n> And now my CGI scripts are functional again.\n> \n> I would hope that at some point in the future, the DBI/DBD\n> components will be folded into the postgresql-perl RPM (say,\n> by the time that it finally makes it to version 1.0?). The\n> use of the older Pg.pm is now strongly discouraged, and the\n> DBI interface is already much better documented (e.g. in\n> _The Perl Cookbook_ and _Advanced Perl Programming_).\n\nOk, Thomas, Vince, Bruce and other PostgreSQL hackers -- what do you\nthink about that suggestion? It seems that our native perl interface\n(which is shipped with both the tarball and RPM's) is deprecated out in\nthe perl world. The DBI component is not PostgreSQL-specific, so it\nprobably shouldn't ship with the main RPM set. Comments?\n\n> In any case, thanks much for everyone's help. Sorry for\n> taking this up on the \"pgsql-hackers\" list. 
I will now go\n> and spread the wisdom on redhat-talk, linux.postgres, and\n> comp.lang.perl.modules, where no one seems to have had a\n> clue about this.\n\nWhere is 'linux.postgres', and how do I subscribe??\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 02 Dec 1999 10:29:38 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] perl-DBD-Pg (was Re: BOUNCE [email protected]:\n\tNon-member submission from[Joe Brenner <[email protected]>] (fwd))"
},
{
"msg_contents": "On Thu, 2 Dec 1999, Lamar Owen wrote:\n\n> Where is 'linux.postgres', and how do I subscribe??\n\nUseNet. It's a newsgroup.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 2 Dec 1999 10:42:24 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] perl-DBD-Pg (was Re: BOUNCE [email protected]:\n\tNon-member submission from[Joe Brenner <[email protected]>]\n\t(fwd))"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> \n> On Thu, 2 Dec 1999, Lamar Owen wrote:\n> \n> > Where is 'linux.postgres', and how do I subscribe??\n> \n> UseNet. It's a newsgroup.\n\nThat's what I was hoping, but, these days, you never know what's in what\nheirarchy. HOWEVER, my ISP's nntp server doesn't carry it. Do you know\nof a public nntp server that carries this??\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 02 Dec 1999 16:22:49 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] perl-DBD-Pg (was Re: BOUNCE\n\[email protected]:Non-member submission from[Joe Brenner\n\t<[email protected]>] (fwd))"
},
{
"msg_contents": "\nOn 02-Dec-99 Lamar Owen wrote:\n> Vince Vielhaber wrote:\n>> \n>> On Thu, 2 Dec 1999, Lamar Owen wrote:\n>> \n>> > Where is 'linux.postgres', and how do I subscribe??\n>> \n>> UseNet. It's a newsgroup.\n> \n> That's what I was hoping, but, these days, you never know what's in what\n> heirarchy. HOWEVER, my ISP's nntp server doesn't carry it. Do you know\n> of a public nntp server that carries this??\n\nHow's www.deja.com?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Thu, 02 Dec 1999 17:08:06 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] perl-DBD-Pg (was Re: BOUNCE pgsql-ports@postgreSQL"
},
{
"msg_contents": "On Thu, Dec 02, 1999 at 10:29:38AM -0500, Lamar Owen wrote:\n> Joe Brenner wrote:\n> > Lamar Owen <[email protected]> wrote:\n> > > Thank you Ian for the clarification. HOWEVER, this does not show up on\n> > > the rpmfind.net web toold under powertools -- yet is there under the\n> > > rufus.w3.org mirror (they're the same machine, of course).\n> > \n> > Yes, from one point of view, I guess that's the root of the\n> > problem I was having. All the ways I could think of\n> > searching for an RPM (rpmfind, google, etc.) turned up\n> > nothing later that the 0.91-1 version.\n> \n> This is something that the rpmfind.net maintainer needs to know about.\n> So, I've cc:'d him (Daniel Veillard). Daniel, the problem is that the\n> CPAN archive under the redhat powertools is not indexed or searched\n> under rpmfind.net's RPM database. This has caused some confusion\n> relating to the perl-DBD-Pg RPM package, which is in the CPAN part of\n> powertools.\n\n Ok, I will try to get this fixed by tomorrow, \n\nDaniel\n\n-- \[email protected] | W3C, INRIA Rhone-Alpes | Today's Bookmarks :\nTel : +33 476 615 257 | 655, avenue de l'Europe | Linux XML libxml WWW\nFax : +33 476 615 207 | 38330 Montbonnot FRANCE | Gnome rpm2html rpmfind\n http://www.w3.org/People/all#veillard%40w3.org | RPM badminton Kaffe\n",
"msg_date": "Sat, 4 Dec 1999 09:09:54 -0500",
"msg_from": "Daniel Veillard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] perl-DBD-Pg (was Re: BOUNCE [email protected]:\n\tNon-member submission from[Joe Brenner\n\t<[email protected]>] (fwd))"
},
{
"msg_contents": "On Sat, 04 Dec 1999, Daniel Veillard wrote:\n> On Thu, Dec 02, 1999 at 10:29:38AM -0500, Lamar Owen wrote:\n> > This is something that the rpmfind.net maintainer needs to know about.\n> > So, I've cc:'d him (Daniel Veillard). Daniel, the problem is that the\n> > CPAN archive under the redhat powertools is not indexed or searched\n> > under rpmfind.net's RPM database. This has caused some confusion\n> > relating to the perl-DBD-Pg RPM package, which is in the CPAN part of\n> > powertools.\n> \n> Ok, I will try to get this fixed by tomorrow, \n\nLooks good. Thanks for doing that -- and for the excellent rpmfind resource!\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 7 Dec 1999 21:46:40 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] perl-DBD-Pg (was Re: BOUNCE [email protected]:\n\tNon-member submission from[Joe Brenner\n\t<[email protected]>] (fwd))"
}
] |
[
{
"msg_contents": "I've been experimenting with concurrent VACUUMs and getting occasional\ninstances of\n\nNOTICE: Deadlock detected -- See the lock(l) manual page for a possible cause.\nERROR: WaitOnLock: error on wakeup - Aborting this transaction\n\nIt would be really nice if I could find out the particular locks that\nare causing this conflict --- but the code that emits these messages\nisn't very transparent :-(. Can anyone explain how to determine just\nwhat the deadlock is?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 28 Nov 1999 23:31:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to get info about deadlocks?"
},
{
"msg_contents": "> I've been experimenting with concurrent VACUUMs and getting occasional\n> instances of\n> \n> NOTICE: Deadlock detected -- See the lock(l) manual page for a possible cause.\n> ERROR: WaitOnLock: error on wakeup - Aborting this transaction\n> \n> It would be really nice if I could find out the particular locks that\n> are causing this conflict --- but the code that emits these messages\n> isn't very transparent :-(. Can anyone explain how to determine just\n> what the deadlock is?\n> \n\nMassimo has some. See the top of lock.c for pg_options flags to dump\nout locks.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Nov 1999 01:04:41 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to get info about deadlocks?"
},
{
    "msg_contents": "> > I've been experimenting with concurrent VACUUMs and getting occasional\n> > instances of\n> > \n> > NOTICE: Deadlock detected -- See the lock(l) manual page for a possible cause.\n> > ERROR: WaitOnLock: error on wakeup - Aborting this transaction\n> > \n> > It would be really nice if I could find out the particular locks that\n> > are causing this conflict --- but the code that emits these messages\n> > isn't very transparent :-(. Can anyone explain how to determine just\n> > what the deadlock is?\n> > \n> \n> Massimo has some. See the top of lock.c for pg_options flags to dump\n> out locks.\n\nYes, there is a DumpAllLocks() which should dump the lock table in case of\ndeadlock, but I have never been able to find any useful information from it.\nThe code is not compiled by default unless you define DEADLOCK_DEBUG.\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n",
"msg_date": "Mon, 29 Nov 1999 18:47:08 +0100 (MET)",
"msg_from": "Massimo Dal Zotto <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to get info about deadlocks?"
},
{
    "msg_contents": "Well, you have a bit table that indicates what locks are already held and\nit's being AND'ed with one indicating the locks you hold. If they overlap,\nyou're in trouble.\n\nHave you turned on LOCK_MGR_DEBUG? I'd print out the masks if the lock dump\nroutine doesn't already.\n\nTom Lane wrote:\n> \n> I've been experimenting with concurrent VACUUMs and getting occasional\n> instances of\n> \n> NOTICE: Deadlock detected -- See the lock(l) manual page for a possible cause.\n> ERROR: WaitOnLock: error on wakeup - Aborting this transaction\n> \n> It would be really nice if I could find out the particular locks that\n> are causing this conflict --- but the code that emits these messages\n> isn't very transparent :-(. Can anyone explain how to determine just\n> what the deadlock is?\n> \n> regards, tom lane\n> \n> ************\n",
"msg_date": "Mon, 29 Nov 1999 20:46:30 -0500",
"msg_from": "Tim Holloway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to get info about deadlocks?"
}
] |
[
{
"msg_contents": "> Vadim Mikheev wrote:\n> Bruce Momjian wrote:\n> > \n> > > if PostgreSQL could successfully rollback DDL statements sanely (and\nthus\n> > > diverge from ORACLE). I guess I don't expect that to happen\nsuccessfully\n> > > until\n> > > something the equivalent of TABLESPACES is implemented and there is a\n> > > disassociation between table names, index names and their filesystem\n> > > counterparts and to be able to \"undo\" filesystem operations. That, it\nseems\n> > > to\n> > > me, will be a major undertaking and not going to happen any time\nsoon...\n> > \n> > Ingres has table names that don't match on-disk file names, and it is a\n> > pain to administer because you can't figure out what is going on at the\n> > file system level. Table files have names like AAAHFGE.\n> \n> I have to say that I'm going to change on-disk database/table/index \n> file names to _OID_! This is required by WAL because of inside of \n> log records there will be just database/table/index oids, not names, \n> and after crash recovery will not be able to read pg_class to get \n> database/table/index name using oid ...\n> \n> Vadim\n\nWill that aid in fixing a problem such as this:\n\nsession 1:\n\nCREATE TABLE example1(value int4);\nBEGIN;\n\nsession 2:\n\nBEGIN;\nALTER TABLE example1 RENAME TO example2;\n\nsession 1:\n\nINSERT INTO example1 VALUES (1);\nEND;\nNOTICE: Abort Transaction and not in in-progress state\nERROR: Cannot write block 0 of example1 [test] blind\n\nsession 2:\n\nEND;\nNOTICE: Abort Transaction and not in in-progress state\nERROR: Cannot write block 0 of example1 [test] blind\n\nJust curious,\n\nMike (implicit commit) Mascari\n\n\n\n",
"msg_date": "Mon, 29 Nov 1999 01:16:45 -0500",
"msg_from": "\"Mike Mascari\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "Mike Mascari wrote:\n> \n> Will that aid in fixing a problem such as this:\n> \n> session 1:\n> \n> CREATE TABLE example1(value int4);\n> BEGIN;\n> \n> session 2:\n> \n> BEGIN;\n> ALTER TABLE example1 RENAME TO example2;\n> \n> session 1:\n> \n> INSERT INTO example1 VALUES (1);\n> END;\n> NOTICE: Abort Transaction and not in in-progress state\n> ERROR: Cannot write block 0 of example1 [test] blind\n> \n> session 2:\n> \n> END;\n> NOTICE: Abort Transaction and not in in-progress state\n> ERROR: Cannot write block 0 of example1 [test] blind\n\nSeems that oid file names will fix this...\nCurrently, each shared buffer description structure has \ndatabase/table names for the purposes of \"blind\" writes\n(when backend cache hasn't entry for relation and so\nbufmgr can't use cache to get names from oids).\nALTER TABLE ... RENAME renames relation file(s) but doesn't\nchange relation name inside shbuffers...\n\n> Mike (implicit commit) Mascari\n\n-:)))\n\nVadim\n",
"msg_date": "Mon, 29 Nov 1999 13:29:06 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
}
] |
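The failure mode in the session trace above — buffer descriptors keyed by relation *name* going stale when a concurrent ALTER TABLE ... RENAME moves the file, while oid-based keys would survive — can be modeled in a few lines. This is only a toy illustration of Vadim's point, with invented file names; the real bufmgr "blind write" path is of course far more involved:

```python
# Toy model: a "blind write" locates the relation file purely by its
# key.  Keying by table name breaks after a rename; keying by the
# stable oid does not.  File names here are invented for illustration.
import os
import tempfile

def flush_by_key(directory, key, data):
    """Blind write: find the relation file by its key and write block 0."""
    path = os.path.join(directory, key)
    if not os.path.exists(path):
        raise IOError("Cannot write block 0 of %s blind" % key)
    with open(path, "wb") as f:
        f.write(data)

d = tempfile.mkdtemp()
open(os.path.join(d, "example1"), "wb").close()  # file named by table name
open(os.path.join(d, "16384"), "wb").close()     # same relation, named by oid

# A concurrent session renames the table: the name-keyed file moves,
# but an oid-keyed file would keep its name (the oid never changes).
os.rename(os.path.join(d, "example1"), os.path.join(d, "example2"))

try:
    flush_by_key(d, "example1", b"block0")   # stale name-keyed blind write
    name_write_ok = True
except IOError:
    name_write_ok = False

flush_by_key(d, "16384", b"block0")          # oid-keyed write still succeeds
```

The name-keyed write fails exactly like the "Cannot write block 0 of example1 [test] blind" error in the trace, while the oid-keyed write is unaffected by the rename.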
[
{
"msg_contents": "Hi\n\nIs there any package/ application that will convert Pro*Cs which are\ncurrently\nrunning on Unix & Oracle (Server) to any Windows-based/Client-Based\napplication like VC++ on Personal Oracle?\n\nPl. reply ASAP.\nBye & Thanks\n\nS. S. Mani\n\n\n",
"msg_date": "Mon, 29 Nov 1999 15:11:58 +0530",
"msg_from": "S S Mani <[email protected]>",
"msg_from_op": true,
"msg_subject": "Pro*C conversion"
}
] |
[
{
"msg_contents": "I'm not concerned very much about the speed of Postgres but mostly\nabout its connection schema. Every new connection to a database forks\nanother postgres child. It's impossible to work with different\ndatabases. On my production site I work with persistent connections\nbetween http (mod_perl) <-> postgres and am quite satisfied with the efficiency -\nI have 20 httpd running and 20 db backends accordingly. \nThis requires some memory, but I can live with it. Now other developers\nwant to use postgres as a db backend in their Web applications and\nalso want to have persistent connections to some other databases. \nIf you have N databases and M httpd servers, you will end up with\nN*M DB backends. This is too much and I'm afraid my solution\nis not scalable. MySQL, it seems, can work with several databases.\nI don't know if it's possible to have a pool of db children,\nconnected to, say, the template1 database, which could\nswitch to a requested database on demand. This would require some\nmodification of the DBD driver of course, but I think it's not hard.\nI'm working on a very big project with many databases involved;\ncurrent traffic is more than 2 mln. pageviews and most of them\ndynamic. We expect about 5x more requests and I really need a scalable\nsolution. Is anybody working on a CORBA interface to postgres ?\nCORBA is just a magic word for me :-) Could it be a magic wand ?\n\n\tRegards,\n\n\t\tOleg\n\n\nOn Mon, 29 Nov 1999, Marcin Mazurek - Multinet SA - Poznan wrote:\n\n> Date: Mon, 29 Nov 1999 14:27:55 +0100 (CET)\n> From: Marcin Mazurek - Multinet SA - Poznan <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [ADMIN] When postgres will be faster?\n> \n> On Mon, 29 Nov 1999 [email protected] wrote:\n> > Yes! But I recommend backend pool too. What is it? The postmaster task runs now\n> > backend for each query. Good. But After query backend finished. I recommend to\n> > stay backend running within a some timeout. 
If the next query occured\n> > the postmaster redirect query to any idle backend or run a new one unless. Then\n> > backend serve some connections it shut down itself, this prevents memory leaks.\n> Somebody advised me to do such thing with servlets, holding pool of\n> connections in one srvlet and give them as they are needed, but frankly\n> speaking i have no idea how to do it. Does anybodyhas such examples with\n> Connection pools?\n> mazek\n> \n> Marcin Mazurek\n> \n> -- \n> administrator\n> MULTINET SA o/Poznan\n> http://www.multinet.pl/\n> \n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 29 Nov 1999 18:04:54 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [ADMIN] When postgres will be faster?"
},
{
"msg_contents": "On Mon, 29 Nov 1999, Oleg Bartunov wrote:\n> I'm not concern very much about speed of Postgres but mostly\n> about its connection schema. Every new connect to database postgres\n> forks another children. It's impossible to work with different\n> databases. On my production site I work with persistent connections\n> between http (mod_perl) <-> postgres and quite satisfies with efficiency -\n> I have 20 httpd running and 20 db backends accordingly. \n> This requires some memory, but I could live. Now other developers\n> want to use postgres as a db backend in their Web applications and\n> also want to have persistence to some another databases. \n> If you have N databases and M httpd servers, you will end with\n> N*M DB backends. This is too much and I'm afraid my solution\n> could be scalable. MySQL seems could works with several databases.\n\n I use (not for production, though) Zope and Postgres (little non\nspectacular demo is here: http://sun.med.ru/cgi-bin/Zope.cgi/phd01)\n Zope can maintain a database connection or a pool of database\nconnections. If there is no activity on a connection within a long period\n(few hours) Zope closes the connection and reopens it on next access.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Mon, 29 Nov 1999 15:29:09 +0000 (GMT)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [ADMIN] When postgres will be faster?"
},
{
"msg_contents": "Hi!\n\nOn 29-Nov-99 Oleg Bartunov wrote:\n> I'm not concern very much about speed of Postgres but mostly\n> about its connection schema. Every new connect to database postgres\n> forks another children. It's impossible to work with different\n\nfork and fork/exec are some different. postmaster forks and execute backend\nbinary.\n\n> databases. On my production site I work with persistent connections\n> between http (mod_perl) <-> postgres and quite satisfies with efficiency -\n> I have 20 httpd running and 20 db backends accordingly. \n> This requires some memory, but I could live. Now other developers\n\nI have >100 connections in peak load. Not all of them use postgres. If I use\npconnect I lost my RAM ;-)\n\n> want to use postgres as a db backend in their Web applications and\n> also want to have persistence to some another databases. \n> If you have N databases and M httpd servers, you will end with\n> N*M DB backends. This is too much and I'm afraid my solution\n\nWhy? Why N*M? After disconnect the persistent connection backend should not\nfinish but next connection opens other bata base? Or i misunderstood?\n\n> I don't know if it's possible to have a pool of db childrens,\n> which connected to, say, template1 database and children could\n> switch to requested database on demand. This would require some\n> modification of DBD driver of course, but I think it's not hard.\n\nHmmm... There is 2 ways to support pool.\n1. FORK only.\nPostmaster and postgres are same binary. postmaster accept connection and\nforked. Parent creates structure with child pid, descriptors etc... Child\nbecomes backend. When child finish the request it send signal (smem,fifo etc)\nto parent. Parent set IDLE flag to child structure. When next connection\naccepted parent seek through list of child to find first idle one. parent clear\nIDLE flag and fd_dup file descriptors to backend's. Child structure contain\ncall counter and time stamp of start and last call time. 
If call counter exceeds\nN or time exceeds T all descriptors becomes closed. Child catch SIGPIPE on\nclosed descriptors and finish. Parent scans list of structures and check time\nstamps to stop idle backends or start new one (to have pool of idle backends).\n\n2. Fork/exec.\nI dont know. But it possible too. Same like previous.\n\nSo, if backend works with one database only and cannot reconnect - add\n'database' field to child structure described above. Or add keywords \nCONNECT/DISCONNECT to language. Hmm... I was sure backend can server more then\n1 database sequentially.\n\nSKiller\n--------------------------\nSergei Keler\nWebMaster of \"ComSet\"\nE-Mail: [email protected]\nhttp://www.comset.net\n--------------------------\n",
"msg_date": "Tue, 30 Nov 1999 18:11:36 +0300 (MSK)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] When postgres will be faster?"
},
{
"msg_contents": "On Tue, 30 Nov 1999 [email protected] wrote:\n\n> Date: Tue, 30 Nov 1999 18:11:36 +0300 (MSK)\n> From: [email protected]\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected], [email protected],\n> Marcin Mazurek - Multinet SA - Poznan <[email protected]>\n> Subject: Re: [ADMIN] When postgres will be faster?\n> \n> Hi!\n> \n> On 29-Nov-99 Oleg Bartunov wrote:\n> > I'm not concern very much about speed of Postgres but mostly\n> > about its connection schema. Every new connect to database postgres\n> > forks another children. It's impossible to work with different\n> \n> fork and fork/exec are some different. postmaster forks and execute backend\n> binary.\n> \n> > databases. On my production site I work with persistent connections\n> > between http (mod_perl) <-> postgres and quite satisfies with efficiency -\n> > I have 20 httpd running and 20 db backends accordingly. \n> > This requires some memory, but I could live. Now other developers\n> \n> I have >100 connections in peak load. Not all of them use postgres. If I use\n> pconnect I lost my RAM ;-)\n> \n> > want to use postgres as a db backend in their Web applications and\n> > also want to have persistence to some another databases. \n> > If you have N databases and M httpd servers, you will end with\n> > N*M DB backends. This is too much and I'm afraid my solution\n> \n> Why? Why N*M? After disconnect the persistent connection backend should not\n> finish but next connection opens other bata base? Or i misunderstood?\n\npersistent connections are never disconnected during httpd children's life,\nthat's what I need for performance reason. every httpd children holds\ntheir own connection to specific database and there are no method \n(well, AFAIK) to share connection between childrens (see discussion in\nmodperl mailing list for today and yesterday). 
If you need to work with \nanother database you have to open new connection, because postgres doesn't \nworks with several database through one connection. Latest version of Mysql\ncould do this and you could explicitly specify database name\n \"select something from database.table\"\nSimple experiment with psql like \n 1. psql db1\n 2. look at process list - you'll see something like:\n 19714 ? S 0:00 /usr/local/pgsql/bin/postgres localhost megera db1 idle\n 3. \\c db2\n 4. again look at process list:\n 19718 ? S 0:00 /usr/local/pgsql/bin/postgres localhost megera db2 idle\n new process is forked.\nI dont' know backend internals, probably it's possible using libpq interface\nto switch between databases through one connection, but I suspect it could be\ndifficult to 'hide' so nice feature :-)\n\n> \n> > I don't know if it's possible to have a pool of db childrens,\n> > which connected to, say, template1 database and children could\n> > switch to requested database on demand. This would require some\n> > modification of DBD driver of course, but I think it's not hard.\n> \n> Hmmm... There is 2 ways to support pool.\n> 1. FORK only.\n> Postmaster and postgres are same binary. postmaster accept connection and\n> forked. Parent creates structure with child pid, descriptors etc... Child\n> becomes backend. When child finish the request it send signal (smem,fifo etc)\n> to parent. Parent set IDLE flag to child structure. When next connection\n> accepted parent seek through list of child to find first idle one. parent clear\n> IDLE flag and fd_dup file descriptors to backend's. Child structure contain\n> call counter and time stamp of start and last call time. If call counter exceeds\n> N or time exceeds T all descriptors becomes closed. Child catch SIGPIPE on\n> closed descriptors and finish. Parent scans list of structures and check time\n> stamps to stop idle backends or start new one (to have pool of idle backends).\n> \n> 2. Fork/exec.\n> I dont know. 
But it possible too. Same like previous.\n> \n> So, if backend works with one database only and cannot reconnect - add\n> 'database' field to child structure described above. Or add keywords \n> CONNECT/DISCONNECT to language. Hmm... I was sure backend can server more then\n> 1 database sequentially.\n> \n\nI suggest postgres experts comment this topic. We really need to work\nwith different databases using one connection. Postgres is rather good\nscalable DB engine and IMO it's worth to have such feature like\nDB pooling. Once postgres support db pooling it would be possible\nto develope/modify various interfaces to work with httpd.\nI'm using mod_perl, apache, perl, DBI, ApacheDBI and now looking \nfor CORBA :-)\n\n\n\tregards,\n\n\t\tOleg\n\n> SKiller\n> --------------------------\n> Sergei Keler\n> WebMaster of \"ComSet\"\n> E-Mail: [email protected]\n> http://www.comset.net\n> --------------------------\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 30 Nov 1999 19:03:00 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [ADMIN] When postgres will be faster?"
},
{
"msg_contents": "Oleg Bartunov wrote:\n> I suggest postgres experts comment this topic. We really need to work\n> with different databases using one connection. Postgres is rather good\n> scalable DB engine and IMO it's worth to have such feature like\n> DB pooling. Once postgres support db pooling it would be possible\n\nThe AOLserver webserver/application server already fully supports pooled\ndatabase connections to PostgreSQL.\n\nAOLserver is fully multithreaded, and allows a configurable number of\ndatabase connections to be persistently pooled. There can be multiple\npools available, each connecting to a single database. AOLserver\ndynamically manages the pools, with maximum number of pools and pool\npersistence timeout configurable.\n\nThis allows many thousands of http connections to share a limited number\nof database connections, thanks to AOLserver's multithreaded front end.\n\nAOLserver will happily coexist with apache, just by binding to another\nport.\n\nThe performance increase is on the order of 100 times faster than plain\nCGI using the perl Pg module.\n\nAOLserver features tight database integration through a tcl and C API.\nThe tcl API has specialized database connection commands, http\nconnection commands, thread creation-mutex-destruction-etc commands, and\nmany other highly useful (for web scripts) commands that make even tcl a\ngood web scripting language. www.aolserver.com, or\naolserver.lcs.mit.edu.\n\nWhile it might be tempting to lift code out of AOLserver to do pooling,\nAOLserver is under the dual APL/GPL license -- such code could be GPL'd,\nbut not BSD'd. But, AOLserver's source does give you an example of how\nsuch pooling can be accomplished from a client-side libpq-using program.\n\nThe only problem is the issue of libpq's thread-safety or lack thereof\n(in practice, the thread-safety issue doesn't show until you hit a high\nload).\n\nAsk Vince about AOLserver :-).\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 30 Nov 1999 11:44:46 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [ADMIN] When postgres will be faster?"
},
{
"msg_contents": "At 11:44 AM 11/30/99 -0500, Lamar Owen wrote:\n>Oleg Bartunov wrote:\n>> I suggest postgres experts comment this topic. We really need to work\n>> with different databases using one connection. Postgres is rather good\n>> scalable DB engine and IMO it's worth to have such feature like\n>> DB pooling. Once postgres support db pooling it would be possible\n>\n>The AOLserver webserver/application server already fully supports pooled\n>database connections to PostgreSQL.\n>\n>AOLserver is fully multithreaded, and allows a configurable number of\n>database connections to be persistently pooled. There can be multiple\n>pools available, each connecting to a single database. AOLserver\n>dynamically manages the pools, with maximum number of pools and pool\n>persistence timeout configurable.\n>\n>This allows many thousands of http connections to share a limited number\n>of database connections, thanks to AOLserver's multithreaded front end.\n\nAnd there's a great toolset from Ars Digita that runs under AOLserver.\n\nI've ported part of it to Postgres. You can see one of the modules\nin action, a bulletin board module, at http://dsl-dhogaza.pacifier.net/bboard\n\nUnfortunately, portions of the Ars Digita toolkit use outer joins fairly\nheavily. I was somewhat saddened to hear that outer joins apparently\nwon't make it into V7 after all, because I was planning to port the\nentire toolkit when V7 made its debut. I still may do so, because\nyou can mechanically translate the queries to not be dependent on\nouter joins, but it makes doing a port a heck of a lot more tedious.\n\nThe Ars Digita toolkit contains, among other things, a very robust\ne-commerce module which is in use at some large, Oracle-based web\nsites. 
It would be cool to make this available for Postgres...\n\n(The toolkit's GPL'd, BTW)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 30 Nov 1999 09:21:42 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [ADMIN] When postgres will be faster?"
},
{
"msg_contents": "[Charset KOI8-R unsupported, filtering to ASCII...]\n> Hi!\n> \n> On 29-Nov-99 Oleg Bartunov wrote:\n> > I'm not concern very much about speed of Postgres but mostly\n> > about its connection schema. Every new connect to database postgres\n> > forks another children. It's impossible to work with different\n> \n> fork and fork/exec are some different. postmaster forks and execute backend\n> binary.\n\npostmaster forks() and does not do an exec().\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 30 Nov 1999 12:26:39 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] When postgres will be faster?"
},
{
"msg_contents": "Hi!\n\nOn 30-Nov-99 Bruce Momjian wrote:\n> [Charset KOI8-R unsupported, filtering to ASCII...]\n>> Hi!\n>> \n>> On 29-Nov-99 Oleg Bartunov wrote:\n>> > I'm not concern very much about speed of Postgres but mostly\n>> > about its connection schema. Every new connect to database postgres\n>> > forks another children. It's impossible to work with different\n>> \n>> fork and fork/exec are some different. postmaster forks and execute backend\n>> binary.\n> \n> postmaster forks() and does not do an exec().\n\n>From postmaster log:\n\nFindExec: found \"/usr/comset/dbase/bin/postgres\" using argv[0]\n\nps ax|grep pos\n\n10665 ? R 0:01 /usr/comset/dbase/bin/postgres main.comset.com polithit pol\n13329 ? S 0:24 /usr/comset/dbase/bin/postmaster -i -D/usr/comset/dbase/dat\n\nThese samples push me thinking it was fork/exec... :-(\n\nSKiller\n--------------------------\nSergei Keler\nWebMaster of \"ComSet\"\nE-Mail: [email protected]\nhttp://www.comset.net\n--------------------------\n",
"msg_date": "Wed, 01 Dec 1999 11:53:21 +0300 (MSK)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] When postgres will be faster?"
},
{
"msg_contents": "Hi!\n\nOn 30-Nov-99 Oleg Bartunov wrote:\n\n> I suggest postgres experts comment this topic. We really need to work\n> with different databases using one connection. Postgres is rather good\n> scalable DB engine and IMO it's worth to have such feature like\n> DB pooling. Once postgres support db pooling it would be possible\n> to develope/modify various interfaces to work with httpd.\n> I'm using mod_perl, apache, perl, DBI, ApacheDBI and now looking \n> for CORBA :-)\n\nIf backend/db pooling is implemented by the postgres developers it will be a great\nstep toward speeding up www-based applications using postgresql. ;-) Really.\n\nSo, once pooling is realized in the postmaster, other applications will NOT need to be\nmodified to speed up the connection process... I read the comments about AOLserver.\nGood. But this should be a postgresql feature.\n\nSKiller\n--------------------------\nSergei Keler\nWebMaster of \"ComSet\"\nE-Mail: [email protected]\nhttp://www.comset.net\n--------------------------\n",
"msg_date": "Wed, 01 Dec 1999 12:05:42 +0300 (MSK)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] When postgres will be faster?"
},
{
"msg_contents": "> > postmaster forks() and does not do an exec().\n> \n> >From postmaster log:\n> \n> FindExec: found \"/usr/comset/dbase/bin/postgres\" using argv[0]\n> \n> ps ax|grep pos\n> \n> 10665 ? R 0:01 /usr/comset/dbase/bin/postgres main.comset.com polithit pol\n> 13329 ? S 0:24 /usr/comset/dbase/bin/postmaster -i -D/usr/comset/dbase/dat\n> \n> These samples push me thinking it was fork/exec... :-(\n\nWe re-exec the postmaster so it has an absolute path, which is sometimes\nneeded for dynamic loading. We also need 5 parameters so we can do a ps\ndisplay of forked backends.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Dec 1999 12:43:50 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] When postgres will be faster?"
},
{
"msg_contents": "Hi!\n\nOn 01-Dec-99 Bruce Momjian wrote:\n>> > postmaster forks() and does not do an exec().\n>> \n>> >From postmaster log:\n>> \n>> FindExec: found \"/usr/comset/dbase/bin/postgres\" using argv[0]\n>> \n>> ps ax|grep pos\n>> \n>> 10665 ? R 0:01 /usr/comset/dbase/bin/postgres main.comset.com polithit\n>> pol\n>> 13329 ? S 0:24 /usr/comset/dbase/bin/postmaster -i\n>> -D/usr/comset/dbase/dat\n>> \n>> These samples push me thinking it was fork/exec... :-(\n> \n> We re-exec the postmaster so it has an absolute path, which is sometimes\n> needed for dynamic loading. We also need 5 paramaters to we can do ps\n> display if forked backends.\n\nBut you know several ways to send parameters to child process...\nSo, You have:\n1. Shared memory\n2. Fifo file like /tmp/.s.PGSQL.ctl.pid-of-backend\n3. Additional unnamed pipe opened for child\n4. Signals like SIGUSR1 etc to force fetch parameters from somewhere.\n\nSo, in addition I found thet there is not nessesary to create a dynamic list of\nchild pool. You have static/dynamic linear array of backend running ;-). Waw!\nPossible to add some additional info to this structure about pooled backend (I\noffer before) to manage pool. \n\nI hope I dig postgresql code this weekend to have ideas offer for developers\nmore constructively.\n\nI think p.3 shown before is preferable. The main() of backend should gentle\nread this pipe. Pipes in Unix have more that 1024 bytes buffer...\n\nwhile (readCommand(....)) {\n initBackend(...); // Same as parse args...\n doQuery(..);\n finishBackend(...);\n}\n\nreadCommand() should use select() with timeout for check pipe. Then signal\nreceived it set flag on and select() loop may finish on it with 0 returned.\n\nSKiller\n--------------------------\nSergei Keler\nWebMaster of \"ComSet\"\nE-Mail: [email protected]\nhttp://www.comset.net\n--------------------------\n",
"msg_date": "Thu, 02 Dec 1999 11:46:21 +0300 (MSK)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] When postgres will be faster?"
},
{
"msg_contents": "\nThis email is moved off of pgsql-admin and left only on pgsql-hackers,\nwhere it belongs...\n\nSergei...we look forward to seeing patches that demonstrate, and possibly\nimplement, that which you are proposing...it would give us, I think, a\nmuch clearer idea of what you are thinking :)\n\n\n\nOn Thu, 2 Dec 1999 [email protected] wrote:\n\n> Hi!\n> \n> On 01-Dec-99 Bruce Momjian wrote:\n> >> > postmaster forks() and does not do an exec().\n> >> \n> >> >From postmaster log:\n> >> \n> >> FindExec: found \"/usr/comset/dbase/bin/postgres\" using argv[0]\n> >> \n> >> ps ax|grep pos\n> >> \n> >> 10665 ? R 0:01 /usr/comset/dbase/bin/postgres main.comset.com polithit\n> >> pol\n> >> 13329 ? S 0:24 /usr/comset/dbase/bin/postmaster -i\n> >> -D/usr/comset/dbase/dat\n> >> \n> >> These samples push me thinking it was fork/exec... :-(\n> > \n> > We re-exec the postmaster so it has an absolute path, which is sometimes\n> > needed for dynamic loading. We also need 5 paramaters to we can do ps\n> > display if forked backends.\n> \n> But you know several ways to send parameters to child process...\n> So, You have:\n> 1. Shared memory\n> 2. Fifo file like /tmp/.s.PGSQL.ctl.pid-of-backend\n> 3. Additional unnamed pipe opened for child\n> 4. Signals like SIGUSR1 etc to force fetch parameters from somewhere.\n> \n> So, in addition I found thet there is not nessesary to create a dynamic list of\n> child pool. You have static/dynamic linear array of backend running ;-). Waw!\n> Possible to add some additional info to this structure about pooled backend (I\n> offer before) to manage pool. \n> \n> I hope I dig postgresql code this weekend to have ideas offer for developers\n> more constructively.\n> \n> I think p.3 shown before is preferable. The main() of backend should gentle\n> read this pipe. 
Pipes in Unix have more that 1024 bytes buffer...\n> \n> while (readCommand(....)) {\n> initBackend(...); // Same as parse args...\n> doQuery(..);\n> finishBackend(...);\n> }\n> \n> readCommand() should use select() with timeout for check pipe. Then signal\n> received it set flag on and select() loop may finish on it with 0 returned.\n> \n> SKiller\n> --------------------------\n> Sergei Keler\n> WebMaster of \"ComSet\"\n> E-Mail: [email protected]\n> http://www.comset.net\n> --------------------------\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 2 Dec 1999 11:20:14 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] When postgres will be faster?"
}
] |
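The pooling scheme sketched throughout this thread — a capped set of long-lived backends handed out to many short-lived clients and reaped after an idle timeout — reduces to simple checkout/checkin bookkeeping. The following is a hypothetical illustration in plain Python (no real sockets, backends, or fd passing), just to make the posters' idea concrete:

```python
# Toy backend pool: hand out idle "backends", cap the total, and reap
# any that sit idle longer than max_idle seconds (the posters' T).
# Everything here is invented for illustration.
import time

class BackendPool:
    def __init__(self, max_backends=4, max_idle=3600.0):
        self.max_backends = max_backends
        self.max_idle = max_idle
        self.idle = []          # list of (backend_id, last_used_timestamp)
        self.busy = set()
        self.next_id = 0

    def checkout(self):
        """Reuse an idle backend, or start a new one if under the cap."""
        if self.idle:
            backend, _ = self.idle.pop()
        elif len(self.busy) < self.max_backends:
            backend = self.next_id
            self.next_id += 1
        else:
            raise RuntimeError("pool exhausted: all backends busy")
        self.busy.add(backend)
        return backend

    def checkin(self, backend):
        """Return a backend to the idle list with a fresh timestamp."""
        self.busy.remove(backend)
        self.idle.append((backend, time.time()))

    def reap(self, now=None):
        """Drop backends that have been idle past the timeout."""
        now = time.time() if now is None else now
        self.idle = [(b, t) for b, t in self.idle
                     if now - t <= self.max_idle]
```

With M httpd children sharing one such pool per database, the worst case drops from N*M persistent backends to N pools of a configured size — the same saving AOLserver's pooled driver achieves on the client side.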
[
{
"msg_contents": "Tom Lane <[email protected]>\n>\n>Keith Parks <[email protected]> writes:\n>> In the dim and distant past I produced a patch that put vacuum\n>> into the list of things that you could GRANT on a per-table\n>\n>Thanks for the code, but for now I just threw in a quick pg_ownercheck\n>call: VACUUM will now vacuum all tables if you are the superuser, else\n>just the tables you own, skipping the rest with a NOTICE. What you had\n>looked like more infrastructure than I thought the problem was worth...\n>I suspect most people will run VACUUMs from the superuser account\n>anyway...\n\nI didn't think it was worth reworking the code, although I may do\njust for fun. Your solution is fine.\n\nKeith.\n\n",
"msg_date": "Mon, 29 Nov 1999 19:30:51 +0000 (GMT)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] VACUUM as a denial-of-service attack "
}
] |
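The behaviour Tom describes — vacuum everything if you are the superuser, otherwise only the tables you own, skipping the rest with a NOTICE — amounts to a simple ownership filter over the catalog. A toy model (function and message text are invented, not the actual vacuum code):

```python
# Hypothetical sketch of the per-table ownership check described above.
# The function name and NOTICE wording are invented for illustration.

def tables_to_vacuum(tables, user, is_superuser, notice=print):
    """tables is a list of (table_name, owner) pairs; return the names
    this user may vacuum, emitting a NOTICE for each one skipped."""
    chosen = []
    for name, owner in tables:
        if is_superuser or owner == user:
            chosen.append(name)
        else:
            notice('NOTICE:  Skipping "%s" --- only its owner can VACUUM it'
                   % name)
    return chosen
```

Running VACUUM from the superuser account, as Tom expects most people to do, makes the filter a no-op: every table passes.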
[
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I have to say that I'm going to change on-disk database/table/index \n>> file names to _OID_! This is required by WAL because of inside of \n>> log records there will be just database/table/index oids, not names, \n>> and after crash recovery will not be able to read pg_class to get \n>> database/table/index name using oid ...\n>\n>Wow, that is a major pain. Anyone else think so?\n\nConsider had Vadim made this proposal (set the time-travel machine to \nversion 7.1.2 or so):\n\n\t\"I'm going to remove WAL from Postgres, so that we can use\n\t the table name as the filename for the table on disk.\"\n\nSo, no, rather than being a major pain, I'd classify it as a minor\ninconvenience. If it becomes, in fact, a major pain, one can always\nwrite a two-line psql script that prints a table name, given an oid.\n\nOn an unrelated matter, I haven't been following the \"limit elimination\"\neffort as closely as I should have. Is it now possible to compile Postgres\nwith 16Kb tuple size, and insert/select 15Kb text fields from the tuples?\n\n\t-Michael Robinson\n\n",
"msg_date": "Tue, 30 Nov 1999 11:44:58 +0800 (CST)",
"msg_from": "Michael Robinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "> Consider had Vadim made this proposal (set the time-travel machine to \n> version 7.1.2 or so):\n> \n> \t\"I'm going to remove WAL from Postgres, so that we can use\n> \t the table name as the filename for the table on disk.\"\n> \n> So, no, rather than being a major pain, I'd classify it as a minor\n> inconvenience. If it becomes, in fact, a major pain, one can always\n> write a two-line psql script that prints a table name, given an oid.\n\nWhat I am saying is that I want WAL and the old naming system. I think\nname_oid may be a good solution. For log recovery, Vadim can actually\nget the oid/name mapping by just getting the file names from the\ndirectories and looking at the names attached to each oid.\n\nLet's not lose the usual names if they can be preserved with little\neffort. Believe me, not keeping readable file names will cause lots of\nwork for us and for users, so a little work now in keeping the existing\nsystem will save lots of work in the future.\n\n> \n> On an unrelated matter, I haven't been following the \"limit elimination\"\n> effort as closely as I should have. Is it now possible to compile Postgres\n> with 16Kb tuple size, and insert/select 15Kb text fields from the tuples?\n\nI think so.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Nov 1999 23:02:16 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
}
] |
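Michael's "two-line psql script that prints a table name, given an oid" is essentially a pg_class lookup, and Bruce's fallback is a scan of a name_oid-style directory listing. A hedged sketch of both ideas — the SQL string is what you would feed to psql, and the catalog dict stands in for whatever oid/name mapping you scraped from the file names; the helper names are invented:

```python
# Hypothetical helpers for mapping an on-disk oid back to a table name.

def build_query(oid):
    """The 'two-line psql script' idea: a pg_class lookup by oid."""
    return "SELECT relname FROM pg_class WHERE oid = %d;" % int(oid)

def name_for_oid(catalog, oid):
    """Offline fallback: catalog is {oid: relname}, e.g. recovered by
    scanning file names under a name_oid naming scheme."""
    return catalog.get(oid, "<unknown oid %d>" % oid)
```

Either route keeps the admin's life bearable even if the on-disk names themselves become bare oids, which is the "minor inconvenience" Michael is arguing for.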
[
{
"msg_contents": "Any thoughts on the following\n\n------------------------------ testview.sql\n-------------------------------------\ndrop table testhead; /* If it exists */\ndrop table testline; /* If it exists */\ndrop view testview; /* If it exists */\n\ncreate table testhead (\n part text\n);\n\ncreate table testline (\n part text,\n colour text,\n adate datetime default 'now'\n);\n\ncreate view testview as\n select testhead.part, testline.colour, testline.adate from testhead, testline\n where testhead.part = testline.part;\n\ninsert into testview values ('pen', 'green');\ninsert into testview values ('pen', 'blue');\ninsert into testview values ('pen', 'black');\n\nselect * from testview;\n\n-----------------------------------------------------------------------------------\n\nThe inserts report no errors, and when looking into $PGDATA/base/mydb/testview\nwith a hex editor I can see the values inserted. \n\nThe select on view returns nothing... \n\nShould the insert not fail seeing that views are read only ?\n\n-- \n--------\nRegards\nTheo\n",
"msg_date": "Tue, 30 Nov 1999 12:25:22 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Insert into view"
}
] |
[
{
"msg_contents": "I think this was covered a little while back, but it runs something like\nthis: a view is a relation, with a select rule (which is the view query).\nWhen you insert into the view (which, like I said, is just another relation),\nit actually inserts into the view relation. However, when you select from\nit, of course, the select rule fires, and you don't see any of the\ninformation. I suppose you could set up a nice insert rule to insert into\nthe base tables of the query if you wanted. I normally do this through\nstored procs, but this would be essentially the same thing, just nicer\nclient-side SQL.\n\nI suppose that views could be made so that a tuple insert would fail, but\nyou should know your db better ;-)\n\nMikeA\n\n>> -----Original Message-----\n>> From: Theo Kramer [mailto:[email protected]]\n>> Sent: Tuesday, November 30, 1999 12:25 PM\n>> To: [email protected]\n>> Subject: [HACKERS] Insert into view\n>> \n>> \n>> Any thoughts on the following\n>> \n>> ------------------------------ testview.sql\n>> -------------------------------------\n>> drop table testhead; /* If it exists */\n>> drop table testline; /* If it exists */\n>> drop view testview; /* If it exists */\n>> \n>> create table testhead (\n>> part text\n>> );\n>> \n>> create table testline (\n>> part text,\n>> colour text,\n>> adate datetime default 'now'\n>> );\n>> \n>> create view testview as\n>> select testhead.part, testline.colour, testline.adate from \n>> testhead, testline\n>> where testhead.part = testline.part;\n>> \n>> insert into testview values ('pen', 'green');\n>> insert into testview values ('pen', 'blue');\n>> insert into testview values ('pen', 'black');\n>> \n>> select * from testview;\n>> \n>> -------------------------------------------------------------\n>> ----------------------\n>> \n>> The inserts report no errors, and when looking into \n>> $PGDATA/base/mydb/testview\n>> with a hex editor I can see the values inserted. \n>> \n>> The select on view returns nothing... \n>> \n>> Should the insert not fail seeing that views are read only ?\n>> \n>> -- \n>> --------\n>> Regards\n>> Theo\n>> \n>> ************\n>> \n",
"msg_date": "Tue, 30 Nov 1999 14:49:26 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Insert into view"
},
{
"msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> I suppose that views could be made so that a tuple insert would fail,\n\nI think Jan muttered something about emitting a warning notice for an\nattempt to store into a table that has an ON SELECT DO INSTEAD rule but\nno ON INSERT rule --- which would imply that you'll never be able to\nsee the data you're inserting.\n\nThis mistake has bitten enough people (including me ;-)) that it seems\na warning might be a good idea. I'm not sure if I want it to be a hard\nerror though. Are there any cases where it'd make sense to allow this?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 30 Nov 1999 10:07:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Insert into view "
},
{
"msg_contents": "\"Ansley, Michael\" wrote:\n> \n> I think this was covered a little while back, but it runs something like\n> this: a view is a relation, with a select rule (which is the view query).\n> When you insert into the view (which, like I said, is just another relation,\n> it actually inserts into the view relation. However, when you select from\n> it, of course, the select rule fires, and you don't see any of the\n> information. I suppose you could set up a nice insert rule to insert into\n> the base tables of the query if you wanted. I normally do this through\n> stored procs, but this would be essentially the same thing, just nicer\n> client-side SQL.\n\nHmm, interesting.\n\n> I suppose that views could be made so that a tuple insert would fail, but\n> you should know your db better ;-)\n\nThat I do, just thought a message might assist those that don't :)\n\n--------\nRegards\nTheo\n",
"msg_date": "Tue, 30 Nov 1999 17:34:12 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Insert into view"
}
] |
[
{
"msg_contents": "I liked the new tab completion ability in psql. Seems to work well in\nCREATE * but not so great in the others, like doing FROM and WHERE. Can\nyou take a look at backend/parser/keywords.c and see if you can merge\ncompletion for those words into psql. It may be a nice feature.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 30 Nov 1999 12:22:00 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "tab completion in psql"
},
{
"msg_contents": "On Tue, 30 Nov 1999, Bruce Momjian wrote:\n\n> I liked the new tab competion ability in psql. Seems to work well in\n> CREATE * but not so great in the others, like doing FROM and WHERE. Can\n> you take a look at backend/parser/keywords.c and see if you can merge\n> completion for those words in to psql. It may be a nice feature.\n\nThe tab completion is not very smart, it only covers the really obvious\ncases. Doing FROM shouldn't be so hard, but once you get into WHERE you\nalmost end up writing a complete SQL parser just for this. Not that\nthere's anything fundamentally wrong with that. It's a very evolving piece\nof code, however; it can only get better. I bet those readline authors\nnever had this one in mind. Otherwise readline would play nicer with it.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 30 Nov 1999 18:41:36 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tab completion in psql"
},
{
"msg_contents": "> On Tue, 30 Nov 1999, Bruce Momjian wrote:\n> \n> > I liked the new tab competion ability in psql. Seems to work well in\n> > CREATE * but not so great in the others, like doing FROM and WHERE. Can\n> > you take a look at backend/parser/keywords.c and see if you can merge\n> > completion for those words in to psql. It may be a nice feature.\n> \n> The tab completion is not very smart, it only covers the really obvious\n> cases. Doing FROM shouldn't be so hard, but once you get into WHERE you\n> almost end up writing a complete SQL parser just for this. Not that\n> there's anything fundamentally wrong with that. It's a very evolving piece\n> of code, however; it can only get better. I bet those readline authors\n> never had this one in mind. Otherwise readline would play nicer with it.\n\nI am just suggesting completing the word FROM, not doing anything more\nthan that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 30 Nov 1999 12:55:02 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: tab completion in psql"
},
{
"msg_contents": "Then Peter Eisentraut <[email protected]> spoke up and said:\n> The tab completion is not very smart, it only covers the really obvious\n> cases. Doing FROM shouldn't be so hard, but once you get into WHERE you\n> almost end up writing a complete SQL parser just for this. Not that\n> there's anything fundamentally wrong with that. It's a very evolving piece\n> of code, however; it can only get better. I bet those readline authors\n> never had this one in mind. Otherwise readline would play nicer with it.\n\nI've used Python's readline support, and I must say it's really nice\nwhen the readline engine can complete more dynamic names. It would be\nvery nice if \"\\d table\" or \"\\dt\" populated a dictionary of some kind\nwhich could then be used for completion. While context sensitivity\nwould be even better, it might be worthwhile to simply have a dynamic\ndictionary. \n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================",
"msg_date": "30 Nov 1999 13:49:24 -0500",
"msg_from": "Brian E Gallew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: tab completion in psql"
}
] |
[
{
"msg_contents": "Can someone give me some advice on the best way to send data to postgres\nfrom a tty port that periodically (every 30 secs) spits out ASCII data using a\nserial/com port? I guess this is more of a Unix question than a postgres\nproblem, but I was hoping that there was a way that postgres could grab the\ndata directly from a serial port.\n\nThanks....\n\n\n\n\n\n",
"msg_date": "Tue, 30 Nov 1999 22:11:18 -0600",
"msg_from": "\"svn\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Grabbing Data from serial port into postgres"
}
] |
[
{
"msg_contents": " \n> Yes, that is true. As long as the storage manager relies on \n> the filesystem for\n> table names, this will be a problem, unless filesystem \n> deletions are delayed\n> until COMMIT, and filesystem creates are undone at a ROLLBACK.\n\nI like Bruce's idea of altering the filename from tablename_segnr \nto tablename_OID_segnr.\nThen a leftover new or old file will not be a problem, since it has a \nguaranteed different name. \nThe cleanup of leftover old (or rolled back new files) could then be \na job that vacuum does.\n\nVadim needs _oid_ in the filenames for WAL anyway.\n\nThis solves create and drop table.\n\nThe alter table should imho keep an exclusive lock on the \npg_class row for that table + exclusive on the usertable\nuntil transaction commit. \n(Thus fails if table access is not exclusive)\n\nAndreas\n",
"msg_date": "Wed, 1 Dec 1999 11:20:39 +0100 ",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
}
] |
[
{
"msg_contents": "\n> I look at this and question the value of allowing such fancy \n> things vs.\n> the ability to look at the directory and know exactly what table is\n> which file. Maybe we can use file names like 23423_mytable \n> where 24323\n> is the table oid and mytable is the table name. That way, we can know\n> the table, and they are unique too to allow RENAME TABLE to work.\n> \n> This doesn't solve Vadim's problem. His additional work would be to\n> write a line to the log file for each table create/delete saying I\n> deleted this table with this oid, and when reading back the \n> log, he has\n> to record the oid_username combination and use that to \n> translate his log\n> oids into actual filenames.\n\nWhy that ? \n\n24323_* will point to the correct table segments inside the db directory.\nNo need to actually know what * matches to, no ?\n\nAndreas \n",
"msg_date": "Wed, 1 Dec 1999 11:47:49 +0100 ",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
},
{
"msg_contents": "> > This doesn't solve Vadim's problem. His additional work would be to\n> > write a line to the log file for each table create/delete saying I\n> > deleted this table with this oid, and when reading back the \n> > log, he has\n> > to record the oid_username combination and use that to \n> > translate his log\n> > oids into actual filenames.\n> \n> Why that ? \n> \n> 24323_* will point to the correct table segments inside the db directory.\n> No need to actually know what * matches to, no ?\n\nTrue. If we go with tablename_OID format, then Vadim will have to scan\nthe directory and pick up all his oids and map them to file names before\nspinning through the log. Yes, it is a little more work, but worth it.\n\nIf you put the oid at the beginning, it is easier, but it is still an\nissue because you have to issue a scandir command to find the matching\nname for each oid. Actually, he can do that no matter where the oid is\nstored in the name. That may be the way he has to handle it.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Dec 1999 12:49:38 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Re: [GENERAL] drop/rename table and transactions"
}
] |
[
{
"msg_contents": "\n> I think Jan muttered something about emitting a warning notice for an\n> attempt to store into a table that has an ON SELECT DO \n> INSTEAD rule but\n> no ON INSERT rule --- which would imply that you'll never be able to\n> see the data you're inserting.\n> \n> This mistake has bitten enough people (including me ;-)) that it seems\n> a warning might be a good idea. I'm not sure if I want it to \n> be a hard\n> error though. Are there any cases where it'd make sense to \n> allow this?\n\nOf course the real nice answer would be to create a new pg_class type 'V'\nin addition to the existing table types.\nAdvantage:\n1. no table files needed\n2. we know it is a view for sure\n\nI for one have tables with \"on select do instead\" rules that are not views.\n\nAndreas\n",
"msg_date": "Wed, 1 Dec 1999 11:56:15 +0100 ",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Insert into view "
}
] |
[
{
"msg_contents": "Hi guys, thanks for all your great work on this system. I'm really\nhammering my copy of Postgres and that's probably the only reason I'm\nseeing these errors.\n\nPostgreSQL V6.5.2 on i686-pc-linux-gnu\n\nI occasionally (twice a day) receive the error:\n\n\tNOTICE: AbortTransaction and not in in-progress\n\nAnd in the system log I see:\n\n\tFATAL 1: transaction commit failed on magnetic disk\n\tNOTICE: AbortTransaction and not in in-progress\n\nThe connection to the backend drops. I'm detecting the fatal error,\nreconnecting and re-issuing the command right away and it goes through\nfine the second time.\n\nAny suggestions?\n\n- K\n\nKristofer Munn * KMI * 973-509-9414 * AIM KrMunn * ICQ 352499 * www.munn.com\n\n",
"msg_date": "Wed, 1 Dec 1999 16:24:23 -0500 (EST)",
"msg_from": "Kristofer Munn <[email protected]>",
"msg_from_op": true,
"msg_subject": "NOTICE: AbortTransaction and not in in-progress"
},
{
"msg_contents": "Kristofer Munn <[email protected]> writes:\n> And in the system log I see:\n> \tFATAL 1: transaction commit failed on magnetic disk\n> \tNOTICE: AbortTransaction and not in in-progress\n\nUgh. As near as I can tell, the \"transaction commit failed\" message\ncan only come out if fsync() returns a failure indication. And that\nbasically shouldn't be happening, unless you've got flaky disk hardware.\n\n> Any suggestions?\n\nI'll bet it stops happening if you run with -o -F (no fsync) ;-).\nBut as for non-band-aid solutions, I haven't a clue. Anyone?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Dec 1999 18:26:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NOTICE: AbortTransaction and not in in-progress "
}
] |
[
{
"msg_contents": "After struggling to collaborate using only email during a development\nproject last year, I developed a web software product that facilitates\nthis sort of thing. http://meteorsite.com\n\nMeteorSite is written in C++ for Linux and uses the PostgreSQL\nrelational database engine that is included with the RedHat\ndistribution. A web browser is the only requirement for users of the\nproduct.\n\nI need to find someone who understands how to sell, market, and promote\nsuch a service. If you would be interested in partnering with me to\nbring this idea to market, visit the site and contact me.\n\nI am located in Dallas, TX but you can be anywhere. The service and\ncompany behind it are ubiquitous.\n\n- Ron Howell\n- Meteor Consulting\n\n\n\n\n",
"msg_date": "Thu, 02 Dec 1999 18:10:34 GMT",
"msg_from": "Ron Howell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Searching For Business Partner"
}
] |
[
{
"msg_contents": "http://www.32bitsonline.com/article.php3?file=issues/199912/pg_sqledit&page=1\n\nTitle:\nAcademics Tool Grows Up: PostgreSQL.\n",
"msg_date": "Thu, 02 Dec 1999 16:52:29 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Neat article you will want to read."
},
{
"msg_contents": "On Thu, 2 Dec 1999, Lamar Owen wrote:\n\n> http://www.32bitsonline.com/article.php3?file=issues/199912/pg_sqledit&page=1\n> \n> Title:\n> Academics Tool Grows Up: PostgreSQL.\n\nVery nice...Vince/Jeff, can we get links to the appropriate sites for\nthis?\n\nReviews are \"a good thing\" :) \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 2 Dec 1999 18:12:48 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Neat article you will want to read."
},
{
"msg_contents": "\nOn 02-Dec-99 The Hermit Hacker wrote:\n> On Thu, 2 Dec 1999, Lamar Owen wrote:\n> \n>> http://www.32bitsonline.com/article.php3?file=issues/199912/pg_sqledit&page=1\n>> \n>> Title:\n>> Academics Tool Grows Up: PostgreSQL.\n> \n> Very nice...Vince/Jeff, can we get links to the appropriate sites for\n> this?\n> \n> Reviews are \"a good thing\" :) \n\nI just put together an in-the-news page, fixin to upload it now. But going\nthru everything I've been storing 'cuze I've been planning on doing this,\nI can't find but one more article on PostgreSQL. Anyone want to pass along\npointers?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Thu, 02 Dec 1999 17:59:36 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Neat article you will want to read."
},
{
"msg_contents": "\nOn 02-Dec-99 Vince Vielhaber wrote:\n> \n> On 02-Dec-99 The Hermit Hacker wrote:\n>> On Thu, 2 Dec 1999, Lamar Owen wrote:\n>> \n>>> http://www.32bitsonline.com/article.php3?file=issues/199912/pg_sqledit&page=1\n>>> \n>>> Title:\n>>> Academics Tool Grows Up: PostgreSQL.\n>> \n>> Very nice...Vince/Jeff, can we get links to the appropriate sites for\n>> this?\n>> \n>> Reviews are \"a good thing\" :) \n> \n> I just put together an in-the-news page, fixin to upload it now. But going\n> thru everything I've been storing 'cuze I've been planning on doing this,\n> I can't find but one more article on PostgreSQL. Anyone want to pass along\n> pointers?\n\nGuess it woulda been nice to give the URL :)\n\nhttp://www.postgresql.org/inthenews.html\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Thu, 02 Dec 1999 18:11:57 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Neat article you will want to read."
}
] |
[
{
"msg_contents": "This message was sent from Geocrawler.com by \"Tim Perdue\" <[email protected]>\nBe sure to reply to that address.\n\nIn the various versions of Postgres that I've\nused, I've just been amazed at how stupid the\nsorting process is.\n\nI'm trying to SELECT DISTINCT on a table that is\n60MB:\n\n-rw-------   1 postgres postgres  89799976 Dec  2\n20:27 pg_sorttemp32736.0\n-rw-------   1 postgres postgres  87307680 Dec  2\n20:27 pg_sorttemp32736.1\n-rw-------   1 postgres postgres  84376872 Dec  2\n20:27 pg_sorttemp32736.2\n-rw-------   1 postgres postgres  78645944 Dec  2\n20:27 pg_sorttemp32736.3\n-rw-------   1 postgres postgres  66749412 Dec  2\n20:27 pg_sorttemp32736.4\n-rw-------   1 postgres postgres  71360512 Dec  2\n20:29 pg_sorttemp32736.5\n-rw-------   1 postgres postgres 260677944 Dec  2\n20:28 pg_sorttemp32736.6\n\n\nIt uses over 1GB of disk space to do that sort,\nand it would have used a lot more if I hadn't run\nout.\n\nThen it doesn't fail gracefully; instead it just\nhangs and leaves temp files completely filling up\nthe hard drive.\n\nHow can it use 1GB of disk to sort a 60MB file?\n\nTim\n\n\nGeocrawler.com - The Knowledge Archive\n",
"msg_date": "Thu, 2 Dec 1999 18:34:19 -0800",
"msg_from": "\"Tim Perdue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Brain-Dead Sort Algorithm???"
},
{
"msg_contents": "hi... \n\n> \n> I'm trying to SELECT DISTINCT on a table that is\n> 60MB:\n\nwhat is your exact select statement, where are you doing this from (psql,\nlibpq, PHP etc), what OS, what version of pgsql are you using.... etc... \n\n-- \nAaron J. Seigo\nSys Admin\n",
"msg_date": "Thu, 2 Dec 1999 19:43:06 -0700",
"msg_from": "\"Aaron J. Seigo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Brain-Dead Sort Algorithm???"
},
{
"msg_contents": "At 06:34 PM 12/2/99 -0800, Tim Perdue wrote:\n\n>In the various versions of Postgres that I've\n>used, I've just been amazed at how stupid the\n>sorting process is.\n\n>I'm trying to SELECT DISTINCT on a table that is\n>60MB:\n>\n>-rw-------   1 postgres postgres  89799976 Dec  2\n>20:27 pg_sorttemp32736.0\n>-rw-------   1 postgres postgres  87307680 Dec  2\n>20:27 pg_sorttemp32736.1\n>-rw-------   1 postgres postgres  84376872 Dec  2\n>20:27 pg_sorttemp32736.2\n>-rw-------   1 postgres postgres  78645944 Dec  2\n>20:27 pg_sorttemp32736.3\n>-rw-------   1 postgres postgres  66749412 Dec  2\n>20:27 pg_sorttemp32736.4\n>-rw-------   1 postgres postgres  71360512 Dec  2\n>20:29 pg_sorttemp32736.5\n>-rw-------   1 postgres postgres 260677944 Dec  2\n>20:28 pg_sorttemp32736.6\n>\n>\n>It uses over 1GB of disk space to do that sort,\n>and it would have used a lot more if I hadn't run\n>out.\n>\n>Then it won't fail gracefully, instead of just\n>hangs and leaves temp files completely filling up\n>the hard drive.\n>\n>How can it use 1GB of disk to sort a 60MB file?\n\nBecause maybe you're doing a really dumb join before you\nsort? SQL is full of such \"gotchas\". \n\nPost your query here, maybe we can make you feel better by\npointing you to Oracle, where you can rant and rave and pay\nseveral thousand dollars to be told the same thing.\n\nSelect distinct first executes the query you give it (i.e.\nall the joins), sorts the result, then deletes dupes. The\nprimary reason why select distinct takes a long time is because\nthe joins are written ... well, stupidly. What can I say that's\nmore honest than that? I'd be kinder if you weren't so harsh\nin your assessment of Postgres.\n\n(In my experience, PostgreSQL usually does what it's told, which is\nwhy I'm being so harsh).\n\nAnd, of course, you've posed your question stupidly - \"my query's\nslow, why is Postgres so horrible?\" and you haven't bothered posting\nyour query.\n\nThat's sort of like saying \"my car won't start\" without telling us\nif there's actually gas in the tank...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n  Nature photos, on-line guides, Pacific Northwest\n  Rare Bird Alert Service and other goodies at\n  http://donb.photo.net.\n",
"msg_date": "Thu, 02 Dec 1999 20:27:13 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Brain-Dead Sort Algorithm???"
}
] |
[
{
"msg_contents": "Hello All!\n\nI'd like to create a table with a datetime field that defaults to +60\ndays.\n\nmydate datetime default 'now() +@60 days',\n...\n\nDoesn't seem to work.\n\nWhat am I missing?\n\nThanks\n\nAndy\n\n",
"msg_date": "Thu, 2 Dec 1999 21:56:52 -0600 (CST)",
"msg_from": "Andy Lewis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Another Date question"
},
{
"msg_contents": "You may need to make a simple function; see functions in the docs.\n\nOn Thu, 2 Dec 1999, Andy Lewis wrote:\n\n> Hello All!\n> \n> I'd like to create a table with a datetime field that defaults to +60\n> days.\n> \n> mydate datetime default 'now() +@60 days',\n> ...\n> \n> Doesn't seem to work.\n> \n> What am I missing?\n> \n> Thanks\n> \n> Andy\n> \n> \n> ************\n> \n",
"msg_date": "Fri, 3 Dec 1999 00:13:21 -0600 (CST)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Another Date question"
},
{
"msg_contents": "I finished the book (version Nov 30). It is a very good one: clear and\nstraight to the point. \nsome comments:\nA)\nhere are my 2-cents:\n1)I found a typo in p55 line 4552: \"than\" should be \"that\".\n2) when I read it, I feel the data in the example should be given.\n   i.e., all the inserts should be given (esp. on p58).\nB)\nhere is a big question: why do you say that normalization is good for\ndata retrieval (page 43)? If my memory is not wrong, it is ONLY good for data\nupdate/insert/delete.\n\nC) here is the main concern: sql92. \n1) page3, after talk oss, the book should mention sql92;\n   and treat the whole book accordingly (see next).\n2) page10, \\\\g should not be used as the recommended one. ; should be used. \n   this is not sql92 (?), but \";\" is certainly the most used. \n3) page19: single quotation mark should be mentioned as the preferred \n   one. (sql92 ).\n4) page23: /* */ should be mentioned that it is not sql92.\n5) page27: != is not sql92.\n6) page28: regex is not sql92, so, should be considered ONLY \n   after tried like ;\n7) page31: in \"case\", should indicate that \"end\" is not needed in sql92,\n   and thus very likely later version of pg may also not need end. \n8) page61: oid should be used with caution, because, in short, it is not in\n sql92. \n\nin short, all non-necessary non-sql92 features should be put into\nsecondary position. all important features that are not sql92 should\nbe pointed out.\n \nwe OSS/PG people should differentiate/advertise ourselves as\nstandard-keepers. so, this book should keep this as the main topic.\nIt will NOT confuse new users/beginners, if handled consistently. \nAlso, it will add value for old pg users looking for sql92 info.\n\n\nhope this book will not be like all other vendor-oriented books where\nit is as if sql86/92 never existed! sql86/92 are our friends, even family members!\n \n\nKai\n\n",
"msg_date": "Fri, 3 Dec 1999 00:46:06 -0600 (CST)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "the book and sql92"
},
{
"msg_contents": "> I finished the book (version Nov 30). It is a very good one. clear and\n> straight to the point. \n> some comments:\n> A)\n> here are my 2-cents:\n> 1)I found a type in p55 line 4552: \"than\" should be \"that\".\n\nFixed. Thanks.\n\n> 2) when I read it, I feel the data in the example should be given.\n>    i.e., all the inserts should be given (esp. on p58).\n\nWell, the issue here is that I really have not developed enough data to\nshow a meaningful output for this, and I don't think it is worth the\nmajor space needed to insert it. That is why I left it out.\n\n> B)\n> here is a big question: why you say that normalization is good for\n> data retreval (page 43 )? If my memory not wrong, it is ONLY good for data\n> update/insert/delete.\n\nChanged to 'data lookup' because without normalization, you can't lookup\ninformation about a specific customer very easily.\n\n> \n> C) here is the main concern: sql92. \n> 1) page3, after talk oss, the book should mention sql92;\n>    and treat the whole book accordingly (see next).\n\nI disagree. This is an intro/concepts. I emphasize standard SQL ways\nas much as possible. Yesterday I changed now() to CURRENT_TIMESTAMP for\nthis reason, and if you see any other cases where I use non-standard\nmore, let me know. However, this is just to get them started. They\nwant results. Worrying about standard SQL at this point is not a good\nidea. Get them started first. I emphasize that, but don't want to be\npointing out saying \"don't do this, and don't do that\" at this point.\n\n\n> 2) page10, \\\\g should not be used as recommened one. ; should be used. \n>    this is not sql92 (?), but \";\" is certainly the most used. \n\n\\\\g used very rarely, but it should be shown to show consistency with\nother psql commands.\n\n> 3) page19: single quotation mark should be mentioned as the prefered \n> one. (sql92 ).\n\nsingle mentioned first.\n\n> 4) page23: /* */ should be mentioned that it is not sql92.\n\nMentioned last.\n\n> 5) page27: != is not sql92.\n\nMany db's support this.\n\n> 6) page28: regex is not sql92, so, should be considered ONLY \n>    after tried like ;\n\nAgain, see above.\n\n> 7) page31: in \"case\", should indicate that \"end\" is not needed in sql92,\n>    and thus very likely later version of pg may also not need end. \n\nNo need.\n\n> 8) page61: oid should be used in caution, because, in short, it is not in\n> sql92. \n\n\n> \n> in short, all non-necessary non-sql92 features should be put into\n> secondary position. all important feature that is not sql92 should\n> be pointed out.\n> \n> we OSS/PG people should differentiate/advertize ourselves as\n> standard-keeper. so, this book should keep this as the main topic.\n> It will NOT confuse new user/beginner, if handled consistantly. \n> Also, it will add the worth-value for old pg user for sql92 info.\n> \n> \n> hope this book will not like all other vendor-oriented books where\n> as if sql86/92 never exists! sql86/92 are our friends, even family member!\n\nThat is not the scope of this book.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 3 Dec 1999 02:56:27 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] the book and sql92"
},
{
"msg_contents": "\nOn Thu, 2 Dec 1999, Andy Lewis wrote:\n\n> Hello All!\n> \n> I'd like to create a table with a datetime field that defaults to +60\n> days.\n> \n> mydate datetime default 'now() +@60 days',\n> ...\n\nWhere is the problem?\n\nYou can use \"now() + 60\"\n\nSee:\n\ntest=> create table d (x text, d datetime default now() + 60);\nCREATE\ntest=> insert into d values ('hello');\nINSERT 506143 1\ntest=> select * from d;\nx    |d\n-----+----------------------------\nhello|Tue Feb 01 00:00:00 2000 CET\n(1 row)\n\n\nBut the problem is if you want to change other datetime values (min,sec,year..etc);\nyou can use the to_char/from_char datetime routines from the CVS tree: \n\nselect from_char(\n        to_char('now'::datetime,'MM ') ||             --- Month\n        to_char('now'::datetime,'DD')::int +60 ||     --- Day + 60\n        to_char('now'::datetime,' YYYY HH24:MI:SS'),  --- Year,hour,min,sec \n        'FMMM FMDD YYYY HH24:MI:SS');                 --- Make datetime\n\n----------------------------\nTue Feb 01 13:30:37 2000 CET --- Output datetime\n(1 row)\n \n\nYes, it is a lot more complicated, but if you change this example a little,\nyou can use it to increment an arbitrary datetime number (sec,min..).\n\nI agree that your now() + '60 days' is better and easier, but for this we need\na new \"datetime + text\" operator; now there is only date_pli(dateVal, days).\n\nMy first idea is a \"to_char\" operator like:\n datetime + 'to_char format pictures string' example:\n\n\tdatetime + '05 DD 10 HH12' \n\t(add 5days and 10hours to datetime)\n\nFor this there is a parser in the to-from_char module. \n\nOr a second idea is to make it as easy as:\n datetime + '10 day' or \n datetime + '2 year' ..etc.\n\n\nBut I'm not sure which is better, or whether it exists in other SQLs.\n\n.... Any comment, Thomas?\n\n\n\t\t\t\t\t\t\tKarel\n\n----------------------------------------------------------------------\nKarel Zak <[email protected]>      http://home.zf.jcu.cz/~zakkr/\n\nDocs: http://docs.linux.cz (big docs archive)\t\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n-----------------------------------------------------------------------\n",
"msg_date": "Fri, 3 Dec 1999 13:23:17 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Datetime operators (was: Re: [SQL] Another Date question)"
},
{
"msg_contents": "> > I'd like to create a table with a datetime field that defaults to +60\n> > days.\n> > mydate datetime default 'now() +@60 days',\n> > ...\n> Where is a problem?\n\nYou have enclosed your default values into a large string, rather than\nletting them be evaluated as an expression:\n\n mydate datetime default (now() + '60 days')\n\nwhere the outer parens are optional.\n\n> datetime + '10 day' or\n> datetime + '2 year' ..etc.\n> But I'm not sure what is better or exists it in other SQL.\n\nafaik this is the simplest and most direct way to do it. Note that you\ncan include other timespan fields in the constant:\n\n mydate datetime default (now() + '60 days 10 hours')\n\nHTH\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 03 Dec 1999 15:48:35 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Datetime operators (was: Re: [SQL] Another Date question)"
},
{
"msg_contents": "\nOn Fri, 3 Dec 1999, Thomas Lockhart wrote:\n\n> afaik this is the simplest and most direct way to do it. Note that you\n> can include other timespan fields in the constant:\n> \n> mydate datetime default (now() + '60 days 10 hours')\n> \n\nSorry, sooooorry, I total bad see in source and docs, my previous letter\nis good for /dev/null only... \n\n\t\t\t\t\tAgain sorry\n\t\t\t Karel \n\n",
"msg_date": "Fri, 3 Dec 1999 16:50:18 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Datetime operators (was: Re: [SQL] Another Date question)"
},
{
"msg_contents": "why \n\ncreate table mymy (mydate datetime default (now() + '60 days'::timespan )); \n\ndoes not work?\n\nOn Fri, 3 Dec 1999, Thomas Lockhart wrote:\n\n> > > I'd like to create a table with a datetime field that defaults to +60\n> > > days.\n> > > mydate datetime default 'now() +@60 days',\n> > > ...\n> > Where is a problem?\n> \n> You have enclosed your default values into a large string, rather than\n> letting them be evaluated as an expression:\n> \n> mydate datetime default (now() + '60 days')\n> \n> where the outer parens are optional.\n> \n> > datetime + '10 day' or\n> > datetime + '2 year' ..etc.\n> > But I'm not sure what is better or exists it in other SQL.\n> \n> afaik this is the simplest and most direct way to do it. Note that you\n> can include other timespan fields in the constant:\n> \n> mydate datetime default (now() + '60 days 10 hours')\n> \n> HTH\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n> ************\n> \n\n",
"msg_date": "Fri, 3 Dec 1999 13:43:06 -0600 (CST)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Datetime operators (was: Re: [SQL] Another Date question)"
},
{
"msg_contents": "Remove the ::timespan and it will.\n\nAndy\n\nOn Fri, 3 Dec 1999 [email protected] wrote:\n\n> why \n> \n> create table mymy (mydate datetime default (now() + '60 days'::timespan )); \n> \n> does not work?\n> \n> On Fri, 3 Dec 1999, Thomas Lockhart wrote:\n> \n> > > > I'd like to create a table with a datetime field that defaults to +60\n> > > > days.\n> > > > mydate datetime default 'now() +@60 days',\n> > > > ...\n> > > Where is a problem?\n> > \n> > You have enclosed your default values into a large string, rather than\n> > letting them be evaluated as an expression:\n> > \n> > mydate datetime default (now() + '60 days')\n> > \n> > where the outer parens are optional.\n> > \n> > > datetime + '10 day' or\n> > > datetime + '2 year' ..etc.\n> > > But I'm not sure what is better or exists it in other SQL.\n> > \n> > afaik this is the simplest and most direct way to do it. Note that you\n> > can include other timespan fields in the constant:\n> > \n> > mydate datetime default (now() + '60 days 10 hours')\n> > \n> > HTH\n> > \n> > - Thomas\n> > \n> > -- \n> > Thomas Lockhart\t\t\t\[email protected]\n> > South Pasadena, California\n> > \n> > ************\n> > \n> \n\n",
"msg_date": "Fri, 3 Dec 1999 13:43:32 -0600 (CST)",
"msg_from": "Andy Lewis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Datetime operators (was: Re: [SQL] Another Date question)"
},
{
"msg_contents": "> why\n> create table mymy (mydate datetime\n> default (now() + '60 days'::timespan ));\n> does not work?\n\nUh, I think it does, right?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 03 Dec 1999 20:03:27 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Datetime operators (was: Re: [SQL] Another Date question)"
},
{
"msg_contents": "> I feel pain about it :-) because that was what I tried, and then, \n> since it did not work, I assumed \"default\" did not accept expressions.\n\nNo pain here:\n\npostgres=> create table mymy (mydate datetime\npostgres-> default (now() + '60 days'::timespan ));\nCREATE\n\nWhat version are you running??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 03 Dec 1999 20:06:13 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Datetime operators (was: Re: [SQL] Another Date question)"
},
{
"msg_contents": "I know, thanks to you, but why? It supposes to work. seems things within\ndefault's parenthesis are different for no reason.\n\nI mean, \n insert into mymy values( now() + '60 days'::timespan );\nworks fine. usually the more strict one (diligitantly casted one)\nalways works. \n\nI feel pain about it :-) because that was what I tried, and then, since it\ndid not work, I assumed \"default\" did not accept expressions.\n\n\n\nOn Fri, 3 Dec 1999, Andy Lewis wrote:\n\n> Remove the ::timespan and it will.\n> \n> Andy\n> \n> On Fri, 3 Dec 1999 [email protected] wrote:\n> \n> > why \n> > \n> > create table mymy (mydate datetime default (now() + '60 days'::timespan )); \n> > \n> > does not work?\n> > \n> > On Fri, 3 Dec 1999, Thomas Lockhart wrote:\n> > \n> > > > > I'd like to create a table with a datetime field that defaults to +60\n> > > > > days.\n> > > > > mydate datetime default 'now() +@60 days',\n> > > > > ...\n> > > > Where is a problem?\n> > > \n> > > You have enclosed your default values into a large string, rather than\n> > > letting them be evaluated as an expression:\n> > > \n> > > mydate datetime default (now() + '60 days')\n> > > \n> > > where the outer parens are optional.\n> > > \n> > > > datetime + '10 day' or\n> > > > datetime + '2 year' ..etc.\n> > > > But I'm not sure what is better or exists it in other SQL.\n> > > \n> > > afaik this is the simplest and most direct way to do it. Note that you\n> > > can include other timespan fields in the constant:\n> > > \n> > > mydate datetime default (now() + '60 days 10 hours')\n> > > \n> > > HTH\n> > > \n> > > - Thomas\n> > > \n> > > -- \n> > > Thomas Lockhart\t\t\t\[email protected]\n> > > South Pasadena, California\n> > > \n> > > ************\n> > > \n> > \n> \n\n",
"msg_date": "Fri, 3 Dec 1999 14:48:09 -0600 (CST)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Datetime operators (was: Re: [SQL] Another Date question)"
},
{
"msg_contents": "6.5.1. time to upgrade ;-) thanks.\n\n\nOn Fri, 3 Dec 1999, Thomas Lockhart wrote:\n\n> > I feel pain about it :-) because that was what I tried, and then, \n> > since it did not work, I assumed \"default\" did not accept expressions.\n> \n> No pain here:\n> \n> postgres=> create table mymy (mydate datetime\n> postgres-> default (now() + '60 days'::timespan ));\n> CREATE\n> \n> What version are you running??\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n\n",
"msg_date": "Fri, 3 Dec 1999 15:01:51 -0600 (CST)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Datetime operators (was: Re: [SQL] Another Date question)"
},
{
"msg_contents": "<[email protected]> writes:\n> why \n> create table mymy (mydate datetime default (now() + '60 days'::timespan )); \n> does not work?\n\nI believe :: casts are broken in default expressions in 6.5.*. They are\nfixed in current sources (which is what Thomas probably tried) --- but\nin the meantime, that expression will work fine without the cast...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Dec 1999 17:35:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Datetime operators (was: Re: [SQL] Another Date question) "
}
] |
[
{
"msg_contents": "Hi,\n\nI have committed changes to postmaster.c. Now it creates a file called\n\"postmaster.pid\" under $PGDATA, which holds postmaster's process\nid. If the file has already existed, postmaster won't start. So we\ncannot start more than on postmaster with same $PGDATA any more. I\nbelieve it's a good thing. The file will be deleted upon postmaster\nshutting down.\n\nAnother file \"postmaster.opts\" is also created under $PGDATA. It\ncontains the path to postmaster and each option for postmaster per\nline. Example contents are shown below:\n\n/usr/local/pgsql/bin/postmaster\n-p 5432\n-D /usr/local/pgsql/data\n-B 64\n-b /usr/local/pgsql/bin/postgres\n-N 32\n-S\n\nNote that even options execpt -S is not explicitly supplied in the\ncase above (postmaster -S), other opts are shown. This file is not\nonly convenient to restart postmaster but also is usefull to determin\nwith what defaults postmaster is running, IMHO.\n\nWith these changes now we can stop postmaster:\n\n\tkill `cat /usr/local/pgsql/data/postmaster.pid`\n\nTo restart it with previous options:\n\n\teval `cat /usr/local/pgsql/data/postmaster.opts`\n\nI'm going to write pg_ctl script this week end.\n\nBTW, no initdb required, of course.\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 03 Dec 1999 15:28:44 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "postmaster.pid"
},
{
"msg_contents": "> Hi,\n> \n> I have committed changes to postmaster.c. Now it creates a file called\n> \"postmaster.pid\" under $PGDATA, which holds postmaster's process\n> id. If the file has already existed, postmaster won't start. So we\n> cannot start more than on postmaster with same $PGDATA any more. I\n> believe it's a good thing. The file will be deleted upon postmaster\n> shutting down.\n\nI assume you do a kill(0) on the pid if the file exists on startup to\ncheck to see if the pid is still valid?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 3 Dec 1999 02:30:40 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postmaster.pid"
},
{
"msg_contents": "> > I have committed changes to postmaster.c. Now it creates a file called\n> > \"postmaster.pid\" under $PGDATA, which holds postmaster's process\n> > id. If the file has already existed, postmaster won't start. So we\n> > cannot start more than on postmaster with same $PGDATA any more. I\n> > believe it's a good thing. The file will be deleted upon postmaster\n> > shutting down.\n> \n> I assume you do a kill(0) on the pid if the file exists on startup to\n> check to see if the pid is still valid?\n\nA little bit different.\n\n1) if the port is already in use, postmaster exits (same as before)\n\n2) if it fails to create pid file, call\nExitPostmaster(1). ExitPostmaster calls proc_exit() and proc_exit\ncalls exit().\n--\nTatsuo Ishii\n\n",
"msg_date": "Fri, 03 Dec 1999 16:56:14 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] postmaster.pid"
},
{
"msg_contents": "> > > I have committed changes to postmaster.c. Now it creates a file called\n> > > \"postmaster.pid\" under $PGDATA, which holds postmaster's process\n> > > id. If the file has already existed, postmaster won't start. So we\n> > > cannot start more than on postmaster with same $PGDATA any more. I\n> > > believe it's a good thing. The file will be deleted upon postmaster\n> > > shutting down.\n> > \n> > I assume you do a kill(0) on the pid if the file exists on startup to\n> > check to see if the pid is still valid?\n> \n> A little bit different.\n> \n> 1) if the port is already in use, postmaster exits (same as before)\n\nOK\n\n> 2) if it fails to create pid file, call\n> ExitPostmaster(1). ExitPostmaster calls proc_exit() and proc_exit\n> calls exit().\n\nSo you don't start if the pid file is there, or do you delete it if the\nport is free?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 3 Dec 1999 03:06:56 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postmaster.pid"
},
{
"msg_contents": "> > 2) if it fails to create pid file, call\n> > ExitPostmaster(1). ExitPostmaster calls proc_exit() and proc_exit\n> > calls exit().\n> \n> So you don't start if the pid file is there,\n\nRight.\n\n>or do you delete it if the\n> port is free?\n\nNo. even if the port is free, there might be another postmaster that\nuses another one.\n\nWait, maybe I can check postmaster.opts to see if another postamster\nreally exists...\n--\nTatsuo Ishii\n\n",
"msg_date": "Fri, 03 Dec 1999 17:17:01 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] postmaster.pid"
},
{
"msg_contents": "> > > 2) if it fails to create pid file, call\n> > > ExitPostmaster(1). ExitPostmaster calls proc_exit() and proc_exit\n> > > calls exit().\n> > \n> > So you don't start if the pid file is there,\n> \n> Right.\n> \n> >or do you delete it if the\n> > port is free?\n> \n> No. even if the port is free, there might be another postmaster that\n> uses another one.\n> \n> Wait, maybe I can check postmaster.opts to see if another postamster\n> really exists...\n\nYou can do kill(0) on the pid to see if it is actually a real pid. \nRather than playing with the port, just check the pid because the\npostmaster could be startup up by not on the port yet.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 3 Dec 1999 03:38:02 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postmaster.pid"
},
{
"msg_contents": "> > Wait, maybe I can check postmaster.opts to see if another postamster\n> > really exists...\n> \n> You can do kill(0) on the pid to see if it is actually a real pid. \n> Rather than playing with the port, just check the pid because the\n> postmaster could be startup up by not on the port yet.\n\nOh, I see your point now.\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 03 Dec 1999 17:43:51 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] postmaster.pid"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n\n> Another file \"postmaster.opts\" is also created under $PGDATA. It\n> contains the path to postmaster and each option for postmaster per\n> line. Example contents are shown below:\n>\n> /usr/local/pgsql/bin/postmaster\n> -p 5432\n> -D /usr/local/pgsql/data\n> -B 64\n> -b /usr/local/pgsql/bin/postgres\n> -N 32\n> -S\n>\n> Note that even options execpt -S is not explicitly supplied in the\n> case above (postmaster -S), other opts are shown. This file is not\n> only convenient to restart postmaster but also is usefull to determin\n> with what defaults postmaster is running, IMHO.\n\nIt's not quite clear to me: Do command line options override\npostmaster.opts options? (I would expect them to.)\n\nCheers.\nEd\n\n\n",
"msg_date": "Fri, 03 Dec 1999 10:31:36 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postmaster.pid"
},
{
"msg_contents": "Ed Loehr wrote:\n> Tatsuo Ishii wrote:\n> > Another file \"postmaster.opts\" is also created under $PGDATA. It\n> > contains the path to postmaster and each option for postmaster per\n> > line. Example contents are shown below:\n[snip]\n> > Note that even options execpt -S is not explicitly supplied in the\n> > case above (postmaster -S), other opts are shown. This file is not\n> > only convenient to restart postmaster but also is usefull to determin\n> > with what defaults postmaster is running, IMHO.\n \n> It's not quite clear to me: Do command line options override\n> postmaster.opts options? (I would expect them to.)\n\n>From what I gather from Tatsuo's message, the 'postmaster.opts' file\nonly shows the status of the currently running postmaster -- IOW, it's\nnot a configuration file, but a status indicator.\n\nAnd I like the idea of this status information being available in this\nway.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 03 Dec 1999 11:51:59 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postmaster.pid"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n\n> I have committed changes to postmaster.c. Now it creates a file called\n> \"postmaster.pid\" under $PGDATA, which holds postmaster's process\n> id. If the file has already existed, postmaster won't start.\n\nDo you just test to see if the file exists? If so, I would suggest\nactually reading the pid and checking to see if the process is still\nrunning. That way we don't have to 'rm postmaster.pid' if the postmaster\ndies abnormally (Netscape for Linux has this problem).\n\nCheers,\n\nEvan @ 4-am\n\n",
"msg_date": "Fri, 03 Dec 1999 11:45:55 -0600",
"msg_from": "Evan Simpson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postmaster.pid"
},
{
"msg_contents": "I hope you don't mind being superseded. I hope (FINALLY!) to be able to post\nthe patches for the log subsystem in the next week or three. The log system\nhas a fairly sophisticated set of options which required a config file of\nits own. Rather than have two options file, I absorbed pg_options and made\na general options file: \"postgres.conf\".\n\nCould you accept this:\n\n /* postgres.conf */\n environment {\n\tport 5432;\n\tpidfile \"/usr/local/pgsql/data\";\n\t/* etc. */\n }\n\n debugging {\n /* pg_options info */\n }\n\n logging {\n /* logging info */\n }\n\nFor more info, see http://216.199.14.27/\n\n regards,\n\n Tim Holloway\n\nThe logger supports reporting the run environment, BTW. I find\nthat way I don't pick up the wrong config file and only \"think\" I\nknow what options are in effect (it also reports where it GOT\nthe config file).\n\n\nTatsuo Ishii wrote:\n> \n> Hi,\n> \n> I have committed changes to postmaster.c. Now it creates a file called\n> \"postmaster.pid\" under $PGDATA, which holds postmaster's process\n> id. If the file has already existed, postmaster won't start. So we\n> cannot start more than on postmaster with same $PGDATA any more. I\n> believe it's a good thing. The file will be deleted upon postmaster\n> shutting down.\n> \n> Another file \"postmaster.opts\" is also created under $PGDATA. It\n> contains the path to postmaster and each option for postmaster per\n> line. Example contents are shown below:\n> \n> /usr/local/pgsql/bin/postmaster\n> -p 5432\n> -D /usr/local/pgsql/data\n> -B 64\n> -b /usr/local/pgsql/bin/postgres\n> -N 32\n> -S\n> \n> Note that even options execpt -S is not explicitly supplied in the\n> case above (postmaster -S), other opts are shown. 
This file is not\n> only convenient to restart postmaster but also is usefull to determin\n> with what defaults postmaster is running, IMHO.\n> \n> With these changes now we can stop postmaster:\n> \n> kill `cat /usr/local/pgsql/data/postmaster.pid`\n> \n> To restart it with previous options:\n> \n> eval `cat /usr/local/pgsql/data/postmaster.opts`\n> \n> I'm going to write pg_ctl script this week end.\n> \n> BTW, no initdb required, of course.\n> --\n> Tatsuo Ishii\n> \n> ************\n",
"msg_date": "Fri, 03 Dec 1999 20:47:00 -0500",
"msg_from": "Tim Holloway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postmaster.pid"
},
{
"msg_contents": "> I hope you don't mind being superseded. I hope (FINALLY!) to be able to post\n> the patches for the log subsystem in the next week or three. The log system\n> has a fairly sophisticated set of options which required a config file of\n> its own. Rather than have two options file, I absorbed pg_options and made\n> a general options file: \"postgres.conf\".\n\nI'm not sure about your postgres.conf, but I think pg_options was made\nfor postgres - the backend - not for postmaster. This is the reason\nwhy I invented yet another conf file. Anyway, could you post the\npatches so that we could evaluate them?\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 04 Dec 1999 11:37:27 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] postmaster.pid"
},
{
"msg_contents": "> Do you just test to see if the file exists? If so, I would suggest\n> actually reading the pid and checking to see if the process is still\n> running. That way we don't have to 'rm postmaster.pid' if the postmaster\n> dies abnormally (Netscape for Linux has this problem).\n\nThanks for the suggestion. I have already modified postmaster.c so\nthat it could remove postmaster.pid if the file is bogus. Will commit\nsoon.\n--\nTatsuo Ishii\n\n",
"msg_date": "Sat, 04 Dec 1999 11:41:31 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] postmaster.pid"
},
{
"msg_contents": ">From what I gather from Tatsuo's message, the 'postmaster.opts' file\n> only shows the status of the currently running postmaster -- IOW, it's\n> not a configuration file, but a status indicator.\n\nRight. Thanks for the cleaner explation!\n\n> And I like the idea of this status information being available in this\n> way.\n\nMe too.\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 04 Dec 1999 12:02:34 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] postmaster.pid"
},
{
"msg_contents": "> > Note that even options execpt -S is not explicitly supplied in the\n> > case above (postmaster -S), other opts are shown. This file is not\n> > only convenient to restart postmaster but also is usefull to determin\n> > with what defaults postmaster is running, IMHO.\n> \n> It's not quite clear to me: Do command line options override\n> postmaster.opts options? (I would expect them to.)\n\nNo, postmaster.opts is just showing what options have been passed to\npostmatser. Using it to determine what options should be given next\ntime is the job of pg_ctl which I'm writing now, not\npostmaster's. Speaking about pg_ctl, probably I will give it the\nability to override postmaster.opts options.\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 04 Dec 1999 12:02:44 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] postmaster.pid"
}
] |
[
{
"msg_contents": "> > On Thu, 2 Dec 1999, Lamar Owen wrote:\n> > \n> > > Where is 'linux.postgres', and how do I subscribe??\n> > \n> > UseNet. It's a newsgroup.\n> \n> That's what I was hoping, but, these days, you never know \n> what's in what\n> heirarchy. HOWEVER, my ISP's nntp server doesn't carry it. \n> Do you know\n> of a public nntp server that carries this??\n\nnews.edu.sollentuna.se. It's read-only when you're from the outside, though.\n\n//Magnus\n",
"msg_date": "Fri, 3 Dec 1999 09:34:17 +0100 ",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] perl-DBD-Pg (was Re: BOUNCE pgsql-ports@postgreSQL\n\t.org:Non-member submission from[Joe Brenner <[email protected]>]\n\t(f wd))"
},
{
"msg_contents": "Magnus Hagander wrote:\n> > Do you know\n> > of a public nntp server that carries this??\n> \n> news.edu.sollentuna.se. It's read-only when you're from the outside, though.\n\nThanks. I'll see if I can't get my ISP's newsadmin to carry it, but for\nnow this will help for reading this low volume group -- I'll have to\nswing over to deja to post, though. Oh well.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 03 Dec 1999 11:04:58 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] perl-DBD-Pg (was Re: BOUNCE \n\[email protected]:Non-member submission from[Joe Brenner \n\t<[email protected]>] (fwd))"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nI working on a project that will handle around 50000 mailboxes at least.\n\nI have no problem with the system or sendmail. But I'm afraid that\npopper (either pop3 or imap) won't be able to authentificate users the\nstandard way (/etc/passwd) does anyone knows if a version of imapd or\nipop3d has been modified to allow postgresql authentification.\n\nIf not, could you give me some pointers.\n\nRegards to you all\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n",
"msg_date": "Fri, 03 Dec 1999 13:55:56 +0100",
"msg_from": "Olivier PRENANT <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgresql authentification for popper"
},
{
"msg_contents": "On Fri, 3 Dec 1999, Olivier PRENANT wrote:\n\n> Hi everyone,\n> \n> I working on a project that will handle around 50000 mailboxes at least.\n> \n> I have no problem with the system or sendmail. But I'm afraid that\n> popper (either pop3 or imap) won't be able to authentificate users the\n> standard way (/etc/passwd) does anyone knows if a version of imapd or\n> ipop3d has been modified to allow postgresql authentification.\n> \n> If not, could you give me some pointers.\n\nEarlier in the week I submitted patches to the qpopper folks for a very\nsimilar purpose - although I didn't add any database stuff to it. It\nreads a flat file of username:passwd:UID:GID:$HOME:shell and the UID/GID\ncan be the same. Since the routine that reads the alternate password\nfile (BTW, it reads /etc/passwd first then the alt file) is a standalone\nit'd be trivial to modify to read from a PostgreSQL table.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 3 Dec 1999 13:31:41 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postgresql authentification for popper"
},
{
"msg_contents": "\nIf you use Cyrus IMAPd (I use it on all my servers), you can use a PAM\nmodule for authenticating...one server I'm just getting ready to put\nonline does /etc/passwd first, falls over to radiusd second and then\nNT/smb third (need to cover alot of bases)...\n\nCyrus is also designed so that you don't need to have any users in your\n/etc/passwd file...it was basically designed to be an ultra-secure, yet\nfeature-rich, mail server...\n\nThe newest version implements Sieve, for mail filter, which one of the\nguyson the mailing list did up a great filter front-end too, so that you\ncan setup vacation/filters, again, without ever needing to have a user in\nyour /etc/passwd file ...\n\n\n\nOn Fri, 3 Dec 1999, Olivier PRENANT wrote:\n\n> Hi everyone,\n> \n> I working on a project that will handle around 50000 mailboxes at least.\n> \n> I have no problem with the system or sendmail. But I'm afraid that\n> popper (either pop3 or imap) won't be able to authentificate users the\n> standard way (/etc/passwd) does anyone knows if a version of imapd or\n> ipop3d has been modified to allow postgresql authentification.\n> \n> If not, could you give me some pointers.\n> \n> Regards to you all\n> -- \n> Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n> Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> FRANCE Email: [email protected]\n> ------------------------------------------------------------------------------\n> Make your life a dream, make your dream a reality. (St Exupery)\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 3 Dec 1999 14:37:35 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postgresql authentification for popper"
},
{
"msg_contents": "Olivier PRENANT <[email protected]> writes:\n> I have no problem with the system or sendmail. But I'm afraid that\n> popper (either pop3 or imap) won't be able to authentificate users the\n> standard way (/etc/passwd) does anyone knows if a version of imapd or\n> ipop3d has been modified to allow postgresql authentification.\n> \n> If not, could you give me some pointers.\n\nRecent versions of UW IMAPD are PAM-enabled (on OSs that speak PAM),\nand there is, I believe a PAM module for PostgreSQL authentication.\n\nThe 1.5.X cyrus IMAPD has patches to allow authentication against\nPostgreSQL. 1.6.X went to using SASL, which means the patches against\n1.5.X are no longer relevant, but I believe the SASL library is\nPAM-enabled, so it can also use the PAM/PostgreSQL module.\n\nMike.\n",
"msg_date": "10 Dec 1999 16:54:03 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postgresql authentification for popper"
}
] |
[
{
"msg_contents": "This message was sent from Geocrawler.com by \"Tim Perdue\" <[email protected]>\nBe sure to reply to that address.\n\n\n> >It uses over 1GB of disk space to do that sort,\n> >and it would have used a lot more if I hadn't\nrun\n> >out.\n> >\n> >Then it won't fail gracefully, instead of just\n> >hangs and leaves temp files completely filling\nup\n> >the hard drive.\n\n> Because maybe you're doing a really dumb join\nbefore you\n> sort? SQL is full of such \"gotchas\".\n\nNo, sorry.\n\nselect distinct serial into serial_good from\nserial_half;\n\nserial_half is a 1-column list of 10-digit\nnumbers. I'm doing a select distinct because I\nbelieve there may be duplicates in that column.\n\nThe misunderstanding on my end came because\nserial_half was a 60MB text file, but when it was\ninserted into postgres, it became 345MB (6.8\nmillion rows has a lot of bloat apparently).\n\nSo the temp-sort space for 345MB could easily\nsurpass the 1GB I had on my hard disk. Although\nhow anyone can take a 60MB text file and turn it\ninto > 1GB is beyond me.\n\n> And, of course, you've posed your question\nstupidly - \"my query's\n> slow, why is Postgres so horrible?\" and you\nhaven't bothered posting\n> your query.\n\nNone of that was ever stated.\n\nActually what was stated is that it is retarded to\nfill up a hard disk and then hang instead of\nbowing out gracefully, forcing the user to\nmanually delete the temp_sort files and kill -9\npostgres.\n\nYou can't argue with that portion. \n\nAnd it happens on v6.4, v6.4.2, and v6.5.2 on RHAT\n6.1, and LinuxPPC.\n\nYes, my post was rather harsh - I posted it when I\nwas pissed and that was a mistake. 
I had this\nsame problem in March when trying to sort a 2.5GB\nfile with 9GB free.\n\nI use postgres on every project I work on,\nincluding this site, Geocrawler.com, and my\nPHPBuilder.com site, because it's a decent and\nfree database and it will scale beyond 2GB, unlike\nMySQL.\n\nTim\n\nGeocrawler.com - The Knowledge Archive\n",
"msg_date": "Fri, 3 Dec 1999 06:32:14 -0800",
"msg_from": "\"Tim Perdue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Brain-Dead Sort Algorithm??"
},
{
"msg_contents": "\"Tim Perdue\" <[email protected]> writes:\n> serial_half is a 1-column list of 10-digit\n> numbers. I'm doing a select distinct because I\n> believe there may be duplicates in that column.\n\n> The misunderstanding on my end came because\n> serial_half was a 60MB text file, but when it was\n> inserted into postgres, it became 345MB (6.8\n> million rows has a lot of bloat apparently).\n\nThe overhead per tuple is forty-something bytes, IIRC. So when the only\nuseful data in a tuple is an int, the expansion factor is unpleasantly\nlarge. Little to be done about it though. All the overhead fields\nappear to be necessary if you want proper transaction semantics.\n\n> So the temp-sort space for 345MB could easily surpass the 1GB I had on\n> my hard disk.\n\nYes, the merge algorithm used up through 6.5.* seems to have typical\nspace usage of about 4X the actual data volume. I'm trying to reduce\nthis to just 1X for 7.0, although some folks are complaining that the\nresult is slower than before :-(.\n\n> Actually what was stated is that it is retarded to fill up a hard disk\n> and then hang instead of bowing out gracefully,\n\nYup, that was a bug --- failure to check for write errors on the sort\ntemp files. I believe it's fixed in current sources too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Dec 1999 10:03:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Brain-Dead Sort Algorithm?? "
},
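The pre-7.0 sort behavior discussed above — sorted runs spilled to temp files and then merged, with a bug where temp-file write errors went unchecked — can be illustrated with a simplified external merge sort. This is an illustrative Python sketch, not PostgreSQL's actual tape-merge code; the run size and temp-file naming are invented for the example:

```python
import heapq
import os
import tempfile

def external_sort(items, run_size=4):
    """Sort data too big for memory: spill sorted runs to temp files,
    then stream-merge the runs (simplified sketch)."""
    run_files = []
    run = []
    try:
        for x in items:
            run.append(x)
            if len(run) >= run_size:
                run_files.append(_spill(sorted(run)))
                run = []
        if run:
            run_files.append(_spill(sorted(run)))
        # heapq.merge reads the sorted runs lazily and merges them.
        streams = [_read_run(path) for path in run_files]
        return list(heapq.merge(*streams))
    finally:
        for path in run_files:
            os.unlink(path)

def _spill(sorted_run):
    fd, path = tempfile.mkstemp(prefix="pgsort_")
    with os.fdopen(fd, "w") as f:
        for x in sorted_run:
            # Checking each write is the fix Tom mentions: a full disk
            # raises OSError here instead of silently hanging the sort.
            f.write("%d\n" % x)
    return path

def _read_run(path):
    with open(path) as f:
        for line in f:
            yield int(line)
```

Because every spill write is checked (Python raises `OSError` when a write to a full disk fails), the sort aborts cleanly and the temp files are removed — the opposite of the failure mode Tim hit.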
{
"msg_contents": "> serial_half is a 1-column list of 10-digit\n> numbers. I'm doing a select distinct because I\n> believe there may be duplicates in that column.\n> \n> The misunderstanding on my end came because\n> serial_half was a 60MB text file, but when it was\n> inserted into postgres, it became 345MB (6.8\n> million rows has a lot of bloat apparently).\n> \n> So the temp-sort space for 345MB could easily\n> surpass the 1GB I had on my hard disk. Although\n> how anyone can take a 60MB text file and turn it\n> into > 1GB is beyond me.\n\nSigh. Y'all like the sweeping statement, which got you in a bit of\ntrouble the first time too :)\n\nWithout knowing your schema, I can't say why you have *exactly* the\nstorage requirement you see. But, you have chosen the absolute worst\ncase for *any* relational database: a schema with only a single, very\nsmall column.\n\nFor Postgres (and other DBs, but the details will vary) there is a 36\nbyte overhead per row to manage the tuple and the transaction\nbehavior. So if you stored your data as int8 (int4 is too small for 10\ndigits, right?) I see an average usage of slightly over 44 bytes per\nrow (36+8). So, for 6.8 million rows, you will require 300MB. I'm\nguessing that you are using char(10) fields, which gives 50 bytes/row\nor a total of 340MB, which matches your number to two digits.\n\nNote that the tuple header size will stay the same (with possibly some\nmodest occasional bumps) for rows with more columns, so the overhead\ndecreases as you increase the number of columns in your tables.\n\nBy the way, I was going to say to RTFM, but I see a big blank spot on\nthis topic (I could have sworn that some of the info posted to the\nmailing lists on this topic had made it into the manual, but maybe\nnot).\n\nDoes anyone see where this is in the docs, or have an interest in\nwriting a bit? 
The place is doc/src/sgml/storage.sgml and page.sgml\n...\n\nGood luck.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 03 Dec 1999 16:25:41 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Brain-Dead Sort Algorithm??"
},
{
"msg_contents": "> Sigh. Y'all like the sweeping statement, which got you in a bit of\n> trouble the first time too :)\n> \n> Without knowing your schema, I can't say why you have *exactly* the\n> storage requirement you see. But, you have chosen the absolute worst\n> case for *any* relational database: a schema with only a single, very\n> small column.\n> \n> For Postgres (and other DBs, but the details will vary) there is a 36\n> byte overhead per row to manage the tuple and the transaction\n> behavior. So if you stored your data as int8 (int4 is too small for 10\n> digits, right?) I see an average usage of slightly over 44 bytes per\n> row (36+8). So, for 6.8 million rows, you will require 300MB. I'm\n> guessing that you are using char(10) fields, which gives 50 bytes/row\n> or a total of 340MB, which matches your number to two digits.\n> \n> Note that the tuple header size will stay the same (with possibly some\n> modest occasional bumps) for rows with more columns, so the overhead\n> decreases as you increase the number of columns in your tables.\n> \n> By the way, I was going to say to RTFM, but I see a big blank spot on\n> this topic (I could have sworn that some of the info posted to the\n> mailing lists on this topic had made it into the manual, but maybe\n> not).\n\nThis is an FAQ item.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 3 Dec 1999 12:32:40 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Brain-Dead Sort Algorithm??"
}
] |
[
{
"msg_contents": "> Oh, and the 6.5.3-2 RPMs are ready whenever you want to upload them to\n> ftp.postgresql.org. They're in the usual place on ramifordistat.net.\n\nAh, I should have mentioned: I found a missing file in the 6.5.3-1\nRPMs I was repackaging (and using!) for Mandrake. libpq++.H isn't\ncopied to /usr/include/pgsql/ due to the bizarre cap H in the file\nname. \n\nVince, is there any real reason to keep this \"cap H\" convention? It is\nthe only file in the whole distro which has this, and it causes\nproblems for packagers and perhaps for Makefiles and emacs-like\neditors.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 03 Dec 1999 16:30:58 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [Fwd: postgresql-6.5.3. RPMs (Well Done!)]"
},
{
"msg_contents": "On Fri, 3 Dec 1999, Thomas Lockhart wrote:\n\n> > Oh, and the 6.5.3-2 RPMs are ready whenever you want to upload them to\n> > ftp.postgresql.org. They're in the usual place on ramifordistat.net.\n> \n> Ah, I should have mentioned: I found a missing file in the 6.5.3-1\n> RPMs I was repackaging (and using!) for Mandrake. libpq++.H isn't\n> copied to /usr/include/pgsql/ due to the bizarre cap H in the file\n> name. \n> \n> Vince, is there any real reason to keep this \"cap H\" convention? It is\n> the only file in the whole distro which has this, and it causes\n> problems for packagers and perhaps for Makefiles and emacs-like\n> editors.\n\nIt's only there 'cuze it was when I started. Actually it wasn't\neven up to date at the time. Personally I don't care for the cap H\nand have no objection to it getting renamed to libpq++.h \n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 3 Dec 1999 11:39:53 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: postgresql-6.5.3. RPMs (Well Done!)]"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > Oh, and the 6.5.3-2 RPMs are ready whenever you want to upload them to\n> > ftp.postgresql.org. They're in the usual place on ramifordistat.net.\n> \n> Ah, I should have mentioned: I found a missing file in the 6.5.3-1\n> RPMs I was repackaging (and using!) for Mandrake. libpq++.H isn't\n> copied to /usr/include/pgsql/ due to the bizarre cap H in the file\n> name.\n\nEwww.. I'll rebuild this weekend. Time for a -3. Is there anything\nelse I might have missed??\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 03 Dec 1999 11:43:53 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: postgresql-6.5.3. RPMs (Well Done!)]"
},
{
"msg_contents": "> It's only there 'cuze it was when I started. Actually it wasn't\n> even up to date at the time. Personally I don't care for the cap H\n> and have no objection to it getting renamed to libpq++.h\n\nOK, I'll kill it for 7.0.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 03 Dec 1999 16:56:19 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [Fwd: postgresql-6.5.3. RPMs (Well Done!)]"
},
{
"msg_contents": "> Ewww.. I'll rebuild this weekend. Time for a -3. Is there anything\n> else I might have missed??\n\nNo, I just added \"*.H\" to the \"for f in *.h access ...\" line. I may\nnot have actually tested to see if that works :/ \n\nAlso, I switched from mkdir -p to using \n\ninstall -d -m 0700 $RPM_BUILD_ROOT/var/lib/pgsql\n\nand\n\n%attr(700,postgres,postgres) %dir /var/lib/pgsql\n\nNothing else changed (I'm looking at 6.5.3-1 spec files).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 03 Dec 1999 17:07:21 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [Fwd: postgresql-6.5.3. RPMs (Well Done!)]"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > Ewww.. I'll rebuild this weekend. Time for a -3. Is there anything\n> > else I might have missed??\n> \n> No, I just added \"*.H\" to the \"for f in *.h access ...\" line. I may\n> not have actually tested to see if that works :/\n> \n> Also, I switched from mkdir -p to using\n> \n> install -d -m 0700 $RPM_BUILD_ROOT/var/lib/pgsql\n> \n> and\n> \n> %attr(700,postgres,postgres) %dir /var/lib/pgsql\n\nOk, those last two changes are already in 6.5.3-2. The first one will\ngo in 6.5.3-3, along with anything else I find.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 03 Dec 1999 12:16:06 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: postgresql-6.5.3. RPMs (Well Done!)]"
},
{
"msg_contents": "> On Fri, 3 Dec 1999, Thomas Lockhart wrote:\n> \n> > > Oh, and the 6.5.3-2 RPMs are ready whenever you want to upload them to\n> > > ftp.postgresql.org. They're in the usual place on ramifordistat.net.\n> > \n> > Ah, I should have mentioned: I found a missing file in the 6.5.3-1\n> > RPMs I was repackaging (and using!) for Mandrake. libpq++.H isn't\n> > copied to /usr/include/pgsql/ due to the bizarre cap H in the file\n> > name. \n> > \n> > Vince, is there any real reason to keep this \"cap H\" convention? It is\n> > the only file in the whole distro which has this, and it causes\n> > problems for packagers and perhaps for Makefiles and emacs-like\n> > editors.\n> \n> It's only there 'cuze it was when I started. Actually it wasn't\n> even up to date at the time. Personally I don't care for the cap H\n> and have no objection to it getting renamed to libpq++.h \n\nYour wish is my command. Done.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 3 Dec 1999 12:35:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [Fwd: postgresql-6.5.3. RPMs (Well Done!)]"
},
{
"msg_contents": "On Fri, 3 Dec 1999, Bruce Momjian wrote:\n\n> > On Fri, 3 Dec 1999, Thomas Lockhart wrote:\n> > \n> > > > Oh, and the 6.5.3-2 RPMs are ready whenever you want to upload them to\n> > > > ftp.postgresql.org. They're in the usual place on ramifordistat.net.\n> > > \n> > > Ah, I should have mentioned: I found a missing file in the 6.5.3-1\n> > > RPMs I was repackaging (and using!) for Mandrake. libpq++.H isn't\n> > > copied to /usr/include/pgsql/ due to the bizarre cap H in the file\n> > > name. \n> > > \n> > > Vince, is there any real reason to keep this \"cap H\" convention? It is\n> > > the only file in the whole distro which has this, and it causes\n> > > problems for packagers and perhaps for Makefiles and emacs-like\n> > > editors.\n> > \n> > It's only there 'cuze it was when I started. Actually it wasn't\n> > even up to date at the time. Personally I don't care for the cap H\n> > and have no objection to it getting renamed to libpq++.h \n> \n> Your wish is my command. Done.\n\nThankyou!\n\nVince.\n--\n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 3 Dec 1999 12:51:56 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [Fwd: postgresql-6.5.3. RPMs (Well Done!)]"
}
] |
[
{
"msg_contents": "Sorry for the (slightly) off-topic question, but it *does* relate to\nbeing able to use emacs for editing Postgres source files:\n\nI'm trying to use emacs for more editing than I have previously (which\nwas restricted to Makefiles, lisp, and sgml). And I've got at least\ntwo projects which have differing formatting requirements which also\ndiffer from the emacs defaults.\n\nI'm running into trouble trying to get the tab vs space stuff right.\nFor Postgres, I want to set the tabbing to 4 columns, and to preserve\ntabs in the input and output. I think I can do that, with\n\n (setq tab-width 4)\n (setq standard-indent 4)\n\nthough I'm not sure that standard-indent needs to be adjusted at all.\n\nFor my other project, I need 2-column indents always space filled, so\nno tabs allowed. It happens to be C++, so I can differentiate between\nthe C code for Postgres.\n\nAnyway, the first complication was working through the fact that emacs\n(20.4, if it matters) claims to have loaded \"cc-mode\" on startup, when\nin fact the major mode is actually called \"c++-mode\" internally. So I\nhad a devil of a time setting the hooks.\n\nBut I'm also having trouble getting things to space-fill when\nindenting. I've tried\n\n (setq c++-mode-hook 'rtc-cc-mode)\n\n (defun rtc-cc-mode ()\n (setq tab-width 2)\n (setq standard-indent 2)\n (setq indent-tab-mode nil))\n\nwhich gave me two-column tabs, but I had hoped that nil-ing\nindent-tab-mode would space fill. untabify gets rid of the tabs in a\nselected region, but the tabs come back if I hit the tab key. I've\ntried nil-ing the other values and setting them to zero, but that\nseems to revert the behavior to 8 column tabs.\n\nMy emacs book got me this far, but the behavior seems to be a bit at\nodds with their description and they don't give a specific example\ncovering this. Any hints would be *greatly* appreciated.\n\nTIA\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 03 Dec 1999 16:54:21 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "emacs question"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I'm running into trouble trying to get the tab vs space stuff right.\n> For Postgres, I want to set the tabbing to 4 columns, and to preserve\n> tabs in the input and output. I think I can do that, with\n> (setq tab-width 4)\n> (setq standard-indent 4)\n> though I'm not sure that standard-indent needs to be adjusted at all.\n\n> For my other project, I need 2-column indents always space filled, so\n> no tabs allowed. It happens to be C++, so I can differentiate between\n> the C code for Postgres.\n\nI am not sure whether you have the correct mental model for this, so\nhere it is: there are two different things going on here, the \"physical\"\nwidth of tab stops and the \"logical\" notion of syntax-driven\nindentation. A TAB character stored in a file is displayed as \"indent\nto the next tab stop\", where tab stops are every tab-width columns, 8 by\ndefault. (I think you can also choose to set up a user-defined list of\ntab stop locations, but I've never done that.) The tab stops have\n*nothing* to do with logical indentation of source code --- that's\ndriven by separate logic that says \"indent this many columns relative to\nthe previous line when you see this kind of syntactic context\". After\nthe logical-indent code has decided what column it thinks the line\nshould start at, it then constructs a whitespace prefix of the right\nlength, which normally will use the right number of tabs and spaces\nbased on the current \"physical\" tab stop settings. When you press TAB\nin c++-mode, that doesn't mean \"insert one tab character\", it means\n\"reindent the current line by recomputing the appropriate leading\nwhitespace\". It could insert or delete any combination of tabs and\nspaces. 
(If you really want to insert one tab character, you use the\nusual escape for inserting control codes: control-Q TAB.)\n\n(If you already had your head screwed on straight, sorry for the\ndigression.)\n\n> But I'm also having trouble getting things to space-fill when\n> indenting. I've tried\n\n> (setq c++-mode-hook 'rtc-cc-mode)\n\n> (defun rtc-cc-mode ()\n> (setq tab-width 2)\n> (setq standard-indent 2)\n> (setq indent-tab-mode nil))\n\nI doubt that you want to be messing with the default tab-stop-distance\nsetting here. And you shouldn't need to mess with standard-indent\neither; I have no idea what that controls, but it's not the basis for\nindenting in C or C++ mode AFAIK. The logical indent per syntactic\nlevel in these modes is set by c-basic-offset, which may already be 2\ndepending on which c-style value you are using. Finally, as far as\nsuppressing use of tabs to do logical indentation, you've almost got it\nright, but the variable is named \"indent-tabs-mode\".\n\nAlso, if you want to have different conventions for different projects,\nsetting a c++-mode-hook is probably not the way to go; that will get run\n*any* time you visit a c++ file. I'd suggest a trick that someone\n(Peter E. I think) recently pointed out to me: you can pattern-match on\nthe location of a source file in an auto-mode-alist pattern. So I've\nnow got this in my .emacs:\n\n; Cmd to set tab stops &etc for working with PostgreSQL code\n(defun pgsql-c-mode ()\n \"Set PostgreSQL C indenting conventions in current buffer.\"\n (interactive)\n (c-mode)\t\t\t; select major mode to customize\n (setq tab-width 4)\t\t; some people have weird ideas about tabs\n (c-set-style \"bsd\")\t\t; sets c-basic-offset to 4, plus other stuff\n (c-set-offset 'case-label '+)\t; tweak case indent to match PG custom\n)\n\n; Invoke pgsql-c-mode automatically when loading appropriate files\n(setq auto-mode-alist\n (cons '(\"\\\\`/users/postgres/.*\\\\.[ch]\\\\'\" . 
pgsql-c-mode)\n\t (cons '(\"\\\\`/users/tgl/pgsql/.*\\\\.[ch]\\\\'\" . pgsql-c-mode)\n\t\t auto-mode-alist)))\n\nwhich means that I get Postgres-customized C mode whenever I visit a\n.c or .h file under /users/postgres/ or /users/tgl/pgsql/ (customize\npaths to taste of course).\n\nYou can use the above defun as-is for Postgres code, and for your other\nproject you probably want a mode-set command like\n\n(defun foobar-cc-mode ()\n \"Set so-and-so's C++ indenting conventions in current buffer.\"\n (interactive)\n (c++-mode)\t\t\t\t; select major mode to customize\n (setq c-basic-offset 2)\n (setq indent-tabs-mode nil))\n\n(You might also want to call c-set-style if the other project doesn't\nuse GNU-flavor brace layout and so forth --- and you can get really down\nand dirty if you need to, by tweaking offsets for individual syntax\nelements. But that's a last resort if you have to adhere to an\nunconventional set of layout guidelines. c-set-style can get you pretty\nclose to all the popular formats I've heard of.)\n\nThis is already OK to invoke by hand, and then you can add some\nauto-mode-alist entries to invoke it automatically when you visit\nfiles with the right path prefix and suffix.\n\n> My emacs book got me this far, but the behavior seems to be a bit at\n> odds with their description and they don't give a specific example\n> covering this. Any hints would be *greatly* appreciated.\n\nThe online manual (see control-H I) is generally more up-to-date about\nhow your particular version works than any book is likely to be. I'm\nusing version 19, so I might be a little off about how version 20 works,\nbut I hadn't heard that they reworked the indent logic in any major way.\n\nIt'd be well worth your while to read the manual's section about\n\"Indentation for Programs\"... 
but in my copy, the last part about\ncustomizing indent tells you about \"styles\" last, where it probably\nshould tell you that first instead of presenting the low-level\nadjustments first...\n\nGood luck!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Dec 1999 13:04:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: emacs question "
},
{
"msg_contents": "<snip helpful intro which I'd like to think I already knew, at least\nmostly ;) >\n\n> > (setq indent-tab-mode nil))\n> ... Finally, as far as\n> suppressing use of tabs to do logical indentation, you've almost got \n> it right, but the variable is named \"indent-tabs-mode\".\n\nsheesh, that's it! I was going around in lots of tiny little\ncircles... :(\n\n> Also, if you want to have different conventions for different \n> projects, setting a c++-mode-hook is probably not the way to go; that \n> will get run *any* time you visit a c++ file. I'd suggest a trick \n> that someone (Peter E. I think) recently pointed out to me: you can \n> pattern-match on the location of a source file in an auto-mode-alist \n> pattern. So I've now got this in my .emacs:\n\nGreat. I had just been thinking about this, and now I don't have to\nfigure it out :))\n\nThanks.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 03 Dec 1999 20:15:10 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: emacs question"
}
] |
[
{
"msg_contents": "> I'm a geographic information systems (GIS) professional and a (home)\n> Linux user. After reading the documentation for the Geometric data\n> types in PostgreSQL, I'm excited about the possibilities. Are you\n> aware of any projects where the geometric data types in PostgreSQL are\n> being used as the basis of a GIS or mapping package?\n\nNot specifically, though I do know that folks have used it to do\nGIS-like things (e.g. given a location on the earth surface, identify\nsatellite tracks which are visible).\n\nThe best place to ask is on the Postgres mailing list(s); I'm cc'ing\nthe hackers list and you may want to inquire on one or two of the\nother lists too.\n\n> I'd like to know\n> if anyone's doing this and, if not, what development language would\n> you recommend for developing a mapping package using PostgreSQL.\n\nHmm. That's a hard one to answer without knowing more. If you need\ncompiled code, then C or C++ might be the best choice. But you might\nfind something like java or itcl lets you build a GUI app faster and\neasier.\n\nAn interesting possibility if you are developing in C or C++ is to\nconsider developing as a \"gnome-enabled\" app, which presumably gives\nyou a bunch of high level widgets to work with. It would also allow\nyou to Corba-ize your app to decouple the backend from the GUI.\n\n> Also, how difficult would it be to add a Z value to the X and Y\n> values to the data types' basic structure? This would allow the\n> storage of height data along with the coordinates.\n\nIt would be easy; you just need to figure out how you will be able to\nuse it. Things like comparison operators have a less intuitive meaning\nonce you go to 3D.\n\nLook at src/backend/utils/adt/geo.c for hints on how to deal with a\ngeometric data type. Also, look at contrib/ to see how to add a\ndatatype.\n\n> I use GRASS on my Linux system at home. GRASS is a (GPL'd) raster GIS\n> package. 
Open source vector GIS packages for Linux are, as far as I\n> know, nonexistent. Several commercial packages are available,\n> including ESRI's Arc/Info and ArcView (which I use at work). I'd like\n> to see an open source vector GIS package developed, perhaps based on\n> PostgreSQL's geometric data types.\n\nYou might also consider using something like ApplixWare, which has\nhooks into Postgres (via ODBC) and might have enough features and\npower to allow developing a package. The per-seat cost of ApplixWare\nis pretty low. It may be ODBC gets in the way of exposing the extended\nfeatures of Postgres though.\n\nGood luck.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 03 Dec 1999 19:57:15 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Geometric Data Type in PostgreSQL"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > I'm a geographic information systems (GIS) professional and a (home)\n> > Linux user. After reading the documentation for the Geometric data\n> > types in PostgreSQL, I'm excited about the possibilities. Are you\n> > aware of any projects where the geometric data types in PostgreSQL are\n> > being used as the basis of a GIS or mapping package?\n> \n\nI use it in applications for geographic purposes, not really as a basis\nof standalone, general purpose GIS systems. Mostly what I use it for\nis finding objects in a specific bounding box. \n\n> > I'd like to know\n> > if anyone's doing this and, if not, what development language would\n> > you recommend for developing a mapping package using PostgreSQL.\n> \n> Hmm. That's a hard one to answer without knowing more. If you need\n> compiled code, then C or C++ might be the best choice. But you might\n> find something like java or itcl lets you build a GUI app faster and\n> easier.\n\nJava would be a no go. For most purposes, it's fine, but iterating\nthrough hundreds/thousands of records that can be required on a map make\nit painfully slow at best. I wrote a prototype for a web based\nmap/database system using java with the JDBC driver at the time & I\nended up rewriting the map part in C and calling that from java. (I\nalso did a similar thing as a PHP extension - a C library called from\nPHP scripts, which is how its running now.)\n \n> > I use GRASS on my Linux system at home. GRASS is a (GPL'd) raster GIS\n> > package. Open source vector GIS packages for Linux are, as far as I\n> > know, nonexistent. Several commercial packages are available,\n> > including ESRI's Arc/Info and ArcView (which I use at work). I'd like\n> > to see an open source vector GIS package developed, perhaps based on\n> > PostgreSQL's geometric data types.\n\nHave you looked at what people are doing with Postgres & GRASS? 
I've\nseen something on the GRASS web site about the project, but I don't know\nhow serious people were about working on it or what they expected to do\nwith it. If you haven't seen it around, poke around a little deeper -\nit wasn't hidden that far.\n",
"msg_date": "Fri, 03 Dec 1999 15:57:39 -0600",
"msg_from": "Jeff Hoffmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Geometric Data Type in PostgreSQL"
},
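Jeff's bounding-box use case maps directly onto PostgreSQL's geometric types. A hedged sketch that only builds the SQL text — the table and column names are hypothetical, and the point-in-box containment operator is spelled `<@` in recent PostgreSQL releases (older releases used `@`), so adjust for your version:

```python
def bbox_query(table, point_col, x1, y1, x2, y2):
    """Build a query finding rows whose point column falls inside a
    bounding box, using PostgreSQL's geometric box literal syntax.
    Identifiers are illustrative, not from the thread."""
    box = "box '((%g,%g),(%g,%g))'" % (x1, y1, x2, y2)
    return "SELECT * FROM %s WHERE %s <@ %s" % (table, point_col, box)
```

The resulting string can be handed to any client library; with an R-tree/GiST index on the point column, the containment test can avoid scanning every row, which is what makes this usable on map-sized tables.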
{
"msg_contents": "On Fri, 3 Dec 1999, Thomas Lockhart wrote:\n\n> > I'm a geographic information systems (GIS) professional and a (home)\n> > Linux user. After reading the documentation for the Geometric data\n> > types in PostgreSQL, I'm excited about the possibilities. Are you\n> > aware of any projects where the geometric data types in PostgreSQL are\n> > being used as the basis of a GIS or mapping package?\n> \n> Not specifically, though I do know that folks have used it to do\n> GIS-like things (e.g. given a location on the earth surface, identify\n> satellite tracks which are visible).\n\nIsn't Peter Mount using PostgreSQL & JDBC for a GIS project? \n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 5 Dec 1999 18:28:33 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Geometric Data Type in PostgreSQL"
},
{
"msg_contents": "> Isn't Peter Mount using PostgreSQL & JDBC for a GIS project?\n\nAstronomical project afaik...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 06 Dec 1999 02:03:03 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Geometric Data Type in PostgreSQL"
}
] |
[
{
"msg_contents": "As I get more involved with this project, and just in general, I was\nthinking that it might be a good idea to have the SQL standards around.\nI understand that the standards organizations are selling those, but a\nquick search showed way too many documents at way too high prices in a way\ntoo far away locality.\n\nAre there any commercially available books that cover these as well to a\nreasonable extent? I guess I can live without the technical grammar specs\nif it shrinks volume and price. Of course an overview of actual\nimplementations (a.k.a. \"how does Oracle do it\") might be nice, too. I'm\nnot talking about any \"Intro to SQL\" books here, but the full deal. What\ndo you use? \n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sat, 4 Dec 1999 17:05:06 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Availability of SQL standards"
},
{
"msg_contents": "> As I get more involved with this project, and just in general, I was\n> thinking that it might be a good idea to have the SQL standards around.\n> I understand that the standards organizations are selling those, but a\n> quick search showed way too many documents at way too high prices in a way\n> too far away locality.\n> \n> Are there any commercially available books that cover these as well to a\n> reasonable extent? I guess I can live without the technical grammar specs\n> if it shrinks volume and price. Of course an overview of actual\n> implementations (a.k.a. \"how does Oracle do it\") might be nice, too. I'm\n> not talking about any \"Intro to SQL\" books here, but the full deal. What\n> do you use? \n> \n\nI have <I>A Guide to the SQL Standard,</I> by C.J. Date, et. al,\nAddison, Wesley\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Dec 1999 14:01:39 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Availability of SQL standards"
},
{
"msg_contents": "Peter Eisentraut wrote:\n >As I get more involved with this project, and just in general, I was\n >thinking that it might be a good idea to have the SQL standards around.\n >I understand that the standards organizations are selling those, but a\n >quick search showed way too many documents at way too high prices in a way\n >too far away locality.\n >\n >Are there any commercially available books that cover these as well to a\n >reasonable extent? I guess I can live without the technical grammar specs\n >if it shrinks volume and price. Of course an overview of actual\n >implementations (a.k.a. \"how does Oracle do it\") might be nice, too. I'm\n >not talking about any \"Intro to SQL\" books here, but the full deal. What\n >do you use? \n\nI have \"SQL - The Standard Handbook\" by SJ Cannan and GAM Otten, published\nby McGraw-Hill 1993. ISBN: 0-07-707664-8. It cost me 35 pounds, 5 years\nago. It covers SQL-92 and it contains an appendix with the syntax in\nBNF notation.\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Behold, happy is the man whom God correcteth. \n Therefore despise thou not the chastening of the \n Almighty.\" Job 5:17 \n\n\n",
"msg_date": "Sat, 04 Dec 1999 20:10:23 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Availability of SQL standards "
},
{
"msg_contents": "> What do you use?\n\nAs does Bruce, I use the Date book. One nice feature of the newest\neditions is that there is mention of SQL3 in an appendix.\n\nAlso, we have a 1992 draft version of the SQL92 standard which seems\nto match up pretty well with the final release. I can send copies if\nyou would like...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 06 Dec 1999 01:53:28 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Availability of SQL standards"
}
]
[
{
"msg_contents": "I talked to Marc several months ago about the possibility of setting up\na system for raising money from commercial users of PostgreSQL kind of\nlike the \"Street Performer Protocol\" explained at:\n\nhttp://www.counterpane.com/street_performer.html\n\nIt would basically allow people to view the todo list and to offer a bid\non various features and enhancements. At the same time, developers (you\nguys) could bid on doing the actual work. When the bid pool on a\ncertain enhancement got large enough to fund a work bid, the enhancement\ncould be done and the developer could actually get paid for his work.\n\nAs a commercial user of PostgreSQL, I would be interested in throwing a\ncertain number of $$ at various problems/features of the software. If\nothers feel the same way, maybe we could accelerate the development\nprocess and make it more fun for the people doing the actual work.\n\nIs anyone interested in this kind of thing? Or are financial interests\nnot really the objective for most of the developers?",
"msg_date": "Sat, 04 Dec 1999 12:35:52 -0700",
"msg_from": "Kyle Bateman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Raising funds for PostgreSQL"
},
{
"msg_contents": "> As a commercial user of PostgreSQL, I would be interested in throwing a\n> certain number of $$ at various problems/features of the software. If\n> others feel the same way, maybe we could accelerate the development\n> process and make it more fun for the people doing the actual work.\n> \n> Is anyone interested in this kind of thing? Or are financial interests\n> not really the objective for most of the developers?\n\nInteresting idea.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Dec 1999 15:08:59 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
},
{
"msg_contents": "While I have no doubt such an idea for $$ for open source code development\nwould fly, my initial reaction would be opposed to it because I think the\noverall effect would be to pollute the long-term purity of the open source\npgsql development effort by incenting short-sighted development to earn\ncash. Quality controls can never keep up. Here's a bit more on my\nperspective...\n\n[long-winded self-important soapboxing philosophical flame suit on]\n\nI am a software developer who has benefitted tremendously from open source\nsoftware. Linux, Apache, Perl, SSL, PostgreSQL, GIMP...hundreds if not\nthousands of pieces of open-source software. I have come to realize this\nat a very gut level. We're talking about many, many years (hundreds?\nthousands?) of labor via open source. Though not in cash, I have been\n\"paid\" in value many times over via open source software. As a result of a\nclear understanding that I along with everyone else will ultimately\nbenefit, I have a very personal and growing commitment to give back to the\nopen source movement. I do that by spending some of my personal time\nhelping others use these tools via forums, newsgroups, etc., by providing\nfeedback to developers, by providing bug fixes/patches/enhancements to open\nsource software where I can, and by generating open source software for\nothers.\n\nWhat strikes me about open source development is that it is some of the\ncleanest, purest development around. By that, I mean only that there is\nfar less of the sense of \"doing the absolute minimum to get the money\" that\nis so prevalent in private for-profit corporations who will live or die by\nnext quarter's results and the short-term assessment of value-add by\nshareholders. 
If you hang around these open source development forums for\nvery long and know much about how software design decisions often get made\nin the corporate software world, you will notice a powerful design slant\ntoward longer-term vision as opposed to the tyranny of the urgent that\nusually presides in the corporate environment. While I love the free\nmarket and see a lot of validity to the free market theory behind the\nmarket-driven corporate software development, I also think it is quite rare\nthat software developed under such circumstances has such benefit to the\nworld at large while also having the exponential opportunities to grow and\nspread by the worldwide efforts of others. MS Excel is, IMO, probably the\nmost productive piece of software in existence. But it will always be\nconstrained by the fortunes of Microsoft or the holder of the intellectual\nproperty rights. Open source, on the other hand, has the potential to\npropagate indefinitely to benefit people everywhere indefinitely, because\nit is free to grow. Both approaches seem valid/important, and I think\nthere is a needed balance between proprietary nature of the private sector,\nand the open source movement. It is a fine balance, and it is none too\nclear on how to best draw it. It is a very complex issue with many sides.\nPersonally, I think it's quite related to many of the great debates of all\ntime, such as capitalism vs socialism vs communism, even the inherent\nnature of human kind. But I digress. :)\n\nBut what is clear to me is that there is a lot of open source decision\nmaking going on which is in the public's best interest. 
I don't have much\nconfidence that this would remain so if the proposal below played out.\n\n[flame suit off]\n\nCheers.\nEd\n\n\nKyle Bateman wrote:\n\n> I talked to Marc several months ago about the possibility of setting up\n> a system for raising money from commercial users of PostgreSQL kind of\n> like the \"Street Performer Protocol\" explained at:\n>\n> http://www.counterpane.com/street_performer.html\n>\n> It would basically allow people to view the todo list and to offer a bid\n> on various features and enhancements. At the same time, developers (you\n> guys) could bid on doing the actual work. When the bid pool on a\n> certain enhancement got large enough to fund a work bid, the enhancement\n> could be done and the developer could actually get paid for his work.\n>\n> As a commercial user of PostgreSQL, I would be interested in throwing a\n> certain number of $$ at various problems/features of the software. If\n> others feel the same way, maybe we could accelerate the development\n> process and make it more fun for the people doing the actual work.\n>\n> Is anyone interested in this kind of thing? Or are financial interests\n> not really the objective for most of the developers?\n\n",
"msg_date": "Sat, 04 Dec 1999 14:48:11 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
},
{
"msg_contents": "hi..\n\n> While I have no doubt such an idea for $$ for open source code development\n> would fly, my initial reaction would be opposed to it because I think the\n> overall effect would be to pollute the long-term purity of the open source\n> pgsql development effort by incenting short-sighted development to earn\n> cash. Quality controls can never keep up. Here's a bit more on my\n> perspective...\n\n<SNIP>\n\n> But what is clear to me is that there is a lot of open source decision\n> making going on which is in the public's best interest. I don't have much\n> confidence that this would remain so if the proposal below played out.\n\nthe solution to this \"problem\" is probably ridiculously obvious, but here goes\nanyways:\n\n only items that are put on the to-do list by the developers can be bid on.\n\nin other words, it's stuff we'd get around to anyways... the $ incentive would\nmerely motivate us to get to it sooner...\n\nhell, you could even put time/quality conditions on it... e.g. feature Y cannot\nbe commenced until feature X is completed (and if feature X is boring and dull,\nperhaps it will sit for a long time undone, unless there are other incentives\n(read: $) and therefore prolong the absence of much desired and sexy feature Y).\n\nof course, before anyone gets paid, it has to be accepted into the mainstream\npackage. \n\nhow's that?\n\nthe only issue i'd see is that the current licensing scheme provides little\nprotection to the financial donors' investment (e.g. they could be financing\nsomeone else's commercial development). but that's their problem...\n\n-- \nAaron J. Seigo\nSys Admin\n",
"msg_date": "Sat, 4 Dec 1999 14:08:38 -0700",
"msg_from": "\"Aaron J. Seigo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
},
{
"msg_contents": "> While I have no doubt such an idea for $$ for open source code development\n> would fly, my initial reaction would be opposed to it because I think the\n> overall effect would be to pollute the long-term purity of the open source\n> pgsql development effort by incenting short-sighted development to earn\n> cash. Quality controls can never keep up. Here's a bit more on my\n> perspective...\n\nMy assumption is that the bids are open only to proven PostgreSQL\ndevelopers, and that the quality of the work must meet the same\nstandards we use for all our patches. All our patches are\npeer-reviewed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Dec 1999 16:09:09 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
},
{
"msg_contents": "On Sat, 4 Dec 1999, Kyle Bateman wrote:\n\n> I talked to Marc several months ago about the possibility of setting up\n> a system for raising money from commercial users of PostgreSQL kind of\n> like the \"Street Performer Protocol\" explained at:\n> \n> http://www.counterpane.com/street_performer.html\n> \n> It would basically allow people to view the todo list and to offer a bid\n> on various features and enhancements. At the same time, developers (you\n> guys) could bid on doing the actual work. When the bid pool on a\n> certain enhancement got large enough to fund a work bid, the enhancement\n> could be done and the developer could actually get paid for his work.\n> \n> As a commercial user of PostgreSQL, I would be interested in throwing a\n> certain number of $$ at various problems/features of the software. If\n> others feel the same way, maybe we could accelerate the development\n> process and make it more fun for the people doing the actual work.\n> \n> Is anyone interested in this kind of thing? Or are financial interests\n> not really the objective for most of the developers?\n\nUmmm...read point 2 at: http://www.pgsql.com/mission.html\n\nWe should probably create a separate, more visible link, but see the\nContribute item at:\n\n\thttp://www.pgsql.com/products.html\n\nAnd, we even \"advertise\" the contributors:\n\n\thttp://www.pgsql.com/contributors.html\n\nAnd note...this has all been here for months...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 5 Dec 1999 18:33:21 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
},
{
"msg_contents": "On Sat, 4 Dec 1999, Bruce Momjian wrote:\n\n> > While I have no doubt such an idea for $$ for open source code development\n> > would fly, my initial reaction would be opposed to it because I think the\n> > overall effect would be to pollute the long-term purity of the open source\n> > pgsql development effort by incenting short-sighted development to earn\n> > cash. Quality controls can never keep up. Here's a bit more on my\n> > perspective...\n> \n> My assumption is that the bids are open only to proven PostgreSQL\n> developers, and that the quality of the work must meet the same\n> standards we use for all our patches. All our patches are\n> peer-reviewed.\n\nActually, this is what PostgreSQL, Inc was formed for many months\nback...there was, at one time, a URL on our page, and an email message\nsent out by myself, explaining this whole \"contribute toward\nfeatures\" aspect of things, but, going through www.pgsql.com, it appears\nto have been trim'd out at some point :(\n\nWill go through my old mailboxes and see if I can find this again, but the\ngist of the concept was that we have a contributions section on our web\npage, which is currently a part of the products page...if you click on it,\nthe order form you get presented includes a list of those features where\nyou can contribute towards having that feature added...\n\nThe features that are listed right now are short, but if something isn't\nlisted, email us and we'll add it to the list...\n\nThe features listed have to be something on the TODO list...if not, it has\nto be proposed to the -hackers list and added to the TODO list. \n\nI'm searching through my old email right now to try and find the original\nmessage on this, but this has both been discussed already *and*\nimplemented...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n",
"msg_date": "Sun, 5 Dec 1999 18:46:42 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
},
{
"msg_contents": "\nOn 05-Dec-99 The Hermit Hacker wrote:\n> Ummm...read point 2 at: http://www.pgsql.com/mission.html\n> \n> We should probably create a seperate, more visible link, but see the\n> Contribute item at:\n> \n> http://www.pgsql.com/products.html\n> \n> And, we even \"advertise\" the contributors:\n> \n> http://www.pgsql.com/contributors.html\n> \n> And note...this has all be here for months...\n\nPerhaps I should add something under the \"Helping Us\" link on the\ndeveloper's website?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Sun, 05 Dec 1999 17:51:02 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
},
{
"msg_contents": "On Sun, 5 Dec 1999, Vince Vielhaber wrote:\n\n> \n> On 05-Dec-99 The Hermit Hacker wrote:\n> > Ummm...read point 2 at: http://www.pgsql.com/mission.html\n> > \n> > We should probably create a seperate, more visible link, but see the\n> > Contribute item at:\n> > \n> > http://www.pgsql.com/products.html\n> > \n> > And, we even \"advertise\" the contributors:\n> > \n> > http://www.pgsql.com/contributors.html\n> > \n> > And note...this has all be here for months...\n> \n> Perhaps I should add something under the \"Helping Us\" link on the\n> developer's website?\n\nthat sounds good...Jeff and I are going to re-work the contribute stuff,\nsince its pretty hidden right now :( will announce something this week on\nit...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 5 Dec 1999 19:18:10 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
},
{
"msg_contents": "Interesting article relevant to this thread...\n\nhttp://www.linuxplanet.com/linuxplanet/newss/1316/1/\n\n\n SourceXchange Goes Live\n Hewlett-Packard Among Initial Users of the New Service\n\n Kevin Reichard\n\n \"Collab.Net announced today that it has finished its\nbeta-test phase and gone into production with its first service,\nsourceXchange, a marketplace where open-source developers match their\nexpertise to committed buyers with well-defined, financially backed\nopen-source projects.\"\n\n\n",
"msg_date": "Wed, 08 Dec 1999 14:06:33 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
}
]
[
{
"msg_contents": "Hello,\n\nI was just wondering if there were any dates the major\ndevelopers had in mind as to when current will be released\nas a beta release? For my trivial part, I still have to send\nin a patch to allow pg_dump to dump COMMENT ON commands for\nany descriptions the user might have created and was\nwondering if any time frame had been established.\n\nJust curious,\n\nMike\n\n\n\n",
"msg_date": "Sat, 04 Dec 1999 20:07:31 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "When is 7.0 going Beta?"
},
{
"msg_contents": "\nNone set in stone at this time...mid-Feb, I believe, was the last that was\nthrown around...\n\nOn Sat, 4 Dec 1999, Mike Mascari wrote:\n\n> Hello,\n> \n> I was just wondering if there were any dates the major\n> developers had in mind as to when current will be released\n> as a beta release? For my trivial part, I still have to send\n> in a patch to allow pg_dump to dump COMMENT ON commands for\n> any descriptions the user might have created and was\n> wondering if any time frame had been established.\n> \n> Just curious,\n> \n> Mike\n> \n> \n> \n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 5 Dec 1999 18:35:10 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "> I was just wondering if there were any dates the major\n> developers had in mind as to when current will be released\n> as a beta release? For my trivial part, I still have to send\n> in a patch to allow pg_dump to dump COMMENT ON commands for\n> any descriptions the user might have created and was\n> wondering if any time frame had been established.\n\nMy recollection was that February has been discussed. But since it is\na major rev bump we would rather get the features which (perhaps) have\nbeen waiting for a major rev, and these big changes may influence the\nbeta date...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 06 Dec 1999 01:58:50 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > I was just wondering if there were any dates the major\n> > developers had in mind as to when current will be released\n> > as a beta release? For my trivial part, I still have to send\n> > in a patch to allow pg_dump to dump COMMENT ON commands for\n> > any descriptions the user might have created and was\n> > wondering if any time frame had been established.\n> \n> My recollection was that February has been discussed. But since it is\n> a major rev bump we would rather get the features which (perhaps) have\n> been waiting for a major rev, and these big changes may influence the\n> beta date...\n\nSeems that I'll be able to return to WAL development in Feb\nonly. So, good beta date for me is Apr/May... \nSorry, but... life is life.\n\nVadim\n",
"msg_date": "Mon, 06 Dec 1999 10:33:44 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Seems that I'll be able to return to WAL development in Feb\n> only. So, good beta date for me is Apr/May... \n> Sorry, but... life is life.\n\nI'm not anywhere near \"ready\" either, at least if \"ready\" means all the\nstuff I'd hoped to get done for 7.0.\n\nIf we want to do a schedule-driven release around Feb, fine, but let's\ncall it 6.6. 7.0 ought to be driven by features not calendar.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 05 Dec 1999 23:02:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta? "
},
{
"msg_contents": "On Sun, 5 Dec 1999, Tom Lane wrote:\n\n> Vadim Mikheev <[email protected]> writes:\n> > Seems that I'll be able to return to WAL development in Feb\n> > only. So, good beta date for me is Apr/May... \n> > Sorry, but... life is life.\n> \n> I'm not anywhere near \"ready\" either, at least if \"ready\" means all the\n> stuff I'd hoped to get done for 7.0.\n> \n> If we want to do a schedule-driven release around Feb, fine, but let's\n> call it 6.6. 7.0 ought to be driven by features not calendar.\n\nI believe that we've already agreed that there are features going into the\nsource tree currently that are prompting a 7.0 release...I have no qualms\nat all about holding off the next release until Apr/May, as long as we\ncontinue with what we've started, and that is back-patching appropriate\nchanges to the -STABLE source tree and putting out a minor release\nperiodically...\n\nI'd like to stick with the next \"full release\" being 7.0 ...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 6 Dec 1999 00:30:02 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta? "
},
{
"msg_contents": "> Vadim Mikheev <[email protected]> writes:\n> > Seems that I'll be able to return to WAL development in Feb\n> > only. So, good beta date for me is Apr/May... \n> > Sorry, but... life is life.\n> \n> I'm not anywhere near \"ready\" either, at least if \"ready\" means all the\n> stuff I'd hoped to get done for 7.0.\n> \n> If we want to do a schedule-driven release around Feb, fine, but let's\n> call it 6.6. 7.0 ought to be driven by features not calendar.\n\nI am concerned about a May release. That puts us at almost a year from\nthe last major release in mid-June. That is too long. Seems like we\nshould have some release around February.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 6 Dec 1999 08:35:27 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "> I am concerned about a May release. That puts us at almost a year from\n> the last major release in mid-June. That is too long. Seems like we\n> should have some release around February.\n\nLet's list the 7.0 items:\n\n\t Foreign Keys - Jan\n\t WAL - Vadim\n\t Function args - Tom\n\t System indexes - Bruce\n\t Date/Time types - Thomas\n\t Optimizer - Tom\n\t\n\t Outer Joins - Thomas?\n\t Long Tuples - ?\n\nNone of these are done, except for the system indexes, and that is a\nsmall item. It seems everyone wants a grand 7.0, but that is months\naway.\n\nI propose we go into beta on 6.6 Jan 1, with final release Feb 1. We\ncertainly have enough for a 6.6 release.\n\nI recommend this so the 6.5.* enhancements are accessible to users now,\nrather than waiting another several months while we add the above fancy\nfeatures.\n\nAlso, I have never been a big fan of huge, fancy releases because they\ntake too long to become stable. Better for us to release what we have\nnow and work out those kinks.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 6 Dec 1999 17:58:35 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "\n\nWhat do we have now for a v6.6? I'm not against, just wondering if we\nhave enough to warrant a v6.6, that's all...\n\nOn Mon, 6 Dec 1999, Bruce Momjian wrote:\n\n> > I am concerned about a May release. That puts us at almost a year from\n> > the last major release in mid-June. That is too long. Seems like we\n> > should have some release around February.\n> \n> Let's list the 7.0 items:\n> \n> \t Foreign Keys - Jan\n> \t WAL - Vadim\n> \t Function args - Tom\n> \t System indexes - Bruce\n> \t Date/Time types - Thomas\n> \t Optimizer - Tom\n> \t\n> \t Outer Joins - Thomas?\n> \t Long Tuples - ?\n> \n> None of these are done, except for the system indexes, and that is a\n> small item. It seems everyone wants a grand 7.0, but that is months\n> away.\n> \n> I propose we go into beta on 6.6 Jan 1, with final release Feb 1. We\n> certainly have enough for a 6.6 release.\n> \n> I recommend this so the 6.5.* enhancements are accessible to users now,\n> rather than waiting another several months while we add the above fancy\n> features.\n> \n> Also, I have never been a big fan of huge, fancy releases because they\n> take too long to become stable. Better for us to release what we have\n> now and work out those kinks.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 6 Dec 1999 19:21:12 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "Bruce Momjian wrote:\n>\n> > I am concerned about a May release. That puts us at almost a year from\n> > the last major release in mid-June. That is too long. Seems like we\n> > should have some release around February.\n>\n> Let's list the 7.0 items:\n> [...]\n> None of these are done, except for the system indexes, and that is a\n> small item. It seems everyone wants a grand 7.0, but that is months\n> away.\n>\n> I propose we go into beta on 6.6 Jan 1, with final release Feb 1. We\n> certainly have enough for a 6.6 release.\n\n#define READ_BETWEEN_LINES true\n\n THAT'S MY CHANCE :-)\n\n Let's not call it 6.6, instead it should read 6.6.6 - the\n BEASTS release. That number could probably make serious\n database users/admins look somewhat more careful at the\n release notes.\n\n> Also, I have never been a big fan of huge, fancy releases because they\n> take too long to become stable. Better for us to release what we have\n> now and work out those kinks.\n\n#define READ_BETWEEN_LINES false\n\n With all the PARTIALLY developed and COMMITTED fancy 7.0\n features inside, do you really think that release would be\n easy to get stable? I fear the partial features we already\n have inside lead to a substantial increase in mailing list\n traffic.\n\n As far as I've read the responses, the users community called\n 6.5 one of the best releases ever made. Many nice, new\n features and an outstanding quality WRT reliability and\n performance. Never underestimate the users community hearsay\n in open source - don't play with our reputation!\n\n If we really go for a 6.6 release, we need to branch off from\n the 6.5 tree and backpatch things we want to have in 6.6 into\n there. Releasing some snapshot of the current 7.0 tree as 6.6\n IMHO is a risk we cannot estimate.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. 
#\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 7 Dec 1999 00:47:08 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> If we really go for a 6.6 release, we need to branch off from\n> the 6.5 tree and backpatch things we want to have in 6.6 into\n> there. Releasing some snapshot of the current 7.0 tree as 6.6\n> IMHO is a risk we cannot estimate.\n\nI agree with Jan that we can't just fire something out the door based\non current sources. There are enough poorly-tested or half-done\nchanges in there that we'd need a long beta cycle to have much\nconfidence in it. I think this would drain an unreasonable amount of\ndeveloper time that would be better spent on finishing the half-done\nstuff ... and *then* starting a long beta cycle ;-).\n\nWe are at a midflight point now. We don't have enough stuff done to\nput out a 7.0, but neither are we close enough to the last release\nto put out something that would fairly be called 6.6. I think users\nwould expect a \"6.6\" to offer small improvements featurewise and\ncontinue in the trend of improving stability/performance. I'm not\nsure I could promise that a 6.6 would be more stable than 6.5. At\nleast not without that long beta cycle.\n\nWe can continue to make 6.5.* releases if any critical problems come up,\nbut that again is a drain on developer time, so I'm not enthused about\nmaking 6.5.* releases unless necessary.\n\nIn short, I'd rather continue on the present course and not be overly\nconcerned about how long it takes to get it right. Schedule-driven\ndecisions are usually wrong decisions, in my experience.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Dec 1999 20:10:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta? "
},
{
"msg_contents": "\nI personally agree with Jan on this...I think most users have found that\nour releases have been \"worth the wait\", and altho we're askign them to\nwait a little bit longer then normal, we *are* addressing problems with he\ncurrent release by putting out 6.5.x's as required, *and* we are highly\nvisible.\n\nunlike some projects out there (gcc's \"past\" coming to mind), we have a\nhighly active mailing list where developers are constantly putting out,\nand discussing, news ideas...the end user sees this, and with what is on\nthe todo list, 7.0 will be *more* worth the wait then our past\nreleases...7.0 is looking to be our *biggest* release yet, a little more\ntime will be required on this one...\n\n\n\nOn Tue, 7 Dec 1999, Jan Wieck wrote:\n\n> Bruce Momjian wrote:\n> >\n> > > I am concerned about a May release. That puts us at almost a year from\n> > > the last major release in mid-June. That is too long. Seems like we\n> > > should have some release around February.\n> >\n> > Let's list the 7.0 items:\n> > [...]\n> > None of these are done, except for the system indexes, and that is a\n> > small item. It seems everyone wants a grand 7.0, but that is months\n> > away.\n> >\n> > I propose we go into beta on 6.6 Jan 1, with final release Feb 1. We\n> > certainly have enough for a 6.6 release.\n> \n> #define READ_BETWEEN_LINES true\n> \n> THAT'S MY CHANCE :-)\n> \n> Let's not call it 6.6, instead it should read 6.6.6 - the\n> BEASTS release. That number could probably make serious\n> database users/admins look somewhat more careful at the\n> release notes.\n> \n> > Also, I have never been a big fan of huge, fancy releases because they\n> > take too long to become stable. Better for us to release what we have\n> > now and work out those kinks.\n> \n> #define READ_BETWEEN_LINES false\n> \n> With all the PARTIALLY developed and COMMITTED fancy 7.0\n> features inside, do you really think that release would be\n> easy to get stable? 
I fear the partial features we already\n> have inside lead to a substantial increase in mailing list\n> traffic.\n> \n> As far as I've read the responses, the users community called\n> 6.5 one of the best releases ever made. Many nice, new\n> features and an outstanding quality WRT reliability and\n> performance. Never underestimate the users community hearsay\n> in open source - don't play with our reputation!\n> \n> If we really go for a 6.6 release, we need to branch off from\n> the 6.5 tree and backpatch things we want to have in 6.6 into\n> there. Releasing some snapshot of the current 7.0 tree as 6.6\n> IMHO is a risk we cannot estimate.\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #========================================= [email protected] (Jan Wieck) #\n> \n> \n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 6 Dec 1999 21:57:39 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "> \n> I personally agree with Jan on this...I think most users have found that\n> our releases have been \"worth the wait\", and altho we're askign them to\n> wait a little bit longer then normal, we *are* addressing problems with he\n> current release by putting out 6.5.x's as required, *and* we are highly\n> visible.\n> \n> unlike some projects out there (gcc's \"past\" coming to mind), we have a\n> highly active mailing list where developers are constantly putting out,\n> and discussing, news ideas...the end user sees this, and with what is on\n> the todo list, 7.0 will be *more* worth the wait then our past\n> releases...7.0 is looking to be our *biggest* release yet, a little more\n> time will be required on this one...\n\nOK, seeing as everyone disagrees with me...\n\nOther than WAL, what else is half-completed and installed?\n\n Foreign Keys - Jan\n WAL - Vadim\n Function args - Tom\n Date/Time types - Thomas\n Optimizer - Tom\n\n Outer Joins - Thomas?\n Long Tuples - ?\n\nI guess I am wondering, other than WAL, what makes our current state\nany worse than the time before previous beta cycles\n\nI am very hesitant about our \"one big release\" thing coming? If we wait\nfor everything to get done, we would never have a release.\n\nThe more items in a release, the longer the beta cycle. If you wait too\nlong, you are fixing code you wrote 6 months ago, and that makes it very\nhard. Smaller releases where the code is relatively fresh and the\nadditional features minimal are cleaner, faster betas.\n\nThe concern about a release sapping our energy when we should be adding\ncode is valid, but this delays how soon users can use the items we\n_have_ finished.\n\nOur 6.5.* subreleases are actually allowing us to take longer between\nreleases because we don't have to fix that _major_ bug a new release. 
\nWe fix it in the subrelease.\n\nObviously, no one agrees with me, so it looks like May.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 6 Dec 1999 22:00:26 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> OK, seeing as everyone disagrees with me...\n\n BADDAY.MPG?\n\n Wasn't that I completely disagreed. Just that in the actual\n case, I don't think we can make another release out of the\n current tree. And about that I'm not alone.\n\n> Other than WAL, what else is half-completed and installed?\n>\n> Foreign Keys - Jan\n> WAL - Vadim\n> Function args - Tom\n> Date/Time types - Thomas\n> Optimizer - Tom\n>\n> Outer Joins - Thomas?\n> Long Tuples - ?\n>\n> I guess I am wondering, other than WAL, what makes our current state\n> any worse than the time before previous beta cycles\n\n At least FK. Just found myself a bug that makes things ugly.\n Forces CKECK lookup in PK table even if FK values didn't\n change on trigger invocation - otherwise someone could cause\n RI violation not recognized by deleting DEFAULT key values of\n FK from PK table.\n\n That's IMHO something so easily to trigger, that I expect\n more bugs in the stuff.\n\n Since FOREIGN KEY is a feature so many ppl asked for, I\n expect many of them penetrating the lists if things go bad -\n what's really possible for now. Thus, I wouldn't feel\n comfortable if it goes out in this state.\n\n> I am very hesitant about our \"one big release\" thing coming? If we wait\n> for everything to get done, we would never have a release.\n>\n> The more items in a release, the longer the beta cycle. If you wait too\n> long, you are fixing code you wrote 6 months ago, and that makes it very\n> hard. Smaller releases where the code is relatively fresh and the\n> additional features minimal are cleaner, faster betas.\n\n Hmmm,\n\n yes and no. Yes, the more items the longer beta. But no, the\n longer beta the better the release.\n\n The last release had the longest of all beta delays. And it\n was the best of all releases. Maybe because while feature A\n prevents release, another bug in feature H shows up while B-G\n are silent, but after fixing H's bug F shout's \"stop\". 
Not\n surprising - (real) programmers experience.\n\n The worst thing we can do is to release FEATURES, that are\n current development state and must be deprecated because they\n interfere with ongoing development. I just saw that someone\n added some kind of subselect inside or the targetlist - and I\n absolutely don't know if that will stand the test of time\n (i.e. issues WRT the rewriter MIGHT be more important). So\n that syntax might need to be removed/changed in 7.0 or a\n subsequent release again.\n\n> The concern about a release sapping our energy when we should be adding\n> code is valid, but this delays how soon users can use the items we\n> _have_ finished.\n\n I don't see any (of the important) issues finished.\n\n> Obviously, no one agrees with me, so it looks like May.\n\n At least not me.\n\n May? Vadim said he could probably come back to finish WAL by\n April/May. And I'm so bored about the tuple split stuff up\n to now, that I don't know if I should start on it for 7.0\n right now. So May (optimistic) might be start of BETA - not\n release.\n\n And AFAIK, if we start beta that long after our last release,\n the next wouldn't be out before July.\n\n That would give me enough time for the other thing (Bruce\n knows what I'm talking about).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 7 Dec 1999 04:55:50 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "> \n> \n> What do we have now for a v6.6? I'm not against, just wondering if we\n> have enough to warrant a v6.6, that's all...\n> \n\nJust from completed TODO list items I have many. This doesn't count the\nnon-list items like the psql rewrite and other big stuff that never made\nit to this list. If this gets people interested, I can generate a full\nlog dump to show the items.\n\n\n* -Recover or force failure when disk space is exhausted(Hiroshi)\n* -INSERT INTO ... SELECT with AS columns matching result columns problem\n* -Select a[1] FROM test fails, it needs test.a[1](Tom)\n* -Array index references without table name cause problems [array](Tom)\n* -INSERT ... SELECT ... GROUP BY groups by target columns not source columns(Tom)\n* -CREATE TABLE test (a char(5) DEFAULT text '', b int4) fails on INSERT(Tom)\n* -UNION with LIMIT fails\n* -CREATE TABLE x AS SELECT 1 UNION SELECT 2 fails\n* -CREATE TABLE test(col char(2) DEFAULT user) fails in length restriction\n* -mismatched types in CREATE TABLE ... DEFAULT causes problems [default]\n* -select * from pg_class where oid in (0,-1)\n* -SELECT COUNT('asdf') FROM pg_class WHERE oid=12 crashes\n* -require SELECT DISTINCT target list to have all ORDER BY columns\n* -When using aggregates + GROUP BY, no rows in should yield no rows out(Tom)\n* -Allow HAVING to use comparisons that have no aggregates(Tom)\n* -Eliminate limits on query length\n* -Fix memory leak for aggregates(Tom)\n* -Allow compression of large fields or a compressed field type\n* -Allow pg_descriptions when creating tables\n* -Allow pg_descriptions when creating types, columns, and functions\n* -Add index on NUMERIC/DECIMAL type(Jan)\n* -Move LIKE index optimization handling to the optimizer(Tom)\n* -Allow psql \\copy to allow delimiters\n* -Add a function to return the last inserted oid, for use in psql scripts\n* -Allow psql to print nulls as distinct from \"\" [null]\n* -Certain indexes will not shrink, i.e. 
oid indexes with many inserts(Vadim)\n* -Allow WHERE restriction on ctid(Hiroshi)\n* -Transaction log, so re-do log can be on a separate disk by\n* -Allow subqueries in target list\n* -Overhaul mdmgr/smgr to fix double unlinking and double opens, cleanup\n* -Allow transaction commits with rollback with no-fsync performance [fsync](Vadim)\n* -Prevent fsync in SELECT-only queries(Vadim)\n* -Convert function(constant) into a constant for index use(Tom)\n* -Make index creation use psort code, because it is now faster(Vadim)\n* -Allow creation of sort temp tables > 1 Gig\n* -Allow optimizer to prefer plans that match ORDER BY(Tom)\n* -Fix memory exhaustion when using many OR's [cnfify](Tom)\n* -Process const = const parts of OR clause in separate pass(Tom)\n* -Add needed includes and removed unneeded include files(Bruce)\n* -Make configure --enable-debug add -g on compile line\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 6 Dec 1999 23:12:30 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "\nHere's my \"fear\"...we go beta on jan 1st, for a 6.6 ... if last beta was\nany indication, we're talking 2-3 months of beta, so March/April before we\ndo a release, which disrupts everything that those working on v7.0\nfeatures is working on...plus, while debugging, it distracts them from\nthat work.\n\nIf we wait to break into Beta around June 1st, we're still going tobe\ntalking 2-3mos of beta work, max, so Sept 1 or so for a release...\n\nWe're to the point of \"competing with the big boys\"...how often does\nOracle release? \n\nHow about this? Out of the list below, how many can be *safely*\nback-patched to the -STABLE tree? The kind of thing where ppl are\n*confident* enough that we could put out a v6.5.4 based on those patches?\n\nI would rather see the *safe* ones backpatched and a v6.5.4 released, then\ntie up ppl working v7.0 features with debugging an interim release, but\nthat's just me :(\n\n\nOn Mon, 6 Dec 1999, Bruce Momjian wrote:\n\n> > \n> > \n> > What do we have now for a v6.6? I'm not against, just wondering if we\n> > have enough to warrant a v6.6, that's all...\n> > \n> \n> Just from completed TODO list items I have many. This doesn't count the\n> non-list items like the psql rewrite and other big stuff that never made\n> it to this list. If this gets people interested, I can generate a full\n> log dump to show the items.\n> \n> \n> * -Recover or force failure when disk space is exhausted(Hiroshi)\n> * -INSERT INTO ... SELECT with AS columns matching result columns problem\n> * -Select a[1] FROM test fails, it needs test.a[1](Tom)\n> * -Array index references without table name cause problems [array](Tom)\n> * -INSERT ... SELECT ... 
GROUP BY groups by target columns not source columns(Tom)\n> * -CREATE TABLE test (a char(5) DEFAULT text '', b int4) fails on INSERT(Tom)\n> * -UNION with LIMIT fails\n> * -CREATE TABLE x AS SELECT 1 UNION SELECT 2 fails\n> * -CREATE TABLE test(col char(2) DEFAULT user) fails in length restriction\n> * -mismatched types in CREATE TABLE ... DEFAULT causes problems [default]\n> * -select * from pg_class where oid in (0,-1)\n> * -SELECT COUNT('asdf') FROM pg_class WHERE oid=12 crashes\n> * -require SELECT DISTINCT target list to have all ORDER BY columns\n> * -When using aggregates + GROUP BY, no rows in should yield no rows out(Tom)\n> * -Allow HAVING to use comparisons that have no aggregates(Tom)\n> * -Eliminate limits on query length\n> * -Fix memory leak for aggregates(Tom)\n> * -Allow compression of large fields or a compressed field type\n> * -Allow pg_descriptions when creating tables\n> * -Allow pg_descriptions when creating types, columns, and functions\n> * -Add index on NUMERIC/DECIMAL type(Jan)\n> * -Move LIKE index optimization handling to the optimizer(Tom)\n> * -Allow psql \\copy to allow delimiters\n> * -Add a function to return the last inserted oid, for use in psql scripts\n> * -Allow psql to print nulls as distinct from \"\" [null]\n> * -Certain indexes will not shrink, i.e. 
oid indexes with many inserts(Vadim)\n> * -Allow WHERE restriction on ctid(Hiroshi)\n> * -Transaction log, so re-do log can be on a separate disk by\n> * -Allow subqueries in target list\n> * -Overhaul mdmgr/smgr to fix double unlinking and double opens, cleanup\n> * -Allow transaction commits with rollback with no-fsync performance [fsync](Vadim)\n> * -Prevent fsync in SELECT-only queries(Vadim)\n> * -Convert function(constant) into a constant for index use(Tom)\n> * -Make index creation use psort code, because it is now faster(Vadim)\n> * -Allow creation of sort temp tables > 1 Gig\n> * -Allow optimizer to prefer plans that match ORDER BY(Tom)\n> * -Fix memory exhaustion when using many OR's [cnfify](Tom)\n> * -Process const = const parts of OR clause in separate pass(Tom)\n> * -Add needed includes and removed unneeded include files(Bruce)\n> * -Make configure --enable-debug add -g on compile line\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 7 Dec 1999 01:04:14 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Other than WAL, what else is half-completed and installed?\n\nWAL is probably the thing that forces our hand here. Vadim would know\nbetter than anyone else whether the current state of the code is\nreleasable, but I bet he'll say \"no\".\n\nHowever, I've got a couple of nontrivial concerns:\n\n1. Table locking and shared-cache-invalidation (SI) handling. We've\nmade some fundamental changes in this area since 6.5, but I don't think\nwe're done yet. I know Hiroshi is very worried about this area, and\nI'm not happy with it either. And this is the kind of thing we cannot\nexpect to validate with a short beta test cycle. I won't be happy until\nI am logically convinced that the code is right, *and* we see solid\nbehavior in heavy beta testing.\n\n2. Optimizer's handling of order-by cases, specifically whether to use\nan index scan or an explicit sort. The current code is (finally!) able\nto use an index scan in all the cases where one is applicable ... but it\nis *too* eager to do so, because the cost model is underestimating the\ncost of an index scan. If we don't fix that I think we will see\nsubstantial performance degradation in some real-world cases, compared\nto 6.5 which avoided some losing index scans simply because it was too\nstupid to consider them. In particular, we really need some\nconsideration of the effects of LIMIT in there, because that has a huge\nimpact on the desirability of index scan vs. sort.\n\n3. Alpha (and other platforms) porting problems. We're still hanging\nfire on the issue of what to do about these. If we don't do the fmgr\nrewrite the way I want, I think we have to put in the Alpha patches that\nUncle George did... and I'd rather we didn't hack up the code that\nway...\n\n> The more items in a release, the longer the beta cycle. If you wait too\n> long, you are fixing code you wrote 6 months ago, and that makes it very\n> hard. 
Smaller releases where the code is relatively fresh and the\n> additional features minimal are cleaner, faster betas.\n\nThis is true, but it's also very hard to accomplish really large changes\nthat way. Some of the things we're trying to get done in this release\nare pretty pervasive changes. I think every so often you need a slow-\nbirthing release where you take time to tackle big things. I don't want\nto make a habit of slow releases either, but I see plenty of reasons to\ntake our time with this one.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Dec 1999 00:10:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> > Other than WAL, what else is half-completed and installed?\n> \n> WAL is probably the thing that forces our hand here. Vadim would know\n> better than anyone else whether the current state of the code is\n> releasable, but I bet he'll say \"no\".\n\nNo.\n-:)\n\nBut it's very easy to turn current functionality off -\nWAL is called on startup/shutdown only, so, just ~10 lines of code\nto do nothing here...\n\nVadim\n",
"msg_date": "Tue, 07 Dec 1999 12:28:26 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "On Mon, 6 Dec 1999, Bruce Momjian wrote:\n\n> I propose we go into beta on 6.6 Jan 1, with final release Feb 1. We\n> certainly have enough for a 6.6 release.\n\nJust to add my two cents here, a Jan 1 feature-freeze is most definitely\nnot going to work for my part, as there are still some subtle and not so\nsubtle things to tie up.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 7 Dec 1999 13:10:55 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Other than WAL, what else is half-completed and installed?\n> \n> WAL is probably the thing that forces our hand here. Vadim would know\n> better than anyone else whether the current state of the code is\n> releasable, but I bet he'll say \"no\".\n\nOK, seems Vadim can turn this off easily, which is what I expected.\n\nThe big question is whether we want to stop working on the \"features\"\nlist to put out a release?\n\nWe have a list of stuff, but other than the first two, they are not\nbeing developed in the current tree, or are not started:\n\n Foreign Keys - Jan\n WAL - Vadim\n Function args - Tom\n Date/Time types - Thomas\n Optimizer - Tom\n\n Outer Joins - Thomas?\n Long Tuples - ?\n\nSeems the general agreement is that people don't want to stop for 2-3\nmonths to polish what we have done so far. They want to use those 2-3\nmonths for development of the above items.\n\nThat is fine, as long as everyone realizes we are going +1 year between\nmajor releases in this case, and that in the period from June to\ncurrent, we only have two of these items partially done.\n\nI agree we really don't have a MVCC-type feature to justify a 6.6 at\nthis time. If Jan could get foreign keys partially working for the\ncommon cases, that may be enough to tip the scales.\n\nI have never been a big release fan. I like to get the code out to the\nusers. Every release seems to be bigger than the last.\n\nMarc, as far as backpatching, you really can't do that without going\ninto beta on the release, and if you do that, you might as well use the\ncurrent tree. Each patch has already been evaluated for backpatching. \nSorry, no free lunch there.\n\nAnother secret is that deadlines generate features. 
As beta approaches,\nthings seem to happen much faster.\n\nHowever, since no one else agrees, I will drop the topic now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 7 Dec 1999 07:10:58 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "> On Mon, 6 Dec 1999, Bruce Momjian wrote:\n> \n> > I propose we go into beta on 6.6 Jan 1, with final release Feb 1. We\n> > certainly have enough for a 6.6 release.\n> \n> Just to add my two cents here, a Jan 1 feature-freeze is most definitely\n> not going to work for my part, as there are still some subtle and not so\n> subtle things to tie up.\n\nThat is interesting, because Peter's psql changes are some of the\nfeatures I wanted to get out to users in 6.6.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 7 Dec 1999 07:14:56 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "On Tue, 7 Dec 1999, Bruce Momjian wrote:\n\n> > Just to add my two cents here, a Jan 1 feature-freeze is most definitely\n> > not going to work for my part, as there are still some subtle and not so\n> > subtle things to tie up.\n> \n> That is interesting, because Peter's psql changes are some of the\n> features I wanted to get out to users in 6.6.\n\nIn case you're interested, this is what definitely still needs to be done:\n\n* \\e to edit previous query\n* match up \\do with COMMENT ON OPERATOR\n* actually fix up those comments in the catalogues\n* smoothen out tab completion\n* write a Windows makefile [ <--- up for grabs! ]\n* re-generate regression tests\n* make sure the output looks like it should before doing the above\n* (maybe more)\n\nThese are all not very challenging but do take time, which I currently\ndon't have.\n\nAlso, I think there are some problems with using psql with old servers, in\nparticular some internal queries generated by \\dd or \\do create weird\nnotices.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 7 Dec 1999 13:28:28 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> el d�a Mon, 6 Dec 1999 22:00:26 -0500 \n(EST), escribi�:\n\n>I am very hesitant about our \"one big release\" thing coming? If we wait\n>for everything to get done, we would never have a release.\n\nand if you declare a feature-freeze at some point ?\n\n\nSergio\n\n",
"msg_date": "Tue, 7 Dec 1999 09:41:33 -0300",
"msg_from": "\"Sergio A. Kessler\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "On Tue, 7 Dec 1999, Bruce Momjian wrote:\n\n> > Bruce Momjian <[email protected]> writes:\n> > > Other than WAL, what else is half-completed and installed?\n> > \n> > WAL is probably the thing that forces our hand here. Vadim would know\n> > better than anyone else whether the current state of the code is\n> > releasable, but I bet he'll say \"no\".\n> \n> OK, seems Vadim can turn this off easily, which is what I expected.\n> \n> The big question is whether we want to stop working on the \"features\"\n> list to put out a release?\n\nNo\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 7 Dec 1999 09:03:20 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "On Tue, 7 Dec 1999, Bruce Momjian wrote:\n\n> > On Mon, 6 Dec 1999, Bruce Momjian wrote:\n> > \n> > > I propose we go into beta on 6.6 Jan 1, with final release Feb 1. We\n> > > certainly have enough for a 6.6 release.\n> > \n> > Just to add my two cents here, a Jan 1 feature-freeze is most definitely\n> > not going to work for my part, as there are still some subtle and not so\n> > subtle things to tie up.\n> \n> That is interesting, because Peter's psql changes are some of the\n> features I wanted to get out to users in 6.6.\n\nBut, as Peter pop'd up, he's not ready for a Beta yet either :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 7 Dec 1999 09:03:48 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "> > OK, seems Vadim can turn this off easily, which is what I expected.\n> > The big question is whether we want to stop working on the \"features\"\n> > list to put out a release?\n> No\n\nHmm. istm that several of the folks expecting to make major changes\nfor a 7.0 release are not able to work on it (much anyway) for the\nnext couple of months. Tom Lane is the only one to have feet in both\ncamps (big (?) changes already committed, more coming) but perhaps he\nmight reconsider his vote for \"no 6.6\". A 6.6 release cycle would get\nthat stuff out in the field and tested soon (Jan/Feb timeframe) rather\nthan waiting 'til May/June to start intensive testing.\n\nI know that we made the commitment for v7.0 (and that waffling on that\nissue for past releases drove me nuts ;), but I suspect that what we\nalready have in the code tree could come close to standing on its own\n(especially in light of easily disabling the WAL framework, per\nVadim's note).\n\nI could have \"join syntax\" ready for a 6.6 (no major changes to the\nquery tree required), then do the outer joins (with bigger query\ntree/optimizer changes) and date/time reunification for 7.0...\n\n - Thomas\n\nMaybe we could steal Scott McNealy's (of Sun Microsystems) term for\nWin2K for our 7.0 release:\n\n \"The big hairball\"\n\n:))\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 07 Dec 1999 14:18:29 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "On Tue, 7 Dec 1999, Thomas Lockhart wrote:\n\n> > > OK, seems Vadim can turn this off easily, which is what I expected.\n> > > The big question is whether we want to stop working on the \"features\"\n> > > list to put out a release?\n> > No\n> \n> Hmm. istm that several of the folks expecting to make major changes\n> for a 7.0 release are not able to work on it (much anyway) for the\n> next couple of months. Tom Lane is the only one to have feet in both\n> camps (big (?) changes already committed, more coming) but perhaps he\n> might reconsider his vote for \"no 6.6\". A 6.6 release cycle would get\n> that stuff out in the field and tested soon (Jan/Feb timeframe) rather\n> than waiting 'til May/June to start intensive testing.\n\nThen why not just move 7.0 up to Mar/Apr (or even Feb/Mar) and plan on\nthe other new stuff for 7.1 if it's not ready?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 7 Dec 1999 10:23:42 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "> Then why not just move 7.0 up to Mar/Apr (or even Feb/Mar) and plan on\n> the other new stuff for 7.1 if it's not ready?\n\nEven though we make pretty major changes for \"minor releases\", istm\n7.0 should be the release which is the sum of what we've been working\ntowards in the v6.x series. It would include WAL, outer joins,\nintegrated date/time, reworked locking, referential integrity,\nreworked parse/query tree, yada yada yada...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 07 Dec 1999 15:33:45 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I know that we made the commitment for v7.0 (and that waffling on that\n> issue for past releases drove me nuts ;), but I suspect that what we\n> already have in the code tree could come close to standing on its own\n> (especially in light of easily disabling the WAL framework, per\n> Vadim's note).\n\nAs Bruce's list reminds us, there are a lot of nice fixes in place\nalready. I'm not changing my vote (yet), but let's just think a little\nfurther on this...\n\nConsidering Bruce's \"major items\":\n\nWAL: if we can turn it off and leave it in the tree, then this could be\npostponed past a 6.6 release.\n\nForeign keys: Jan has made some commits; features are there but probably\nrather broken. As long as the FK stuff is unlikely to break any *existing*\nfunctionality (Jan, do you think that's a safe assumption?) it'd be OK\nto leave it in the tree, documented as \"work in progress, use at your\nown risk\". Getting feedback from users might actually help with the\ndebugging here.\n\nDate/Time types: no commits yet, but because of compatibility issues\nwe'd want to postpone this to 7.0 anyway.\n\nFunction arg changes: same comments. However, we'd have to plug in\nUncle George's Alpha fixes instead. Those are ugly, but I could live\nwith them as a short-term hack.\n\nOptimizer: really needs some work, but perhaps not very much to get to a\nreleasable state. 
Much of what I'd hoped to do for 7.0 could be put off.\n\nOuter joins: As yet no commits, and I'd be inclined to say \"leave it\nthat way for 6.6\".\n\nLong tuples: not started, postpone to 7.0.\n\nQuery tree redesign: not started, postpone to 7.0.\n\n\nThings Bruce didn't list:\n\npsql: not ready for prime time, according to Peter (who should know ;-)).\n\nTable locking/SI handling: also not ready for prime time.\n\nLong queries: although this area is almost done, we cannot release\nwithout fixing pg_dump, else it will choke on complex table definitions\nand rules that are now possible. Michael Ansley is working on this ---\nMichael, do you have an ETA? I think there were some loose ends in\nsome of the interface libraries, too.\n\n\nSo, if we refocused our energies into cleaning up these items, we\nprobably could make a reasonable 6.6 release. I'd have to say that\n1 Jan is too soon for beta freeze --- dunno about you guys, but family\ncommitments and Christmas shopping are going to be soaking up most of my\nspare cycles through New Year's Day. I can't promise to get much of\nanything done until January. 1 Feb would be a reasonable date for me.\nI think Hiroshi and I could have the locking stuff solved by then, and\nI could find some time to do what must be done in the optimizer. If\nPeter can have psql in a presentable state by 1 Feb, it could work.\n\nThere is an awful lot to be said for finishing undone work and getting\nit out the door to people who need it. Bruce has a good point about\nhow difficult it is to remember what you were doing 6 months ago.\nOn the other hand, if we do this then serious 7.0 feature development\nwill probably not resume till about May, which is a long way off.\nMaybe that's a good tradeoff for consolidating our current gains and\ngetting a beta-test cycle under our belts for what we have already done.\n\nComments? 
What other stuff do we have in progress that needs to be\ntaken into account?\n\nAt this point I'm sitting on the fence --- I can see the arguments for\ngoing either way. But I think I might be leaning in favor of a 6.6,\nunless someone points out an issue I missed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Dec 1999 10:40:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta? "
},
{
"msg_contents": "On Tue, 7 Dec 1999, Thomas Lockhart wrote:\n\n> > Then why not just move 7.0 up to Mar/Apr (or even Feb/Mar) and plan on\n> > the other new stuff for 7.1 if it's not ready?\n> \n> Even though we make pretty major changes for \"minor releases\", istm\n> 7.0 should be the release which is the sum of what we've been working\n> towards in the v6.x series. It would include WAL, outer joins,\n> integrated date/time, reworked locking, referential integrity,\n> reworked parse/query tree, yada yada yada...\n\nBut wouldn't much of that end up in a 6.6 release leaving considerably \nless for 7.0?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 7 Dec 1999 10:50:18 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "On Tue, 7 Dec 1999, Tom Lane wrote:\n\n> psql: not ready for prime time, according to Peter (who should know ;-)).\n\nI could also go for a Feb 1st deadline. By then I could also have some\nminor tweakage of initdb and makefiles done, and rewrite the INSTALL doc\nso the install process is easier. (Always a good selling point.)\n\nI would have to agree that waiting for all our 7.0 wishes to become true\nmight mean an indefinite wait at the worst. Release early, release often\nis what makes open source projects successful.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 7 Dec 1999 17:35:28 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta? "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Foreign keys: Jan has made some commits; features are there but probably\n> rather broken. As long as the FK stuff is unlikely to break any *existing*\n> functionality (Jan, do you think that's a safe assumption?) it'd be OK\n> to leave it in the tree, documented as \"work in progress, use at your\n> own risk\". Getting feedback from users might actually help with the\n> debugging here.\n\n Not as safe as you probably want it.\n\n Initially some ppl offered help for FK stuff. The basic's\n have been committed for several weeks now, and no contributor\n surfaced. So I don't expect any other help now than what I\n already got - bug reports. And that's why I concentrated on\n the parser/utility area, to get full support into CREATE\n TABLE, and the completeness of MATCH FULL. Voila - a few\n hours later I got the first bug report.\n\n Now the bad part, the major change I did was the internal\n change of the trigger manager to run driven by a deferred\n invocation queue. That's the part that could hurt, because\n it isn't complete. All trigger events are collected in memory\n up to now, and during a huge transaction with millions of\n trigger invocations, it could potentially blow away the\n backend. Not only RI triggers, ALL trigger events must run\n through the queue, and it must remember anything from the\n beginning to detect the \"triggered data change\" violation or\n decide what the actual operation really is and which trigger\n to invoke finally.\n\n This queue must be able to use a temp file in the case it\n grows too big. Since I cannot easily rollback these changes,\n it's a show stopper. I knew that from the beginning and\n wouldn't have committed that if we hadn't agreed on 7.0 for\n the next release. This work is now delayed due to the missed\n help, and it must be delayed more to build a multibackend\n test driver. 
Hiroshi's report showed, that especially\n referential integrity tests don't make much sense if run by a\n single backend serialized.\n\n> At this point I'm sitting on the fence --- I can see the arguments for\n> going either way. But I think I might be leaning in favor of a 6.6,\n> unless someone points out an issue I missed.\n\n From my point of view, we could start BETA for a 6.6.6 when I\n have the temp file buffered queue and the multibackend driver\n plus a test suite ready. Even if I don't like it, personally.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 7 Dec 1999 18:15:48 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> This queue must be able to use a temp file in the case it\n> grows too big. Since I cannot easily rollback these changes,\n> it's a show stopper.\n\nOK, so having the queue file is a must-do before we could release a 6.6.\n(BTW, please consider using storage/file/buffile.c for the queue file;\nthat handles virtual-file access, segmenting of multi-gig files, and\nresource cleanup at abort for you. If you need features buffile hasn't\ngot, let me know.)\n\n> ... it must be delayed more to build a multibackend\n> test driver. Hiroshi's report showed, that especially\n> referential integrity tests don't make much sense if run by a\n> single backend serialized.\n\nClearly a good thing for testing referential integrity, but is it needed\nto verify that old functionality still works?\n\nOTOH, such a testbed would also be nice for stress-testing the table\nlocking and SI changes, so maybe it is critical for 6.6 anyway.\n\n> From my point of view, we could start BETA for a 6.6.6 when I\n> have the temp file buffered queue and the multibackend driver\n> plus a test suite ready. Even if I don't like it, personally.\n\nWould 1 Feb be a good target date for you? How much would doing things\nthis way distort your development path, compared to what you'd do if\nwe didn't plan a 6.6 release?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Dec 1999 12:40:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta? "
},
{
"msg_contents": "> From my point of view, we could start BETA for a 6.6.6 when I\n> have the temp file buffered queue and the multibackend driver\n> plus a test suite ready.\n\nIf y'all need some help to get the FK stuff farther along, a 6.6\nrelease will help on that imho. It takes the immediate pressure off of\nthe other developers, and they can choose to continue with their\ndevelopments or to take a breather and help out the 6.6 release.\n\nThe fact that more work needs to be done on FKs etc to enable a 6.6\nrelease is not fatal; that is an issue for every release on one topic\nor another.\n\nI haven't heard anything (yet) which would be a show stopper imho, and\nI'd *really* like to decouple some of the changes coming up. In\nparticular, I could commit \"join syntax\" for 6.6, and then work on the\nquery tree redesign for 7.0...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 08 Dec 1999 15:51:50 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] When is 7.0 going Beta?"
}
] |
[
{
"msg_contents": "Is this fixed in later versions? If not, should I send in a patch?\n\n\t-Michael Robinson\n\n-----------------------------\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.5.1 on i386-unknown-freebsd3.3, compiled by gcc 2.7.2.3]\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: template1\n\ntemplate1=> select 9.99::money * 0.1;\n?column?\n--------\n$0.99 \n(1 row)\n\ntemplate1=> select 9.99::money / 10;\n?column?\n--------\n$0.99 \n(1 row)\n\ntemplate1=> select 9.99::money / 10.0;\n?column?\n--------\n$1.00 \n(1 row)\n\ntemplate1=> \n",
"msg_date": "Sun, 5 Dec 1999 22:00:11 +0800 (CST)",
"msg_from": "Michael Robinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "The Accountant is not Amused"
},
{
"msg_contents": "hi.\n\n\n> Is this fixed in later versions? If not, should I send in a patch?\n> \n> \t-Michael Robinson\n> \n<SNIPPED OUT LOTS OF MONEY DATA TYPE STUFF>\n\napparently, the money time is deprecated and will be going the way of the\ndinosaurs someday soon (so threaten the developers)\n\nwe used the money data type extensively in an installation i run... with the\nnews of money going out of style however =) we switched to numeric(9,2) which\nworks quite well...\n\nstill some quirks with numeric: no money->numeric (surprise), int2 doesn't play\nwell with numeric (but converts easily to int4 which does)...\n\nit took me 2 days to rid our system of money and update all our code... and now\nit works much nicer, too!\n\n-- \nAaron J. Seigo\nSys Admin\n",
"msg_date": "Sun, 5 Dec 1999 09:13:26 -0700",
"msg_from": "\"Aaron J. Seigo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] The Accountant is not Amused"
},
{
"msg_contents": "Michael Robinson <[email protected]> writes:\n> Is this fixed in later versions? If not, should I send in a patch?\n\nWhat would you consider a patch? cash_div_flt8 rounds its result,\ncash_div_int4 truncates. Which is right, and how many existing\napps might you break by changing the other one? How many of the\nother money operators need to be tweaked too?\n\nAs Aaron points out, the money data type is looking awfully\ndinosaur-like; nothing based on an int4 underlying representation\ncan possibly be really satisfactory for this purpose. The general\nconsensus on the hackers list has been that the money type should\nbe deprecated and eventually phased out. In the meantime, subtle\nalterations of its behavior are of dubious value.\n\nIt seems to me, though, that the money type does offer a couple of\nuseful things that you don't get in raw NUMERIC; specifically,\ninput and output functions that are customized for currency display.\nWhat really would be a useful project would be to reimplement money\nas a thin overlay on NUMERIC, basically just input/output functions.\nThe interesting part of the job would be to do better in non-US\nlocales than we currently do; I don't think the money code is very\nflexible about commas versus decimal points, for example.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 05 Dec 1999 12:17:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] The Accountant is not Amused "
},
{
"msg_contents": "\"Aaron J. Seigo\" <[email protected]> writes:\n>still some quirks with numeric: no money->numeric (surprise), int2 doesn't play\n>well with numeric (but converts easily to int4 which does)...\n\nDo you pay taxes?\n\n================\ntemplate1=> select 9.99::numeric(9,2) * 0.1;\nERROR: Unable to identify an operator '*' for types 'numeric' and 'float8'\n You will have to retype this query using an explicit cast\ntemplate1=> select 9.99::numeric(9,2) * 0.1::float4;\nERROR: Unable to identify an operator '*' for types 'numeric' and 'float4'\n You will have to retype this query using an explicit cast\n================\n\nI need a type that exhibits correct financial rounding behavior in tax\ncomputations and currency conversions. My understanding is that in the \nU.S., you are supposed to compute to the mil, and then round. In China\n(my jurisdiction of concern), you just round to the nearest fen.\n\n\t-Michael Robinson\n\n",
"msg_date": "Mon, 6 Dec 1999 01:53:29 +0800 (CST)",
"msg_from": "Michael Robinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] The Accountant is not Amused"
},
{
"msg_contents": "On 1999-12-05, Michael Robinson mentioned:\n\n> template1=> select 9.99::money / 10.0;\n> ?column?\n> --------\n> $1.00 \n> (1 row)\n\nYou should be using the numeric type. Money is deprecated. What you\npointed out is probably only one of its problems. However, the numeric\ntype seems to have some ideas of its own as well:\n\n=> select 9.99::numeric(9,2) / 10.0::numeric(9,2);\n ?column?\n------------\n0.9990000000\n(1 row)\n\nWhat are the rules governing this situation?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sun, 5 Dec 1999 23:56:34 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] The Accountant is not Amused"
},
{
"msg_contents": "> Is this fixed in later versions? If not, should I send in a patch?\n\nSend patches. But there is a chance that the money type will be ripped\nout for v7.0 (since afaik the numeric/decimal types supercede the\nolder hacked type).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 06 Dec 1999 02:00:30 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] The Accountant is not Amused"
}
] |
[
{
"msg_contents": "As I already hinted a while ago, I am trying to resolve some of the\nvarious shell scripts' obscure dependencies on various environment\nvariables and paths. My question is whether the use of the program\npostconfig to determine the PGLIB value (or anything else, since there are\nno checks done) is still encouraged/desired/done. To be clear: Contrary\nto what some of you might start to believe, I do not want to remove every\nlittle feature in PostgreSQL that I don't like/understand/use ;) I'm just\nwondering.\n\nMy survey showed that the only two places where PGLIB is used is\ncreatelang and initdb, so in a normal environment there is no good reason\nto have PGLIB set all the time, anyway. In a developer's environment,\nwhere these commands are executed many times, PGLIB is going to mess you\nup if you want to install several versions, which is exactly the reason\nwhy I am looking at this.\n\nI found a pretty fool-proof (and with Tom's help portable) way to\ndetermine the location of the needed files (the bki's and the PL handlers)\nfrom the actual location of the shell script, and if that fails (which it\nshouldn't) you can always use the --pglib/-L option. This will, unless you\nintentionally use a particularly evil installation layout, ensure that any\ninitdb or createlang (or whatever else might use this) call is always\ngoing to use the correct files and backend version without you having to\ndo any setting of anything (including PATH).\n\nSo, to summarize:\n* PGLIB, keep it or lose it?\n* postconfig, keep it or lose it?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sun, 5 Dec 1999 23:56:21 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "postconfig/PGLIB/initdb"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> So, to summarize:\n> * PGLIB, keep it or lose it?\n\nI had assumed that PGLIB was used for dynamic loading of extension\nmodules, but I now see that it isn't. It's primarily used by initdb\nto find the data files needed for initialization of template1. AFAICT\nit's not used by a running postmaster or backend at all.\n\nIf you can reliably find the location of the executing script, I\nthink it'd be a fine idea to lose PGLIB and instead get the data\nfiles from \"BINDIR/../lib/\". One less setting to get wrong.\n\n(I'm not convinced yet about that \"if\", though. Do you have a\nsubstitute for \"which\" that you think is portable? How can we\ntest it?)\n\n> * postconfig, keep it or lose it?\n\nSince postconfig is invoked as just \"postconfig\", trying to use it\nintroduces a very strong dependency on PATH. I've never used it\nso maybe I'm not seeing what it's good for --- but my guess is that\nin the situation where you've got multiple versions installed, trying\nto use postconfig would just result in confusion and havoc. And in\nthe case where you have only one installation, it's not necessary.\n\nI'm not really seeing what postconfig brings to the party. If you\nwant the config to depend on your path, you can put the appropriate\nBINDIR in your path, no?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 05 Dec 1999 19:11:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postconfig/PGLIB/initdb "
}
] |
[
{
"msg_contents": "\nDue to a recent thread started on pgsql-hackers, I'm posting this to the\nlists. Vince is planning on putting in appropriate links for some of\nthis, and, Bruce, can we maybe put it into the FAQ?\n\nI'm not an English major, so this is more techinese then anything\nelse...or, a rambling of an un-ordered mind, however you want to classify\nit :)\n\n============\n\nThere are several ways that people can contribute to the PostgreSQL\nproject, and, below, I'm going to try and list them...\n\n1. Code. We have a TODO list available at\n http://www.postgresql.org/docs/todo.html, which lists enhancements that\n have been picked out as needed. Some of them take time to learn the\n intricacies of the code, some require no more then time. Contributing\n code, altho not the only way to contribute, is always one of the more\n valuable ways of improving any Open Source Project.\n\n2. Web Site. http://www.postgresql.org is mirrored on many sites around\n the world, as is ftp://ftp.postgresql.org. By increasing the number of\n mirrors available around the world, you help reduce the load on any one\n site, as well as improve the accessibility to the code. If you have\n the resources to provide a mirror, both hardware and bandwidth, this is\n another means of contributing to the project. All our mirrors are\n required to use rsync, in order to be listed, with details on this\n found at http://www.postgresql.org/howtomirror.html\n\n3. Mailing Lists. We use software that allows us to use remote sites for\n 'mail relaying'. Basically, instead of our central server having to\n service *all* remote addresses, it offloads email onto remote servers\n to do the distribution. For intance, by dumping all email destined for\n a subscribers in France to a server residing in France, the central\n server has to send one email mesage \"Across the pond\", and let the\n server in France handle the other servers. 
If you are interested in\n providing a relay point, email [email protected] (me) for details on how\n to get setup for this.\n\n4. Financial. In June of 1999, PostgreSQL, Inc was formed as the\n \"Commercial Arm\" of the PostgreSQL Project. Although it was originally\n formed to provide Commercial Support for PostgreSQL, it has expanded to\n include Consulting services, PostgreSQL Merchandise (ElephantWear) and,\n most recently, Database Hosting services. \n\n As our mission statement (http://www.pgsql.com/mission.html) states,\n our purpose (among several) is to provide funding for various project,\n whether they be Advertising or Programming. Although not currently\n available, but will be when the new site is up, there will be a set of\n pages off of http://www.pgsql.com that will provide a cleaner means of\n contribute financially towards having features implemented, as well as\n showing funds available for various projects. For instance, 25% of the\n revenue from Support Contracts will be ear-marked for stuff like\n Advertising and a General Pool that we can use to fund projects that we\n feel is important from a \"commercial deployment\" standpoint.\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n\n\n",
"msg_date": "Sun, 5 Dec 1999 20:56:47 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Oft Ask: How to contribute to PostgreSQL?"
},
{
"msg_contents": "hi,\n\nIs PG done in C or C++?\n\n\n--\n.............\n......... Jason C. Leach\n...... University College of the Cariboo\n... [email protected].\n.. http://www.ocis.net/~jcl\n.\n\n\nDebian!Linux!\n\n\n\n",
"msg_date": "Sun, 05 Dec 1999 19:15:35 -0800",
"msg_from": "\"Jason C. Leach\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Oft Ask: How to contribute to PostgreSQL?"
},
{
"msg_contents": "On Sun, 5 Dec 1999, Jason C. Leach wrote:\n\n> hi,\n> \n> Is PG done in C or C++?\n\nC\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 5 Dec 1999 23:38:30 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Oft Ask: How to contribute to PostgreSQL?"
},
{
"msg_contents": "At 08:56 PM 05-12-1999 -0400, The Hermit Hacker wrote:\n> pages off of http://www.pgsql.com that will provide a cleaner means of\n> contribute financially towards having features implemented, as well as\n\nHow about a website where the public can stick in their credit card number\nand donate directly to\n1) Postgres Inc/Org (which then could pay out \"salaries\")\n2) A particular Postgres project (if necessary).\n3) A particular developer (if it won't be detrimental to team spirit).\n\nAnd everyone knows how much is going where- logs of total donations,\ndonations per month, per project etc.\n\nDevelopers could even donate to each other if they feel that someone is not\nbeing compensated enough for whatever reason (not enough visibility). \n\nCould also have a Thank you page for things like \"Keep up the good work\",\n\"Mucho gracias!\", etc (criticism and bug reports should be kept to mailing\nlists).\n\nI suggested something like this to GNU org some time back, dunno if they\nlike it at all. Open Destination Donation for Open Source Software.\n\nThe main problem is getting the money into bank accounts. But I figure\nPostgres Inc could accept the donations, handle the financial details (card\nproblems etc), and then distribute money/thanks to the projects/developers\naccordingly, via cheques, money orders or whatever.\n\nGood idea? \n\nLink.\n\n",
"msg_date": "Tue, 07 Dec 1999 10:13:33 +0800",
"msg_from": "Lincoln Yeoh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Oft Ask: How to contribute to PostgreSQL?"
},
{
"msg_contents": "\nI'm glad we are all thinking alike :)\n\n1. don't like that one...that's what we do the support contracts,\n consulting and database hosting for...and the merchandise. the\n merchandise has the added benefit of providing advertising for the\n project too, so everyone wins.\n\n we are looking into expanding the product line also, currently talking\n with our supplier on this...\n\n IMHO, I'd rather have ppl sign up for support contracts and use them,\n then just having \"donations\" flow in...its what we formed PostgreSQL,\n Inc to do...\n\n2. see http://www.pgsql.com/features ... its a start, we will expand on it\n over the next few days, add a cgi backend, etc etc...\n\n3. I feel that this one ties directly into 2...pick a feature taht that\n developer has taken on according to the TODO list...\n\nOn Tue, 7 Dec 1999, Lincoln Yeoh wrote:\n\n> At 08:56 PM 05-12-1999 -0400, The Hermit Hacker wrote:\n> > pages off of http://www.pgsql.com that will provide a cleaner means of\n> > contribute financially towards having features implemented, as well as\n> \n> How about a website where the public can stick in their credit card number\n> and donate directly to\n> 1) Postgres Inc/Org (which then could pay out \"salaries\")\n> 2) A particular Postgres project (if necessary).\n> 3) A particular developer (if it won't be detrimental to team spirit).\n> \n> And everyone knows how much is going where- logs of total donations,\n> donations per month, per project etc.\n> \n> Developers could even donate to each other if they feel that someone is not\n> being compensated enough for whatever reason (not enough visibility). \n> \n> Could also have a Thank you page for things like \"Keep up the good work\",\n> \"Mucho gracias!\", etc (criticism and bug reports should be kept to mailing\n> lists).\n\nA comments page? Lincoln Yech <[email protected]> says this about\nPostreSQL...? Doable, let me play with it...\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 6 Dec 1999 23:12:08 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Oft Ask: How to contribute to PostgreSQL?"
},
{
"msg_contents": "At 11:12 PM 06-12-1999 -0400, The Hermit Hacker wrote:\n>\n>I'm glad we are all thinking alike :)\n\nYep. I really would like a credit card thingy tho, coz it's more convenient\nthan trying to do international bank transfers, money orders and all that.\n\nThere's a bit of pain interfacing with the bank/card clearinghouse though. \n\n>On Tue, 7 Dec 1999, Lincoln Yeoh wrote:\n>\n>> At 08:56 PM 05-12-1999 -0400, The Hermit Hacker wrote:\n>> > pages off of http://www.pgsql.com that will provide a cleaner means of\n>> > contribute financially towards having features implemented, as well as\n>> \n>> How about a website where the public can stick in their credit card number\n>> and donate directly to\n>> 1) Postgres Inc/Org (which then could pay out \"salaries\")\n>> 2) A particular Postgres project (if necessary).\n>> 3) A particular developer (if it won't be detrimental to team spirit).\n>> \n\n>1. don't like that one...that's what we do the support contracts,\n\nOK. Was just an idea for general funding.\n\n>2. see http://www.pgsql.com/features ... its a start, we will expand on it\n> over the next few days, add a cgi backend, etc etc...\n\nLooks ok, heh maybe some bright spark will put a pledge for negative bucks\nfor stuff they don't like :). \n\nI still would like support for credit card payments..\n\n>3. I feel that this one ties directly into 2...pick a feature taht that\n> developer has taken on according to the TODO list...\n\nYeah, probably better for teamwork/teamspirit..\n\n>> Could also have a Thank you page for things like \"Keep up the good work\",\n>> \"Mucho gracias!\", etc (criticism and bug reports should be kept to mailing\n>> lists).\n>\n>A comments page? Lincoln Yech <[email protected]> says this about\n>PostreSQL...? Doable, let me play with it...\n\nMore like a way of saying \"Thanks\" to the developers. 
I figure many times a\nsincere word of thanks/encouragement at the right moment is priceless.\nProblem is there could be abuses by nasty people.\n\nThe comments could actually be added to your existing projects page perhaps.\n\nBTW the Postgres elephant doesn't look quite as attractive as the Linux\npenguin. \n\nThat said, it could still make a decent stuffed toy. Give it big eyes and a\nPostgreSQL t-shirt around a roly-poly body.\n\nThing is at current exchange rates, I can get cute stuffed toys for only\nUSD2 here. USD20 is like 20 lunches... Got to save up...\n\nCheerio,\n\nLink.\n\n",
"msg_date": "Tue, 07 Dec 1999 12:27:03 +0800",
"msg_from": "Lincoln Yeoh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Oft Ask: How to contribute to PostgreSQL?"
},
{
"msg_contents": "On Tue, 7 Dec 1999, Lincoln Yeoh wrote:\n\n> At 11:12 PM 06-12-1999 -0400, The Hermit Hacker wrote:\n> >\n> >I'm glad we are all thinking alike :)\n> \n> Yep. I really would like a credit card thingy tho, coz it's more\n> convenient than trying to do international bank transfers, money\n> orders and all that.\n> \n> There's a bit of pain interfacing with the bank/card clearinghouse\n> though.\n\nWe do do credit cards...let me build the backend for that page first...it\nwill include the ability to do credit cards. Our products page currently\ndoes handle credit cards, but i haven't had a chance to build up the cgi\nyet for the 'pledge' page...\n\n> >1. don't like that one...that's what we do the support contracts,\n> \n> OK. Was just an idea for general funding.\n\nThere is a 'General Pool' option under the Contribute item in the products\npage...I'm going to end up throwing in an 'Advertising' option also, for\nmagazine ads and such...\n\n> >2. see http://www.pgsql.com/features ... its a start, we will expand on it\n> > over the next few days, add a cgi backend, etc etc...\n> \n> Looks ok, heh maybe some bright spark will put a pledge for negative bucks\n> for stuff they don't like :). \n> \n> I still would like support for credit card payments..\n\nGo make an order on he products page...ti will ask you for your credit\ncard number...the pledge page will have it too...i'll try hard to get it\nin place tomorrow...\n\n> >> Could also have a Thank you page for things like \"Keep up the good work\",\n> >> \"Mucho gracias!\", etc (criticism and bug reports should be kept to mailing\n> >> lists).\n> >\n> >A comments page? Lincoln Yech <[email protected]> says this about\n> >PostreSQL...? Doable, let me play with it...\n> \n> More like a way of saying \"Thanks\" to the developers. 
I figure many times a\n> sincere word of thanks/encouragement at the right moment is priceless.\n> Problem is there could be abuses by nasty people.\n> \n> The comments could actually be added to your existing projects page perhaps.\n> \n> BTW the Postgres elephant doesn't look quite as attractive as the Linux\n> penguin. \n> \n> That said, it could still make a decent stuffed toy. Give it big eyes and a\n> PostgreSQL t-shirt around a roly-poly body.\n\nJeff? :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 7 Dec 1999 00:44:10 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Oft Ask: How to contribute to PostgreSQL?"
},
{
"msg_contents": "> BTW the Postgres elephant doesn't look quite as attractive as the Linux\n> penguin. \n\nthat being said, the linux penguin isn't nearly as attractive as the freebsd\ndaemon, but don't get us started :)\n\n> \n> That said, it could still make a decent stuffed toy. Give it big eyes and a\n> PostgreSQL t-shirt around a roly-poly body.\n\n20usd is 30+cdn: for a stuffed toy? we'll look into it.\n\non a side note we are looking into coffee mugs, keychains (already ordered)\nand possibly mouse pads\n\njeff\n\n> \n> \n\n",
"msg_date": "Tue, 7 Dec 1999 19:15:40 -0400 (AST)",
"msg_from": "\"Jeff MacDonald <[email protected]>\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Oft Ask: How to contribute to PostgreSQL?"
},
{
"msg_contents": "Not sure this belongs in the FAQ. Seems more of a web page thing.\n\n\n> \n> Due to a recent thread started on pgsql-hackers, I'm posting this to the\n> lists. Vince is planning on putting in appropriate links for some of\n> this, and, Bruce, can we maybe put it into the FAQ?\n> \n> I'm not an English major, so this is more techinese than anything\n> else...or, a rambling of an un-ordered mind, however you want to classify\n> it :)\n> \n> ============\n> \n> There are several ways that people can contribute to the PostgreSQL\n> project, and, below, I'm going to try and list them...\n> \n> 1. Code. We have a TODO list available at\n> http://www.postgresql.org/docs/todo.html, which lists enhancements that\n> have been picked out as needed. Some of them take time to learn the\n> intricacies of the code, some require no more than time. Contributing\n> code, although not the only way to contribute, is always one of the more\n> valuable ways of improving any Open Source Project.\n> \n> 2. Web Site. http://www.postgresql.org is mirrored on many sites around\n> the world, as is ftp://ftp.postgresql.org. By increasing the number of\n> mirrors available around the world, you help reduce the load on any one\n> site, as well as improve the accessibility to the code. If you have\n> the resources to provide a mirror, both hardware and bandwidth, this is\n> another means of contributing to the project. All our mirrors are\n> required to use rsync, in order to be listed, with details on this\n> found at http://www.postgresql.org/howtomirror.html\n> \n> 3. Mailing Lists. We use software that allows us to use remote sites for\n> 'mail relaying'. Basically, instead of our central server having to\n> service *all* remote addresses, it offloads email onto remote servers\n> to do the distribution. 
For instance, by dumping all email destined for\n> subscribers in France to a server residing in France, the central\n> server has to send one email message \"Across the pond\", and let the\n> server in France handle the other servers. If you are interested in\n> providing a relay point, email [email protected] (me) for details on how\n> to get setup for this.\n> \n> 4. Financial. In June of 1999, PostgreSQL, Inc was formed as the\n> \"Commercial Arm\" of the PostgreSQL Project. Although it was originally\n> formed to provide Commercial Support for PostgreSQL, it has expanded to\n> include Consulting services, PostgreSQL Merchandise (ElephantWear) and,\n> most recently, Database Hosting services. \n> \n> As our mission statement (http://www.pgsql.com/mission.html) states,\n> our purpose (among several) is to provide funding for various projects,\n> whether they be Advertising or Programming. Although not currently\n> available, but will be when the new site is up, there will be a set of\n> pages off of http://www.pgsql.com that will provide a cleaner means of\n> contributing financially towards having features implemented, as well as\n> showing funds available for various projects. For instance, 25% of the\n> revenue from Support Contracts will be ear-marked for stuff like\n> Advertising and a General Pool that we can use to fund projects that we\n> feel are important from a \"commercial deployment\" standpoint.\n> \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n> \n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 May 2000 21:23:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Oft Ask: How to contribute to PostgreSQL?"
},
{
"msg_contents": "On Wed, May 31, 2000 at 09:23:27PM -0400, Bruce Momjian wrote:\n> > 3. Mailing Lists. We use software that allows us to use remote sites for\n> > 'mail relaying'. Basically, instead of our central server having to\n> > service *all* remote addresses, it offloads email onto remote servers\n> > to do the distribution. For intance, by dumping all email destined for\n> > a subscribers in France to a server residing in France, the central\n> > server has to send one email mesage \"Across the pond\", and let the\n> > server in France handle the other servers. If you are interested in\n> > providing a relay point, email [email protected] (me) for details on how\n> > to get setup for this.\n\nFWIW this is not as good an idea as it seems. I know of many .fr domains\nthat are hosted in the US. My own .ch is in St-Louis (MI), whereas some\nclients' .com are hosted right here in Paris.\n\nThis setup is the reason I was unable to get {-hackers,-general} list\ntraffic for a week because of a faulty \"relay\" for my Swiss .ch domain,\nwhich apparently refused to relay back to the US where this domain\nlives.\n\nDomains are disconnected from geography nowadays, and increasingly as\nwe go.\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.fr\n\nI don't build computers, I'm a cooling engineer.\n -- Seymour Cray, founder of Cray Inc. \n",
"msg_date": "Thu, 1 Jun 2000 08:17:12 +0200",
"msg_from": "Louis-David Mitterrand <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Oft Ask: How to contribute to PostgreSQL?"
},
{
"msg_contents": "Louis-David Mitterrand wrote:\n> On Wed, May 31, 2000 at 09:23:27PM -0400, Bruce Momjian wrote:\n> > > 3. Mailing Lists. We use software that allows us to use remote sites for\n> > > 'mail relaying'. Basically, instead of our central server having to\n> > > service *all* remote addresses, it offloads email onto remote servers\n> > > to do the distribution. For intance, by dumping all email destined for\n> > > a subscribers in France to a server residing in France, the central\n> > > server has to send one email mesage \"Across the pond\", and let the\n> > > server in France handle the other servers. If you are interested in\n> > > providing a relay point, email [email protected] (me) for details on how\n> > > to get setup for this.\n> FWIW this not as good an idea as it seems. I know of many .fr domains\n> that are hosted in the US. My own .ch is in St-Louis (MI), whereas some\n> clients' .com are hosted right here in Paris.\n\n.COM is not US dependent. Those servers in the USA really would be best served\nin the .COM, .EDU, .NET, .ORG, .INT, or .US domain. The entire *point* of\ngeographically based names was to allow management of the DNS tree in a\ngeographic manner, so France could divide their own local tree as *they* chose,\nand so domains which were following geographic practice would get reasonably\noptimized DNS management. Servers which are *international* servers would\nbe best serviced if they used an international domain (all of the above\nexcept for .us). 
This wasn't set up out of cultural ignorance or arrogance,\nit was designed this way to facilitate management and DNS resolution.\n\nThat way, some .fr server in the US wouldn't be tying up international\nlines every time a dns reload/refresh occurred, and a .us server wouldn't\nbe in France, doing the same thing...\n\nHmm...An intelligent algorithm for this mail could batch based on the\nnetblock of the MX, using the same logic systems as CIDR, and relay\nmessages into a mail relay server on that *provider* netblock, but\nthis might require more machines for relaying than we currently have\navailable, no?\n\n> This setup is the reason I was unable to get {-hackers,-general} list\n> traffic for a week because of a faulty \"relay\" for my Swiss .ch domain,\n> which apparently refused to relay back to the US where this domain\n> lives.\n\nA faulty relay caused a mail failure. That's standard mail routing. If you\nhad a faulty relay for your mail delivery in the .com domain (US), which\nrefused to relay to your .ch domain, it would have been a problem\nas well. Might I suggest to those who are setting up the regional/national\nrelays that they use multiple MX systems, so relay failures are managed\non the fly?\n\n> Domains are diconnected from geography nowadays, and increasingly as\n> we go.\n\nWell, those who ignore the domain name system rfc's, and choose to try\nto do it their *own* way, well, I guess they will be subjecting themselves\nto more problems. 
Some domains never *were* geographic, some have been\nthe same since 1994.\n\nPlease read:\nhttp://www.rfc-editor.org/rfc/rfc1591.txt\nFor a clearer understanding of proper international dns domain usage\nand TLD assignment.\n\nhttp://www.rfc-editor.org/rfc/rfc1480.txt\nDetails how this is used in the USA, I assume the CCITT or somesuch\nhas similar guidelines for proper usage of .fr, and I am unaware\nof the proper body to handle .ch server management.\n\n-Ronabop\n\n--\nBrought to you from iBop the iMac, a MacOS, Win95, Win98, LinuxPPC machine,\nwhich is currently in MacOS land. Your bopping may vary.\n",
"msg_date": "Thu, 01 Jun 2000 02:15:05 -0700",
"msg_from": "Ron Chmara <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] Oft Ask: How to contribute to PostgreSQL?"
},
{
"msg_contents": "On Wed, 31 May 2000, Bruce Momjian wrote:\n\n> Not sure this belongs in the FAQ. Seems more of a web page thing.\n\nIt's been on the website for a long time. Click on \"Helping Us\" from\nany page.\n\nVince.\n\n> \n> \n> > \n> > Due to a recent thread started on pgsql-hackers, I'm posting this to the\n> > lists. Vince is planning on putting in appropriate links for some of\n> > this, and, Bruce, can we maybe put it into the FAQ?\n> > \n> > I'm not an English major, so this is more techinese then anything\n> > else...or, a rambling of an un-ordered mind, however you want to classify\n> > it :)\n> > \n> > ============\n> > \n> > There are several ways that people can contribute to the PostgreSQL\n> > project, and, below, I'm going to try and list them...\n> > \n> > 1. Code. We have a TODO list available at\n> > http://www.postgresql.org/docs/todo.html, which lists enhancements that\n> > have been picked out as needed. Some of them take time to learn the\n> > intricacies of the code, some require no more then time. Contributing\n> > code, altho not the only way to contribute, is always one of the more\n> > valuable ways of improving any Open Source Project.\n> > \n> > 2. Web Site. http://www.postgresql.org is mirrored on many sites around\n> > the world, as is ftp://ftp.postgresql.org. By increasing the number of\n> > mirrors available around the world, you help reduce the load on any one\n> > site, as well as improve the accessibility to the code. If you have\n> > the resources to provide a mirror, both hardware and bandwidth, this is\n> > another means of contributing to the project. All our mirrors are\n> > required to use rsync, in order to be listed, with details on this\n> > found at http://www.postgresql.org/howtomirror.html\n> > \n> > 3. Mailing Lists. We use software that allows us to use remote sites for\n> > 'mail relaying'. 
Basically, instead of our central server having to\n> > service *all* remote addresses, it offloads email onto remote servers\n> > to do the distribution. For intance, by dumping all email destined for\n> > a subscribers in France to a server residing in France, the central\n> > server has to send one email mesage \"Across the pond\", and let the\n> > server in France handle the other servers. If you are interested in\n> > providing a relay point, email [email protected] (me) for details on how\n> > to get setup for this.\n> > \n> > 4. Financial. In June of 1999, PostgreSQL, Inc was formed as the\n> > \"Commercial Arm\" of the PostgreSQL Project. Although it was originally\n> > formed to provide Commercial Support for PostgreSQL, it has expanded to\n> > include Consulting services, PostgreSQL Merchandise (ElephantWear) and,\n> > most recently, Database Hosting services. \n> > \n> > As our mission statement (http://www.pgsql.com/mission.html) states,\n> > our purpose (among several) is to provide funding for various project,\n> > whether they be Advertising or Programming. Although not currently\n> > available, but will be when the new site is up, there will be a set of\n> > pages off of http://www.pgsql.com that will provide a cleaner means of\n> > contribute financially towards having features implemented, as well as\n> > showing funds available for various projects. For instance, 25% of the\n> > revenue from Support Contracts will be ear-marked for stuff like\n> > Advertising and a General Pool that we can use to fund projects that we\n> > feel is important from a \"commercial deployment\" standpoint.\n> > \n> > \n> > Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org \n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> > \n> > \n> > \n> > \n> > \n> > ************\n> > \n> \n> \n> \n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 1 Jun 2000 06:03:09 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Oft Ask: How to contribute to PostgreSQL?"
},
{
"msg_contents": "On Thu, 1 Jun 2000, Louis-David Mitterrand wrote:\n\n> On Wed, May 31, 2000 at 09:23:27PM -0400, Bruce Momjian wrote:\n> > > 3. Mailing Lists. We use software that allows us to use remote sites for\n> > > 'mail relaying'. Basically, instead of our central server having to\n> > > service *all* remote addresses, it offloads email onto remote servers\n> > > to do the distribution. For intance, by dumping all email destined for\n> > > a subscribers in France to a server residing in France, the central\n> > > server has to send one email mesage \"Across the pond\", and let the\n> > > server in France handle the other servers. If you are interested in\n> > > providing a relay point, email [email protected] (me) for details on how\n> > > to get setup for this.\n> \n> FWIW this not as good an idea as it seems. I know of many .fr domains\n> that are hosted in the US. My own .ch is in St-Louis (MI), whereas some\n> clients' .com are hosted right here in Paris.\n> \n> This setup is the reason I was unable to get {-hackers,-general} list\n> traffic for a week because of a faulty \"relay\" for my Swiss .ch domain,\n> which apparently refused to relay back to the US where this domain\n> lives.\n\nno, actually, the problem was on my part ... the .ch domain admin had sent\nme an email about changing the machine it went through and it got lost in\nmy mailbox ...\n\n\n",
"msg_date": "Thu, 1 Jun 2000 09:58:35 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Oft Ask: How to contribute to PostgreSQL?"
},
{
"msg_contents": "I'd like to add\n\n1 1/2: Writing documentation. More, better, and clearer documentation is\nalways welcome and takes little special skill to make. If you find that\nthe documentation is not clear about a topic or you just figured out\nsomething that is not documented at all, please write up something and\ncontribute it. Submissions are not required to be in DocBook format --\nplain text is enough. The mailing list for documentation work is\[email protected].\n\n\nBruce Momjian writes:\n\n> > There are several ways that people can contribute to the PostgreSQL\n> > project, and, below, I'm going to try and list them...\n> > \n> > 1. Code. We have a TODO list available at\n> > http://www.postgresql.org/docs/todo.html, which lists enhancements that\n> > have been picked out as needed. Some of them take time to learn the\n> > intricacies of the code, some require no more then time. Contributing\n> > code, altho not the only way to contribute, is always one of the more\n> > valuable ways of improving any Open Source Project.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sat, 3 Jun 2000 01:47:53 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Oft Ask: How to contribute to PostgreSQL?"
}
] |
[
{
"msg_contents": "As I promised, I have written a small program called \"pg_ctl\" to\nstart/stop/restart postmaster. I have committed it into src/bin/pg_ctl.\nPlease remember to pull the latest postmaster.c.\n\no How to use\n\npg_ctl has four modes:\n\n1. startup mode\n\npg_ctl [-w][-D database_dir][-p path_to_postmaster][-o \"postmaster_opts\"] start\n\nstart postmaster. If -w is specified, pg_ctl will block until database\nis in the production mode. -D sets path to the database directory,\nwhich overrides the environment variable $PGDATA. pg_ctl finds\npostmaster.pid and other files under the directory. -p specifies the\npath to postmaster. If -p is not given, default path (generated as\n$(BINDIR)/postmaster while making pg_ctl from pg_ctl.sh) will be used.\nIf -o option is not given, pg_ctl will take options for postmaster\nfrom $PGDATA/postmaster.opts.default. Note that this file is not\ncurrently installed by \"make install.\" So you have to make it by hand\nor copy it from postmaster.opts that is made once pg_ctl starts\npostmaster. Sample postmaster.opts.default looks like:\n\npostmaster\n-S\n\nor you could write it in one line:\n\npostmaster -S\n\n2. stop mode\n\npg_ctl [-w][-D database_dir][-m s[mart]|f[ast]|i[mmediate]] stop\n\nstop postmaster. -m specifies database shutdown method. The default\nis \"-m smart\".\n\n3. restart mode\n\npg_ctl [-w][-D database_dir][-m s[mart]|f[ast]|i[mmediate]][-o \"postmaster_opts\"] restart\n\nstop postmaster, then start it again. Options to start postmaster are\ntaken from $PGDATA/postmaster.opts unless -o is given.\n\n4. status reporting mode\n\npg_ctl [-D database_dir] status\n\nreports status of postmaster. Currently reported information is\nrelatively limited. 
Hopefully we could add the functionality to report\nmore valuable info such as the number of backends running.\n\no sample session\n\nHere is an example session using pg_ctl:\n\n$ pg_ctl stop\t# stop postmaster\npostmaster successfully shut down.\n$ pg_ctl stop\t# cannot stop postmaster if it is not running\npg_ctl: Can't find /usr/local/src/pgsql/current/data/postmaster.pid.\nIs postmaster running?\n$ pg_ctl start\t# start postmaster\npostmaster successfully started up.\n$ pg_ctl status\t# status report\npg_ctl: postmaster is running (pid: 736)\noptions are:\n/usr/local/src/pgsql/current/bin/postmaster\n-p 5432\n-D /usr/local/src/pgsql/current/data\n-B 64\n-b /usr/local/src/pgsql/current/bin/postgres\n-N 32\n-S\n$ pg_ctl restart\t# stop postmaster then start it again\nWaiting for postmaster shutting down...done.\npostmaster successfully shut down.\npostmaster successfully started up.\n$ pg_ctl status\t\t# see how the pid is different from before\npg_ctl: postmaster is running (pid: 761)\noptions are:\n/usr/local/src/pgsql/current/bin/postmaster\n-p 5432\n-D /usr/local/src/pgsql/current/data\n-B 64\n-b /usr/local/src/pgsql/current/bin/postgres\n-N 32\n-S\n$ pg_ctl -o \"-S -B 1024 -N 128 -o -F\" restart\t# restart with different options\nWaiting for postmaster shutting down...done.\npostmaster successfully shut down.\npostmaster successfully started up.\n$ pg_ctl status\npg_ctl: postmaster is running (pid: 961)\noptions are:\n/usr/local/src/pgsql/current/bin/postmaster\n-p 5432\n-D /usr/local/src/pgsql/current/data\n-B 1024\n-b /usr/local/src/pgsql/current/bin/postgres\n-N 128\n-S\n-o '-F'\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 06 Dec 1999 18:11:44 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_ctl"
},
{
"msg_contents": "On Mon, 6 Dec 1999, Tatsuo Ishii wrote:\n\n> pg_ctl [-w][-D database_dir][-p path_to_postmaster][-o \"postmaster_opts\"] start\n\n> postmaster.pid and other files under the directory. -p specifies the\n> path to postmaster. If -p is not given, default path (generated as\n> $(BINDIR)/postmaster while making pg_ctl from pg_ctl.sh) will be used.\n\nMay I issue a complaint? The use of the configure time generated BINDIR\nfor finding binaries at run time is not only an explicitly discouraged\nabuse of the whole autoconf concept, it might potentially lead to big\nproblems when packages are built in temporary trees. I'm currently working\non a portable \"which\" alternative. When I get it done, I'll suggest that\none to you.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 6 Dec 1999 12:56:31 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_ctl"
},
{
"msg_contents": "> May I issue a complaint? The use of the configure time generated BINDIR\n> for finding binaries at run time is not only an explicitly discouraged\n> abuse of the whole autoconf concept, it might potentially lead to big\n> problems when packages are built in temporary trees. I'm currently working\n> on a portable \"which\" alternative. When I get it done, I'll suggest that\n> one to you.\n\nOk. let me know if you finish the work.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 07 Dec 1999 10:12:17 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_ctl"
}
] |
[
{
"msg_contents": "\n\nOn Linux there is now a project for raw I/O devices \n(http://oss.sgi.com/projects/rawio/). Is there any plan (for the far future)\nfor raw device support in PgSQL? (The TODO is quiet about this.)\n\n\t\t\t\t\t\tKarel\n\n----------------------------------------------------------------------\nKarel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nDocs: http://docs.linux.cz (big docs archive)\t\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n-----------------------------------------------------------------------\n\n",
"msg_date": "Mon, 6 Dec 1999 10:59:23 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "RAW I/O device"
},
{
"msg_contents": "> On Linux now exist project for raw I/O device\n> (http://oss.sgi.com/projects/rawio/). Exist any plan (for far future)\n> with raw device for PgSQL? (TODO be quiet for this.)\n\n Up to now we kept the storage manager overhead in the system.\n Actually there is no way to tell which storage manager to use\n for a particular table/index, so anything goes to the default\n which is the magnetic disk one that uses single files for\n each relation.\n\n There was a discussion about simplifying it, but the\n consensus was to let it as is because it is the base for a\n tablespace and/or raw device manager.\n\n AFAIK, noone is working on it, so it must be really FAR\n future. But the plan is still alive.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 6 Dec 1999 12:40:12 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RAW I/O device"
},
{
"msg_contents": "\n\n\nOn Mon, 6 Dec 1999, Jan Wieck wrote:\n\n> > On Linux now exist project for raw I/O device\n> > (http://oss.sgi.com/projects/rawio/). Exist any plan (for far future)\n> > with raw device for PgSQL? (TODO be quiet for this.)\n> \n> Up to now we kept the storage manager overhead in the system.\n> Actually there is no way to tell which storage manager to use\n> for a particular table/index, so anything goes to the default\n> which is the magnetic disk one that uses single files for\n> each relation.\n> \n> There was a discussion about simplifying it, but the\n> consensus was to let it as is because it is the base for a\n> tablespace and/or raw device manager.\n> \n> AFAIK, noone is working on it, so it must be really FAR\n> future. But the plan is still alive.\n\n I raise the question because the Linux kernel, with raw devices, is opening \na new way to a faster and better database engine. I know (and agree) \nthat it is not a priority for the next year(s?). But it is interesting, and\nit is probably good to remember it during development, and not to write (in future)\nfeatures which would close off this good way. \n\n\t\t\t\t\t\tKarel\n \n\n\n",
"msg_date": "Mon, 6 Dec 1999 13:47:18 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] RAW I/O device"
},
{
"msg_contents": "I may be out of place on this one, but remember that postgres runs on\nother systems besides linux. Making the DB work w/ one FS (and write the\nstorage code for it) seems pointless if we are still stuck on normal FSs\non other machines. \n\n- merlin\n\n> > On Linux now exist project for raw I/O device\n> > (http://oss.sgi.com/projects/rawio/). Exist any plan (for far future)\n> > with raw device for PgSQL? (TODO be quiet for this.)\n> \n> Up to now we kept the storage manager overhead in the system.\n> Actually there is no way to tell which storage manager to use\n> for a particular table/index, so anything goes to the default\n> which is the magnetic disk one that uses single files for\n> each relation.\n> \n> There was a discussion about simplifying it, but the\n> consensus was to let it as is because it is the base for a\n> tablespace and/or raw device manager.\n> \n> AFAIK, noone is working on it, so it must be really FAR\n> future. But the plan is still alive.\n\n\n\n\n ------++++======++++------\n Smith Computer Lab Administrator, Case Western Reserve University\n [email protected] | 216.368.5066 | http://home.cwru.edu/~bap\n ------++++======++++------\n\n\n",
"msg_date": "Mon, 6 Dec 1999 08:03:25 -0500 (EST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RAW I/O device"
},
{
"msg_contents": "> other systems besides linux. Making the DB work w/ one FS (and write the\n> storage code for it) seems pointless if we are still stuck on normal FSs\n> on other machines.\n\n\tYes, of course, but storing databases directly to RAW device - not\nthrough the filesystem - is one feature of modern DB engines...\n\n--------------------------------------------------------------------------\nIng. Pavel Janousek (PaJaSoft) FoNet, spol. s r. o.\nVyvoj software, sprava siti, Unix, Web, Y2K Anenska 11, 602 00 Brno\nE-mail: mailto:[email protected] Tel.: +420 5 4324 4749\nSMS: mailto:[email protected] Fax.: +420 5 4324 4751\nWWW: http://WWW.FoNet.Cz/ E-mail:\nmailto:[email protected]\n--------------------------------------------------------------------------\n",
"msg_date": "Mon, 06 Dec 1999 14:26:07 +0100",
"msg_from": "\"Ing. Pavel PaJaSoft Janousek\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RAW I/O device"
},
{
"msg_contents": "On Mon, 6 Dec 1999, Ing. Pavel PaJaSoft Janousek wrote:\n\n> > other systems besides linux. Making the DB work w/ one FS (and write the\n> > storage code for it) seems pointless if we are still stuck on normal FSs\n> > on other machines.\n> \n> \tYes, of course, but storing databases directly to RAW device - not\n> through the filesystem - is one feature of modern DB engines...\n\nActually, Oracle has been moving *away* from this...more recent versions\nof Oracle recommend using the Operating System file systems, since, in\nmost cases, the Operating System does a better job, and it's too difficult\nto have Oracle itself optimize internally for all the different variants\nthat it supports....\n\nAt work, we use Oracle extensively, and I sat down one day last year with\nour Oracle DBA to discuss exactly this...and, if I recall correctly, it\nwas prompted by a similar thread here...\n\nIf Linux is providing an Interface into the RAW file system, then this may\nchange things for Oracle, since it wouldn't have to \"learn\" all the\ndifferent OSs, as long as the API is the same across them all...and, my\nexperience with Linux is that the API for Linux will most likely be\ndifferent than everyone else's *roll eyes*\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 6 Dec 1999 10:44:55 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RAW I/O device"
},
{
"msg_contents": "The Hermit Hacker writes:\n> If Linux is providing an Interface into the RAW file system, then this may\n> change things for Oracle, since it wouldn't have to \"learn\" all the\n> different OSs, as long as the API is the same across them all...and, my\n> experience with Linux is that the API for Linux will most likely be\n> different then everyone else *roll eyes*\n\nThe system admin side is slightly different from others: raw devices\nhave their own major number and devices, /dev/rawN and you \"bind\" a\nraw device to any existing block device /dev/blockdev by doing\n # raw /dev/rawN /dev/blockdev\n\nHowever, the DBA side (or software) side isn't any different: provided\nyou access /dev/rawN *only* in sector chunks (i.e. multiples of 512\nbytes that are 512-byte aligned) then the software doesn't care\nwhether it's /dev/rawN or an ordinary block device. If PostgreSQL can\nguarantee (or be tweaked/enhanced to guarantee) that it only ever\nreads/writes in multiples of 512 byte chunks and never does anything\n\"weird\" (truncates, file-specific, ioctls, mmap, needing O_CREAT to\nstart with etc.) then it should be perfectly happy when presented with\na /dev/rawN instead of an ordinary file.\n\n--Malcolm\n\n-- \nMalcolm Beattie <[email protected]>\nUnix Systems Programmer\nOxford University Computing Services\n",
"msg_date": "Mon, 6 Dec 1999 17:25:29 +0000 (GMT)",
"msg_from": "Malcolm Beattie <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RAW I/O device"
},
{
"msg_contents": "Sorry for the previous note...I'd tried stopping and restarting\npostmaster and that didn't help, so I posted - this web site\ngets low but steady use and was suddenly acting weird on me.\n\nAnyway, a quick look at the source made it clear that Postgres\nwasn't at fault, as the node's a T_Query and apply_RIR_view\nclearly handles it.\n\nSo, I rebooted linux and it works fine now. \n\nSorry to bother folks...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 06 Dec 1999 11:52:10 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: view dying out of the blue..."
},
{
"msg_contents": "> I raise the question, because the linux kernel opening with raw-device \n> new way for a faster and better database engine. I know (and agree) \n> that it not is priority for next year(s?). But it is interesting, and\n> is prabably good remember it during development, and not write (in future)\n> features which close this good way. \n\nI would be very surprised to see any significant change in raw vs.\nfilesystem i/o on modern file systems, and I am sorry, but Linux ext2\ndoes not count as modern.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 6 Dec 1999 22:24:24 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RAW I/O device"
},
{
"msg_contents": "> On Mon, 6 Dec 1999, Ing. Pavel PaJaSoft Janousek wrote:\n> \n> > > other systems besides linux. Making the DB work w/ one FS (and write the\n> > > storage code for it) seems pointless if we are still stuck on normal FSs\n> > > on other machines.\n> > \n> > \tYes, of course, but storing databases directly to RAW device - not\n> > through the filesystem - is one feature of modern DB engines...\n> \n> Actually, Oracle has been moving *away* from this...more recent versions\n> of Oracle recommend using the Operating System file systems, since, in\n> most cases, the Operating System does a better job, and its too difficult\n> to have Oracle itself optimize internal for all the different variants\n> that it supports....\n\n\nDing, ding, ding. Give that man a cigar.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 6 Dec 1999 22:28:49 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RAW I/O device"
},
{
    "msg_contents": "\nOn Mon, 6 Dec 1999, Bruce Momjian wrote:\n\n> > I raise the question, because the linux kernel opening with raw-device \n> > new way for a faster and better database engine. I know (and agree) \n> > that it not is priority for next year(s?). But it is interesting, and\n> > is prabably good remember it during development, and not write (in future)\n> > features which close this good way. \n> \n> I would be very surprised to see any significant change in raw vs.\n> filesystem i/o on modern file systems, and I am sorry, but Linux ext2\n> does not count as modern.\n\nYes. ext2's limitations and shortcomings are an open secret, and using it\nfor raw storage would be a crazy idea. On a raw device you can implement a\ndata organization specific to the DB's demands; the advantage of raw is\nexactly that non-universal organization.\n\nRaw access is not only about the filesystem: it moves full control from the\nOS kernel to the DB (for example data caching - the kernel has no\ninformation about how/why/what to evict from the cache, but the DB has this\ninformation... etc).\n\n\t\t\t\t\t\tKarel \n\n",
"msg_date": "Tue, 7 Dec 1999 12:55:03 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] RAW I/O device"
}
] |
[
{
    "msg_contents": "> > On Linux now exist project for raw I/O device\n> > (http://oss.sgi.com/projects/rawio/). Exist any plan (for \n> far future)\n> > with raw device for PgSQL? (TODO be quiet for this.)\n> \n> Up to now we kept the storage manager overhead in the system.\n> Actually there is no way to tell which storage manager to use\n> for a particular table/index, so anything goes to the default\n> which is the magnetic disk one that uses single files for\n> each relation.\n> \n> There was a discussion about simplifying it, but the\n> consensus was to let it as is because it is the base for a\n> tablespace and/or raw device manager.\n> \n> AFAIK, noone is working on it, so it must be really FAR\n> future. But the plan is still alive.\n\nI have done some work in this area - mostly to recognize new keywords\n(CREATE TABLESPACE ...) and I have also created a new storage manager, but I\nhave stopped it due to the lack of free time ;-). I can send a patch for 6.5.1.\n\nIn my opinion, the main problem is with accessing shared system tables (like\npg_database) and with including a storage manager type in the tuples of\npg_database. But I think it is possible to solve these problems.\n\n\t\tDan\n",
"msg_date": "Mon, 6 Dec 1999 13:59:36 +0100 ",
"msg_from": "Horak Daniel <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] RAW I/O device"
}
] |
[
{
"msg_contents": "This example forwarded from pgsql-sql:\n\ncreate table x (a char(20));\nCREATE\ncreate table y (b varchar(20));\nCREATE\nselect * from x a, y b where a.a = b.b;\nERROR: Unable to identify an operator '=' for types 'bpchar' and 'varchar'\n You will have to retype this query using an explicit cast\n\nOK so far, but:\n\nselect * from x a, y b where text(a.a) = text(b.b);\nERROR: Unable to identify an operator '=' for types 'bpchar' and 'varchar'\n You will have to retype this query using an explicit cast\nselect * from x a, y b where a.a::text = b.b::text;\nERROR: Unable to identify an operator '=' for types 'bpchar' and 'varchar'\n You will have to retype this query using an explicit cast\n\n6.5.3 and current sources behave the same.\n\nI believe this is an artifact of a misbehavior that I've noticed before:\nwhen the parser decides that an explicit typecast or type conversion\nfunction is a no-op (because of binary compatibility of the types\ninvolved), it simply throws away the conversion in toto. Seems to me\nthat it should relabel the subexpression as having the result type of\nthe requested conversion. Otherwise, subsequent processing will use\noperators appropriate to the original type not the converted type,\nwhich presumably is exactly what the user didn't want.\n\nThomas, any comments on this?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Dec 1999 11:47:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Binary-compatible type follies"
}
] |
[
{
    "msg_contents": "Hello,\n\nI have this problem with PostgreSQL 6.5.2:\n\ntable timelog199911 has \n\nlogs=> select count(*) from timelog199911;\n count\n------\n208749\n(1 row)\n\n\nlogs=> select distinct confid \nlogs-> from timelog199910\nlogs-> where\nlogs-> confid IS NOT NULL;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is \nimpossible. Terminating.\n\nThe logged message in stderr (of postmaster) is \n\nFATAL 1: Memory exhausted in AllocSetAlloc()\n\nThe process size grows to 76 MB (this is somehow a limit of Postgres on \nBSD/OS, but this is not my question now).\n\nWhy would it require so much memory? The same query without distinct is \nprocessed fast, but I don't need that much data back in the application.\nThe format is:\n\nTable = timelog\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| loginname | text | var |\n| site | varchar() | 16 |\n| start_time | datetime | 8 |\n| elapsed | timespan | 12 |\n| port | text | var |\n| valid | bool default 't' | 1 |\n| ipaddress | inet | var |\n| confid | int4 | 4 |\n| session_id | text | var |\n+----------------------------------+----------------------------------+-------+\nIndices: timelog_loginname_idx\n timelog_start_time_idx\n\n(indexes are btree on the indicated fields).\n\nWeird, isn't it? \n\nDaniel\n\n",
"msg_date": "Mon, 06 Dec 1999 18:59:41 +0200",
"msg_from": "Daniel Kalchev <[email protected]>",
"msg_from_op": true,
"msg_subject": "memory problem again"
},
{
"msg_contents": "Daniel Kalchev <[email protected]> writes:\n> I have this problem with PostgreSQL 6.5.2:\n\n> logs=> select distinct confid \n> logs-> from timelog199910\n> logs-> where\n> logs-> confid IS NOT NULL;\n> pqReadData() -- backend closed the channel unexpectedly.\n\n> The logged message in stderr (of postmaster) is \n> FATAL 1: Memory exhausted in AllocSetAlloc()\n\nOdd. I can't replicate this here. (I'm using 6.5.3, but I doubt that\nmatters.) There must be some factor involved that you haven't told us.\nYou don't have any triggers or rules on the table, do you?\n\nHas anyone else seen anything like this?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Dec 1999 20:53:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] memory problem again "
},
{
    "msg_contents": "Tom... this is getting even more weird:\n\nlogs=> select distinct confid from timelog199911;\npqReadData() -- backend closed the channel unexpectedly.\n[...]\n\nNow this:\nlogs=> \\copy timelog199911 to timelog199911\nSuccessfully copied.\nlogs=> drop table timelog199911;\nDROP\nlogs=> CREATE TABLE \"timelog199911\" (\n \"loginname\" text,\n \"site\" character varying(16),\n \"start_time\" datetime,\n \"elapsed\" timespan,\n \"port\" text,\n \"valid\" bool,\n \"ipaddress\" inet,\n \"confid\" int4,\n \"session_id\" text);\n[...]\nCREATE\nlogs=> CREATE INDEX \"timelog199911_loginname_idx\" on \"timelog199911\" using \nbtree ( \"loginname\" \"text_ops\" );\nCREATE\nlogs=> \\copy timelog199911 from timelog199911\n(ok, I know it's smarter to build the index after copying in the data)\nSuccessfully copied.\n\nlogs=> select distinct confid from timelog199911;\nsbrk: grow failed, return = 12\nsbrk: grow failed, return = 12\npqReadData() -- backend closed the channel unexpectedly.\n[...]\n\nlogs=> select confid from timelog199911;\n[...]\n(208749 rows)\n\nWeird!\n\nDaniel\n\n>>>Tom Lane said:\n > Daniel Kalchev <[email protected]> writes:\n > > I have this problem with PostgreSQL 6.5.2:\n > \n > > logs=> select distinct confid \n > > logs-> from timelog199910\n > > logs-> where\n > > logs-> confid IS NOT NULL;\n > > pqReadData() -- backend closed the channel unexpectedly.\n > \n > > The logged message in stderr (of postmaster) is \n > > FATAL 1: Memory exhausted in AllocSetAlloc()\n > \n > Odd. I can't replicate this here. (I'm using 6.5.3, but I doubt that\n > matters.) There must be some factor involved that you haven't told us.\n > You don't have any triggers or rules on the table, do you?\n > \n > Has anyone else seen anything like this?\n > \n > \t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Dec 1999 10:14:07 +0200",
"msg_from": "Daniel Kalchev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] memory problem again "
},
{
"msg_contents": "I found out how to resolve this problem, yet it does not explain why it \nhappens anyway!\n\nI had postmaster started with this script:\n\nunlimit\npostmaster -D/usr/local/pgsql/data -B 256 -i -o \"-e -S 8192\" >> \n/usr/local/pgsql/errlog 2>&1 &\n\nRemoving all the parameters to postmaster\n\npostmaster -D/usr/local/pgsql/data -i -o \"-e\" >> /usr/local/pgsql/errlog 2>&1 &\n\nmade it work....\n\nPerhaps some memory management problem? I guess the -S option is the culprit \nhere, but this machine has 256 MB RAM and actually never swaps (yet).\n\nHope this helps somehow.\n\nDaniel\n\n>>>Tom Lane said:\n > Daniel Kalchev <[email protected]> writes:\n > > I have this problem with PostgreSQL 6.5.2:\n > \n > > logs=> select distinct confid \n > > logs-> from timelog199910\n > > logs-> where\n > > logs-> confid IS NOT NULL;\n > > pqReadData() -- backend closed the channel unexpectedly.\n > \n > > The logged message in stderr (of postmaster) is \n > > FATAL 1: Memory exhausted in AllocSetAlloc()\n > \n > Odd. I can't replicate this here. (I'm using 6.5.3, but I doubt that\n > matters.) There must be some factor involved that you haven't told us.\n > You don't have any triggers or rules on the table, do you?\n > \n > Has anyone else seen anything like this?\n > \n > \t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Dec 1999 10:25:56 +0200",
"msg_from": "Daniel Kalchev <[email protected]>",
"msg_from_op": true,
"msg_subject": "(resolution?) Re: [HACKERS] memory problem again "
},
{
"msg_contents": "Daniel Kalchev <[email protected]> writes:\n> I found out how to resolve this problem, yet it does not explain why it \n> happens anyway!\n> I had postmaster started with this script:\n> postmaster -D/usr/local/pgsql/data -B 256 -i -o \"-e -S 8192\" >> \n> /usr/local/pgsql/errlog 2>&1 &\n> Removing all the parameters to postmaster\n> postmaster -D/usr/local/pgsql/data -i -o \"-e\" >> /usr/local/pgsql/errlog 2>&1 &\n> made it work....\n> Perhaps some memory management problem? I guess the -S option is the culprit \n> here, but this machine has 256 MB RAM and actually never swaps (yet).\n\n8192 * 1K = 8 meg workspace per sort sure doesn't sound unreasonable.\nThere is a sort going on under-the-hood in your SELECT DISTINCT (it's\nimplemented in the same fashion as \"sort | uniq\"), but under ordinary\ncircumstances that doesn't cause any problem. I can see a couple of\npossibilities:\n\t1. You have a very small kernel limit on per-process data space,\n\t probably 8M or at most 16M.\n\t2. Something is broken in the sort code that makes it fail to\n\t obey the -S limit.\nI favor #1, since if #2 were true we'd probably have noticed it before.\n\nYou might try experimenting with a couple of different -S values (-B\nshouldn't make any difference here, it just affects the size of the\nshared-memory-block request), and watching the size of the backend\nprocess with top(1) or something like it.\n\nIn the meantime, find out where kernel parameters are set on your\nsystem, and look at what MAXDSIZ is set to...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Dec 1999 11:02:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (resolution?) Re: [HACKERS] memory problem again "
},
{
    "msg_contents": "Tom,\n\nI think #2 is more likely, because:\n\nthe kernel is compiled with a large enough data size:\n\n# support for larger processes and number of childs\noptions \"DFLDSIZ=\\(128*1024*1024\\)\"\noptions \"MAXDSIZ=\\(256*1024*1024\\)\"\noptions \"CHILD_MAX=256\"\noptions \"OPEN_MAX=256\"\noptions \"KMAPENTRIES=4000\" # Prevents kmem malloc errors !\noptions \"KMEMSIZE=\\(32*1024*1024\\)\"\n\nthe default postgres account limits are:\n\ncoredumpsize unlimited\ncputime unlimited\ndatasize 131072 kbytes\nfilesize unlimited\nmaxproc 256\nmemorylocked 85380 kbytes\nmemoryuse 256136 kbytes\nopenfiles 128\nstacksize 2048 kbytes\n\nI run the postmaster after unlimit, which sets limits thus:\n\ncoredumpsize unlimited\ncputime unlimited\ndatasize 262144 kbytes\nfilesize unlimited\nmaxproc 4116\nmemorylocked 256140 kbytes\nmemoryuse 256136 kbytes\nopenfiles 13196\nstacksize 262144 kbytes\n\nI will do some experimentation with the -S flag to see how it works.\n\nBTW, this postgres is compiled with a default of 64 backends - I recently saw a\nnote here that this may interfere with the -S option somehow....\n\n(another not related bug, but still on memory allocation)\nStill - this does not explain why postgres cannot allocate more than 76 MB \n(according to top) on BSD/OS (never did, actually - any previous version too), \nwhile a simple malloc(1 MB) loop allocates up to the process limit.\n\nMaybe at some time postgres tries to allocate a 'larger' chunk, which the BSD/OS \nmalloc does not like?\n\nDaniel\n\n>>>Tom Lane said:\n > Daniel Kalchev <[email protected]> writes:\n > > I found out how to resolve this problem, yet it does not explain why it \n > > happens anyway!\n > > I had postmaster started with this script:\n > > postmaster -D/usr/local/pgsql/data -B 256 -i -o \"-e -S 8192\" >> \n > > /usr/local/pgsql/errlog 2>&1 &\n > > Removing all the parameters to postmaster\n > > postmaster -D/usr/local/pgsql/data -i -o \"-e\" >> /usr/local/pgsql/errlog 2>&1 &\n > > made it work....\n > > Perhaps some memory management problem? I guess the -S option is the culprit\n > > here, but this machine has 256 MB RAM and actually never swaps (yet).\n > \n > 8192 * 1K = 8 meg workspace per sort sure doesn't sound unreasonable.\n > There is a sort going on under-the-hood in your SELECT DISTINCT (it's\n > implemented in the same fashion as \"sort | uniq\"), but under ordinary\n > circumstances that doesn't cause any problem. I can see a couple of\n > possibilities:\n > \t1. You have a very small kernel limit on per-process data space,\n > \t probably 8M or at most 16M.\n > \t2. Something is broken in the sort code that makes it fail to\n > \t obey the -S limit.\n > I favor #1, since if #2 were true we'd probably have noticed it before.\n > \n > You might try experimenting with a couple of different -S values (-B\n > shouldn't make any difference here, it just affects the size of the\n > shared-memory-block request), and watching the size of the backend\n > process with top(1) or something like it.\n > \n > In the meantime, find out where kernel parameters are set on your\n > system, and look at what MAXDSIZ is set to...\n > \n > \t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Dec 1999 18:13:42 +0200",
"msg_from": "Daniel Kalchev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: (resolution?) Re: [HACKERS] memory problem again "
},
{
    "msg_contents": "> (another not related bug, but still on memory allocation)\n> Still - this does not explain why postgres cannot allocated more than 76 MB \n> (according to top) on BSD/OS (never did, actually - any previous version too), \n> while a simple malloc(1 MB) loop allocates up to the process limit.\n> \n> Maybe at some time postrges tries to allocate 'larger' chunk, which the BSD/OS \n> malloc does not like?\n> \n\nYou can easily put in elog(NOTICE...) and dump out the allocations to\nsee what is being requested. It is also possible the top display is not\naccurate in some way. Does ps with vm flags show this too?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 7 Dec 1999 18:16:17 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (resolution?) Re: [HACKERS] memory problem again"
},
{
"msg_contents": "Daniel Kalchev <[email protected]> writes:\n> I run the postmaster after unlimit, which sets limits thus:\n> datasize 262144 kbytes\n\nOh well, so much for the small-DSIZ theory. But I still don't much\ncare for the other theory (sort ignores -S) because (a) I can't\nreproduce any such behavior here, (b) I've been through that code\nrecently and didn't see anything that looked like it would cause\nthat behavior, and (c) if it were true then we ought to be seeing\nmore complaints.\n\nI think there's probably some platform-specific issue that's causing\nthe misbehavior you see, but I'm at a loss to guess what it is.\nAnyone have any ideas?\n\n> BTW, this postgres is compiled with default of 64 backends - I saw recently \n> note here that this may interfere with the -S option somehow....\n\nI must've missed that --- I don't know any reason for number of backends\nto interfere with -S, because -S just sets the amount of memory that\nany one backend thinks it can expend for local working storage (per\nsort or hash node). Can you recall where/when this discussion was?\n\n> (another not related bug, but still on memory allocation)\n> Still - this does not explain why postgres cannot allocated more than\n> 76 MB (according to top) on BSD/OS (never did, actually - any previous\n> version too), while a simple malloc(1 MB) loop allocates up to the\n> process limit.\n\nThat does seem odd. Could it be that the shared memory segment used\nby Postgres gets placed at 64M or so in your process's virtual address\nspace, thus preventing the malloc arena from expanding past that point?\nIf so, is there anything we can do to force a higher placement?\n\n> Maybe at some time postrges tries to allocate 'larger' chunk, which\n> the BSD/OS malloc does not like?\n\nThere is some code in aset.c that asks for larger and larger chunks,\nbut it should fall back to asking for a smaller chunk if it can't\nget a bigger one. 
More to the point, the sort operation invoked by\nSELECT DISTINCT shouldn't ask for more than (roughly) your -S setting.\nSo I'm still clueless where the problem is :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Dec 1999 02:47:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (resolution?) Re: [HACKERS] memory problem again "
}
] |
[
{
"msg_contents": "Hi,\n\n I just committed a patch that turns on FOREIGN KEY. Thus,\n REFERENCES used in CREATE TABLE now automatically creates the\n appropriate constraint triggers. The implementation also\n supports omitting the PK column definition, if the\n corresponding columns should be the PRIMARY KEY of the\n referenced table.\n\n Also I completed some more of the generic trigger procs. For\n MATCH FULL, the key existence check in PK table and these\n actions are completed:\n\n ON DELETE RESTRICT\n ON DELETE CASCADE\n ON UPDATE RESTRICT\n ON UPDATE CASCADE\n\n Still missing are the SET NULL and SET DEFAULT actions. The\n former is easy and will follow soon, the latter looks tricky\n if the implementation should support ALTER TABLE/DEFAULT\n (what it IMHO must even if that ALTER TABLE isn't implemented\n yet).\n\n Anyway, I ran into some shift/reduce problem in the main\n parser. The syntax according to SQL3 says\n\n <constraint attributes> ::=\n <constraint check time> [ [ NOT ] DEFERRABLE ]\n | [ NOT ] DEFERRABLE [ <constraint check time> ]\n\n <constraint check time> defines INITIALLY DEFERRED/IMMEDIATE\n and defaults to IMMEDIATE.\n\n If I allow the <constraint attributes> in column constraints,\n I get 2 shift/reduce conflicts. Seems the syntax interferes\n with NOT NULL. Actually I commented that part out, so the\n complete syntax is available only for table constraints, not\n on the column level.\n\n Could some yacc-guru please take a look at it?\n\n Another interesting question is about inheritance. If a\n REFERENCES constraint exists for a table, must another table,\n inheriting this one, also get all the FK checks applied?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 6 Dec 1999 19:20:10 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "FOREIGN KEY and shift/reduce"
},
{
    "msg_contents": "I have the following table and view:\n\n\ncreate table users (\n user_id integer not null primary key,\n first_names varchar(50) not null,\n last_name varchar(50) not null,\n password varchar(30) not null,\n email varchar(50) not null unique,\n\n census_rights_p boolean default 'f',\n locale_rights_p boolean default 'f',\n admin_rights_p boolean default 'f',\n\n -- to suppress email alerts\n on_vacation_until date,\n\n -- set when user reappears at site\n last_visit datetime,\n -- this is what most pages query against (since the above column\n -- will only be a few minutes old for most pages in a session)\n second_to_last_visit datetime,\n\n registration_date date,\n registration_ip varchar(50),\n user_state varchar(100) check(user_state is null or\nuser_state in ('need_email_verification_and_admin_approv',\n'need_admin_approv', 'need_email_verification', 'rejected', 'authorized',\n'banned', 'deleted')\n),\n deleted_p boolean default 'f',\n banned_p boolean default 'f',\n -- who and why this person was banned\n banning_user integer,\n banning_note varchar(4000),\n portrait_loaded boolean default 'f',\n portrait_type varchar(10) default ''\n);\n\n-- Create an \"alert table\" view of just those users who should\n-- be sent e-mail alerts.\n\ncreate view users_alertable\nas\nselect *\n from users\n where (on_vacation_until is null or\n on_vacation_until < 'now'::date)\n and (deleted_p = 'f');\n\nThis has been working for months, just fine. I've been porting over a bunch\nmore stuff from Oracle to this Postgres-based system, and bam! Now any\nselect from the view dies with:\n\nunknown node tag 600 in apply_RIR_view\n\nI've tried dropping and rebuilding the table and view in a test database\nand the problem remains. I recall running into problems with other\noperations many moons ago, where a particular node type wasn't being\nhandled by a particular operator (the ones I'd seen previously were\nfixed by the excellent 6.5.* versions). \n\nIs this a similar case? I may do a little digging myself tonight, but\nthought I'd ask to see if this rings a bell with anyone. It's a bit\nstrange because this view's been working great on this table for so\nlong. I added a couple of extra columns to the table recently but\nthe view worked immediately afterwards. The stuff I've been porting\ncreates views willy-nilly and it's almost like there's an interaction\ntaking place, but that doesn't seem right.\n\nIt fails in the same manner if I simply declare the view as:\n\ncreate view users_alertable as select * from users;\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 06 Dec 1999 11:24:43 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "A view just stopped working out of the blue..."
},
{
"msg_contents": "Don Baccus wrote:\n\n> This has been working for months, just fine. I've been porting over a bunch\n> more stuff from Oracle to this Postgres-based system, and bam! Now any\n> select from the view dies with:\n>\n> unknown node tag 600 in apply_RIR_view\n\n Node tag 600 is T_Query.\n\n> I've tried dropping and rebuilding the table and view in a test database\n> and the problem remains. I recall running into problems with other\n> operations many moons ago, where a particular node type wasn't being\n> handled by a particular operator (the ones I'd seen previously were\n> fixed by the excellent 6.5.* versions).\n>\n> Is this a similar case? I may do a little digging myself tonight, but\n> thought I'd ask to see if this rings a bell with anyone. It's a bit\n> strange because this view's been working great on this table for so\n> long. I added a couple of extra columns to the table recently but\n> the view worked immediately afterwards. The stuff I've been porting\n> creates views willy-nilly and it's almost like there's an interaction\n> taking place, but that doesn't seem right.\n>\n> It fails in the same manner if I simply declare the view as:\n>\n> create view users_alertable as select * from users;\n\n I assume the column with the IN constraint is either new or\n changed recently. Seems the system generates some subselect\n for that and the rewriter is unable to handle this case.\n\n I don't have the time to tackle that, just some hint to push\n you into the right direction.\n\n Be careful if hacking inside the rewriter, it's a very\n sensitive area!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 6 Dec 1999 20:34:23 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] A view just stopped working out of the blue..."
},
{
    "msg_contents": "\nOn Mon, 6 Dec 1999, Jan Wieck wrote:\n\n> Hi,\n> \n> Another interesting question is about inheritance. If a\n> REFERENCES constraint exists for a table, must another table,\n> inheriting this one, also get all the FK checks applied?\n\nHi,\n\nThis inspires a further question from me: what exactly is inherited in\nPostgreSQL?\n\nIMHO it is a problem to make FOREIGN KEY inheritable when there is no support\nfor inheriting UNIQUE or PRIMARY KEY constraints. (Or is that in the CVS\ntree?)\n\nPostgreSQL 6.5.3:\n\ntest=> create table aaa (x int4 UNIQUE);\nNOTICE: CREATE TABLE/UNIQUE will create implicit index 'aaa_x_key' for\ntable 'aaa'\nCREATE\ntest=> create table y () inherits (aaa);\nCREATE\ntest=> insert into y values (1);\nINSERT 590807 1\ntest=> insert into y values (1);\nINSERT 590808 1\n\n\t\t\t\t\t\t\tKarel\n\nPS. I am very much looking forward to FOREIGN KEY; with this feature life\n will be easier and PgSQL will tread more closely on the heels of the\n non-GNU engines. \n\n",
"msg_date": "Mon, 6 Dec 1999 21:59:54 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] FOREIGN KEY and shift/reduce"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Jan Wieck\n> \n> Hi,\n> \n> I just committed a patch that turns on FOREIGN KEY. Thus,\n> REFERENCES used in CREATE TABLE now automatically creates the\n> appropriate constraint triggers. The implementation also\n> supports omitting the PK column definition, if the\n> corresponding columns should be the PRIMARY KEY of the\n> referenced table.\n> \n> Also I completed some more of the generic trigger procs. For\n> MATCH FULL, the key existence check in PK table and these\n> actions are completed:\n> \n> ON DELETE RESTRICT\n> ON DELETE CASCADE\n> ON UPDATE RESTRICT\n> ON UPDATE CASCADE\n>\n\nNice.\nI tried a little.\n\n< session 1 >\n=> create table ri1 (id int4 primary key);\n\tNOTICE: CREATE TABLE/PRIMARY KEY will create implicit\n\tindex 'ri1_pkey' for table 'ri1'\n\tCREATE\n=> insert into ri1 values (1);\n\tINSERT 92940 1\n=>create table ri2 (id int4 references ri1 match full on delete restrict); \n\tNOTICE: CREATE TABLE will create implicit trigger(s) for\n\tFOREIGN KEY check(s)\n\tCREATE\n=> begin;\n\tBEGIN\n=> delete from ri1 where id=1;\n\tDELETE 1 \n\n< session 2 >\n=> insert into ri2 values (1);\n\tINSERT 92960 1\n\n< session 1 >\n=> commit;\n\tEND\n=> select * from ri1;\n\tid\n\t--\n\t(0 rows)\n=> select * from ri2;\n\tid\n\t--\n\t 1\n\t(1 row)\n\nIs this a temporary behavior ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 7 Dec 1999 13:03:28 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] FOREIGN KEY and shift/reduce"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n\n> Nice.\n> I tried a little.\n>\n> < session 1 >\n> => create table ri1 (id int4 primary key);\n> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit\n> index 'ri1_pkey' for table 'ri1'\n> CREATE\n> => insert into ri1 values (1);\n> INSERT 92940 1\n> =>create table ri2 (id int4 references ri1 match full on delete restrict);\n> NOTICE: CREATE TABLE will create implicit trigger(s) for\n> FOREIGN KEY check(s)\n> CREATE\n> => begin;\n> BEGIN\n> => delete from ri1 where id=1;\n> DELETE 1\n>\n> < session 2 >\n> => insert into ri2 values (1);\n> INSERT 92960 1\n\nOutch,\n\n I see the shared visibility conflict. So the CHECK constraint\n trigger must get an exclusive lock somehow - I think an\n internal \"FOR UPDATE OF\" can do it - will try.\n\n\nThanks, Jan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 7 Dec 1999 05:33:14 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] FOREIGN KEY and shift/reduce"
},
{
"msg_contents": "> If I allow the <constraint attributes> in column constraints,\n> I get 2 shift/reduce conflicts. Seems the syntax interferes\n> with NOT NULL. Actually I commented that part out, so the\n> complete syntax is available only for table constraints, not\n> on the column level.\n> Could some yacc-guru please take a look at it?\n\nGuru I'm not, but I should be able to track it down if it isn't fixed\nwhen I *finally* get time to finish the join syntax. Maybe a few\nweeks...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 07 Dec 1999 05:16:06 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] FOREIGN KEY and shift/reduce"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n\n> Nice.\n> I tried a little.\n>\n> [...]\n> => begin;\n> BEGIN\n> => delete from ri1 where id=1;\n> DELETE 1\n>\n> < session 2 >\n> => insert into ri2 values (1);\n> INSERT 92960 1\n>\n> < session 1 >\n> => commit;\n> END\n> => select * from ri1;\n> id\n> --\n> (0 rows)\n> => select * from ri2;\n> id\n> --\n> 1\n> (1 row)\n>\n> Is this a temporary behavior ?\n\n Fixed.\n\n Session 2 waits now until session 1 ends transaction.\n\n I'm thinking about another enhancement to the regression test\n now. Something where at least two sessions can run queries\n in a predefined order. Otherwise, something like the above\n cannot be checked during regression.\n\n I'm not sure how that can be done with a standard shell, and\n that's a must. Maybe something using named pipes and so -\n will play around a little.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 7 Dec 1999 16:42:53 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] FOREIGN KEY and shift/reduce"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> I'm thinking about another enhancement to the regression test\n> now. Something where at least two sessions can run queries\n> in a predefined order. Otherwise, something like the above\n> cannot be checked during regression.\n\nYes, we could really use something like that.\n\n> I'm not sure how that can be done with a standard shell, and\n> that's a must. Maybe something using named pipes and so -\n> will play around a little.\n\nWould probably be more portable to write a little bit of C code that\nopens several libpq connections and reads a script file with interleaved\ncommands to send to the different connections. OTOH, by the time you\ngot done, it might have much of psql in it (certainly at least the\ndisplay code).\n\nPeter, in your slicing and dicing of psql, did you end up with anything\nthat might make this a feasible approach?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Dec 1999 12:52:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Parallel regress tests (was Re: FOREIGN KEY and shift/reduce)"
},
{
"msg_contents": "On Tue, 7 Dec 1999, Tom Lane wrote:\n\n> [email protected] (Jan Wieck) writes:\n> > I'm thinking about another enhancement to the regression test\n> > now. Something where at least two sessions can run queries\n> > in a predefined order. Otherwise, something like the above\n> > cannot be checked during regression.\n\n> Peter, in your slicing and dicing of psql, did you end up with anything\n> that might make this a feasible approach?\n\nUm, you could call another psql from within psql, like so:\n\n/* psql script */\ncreate this\nselect that\n\\! psql -f 'second-script'\nselect more\n\nThat satisfies the requirement of two separate sessions and a predefined\norder. I haven't actually tried it, but if it happens to not work as\ndesired, it could certainly be fixed.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Tue, 7 Dec 1999 19:09:50 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and\n\tshift/reduce)"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> Peter, in your slicing and dicing of psql, did you end up with anything\n>> that might make this a feasible approach?\n\n> Um, you could call another psql from within psql, like so:\n\n> /* psql script */\n> create this\n> select that\n> \\! psql -f 'second-script'\n> select more\n\n> That satisfies the requirement of two separate sessions and a predefined\n> order.\n\nI assume that the \\! command won't continue until the subjob exits?\nIf so, this doesn't give us any way to verify that query A will wait for\nquery B to finish ... at least not without locking up the test...\n\nAnother possible approach is to accept that a parallel multi-backend\ntest lashup *doesn't* have to run on every single system that Postgres\nruns on, if we keep it separate from the standard regress tests.\nFor example, it's not much work to build a parallel test driver in Perl\n(I have done it), and I think that an auxiliary test package that\nrequires Perl would be acceptable.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Dec 1999 13:21:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and\n\tshift/reduce)"
},
{
"msg_contents": "On Tue, 7 Dec 1999, Tom Lane wrote:\n\n> \n> > /* psql script */\n> > create this\n> > select that\n> > \\! psql -f 'second-script'\n> > select more\n> \n> > That satisfies the requirement of two separate sessions and a predefined\n> > order.\n> \n> I assume that the \\! command won't continue until the subjob exits?\n> If so, this doesn't give us any way to verify that query A will wait for\n> query B to finish ... at least not without locking up the test...\n\n\\! psql -f 'second-script' &\n\nwill do it. You may wanna redirect your output.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 7 Dec 1999 13:37:50 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and\n\tshift/reduce)"
},
{
"msg_contents": "On Tue, 7 Dec 1999, Tom Lane wrote:\n\n> > Um, you could call another psql from within psql, like so:\n> \n> > /* psql script */\n> > create this\n> > select that\n> > \\! psql -f 'second-script'\n> > select more\n> \n> > That satisfies the requirement of two separate sessions and a predefined\n> > order.\n> \n> I assume that the \\! command won't continue until the subjob exits?\n> If so, this doesn't give us any way to verify that query A will wait for\n> query B to finish ... at least not without locking up the test...\n\nI'm kind of losing you here. You want parallel execution *and* a\npredefined order of execution? In my (limited) book, those contradict each\nother a little bit. If it helps you can also do this:\n\\! psql -f 'second-script' >& output &\nand the thus invoked script will happily continue executing even after you\nquit the first psql.\n\nI guess you could also do some simple synchronization things, like have\nthe second psql wait on a file to spring into existence:\n/* second-script */\n\\! while [ ! -f /tmp/lock.file ]; do ;; done\n\\! rm /tmp/lock.file\n\nKind of like a simple semaphore. Isn't that what you are getting at?\n\nIn the overall view of things it almost seems like these kind of tests\nneed a human eye supervising them, because how do really determine \"query\nA waits for query B to finish\" otherwise?\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 7 Dec 1999 19:40:49 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and\n\tshift/reduce)"
},
{
    "msg_contents": "Peter Eisentraut wrote:\n\n> On Tue, 7 Dec 1999, Tom Lane wrote:\n>\n> > I assume that the \\! command won't continue until the subjob exits?\n> > If so, this doesn't give us any way to verify that query A will wait for\n> > query B to finish ... at least not without locking up the test...\n>\n> I'm kind of losing you here. You want parallel execution *and* a\n> predefined order of execution? In my (limited) book, those contradict each\n> other a little bit. If it helps you can also do this:\n> \\! psql -f 'second-script' >& output &\n> and the thus invoked script will happily continue executing even after you\n> quit the first psql.\n\n    Yes, we mean controlled order of execution across multiple\n    backends. And a script invoked in background, like the above,\n    doesn't do the trick - and BTW we already have that kind of\n    testing.\n\n> I guess you could also do some simple synchronization things, like have\n> the second psql wait on a file to spring into existence:\n> /* second-script */\n> \\! while [ ! -f /tmp/lock.file ]; do ;; done\n> \\! rm /tmp/lock.file\n>\n> Kind of like a simple semaphore. Isn't that what you are getting at?\n\n    Kind of, but wasting CPU while waiting. OTOH, some sleep(1)\n    inside the loop would slow down to a certain degree.\n\n    As usual, Tom and I had similar ideas. A little amount of C\n    code should do it. Only that I would like to use pipes\n    to/from psql instead of dealing with libpq and formatting\n    myself.\n\n    One problem I noticed so far is that the new psql (in\n    contrast to the v6.5 one) doesn't flush its output when\n    doing it to a pipe. To control that one backend finished\n    execution of a query, I thought to send a special comment\n    line and wait for it to be echoed back. This would only work\n    if psql flushes.\n\n    Peter, could you please add some switch to psql that\n    activates an fflush() whenever psql would like to read more\n    input commands. In the meantime I can start to write the\n    driver using the v6.5 psql.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 7 Dec 1999 19:59:15 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and"
},
{
    "msg_contents": "On Tue, 7 Dec 1999, Jan Wieck wrote:\n\n> > I guess you could also do some simple synchronization things, like have\n> > the second psql wait on a file to spring into existence:\n> > /* second-script */\n> > \\! while [ ! -f /tmp/lock.file ]; do ;; done\n> > \\! rm /tmp/lock.file\n> >\n> > Kind of like a simple semaphore. Isn't that what you are getting at?\n> \n> Kind of, but wasting CPU while waiting. OTOH, some sleep(1)\n> inside the loop would slow down to a certain degree.\n\nWell, we're testing and not benchmarking...\n\n> code should do it. Only that I would like to use pipes\n> to/from psql instead of dealing with libpq and formatting\n> myself.\n\nIf you like, the print routines in psql are completely isolated from the\nrest of the code and you can just #include and use them.\n\n> Peter, could you please add some switch to psql that\n> activates an fflush() whenever psql would like to read more\n> input commands. In the meantime I can start to write the\n> driver using the v6.5 psql.\n\nI'm going to go on a flush hunt...\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Tue, 7 Dec 1999 22:56:51 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> Peter, could you please add some switch to psql that\n>> activates an fflush() whenever psql would like to read more\n>> input commands. In the meantime I can start to write the\n>> driver using the v6.5 psql.\n\n> I'm going to go an a flush hunt...\n\nOffhand I see no reason why psql shouldn't *always* fflush its output\nbefore reading a new command. It'd waste a few microseconds per command\nwhen reading from a file, but that'd hardly be noticeable in the great\nscheme of things. Adding another switch is just offering another way\nfor people to make a mistake...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Dec 1999 18:04:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and "
},
{
"msg_contents": "> > ... not sure how that can be done with a standard shell, and\n> > that's a must. Maybe something using named pipes and so -\n> > will play around a little.\n> Would probably be more portable to write a little bit of C code that\n> opens several libpq connections and reads a script file with \n> interleaved commands to send to the different connections.\n\nThere are tools I'd consider using for this such as tcl/expect which\nhelp with simulating interactive sessions and with coordinating\nmultiple actions. Forget the \"sh only\" approach if it makes things\ndifficult; some platforms ship with a much richer toolset, and these\ntools might be well suited to your problem.\n\nIf it works and there is a big demand for the \"sh tools\" then someone\ncan recode it later, and it isn't holding you up now.\n\n... my 2 cents...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 08 Dec 1999 15:57:19 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and\n\tshift/reduce)"
},
{
"msg_contents": "> I guess you could also do some simple synchronization things, like have\n> the second psql wait on a file to spring into existence:\n> /* second-script */\n> \\! while [ ! -f /tmp/lock.file ]; do ;; done\n> \\! rm /tmp/lock.file\n> Kind of like a simple semaphore. Isn't that what you are getting at?\n\nLISTEN/NOTIFY might be useful for this...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 08 Dec 1999 15:59:30 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY\n\tandshift/reduce)"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Kind of like a simple semaphore. Isn't that what you are getting at?\n\n> LISTEN/NOTIFY might be useful for this...\n\nLISTEN/NOTIFY would be one of the things we were trying to *test*.\nIt's not helpful for checking intra-transaction-block behavior anyway,\nsince notifys are only sent at commit and only heard between\ntransactions.\n\nI like the tcl/expect idea a lot. We'd have to upgrade our tcl\ninterface to support asynchronous queries (ie, send query then do other\nstuff until answer arrives); AFAIR it can't handle that now. But that'd\nbe well worth doing anyway...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Dec 1999 11:02:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY\n\tandshift/reduce)"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Thomas Lockhart <[email protected]> writes:\n> >> Kind of like a simple semaphore. Isn't that what you are getting at?\n>\n> > LISTEN/NOTIFY might be useful for this...\n>\n> LISTEN/NOTIFY would be one of the things we were trying to *test*.\n> It's not helpful for checking intra-transaction-block behavior anyway,\n> since notifys are only sent at commit and only heard between\n> transactions.\n>\n> I like the tcl/expect idea a lot. We'd have to upgrade our tcl\n> interface to support asynchronous queries (ie, send query then do other\n> stuff until answer arrives); AFAIR it can't handle that now. But that'd\n> be well worth doing anyway...\n\n That would be really nice, especially because Tcl is my\n preferred tool language. But I wouldn't use libpgtcl, instead\n it should control a number of psql sessions over command\n pipelines. These can already handle asynchronous actions\n across all supported platforms.\n\n I'll build the first version of the tool using standard Tcl,\n and just keep in mind that the command language must be\n easily implementable with lex/yacc so someone can convert it\n later into a C version.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 8 Dec 1999 17:51:42 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY\n\tandshift/reduce)"
},
{
    "msg_contents": "Tom Lane wrote:\n\n> I like the tcl/expect idea a lot. We'd have to upgrade our tcl\n> interface to support asynchronous queries (ie, send query then do other\n> stuff until answer arrives); AFAIR it can't handle that now. But that'd\n> be well worth doing anyway...\n\nO.K.\n\n    The multi session test tool, written in Tcl, is ready. Where\n    should it go and how should it be named? I think it wouldn't\n    be a bad idea to have it in a place where it can be used from\n    any location, instead of putting it into the regression dir.\n    Maybe install it into bin?\n\n    Just to show a little:\n\n    # ----\n    # Multi session test suite\n    # ----\n\n    # ----\n    # Session 1 starts a transaction and deletes all rows from t1\n    # ----\n    / connect A\n    begin;\n    select * from t1;\n    delete from t1;\n    / wait A\n\n    # ----\n    # Session 2 tries to select all from t1 for update - should block\n    # ----\n    / connect B\n    begin;\n    select * from t1 for update of t1;\n    commit;\n    / wait B\n\n    # ----\n    # Now session 1 rolls back ...\n    # ----\n    / use A\n    rollback;\n    / wait A\n\n    # ----\n    # ... what must release session 2\n    # ----\n    / wait B\n\n    / close A\n    / close B\n\n    The above input produces this output:\n\n    # ----\n    # Multi session test suite\n    # ----\n    # ----\n    # Session 1 starts a transaction and deletes all rows from t1\n    # ----\n    (A) QUERY: begin;\n    (A) QUERY: select * from t1;\n    (A) a1|b1|c1\n    (A) --+--+-----\n    (A)  2| 2|key 2\n    (A)  3| 3|key 3\n    (A)  8| 8|key 7\n    (A) (3 rows)\n    (A)\n    (A) QUERY: delete from t1;\n    # ----\n    # Session 2 tries to select all from t1 for update - should block\n    # ----\n    (B) QUERY: begin;\n    (B) QUERY: select * from t1 for update of t1;\n    *** Session of connection B seems locked\n    # ----\n    # Now session 1 rolls back ...\n    # ----\n    (A) QUERY: rollback;\n    # ----\n    # ... what must release session 2\n    # ----\n    (B) a1|b1|c1\n    (B) --+--+-----\n    (B)  2| 2|key 2\n    (B)  3| 3|key 3\n    (B)  8| 8|key 7\n    (B) (3 rows)\n    (B)\n    (B) QUERY: commit;\n\n    The default time for the \"wait\" command is 5 seconds, but can\n    be specified explicitly as \"wait A 10\". Session names can of\n    course be longer than one character.\n\n    The script stopped at session B's SELECT ... FOR UPDATE, and\n    correctly continued with the *** message 5 seconds later. As\n    you might have noticed, it doesn't matter that the COMMIT of\n    session 2 was placed directly after the SELECT. If the\n    session hangs, all queries are internally buffered.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 8 Dec 1999 20:45:20 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY\n\tandshift/reduce)"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> The multi session test tool, written in Tcl, is ready.\n\nLooks way cool.\n\n> Where should it go and how should it be named?\n\nWhy not throw it in as another src/bin/ subdirectory, or maybe put it\nin Peter's new \"src/bin/scripts/\" directory? No great ideas for\na name here.\n\n> The default time for the \"wait\" command is 5 seconds, but can\n> be specified explicitly as \"wait A 10\".\n\nIt makes me uncomfortable that there are any explicit times at all.\nA developer might set up a test script with delays that seem ample\non his hardware, yet will fail when someone tries to use the script\non a much slower and/or heavily loaded system.\n\nCan we find a way to avoid needing explicit times in the scripts?\nIf not, there should probably be a command-line switch that allows all\nthe times to be scaled by some amount. (Ugly, but could be really\nhandy.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Dec 1999 20:46:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY\n\tandshift/reduce)"
},
{
"msg_contents": "Tom Lane wrote:\n\n> [email protected] (Jan Wieck) writes:\n> > The multi session test tool, written in Tcl, is ready.\n>\n> Looks way cool.\n\n Did I ever say that I love Tcl :-) ?\n\n It took me less that 5 hours to create a full featured tool,\n that already discovered another bug (not yet fixed) in the RI\n procs, close to that Hiroshi reported (just do the violation\n the other way round and it will be accepted).\n\n I know, that there are similar powerful languages availabel\n (perl for one). It's just that I looked for a good scripting\n language some years ago and found Tcl (version 7.4 at that\n time). Today it is such a magic tool for someone, familiar\n with the C language, that I think it was one of the best\n coices I ever made. Tcl was the first language I ever\n embedded as a PL, and the difficulties reported (and yet\n missing results) on approaches to embedd other languages tell\n the entire story.\n\n> > Where should it go and how should it be named?\n>\n> Why not throw it in as another src/bin/ subdirectory, or maybe put it\n> in Peter's new \"src/bin/scripts/\" directory? No great ideas for\n> a name here.\n\n Exactly what I intended to do.\n\n I prefer src/bin, since it isn't a shell script. I think that\n \"pgmultitest\" (my internal development name) wouldn't be such\n a bad decision.\n\n> > The default time for the \"wait\" command is 5 seconds, but can\n> > be specified explicitly as \"wait A 10\".\n>\n> It makes me uncomfortable that there are any explicit times at all.\n> A developer might set up a test script with delays that seem ample\n> on his hardware, yet will fail when someone tries to use the script\n> on a much slower and/or heavily loaded system.\n>\n> Can we find a way to avoid needing explicit times in the scripts?\n> If not, there should probably be a command-line switch that allows all\n> the times to be scaled by some amount. 
(Ugly, but could be really\n> handy.)\n\n Once again, similar thoughts and feelings.\n\n The third parameter should indeed be a value used internally\n to ESTIMATE the real time require, to tell that the session\n is blocked in a lock operation. And the estimation should be\n based on some prior analyzed system performance.\n\n Load peak's might still confuse the estimation. I don't have\n any clue right now, how to estimate, but anyway, load peaks\n lead to estimator confusions when it comes to estimate the\n execution time of some operation.\n\n I don't think, that a few (1-2) seconds of delay at an\n expected lock would really hurt. Well, a test stressing the\n locking mechanism (like the RI test I want to have), might\n take a while, even if executed on high end hardware. But I\n cannot imagine any other way, than using multiple sessions\n and response timeouts, to detect from the outside that a\n query ended in a blocking lock request.\n\n Except we extend the entire FE/BE protocol with information,\n telling \"I'm blocked\" / \"I'm resuming\" (plus adding\n appropriate messages to psql), there is absolutely no way to\n avoid the above timeouts. And we don't want to add regression\n test requirements to the FE/BE protocol and psql - no? The\n already discussed flushing feature is a requirement, but it\n might be useful in other situations too and thus worth\n anyway.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 9 Dec 1999 04:57:00 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY\n\tandshift/reduce)"
},
{
    "msg_contents": "At 04:57 AM 12/9/99 +0100, Jan Wieck wrote:\n\n> I know that there are similarly powerful languages available\n> (perl for one). It's just that I looked for a good scripting\n> language some years ago and found Tcl (version 7.4 at that\n> time). Today it is such a magic tool for someone familiar\n> with the C language, that I think it was one of the best\n> choices I ever made. \n\nAnd this is the scripting language embedded in the NaviSoft, later\nAOLserver web server. Your comments pretty much sum things up.\n\nThere's an official port of the ArsDigita Community System to postgres\n(from Oracle) underway.\n\nThis is in some sense the wrong forum to mention such things, but in\nanother sense it is the right forum, because mainstream database server\ncompanies are pouring everything they've got into \"web-ifying\" their\ntools.\n\nBut, really, with a lightweight threaded server like AOLserver and an\nexcellent Tcl API, middleware and such is really not necessary. We think\nthe port's going to be excellent, though I'm steadily building up a list\nof Postgres bugs to report (I'll wait until I have a dozen or so). One\nsymptom of PostgreSQL's stability in 6.5.* is that the bugs are of the\nreproducible, mostly language-oriented kind and because of this they're\neasy to isolate and work around (not always the truth in a multi-threaded\nenvironment such as typifies AOLserver).\n\nThe enthusiasm for this in-progress port's pretty surprising, given that\nwe only announced it four-five days ago and even then more or less under\nduress.\n\nPostgreSQL, though, has the potential to be an ideal web db server for\nsmall-to-mid-range sites. Given that Oracle wants like $22,500 for a\nfully paid up license for a single-processor P500, you can understand\nthe interest in integrating these tools, which have been used to build\na number of high-profile sites.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n  Nature photos, on-line guides, Pacific Northwest\n  Rare Bird Alert Service and other goodies at\n  http://donb.photo.net.\n",
"msg_date": "Wed, 08 Dec 1999 20:30:30 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY\n\tandshift/reduce)"
},
{
    "msg_contents": "[cc: list trimmed, and subject changed....]\n\nOn Wed, 08 Dec 1999, Don Baccus wrote:\n> At 04:57 AM 12/9/99 +0100, Jan Wieck wrote:\n> \n> > I know that there are similarly powerful languages available\n> > (perl for one). It's just that I looked for a good scripting\n> > language some years ago and found Tcl (version 7.4 at that\n> > time). Today it is such a magic tool for someone familiar\n> > with the C language, that I think it was one of the best\n> > choices I ever made. \n> \n> And this is the scripting language embedded in the NaviSoft, later\n> AOLserver web server. Your comments pretty much sum things up.\n\nI replied to Jan privately about that, but, since you brought it up on-list....\n\n> This is in some sense the wrong forum to mention such things, but in\n> another sense it is the right forum, because mainstream database server\n> companies are pouring everything they've got into \"web-ifying\" their\n> tools.\n\nVince almost finished the switch, but got tangled up in the high latency of the\nAOLserver developers for 3.0beta3. His comments about the ease of setup were\non the money, but he wanted to use CGI, and CGI is far from AOLserver's strong\nsuit. The tcl API is the preferred way to do AOLserver hacking.\n\n> symptom of PostgreSQL's stability in 6.5.* is that the bugs are of the\n> reproducible, mostly language-oriented kind and because of this they're\n> easy to isolate and work around (not always the truth in a multi-threaded\n> environment such as typifies AOLserver).\n\nAs opposed to PostgreSQL 6.2.1, the first version that was officially supported\nby the AOLserver crew. PostgreSQL has come a long way!\n\n> PostgreSQL, though, has the potential to be an ideal web db server for\n> small-to-mid-range sites. \n\nThe use of AOLserver+Tcl+PostgreSQL has the potential to far surpass the speed\nof Apache+mod_perl+mySQL -- or Apache+mod_php+mySQL.\n\nTo quote Philip Greenspun, AOLserver is now handling 28 thousand hits at AOL. \nOh, that's 28K hits _per_second_ aggregate over AOL's server farm.\n\nIMO, AOLserver is far and away the best web front end for PostgreSQL. And\nthat's my final answer.\n\n--\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Wed, 8 Dec 1999 23:47:08 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "PostgreSQL front ends (was Re: [HACKERS] Parallel regress tests (was\n\tRe: FOREIGN KEY andshift/reduce))"
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> the port's going to be excellent, though I'm steadily building up a list\n> of Postgres bugs to report (I'll wait until I have a dozen or so).\n\nActually, I think it's easier to keep track of things if you file bug\nreports as separate messages, rather than one big message that lists a\nbunch of unrelated problems. So feel free to send 'em in as you find\n'em.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Dec 1999 01:30:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY\n\tandshift/reduce)"
},
{
"msg_contents": "At 11:47 PM 12/8/99 -0500, Lamar Owen wrote:\n\n>Vince almost finished the switch, but got tangled up in the high latency\nof the\n>AOLserver developers for 3.0beta3. His comments about the ease of setup were\n>on the money, but he wanted to use CGI, and CGI is far from AOLserver's\nstrong\n>suite. The tcl API is the preferred way to do AOLserver hacking.\n\nThe PHP folks have claimed that they'll have an embedded interpreter\nfor AOLserver for their next release. It's not clear that they'll\nduplicate the rich API the server provides Tcl though. If they do,\nthis will be an alternative for those who don't like Tcl or AOLserver's\nADPs (HTML with embedded Tcl snippets)\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 09 Dec 1999 07:44:30 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL front ends (was Re: [HACKERS] Parallel regress\n\ttests (was Re: FOREIGN KEY andshift/reduce))"
},
{
"msg_contents": "As I've mentioned previously, I'm porting over a web toolkit\nfrom Oracle to Postgres.\n\nOne portion of the toolkit creates and alters tables to add\nuser-defined fields into what's being used essentially as a\nmeta-table (used to later define real tables). One page\nforgets to check for an empty form before issuing its\n\"alter table\" command, and it crashed the backend. I'm\ncorrecting its forgetfulness, but am also reporting the\nproblem:\n\nalter table foo add;\n\ngives a parser error, which is fine.\n\nalter table foo add();\n\ncrashes the backend.\n\nI'd say it's really low priority, but should be fixed.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 09 Dec 1999 07:54:31 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "alter table crashes back end "
},
{
"msg_contents": "On 1999-12-08, Jan Wieck mentioned:\n\n> The multi session test tool, written in Tcl, is ready. Where\n> should it go and how should it be named? I think it wouldn't\n> be a bad idea to have it in a place where it can be used from\n> any location, instead of putting it into the regression dir.\n> Maybe install it into bin?\n\nPlease consider not doing that. Bin is for user programs. I don't see why\nit shouldn't at least go under the test tree, if not under regress.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Fri, 10 Dec 1999 02:28:29 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY\n\tandshift/reduce)"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> On 1999-12-08, Jan Wieck mentioned:\n>\n> > The multi session test tool, written in Tcl, is ready. Where\n> > should it go and how should it be named? I think it wouldn't\n> > be a bad idea to have it in a place where it can be used from\n> > any location, instead of putting it into the regression dir.\n> > Maybe install it into bin?\n>\n> Please consider not doing that. Bin is for user programs. I don't see why\n> it shouldn't at least go under the test tree, if not under regress.\n\nYou're right,\n\n I'll place it into the regression tree. Those of us hackers,\n who need it handy in a more central place can copy it to\n wherever they like.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 10 Dec 1999 02:42:38 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY\n\tandshift/reduce)"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> On 1999-12-08, Jan Wieck mentioned:\n>\n> > The multi session test tool, written in Tcl, is ready. Where\n> > should it go and how should it be named? I think it wouldn't\n> > be a bad idea to have it in a place where it can be used from\n> > any location, instead of putting it into the regression dir.\n> > Maybe install it into bin?\n>\n> Please consider not doing that. Bin is for user programs. I don't see why\n> it shouldn't at least go under the test tree, if not under regress.\n\n BTW: I added one fflush(stdout) to psql/commands.c where the\n query buffer is printed for the \\p command. After that, my\n test tool is totally happy with it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 10 Dec 1999 02:44:59 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY\n\tandshift/reduce)"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> If I allow the <constraint attributes> in column constraints,\n> I get 2 shift/reduce conflicts. Seems the syntax interferes\n> with NOT NULL. Actually I commented that part out, so the\n> complete syntax is available only for table constraints, not\n> on the column level.\n\n> Could some yacc-guru please take a look at it?\n\nWell, I'm not a guru, but I looked anyway. It's a mess. The problem\nis that when NOT is the next token, the grammar doesn't know whether\nthe NOT is starting NOT NULL, which would be a new ColConstraintElem,\nor starting NOT DEFERRABLE, which would be part of the current\nColConstraintElem. So it can't decide whether it's time to reduce\nthe current stack contents to a finished ColConstraintElem or not.\nThe only way to do that is to look ahead further than the NOT.\n\nIn short, we no longer have an LR(1) grammar. Yipes.\n\nAfter a few minutes' thought, it seems that the least unclean way\nto attack this problem is to hack up the lexer so that\n\"NOT<whitespace>NULL\" is lexed as a single keyword. Assuming that\nthat's doable (I haven't tried, but I think it's possible), the\nrequired changes in the grammar would be small. The shift/reduce\nproblem would go away, since we'd essentially have pushed the\nrequired lookahead into the lexer.\n\nIt's possible that making this change would even allow us to use\nfull a_expr rather than b_expr in DEFAULT expressions. I'm not\nsure about it, but that'd be a nice side benefit if so.\n\nDoes anyone see a better answer? This'd definitely be a Big Kluge\nfrom the lexer's point of view, but I don't see an answer at the\ngrammar level.\n\nBTW --- if we do this, it'd be a simple matter to allow \"NOTNULL\"\nwith no embedded space, which is something that I think a number\nof other DBMSes accept. (Which may tell us something about how\nthey solved this problem...) 
It's not a keyword according to \nSQL92, so I'm inclined *not* to accept it, but perhaps someone\nelse wants to argue the case.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Dec 1999 21:22:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] FOREIGN KEY and shift/reduce "
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> alter table foo add();\n> crashes the backend.\n\n> I'd say it's really low priority, but should be fixed.\n\nA crash is never a good thing. If you feel like patching your\ncopy, the problem is in backend/parser/gram.y:\n\n***************\n*** 759,769 ****\n \t\t\t\t}\n \t\t\t| ADD '(' OptTableElementList ')'\n \t\t\t\t{\n- \t\t\t\t\tNode *lp = lfirst($3);\n- \n \t\t\t\t\tif (length($3) != 1)\n \t\t\t\t\t\telog(ERROR,\"ALTER TABLE/ADD() allows one column only\");\n! \t\t\t\t\t$$ = lp;\n \t\t\t\t}\n \t\t\t| DROP opt_column ColId\n \t\t\t\t{\telog(ERROR,\"ALTER TABLE/DROP COLUMN not yet implemented\"); }\n--- 759,767 ----\n \t\t\t\t}\n \t\t\t| ADD '(' OptTableElementList ')'\n \t\t\t\t{\n \t\t\t\t\tif (length($3) != 1)\n \t\t\t\t\t\telog(ERROR,\"ALTER TABLE/ADD() allows one column only\");\n! \t\t\t\t\t$$ = (Node *) lfirst($3);\n \t\t\t\t}\n \t\t\t| DROP opt_column ColId\n \t\t\t\t{\telog(ERROR,\"ALTER TABLE/DROP COLUMN not yet implemented\"); }\n***************\n\nLine numbers certainly not right for 6.5 ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Dec 1999 21:42:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] alter table crashes back end "
},
{
"msg_contents": "> [email protected] (Jan Wieck) writes:\n> > If I allow the <constraint attributes> in column constraints,\n> > I get 2 shift/reduce conflicts. Seems the syntax interferes\n> > with NOT NULL. Actually I commented that part out, so the\n> > complete syntax is available only for table constraints, not\n> > on the column level.\n> \n> > Could some yacc-guru please take a look at it?\n> \n> Well, I'm not a guru, but I looked anyway. It's a mess. The problem\n> is that when NOT is the next token, the grammar doesn't know whether\n> the NOT is starting NOT NULL, which would be a new ColConstraintElem,\n> or starting NOT DEFERRABLE, which would be part of the current\n> ColConstraintElem. So it can't decide whether it's time to reduce\n> the current stack contents to a finished ColConstraintElem or not.\n> The only way to do that is to look ahead further than the NOT.\nTom and I talked about moving NOT DEFERED up into the main level with\nNOT NULL.\n\nIn gram.y, line 949 and line, could there be a test that if the last\nList element of $1 is a constraint, and if $2 is NOT DEFERED, we can set\nthe bit in $1 and just skip adding the defered node? If not, we can\nthrow an error.\n\nAlso, ColQualList seems very strange. Why the two actions? I have\nremoved it and made ColQualifier work properly.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 00:16:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] FOREIGN KEY and shift/reduce"
},
{
"msg_contents": "> > Well, I'm not a guru, but I looked anyway. It's a mess. The problem\n> > is that when NOT is the next token, the grammar doesn't know whether\n> > the NOT is starting NOT NULL, which would be a new ColConstraintElem,\n> > or starting NOT DEFERRABLE, which would be part of the current\n> > ColConstraintElem. So it can't decide whether it's time to reduce\n> > the current stack contents to a finished ColConstraintElem or not.\n> > The only way to do that is to look ahead further than the NOT.\n> Tom and I talked about moving NOT DEFERED up into the main level with\n> NOT NULL.\n> \n> In gram.y, line 949 and line, could there be a test that if the last\n> List element of $1 is a constraint, and if $2 is NOT DEFERED, we can set\n> the bit in $1 and just skip adding the defered node? If not, we can\n> throw an error.\n\nAlso, I am using flex 2.5.4, and am seeing no shift-reduce errors from\nthe current gram.y.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 00:31:04 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] FOREIGN KEY and shift/reduce"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Also, I am using flex 2.5.4, and an seeing no shift-reduce errors from\n> the current gram.y.\n\nYou wouldn't, because the critical item is still commented out.\n\n | REFERENCES ColId opt_column_list key_match key_actions \n {\n /* XXX\n * Need ConstraintAttributeSpec as $6 -- Jan\n */\n\nIf you add ConstraintAttributeSpec to the end of this production,\nit fails...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Dec 1999 01:42:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] FOREIGN KEY and shift/reduce "
},
{
"msg_contents": "> > If I allow the <constraint attributes> in column constraints,\n> > I get 2 shift/reduce conflicts. Seems the syntax interferes\n> > with NOT NULL. Actually I commented that part out, so the\n> > complete syntax is available only for table constraints, not\n> > on the column level.\n> Well, I'm not a guru, but I looked anyway. It's a mess. The problem\n> is that when NOT is the next token, the grammar doesn't know whether\n> the NOT is starting NOT NULL, which would be a new ColConstraintElem,\n> or starting NOT DEFERRABLE, which would be part of the current\n> ColConstraintElem. So it can't decide whether it's time to reduce\n> the current stack contents to a finished ColConstraintElem or not.\n> The only way to do that is to look ahead further than the NOT.\n> \n> In short, we no longer have an LR(1) grammar. Yipes.\n> \n> After a few minutes' thought, it seems that the least unclean way\n> to attack this problem is to hack up the lexer so that\n> \"NOT<whitespace>NULL\" is lexed as a single keyword. Assuming that\n> that's doable (I haven't tried, but I think it's possible), the\n> required changes in the grammar would be small. The shift/reduce\n> problem would go away, since we'd essentially have pushed the\n> required lookahead into the lexer.\n> \n> It's possible that making this change would even allow us to use\n> full a_expr rather than b_expr in DEFAULT expressions. I'm not\n> sure about it, but that'd be a nice side benefit if so.\n> \n> Does anyone see a better answer? This'd definitely be a Big Kluge\n> from the lexer's point of view, but I don't see an answer at the\n> grammar level.\n\nI'd like a chance to fix it at the grammar level. It involves mixing\nNOT DEFERRABLE and NOT NULL into the same clauses, but if I can work\nit out I'd rather isolate the Big Kluges in gram.y, which seems to\ncollect that kind of stuff. 
scan.l is still fairly clean...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 10 Dec 1999 15:56:28 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] FOREIGN KEY and shift/reduce"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Does anyone see a better answer? This'd definitely be a Big Kluge\n>> from the lexer's point of view, but I don't see an answer at the\n>> grammar level.\n\n> I'd like a chance to fix it at the grammar level. It involves mixing\n> NOT DEFERRABLE and NOT NULL into the same clauses, but if I can work\n> it out I'd rather isolate the Big Kluges in gram.y, which seems to\n> collect that kind of stuff. scan.l is still fairly clean...\n\nBruce and I were talking about that last night. I think it could be\nfixed by having the grammar treat\n\tNOT DEFERRABLE\n\tDEFERRABLE\n\tINITIALLY IMMEDIATE\n\tINITIALLY DEFERRED\nas independent ColConstraintElem clauses, and then have a post-pass in\nanalyze.c that folds the flags into the preceding REFERENCES clause\n(and complains if they don't appear right after a REFERENCES clause).\nPretty grotty, especially since you probably wouldn't want to do the\nsame thing for the other uses of these clauses in TableConstraint\nand CreateTrigStmt ... but possibly cleaner than a lexer hack.\n\nAnother possible approach is to leave scan.l untouched and put a filter\nsubroutine between gram.y and scan.l. The filter would normally just\ncall the scanner and return the token as-is; but when it gets a NOT\ntoken, it would call the scanner again to see if it gets a NULL token.\nIf so, it returns a single \"NOTNULL\" token to the grammar; otherwise\nit stashes away the lookahead token to return on the next call.\n\nThis last approach probably involves the least amount of dirt, but\nit does require being able to get in between yacc and lex. I'm not\nsure whether we'd have portability problems doing that; never tried it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Dec 1999 11:11:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] FOREIGN KEY and shift/reduce "
},
{
"msg_contents": "> Bruce and I were talking about that last night. I think it could be\n> fixed by having the grammar treat\n> NOT DEFERRABLE\n> DEFERRABLE\n> INITIALLY IMMEDIATE\n> INITIALLY DEFERRED\n> as independent ColConstraintElem clauses, and then have a post-pass in\n> analyze.c that folds the flags into the preceding REFERENCES clause\n> (and complains if they don't appear right after a REFERENCES clause).\n> Pretty grotty, especially since you probably wouldn't want to do the\n> same thing for the other uses of these clauses in TableConstraint\n> and CreateTrigStmt ... but possibly cleaner than a lexer hack.\n\nanalyze.c already does a grotty scan of the constraint clauses to push\nthem out into the place Vadim's implementation expects them. We could\nidentify the FK clauses and push them somewhere else, no problem\n(well, at least in principle ;).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 10 Dec 1999 16:36:54 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] FOREIGN KEY and shift/reduce"
},
{
"msg_contents": "Thomas Lockhart wrote:\n\n> > Bruce and I were talking about that last night. I think it could be\n> > fixed by having the grammar treat\n> > NOT DEFERRABLE\n> > DEFERRABLE\n> > INITIALLY IMMEDIATE\n> > INITIALLY DEFERRED\n> > as independent ColConstraintElem clauses, and then have a post-pass in\n> > analyze.c that folds the flags into the preceding REFERENCES clause\n> > (and complains if they don't appear right after a REFERENCES clause).\n> > Pretty grotty, especially since you probably wouldn't want to do the\n> > same thing for the other uses of these clauses in TableConstraint\n> > and CreateTrigStmt ... but possibly cleaner than a lexer hack.\n>\n> analyze.c already does a grotty scan of the constraint clauses to push\n> them out into the place Vadim's implementation expects them. We could\n> identify the FK clauses and push them somewhere else, no problem\n> (well, at least in principle ;).\n\n I already added my own list of constraint clauses, where\n foreign key ones are pushed out of the place until the index\n stuff is done. Then the list is processed to add the trigger\n statements to extras_after. It's enough of crippled code\n there IMHO.\n\n I like the other approach by wrapping around yylex() better.\n We definitely insist on bison, and ship a prepared gram.c.\n And a little test here showed, that having\n\n static int kludge_yylex_wrapper(void);\n #define yylex() kludge_yylex_wrapper()\n\n in the top declarations section and defining\n\n #undef yylex\n static int\n kludge_yylex_wrapper(void)\n {\n return yylex();\n }\n\n at the very end of gram.y does a fine job, changing totally\n nothing. So that's a perfect place to do exactly what Tom\n suggested. I don't see any portability issues on that.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 10 Dec 1999 17:53:12 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] FOREIGN KEY and shift/reduce"
}
] |
[
{
"msg_contents": "Well,\n\n ON DELETE/UPDATE SET NULL is also in place.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n",
"msg_date": "Mon, 6 Dec 1999 20:43:58 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "FOREIGN KEY again"
}
] |
[
{
"msg_contents": "Besides, there is a good reason why Oracle introduced this into their\nproduct, and I don't think it was for server-based dbs. I think that Oracle\nintroduced it for embedded systems. This is the only reason that I can\nthink of for moving away from the file system. As we don't really cater for\nembedded systems, I don't see any reason to do this. There is a lot of\nstuff that the file system does for us that a raw device doesn't, which we\nwould then have to write :-(\n\nMikeA\n\n-----Original Message-----\nFrom: The Hermit Hacker\nTo: Ing. Pavel PaJaSoft Janousek\nCc: [email protected]; [email protected]\nSent: 99/12/06 04:44\nSubject: Re: [HACKERS] RAW I/O device\n\nOn Mon, 6 Dec 1999, Ing. Pavel PaJaSoft Janousek wrote:\n\n> > other systems besides linux. Making the DB work w/ one FS (and\nwrite the\n> > storage code for it) seems pointless if we are still stuck on normal\nFSs\n> > on other machines.\n> \n> \tYes, of course, but storing databases directly to RAW device -\nnot\n> through the filesystem - is one feature of modern DB engines...\n\nActually, Oracle has been moving *away* from this...more recent versions\nof Oracle recommend using the Operating System file systems, since, in\nmost cases, the Operating System does a better job, and its too\ndifficult\nto have Oracle itself optimize internal for all the different variants\nthat it supports....\n\nAt work, we use Oracle extensively, and I sat down one day last year\nwith\nour Oracle DBA to discuss exactly this...and, if I recall correctly, it\nwas prompted by a similar thread here...\n\nIf Linux is providing an Interface into the RAW file system, then this\nmay\nchange things for Oracle, since it wouldn't have to \"learn\" all the\ndifferent OSs, as long as the API is the same across them all...and, my\nexperience with Linux is that the API for Linux will most likely be\ndifferent then everyone else *roll eyes*\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick:\nScrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org \n\n\n************\n",
"msg_date": "Mon, 6 Dec 1999 22:05:54 +0200 ",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] RAW I/O device"
}
] |
[
{
"msg_contents": "This belongs to the chapter \"initdb weirdnesses\", if you will. I have long\nbeen confused about this, but now I think I have the answer. Could someone\nfrom the multibyte camp please confirm this.\n\nWhen I configure --with-mb=FOO, the only place FOO is actually used is in\ninitdb as the default encoding of the database system you are creating\n(can be overridden with --pgencoding). The rest of the source code only\ndoes occasional #ifdef MULTIBYTE checks. This sort of arrangement is\nquestionable for a number of reasons:\n\n1) It's not very clear to the casual observer (=end user). I was led to\nbelieve that the database system you are compiling will only support the\nFOO encoding and I used several --with-mb's if I wanted more.\n\n2) It is very well possible that one initdb instance can be used to\ninstall databases in several locations with varying encodings.\n\n3) I might sound like a broken record, but autoconf is not for controlling\nruntime behavior.\n\nWhile the notion of having a default encoding is perhaps not so bad (but\nhow often do you do initdb?) it could be introduced via other mechanisms,\nsuch as environment variables. (I am contradicting earlier emails now, but\nI'm not sure of a good way myself, yet.) The current approach causes all\nkinds of structural hazards in the overall view of things. I propose that\n--with-mb be replaced by --enable-mb (how about --enable-multibyte?). This\nis nothing urgent, but I would like to know what you think.\n\n\t-Peter\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Mon, 6 Dec 1999 22:40:03 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Multibyte in autoconf"
},
{
"msg_contents": "> This belongs to the chapter \"initdb weirdnesses\", if you will. I have long\n> been confused about this, but now I think I have the answer. Could someone\n> from the multibyte camp please confirm this.\n> \n> When I configure --with-mb=FOO, the only place FOO is actually used is in\n> initdb as the default encoding of the database system you are creating\n> (can be overriden with --pgencoding). The rest of the source code only\n> does occasional #ifdef MULTIBYTE checks. This sort of arrangement is\n> questionable for a number of reasons:\n> \n> 1) It's not very clear to the casual observer (=end user). I was lead to\n> believe that the database system you are compiling will only support the\n> FOO encoding and I used several --with-mb's if I wanted more.\n\nHave you ever read doc/README.mb?\n\n> 2) It is very well possible that one initdb instance can be used to\n> install databases in several locations with varying encodings.\n\nYou can initialize database with specified default encoding by initdb -\ne or -pgencoding. What's the problem with this?\n\n> 3) I might sound like a broken record, but autoconf is not for controlling\n> runtime behavior.\n> \n> While the notion of having a default encoding is perhaps not so bad (but\n> how often do you do initdb?) it could be introduced via other mechanisms,\n> such as environment variables. (I am contradicting earlier emails now, but\n> I'm not sure of a good way myself, yet.) The current approach causes all\n> kinds of structural hazards in the overall view of things. I propose that\n> --with-mb be replaced by --enable-mb (how about --enable-multibyte?). This\n> is nothing urgent, but I would like to know what you think.\n\nI don't understand why you do not complain about --with-pgport or --
Sounds they have same problems as mb:-)\n\nAnyway, I don't like the idea to have an yet another environment\nvariable to give a default encoding to initdb when -e or -pgencoding\nis not specified. We already have enough. Changing --with-mb to --\nenable-multibyte seems good but I don't know how to give the default\nencoding to initdb in this case. Or just changing --with-mb=FOO to --\nenable-multibyte=FOO is what you want?\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 07 Dec 1999 10:29:41 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Multibyte in autoconf"
},
{
"msg_contents": "On Tue, 7 Dec 1999, Tatsuo Ishii wrote:\n\n> > 1) It's not very clear to the casual observer (=end user). I was lead to\n> > believe that the database system you are compiling will only support the\n> > FOO encoding and I used several --with-mb's if I wanted more.\n> \n> Have you ever read doc/README.mb?\n\nYes, and although it is nice, it didn't make this particular part easier\nto figure out. I mean, if I configure the compilation of a program with\n--with-something=foo, then I assume it actually uses \"foo\" somehow. And\nthen I see my compilation actually full of -DMULTIBYTE=XXX lines,\nconfusing me further.\n\nBtw., why is this not in the main documentation?\n\n> > 2) It is very well possible that one initdb instance can be used to\n> > install databases in several locations with varying encodings.\n> \n> You can initialize database with specified default encoding by initdb -\n> e or -pgencoding. What's the problem with this?\n\nFirst and foremost, non-obvious, multi-level meta-defaults. You actually\nhave a default for what initdb chooses as the default encoding. Also,\nthink about package maintainers. Which one are they going to pick?\n\n> I don't understand why you do not complain about --with-pgport or --\n> with-maxbackends. Sounds they have same problems as mb:-)\n\nWell, I can't complain about everything at once :) Surely, those are more\nsubtle things, though. The pg_ctl you are working on will pretty much\neliminate the need for those.\n\n> Anyway, I don't like the idea to have an yet another environment\n> variable to give a default encoding to initdb when -e or -pgencoding\n> is not specified. We already have enough. Changing --with-mb to --\n\nI agree. Considering the fact that in a fairly normal environment you only\ninitdb once and you only configure once, would it be too far-fetched to\npropose moving this sort of decision completely into initdb, that is, make\nthe --pgencoding mandatory if you do want some encoding? 
Because I'm also\nnot completely sure how you would initdb a database without any encoding\nwhatsoever if you have your initdb set to always use some default.\n\nTo be clear: This is not the end of the world, if you think this will be a\nmajor pain for you, then I'll drop it. I just want to know what the\nmotivation behind this was.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 7 Dec 1999 13:09:07 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Multibyte in autoconf"
},
{
"msg_contents": "> > Have you ever read doc/README.mb?\n> \n> Yes, and although it is nice, it didn't make this particular part easier\n> to figure out. I mean, if I configure the compilation of a program with\n> --with-something=foo, then I assume it actually uses \"foo\" somehow. And\n> then I see my compilation actually full of -DMULTIBYTE=XXX lines,\n> confusing me further.\n\nI must admit that I'm not good at English and writing:-)\n\n> Btw., why is this not in the main documentation?\n\nOk, I will do it for 7.0. Please give me some idea to enhance\nREADME.mb (you already gave one) if you have any.\n\n> > Anyway, I don't like the idea to have an yet another environment\n> > variable to give a default encoding to initdb when -e or -pgencoding\n> > is not specified. We alread\ty have enough. Changing --with-mb to --\n> \n> I agree. Considering the fact that in a fairly normal environment you only\n> initdb once and you only configure once, would it be too far-fetched to\n> propose moving this sort of decision completely into initdb, that is, make\n> the --pgencoding mandatory if you do want some encoding? Because I'm also\n> not completely sure how you would initdb a database without any encoding\n> whatsoever if you have your initdb set to always use some default.\n\nI think I see your point. Giving a default-default encoding to initdb\nis not a good idea, right? If so, it comes sounding reasonable to me\ntoo.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 07 Dec 1999 22:56:14 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Multibyte in autoconf"
},
{
"msg_contents": "> > Btw., why is this not in the main documentation?\n> Ok, I will do it for 7.0. Please give me some idea to enhance\n> README.mb (you already gave one) if you have any.\n\nIf this is on the same scale as the basic locale support, it might fit\ninto doc/src/sgml/config.sgml (to appear in the chapter on\nConfiguration Options in the Admin's Guide). Or if you want put it\ninto a separate file (e.g. multibyte.sgml) as either plain text or\nwith some markup and I'll help to integrate it.\n\nAlso, I'll be happy to help edit and adjust language, so don't worry\nabout the translation details ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 07 Dec 1999 14:26:16 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Multibyte in autoconf"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> I agree. Considering the fact that in a fairly normal environment you only\n>> initdb once and you only configure once, would it be too far-fetched to\n>> propose moving this sort of decision completely into initdb, that is, make\n>> the --pgencoding mandatory if you do want some encoding? Because I'm also\n>> not completely sure how you would initdb a database without any encoding\n>> whatsoever if you have your initdb set to always use some default.\n\n> I think I see your point. Giving a default-default encoding to initdb\n> is not a good idea, right? If so, it comes sounding reasonable to me\n> too.\n\nOK, so the proposal is\n\nconfigure: --enable-mb\n\tEnables compilation of MULTIBYTE code, does not select a default\n\ninitdb: --pgencoding=FOO\n\tEstablishes coding of database; it's an error to specify non-\n\tdefault encoding if MULTIBYTE wasn't compiled.\n\tIf no --pgencoding, you get default (non-multibyte) coding even\n\tif you compiled with --enable-mb.\n\nSeems reasonable and flexible to me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Dec 1999 10:04:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Multibyte in autoconf "
},
{
"msg_contents": "> OK, so the proposal is\n> \n> configure: --enable-mb\n> \tEnables compilation of MULTIBYTE code, does not select a default\n\nAgreed.\n\n> initdb: --pgencoding=FOO\n> \tEstablishes coding of database; it's an error to specify non-\n> \tdefault encoding if MULTIBYTE wasn't compiled.\n\nAgreed.\n\n> \tIf no --pgencoding, you get default (non-multibyte) coding even\n> \tif you compiled with --enable-mb.\n\nNot agreed. I think it would be better to give an error if no default\nencoding is specified when configured with --enable-mb. Reasons:\n\n1) Users tend to use only one encoding rather than switching among multiple\nencoding databases. Thus the major encoding for the user should be properly\nset as the default.\n\n2) if non-multibyte coding such as SQL_ASCII is accidentally set as the\ndefault, and if a multi-byte user creates a database with no encoding\nargument, the result would be a disaster.\n--\nTatsuo Ishii\n\n",
"msg_date": "Wed, 08 Dec 1999 16:23:04 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Multibyte in autoconf "
},
{
"msg_contents": "> encoding databases. Thus the major encoding for the user should be properly\n> set as the default.\n> \n> 2) if non-multibyte coding such as SQL_ASCII is accidentally set as the\n> default, and if a multi-byte user creates a database with no encoding\n> argument, the result would be a disaster.\n\nTatsuo, glad you are handling the multi-byte issues. Most of us are\nclueless about it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 8 Dec 1999 02:36:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Multibyte in autoconf"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> If no --pgencoding, you get default (non-multibyte) coding even\n>> if you compiled with --enable-mb.\n\n> Not agreed. I think it would be better to give an error if no default\n> encoding is not sepecified if configured with --enable-mb.\n\nOK, I could live with that too. I think Peter's main point is that\nthere's no good reason to select a particular encoding at configure\ntime, even just as a \"default\". It'll be less confusing if initdb time\nis the *only* time where you specify the particular MULTIBYTE encoding\nyou want.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Dec 1999 03:33:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Multibyte in autoconf "
},
{
"msg_contents": "On Wed, 8 Dec 1999, Tatsuo Ishii wrote:\n\n> > \tIf no --pgencoding, you get default (non-multibyte) coding even\n> > \tif you compiled with --enable-mb.\n> \n> Not agreed. I think it would be better to give an error if no default\n> encoding is specified when configured with --enable-mb. Reasons:\n> \n> 1) Users tend to use only one encoding rather than switching among multiple\n> encoding databases. Thus the major encoding for the user should be properly\n> set as the default.\n\nUsers also initdb only once, and that is the time to *choose* what they\nwant. Then and only then. Once they're done with that they'll never have\nto worry about it again.\n\n> 2) if non-multibyte coding such as SQL_ASCII is accidentally set as the\n> default, and if a multi-byte user creates a database with no encoding\n> argument, the result would be a disaster.\n\nHuh, so if I compile my database with multibyte and then I choose\nto not have a default encoding in template1 but maybe I want to have the\nmultibyte option available for some other database later on, that will be\na disaster? Not so good.\n\nWhat I'm also thinking of is the package maintainer. They should be\nable to provide a \"neutral\" yet multibyte (and locale, and cyrillic)\nenabled package, and one should be able to use that even if one doesn't\nwant to use the multibyte features right now or at all.\n\nAlso, it should not be initdb's job to verify that the encodings are\ncorrect, supported, etc. The backend should find that out itself. That\neliminates duplication of the same logic, which the backend can do better\nanyway.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 8 Dec 1999 14:31:19 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Multibyte in autoconf "
},
{
"msg_contents": "> > > \tIf no --pgencoding, you get default (non-multibyte) coding even\n> > > \tif you compiled with --enable-mb.\n> > \n> > Not agreed. I think it would be better to give an error if no default\n> > encoding is specified when configured with --enable-mb. Reasons:\n> > \n> > 1) Users tend to use only one encoding rather than switching among multiple\n> > encoding databases. Thus the major encoding for the user should be properly\n> > set as the default.\n> \n> Users also initdb only once, and that is the time to *choose* what they\n> want. Then and only then. Once they're done with that they'll never have\n> to worry about it again.\n> \n> > 2) if non-multibyte coding such as SQL_ASCII is accidentally set as the\n> > default, and if a multi-byte user creates a database with no encoding\n> > argument, the result would be a disaster.\n> \n> Huh, so if I compile my database with multibyte and then I choose\n> to not have a default encoding in template1 but maybe I want to have the\n> multibyte option available for some other database later on, that will be\n> a disaster? Not so good.\n\nFirst of all, it's not possible not to have a default encoding in\ntemplate1. Probably you mean you choose SQL_ASCII (encoding no. 0)\nas the default encoding. Anyway, I'm going to give an example scenario\nof the disaster.\n\n1) initdb with no encoding argument (suppose that SQL_ASCII is set as\nthe default encoding in template1)\n\n2) A user creates a database with no encoding argument. He thought\nthat the default encoding was EUC_JP.\n\n3) He makes a table, then fills it with some Japanese data.\n\n4) Later he pulls data from the table and finds that it is no longer\nJapanese!\n\n> What I'm also thinking of is the package maintainer. 
They should be\n> able to provide a \"neutral\" yet multibyte (and locale, and cyrillic)\n> enabled package, and one should be able to use that even if one doesn't\n> want to use the multibyte features right now or at all.\n\nSo you think a postgres package with multibyte/locale/cyrillic options\nenabled is a good thing for everyone? At least I don't like the locale\noption. It is not only useless for multibyte languages such as\nJapanese, but it makes text comparison slow. I wouldn't say locale\nis useless for everyone, however. I admit it is useful for single\nbyte encodings.\n\nI think it would be very hard to make a unified ideal package for\neveryone.\n\n> Also, it should not be initdb's job to verify that the encodings are\n> correct, supported, etc. The backend should find that out itself. That\n> eliminates duplication of the same logic, which the backend can do better\n> anyway.\n\nActually that duplication can be eliminated by using the same\ncode. I think the pg_id command will do the job.\n\nBTW, I think the current implementation of multibyte is not yet\ncomplete. Next target would be NATIONAL CHARACTER support (not sure\nit's for 7.0, though). I would like to find a solution for the\nproblem of locale I stated above.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 08 Dec 1999 23:31:52 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Multibyte in autoconf "
},
{
"msg_contents": "> BTW, I think the current implementation of multibyte is not yet\n> complete. Next target would be NATIONAL CHARACTER support (not sure\n> it's for 7.0, though).\n\nI'm still here, interested in working on NATIONAL CHAR and other\ncharacter stuff. Will need a multibyte partner though, since I'm not\nfamiliar with all of the issues...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 08 Dec 1999 15:33:13 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Multibyte in autoconf"
},
{
"msg_contents": "On 1999-12-08, Tatsuo Ishii mentioned:\n\n> 1) initdb with no encoding argument (suppose that SQL_ASCII is set as\n> the default encoding in template1)\n> \n> 2) A user creates a database with no encoding argument. He thought\n> that the default encoding was EUC_JP.\n\nWhy would the user think that? Can't he check if he's not sure? Call his\ndb admin? Or did the db admin mess up the initdb?\n\n> \n> 3) He makes a table, then fills it with some Japanese data.\n> \n> 4) Later he pulls data from the table and finds that it is no longer\n> Japanese!\n\nThat really doesn't have anything to do with what I'm getting at. This is\njust a naive user, quite honestly.\n\n> So you think a postgres package with multibyte/locale/cyrillic options\n> enabled is a good thing for everyone? At least I don't like the locale\n> option. It is not only useless for multibyte languages such as\n> Japanese, but it makes text comparison slow. I wouldn't say locale\n> is useless for everyone, however. I admit it is useful for single\n> byte encodings.\n\n(Locale doesn't only affect language matters, but also currency\nformatting, number display, etc.)\n\nThe performance problems with locale are a deficiency which will get fixed.\nBut that doesn't mean we have to block this path via other means. But that\nwas not the point. The point was that what we have here is a default for a\ndefault. And moreover a default for an action you only do once. If you\ninit a database system, you make then and there (and only there) a\ndecision what you are going to do, tell your users about it and everyone\nis happy. 
That's not any more complicated than it is now, only that it\nmoves runtime behaviour to run time programs and leaves build time\ndecisions with configure time programs.\n\nNow you would do:\n./configure --with-mb=FOO\nmake\nmake install\ninitdb\n\nWith the proposal you could do:\n./configure --enable-multibyte\nmake\nmake install\n\ninitdb -E FOO # if you want multibyte in all your databases\n--or--\ninitdb # if you don't want multibyte by default but want\n # to keep the option for individual cases\n\nThe fact that you have configured with --enable-multibyte doesn't mean you\nhave to use it. Just because a program is locale capable, doesn't mean you\nhave to decide on the default locale at compile time.\n\n> I think it would be very hard to make a unified ideal package for\n> everyone.\n\nThat's what packages try to achieve. We shouldn't make it harder for them.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n\n",
"msg_date": "Thu, 9 Dec 1999 01:18:20 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Multibyte in autoconf "
},
{
"msg_contents": "\n\nPeter Eisentraut wrote:\n\n> On 1999-12-08, Tatsuo Ishii mentioned:\n>\n> > 1) initdb with no encoding argument (suppose that SQL_ASCII is set as\n> > the default encoding in template1)\n> >\n> > 2) A user creates a database with no encoding argument. He thought\n> > that the default encoding was EUC_JP.\n>\n> Why would the user think that? Can't he check if he's not sure? Call his\n> db admin? Or did the db admin mess up the initdb?\n>\n\nAs a Japanese, I don't want to specify an encoding for every initdb.\nThere are few selections except --with-mb=EUC_JP in Japan.\nIsn't it preferable that PostgreSQL doesn't need an excellent db\nadmin?\n\nI do initdb frequently in the current tree.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Thu, 09 Dec 1999 10:04:01 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Multibyte in autoconf"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> The fact that you have configured with --enable-multibyte doesn't mean you\n> have to use it. Just because a program is locale capable, doesn't mean you\n> have to decide on the default locale at compile time.\n\nWell, if you don't determine a default locale at configure/compile time,\nwhat that *really* means is that the default was hardwired in even\nearlier, ie, when the program was written. (Or else it means that there\nis no default: if we did that, users would be required to explicitly\ngive an encoding choice whenever they run initdb.)\n\nSeems to me that Tatsuo is right that setting a site-specific default\nencoding at configure time is handy, and *also* that Peter is right that\nthe encoding should be selectable at initdb time. But where's the\nconflict? We can accept \"--with-mb=FOO\" at configure time, with the\nunderstanding that the *only* thing FOO is used for is to set the\ndefault value of initdb's --pgencoding switch. You override FOO by\ngiving an explicit --pgencoding switch when you do initdb. People\nbuilding generic multibyte-capable RPMs would probably configure with\nFOO=ASCII (or whatever the non-multibyte encoding is called). Seems\nlike that should satisfy everyone. Have I missed something?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Dec 1999 20:31:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Multibyte in autoconf "
},
{
"msg_contents": "> > BTW, I think the current implementation of multibyte is not yet\n> > complete. Next target would be NATIONAL CHARACTER support (not sure\n> > it's for 7.0, though).\n> \n> I'm still here, interested in working on NATIONAL CHAR and other\n> character stuff. Will need a multibyte partner though, since I'm not\n> familiar with all of the issues...\n\nThanks for the offer. I would need help with the parser and some\nother stuff too.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 09 Dec 1999 12:01:24 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Multibyte in autoconf"
},
{
"msg_contents": "On 1999-12-09, Hiroshi Inoue mentioned:\n\n> As a Japanese, I don't want to specify an encoding for every initdb.\n> There are few selections except --with-mb=EUC_JP in Japan.\n\nAs I mentioned frequently, in a normal environment, you configure as many\ntimes as you initdb.\n\n> Isn't it preferable that PostgreSQL doesn't need an excellent db\n> admin?\n> \n> I do initdb frequently in the current tree.\n\nBut you configure and build each time, right?\n\nAnyway, as one who is only getting into multibyte for toyage, I guess I\nwill drop this topic for now.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sat, 11 Dec 1999 01:20:27 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Multibyte in autoconf"
}
] |
[
{
"msg_contents": "The Hermit Hacker wrote:\n\n> Hi Kyle...\n>\n> Jeff is currently in the process of redoing the web site, and\n> we're going to be working this 'bidding' system into the new site, but it's\n> going to take a little bit of time, since he would like to get it right\n> and not have to do it again :)\n>\n> Keep an eye on the site and expect *a variant* on what you\n> proposed to be in place around the first of the new year...but it will be\n> implemented...\n>\n\nSounds good. I hope my input will prove helpful.\n\nBTW, you don't have to wait for the web site to be working right to get me involved\n(although I think that might help others to get more involved).\n\nIf you're interested in getting some money to help out with development now, this is\nthe list of things I have been hoping for. I'm sure my modest contribution alone is\nnot enough to justify the work they will take, but it's probably better than a kick in\nthe butt.\n\n - Allow a view of a union\n - Get rid of size restrictions on tuples (8K?)\n - Get rid of size restrictions on queries (16K)\n - Make sequences roll back on abort (at least optionally)\n - Resident functions that can execute with super-privileges\n This would mean that a PL function could execute as the user who created\n it (or perhaps some other user the database creator might specify). This\n would allow certain information to come from a table that the calling user\n might not normally have access to (without having access to the whole table).\n - Support for outer (and other kinds of) joins\n - Column based permissions (I don't know if there are SQL standards for this)\n - Support for passwords in the TCL interface (is it there now?)\n - Could a subquery be included in the target list of a query\n select a,(select b from c) from d where e = f;\n (Not sure if this one is even standard SQL.)\n - Support for \"alter table drop column\"\n\nMaybe some of the things have been done already. Some have not. 
The items are\nroughly prioritized for our needs here at ATI.\n\nIf I knew that certain of these could be accomplished and for how much, that would\nreally help me move forward.",
"msg_date": "Mon, 06 Dec 1999 14:57:33 -0700",
"msg_from": "Kyle Bateman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
},
{
"msg_contents": "At 02:57 PM 12/6/99 -0700, Kyle Bateman wrote:\n\n*> - Allow a view of a union\n*> - Get rid of size restrictions on tuples (8K?)\n*> - Get rid of size restrictions on queries (16K)\n > - Resident functions that can execute with super-privileges\n > This would mean that a PL function could execute as the user who created\n > it (or perhaps some other user the database creater might specify).\nThis\n > would allow certain information to come from a table that the calling\nuser\n > might not normally have access to (without having access to the whole\ntable).\n*> - Support for outer (and other kinds of) joins\n > - Column based permissions (I don't know if there are SQL standards for\nthis)\n > - Support for passwords in the TCL interface (is it there now?)\n*> - Could a subquery be included in the target list of a query\n > select a,(select b from c) from d where e = f;\n> (Not sure if this one is even standard SQL.)\n*> - Support for \"alter table drop column\"\n\nWhile I don't have any money to offer, I just thought I'd point out\nthat each of the starred items are things that I've run into while\nporting an application from Oracle in just the past three days.\n\nThe system builds and allows the editing of tables via a web\ninterface, and not being able to drop columns is a drag. I'm\nemulating it by renaming such columns to something invisible\n(the user doesn't see the actual table, just what the application\nthinks is in the table, so I just make it lie to the user after\nthe renaming), which for this particular application will be\nacceptable for now.\n\nOuter joins can be worked around, but at times it's painful,\nthough I'm finding that for this application at least at times\nI can rewrite the query and make it more readable and efficient\nwithout the outer join. Apparently, outer joins are a temptation\nfor misuse. 
But there are times when they're really helpful.\n\nAllowing a view of a union can always be worked around in some\nsort of brute-force manner but can require studying the\nqueries on it closely in order to properly rewrite them without\nthe view.\n\nTuple size restrictions coupled with the fact that LOBs aren't\ndumped are also painful. If 7.0 contains a \"real\" backup, will\nit be able to back up LOBs? That would help me in the port of\nthis application.\n\nAs the word gets out that Postgres has greatly improved in the\nrecent past, more and more folks will be looking to move stuff\nover from commercial databases. This is partly driven by the\nweb (entirely, in my case) where databases are extremely \nuseful creatures, serving as the site's foundation in many\ncases. Oracle's pricing for web use is scandalous ($22K-ish\nfor a fully-paid up unsupported license on a PIII 500!) and\nbesides is overkill for all but the largest and busiest sites.\nPostgres can fill a very important niche in this environment.\nOne that some folks think MySQL fills but, hey, I ain't trusting\nany money transactions to a non-transactional database!\n\nThe focus on things like reliability and performance is \nextremely important, of course - MVCC in 6.5.*, a very\nnoticeable decline in memory leaks (\"drastic\" is not too\nstrong a word), removing the unnecessary write to pg_log\nafter read-only selects were crucial for my own web-based\napplication. WAL and a good backup facility are really\nnice to think about for those of us who worry about our\nhardware's reliability, and referential integrity constraints\nfor those of us who worry about our ability to write\nbug-free code (I can't wait to play with Jan's new updates),\nall very good things.\n\nBut Postgres is maturing to the point where users can point\nto the reliability and performance aspects of the tool and\nfind little to complain about. 
Now folks are going to start\nconcentrating on SQL language features more exclusively :)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 06 Dec 1999 14:27:50 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
},
{
"msg_contents": "Kyle Bateman <[email protected]> writes:\n> [ wish list ]\n> - Get rid of size restrictions on queries (16K)\n> - Could a subquery be included in the target list of a query\n> select a,(select b from c) from d where e = f;\n> (Not sure if this one is even standard SQL.)\n\nFWIW, the above two things are already done in current sources.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Dec 1999 20:17:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Kyle Bateman <[email protected]> writes:\n> > [ wish list ]\n> > - Get rid of size restrictions on queries (16K)\n> > - Could a subquery be included in the target list of a query\n> > select a,(select b from c) from d where e = f;\n> > (Not sure if this one is even standard SQL.)\n>\n> FWIW, the above two things are already done in current sources.\n\nUh,\n\n who did the latter one?\n\n Just running some qry through psql I see that it works\n partially. Especially I see a\n\n ExecSetParamPlan: more than one tuple returned by expression subselect\n\n So I assume it's not implemented in the form I outlined - the\n subselecting range table entry. I'll take a look at it the\n next days and maybe complain about that change if it\n interferes with the needs of the rule system to make\n aggregate, distinct, group by, sort by and all the other\n (long wanted) view stuff possible.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 7 Dec 1999 03:12:15 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
},
{
"msg_contents": "\nHere...\n\n\thttp://www.pgsql.com/features/\n\nHave to get it format'd and need to build up the CGI, but this is what you were\nasking for? :)\n\nOn Mon, 6 Dec 1999, Kyle Bateman wrote:\n\n> The Hermit Hacker wrote:\n> \n> > Hi Kyle...\n> >\n> > Jeff is currently in the process of redoing the web site, and\n> > we're going to be working this 'bidding' system into the new site, but it's\n> > going to take a little bit of time, since he would like to get it right\n> > and not have to do it again :)\n> >\n> > Keep an eye on the site and expect *a variant* on what you\n> > proposed to be in place around the first of the new year...but it will be\n> > implemented...\n> >\n> \n> Sounds good. I hope my input will prove helpful.\n> \n> BTW, you don't have to wait for the web site to be working right to get me involved\n> (although I think that might help others to get more involved).\n> \n> If you're interested in getting some money to help out with development now, this is\n> the list of things I have been hoping for. I'm sure my modest contribution alone is\n> not enough to justify the work they will take, but it's probably better than a kick in\n> the butt.\n> \n> - Allow a view of a union\n> - Get rid of size restrictions on tuples (8K?)\n> - Get rid of size restrictions on queries (16K)\n> - Make sequences roll back on abort (at least optionally)\n> - Resident functions that can execute with super-privileges\n> This would mean that a PL function could execute as the user who created\n> it (or perhaps some other user the database creator might specify). 
This\n> would allow certain information to come from a table that the calling user\n> might not normally have access to (without having access to the whole table).\n> - Support for outer (and other kinds of) joins\n> - Column based permissions (I don't know if there are SQL standards for this)\n> - Support for passwords in the TCL interface (is it there now?)\n> - Could a subquery be included in the target list of a query\n> select a,(select b from c) from d where e = f;\n> (Not sure if this one is even standard SQL.)\n> - Support for \"alter table drop column\"\n> \n> Maybe some of the things have been done already. Some have not. The items are\n> roughly prioritized for our needs here at ATI.\n> \n> If I knew that certain of these could be accomplished and for how much, that would\n> really help me move forward.\n> \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 6 Dec 1999 22:25:30 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
},
{
"msg_contents": "> Tom Lane wrote:\n> \n> > Kyle Bateman <[email protected]> writes:\n> > > [ wish list ]\n> > > - Get rid of size restrictions on queries (16K)\n> > > - Could a subquery be included in the target list of a query\n> > > select a,(select b from c) from d where e = f;\n> > > (Not sure if this one is even standard SQL.)\n> >\n> > FWIW, the above two things are already done in current sources.\n> \n> Uh,\n> \n> who did the latter one?\n\nTom Lane.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 6 Dec 1999 22:02:19 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> Tom Lane wrote:\n>>>> - Could a subquery be included in the target list of a query\n>>>> select a,(select b from c) from d where e = f;\n>> [ is done ]\n\n> Uh,\n> who did the latter one?\n\nMe.\n\n> Just running some qry through psql I see that it works\n> partially. Especially I see a\n> ExecSetParamPlan: more than one tuple returned by expression subselect\n\nYes. I allowed the existing kinds of subselect expressions in\ntargetlists (not only in qual clauses), and fixed things so that\na subselect yielding a single result can appear anywhere in an\nexpression, not only as the righthand-side argument of an operator.\nSubselects yielding multiple rows are still valid only as the\nrighthand argument of an IN, quantified boolean operator\n(such as = ANY or = ALL), or EXISTS. It's a perfectly straightforward\ngeneralization (and simplification!) of what we already had.\n\n> So I assume it's not implemented in the form I outlined - the\n> subselecting range table entry. I'll take a look at it the\n> next days and maybe complain about that change if it\n> interferes with the needs of the rule system to make\n> aggregate, distinct, group by, sort by and all the other\n> (long wanted) view stuff possible.\n\nI don't believe this has anything to do with subselects as range\ntable entries. That facility will be quite separate from subselects\nin expressions, no?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Dec 1999 22:22:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL "
},
{
"msg_contents": "Tom Lane wrote:\n\n> [email protected] (Jan Wieck) writes:\n>\n> > Uh,\n> > who did the latter one?\n>\n> > So I assume it's not implemented in the form I outlined - the\n> > subselecting range table entry. I'll take a look at it the\n> > next days and maybe complain about that change if it\n> > interferes with the needs of the rule system to make\n> > aggregate, distinct, group by, sort by and all the other\n> > (long wanted) view stuff possible.\n>\n> I don't believe this has anything to do with subselects as range\n> table entries. That facility will be quite separate from subselects\n> in expressions, no?\n\nNow that you say it...\n\n Yes, they are separate. They don't interfere.\n\n Thanks for the *slap* - sometimes I need that :-).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 7 Dec 1999 05:07:17 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TLE subselects (was: Raising funds for PostgreSQL)"
},
{
"msg_contents": "The Hermit Hacker wrote:\n\n> Here...\n>\n> http://www.pgsql.com/features/\n>\n> Have to get it format'd and need to build up the CGI, but this is what you were\n> asking for? :)\n>\n\nThat looks great. You don't waste a lot of time, do you...\n\nFor what it's worth, here's the language I would suggest for the page:\n\n\"This page enables you to help the PostgreSQL project move along more quickly and at the\nsame time it will help you get the new features you want and need the most. Below is a\nlist of enhancements under consideration by the development team. However, it is not known\nwhen each will bubble up to the top of the priority list and actually get done.\n\nIf one of these features is important to you, you can pledge a certain amount of money\n($100 minimum, please) toward that feature. If you want to help build the pool, tell\nothers about this page and encourage them to pledge toward those features they feel are\nneeded the most.\n\nWhen the accumulation of pledges on any feature reaches the level preset by the\ndevelopment team, you will be contacted at the email address you supply. You will then\nhave 2 weeks to actually send in your pledge. If the pledges actually received are still\nhigh enough to justify the project, your feature will be completed and you will be\nnotified of the next release which includes it.\n\nIf the development team does not successfully complete your feature, or if an insufficient\namount of the pledges are actually collected, you will be given the option of getting your\nmoney back or applying the amount toward another feature. If you make a pledge and then\ndo not honor it, you will not be eligible to make future pledges or for support through\nPostgreSQL Inc.\n\nOf the funds collected through this mechanism, 10% will be used for administrative\npurposes. The remaining 90% will go directly to the developer(s) working on your\nenhancement.\n\nTo get an item added to this list, please send email to Jeff MacDonald. 
If the item is not\nalready on the TODO list, it will get brought up, discussed, and if entered onto the TODO\nlist, will also be entered here. You will then be contacted to let you know your entry has\nbeen added, so that you can make your pledge.\n\n\n\nNow again, these are just suggestions, but I think the following are good ideas:\n\n1. Exclude items from the list which will be completed in the next 2-3 months anyway\n2. Take bids from the development team in advance on each feature. In other words, how\nmany dollars would they need to start on the enhancement today.\n3. Do not disclose these bids to the public\n4. Do not disclose the received pledges to date to the public\n5. Show on the page how much has been pledged toward the feature only as a percentage of\nthe amount needed to start the work\n6. Include a buffer (20%?) to allow for uncollectable pledges\n7. When the pledge is made, bring up a page with an electronic contract with Accept and\nDecline buttons. This contract should contain language which is legally binding and which\nwould hold up in a small claims court. That way if someone makes a pledge and you\ncomplete the feature, you could actually collect your money from them if you wanted to. I\ncan probably help with the language of this if you want.\n8. After a feature has been funded and completed, publish all the details (bids, pledge\namounts, who donated, who flaked on their pledges, etc.)\n9. Include prominent information about how to participate in this program on all the web\npage headers/footers and in the distribution README's. A catchy link might be \"How to get\nyour favorite feature added into PostgreSQL\" You should probably throw something into\nthese mailing lists from time to time too.\n10. Are you set up to take credit cards? This would be nice but I think you can do\nwithout it.\n11. You probably should probably choose US Dollars as the standard interchange format.\nHowever, this should appeal to an international market. 
If you get set up with a web\ncredit card vendor, they can probably handle exchange issues for you automatically.\n\nHope this is helpful.\n\nKyle",
"msg_date": "Tue, 07 Dec 1999 10:22:01 -0700",
"msg_from": "Kyle Bateman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
},
{
"msg_contents": "Kyle Bateman wrote:\n\n> Now again, these are just suggestions, but I think the following are good ideas:\n>\n> 3. Do not disclose these bids to the public\n\n> 4. Do not disclose the received pledges to date to the public\n\nI'm curious about your rationale for 3 and 4. Why not disclose bid amounts (not bidders) and\nreceived pledge amounts (not pledgers) to the public prior to the work being completed? What\ndo you think would happen?\n\nEd Loehr\n\n\n\n",
"msg_date": "Tue, 07 Dec 1999 12:08:58 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
},
{
"msg_contents": "Ed Loehr wrote:\n\n> Kyle Bateman wrote:\n>\n> > Now again, these are just suggestions, but I think the following are good ideas:\n> >\n> > 3. Do not disclose these bids to the public\n>\n> > 4. Do not disclose the received pledges to date to the public\n>\n> I'm curious about your rationale for 3 and 4. Why not disclose bid amounts (not bidders) and\n> received pledge amounts (not pledgers) to the public prior to the work being completed? What\n> do you think would happen?\n>\n> Ed Loehr\n\nI'm not really that strong on those issues. I just think if I were setting it up that's\nprobably the way I would do it to try to maximize the amount of money actually raised and\nperhaps keep things a bit more discrete for the developers.\n\nI don't think anything really bad would happen if people know what the numbers are. You could\nin fact figure out about what the numbers were anyway by placing a pledge and watching the\nchange in the percentage. That might be a good reason for hiding them right there... :)",
"msg_date": "Tue, 07 Dec 1999 11:36:14 -0700",
"msg_from": "Kyle Bateman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
},
{
"msg_contents": "\nI like most of the wording, but the idea behind a pledge is to provide a\npool of money, dedicated towards a particular feature, that we have\nreserved, on hand, to be able to contract out. If a \"feature\" gets $100\nin pledges and someone pops up, says I'll do it, pledging for that feature\nwill freeze at that point, and the programmer will get paid after his code\nhas been submitted, approved and entered into the repository...\n\nA pledge is record/valid when, and only when, the pledge has been \nreceived...\n\n\nOn Tue, 7 Dec 1999, Kyle Bateman wrote:\n\n> The Hermit Hacker wrote:\n> \n> > Here...\n> >\n> > http://www.pgsql.com/features/\n> >\n> > Have to get it format'd and need to build up the CGI, but this is what you\n> > asking for? :)\n> >\n> \n> That looks great. You don't waste alot of time, do you...\n> \n> For what its worth, here's the language I would suggest for the page:\n> \n> \"This page enables you to help the Postgresql project move along more\n> quickly and at the same time it will help you get the new features you\n> want and need the most. Below are a list of enhancements under\n> consideration by the development team. However it is not known when\n> each will bubble up to the top of the priority list and actually get\n> done.\n> \n> If one of these features is important to you, you can pledge a certain\n> amount of money ($100 minimum, please) toward that feature. If you\n> want to help build the pool, tell others about this page and encourage\n> them to pledge toward those features they feel are needed the most.\n> \n> When the accumulation of pledges on any feature reaches the level\n> preset by the development team, you will be contacted at the email\n> address you supply. You will then have 2 weeks to actually send in\n> your pledge. 
If the pledges actually received are still high enough\n> to justify the project, your feature will be completed and you will be\n> notified of the next release which includes it.\n> \n> If the development team does not successfully complete your feature,\n> or if an insufficient amount of the pledges are actually collected,\n> you will be given the option of getting your money back or applying\n> the amount toward another feature. If you make a pledge and then do\n> not honor it, you will not be eligible to make future pledges or for\n> support through PostgreSQL Inc.\n> \n> Of the funds collected through this mechanism, 10% will be used for\n> administrative purposes. The remaining 90% will go directly to the\n> developer(s) working on your enhancement.\n> \n> To get an item added to this list, please send email to Jeff\n> MacDonald. If the item is not already on the TODO list, it will get\n> brought up, discussed, and if entered onto the TODO list, will also be\n> entered here. You will then be contacted to let you know your entry\n> has been added, so that you can make your pledge.\n\n> 1. Exclude items from the list which will be completed in the next 2-3 months anyway\n\nwhy? if someone feels that WAL is important, and wants to pledge towards\ngetting that completed, so be it...most of the TODO list is currently\nunder someone's responsibility (WAL == Vadim)...if ppl want to pledge\n$100+ for Vadim to work on this, so be it...when its released, we send\nVadim a cheque for $100+ - 10% ... I don't think he'd refuse, now, do\nyou? :)\n\n> 2. Take bids from the development team in advance on each feature. \n> In other words, how many dollars would they need to start on the\n> enhancement today.\n> 3. Do not disclose these bids to the public\n> 4. Do not disclose the received pledges to date to the public\n\nI don't like this ...\n\n> 5. 
Show on the page how much has been pledged toward the feature only\n> as a percentage of the amount needed to start the work\n\nThe pledges are, IMHO, an incentive for a developer to develop that\nfeature...if 10 ppl pledge $100 for WAL (sorry, so much talk about it\nrecently its what first comes to mind), and the 11th person feels its\nworth another $100 to get done, why put a cap? Or, if feature X is\nsomething that none of the developers really care about, but 15 admins do,\nthe higher the pledges go, the more incentive there is for it to get\ndone...I think its up to those doing the pledges to determine what they\nthing a feature is worth in the scheme of things...if the pledges to get\n$1000 for a feature, and none of the developers feels up to doing it,\nshould we cap it?\n\n> 6. Include a buffer (20%?) to allow for uncollectable pledges\n\nAs mentioned above...a pledge isn't listed if not-collected...\n\n> 7. When the pledge is made, bring up a page with an electronic\n> contract with Accept and Decline buttons. This contract should\n> contain language which is legally binding and which would hold up in a\n> small claims court. That way if someone makes a pledge and you\n> complete the feature, you could actually collect your money from them\n> if you wanted to. I can probably help with the language of this if\n> you want.\n\nMore headaches then its worth, really...see above...\n\n> 8. After a feature has been funded and completed, publish all the\n> details (bids, pledge amounts, who donated, who flaked on their\n> pledges, etc.)\n\nNot in my life time...what are we trying for, guilt trips? :(\n\n> 9. Include prominent information about how to participate in this\n> program on all the web page headers/footers and in the distribution\n> README's. A catchy link might be \"How to get your favorite feature\n> added into PostgreSQL\" You should probably throw something into these\n> mailing lists from time to time too.\n\nAgreed...\n\n> 10. 
Are you set up to take credit cards? This would be nice but I\n> think you can do without it.\n\nDefinitely...\n\n> 11. You probably should probably choose US Dollars as the standard\n> interchange format. However, this should appeal to an international\n> market. If you get set up with a web credit card vendor, they can\n> probably handle exchange issues for you automatically.\n\nWe do all our values in Canadian, actually...with a link to one of the\nonline Exchanges for translating values...if I can somehow figure out how\nto tie into one of those, where I pass it something like:\n\n\t?from=canadian&to=us&value=###.##\n\nthen I wouldn't be adverse to adding that to the web page...anyone?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 7 Dec 1999 17:15:33 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
},
{
"msg_contents": "The Hermit Hacker wrote:\n\n> I like most of the wording, but the idea behind a pledge is to provide a\n> pool of money, dedicated towards a particular feature, that we have\n> reserved, on hand, to be able to contract out. If a \"feature\" gets $100\n> in pledges and someone pops up, says I'll do it, pledging for that feature\n> will freeze at that point, and the programmer will get paid after his code\n> has been submitted, approved and entered into the repository...\n>\n> A pledge is record/valid when, and only when, the pledge has been\n> received...\n>\n\nWhat can we do to assure a donor that his feature will actually get completed as a result\nof his donation? It seems to me this will be most people's hesitation (mine, at least)\nis sending money in without any kind of mechanism to control how it is spent.\n\n>\n>\n> > 1. Exclude items from the list which will be completed in the next 2-3 months anyway\n>\n> why? if someone feels that WAL is important, and wants to pledge towards\n> getting that completed, so be it...most of the TODO list is currently\n> under someone's responsibility (WAL == Vadim)...if ppl want to pledge\n> $100+ for Vadim to work on this, so be it...when its released, we send\n> Vadim a cheque for $100+ - 10% ... I don't think he'd refuse, now, do\n> you? :)\n>\n\nI guess my thought here was if an item was already being worked on, it would be difficult\n(impossible) to determine a target value (the amount of money needed before work would\nbegin). The target value is inherently 0 once the work has already begun. I think the\ngoal is to give donors positive feedback i.e. \"I pledged, I paid, the feature got done.\nWithout me, it probably wouldn't have happened (so soon).\" Once they have that cycle\nhappen, they'll be back to do it again because it will feel good.\n\n\n>\n> > 2. 
Take bids from the development team in advance on each feature.\n> > In other words, how many dollars would they need to start on the\n> > enhancement today.\n> > 3. Do not disclose these bids to the public\n> > 4. Do not disclose the received pledges to date to the public\n>\n> I don't like this ...\n\nUnderstood. I think this part can be done in a number of ways. But let me explain what\nthis does.\n\nBy setting a bid from the development team, that tells the donor that you are serious\nabout using his money for its intended purpose. The team is committed to that feature,\nthey're just waiting for the pool to build up to the target value. A donor will want to\nknow that his cash is not just going to be sucked into whatever cash flow needs are\nurgent that week--he'll know exactly what he needs to do to make the enhancement a\nreality. And the rules won't change on him after he sends his check in.\n\n>\n>\n> > 5. Show on the page how much has been pledged toward the feature only\n> > as a percentage of the amount needed to start the work\n>\n> The pledges are, IMHO, an incentive for a developer to develop that\n> feature...if 10 ppl pledge $100 for WAL (sorry, so much talk about it\n> recently its what first comes to mind), and the 11th person feels its\n> worth another $100 to get done, why put a cap? Or, if feature X is\n> something that none of the developers really care about, but 15 admins do,\n> the higher the pledges go, the more incentive there is for it to get\n> done...I think its up to those doing the pledges to determine what they\n> thing a feature is worth in the scheme of things...if the pledges to get\n> $1000 for a feature, and none of the developers feels up to doing it,\n> should we cap it?\n\nI don't consider it a cap. If you get more money than the feature needs, it would be\ngreat to have the right to throw the excess off into a second feature. That would\naccelerate the development even more.\n\n>\n>\n> > 6. Include a buffer (20%?) 
to allow for uncollectable pledges\n>\n> As mentioned above...a pledge isn't listed if not-collected...\n>\n> > 7. When the pledge is made, bring up a page with an electronic\n> > contract with Accept and Decline buttons. This contract should\n> > contain language which is legally binding and which would hold up in a\n> > small claims court. That way if someone makes a pledge and you\n> > complete the feature, you could actually collect your money from them\n> > if you wanted to. I can probably help with the language of this if\n> > you want.\n>\n> More headaches then its worth, really...see above...\n\nI think the key here is no matter what we do, if no one sends in the $, its not going to\nwork. It has to be something enticing enough that people will actually pay.\n\n>\n>\n> > 8. After a feature has been funded and completed, publish all the\n> > details (bids, pledge amounts, who donated, who flaked on their\n> > pledges, etc.)\n>\n> Not in my life time...what are we trying for, guilt trips? :(\n>\n> > 9. Include prominent information about how to participate in this\n> > program on all the web page headers/footers and in the distribution\n> > README's. A catchy link might be \"How to get your favorite feature\n> > added into PostgreSQL\" You should probably throw something into these\n> > mailing lists from time to time too.\n>\n> Agreed...\n>\n> > 10. Are you set up to take credit cards? This would be nice but I\n> > think you can do without it.\n>\n> Definitely...\n>\n> > 11. You probably should probably choose US Dollars as the standard\n> > interchange format. However, this should appeal to an international\n> > market. If you get set up with a web credit card vendor, they can\n> > probably handle exchange issues for you automatically.\n>\n> We do all our values in Canadian, actually...with a link to one of the\n> online Exchanges for translating values...if I can somehow figure out how\n> to tie into one of those, where I pass it something like:\n>",
"msg_date": "Tue, 07 Dec 1999 14:52:51 -0700",
"msg_from": "Kyle Bateman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
},
{
"msg_contents": "Kyle Bateman wrote:\n\n> The Hermit Hacker wrote:\n> > A pledge is record/valid when, and only when, the pledge has been\n> > received...\n> >\n>\n> What can we do to assure a donor that his feature will actually get completed as a result\n> of his donation? It seems to me this will be most people's hesitation (mine, at least)\n> is sending money in without any kind of mechanism to control how it is spent.\n\n Maybe we have to make pools per feature. Requires IMHO to\n move features, where donations can be thrown on, onto a\n separate DONOTODO list. Items there should have a clear,\n detailed specification, what the feature finally will\n implement and expected implementation time, assuming a\n developer can work at least X hours per week on it.\n\n Each of these DONOTODO items has it's own account, and a\n donator can send in cash for specific items. As soon, as the\n account balance raised high enough, someone will claim to do\n it - sure. At this point, the item is locked and a deadline\n for finishing, depending on the estimated efford (from the\n detailed feature spec) is set.\n\n Up to the point, where an item get's locked, the donator can\n transfer any amount between item accounts and (of course) get\n back his money. After that point, only additional donations\n can be placed on the item to increase the efford to finish it\n in time.\n\n At deadline overruns, donators can decide again what to do\n with their money. So the developers have some pressure in the\n neck to complete the items in time.\n\n In the backside, we must coordinate if multiple developers\n need each other to do one item together. They have to decide\n a split ratio of the account balance. Should IMHO be a\n floating process and depending on the developers involved,\n could be open until the end.\n\n Well, there must be a steering commitee, controlling the\n DONOTODO items and the cash flow - PostgreSQL Inc. 
would be\n my first guess.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 8 Dec 1999 01:31:17 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
},
{
"msg_contents": "Kyle Bateman <[email protected]> writes:\n> [ lots of good thoughts snipped ]\n\n> 2. Take bids from the development team in advance on each feature. In\n> other words, how many dollars would they need to start on the\n> enhancement today.\n\nI don't think that's very practical; the \"bids\" will be changing\nconstantly. For one thing, many user-level features will change in\ndifficulty depending on what other things have been completed.\nFor another, a developer might unexpectedly find himself with spare time\non his hands ... or a sudden need for cash ;-) ... which might induce\nhim to pick up some feature request that he hadn't been excited about\ndoing before. In the terms you're using that'd correspond to an\nunpredictable drop in the bid price.\n\nBefore I comment too much on this topic I should probably mention that\nI have myself engaged in exactly this kind of transaction: a few months\nago, someone who need not be named here sent me a couple hundred bucks\nin return for my dealing with a Postgres problem that they needed fixed\npronto (ie, within a week or two). It was something I would have fixed\nanyway, eventually, but it was worth their cash to encourage me to deal\nwith that issue sooner rather than later. So I've certainly got no\nmoral objection to arrangements like this.\n\nBut I do have a practical concern, which is that bidding like this might\ndistort the development process, for example by tempting someone to put\nin a quick-and-dirty hack that would provide the requested feature and\nyet cause trouble down the line for future improvements. In the long\nrun that's not good for the health of the project.\n\nI'm not sure how to answer that concern. One possible answer is to\nput a cap on the amounts bid --- a person's judgment is less likely\nto be swayed in the wrong direction by $$ than $$$$$$, no? But the\ncap would probably have to vary depending on the difficulty of the\nproposed feature. 
I don't think we want to get into spending a lot\nof effort on cost-estimating everything that's on the TODO list,\nso administering a bid cap might well be impractical.\n\nMaybe there's a better answer. No good ideas at the moment...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Dec 1999 03:23:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Kyle Bateman <[email protected]> writes:\n> > [ lots of good thoughts snipped ]\n>\n> > 2. Take bids from the development team in advance on each feature. In\n> > other words, how many dollars would they need to start on the\n> > enhancement today.\n\n> Before I comment too much on this topic I should probably mention that\n\n> I have myself engaged in exactly this kind of transaction: a few months\n> ago, someone who need not be named here sent me a couple hundred bucks\n> in return for my dealing with a Postgres problem that they needed fixed\n> pronto (ie, within a week or two). It was something I would have fixed\n> anyway, eventually, but it was worth their cash to encourage me to deal\n> with that issue sooner rather than later. So I've certainly got no\n> moral objection to arrangements like this.\n>\n\nTom, the more I hear, the more I'm beginning to think that maybe the best\nmechanism is for the client to deal directly with one or two developers in\nthe way you did. Everything else we talk about begins to get rather\ncomplicated in a hurry.\n\nIt would be relatively easy to open up a new discussion group for posting\noffers (in either direction). Once a developer and a client got connected,\nthey could negotiate privately for the feature.\n\n>\n> But I do have a practical concern, which is that bidding like this might\n> distort the development process, for example by tempting someone to put\n> in a quick-and-dirty hack that would provide the requested feature and\n> yet cause trouble down the line for future improvements. In the long\n> run that's not good for the health of the project.\n>\n> I'm not sure how to answer that concern. One possible answer is to\n> put a cap on the amounts bid --- a person's judgment is less likely\n> to be swayed in the wrong direction by $$ than $$$$$$, no? But the\n> cap would probably have to vary depending on the difficulty of the\n> proposed feature. 
I don't think we want to get into spending a lot\n> of effort on cost-estimating everything that's on the TODO list,\n> so administering a bid cap might well be impractical.\n>\n\nI think the best answer to this is already in place. It seems to me the\ndeveloper pool as a whole watches additions very carefully. Someone who\nputs in an ugly hack has to think about what the rest of the team will\nthink of it. If the problem were repeated, that developer would begin to\nlose clout and eventually would not be allowed the same kind of access to\nthe source tree. I think you guys are pretty motivated to do your best\nwork on this project. If that were not so, we would not see the quality of\nwork we do.\n\nNone of the arrangements can be forced (because this is a volunteer\nproject) so things like caps will probably not work anyway. I think I'd\nencourage the group to begin to experiment with the kind of transactions\nyou described and see if we run into any problems with it. If it begins to\ncause problems, adjustments can always be made to compensate.",
"msg_date": "Wed, 08 Dec 1999 09:48:52 -0700",
"msg_from": "Kyle Bateman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Raising funds for PostgreSQL"
}
] |
[
{
"msg_contents": "> %--\n> %--> Dear sir,\n> %--> \n> %--> Just one remark for your book :\n> %--> in your chapter \"SQL Aggregates\", it is interessant perhaps if you\n> %--> say if COUNT(DISTINCT ...) for example is supported. \n> %--> \n> %--> mb\n> %--> \n> %--\n> %--It isn't supported.\n> %--\n> \n> Sorry, I just want to say \"It is not supported\" obviously. I think, it is \n> important to say it, because I search for my mistake a little bit before\n> finding the answer in the FAQ.\n\nCC to hackers.\n\nThe best way to do that is to display a mess to the user when they try\nCOUNT(DISTINCT...). That makes it easy because they see it as soon as\nthey try it. No hunting around, but I am not sure how to do that in the\ngrammer because we don't find out about aggregates until later.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 6 Dec 1999 22:27:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Book - SQL Aggregates"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> The best way to do that is to display a mess to the user when they try\n> COUNT(DISTINCT...). That makes it easy because they see it as soon as\n> they try it.\n\nI was actually thinking about trying to implement aggregate(DISTINCT ...),\nor failing that, at least understand why it's hard ;-)\n\nAt the very least I think I can manage an explicit \"DISTINCT not supported\"\nerror message from the parser. Will take this as a TODO item.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Dec 1999 23:29:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Book - SQL Aggregates "
}
] |
[
{
"msg_contents": "I'd be interested in a data also, as it will give me a target for all\nthe JDBC stuff I'm working on for 7.0.\n\nPeter (Still catching up :-( )\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Mike Mascari [mailto:[email protected]]\nSent: Sunday, December 05, 1999 1:08 AM\nTo: [email protected]\nSubject: [HACKERS] When is 7.0 going Beta?\n\n\nHello,\n\nI was just wondering if there were any dates the major\ndevelopers had in mind as to when current will be released\nas a beta release? For my trivial part, I still have to send\nin a patch to allow pg_dump to dump COMMENT ON commands for\nany descriptions the user might have created and was\nwondering if any time frame had been established.\n\nJust curious,\n\nMike\n\n\n\n\n************\n",
"msg_date": "Tue, 7 Dec 1999 07:20:27 -0000 ",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] When is 7.0 going Beta?"
}
] |