[ { "msg_contents": "* Peter Eisentraut <[email protected]> [001029 14:32]:\n> Larry Rosenman writes:\n> \n> > Would the timezone change last night be causing this? \n> \n> The \"timestamp\" failure, yes. The \"geometry\", no. Geometry simply needs\n> a new expected file, but unfortunately they're not the same for \"cc\" and\n> \"gcc\"...\nHmm. I wonder why cc and gcc are doing different math. Wierd. \n\nI suspect it might have to do with what gcc was compiled on (7.0.x of\nUW). Can we make 2 expected files and have the map file figure it\nout? \n\nAs to the timestamp, can we make a note that on timechange sundays\ndon't run the regression test? :-) \n\nLER\n\n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 29 Oct 2000 14:51:10 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: regression failure/UnixWare7.1.1/current sources" }, { "msg_contents": "Larry Rosenman writes:\n\n> Hmm. I wonder why cc and gcc are doing different math. Wierd. \n\nNot only that, but you get different results with the same compiler\ndepending on different optimization settings. The joys of binary floating\npoint...\n\n> I suspect it might have to do with what gcc was compiled on (7.0.x of\n> UW). Can we make 2 expected files and have the map file figure it\n> out?\n\nThe resultmap mechanism isn't really prepared for this yet, but it is\ndoable I'd say.\n\n> As to the timestamp, can we make a note that on timechange sundays\n> don't run the regression test? :-) \n\nWe've had a note in there since the last change (April was it?), which was\nwhen I ran into this. 
:-)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 30 Oct 2000 17:55:01 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: regression failure/UnixWare7.1.1/current sources" } ]
[ { "msg_contents": "\nis anyone else getting these but me?\n\nOn Sun, 29 Oct 2000, Marc G. Fournier wrote:\n\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 29 Oct 2000 17:01:36 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: another try " }, { "msg_contents": "I'm getting them...\n* The Hermit Hacker <[email protected]> [001029 22:36]:\n> \n> is anyone else getting these but me?\n> \n> On Sun, 29 Oct 2000, Marc G. Fournier wrote:\n> \n> > \n> > \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 29 Oct 2000 23:36:24 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: another try" }, { "msg_contents": "I'm getting all of 'em, unfortunately =)\n\nI dunno what's goin' on, but I ended up back on hackers and general, even\ntho' I unsubbed from general, and was never on hackers!\n\nUnwanted emails received : plenty\nUnsolicited Postgres knowledge : pleasantly rising =)\n\n- r\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of The Hermit\n> Hacker\n> Sent: October 29, 2000 1:02 PM\n> To: [email protected]\n> Subject: Re: [HACKERS] another try\n>\n>\n>\n> is anyone else getting these but me?\n>\n> On Sun, 29 Oct 2000, Marc G. Fournier wrote:\n>\n> >\n> >\n>\n> Marc G. 
Fournier ICQ#7615664 IRC\n> Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary:\n> scrappy@{freebsd|postgresql}.org\n>\n\n", "msg_date": "Sun, 29 Oct 2000 21:41:46 -0800", "msg_from": "\"Rob S.\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: another try " } ]
[ { "msg_contents": "\nlet's see that this doesn't generate an error \n\n", "msg_date": "Sun, 29 Oct 2000 18:25:12 -0400 (AST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "its too quiet" }, { "msg_contents": "received\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of\n> [email protected]\n> Sent: October 29, 2000 2:25 PM\n> To: [email protected]\n> Subject: [HACKERS] its too quiet\n> \n> \n> \n> let's see that this doesn't generate an error \n> \n> \n", "msg_date": "Mon, 30 Oct 2000 08:18:40 -0800", "msg_from": "\"Rob S.\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: its too quiet" } ]
[ { "msg_contents": "\nsomething screwed up, possibly in the configs ... subscriptions should all\nbe fine, but have to fix the configurations after getting these reloaded\n...\n\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 29 Oct 2000 19:17:00 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "okay, retry this one ..." } ]
[ { "msg_contents": "\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 29 Oct 2000 19:21:05 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "hrmmmm ... ignore ..." } ]
[ { "msg_contents": "\n\nVadim Mikheev wrote:\n\n> Hi, All\n>\n> First, as I've already mentioned in answer to Tom about DROP TABLE, undo\n> logic\n> will not be implemented in 7.1 -:( Doable for tables but for indices we\n> would need\n> either in compensation records or in xmin/cmin in index tuples. So, we'll\n> still live\n> with dust from aborted xactions in our tables/indices.\n>\n\nDoes it mean that there would still be inconsistency between\ntables and their indexes ?\n\nRegards.\nHiroshi Inoue\n\n", "msg_date": "Mon, 30 Oct 2000 13:02:48 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WAL status update" }, { "msg_contents": "> > First, as I've already mentioned in answer to Tom about DROP TABLE, undo\n> > logic will not be implemented in 7.1 -:( Doable for tables but for\nindices we\n> > would need either in compensation records or in xmin/cmin in index\ntuples.\n> > So, we'll still live with dust from aborted xactions in our\ntables/indices.\n>\n> Does it mean that there would still be inconsistency between\n> tables and their indexes ?\n\nNot related. 
I just meant to say that tuples inserted into tables/indices by\naborted transactions will stay there till vacuum.\nRedo should guarantee that index tuples will not be lost in split operation\n(what's possible now), but not that an index will have correct structure\nafter crash - parent page may be unupdated, what could be handled\nat run time.\n\nVadim\n\n\n", "msg_date": "Sun, 29 Oct 2000 23:10:00 -0800", "msg_from": "\"Vadim Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL status update" }, { "msg_contents": "Vadim Mikheev writes:\n\n> WAL todo list looks like:\n\nSo what's the latest on going beta?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 30 Oct 2000 17:56:37 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL status update" }, { "msg_contents": "\nI believe that its just resting on Vadim again to give us the go ahead\n... which I believe its always been on his shoulders, no? :)\n\nVadim? \n\nOn Mon, 30 Oct 2000, Peter Eisentraut wrote:\n\n> Vadim Mikheev writes:\n> \n> > WAL todo list looks like:\n> \n> So what's the latest on going beta?\n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 31 Oct 2000 01:41:44 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL status update" }, { "msg_contents": "Vadim Mikheev writes:\n\n> WAL with rollforward logic is available for testing but yet requires\n> re-compiling with -DXLOG flag. Note that regress tests are passed but\n> it doesn't mean anything. What is required is testing how recovery\n> does work (just run pg_ctl -m i stop after some changes in db and test\n> db after restart). 
Also, I've tested it for single running transaction\n> only. xlog.c:XLOG_DEBUG is on currently and results in high output to\n> stderr.\n\nThe first test did not go very well. I did a fresh compile, initdb,\nstarted the postmaster, ran 'make installcheck' (sequential regression\ntests), and sent a kill -QUIT to the postmaster during the numeric test.\nThen I restarted the postmaster and got a load of lines like\n\nREDO @ 0/434072; LSN 0/434100: prev 0/433992; xprev 0/433992; xid\n17278: Transaction - commit: 2000-10-31 23:21:29\nREDO @ 0/434100; LSN 0/434252: prev 0/434072; xprev 0/0; xid 17279: Heap -\ninsert: node 19008/1259; cid 0; tid 1/43\n\nafter which it finished with\n\nStartup failed - abort\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 31 Oct 2000 23:48:49 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL status update" } ]
[ { "msg_contents": " Date: Monday, October 30, 2000 @ 02:17:31\nAuthor: ishii\n\nUpdate of /home/projects/pgsql/cvsroot/pgsql\n from hub.org:/tmp/cvs-serv21342\n\nModified Files:\n\tconfigure.in \n\n----------------------------- Log Message -----------------------------\n\nAdd new configure option \"--enable-uniconv\" that enables automatic\ncode conversion between Unicode and other encodings. Note that\nthis option requires --enable-multibyte also.\nThe reason why this is optional is that the feature requires huge\nmapping tables and I don't think every user need the feature.\n\n", "msg_date": "Mon, 30 Oct 2000 02:17:31 -0500 (EST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "pgsql (configure.in)" }, { "msg_contents": "> Add new configure option \"--enable-uniconv\" that enables automatic\n> code conversion between Unicode and other encodings. Note that\n> this option requires --enable-multibyte also.\n> The reason why this is optional is that the feature requires huge\n> mapping tables and I don't think every user need the feature.\n\nCan you explain what this does? Does it mean frontends can use Unicode as\ntheir character set?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 30 Oct 2000 18:23:32 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Unicode conversion (Re: [COMMITTERS] pgsql (configure.in))" }, { "msg_contents": "> > Add new configure option \"--enable-uniconv\" that enables automatic\n> > code conversion between Unicode and other encodings. Note that\n> > this option requires --enable-multibyte also.\n> > The reason why this is optional is that the feature requires huge\n> > mapping tables and I don't think every user need the feature.\n> \n> Can you explain what this does? Does it mean frontends can use Unicode as\n> their character set?\n\nYes. 
Here are some examples:\n\n(1) both backend/frontend uses Unicode(actually UTF-8)\n\n$ createdb -E unicode unicode\n$ psql unicode\n[some sessions follow using UTF-8]\n\t\t\t :\n\t\t\t :\n\nNote that this is not a new functionality as opposite to (2), (3).\n\n(2) backend is ISO8859-2 but frontend is UNICODE\n\n$ createdb -E LATIN2 latin2\n$ psql latin2\n\\encoding UNICODE\n[some sessions follows using UTF-8]\n\t\t\t :\n\t\t\t :\n\nNote that if you type in a wrong ISO8859-2 character that could not be\nconverted to UTF-8, you would get notices something like:\n\nNOTICE: local_to_utf: could not convert (0x00b4) LATIN2 to UTF-8. Ignored\n\n(3) backend is Unicode but frontend is ISO8859-2\n\n$ createdb -E unicode unicode\n$ psql unicode\n\\encoding LATIN2\n[some sessions follow using ISO8859-2]\n\t\t\t :\n\t\t\t :\n\nSame note above...\n--\nTatsuo Ishii\n", "msg_date": "Tue, 31 Oct 2000 09:40:32 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unicode conversion (Re: [COMMITTERS] pgsql\n (configure.in))" }, { "msg_contents": "Tatsuo Ishii writes:\n\n> > > Add new configure option \"--enable-uniconv\" that enables automatic\n\nDo you mind if we name this \"--enable-unicode-conversion\"? It's a bit\nlonger, but that's why they're called long options. :)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sun, 5 Nov 2000 21:58:34 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unicode conversion (Re: [COMMITTERS] pgsql (configure.in))" }, { "msg_contents": "> Do you mind if we name this \"--enable-unicode-conversion\"? It's a bit\n> longer, but that's why they're called long options. 
:)\n\nSounds reasonable:-) Please go ahead and change it.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 06 Nov 2000 10:05:52 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unicode conversion (Re: [COMMITTERS] pgsql\n (configure.in))" } ]
[ { "msg_contents": "\n> After thinking some more about yesterday's discussions, I propose that\n> we adopt the following planning behavior for cursors:\n> \n> 1. If DECLARE CURSOR does not contain a LIMIT, continue to plan on the\n> basis of 10%-or-so fetch (I'd consider anywhere from 5% to 25% to be\n> just as reasonable, if people want to argue about the exact number;\n> perhaps a SET variable is in order?). 10% seems to be a reasonable\n> compromise between delivering tuples promptly and not choosing a plan\n> that will take forever if the user fetches the whole result.\n\nImho that was a wrong assumption in the first place. The default assumption \nimho needs to be 100 %. Especially if you fixed the limit clause enabling people\nto optimize the few rows fetched case.\n\n> 3. If DECLARE CURSOR contains \"LIMIT ALL\", plan on the assumption that\n> all tuples will be fetched, ie, select lowest-total-cost plan.\n> \n> (Note: LIMIT ALL has been in the grammar right along, but up to now\n> it has been entirely equivalent to leaving out the LIMIT clause. This\n> proposal essentially suggests allowing it to act as a planner \n> hint that\n> the user really does intend to fetch all the tuples.)\n> \n> Comments?\n\nImho an explicit statement to switch optimizer mode from all rows to first rows\nwould be a lot easier to understand and is what other DB vendors do.\n\nAndreas\n", "msg_date": "Mon, 30 Oct 2000 09:34:03 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: LIMIT in DECLARE CURSOR: request for comments" }, { "msg_contents": "\nOn Mon, 30 Oct 2000, Zeugswetter Andreas SB wrote:\n\n> \n> > After thinking some more about yesterday's discussions, I propose that\n> > we adopt the following planning behavior for cursors:\n> > \n> > 1. 
If DECLARE CURSOR does not contain a LIMIT, continue to plan on the\n> > basis of 10%-or-so fetch (I'd consider anywhere from 5% to 25% to be\n> > just as reasonable, if people want to argue about the exact number;\n> > perhaps a SET variable is in order?). 10% seems to be a reasonable\n> > compromise between delivering tuples promptly and not choosing a plan\n> > that will take forever if the user fetches the whole result.\n> \n> Imho that was a wrong assumption in the first place. The default assumption \n> imho needs to be 100 %. Especially if you fixed the limit clause enabling people\n> to optimize the few rows fetched case.\n\nBut what if you're doing fetch 10 rows, fetch 10 rows, ...\nYou're not limiting, because you want all of them, but you are only\npulling a small number at a time to say do expensive front end processing.\nIt might make sense to actually pull a plan which is lower startup and\nhigher per row. Although the full cost is higher, you get a better\nturnaround time on the first set and the cost difference per set may\nbe unnoticeable (it would depend on the particulars).\n\n", "msg_date": "Mon, 30 Oct 2000 09:07:36 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: LIMIT in DECLARE CURSOR: request for comments" } ]
[ { "msg_contents": "I've just tried to checkout a clean copy of the cvs tree, and it seems\nthat configure is missing a substitutions in Makefile.global.in, ie:\n\nmake: *** No rule to make target\n`@abs_top_srcdir@/src/Makefile.global.in', needed by\n`../../../src/Makefile.global'. Stop.\n\nAny ideas?\n\nPeter\n\n-- \nPeter T Mount [email protected] http://www.retep.org.uk\nPostgreSQL JDBC Driver http://www.retep.org.uk/postgres/\nJava PDF Generator http://www.retep.org.uk/pdf/\n\n\n", "msg_date": "Mon, 30 Oct 2000 09:42:20 +0000 (GMT)", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Current CVS broken?" }, { "msg_contents": "Peter Mount writes:\n\n> I've just tried to checkout a clean copy of the cvs tree, and it seems\n> that configure is missing a substitutions in Makefile.global.in, ie:\n> \n> make: *** No rule to make target\n> `@abs_top_srcdir@/src/Makefile.global.in', needed by\n> `../../../src/Makefile.global'. Stop.\n> \n> Any ideas?\n\nRun './config.status --recheck'.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 30 Oct 2000 17:23:34 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Current CVS broken?" }, { "msg_contents": "On Mon, 30 Oct 2000, Peter Eisentraut wrote:\n\n> Peter Mount writes:\n> \n> > I've just tried to checkout a clean copy of the cvs tree, and it seems\n> > that configure is missing a substitutions in Makefile.global.in, ie:\n> > \n> > make: *** No rule to make target\n> > `@abs_top_srcdir@/src/Makefile.global.in', needed by\n> > `../../../src/Makefile.global'. Stop.\n> > \n> > Any ideas?\n> \n> Run './config.status --recheck'.\n\nNo still has the problem. 
I'm currently having to edit it manually to get\nround the problem.\n\nPeter\n\n-- \nPeter T Mount [email protected] http://www.retep.org.uk\nPostgreSQL JDBC Driver http://www.retep.org.uk/postgres/\nJava PDF Generator http://www.retep.org.uk/pdf/\n\n\n", "msg_date": "Mon, 30 Oct 2000 17:10:40 +0000 (GMT)", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Current CVS broken?" }, { "msg_contents": "Peter Mount writes:\n\n> > Run './config.status --recheck'.\n> \n> No still has the problem. I'm currently having to edit it manually to get\n> round the problem.\n\nOh, you need to run './config.status' as well. './config.status\n--recheck' figures out the new value of @abs_top_srcdir@, and\n'./config.status' substitutes it.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 30 Oct 2000 18:57:56 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Current CVS broken?" }, { "msg_contents": "On Mon, 30 Oct 2000, Peter Eisentraut wrote:\n\n> Peter Mount writes:\n> \n> > > Run './config.status --recheck'.\n> > \n> > No still has the problem. I'm currently having to edit it manually to get\n> > round the problem.\n> \n> Oh, you need to run './config.status' as well. './config.status\n> --recheck' figures out the new value of @abs_top_srcdir@, and\n> './config.status' substitutes it.\n\nDid that, and it still doesn't substitute @abs_top_srcdir@\n\nPeter\n\n-- \nPeter T Mount [email protected] http://www.retep.org.uk\nPostgreSQL JDBC Driver http://www.retep.org.uk/postgres/\nJava PDF Generator http://www.retep.org.uk/pdf/\n\n\n", "msg_date": "Mon, 30 Oct 2000 19:10:37 +0000 (GMT)", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Current CVS broken?" 
}, { "msg_contents": "Peter Mount writes:\n\n> Did that, and it still doesn't substitute @abs_top_srcdir@\n\nHmm, if you have \"configure\" revision 1.74 then you should certainly get\nsomething for @abs_top_srcdir@. Try to remove config.cache and re-run\nconfigure by hand. Most odd...\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 31 Oct 2000 10:55:39 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Current CVS broken?" } ]
[ { "msg_contents": "\nJust wondering what the cost of begin/end transaction is.\n\nThis is for pg_dump which, when restoring BLOBs, inserts multiple rows into\na temporary xref table. The sequence of events is:\n\nConn1: Begin\nConn1: lo_create/lo_close/lo_write.../lo_close\nConn2: Insert into xref table (which does an implicit begin/end, I think).\nConn1: Commit;\n\nWould I get substantially better performance by doing a begin/end every\n10/100/1000 rows in each connection, or is the transaction overhead low? Or\nis this something I just need to test?\n\n[eg. in Dec/RDB TX begin/end is expensive, but writing more than 1000 rows\nin a TX can also be costly, so a compromise is useful]\n\n \n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 30 Oct 2000 21:32:20 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Transaction costs?" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> This is for pg_dump which, when restoring BLOBs, inserts multiple rows into\n> a temporary xref table. The sequence of events is:\n\n> Conn1: Begin\n> Conn1: lo_create/lo_close/lo_write.../lo_close\n> Conn2: Insert into xref table (which does an implicit begin/end, I think).\n> Conn1: Commit;\n\nTwo connections? Why in the world are you doing it like that --- and\nespecially in that order? Seems like this is committing an xref change\nbefore the LO itself is committed. Why?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Nov 2000 11:33:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction costs? 
" }, { "msg_contents": "At 11:33 2/11/00 -0500, Tom Lane wrote:\n>> Conn1: Begin\n>> Conn1: lo_create/lo_close/lo_write.../lo_close\n>> Conn2: Insert into xref table (which does an implicit begin/end, I think).\n>> Conn1: Commit;\n>\n>Two connections?\n\nOtherwise a reconnect will lose the temp table contents.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 03 Nov 2000 09:59:39 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Transaction costs? " } ]
[ { "msg_contents": "yes I have been getting them too, I have tried to change my sub but it seems the address I remember is not taking my requests, and the postgresql.org page that is supposed to describe how to get off is broken as well. So at this point I am learning way more than I need to. (I just can't help from reading them all)\n\n", "msg_date": "Mon, 30 Oct 2000 07:52:03 -0800", "msg_from": "\"Nathan Suderman\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: another try" } ]
[ { "msg_contents": "Hello,\n\nSmall technical question: what exactly CommandCounterIncrement do?\nAnd what exactly it should be used for?\n\nI use it to see data which is changed in current transaction.\nIf to be more \nexact when I write BLOB in transaction each time I write additional piece I \ndo CommandCounterIncrement.\n\nI ask this question because I found out that when I run postgres with \nverbose=4 I see lot's of StartTransactionCommand & CommitTransactionCommand\npair in the place where BLOB is written. And I have a feeling that something \nis wrong. Looks like explicitly commit all changes. That's really bad...\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Tue, 31 Oct 2000 00:19:35 +0600", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "CommandCounterIncrement" }, { "msg_contents": "Denis Perchine <[email protected]> writes:\n> Small technical question: what exactly CommandCounterIncrement do?\n\nIt increments the command counter ;-)\n\n> And what exactly it should be used for?\n\nYou need it if, within a chunk of backend code, you want subsequent\nqueries to see the results of earlier queries. Ordinarily a query\ncannot see its own output --- else a command like\n\tUPDATE foo SET x = x + 1\nfor example, would be an infinite loop, since as it scans the table\nit would find the tuples it inserted, update them, insert the updated\ncopies, ...\n\nPostgres' solution is that tuples inserted by the current transaction\nAND current command ID are not visible. So, to make them visible\nwithout starting a new transaction, increment the command counter.\n\n> I ask this question because I found out that when I run postgres with \n> verbose=4 I see lot's of StartTransactionCommand & CommitTransactionCommand\n> pair in the place where BLOB is written. 
And I have a feeling that something \n> is wrong. Looks like explicitly commit all changes. That's really bad...\n\nThese do not commit anything, assuming you are inside a transaction\nblock. Offhand I don't think they will amount to much more than a\nCommandCounterIncrement() call in that case, but read xact.c if you want\nto learn more.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Nov 2000 11:43:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CommandCounterIncrement " }, { "msg_contents": "> Denis Perchine <[email protected]> writes:\n> > Small technical question: what exactly CommandCounterIncrement do?\n>\n> It increments the command counter ;-)\n>\n> > And what exactly it should be used for?\n>\n> You need it if, within a chunk of backend code, you want subsequent\n> queries to see the results of earlier queries. Ordinarily a query\n> cannot see its own output --- else a command like\n> \tUPDATE foo SET x = x + 1\n> for example, would be an infinite loop, since as it scans the table\n> it would find the tuples it inserted, update them, insert the updated\n> copies, ...\n>\n> Postgres' solution is that tuples inserted by the current transaction\n> AND current command ID are not visible. So, to make them visible\n> without starting a new transaction, increment the command counter.\n\nPerfect. That what I thought it is.\n\n> > I ask this question because I found out that when I run postgres with\n> > verbose=4 I see lot's of StartTransactionCommand &\n> > CommitTransactionCommand pair in the place where BLOB is written. And I\n> > have a feeling that something is wrong. Looks like explicitly commit all\n> > changes. That's really bad...\n>\n> These do not commit anything, assuming you are inside a transaction\n> block. Offhand I don't think they will amount to much more than a\n> CommandCounterIncrement() call in that case, but read xact.c if you want\n> to learn more.\n\nYeps. I get this... 
But there's still a problem when people try to use BLOBs \noutside of TX. I like to detect it...\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Thu, 2 Nov 2000 22:48:41 +0600", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CommandCounterIncrement" }, { "msg_contents": "I have added this to the developers FAQ. However, the developers FAQ\nisn't accessible from the web site, and I have contacted Vince on this.\n\n---------------------------------------------------------------------------\n\n\n13) What is CommandCounterIncrement()?\n\nNormally, transactions can not see the rows they modify. This allows\n\n\tUPDATE foo SET x = x + 1</CODE> to work correctly. \n\n\nHowever, there are cases where a transactions needs to see rows affected\nin previous parts of the transaction. This is accomplished using a\nCommand Counter. Incrementing the counter allows transactions to be\nbroken into pieces so each piece can see rows modified by previous\npieces. CommandCounterIncrement() increments the Command\nCounter, creating a new part of the transaction.\n\n---------------------------------------------------------------------------\n\n\n\n> Denis Perchine <[email protected]> writes:\n> > Small technical question: what exactly CommandCounterIncrement do?\n> \n> It increments the command counter ;-)\n> \n> > And what exactly it should be used for?\n> \n> You need it if, within a chunk of backend code, you want subsequent\n> queries to see the results of earlier queries. 
Ordinarily a query\n> cannot see its own output --- else a command like\n> \tUPDATE foo SET x = x + 1\n> for example, would be an infinite loop, since as it scans the table\n> it would find the tuples it inserted, update them, insert the updated\n> copies, ...\n> \n> Postgres' solution is that tuples inserted by the current transaction\n> AND current command ID are not visible. So, to make them visible\n> without starting a new transaction, increment the command counter.\n> \n> > I ask this question because I found out that when I run postgres with \n> > verbose=4 I see lot's of StartTransactionCommand & CommitTransactionCommand\n> > pair in the place where BLOB is written. And I have a feeling that something \n> > is wrong. Looks like explicitly commit all changes. That's really bad...\n> \n> These do not commit anything, assuming you are inside a transaction\n> block. Offhand I don't think they will amount to much more than a\n> CommandCounterIncrement() call in that case, but read xact.c if you want\n> to learn more.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 4 Nov 2000 13:23:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CommandCounterIncrement" } ]
[ { "msg_contents": "Hi,\n\nFor users of large PostgreSQL and PostgreSQL builders, this is for you.\n\nI'm having a terrible time deciding now. :(\n\nWe're about to build a \"huge\" website now. I got tied up in signing the\ncontract without really getting enough information about PgSQL since this\nwhat we plan to implement with PHP (normally we use mySQL but i guess it\ndoes not fit for huge databases like that).\n\nHere's my problem.. We're about to build a site like hitbox.com where there\nis a large amount of database required.. If say there is 100,000 users with\n1000 page hits per day for each, and everything will be logged, you could\nimagine how huge this will be. I'm just so \"nervous\" (really, that's the\nterm) if we implement this and later on experience a slow down or worse than\nthat, crash in the server.\n\nMy questions are:\n1. What is the limit for number of records in a table WITHOUT SUFFERING SLOW\nDOWN.\n2. ....limit in number of tables per database\n3. ... limit in number of database.\n\nThanks for you comments. I would really appreciate every comment that I'll\nreceive regarding this.\n\nArnold\n\n", "msg_date": "Tue, 31 Oct 2000 13:25:04 +0800", "msg_from": "\"Arnold Gamboa\" <[email protected]>", "msg_from_op": true, "msg_subject": "how good is PostgreSQL" }, { "msg_contents": "\"Arnold Gamboa\" <[email protected]> writes:\n\n> We're about to build a \"huge\" website now. I got tied up in signing the\n> contract without really getting enough information about PgSQL since this\n> what we plan to implement with PHP (normally we use mySQL but i guess it\n> does not fit for huge databases like that).\n\nCan you do connection pooling and client side caching of database queries\nin PHP ? From working with Java this is the spot we really improve speed. \n\n> \n> Here's my problem.. We're about to build a site like hitbox.com where there\n> is a large amount of database required.. 
If say there is 100,000 users with\n> 1000 page hits per day for each, and everything will be logged, you could\n> imagine how huge this will be. I'm just so \"nervous\" (really, that's the\n> term) if we implement this and later on experience a slow down or worse than\n> that, crash in the server.\n\nHow many database queries do you have per page hit ?\nHow many database inserts/updates do you have per page hit ?\n\nAre you using the database for httpd access logging, or is it some\napplication level logging ? Anyhow you might want to look into an\narchitecture where you have a dedicated box for the logging. \n\nBut most importantly, test with real data. Populate your database and run\nstress tests. \n\nI was doing some testing on a portal my company has developed with\nPostgreSQL as the backend database. Running on my Linux laptop P466 with\n128MB, Apache JServ, PostgreSQL 7.0.2. I managed to get about ~20 pageviews\na second. Each pageview had on average 4 queries and 1 insert. \n\nBut measure for yourself. Remember that you can gain a lot by tuning\napplication, database and OS. \n\nregards, \n\n\tGunnar\n", "msg_date": "31 Oct 2000 12:50:53 +0100", "msg_from": "Gunnar R|nning <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how good is PostgreSQL" }, { "msg_contents": "Arnold Gamboa wrote:\n\n> Hi,\n>\n> For users of large PostgreSQL and PostgreSQL builders, this is for you.\n>\n> I'm having a terrible time deciding now. :(\n>\n> We're about to build a \"huge\" website now. I got tied up in signing the\n> contract without really getting enough information about PgSQL since this\n> what we plan to implement with PHP (normally we use mySQL but i guess it\n> does not fit for huge databases like that).\n>\n> Here's my problem.. We're about to build a site like hitbox.com where there\n> is a large amount of database required.. 
If say there is 100,000 users with\n> 1000 page hits per day for each, and everything will be logged, you could\n> imagine how huge this will be. I'm just so \"nervous\" (really, that's the\n> term) if we implement this and later on experience a slow down or worse than\n> that, crash in the server.\n\nThat is a LOT of work for any system. That is over 1100 page views a second, or\nunder 900us each.. A standard Pentium III system, serving static pages would\nhave problems with that.\n\nIf you look at search engines, to get that performance with readonly data, they\nusually cluster multiple systems and load balance across them. You may need to\nsegment your data and have multiple SQL servers perform different functions.\n\nAlso, that 1100 page view per second is assuming an even distribution of\ntraffic, which does not happen in a web server. If you average that much,\nchances are there will be periods of twice that.\n\nLook into a \"local director,\" \"Alteon,\" or even LVS.\n\n>\n>\n> My questions are:\n> 1. What is the limit for number of records in a table WITHOUT SUFFERING SLOW\n> DOWN.\n\n> 2. ....limit in number of tables per database\n> 3. ... limit in number of database.\n\nThere are a couple factors involved, more complex than a simple response.\n\nUse multiple databases and put each on a separate disk, with its own\ncontroller. Better yet, have multiple load balanced web boxes do a lot of\nprocessing in PHP and offload much of the CPU bound SQL work to the \"cheap\" web\nboxes, and have multiple SQL databases in the back handling various independent\ntasks.\n\nIn a web site I worked on, we had multiple front end web servers, load balanced\nwith an Alteon. Each web server had its own SQL database which provided SQL\naccess to \"static\" data which was updated each week. 
We had an additional\nsingle SQL database backend which all the Web servers accessed for synchronized\ndynamic data.\n\nIf you are serious about the load you expect to put on this system you must be\ncareful:\nDo not create any indexes you do not need.\nDo not use the \"foreign key\" constraint as it forces a trigger for each insert.\n\nMake sure you index the keys by which you will access data.\nAvoid searching by strings, try to use keys.\n\nEven after that, you have a long way to go before you will hit 1000\ntransactions per second from any SQL database.\n\nIf you are betting your business on this implementation, you have a lot of\nhomework to do.\n\n>\n>\n> Thanks for you comments. I would really appreciate every comment that I'll\n> receive regarding this.\n>\n> Arnold\n\n\n\n", "msg_date": "Tue, 31 Oct 2000 14:02:01 -0500", "msg_from": "markw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how good is PostgreSQL" }, { "msg_contents": "\n> Even after that, you have a long way to go before you will hit 1000\n> transactions per second from any SQL database.\n\n I guess they could always buy a few Sun E10000's on the backend, and a\nlarge room of rack-mountable PC's for web/CGI serving. Nothing like\nplopping down ten or twenty million dollars on hardware. : )\n\nsteve\n\n\n", "msg_date": "Tue, 31 Oct 2000 13:02:02 -0700", "msg_from": "\"Steve Wolfe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how good is PostgreSQL" }, { "msg_contents": "Steve Wolfe wrote:\n> \n> > Even after that, you have a long way to go before you will hit 1000\n> > transactions per second from any SQL database.\n\n> I guess they could always buy a few Sun E10000's on the backend, and a\n> large room of rack-mountable PC's for web/CGI serving. Nothing like\n> plopping down ten or twenty million dollars on hardware. : )\n\nOr they could buy a single IBM S/390, run Linux/390 and PostgreSQL on\nthat. Probably would cost less, and be more reliable. 
And they can\nalways load another Linux/390 VM -- an S/390 can run something like\n41,000 virtual machines each running Linux/390 and Apache.\n\nHowever, if you want to see the architecture of a _large_\ndatabase-backed website, see the story behind Digital City at\nwww.aolserver.com. While they're using Sybase instead of PostgreSQL,\nthe architecture is the same.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 31 Oct 2000 15:10:41 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how good is PostgreSQL" }, { "msg_contents": "> Or they could buy a single IBM S/390, run Linux/390 and PostgreSQL on\n> that. Probably would cost less, and be more reliable. And they can\n> always load another Linux/390 VM -- an S/390 can run something like\n> 41,000 virtual machines each running Linux/390 and Apache.\n\n Yeah.... I'm very optomistic about IBM's new chips that are coming out\nnext year. Each \"processor module\" will have 4 processors, but each\nprocessor will have 2 cores - so in effect, each \"processor module\" has 8\nprocessors on it. All processors will have copper interconnects, and\ndepending on the source, will debut at anywhere from 1.3 to 2 gigahertz. I\nthink that will certainly help them get a larger share of the high-end\nmarket!\n\nsteve\n\n\n", "msg_date": "Tue, 31 Oct 2000 13:18:49 -0700", "msg_from": "\"Steve Wolfe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how good is PostgreSQL" }, { "msg_contents": "\n> Even after that, you have a long way to go before you will hit 1000\n> transactions per second from any SQL database.\n\n Since my last post probably wasn't too useful, here's some information\nthat might be a little more help. It's a little long, I know, but hopefully\nit will be of use to someone.\n\n As programmers, we naturally want to throw things into databases for\nthree reasons. First, it's easy to get data in. 
Second, it's easy to get\nrelevant data out. And third, it's \"cool\". We don't want to work with flat\nfiles, now do we? ; )\n\n However, in some cases, using the database to get data out ends up\ncosting us a lot of time and money. Sometimes we do the same nasty query so\noften that we end up purchasing bigger hardware to make the system work\nreasonably. Why? Because it was easier for us to write a program that did:\n\nGetDataFromDatabase();\nPrepareData();\nPrintData();\n\n Each time, the database server does the work. But it doesn't\nnecessarily have to be that way. In our company, we've found two trends\nthat have enabled us to save a LOT of processing power on our machines.\n(read: Increase the capacity of our servers by 30% or more, with fairly\nminor changes)\n\n The first case is that of rarely-changing data. Some of our datasets\nprobably have around 50,000 to 1,000,000 views (selects) for each update\n(insert/delete). Having the database repeat the query every time is a\nwaste. So, we began writing our programs such that they will grab the data\nfrom the database once, and generate the HTML for every page, and the\nindexes. Then, when an update is made to the database (via the\nadministrative tools), it simply rewrites *the relevant HTML files*, and\nchanges the indices pointing to them. (There are also some other very large\nadvantages to this sort of thing, but I'm not allowed to say them. ; ) )\n\n The second case is that of often-repeated queries. One of the\nofferings on our site is an online directory, which gets a pretty fair\namount of traffic. Unfortunately, it uses a proprietary program that was\npurchased by management before they spoke with us. Grr.... It was the\nmost utterly inefficient program I've ever seen. It would *not* allow the\ndatabase to do joins, it would grab entire tables, then try to do the joins\nitself, in Perl.\n\n We rewrote the program to let PostgreSQL do the joins, and that sped\nit up. 
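  (To make the "rarely-changing data" case above concrete, here is a rough sketch in Python. Every name in it is invented for illustration; it is not code from any real system. The point is simply that the write path rebuilds the affected static page once per update, so the read path never runs a query:)

```python
# Illustrative sketch of "regenerate on update": pages are rebuilt only
# when the data changes; page views are plain static-file reads.
# DATA stands in for a database table, PAGES for HTML files on disk.

DATA = {"listing": ["alpha", "beta"]}
PAGES = {}

def render_listing(items):
    # Build the HTML for the listing page from the current data.
    rows = "".join("<li>%s</li>" % item for item in items)
    return "<ul>%s</ul>" % rows

def update_listing(new_items):
    # The only write path: change the data, then rewrite just the
    # affected "file".  This runs once per update, not once per view.
    DATA["listing"] = new_items
    PAGES["/listing.html"] = render_listing(new_items)

def view_listing():
    # The read path: a plain lookup, no query at all.
    return PAGES["/listing.html"]

update_listing(["alpha", "beta", "gamma"])
```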
Then we realized that a very small number of queries (those for the\nfirst one or two levels of pages) accounted for a huge portion of the\nusage. So, we replaced the front page with a static HTML page (the front\npage doesn't change...), and saw another terrific drop in our system loads.\n\n\n Overall, by only modifying a couple of our more heavily-used programs,\nour server loads dropped by about 30%-40%. If we went to the trouble to\nmodify some others, it would drop even more. But we're going to rewrite\nthem completely for other reasons. : )\n\n\n In any event, there are ways like this to save a LOT of CPU and disk I/O.\nMost web servers can serve out several hundred static pages with the\nresources that would otherwise deliver one dynamically-created,\ndatabase-driven page. It also allows you to cluster the web servers with\ncheap commodity hardware, instead of using big-iron on the database. And if\nyou have a big-iron machine running the back-end, this can severely lighten\nthe load on it, keeping you from dropping a few hundred grand on the next\nstep up. ; )\n\n\n (Incidentally, we've toyed around with developing a query-caching system\nthat would sit between PostgreSQL and our DB libraries. However, it seems\nlike it could be done *much* more efficiently in PostgreSQL itself, as it\nwould be much easier to keep track of which tables have changed, etc..\nAnybody know if this sort of functionality is planned? 
It would be terrific\nto simply give the machine another 256 megs of RAM, and tell it to use it as\na DB cache...)\n\nsteve\n\n\n\n", "msg_date": "Tue, 31 Oct 2000 13:30:54 -0700", "msg_from": "\"Steve Wolfe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how good is PostgreSQL" }, { "msg_contents": "markw wrote:\n> \n> Arnold Gamboa wrote:\n> \n> > Hi,\n> >\n> > For users of large PostgreSQL and PostgreSQL builders, this is for you.\n\n..snip..\n\n> \n> Also, that 1100 page view per second is assuming an even distribution of\n> traffic, which does not happen in a web server. If you average that much,\n> chances are there will be periods of twice that.\n> \n\nThat's excessively optimistic. If your daily average is 1100 per second, you'll\nhave 2200 average for many of the hours in that day, 5500 for a few hours, and\nsome 10-minute periods with 11,000, certainly once in a while.\n\n++ kevin\n\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:[email protected]\nPermanent e-mail forwarder: mailto:Kevin.O'[email protected]\nAt school: mailto:[email protected]\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead\n", "msg_date": "Tue, 31 Oct 2000 12:50:02 -0800", "msg_from": "\"Kevin O'Gorman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how good is PostgreSQL" }, { "msg_contents": "> As programmers, we naturally want to throw things into databases for\n> three reasons. First, it's easy to get data in. Second, it's easy to get\n> relevant data out. And third, it's \"cool\". We don't want to work with\n> flat files, now do we? ; )\n\n Kiddin', eh? :) Actually, the third reason seems to dominate the younger\ndevelopers' minds. 
People often tend to keep everything in poor DBMS until\nit begins to kick back. And this has impact on the customers. Does your\nsystem use a database? No, why should it? You mean you'll keep our dearly\nbeloved banner ads as flat files? Yes, this is where they belong. Sorry,\nwe'll seek for someone more advanced. Good luck.\n Of course, hardware vendors jump up of joy :) Maybe I don't get it, but\nIMHO there's no reason to put into DB something that can't be indexed and\nused in where clause.\n\n> It would *not* allow the\n> database to do joins, it would grab entire tables, then try to do the\n> joins\n> itself, in Perl.\n\n Umh.... Yeah.... Well.... To keep compatibility with other Open Source\nDatabases and ESR/RMS, you know :)\n\n> (Incidentally, we've toyed around with developping a query-caching\n> system that would sit betwen PostgreSQL and our DB libraries.\n\n Sounds amazing, but requires some research, I guess. However, in many\ncases one would be more than happy with cahced connections. Of course,\ncahced query results can be naturally added to that, but just connections\nare OK to start with. Security....\n\n\n--\n\n contaminated fish and microchips\n huge supertankers on Arabian trips\n oily propaganda from the leaders' lips\n all about the future\n there's people over here, people over there\n everybody's looking for a little more air\n crossing all the borders just to take their share\n planning for the future\n\n Rainbow, Difficult to Cure\n", "msg_date": "Tue, 31 Oct 2000 21:03:38 +0000", "msg_from": "KuroiNeko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how good is PostgreSQL" }, { "msg_contents": "\n> > (Incidentally, we've toyed around with developping a\nquery-caching\n> > system that would sit betwen PostgreSQL and our DB libraries.\n>\n> Sounds amazing, but requires some research, I guess. However, in\nmany\n> cases one would be more than happy with cahced connections. 
Of\ncourse,\n> cahced query results can be naturally added to that, but just\nconnections\n> are OK to start with. Security....\n\n To me, it doesn't sound like it would be that difficult of a project, at\nleast not for the likes of the PostgreSQL developpers. It also doesn't seem\nlike it would really introduce any security problems, not if it were done\ninside of PostgreSQL. Long ago, I got sidetracked from my endeavors in C,\nand so I don't feel that I'm qualified to do it. (otherwise, I would have\ndone it already. : ) ) If you wanted it done in Perl or Object Pascal, I\ncould help. : )\n\n Here's a simple design that I was tossing back and forth. Please\nunderstand that I'm not saying this is the best way to do it, or even a good\nway to do it. Just a possible way to do it. I haven't been able to give it\nas much thought as I would like to. Here goes.\n\n------------\nImplementation\n\n Upon starting, the PostgreSQL engine could allocate a chunk of memory,\nsized according to the administrator's desire. That chunk would be used\nsolely for query caching.\n\n When a query came in that was not cached (say, the first query), the\ndatabase engine would process it as normal. It would then return it to the\nuser, and add it to the cache. \"Adding it to the cache\" would mean that it\nwould enter the query itself, the result set, and a list of which tables the\nquery relied upon. The query that is stored could be either the query\ncoming from the user, or the query after it goes through the optimizer.\nEach has pros and cons, I would probably favor using the query that comes\nfrom the user.\n\n When another query comes along, the caching engine would quickly look\nin the hash table, and see if it already had the cached results of the\nquery. If so, it returns them, and wham. You've just avoided all of the\nwork of optimizing, parsing, and executing, not to mention the disk I/O. 
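  (As a toy illustration of that lookup-and-invalidate flow, in Python: purely hypothetical code, and the list of tables is passed in by hand here, where the real backend would learn it from the parser:)

```python
# Toy model of the proposed cache: results keyed by query text, plus a
# reverse map from table name to the cached queries that read it, so a
# write to a table can discard exactly the entries that depend on it.

cache = {}       # query text -> cached result set
by_table = {}    # table name -> set of query texts that read it

def cache_store(query, tables, result):
    # Remember the result and which tables it relied upon.
    cache[query] = result
    for table in tables:
        by_table.setdefault(table, set()).add(query)

def cache_lookup(query):
    # The cheap hash lookup that replaces parse/plan/execute on a hit.
    return cache.get(query)

def invalidate(table):
    # Called when an insert/update/delete touches the table.
    for query in by_table.pop(table, set()):
        cache.pop(query, None)

cache_store("SELECT name FROM users", ["users"], [("alice",), ("bob",)])
```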
A\nhash lookup seems extremely cheap compared to the work of actually\nprocessing a query.\n\n When an update/delete/insert comes along, the engine would analyze\nwhich tables were affected, and clear the cache entries that relied upon\nthose tables.\n\n-----------------\nCache Clearing\n\n Cache clearing would be achieved via an LRU-based algorithm, which\nwould also take into account the amount of RAM used by each query in the\ncache.\n-----------------\nPerformance Impact\n\n The potential performance differences range from a miniscule decrease to\na tremendous increase. And it's a lot cheaper to throw an extra half gig of\nRAM in a machine that to upgrade processors and disk subsystems!\n\n------------------\nPossible Changes\n\n One potential drawback is that when a table is modified, the queries\nthat rely upon it would be discarded. Where a table is updated frequently,\nthat could greatly reduce the performance benefit. One possible alternative\nis to store the query cost with each query in the cache. When a table is\nupdated, those queries are marked as \"dirty\". If the system load is below a\ncertain amount, or the system has been idle, it could then re-execute those\nqueries and update the cache. Which queries it re-executed would be\ndetermined on a factor of query cost and how frequently those cache entries\nwere used.\n-------------------\n\n The reason I would prefer it done in the PostgreSQL engine (as opposed to\nin a middleware application) is that the caching engine needs to know (a)\nwhich tables a query relies upon, and (b) which tables get changed. It\nseems that it would significantly reduce overhead to do those inside of\nPostgreSQL (which is already doing the query parsing and analysis).\n\n This could certainly give PostgreSQL a huge advantage over other\ndatabase systems, too. It could save administrators a very large chunk of\ncash that they would otherwise have to spend on large systems. And it would\njust be cool. 
; )\n\nsteve\n\n\n", "msg_date": "Tue, 31 Oct 2000 14:42:01 -0700", "msg_from": "\"Steve Wolfe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Query caching" }, { "msg_contents": "> Whenever a query is executed (not found in cache, etc.), the caching\n> system would simply store the query, the results, and a list of tables\n> queried. When a new query came in, it would do a quick lookup in the\nquery\n> hash to see if it already had the results. If so, whammo. Whenever an\n> insert/delete/update was sensed, it would look at the tables being\naffected,\n> and the caching mechanism would clear out the entries depending on those\n> tables.\n\nIt seems to me that tracking the list of cached queries and watching for\nqueries that might invalidate them adds a lot of complexity to the back end\nand the front end still has to establish the connection and wait to transfer\nthe data over the socket.\n\nOn a more practical level, a backend solution would require someone with\nfairly detailed knowledge of the internals of the backend. A front end\nsolution can more likely be implemented by someone not as knowledgeable.\n\nOne of the big advantages of your technique is there is no code change at\nthe application level. This means less database lock-in. Maybe that is a\ndisadvantage too. ;-)\n\n\n", "msg_date": "Tue, 31 Oct 2000 16:49:10 -0500", "msg_from": "\"Bryan White\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how good is PostgreSQL" }, { "msg_contents": "> It seems to me that tracking the list of cached queries and watching for\n> queries that might invalidate them adds a lot of complexity to the back\nend\n> and the front end still has to establish the connection and wait to transfer\n> the data over the socket.\n\n I really don't think that it would. 
Checking to see if you have a query\n(a hash lookup) is very, very cheap relative to normally processing a query,\nI would think.\n\n And invalidating cache entries would also be very, very cheap compared to\nthe normal activity of the database. Assuming hashes are done correctly, it\nwould probably be done much, much faster than any query could execute. If\nsoftware caches can increase the performance of disk drives that have\nlatencies in thousandths of seconds, I'm sure they could help with queries\nthat take hundredths or tenths of seconds. ; )\n\n> On a more practical level, a backend solution would require someone with\n> fairly detailed knowledge of the internals of the backend. A front end\n> solution can more likely be implemented by someone not as knowledgeable.\n\n Yeah. I was hoping that one of the developers would say \"oooh... that\nwould rock. We should do that.\" : )\n\n> One of the big advantages of your technique is there is no code change at\n> the application level. This means less database lock-in. Maybe that is a\n> disadvantage too. ;-)\n\n I'm sure that someone with a better understanding of the theory associated\nwith cache invalidation would design a better algorithm than I would, but it\nseems that even a fairly rudimentary implementation would seriously increase\nperformance.\n\nsteve\n\n\n\n", "msg_date": "Tue, 31 Oct 2000 14:58:17 -0700", "msg_from": "\"Steve Wolfe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how good is PostgreSQL" }, { "msg_contents": "> Here's a simple design that I was tossing back and forth. Please\n> understand that I'm not saying this is the best way to do it, or even a\n> good way to do it. Just a possible way to do it.\n\n Sounds interesting, I certainly have reasons to play bad guy, but that's\nwhat I always do, so nevermind :)\n However, there's one major point where I disagree. Not that I have real\nreasons to, or observation or analysis to background my position, just a\nfeeling. 
And the feeling is that connection/query cache should be separate\nfrom DBMS server itself.\n Several things come to the mind right off, like possibilities to cache\nconnections to different sources, like PGSQL and Oracle, as well as a\nchance to run this cache on a separate box that will perform various\nadditional functions, like load balancing. But that's right on the surface.\n Still in doubt....\n\n\n--\n\n contaminated fish and microchips\n huge supertankers on Arabian trips\n oily propaganda from the leaders' lips\n all about the future\n there's people over here, people over there\n everybody's looking for a little more air\n crossing all the borders just to take their share\n planning for the future\n\n Rainbow, Difficult to Cure\n", "msg_date": "Tue, 31 Oct 2000 22:17:53 +0000", "msg_from": "KuroiNeko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query caching" }, { "msg_contents": "> Sounds interesting, I certainly have reasons to play bad guy, but\nthat's\n> what I always do, so nevermind :)\n\n That's OK. Somebody has to be a realist. : )\n\n> However, there's one major point where I disagree. Not that I have\nreal\n> reasons to, or observation or analysis to background my position, just\na\n> feeling. And the feeling is that connection/query cache should be\nseparate\n> from DBMS server itself.\n> Several things come to the mind right off, like possibilities to\ncache\n> connections to different sources, like PGSQL and Oracle,\n\n That would be a benefit if you're running multiple DBMS' in the house -\nand you're certainly welcome to do something like that as a standalone\npackage. 
; ) I think it would be terrific if PostgreSQL could have the\nfeature added to it, which would (a) give it a big performance benefit, (b)\nlet it take advantage of already-written code, and (c) make one less machine\nand service to administer.\n\n> as well as a\n> chance to run this cache on a separate box that will perform\nvarious\n> additional functions, like load balancing. But that's right on the\nsurface.\n> Still in doubt....\n\n Yes, load-balancing would be another good factor. However, to my (very\nlimited) knowledge, there aren't any truly good ways of splitting up\ndatabase work. If you're doing nothing but selects, it would be easy. But\nwhen updates come around, it gets hairier - and when you try for\ndynamic continuity-checking and database rebuilding, it gets very ugly. If\nthere are any systems that get around those without huge performance hits,\nI'd love to hear about it.\n\n (Of course, if you have lots of money, a Beowulf-style cluster with high\nbandwidth, low-latency interconnects becomes desirable. But that's a\ndifferent ballgame.)\n\n However, there is one other possibility: With caching, your servers\nmight see enough of a performance increase that you wouldn't need to\nload-balance them. : )\n\nsteve\n\n", "msg_date": "Tue, 31 Oct 2000 15:41:30 -0700", "msg_from": "\"Steve Wolfe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query caching" }, { "msg_contents": "* Steve Wolfe <[email protected]> [001031 13:47] wrote:\n> \n> > > (Incidentally, we've toyed around with developing a\n> query-caching\n> > > system that would sit between PostgreSQL and our DB libraries.\n> >\n> > Sounds amazing, but requires some research, I guess. However, in\n> many\n> > cases one would be more than happy with cached connections. Of\n> course,\n> > cached query results can be naturally added to that, but just\n> connections\n> > are OK to start with. 
Security....\n> \n> To me, it doesn't sound like it would be that difficult of a project, at\n> least not for the likes of the PostgreSQL developers. It also doesn't seem\n> like it would really introduce any security problems, not if it were done\n> inside of PostgreSQL. Long ago, I got sidetracked from my endeavors in C,\n> and so I don't feel that I'm qualified to do it. (otherwise, I would have\n> done it already. : ) ) If you wanted it done in Perl or Object Pascal, I\n> could help. : )\n> \n> Here's a simple design that I was tossing back and forth. Please\n> understand that I'm not saying this is the best way to do it, or even a good\n> way to do it. Just a possible way to do it. I haven't been able to give it\n> as much thought as I would like to. Here goes.\n> \n> ------------\n> Implementation\n> \n\n[snip]\n\nKarel Zak <[email protected]> implemented stored procedures for\npostgresql but still hasn't been approached to integrate them.\n\nYou can find his second attempt to get a response from the developers\nhere:\n\nhttp://people.freebsd.org/~alfred/karel-pgsql.txt\n\n--\n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Tue, 31 Oct 2000 15:01:31 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query caching" }, { "msg_contents": "> I think this feature deserves to be put on the TODO list under exotic\n> features.\n\n Well, it's kinda implemented already, I believe, with decades of being run\nunattended :)\n\n> This feature would probably also be a threat to MySQL dominance in the\n> web scripting area for websites with medium to high traffic ;)\n\n Dominance? 
Who needs it, anyway?\n\n\n--\n\n contaminated fish and microchips\n huge supertankers on Arabian trips\n oily propaganda from the leaders' lips\n all about the future\n there's people over here, people over there\n everybody's looking for a little more air\n crossing all the borders just to take their share\n planning for the future\n\n Rainbow, Difficult to Cure\n", "msg_date": "Tue, 31 Oct 2000 23:07:01 +0000", "msg_from": "KuroiNeko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query caching" }, { "msg_contents": "\n\nKuroiNeko wrote:\n> \n> > Here's a simple design that I was tossing back and forth. Please\n> > understand that I'm not saying this is the best way to do it, or even a\n> > good way to do it. Just a possible way to do it.\n> \n> Sounds interesting, I certainly have reasons to play bad guy, but that's\n> what I always do, so nevermind :)\n\nI think this feature deserves to be put on the TODO list under exotic\nfeatures.\n\nThis feature would probably also be a threat to MySQL dominance in the\nweb scripting area for websites with medium to high traffic ;)\n\nPoul L. Christiansen\n", "msg_date": "Tue, 31 Oct 2000 23:44:57 +0000", "msg_from": "\"Poul L. Christiansen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query caching" }, { "msg_contents": "a) Don't log to a database. Log data should be sent into a process\n that collects any needed on-the-fly statistics and then outputs\n into disk files (rotating hourly or daily depending on your needs).\n This model is becoming pretty standard with Apache now; look at\n rotatelog in the Apache distribution for an example.\n\nb) Number of records isn't really the issue. Query complexity and\n number of queries are more pertinent. Generally, for example, a\n single SELECT that pulls in multiple rows is much faster than\n a bunch of small SELECTs.\n\nc) For very high traffic, you are going to have multiple front-end\n servers. 
If you design the system carefully, you can have a single\n shared network disk used by all of your front ends, then just stack\n boxes in front of it. This doesn't give you endless scalability,\nthough;\n at some point you'll saturate your network file server and/or\ndatabase\n box.\n\nd) PHP may not be a great choice. It doesn't provide a lot of hooks\n for effective caching of database connections and/or results.\n mod_perl or Java servlets may be better, depending on the details.\n\n\t\t\t\t- Tim Kientzle\n\nArnold Gamboa wrote:\n> \n> Hi,\n> \n> For users of large PostgreSQL and PostgreSQL builders, this is for you.\n> \n> I'm having a terrible time deciding now. :(\n> \n> We're about to build a \"huge\" website now. I got tied up in signing the\n> contract without really getting enough information about PgSQL since this\n> what we plan to implement with PHP (normally we use mySQL but i guess it\n> does not fit for huge databases like that).\n> \n> Here's my problem.. We're about to build a site like hitbox.com where there\n> is a large amount of database required.. If say there is 100,000 users with\n> 1000 page hits per day for each, and everything will be logged, you could\n> imagine how huge this will be. I'm just so \"nervous\" (really, that's the\n> term) if we implement this and later on experience a slow down or worse than\n> that, crash in the server.\n> \n> My questions are:\n> 1. What is the limit for number of records in a table WITHOUT SUFFERING SLOW\n> DOWN.\n> 2. ....limit in number of tables per database\n> 3. ... limit in number of database.\n> \n> Thanks for you comments. I would really appreciate every comment that I'll\n> receive regarding this.\n> \n> Arnold\n", "msg_date": "Tue, 31 Oct 2000 16:20:04 -0800", "msg_from": "Tim Kientzle <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how good is PostgreSQL" }, { "msg_contents": "> d) PHP may not be a great choice. 
It doesn't provide a lot of hooks\n> for effective caching of database connections and/or results.\n> mod_perl or Java servlets may be better, depending on the details.\n\n One of our competitors spent a very, very large deal of money on high-end\nSun equipment, so that they could write their CGI stuff in Java servlets.\nIt still ran slow. We run Perl on machines that pale compared to theirs,\nand get far better performance. : )\n\nsteve\n\n", "msg_date": "Tue, 31 Oct 2000 17:39:54 -0700", "msg_from": "\"Steve Wolfe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how good is PostgreSQL" }, { "msg_contents": "> We run Perl on machines that pale compared to theirs,\n> and get far better performance. : )\n\n Well, don't get me wrong, I'm not going to a war. Here :) But CGI is so\nsimple and straightforward that anything more than C is quite an overkill\n(think assembly).\n Myself I'm planning to port all my PERL stuff eventually. Yes, PERL is\ngreat for string handling, but when you spend a couple of weeks on BugTraq,\nyou'll suddenly feel that it's still too much. 
When you only let `known\ngood' values in, lex or regexp libs will do.\n Sorry for the offtopic, anyone interested is welcome to email me in\nprivate.\n\n Ed\n\n\n--\n\n contaminated fish and microchips\n huge supertankers on Arabian trips\n oily propaganda from the leaders' lips\n all about the future\n there's people over here, people over there\n everybody's looking for a little more air\n crossing all the borders just to take their share\n planning for the future\n\n Rainbow, Difficult to Cure\n", "msg_date": "Wed, 01 Nov 2000 00:51:05 +0000", "msg_from": "KuroiNeko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how good is PostgreSQL" }, { "msg_contents": "\"Steve Wolfe\" <[email protected]> writes:\n\n> One of our competitors spent a very, very large deal of money on high-end\n> Sun equipment, so that they could write their CGI stuff in Java servlets.\n> It still ran slow. We run Perl on machines that pale compared to theirs,\n> and get far better performance. : )\n\nYou can always do it slow if you don't design properly. A former customer\nsaved a lot of hardware and maintenance cost by migrating from a perl based\npublishing system to a Java based one. Less hardware, better performance and\nmore functionality. ;-) The old perl system had been developed and maintained\nover a 4 year period - the initial development of the new Java based system\ntook about 9 months. 
\n\nregards, \n\n\tGunnar\n", "msg_date": "01 Nov 2000 01:58:55 +0100", "msg_from": "Gunnar Rønning <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how good is PostgreSQL" }, { "msg_contents": "\ni have a client which merged two companies, one running perl the other running\njava.\n\nwhat to do?\n\ni convinced them to port both the perl and java code to INTERCAL, and run\nthe whole system on an array of C-64's.\n\nworks better than either of the perl or java stuff.\n\nOn Wed, Nov 01, 2000 at 01:58:55AM +0100, Gunnar Rønning wrote:\n> \"Steve Wolfe\" <[email protected]> writes:\n> > One of our competitors spent a very, very large deal of money on high-end\n> > Sun equipment, so that they could write their CGI stuff in Java servlets.\n> > It still ran slow. We run Perl on machines that pale compared to theirs,\n> > and get far better performance. : )\n> \n> You can always do it slow if you don't design properly. A former customer\n> saved a lot of hardware and maintenance cost by migrating from a perl based\n> publishing system to a Java based one. Less hardware, better performance and\n> more functionality. ;-) The old perl system had been developed and maintained\n> over a 4 year period - the initial development of the new Java based system\n> took about 9 months. \n\n-- \n[ Jim Mercer [email protected] +1 416 410-5633 ]\n[ Reptilian Research -- Longer Life through Colder Blood ]\n[ Don't be fooled by cheap Finnish imitations; BSD is the One True Code. ]\n", "msg_date": "Tue, 31 Oct 2000 20:08:59 -0500", "msg_from": "Jim Mercer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how good is PostgreSQL" }, { "msg_contents": "On Tue, 31 Oct 2000, Jim Mercer wrote:\n\n> i convinced them to port both the perl and java code to INTERCAL, and run\n> the whole system on an array of C-64's.\nBut there are no bindings from Postgresql to intercal! 
;)\n\n(I hope I didn't just give a bad idea to someone...;)\n\n-alex\n\n\n", "msg_date": "Tue, 31 Oct 2000 20:24:56 -0500 (EST)", "msg_from": "Alex Pilosov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how good is PostgreSQL" }, { "msg_contents": "On Tue, 31 Oct 2000, Alfred Perlstein wrote:\n\n> * Steve Wolfe <[email protected]> [001031 13:47] wrote:\n> > \n> > > > (Incidentally, we've toyed around with developping a\n> > query-caching\n> > > > system that would sit betwen PostgreSQL and our DB libraries.\n> > >\n> > > Sounds amazing, but requires some research, I guess. However, in\n> > many\n> > > cases one would be more than happy with cahced connections. Of\n> > course,\n> > > cahced query results can be naturally added to that, but just\n> > connections\n> > > are OK to start with. Security....\n> > \n> > To me, it doesn't sound like it would be that difficult of a project, at\n> > least not for the likes of the PostgreSQL developpers. It also doesn't seem\n> > like it would really introduce any security problems, not if it were done\n> > inside of PostgreSQL. Long ago, I got sidetracked from my endeavors in C,\n> > and so I don't feel that I'm qualified to do it. (otherwise, I would have\n> > done it already. : ) ) If you wanted it done in Perl or Object Pascal, I\n> > could help. : )\n> > \n> > Here's a simple design that I was tossing back and forth. Please\n> > understand that I'm not saying this is the best way to do it, or even a good\n> > way to do it. Just a possible way to do it. I haven't been able to give it\n> > as much thought as I would like to. Here goes.\n> > \n> > ------------\n> > Implementation\n> > \n> \n> [snip]\n> \n> Karel Zak <[email protected]> Implemented stored proceedures for\n> postgresql but still hasn't been approached to integrated them.\n\nsomeone has to approach him to integrate them? *raised eyebrow*\n\nKarel, where did things stand the last time this was brought up? 
We\nhaven't gone beta yet, can you re-submit a patch for v7.1 before beta so\nthat we can integrate the changes? *Maybe*, if possible, submit it such\nthat its a compile time option, so that its there if someone like Alfred\nwants to be brave, but it won't zap everyone if there is a bug?\n\n", "msg_date": "Tue, 31 Oct 2000 22:01:16 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] Query caching" }, { "msg_contents": "\nOn the topic of query cache (or maybe this is just tangential and I'm\nconfused):\n\nI've always heard that Oracle has the ability to essentially suck in as\nmuch of the database into RAM as you have memory to allow it, and can then\njust run its queries on that in-RAM database (or db subset) without doing\ndisk I/O (which I would probably imagine is one of the more expensive\nparts of a given SQL command). I've looked for references as to\nPostgresql's ability to do something like this, but I've never been\ncertain if it's possible. Can postgresql do this, please? And, if not,\ndoes it have to hit the disk for every SQL instruction (I would assume\nso)?\n\nI would imagine that the actual query cache would be slightly orthogonal\nto this in-RAM database cache, in as much as it would actually store the\nresults of specific queries, rather than the complete tuple set on which\nto run queries. However, I would imagine that both schemes would provide\nperformance increases.\n\nAlso, as KuroiNeko writes below about placing the query cache outside the\nactual DBMS, don't some webservers (or at least specific custom coding\nimplementations of them) just cache common query results themselves? 
\n(Not that it would necessarily be bad for the DBMS to do so, I\nwouldn't know enough about this to surmise.)\n\nI'd appreciate any pointers to more information on specific performance\ntuning in this area (IMHO, it would probably be a boon to the postgresql\ndatabase and its community, if there existed some reference like\nO'Reilly's _Oracle Performance Tuning_ that was focused on Postgresql.)\n\nThanks for any extra info,\n\nDaniel\n\n\nOn Tue, 31 Oct 2000, KuroiNeko wrote:\n\n> > Here's a simple design that I was tossing back and forth. Please\n> > understand that I'm not saying this is the best way to do it, or even a\n> > good way to do it. Just a possible way to do it.\n> \n> Sounds interesting, I certainly have reasons to play bad guy, but that's\n> what I always do, so nevermind :)\n> However, there's one major point where I disagree. Not that I have real\n> reasons to, or observation or analysis to background my position, just a\n> feeling. And the feeling is that connection/query cache should be separate\n> from DBMS server itself.\n> Several things come to the mind right off, like possibilities to cache\n> connections to different sources, like PGSQL and Oracle, as well as a\n> chance to run this cache on a separate box that will perform various\n> additional functions, like load balancing. But that's right on the surface.\n> Still in doubt....\n> \n\n\n", "msg_date": "Tue, 31 Oct 2000 23:18:05 -0500 (EST)", "msg_from": "Daniel Freedman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query caching" }, { "msg_contents": "> I've looked for references as to\n> Postgresql's ability to do something like this, but I've never been\n> certain if it's possible. Can postgresql do this, please? And, if not,\n> does it have to hit the disk for every SQL instruction (I would assume\n> so)?\n\n Doing so, as you might guess is quite dangerous. 
Eg, RAM failures are\nextremely rare, with probability very close to 0, but there's nothing\nabsolutely reliable.\n From my, quite limited, experience, I can tell that PGSQL relies more on\nfile caching (or whatever is the term), provided by the OS, rather than on\nslurping relations into RAM. See the recent discussion of [f]sync(), maybe\nit sheds more light.\n\n> I would imagine that the actual query cache would be slightly orthogonal\n> to this in-RAM database cache\n\n Actually, there are several ways to keep the data in memory, each having\nits advantages drawbacks and reasons. To name just a few: caching pages and\nfiles, mapping files, storing `internal' structures (like the tuples in\nyour example) in shared memory areas.\n Apologets and enemies of each method come in all shapes, but the real life\nis even worse. Often these methods interfere with each other, and\ninaccurate combination (you cache the pages, but overlooked file caching,\nperformed by the OS) may easily become a bottleneck.\n\n> I'd appreciate any pointers to more information on specific performance\n> tuning in this area (IMHO, it would probably be a boon to the postgresql\n> database and its community, if there existed some reference like\n> O'Reilly's _Oracle Performance Tuning_ that was focused on Postgresql.)\n\n As I see it, performance tuning with PGSQL should be concentrated around\nquality design of your DB and queries. I may be wrong, but there's not much\nto play with where PGSQL server touches the system.\n Maybe it's bad, but I like it. General suggestions about fs performance\napply to PGSQL and you don't have to re-invent the wheel. There are just\nfiles. Play with sync, install a RAID of SCSI drives, keep your swap on\nseparate controller. 
Nothing really special that would impact or, what's\nmore important, interfere with other services running on the same box.\n Change must come from inside :) Here, inside is DB design.\n\n Ed\n\n\n--\n\n contaminated fish and microchips\n huge supertankers on Arabian trips\n oily propaganda from the leaders' lips\n all about the future\n there's people over here, people over there\n everybody's looking for a little more air\n crossing all the borders just to take their share\n planning for the future\n\n Rainbow, Difficult to Cure\n", "msg_date": "Wed, 01 Nov 2000 04:41:05 +0000", "msg_from": "KuroiNeko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query caching" }, { "msg_contents": "> PostgreSQL hits the disk on UPDATE/DELETE/INSERT operations. SELECT's\n> are cached, but the default cache is only ½MB of RAM. You can change\n> this to whatever you want.\n>\n> I'm using Cold Fusion and it can cache queries itself, so no database\n> action is necessary. But I don't think PHP and others have this\n> possibility. But Cold Fusion costs 1300$ :(\n\nNo, PHP has this.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Wed, 1 Nov 2000 15:23:08 +0600", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query caching" }, { "msg_contents": "On Wed, Nov 01, 2000 at 10:16:58AM +0000, Poul L. Christiansen wrote:\n> PostgreSQL hits the disk on UPDATE/DELETE/INSERT operations. SELECT's\n> are cached, but the default cache is only ½MB of RAM. You can change\n> this to whatever you want.\n\nThat sounds like a very cool thing to do, and the default seems awfully\nconservative, given the average server's RAM equipment nowadays. 
If you\nhave a small Linux server with 128 MB of RAM, it would be interesting to\nsee what happens, performance-wise, if you increase the cache for\nselects to, for instance, 64 MB. Has anyone tried to benchmark this? How\nwould you benchmark it? Where do you change this cache size? How do you\nkeep the cache from being swapped out to disk (which would presumably\nall but eradicate the benefits of such a measure)?\n\nCheers Frank\n\n-- \nfrank joerdens \n\njoerdens new media\nurbanstr. 116\n10967 berlin\ngermany\n\ne: [email protected]\nt: +49 (0)30 69597650\nf: +49 (0)30 7864046 \nh: http://www.joerdens.de\n\npgp public key: http://www.joerdens.de/pgp/frank_joerdens.asc\n", "msg_date": "Wed, 1 Nov 2000 11:08:32 +0100", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query caching" }, { "msg_contents": "\nOn Tue, 31 Oct 2000, The Hermit Hacker wrote:\n\n> On Tue, 31 Oct 2000, Alfred Perlstein wrote:\n\n> > Karel Zak <[email protected]> Implemented stored proceedures for\n> > postgresql but still hasn't been approached to integrated them.\n> \n> someone has to approach him to integrate them? *raised eyebrow*\n> \n> Karel, where did things stand the last time this was brought up? We\n> haven't gone beta yet, can you re-submit a patch for v7.1 before beta so\n> that we can integrate the changes? *Maybe*, if possible, submit it such\n> that its a compile time option, so that its there if someone like Alfred\n> wants to be brave, but it won't zap everyone if there is a bug?\n\nWell, I can re-write and resubmit this patch. Adding it as a compile-time option\nis not a bad idea. A second possibility is to distribute it as a patch in the contrib\ntree. And while it is not yet well tested, it won't dirty the main tree...\n\n Ok, I'll prepare it next week... 
\n\n\t\t\t\t\t\tKarel\n\n\n\n", "msg_date": "Wed, 1 Nov 2000 11:13:03 +0100 (CET)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] Query caching" }, { "msg_contents": "Daniel Freedman wrote:\n> \n> On the topic of query cache (or maybe this is just tangential and I'm\n> confused):\n> \n> I've always heard that Oracle has the ability to essentially suck in as\n> much of the database into RAM as you have memory to allow it, and can then\n> just run its queries on that in-RAM database (or db subset) without doing\n> disk I/O (which I would probably imagine is one of the more expensive\n> parts of a given SQL command). I've looked for references as to\n> Postgresql's ability to do something like this, but I've never been\n> certain if it's possible. Can postgresql do this, please? And, if not,\n> does it have to hit the disk for every SQL instruction (I would assume\n> so)?\n\nPostgreSQL hits the disk on UPDATE/DELETE/INSERT operations. SELECT's\nare cached, but the default cache is only ½MB of RAM. You can change\nthis to whatever you want.\n\nI'm using Cold Fusion and it can cache queries itself, so no database\naction is necessary. But I don't think PHP and others have this\npossibility. But Cold Fusion costs 1300$ :(\n\nPoul L. Christiansen\n", "msg_date": "Wed, 01 Nov 2000 10:16:58 +0000", "msg_from": "\"Poul L. Christiansen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query caching" }, { "msg_contents": "On Wed, 1 Nov 2000, Karel Zak wrote:\n\n> \n> On Tue, 31 Oct 2000, The Hermit Hacker wrote:\n> \n> > On Tue, 31 Oct 2000, Alfred Perlstein wrote:\n> \n> > > Karel Zak <[email protected]> Implemented stored proceedures for\n> > > postgresql but still hasn't been approached to integrated them.\n> > \n> > someone has to approach him to integrate them? *raised eyebrow*\n> > \n> > Karel, where did things stand the last time this was brought up? 
We\n> > haven't gone beta yet, can you re-submit a patch for v7.1 before beta so\n> > that we can integrate the changes? *Maybe*, if possible, submit it such\n> > that its a compile time option, so that its there if someone like Alfred\n> > wants to be brave, but it won't zap everyone if there is a bug?\n> \n> Well, I can re-write and resubmit this patch. Adding it as a compile-time option\n> is not a bad idea. A second possibility is to distribute it as a patch in the contrib\n> tree. And while it is not yet well tested, it won't dirty the main tree...\n> \n> Ok, I'll prepare it next week... \n\nIf you can have it as a compile time option before we go beta, I'll put it\ninto the main tree ... if not, we'll put it into contrib. \n\n\n", "msg_date": "Wed, 1 Nov 2000 07:42:57 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] Query caching" }, { "msg_contents": "Frank Joerdens wrote:\n> \n> On Wed, Nov 01, 2000 at 10:16:58AM +0000, Poul L. Christiansen wrote:\n> > PostgreSQL hits the disk on UPDATE/DELETE/INSERT operations. SELECT's\n> > are cached, but the default cache is only ½MB of RAM. You can change\n> > this to whatever you want.\n> \n> That sounds like a very cool thing to do, and the default seems awfully\n> conservative, given the average server's RAM equipment nowadays. If you\n> have a small Linux server with 128 MB of RAM, it would be interesting to\n> see what happens, performance-wise, if you increase the cache for\n> selects to, for instance, 64 MB. Has anyone tried to benchmark this? How\n> would you benchmark it? Where do you change this cache size? 
How do you\n> keep the cache from being swapped out to disk (which would presumably\n> all but eradicate the benefits of such a measure)?\n\nI have a PostgreSQL server with 80MB of RAM running Redhat Linux 7.0 and\nin my /etc/rc.d/init.d/postgresql start script I have these 2 lines that\nstart the postmaster.\n\necho 67108864 > /proc/sys/kernel/shmmax\n\nsu -l postgres -c \"/usr/bin/pg_ctl -D $PGDATA -p /usr/bin/postmaster -o\n'-i -B 4096 -o -F' start >/dev/null 2>&1\" < /dev/null\n\nThe first line increases the maxium shared memory to 64MB.\nThe \"-B 4096\" indicates 4096 * 8kb = 32MB to each postmaster.\n\nI haven't benchmarked it, but I know it's MUCH faster.\n\nPoul L. Christiansen\n", "msg_date": "Wed, 01 Nov 2000 13:25:10 +0000", "msg_from": "\"Poul L. Christiansen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query caching" }, { "msg_contents": "> How do you\n> keep the cache from being swapped out to disk (which would presumably\n> all but eradicate the benefits of such a measure)?\n\n You make sure that you have enough RAM that you aren't using swap. : )\n\n Seriously, as cheap as RAM is today, if a machine uses swap more than\noccasionally, an upgrade is in order.\n\nsteve\n\n\n", "msg_date": "Wed, 1 Nov 2000 10:37:09 -0700", "msg_from": "\"Steve Wolfe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query caching" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Karel, where did things stand the last time this was brought up? We\n> haven't gone beta yet, can you re-submit a patch for v7.1 before beta so\n> that we can integrate the changes?\n\nI think it would be a very bad idea to try to integrate the query cache\nstuff at this point in the 7.1 cycle. The feature needs more\ndiscussion/design/testing than we have time to give it for 7.1.\n\nSome of the concerns I have about it:\n\n1. What is the true performance gain --- if any --- in real-world\nsituations? 
The numbers Karel has quoted sound like wildly optimistic\nbest cases to me. What's the worst case? What's the average case?\n\n2. How do we handle flushing the cache when conditions change (schema\nalterations, etc)?\n\n3. Is it really a good idea to use a shared-across-backends cache?\nWhat are the locking and contention costs? What happens when we run\nout of shared memory (which is a *very* finite resource)? Will cache\nflush work correctly in a situation where backends are concurrently\ninserting new plans? Doesn't a shared cache make it nearly impossible\nto control the query planner, if the returned plan might have been\ngenerated by a different backend with a different set of\noptimization-control variables?\n\n4. How does one control the cache, anyway? Can it be flushed by user\ncommand? How is a new query matched against existing cache entries?\nCan one determine which elements of a query are considered parameters to\nthe cached plan, and which are constants? Does the syntax for doing\nthese things have anything to do with the SQL standard?\n\n\nI think this is a potentially interesting feature, but it requires far\nmore discussion and review than it's gotten so far, and there's no time\nto do that unless we want to push out 7.1 release a lot more. I'm also\nconcerned that we will need to focus heavily on testing WAL during 7.1\nbeta, and I don't want a major distraction from that...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Nov 2000 13:56:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] Query caching " }, { "msg_contents": "Performance depends on a lot of factors. Shelling out $$$ for Sun\nhardware doesn't guarantee good performance. They might have been better\noff buying a Tru64 system with Compaq's jdk.\n\nSteve Wolfe wrote:\n> \n> > d) PHP may not be a great choice. 
It doesn't provide a lot of hooks\n> > for effective caching of database connections and/or results.\n> > mod_perl or Java servlets may be better, depending on the details.\n> \n> One of our competitors spent a very, very large deal of money on high-end\n> Sun equipment, so that they could write their CGI stuff in Java servlets.\n> It still ran slow. We run Perl on machines that pale compared to theirs,\n> and get far better performance. : )\n> \n> steve\n\n-- \nJoseph Shraibman\[email protected]\nIncrease signal to noise ratio. http://www.targabot.com\n", "msg_date": "Thu, 02 Nov 2000 14:14:16 -0500", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how good is PostgreSQL" }, { "msg_contents": "> Performance depends on a lot of factors. Shelling out $$$ for Sun\n> hardware doesn't guarantee good performance. They might have been better\n> off buying a Tru64 system with Compaq's jdk.\n\n Yeah, it could be. But comparing the $7,000 Intel machine I built\nagainst a $20,000 Alpha, I'm still very happy with Intel. Yes, the Alpha\nwas faster on a per-processor basis. But it also cost more than twice as\nmuch on a dollar-for-transaction basis. ; )\n\nsteve\n\n\n", "msg_date": "Thu, 2 Nov 2000 12:40:07 -0700", "msg_from": "\"Steve Wolfe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how good is PostgreSQL" }, { "msg_contents": "On Thu, 2 Nov 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Karel, where did things stand the last time this was brought up? We\n> > haven't gone beta yet, can you re-submit a patch for v7.1 before beta so\n> > that we can integrate the changes?\n> \n> I think it would be a very bad idea to try to integrate the query cache\n\n We are not talking about integrating it.. we are talking about \"prepare an \n*experimental* patch for 7.1\" as a contrib matter or a compile-time\noption. I mean that contrib would be better.\n\n> stuff at this point in the 7.1 cycle. 
The feature needs more\n> discussion/design/testing than we have time to give it for 7.1.\n\n Agree.\n\n> \n> Some of the concerns I have about it:\n> \n> 1. What is the true performance gain --- if any --- in real-world\n> situations? The numbers Karel has quoted sound like wildly optimistic\n\n :-)\n\n> best cases to me. What's the worst case? What's the average case?\n\nIt's basically the same as SPI's saved plans. The query cache does not have much\ncost; EXECUTE of a saved plan is: lock, search in HTAB, unlock, run executor.. \n\n> 2. How do we handle flushing the cache when conditions change (schema\n> alterations, etc)?\n\n It's a *global* PG problem. What happens with a VIEW if anyone changes a table \ndefinition? ...etc. IMHO it is not specific to the cache. \n\n> 3. Is it really a good idea to use a shared-across-backends cache?\n\n I know your fear. But IMHO it's a capital feature. For applications \nthat don't use persistent connections and re-connect to the\nbackend very often, sharing plans is very interesting. \n\n The query cache has two stores: \n\t\t- global in shared memory - \n\t\t- local in HTAB inside standard backend memory\n\n> What are the locking and contention costs? What happens when we run\n\n the cost of a spinlock.. \n\n> out of shared memory (which is a *very* finite resource)? Will cache\n\n The cache keeps a list of all plans and keeps track of usage. If the user defines a \ncache entry as \"removeable\", the oldest entry is removed. Otherwise the cache\nreturns an error like 'cache is full'. The size of the cache can be defined\nduring backend start up (argv).\n\n> flush work correctly in a situation where backends are concurrently\n> inserting new plans? Doesn't a shared cache make it nearly impossible\n> to control the query planner, if the returned plan might have been\n> generated by a different backend with a different set of\n> optimization-control variables?\n\n Hmm, not implemented now.\n\n> 4. How does one control the cache, anyway? Can it be flushed by user\n> command? 
How is a new query matched against existing cache entries?\n\n It all depends on the user; the query is stored under some key (which can be text or\nbinary). The key must be unique, but the same plan can be stored under \ndifferent keys.\n\n> Can one determine which elements of a query are considered parameters to\n> the cached plan, and which are constants? Does the syntax for doing\n\n I don't understand here. I use standard '$' parameters and executor\noptions for this.\n\n> these things have anything to do with the SQL standard?\n\nYes, it is a problem. I mean that SQL92 expects a slightly different \nform of PREPARE/EXECUTE.\n\n> I think this is a potentially interesting feature, but it requires far\n> more discussion and review than it's gotten so far, and there's no time\n> to do that unless we want to push out 7.1 release a lot more. I'm also\n> concerned that we will need to focus heavily on testing WAL during 7.1\n> beta, and I don't want a major distraction from that...\n\n Totally agree.. I'll prepare it as a patch for playful hackers \n(hopefully, like you :-)))\n\n\t\t\t\t\tKarel\n\n", "msg_date": "Fri, 3 Nov 2000 09:05:30 +0100 (CET)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] Query caching " } ]
[ { "msg_contents": "Hi Sirs.\n\n\tWhat is the data definition for the aclitem datatype, I'm not able to find it in the sources, I know it is there but I was not able to find it. Thank you.\n\n--\nLuis Magaña\nGnovus Networks & Software\nwww.gnovus.com\nTel. +52 (7) 4422425\[email protected]\n\n\n", "msg_date": "Tue, 31 Oct 2000 00:32:37 -0600", "msg_from": "Luis =?UNKNOWN?Q?Maga=F1a?= <[email protected]>", "msg_from_op": true, "msg_subject": "Data definition for aclitem Datatype" }, { "msg_contents": "Luis Magaña writes:\n\n> \tWhat is the data definition for the aclitem datatype, I'm not able to find it in the sources, I know it is there but I was not able to find it. Thank you.\n\nsrc/backend/utils/adt/acl.c\n\nDon't use it though, it's slated to disappear soon.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 31 Oct 2000 10:57:01 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data definition for aclitem Datatype" } ]
[ { "msg_contents": "\n> > Hmm. I wonder why cc and gcc are doing different math. Wierd. \n> \n> Not only that, but you get different results with the same compiler\n> depending on different optimization settings. The joys of \n> binary floating point...\n\nSame on AIX.\n\nAndreas\n", "msg_date": "Tue, 31 Oct 2000 08:51:59 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: regression failure/UnixWare7.1.1/current sources" } ]
[ { "msg_contents": "\n> >So I think if you want to make optimization decisions based on cursors\n> >being used versus a \"normal\" select, then the only thing you can safely\n> >take into account is the network roundtrip and client processing per\n> >fetch, but that might be as random as anything.\n> \n> Which is why I like the client being able to ask the \n> optimizer for certain kinds of solutions *explicitly*.\n\nYes, something like:\n\tset optimization to [first_rows|all_rows]\n\nAndreas\n", "msg_date": "Tue, 31 Oct 2000 14:14:15 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: LIMIT in DECLARE CURSOR: request for comments" } ]
[ { "msg_contents": "At 14:14 31/10/00 +0100, Zeugswetter Andreas SB wrote:\n>> \n>> Which is why I like the client being able to ask the \n>> optimizer for certain kinds of solutions *explicitly*.\n>\n>Yes, something like:\n>\tset optimization to [first_rows|all_rows]\n>\n\nThat's one way that is usefull for affecting all subsequent statements, but\nit would be nice to also allow such things in each statement, eg. in comments:\n\n /*++optimizer: fast_start, no_seq_scan */\n select...\n\nie. make all settable values dynamically settable in a statement. The scope\nof the settings would probably have to depend on the abilities of the\noptimizer - eg. how would subselects and views be handled?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 01 Nov 2000 00:23:53 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: LIMIT in DECLARE CURSOR: request for comments" } ]
[ { "msg_contents": "Hello,\n\nthere's really weird trouble.\nWhen I run 2 vacuum's in parallel they hang. Both.\nI use PostgreSQL from 7.0.x CVS (almost 7.0.3).\nAny ideas? Tom?\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Tue, 31 Oct 2000 20:16:29 +0600", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with 2 avcuums in parallel" }, { "msg_contents": "Denis Perchine <[email protected]> writes:\n> When I run 2 vacuum's in parallel they hang. Both.\n> I use PostgreSQL from 7.0.x CVS (almost 7.0.3).\n\nHm. 
I don't see a hang, but I do see errors like\n\n> NOTICE: Deadlock detected -- See the lock(l) manual page for a possible cause.\n> ERROR: WaitOnLock: error on wakeup - Aborting this transaction\n\n> What's curious is that neither 7.0.2 nor current seem to exhibit this\n> behavior (at least, it's much easier to reproduce in REL7_0_PATCHES than\n> in 7.0.2 or current).\n\nI'm not sure why it seemed easier to reproduce this deadlock in 7.0.3;\nmight just be because I was using a different test database for each\nversion. Anyway, this turns out to be a long-standing issue.\n\nThe problem is that VACUUM does index_open() on each index of the target\nrelation. That may require reading both pg_am and pg_amop to fill in\nall the index-strategy data. Guess what happens when you're vacuuming\npg_am itself, and someone else is vacuuming pg_amop. (pg_amproc is a\npossible source of deadlock here, too.)\n\nThis might be fixable with sufficient contortions, but I think we have\nmore important problems to worry about than whether concurrent VACUUMs\nwill occasionally deadlock. It's not really easy to fix, in any case,\nsince we certainly want to grab exclusive lock on the target relation\nbefore we start to investigate its indexes. Else the indexes could\nchange/disappear under us :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 12:12:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with 2 avcuums in parallel " } ]
[ { "msg_contents": "Hi: \n \nI'm writing a graphical interface that would manage users, groups and permissions on a postgresql database, which is why I asked about this datatype since it \nis in the pg_class table. If it is going to \ndisappear soon then what should I do? Is there any other way to handle permissions other than the aclitem datatype? Thanks for your answer. \n \nSincerely. \n\n\n--\nLuis Magaña\nGnovus Networks & Software\nwww.gnovus.com\nTel. +52 (7) 4422425\[email protected]\n\nOriginal message from: Peter Eisentraut\n>Luis Magaña writes:\n>\n>> \tWhat is the data definition for the aclitem datatype, I'm not able to found it in the sources, I know is there but I was not able to find it. Thank you.\n>\n>src/backend/utils/adt/acl.c\n>\n>Don't use it though, it's slated to disappear soon.\n>\n>-- \n>Peter Eisentraut [email protected] http://yi.org/peter-e/\n>\n>\n\n", "msg_date": "Tue, 31 Oct 2000 10:14:21 -0600", "msg_from": "Luis =?UNKNOWN?Q?Maga=F1a?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Data definition for aclitem Datatype" } ]
[ { "msg_contents": "After much too long a time, I have updated the RedHat RPMset on\nftp.postgresql.org.\n\nThe version is 7.0.2, release is 21.\n\nPlease see the changelog for more information (rpm -q --changelog\npostgresql for installed packages, rpm -qp --changelog\npostgresql-7.0.2-21.i386.rpm for packages before installation). And\nplease read the README.rpm placed in the doc dir (depends on your\ndistribution as to where the doc dir is -- newer distributions are using\n/usr/share/doc instead of /usr/doc -- and the documentation in the\nREADME assumes, for better or for worse, the RedHat 7 layout).\n\nThe big fix is the os.h dangling symlink in the -devel package. The\nsource package will rebuild on both RedHat 7 and RedHat 6 -- and should\nrebuild with no trouble on TurboLinux 6 as well as Mandrake's 6 and 7.\n\nDon't try to install the RedHat 6 binary packages on RedHat 7 or\nTurboLinux -- please rebuild from the source RPM for non-RedHat 6\ndistributions until we get other binaries uploaded.\n\nftp://ftp.postgresql.org/pub/binary/v7.0.2/RedHat-6.x/RPMS\nftp://ftp.postgresql.org/pub/binary/v7.0.2/RedHat-6.x/SRPMS\nftp://ftp.postgresql.org/pub/binary/v7.0.2/RedHat-6.x/unpacked\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 31 Oct 2000 11:29:20 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 7.0.2-21 RPMset available." } ]
[ { "msg_contents": "Hi.\n\nIve had a crash in postgresql 7.0.2. Looking at what happened, I actually\nsuspect that this is a filesystem bug, and not a postgresql bug necessarily,\nbut I wanted to report it here and see if anyone else had any opinions.\n\nThe platform this happened on was linux (redhat 6.2), kernel 2.2.16 (SMP) dual\npentium III 500MHz cpus, Mylex DAC960 raid controller running in raid5 mode.\n\nDuring regular activity, I got a kernel oops. Looking at the call trace from\nthe kernel, as well as the EIP, I think maybe there is a bug here int the fs\nbuffer code, and that htis is a linux kernel problem (not a postgresql\nproblem).\n\nBug I'm no expert here.. Does this sould correct looking at the kernel erros\nbelow?\n\nSorry if this is off topic. I just want to make sure this is a kernel bug and\nnot a postgresql bug.\n\nMike\n\nThe oopses:\n\nkernel: Unable to handle kernel NULL pointer dereference at virtual address 00000134 \nkernel: current->tss.cr3 = 1a325000, %%cr3 = 1a325000 \nkernel: *pde = 00000000 \nkernel: Oops: 0002 \nkernel: CPU: 0 \nkernel: EIP: 0010:[remove_from_queues+169/328] \nkernel: EFLAGS: 00010206 \nkernel: eax: 00000100 ebx: 00000002 ecx: df022e40 edx: efba76b8 \nkernel: esi: df022e40 edi: 00000000 ebp: 00000000 esp: da327ea4 \nkernel: ds: 0018 es: 0018 ss: 0018 \nkernel: Process postmaster (pid: 11527, process nr: 51, stackpage=da327000) \nkernel: Stack: df022e40 c012be79 df022e40 df022e40 00001000 c0142cb8 c0142cc7 df022e40 \nkernel: ec247140 ffffffea ec0b026c da326000 df022e40 df022e40 df022e40 000a4000 \nkernel: 00000000 da327f08 00000000 00000000 eff29200 00001000 000000a5 000a5000 \nkernel: Call Trace: [refile_buffer+77/184] [ext2_file_write+996/1584] [ext2_file_write+1011/1584] [kfree_skbmem+51/64] [__kfree_skb+162/168] [lockd:__insmod_lockd_O/lib/modules/2.2.16-3smp/fs/lockd.o_M394EA7+-76392/76] [handle_IRQ_event+90/140] \nkernel: [sys_write+240/292] [ext2_file_write+0/1584] [system_call+52/56] [startup_32+43/164] 
\nkernel: Code: 89 50 34 c7 01 00 00 00 00 89 02 c7 41 34 00 00 00 00 ff 0d \nkernel: Unable to handle kernel NULL pointer dereference at virtual address 00000100 \nkernel: current->tss.cr3 = 1ba46000, %%cr3 = 1ba46000 \nkernel: *pde = 00000000 \nkernel: Oops: 0000 \nkernel: CPU: 1 \nkernel: EIP: 0010:[find_buffer+104/144] \nkernel: EFLAGS: 00010206 \nkernel: eax: 00000100 ebx: 00000007 ecx: 00069dae edx: 00000100 \nkernel: esi: 0000000d edi: 00003006 ebp: 0005ce4b esp: e53a19f4 \nkernel: ds: 0018 es: 0018 ss: 0018 \nkernel: Process postmaster (pid: 5545, process nr: 37, stackpage=e53a1000) \nkernel: Stack: 0005ce4b 00003006 00069dae c012b953 00003006 0005ce4b 00001000 c012bcc6 \nkernel: 00003006 0005ce4b 00001000 00003006 eff29200 00003006 00004e4b ef18c960 \nkernel: c0141ee7 00003006 0005ce4b 00001000 0005ce4b e53a1bb0 edc3c660 edc3c660 \nkernel: Call Trace: [get_hash_table+23/36] [getblk+30/324] [ext2_new_block+2291/2756] [getblk+271/324] [ext2_alloc_block+344/356] [block_getblk+305/624] [ext2_getblk+256/524] \nkernel: [ext2_file_write+1308/1584] [__brelse+19/84] [permission+36/248] [dump_seek+53/104] [dump_seek+53/104] [dump_write+48/84] [elf_core_dump+3104/3216] [do_IRQ+82/92] \nkernel: [tcp_write_xmit+407/472] [__release_sock+36/124] [tcp_do_sendmsg+2125/2144] [inet_sendmsg+0/144] [cprt+1553/20096] [cprt+1553/20096] [cprt+1553/20096] [do_signal+458/724] \nkernel: [force_sig_info+168/180] [force_sig+17/24] [do_general_protection+54/160] [error_code+45/52] [signal_return+20/24] \nkernel: Code: 8b 00 39 6a 04 75 15 8b 4c 24 20 39 4a 08 75 0c 66 39 7a 0c \n\n", "msg_date": "Tue, 31 Oct 2000 13:16:09 -0600 (CST)", "msg_from": "Michael J Schout <[email protected]>", "msg_from_op": true, "msg_subject": "7.0.2 crash (maybe linux kernel bug??)" }, { "msg_contents": "* Michael J Schout <[email protected]> [001031 11:22] wrote:\n> Hi.\n> \n> Ive had a crash in postgresql 7.0.2. 
Looking at what happened, I actually\n> suspect that this is a filesystem bug, and not a postgresql bug necessarily,\n> but I wanted to report it here and see if anyone else had any opinions.\n> \n> The platform this happened on was linux (redhat 6.2), kernel 2.2.16 (SMP) dual\n> pentium III 500MHz cpus, Mylex DAC960 raid controller running in raid5 mode.\n> \n> During regular activity, I got a kernel oops. Looking at the call trace from\n> the kernel, as well as the EIP, I think maybe there is a bug here int the fs\n> buffer code, and that htis is a linux kernel problem (not a postgresql\n> problem).\n> \n> Bug I'm no expert here.. Does this sould correct looking at the kernel erros\n> below?\n> \n> Sorry if this is off topic. I just want to make sure this is a kernel bug and\n> not a postgresql bug.\n> \n> Mike\n> \n> The oopses:\n> \n> kernel: Unable to handle kernel NULL pointer dereference at virtual address 00000134 \n> kernel: current->tss.cr3 = 1a325000, %%cr3 = 1a325000 \n> kernel: *pde = 00000000 \n> kernel: Oops: 0002 \n> kernel: CPU: 0 \n> kernel: EIP: 0010:[remove_from_queues+169/328] \n> kernel: EFLAGS: 00010206 \n> kernel: eax: 00000100 ebx: 00000002 ecx: df022e40 edx: efba76b8 \n> kernel: esi: df022e40 edi: 00000000 ebp: 00000000 esp: da327ea4 \n> kernel: ds: 0018 es: 0018 ss: 0018 \n> kernel: Process postmaster (pid: 11527, process nr: 51, stackpage=da327000) \n> kernel: Stack: df022e40 c012be79 df022e40 df022e40 00001000 c0142cb8 c0142cc7 df022e40 \n> kernel: ec247140 ffffffea ec0b026c da326000 df022e40 df022e40 df022e40 000a4000 \n> kernel: 00000000 da327f08 00000000 00000000 eff29200 00001000 000000a5 000a5000 \n> kernel: Call Trace: [refile_buffer+77/184] [ext2_file_write+996/1584] [ext2_file_write+1011/1584] [kfree_skbmem+51/64] [__kfree_skb+162/168] [lockd:__insmod_lockd_O/lib/modules/2.2.16-3smp/fs/lockd.o_M394EA7+-76392/76] [handle_IRQ_event+90/140] \n> kernel: [sys_write+240/292] [ext2_file_write+0/1584] [system_call+52/56] 
[startup_32+43/164] \n> kernel: Code: 89 50 34 c7 01 00 00 00 00 89 02 c7 41 34 00 00 00 00 ff 0d \n> kernel: Unable to handle kernel NULL pointer dereference at virtual address 00000100 \n\nYes, your kernel basically segfaulted, I would get a traceback from your\ncrashdump and discuss it with the kernel developers.\n\n--\n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n\n\n> kernel: current->tss.cr3 = 1ba46000, %%cr3 = 1ba46000 \n> kernel: *pde = 00000000 \n> kernel: Oops: 0000 \n> kernel: CPU: 1 \n> kernel: EIP: 0010:[find_buffer+104/144] \n> kernel: EFLAGS: 00010206 \n> kernel: eax: 00000100 ebx: 00000007 ecx: 00069dae edx: 00000100 \n> kernel: esi: 0000000d edi: 00003006 ebp: 0005ce4b esp: e53a19f4 \n> kernel: ds: 0018 es: 0018 ss: 0018 \n> kernel: Process postmaster (pid: 5545, process nr: 37, stackpage=e53a1000) \n> kernel: Stack: 0005ce4b 00003006 00069dae c012b953 00003006 0005ce4b 00001000 c012bcc6 \n> kernel: 00003006 0005ce4b 00001000 00003006 eff29200 00003006 00004e4b ef18c960 \n> kernel: c0141ee7 00003006 0005ce4b 00001000 0005ce4b e53a1bb0 edc3c660 edc3c660 \n> kernel: Call Trace: [get_hash_table+23/36] [getblk+30/324] [ext2_new_block+2291/2756] [getblk+271/324] [ext2_alloc_block+344/356] [block_getblk+305/624] [ext2_getblk+256/524] \n> kernel: [ext2_file_write+1308/1584] [__brelse+19/84] [permission+36/248] [dump_seek+53/104] [dump_seek+53/104] [dump_write+48/84] [elf_core_dump+3104/3216] [do_IRQ+82/92] \n> kernel: [tcp_write_xmit+407/472] [__release_sock+36/124] [tcp_do_sendmsg+2125/2144] [inet_sendmsg+0/144] [cprt+1553/20096] [cprt+1553/20096] [cprt+1553/20096] [do_signal+458/724] \n> kernel: [force_sig_info+168/180] [force_sig+17/24] [do_general_protection+54/160] [error_code+45/52] [signal_return+20/24] \n> kernel: Code: 8b 00 39 6a 04 75 15 8b 4c 24 20 39 4a 08 75 0c 66 39 7a 0c \n", "msg_date": "Tue, 31 Oct 2000 11:59:37 -0800", "msg_from": "Alfred Perlstein <[email 
protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.2 crash (maybe linux kernel bug??)" } ]
[ { "msg_contents": "Please take me off this list! I have received over 50 emails in the last 24\nhours and I have no idea why I am getting them. Please look for email\naddress [email protected] or [email protected] and take it out!\nThanks!\n\n\n\n-----Original Message-----\nFrom: Robert Kernell [mailto:[email protected]]\nSent: Tuesday, October 31, 2000 3:36 PM\nTo: [email protected]\nSubject: Re: [HACKERS] Restricting permissions on Unix socket\n\n\n\n> I'd like to add an option or two to restrict the set of users that can\n> connect to the Unix domain socket of the postmaster, as an extra security\n> option.\n> \n> I imagine something like this:\n> \n> unix_socket_perm = 0660\n> unix_socket_group = pgusers\n> \n> Obviously, permissions that don't have 6's in there don't make much sense,\n> but I feel this notation is the most intuitive way for admins.\n> \n> I'm not sure how to do the group thing, though. If I use chown(2) then\n> there's a race condition, but doing savegid; create socket; restoregid\n> might be too awkward? Any hints?\n> \n\nJust curious. What is a race condition? \n\nBob Kernell\nResearch Scientist\nSurface Validation Group\nAtmospheric Sciences Competency\nAnalytical Services & Materials, Inc.\nemail: [email protected]\ntel: 757-827-4631\n", "msg_date": "Tue, 31 Oct 2000 15:35:55 -0500", "msg_from": "\"Jones, Colin\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Restricting permissions on Unix socket" } ]
[ { "msg_contents": "I'd like to add an option or two to restrict the set of users that can\nconnect to the Unix domain socket of the postmaster, as an extra security\noption.\n\nI imagine something like this:\n\nunix_socket_perm = 0660\nunix_socket_group = pgusers\n\nObviously, permissions that don't have 6's in there don't make much sense,\nbut I feel this notation is the most intuitive way for admins.\n\nI'm not sure how to do the group thing, though. If I use chown(2) then\nthere's a race condition, but doing savegid; create socket; restoregid\nmight be too awkward? Any hints?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 31 Oct 2000 21:50:46 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Restricting permissions on Unix socket" }, { "msg_contents": "* Peter Eisentraut <[email protected]> [001031 12:57] wrote:\n> I'd like to add an option or two to restrict the set of users that can\n> connect to the Unix domain socket of the postmaster, as an extra security\n> option.\n> \n> I imagine something like this:\n> \n> unix_socket_perm = 0660\n> unix_socket_group = pgusers\n> \n> Obviously, permissions that don't have 6's in there don't make much sense,\n> but I feel this notation is the most intuitive way for admins.\n> \n> I'm not sure how to do the group thing, though. If I use chown(2) then\n> there's a race condition, but doing savegid; create socket; restoregid\n> might be too awkward? Any hints?\n\nSet your umask to 777 then go to town.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Tue, 31 Oct 2000 15:02:30 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting permissions on Unix socket" } ]
[ { "msg_contents": "I'm about to launch into an experiment that will do some new things\ninside the PG server. I'm sure to have a lot of problems, and one\nof them I can already tell is going to be difficult is the business\nof contexts: memory contexts, scan contexts and the like.\n\nBefore I go around shooting myself in the foot, I would like to\neducate myself about how they work inside the current code. Does\nanyone know where best to look? It can be the code, better if it's\na document. I'm happy to RTFM or RTFC, but I'd like to know where\nto start.\n\n++ kevin\n\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:[email protected]\nPermanent e-mail forwarder: mailto:Kevin.O'[email protected]\nAt school: mailto:[email protected]\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead\n", "msg_date": "Tue, 31 Oct 2000 13:00:38 -0800", "msg_from": "\"Kevin O'Gorman\" <[email protected]>", "msg_from_op": true, "msg_subject": "Contexts" }, { "msg_contents": "\"Kevin O'Gorman\" <[email protected]> writes:\n> I'm about to launch into an experiment that will do some new things\n> inside the PG server. I'm sure to have a lot of problems, and one\n> of them I can already tell is going to be difficult is the business\n> of contexts: memory contexts, scan contexts and the like.\n\nThere is some doco about memory contexts in\nsrc/backend/utils/mmgr/README. Dunno about anything comparable\nfor scan handles --- best way to deal with table scans is probably\nto find a routine that does something like what you need to do,\nand crib the code.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Nov 2000 04:10:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Contexts " } ]
[ { "msg_contents": "> I believe that its just resting on Vadim again to give us the go ahead\n> ... which I believe its always been on his shoulders, no? :)\n> \n> Vadim? \n\nI think that at least 1 & 2 from the WAL todo (checkpoints and port to\nmachines without TAS) are required before beta. As well as more testing...\nDid anyone else test WAL recovery?\nSorry guys, but I can't do both testing and coding at the same time.\nWAL is the most complex thing I've ever done in this project.\nMVCC was just child's play.\n\nTom & Bruce are in summit anyway - let's wait for them. And I'll do 1 & 2 in\na few days.\n\nVadim\n", "msg_date": "Tue, 31 Oct 2000 13:04:15 -0800", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: WAL status update" }, { "msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> I think that at least 1 & 2 from WAL todo (checkpoints and port to\n> machines without TAS) is required before beta.\n\nI'm not sure that you do need to add support for machines without TAS.\nI pointed out a couple months ago that the non-TAS support code hasn't\neven compiled for the past release or three, and proposed ripping it\nall out rather than fixing code that clearly isn't being used.\nNo one objected.\n\nI haven't got round to doing the ripping yet, but as far as I know\nthere is no reason to expend work on adding more code for the non-TAS\ncase.\n\nI have a few other things I still need to do before 7.1 beta. 
Do you\nthink you'll be ready in, say, a week?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Nov 2000 04:14:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL status update " }, { "msg_contents": "> \"Mikheev, Vadim\" <[email protected]> writes:\n> > I think that at least 1 & 2 from WAL todo (checkpoints and port to\n> > machines without TAS) is required before beta.\n> \n> I'm not sure that you do need to add support for machines without TAS.\n> I pointed out a couple months ago that the non-TAS support code hasn't\n> even compiled for the past release or three, and proposed ripping it\n> all out rather than fixing code that clearly isn't being used.\n> No one objected.\n> \n> I haven't got round to doing the ripping yet, but as far as I know\n> there is no reason to expend work on adding more code for the non-TAS\n> case.\n\nOh, it's great! Thanks!\none todo item is gone -:)\n\n> I have a few other things I still need to do before 7.1 beta. Do you\n> think you'll be ready in, say, a week?\n\nyes.\n\nVadim\n\n\n", "msg_date": "Thu, 2 Nov 2000 09:43:40 -0800", "msg_from": "\"Vadim Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL status update " } ]
[ { "msg_contents": "\n> I'd like to add an option or two to restrict the set of users that can\n> connect to the Unix domain socket of the postmaster, as an extra security\n> option.\n> \n> I imagine something like this:\n> \n> unix_socket_perm = 0660\n> unix_socket_group = pgusers\n> \n> Obviously, permissions that don't have 6's in there don't make much sense,\n> but I feel this notation is the most intuitive way for admins.\n> \n> I'm not sure how to do the group thing, though. If I use chown(2) then\n> there's a race condition, but doing savegid; create socket; restoregid\n> might be too awkward? Any hints?\n> \n\nJust curious. What is a race condition? \n\nBob Kernell\nResearch Scientist\nSurface Validation Group\nAtmospheric Sciences Competency\nAnalytical Services & Materials, Inc.\nemail: [email protected]\ntel: 757-827-4631\n\n", "msg_date": "Tue, 31 Oct 2000 16:36:26 -0500 (EST)", "msg_from": "Robert Kernell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Restricting permissions on Unix socket" } ]
[ { "msg_contents": "Thanks for the info.\n\n\n", "msg_date": "Tue, 31 Oct 2000 22:26:00 GMT", "msg_from": "[email protected] (KanjiSoft Systems)", "msg_from_op": true, "msg_subject": "How to unsubscribe from this list?" }, { "msg_contents": "I never saw much traffic regarding Karel's work on making stored\nprocedures:\n\nhttp://people.freebsd.org/~alfred/karel-pgsql.txt\n\nWhat happened with this? It looked pretty interesting. :(\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Tue, 31 Oct 2000 15:11:44 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Query cache import?" }, { "msg_contents": "\nOn Tue, 31 Oct 2000, Alfred Perlstein wrote:\n\n> I never saw much traffic regarding Karel's work on making stored\n> procedures:\n>\n> http://people.freebsd.org/~alfred/karel-pgsql.txt\n> \n> What happened with this? It looked pretty interesting. :(\n\n It's probably a little about me :-) ... well,\n\n My query cache is in a usable state and it's efficient for all \nthe things that motivated me to work on this.\n\n some basic features:\n\n\t- share parsed plans between backends in shared memory\n\t- store plans in a private backend hash table\n\t- use parameters for stored queries\n\t- better design for SPI \n\t\t\t- memory usage for saved plans\n\t\t\t- save plans \"by key\"\n\n \n The current query cache code depends on 7.1 memory management. After\nthe official 7.1 release I will prepare a patch with query cache+SPI (don't\nhit me over the head, please ..)\n\n All that happens next does not depend on me, *it's on code developers*.\n\n For example, Jan has an interesting idea about caching all the plans that\na backend processes. But that's the far future, and IMHO we must take small\nsteps toward Oracle's funeral :-) \n\n If I need the query cache in my own work (typical for some web+pgsql) or \nif there is public interest, I will continue with this; if not, I will freeze it. \n(There is more interesting work like http://mape.jcu.cz ... sorry for the \nadvertising :-)\n\n\t\t\t\t\tKarel\n\n\n\n\n\n", "msg_date": "Wed, 1 Nov 2000 01:16:42 +0100 (CET)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query cache import?" }, { "msg_contents": "> My query cache is in usable state and it's efficient for all things\n> those motivate me to work on this.\n\n Well, you know, us application developers are lazy egoists, we want all of\nthat without efforts on our side :) In fact, customers do that. They don't\nwant to pay for both implementing query cache and re-writing applications.\n I suggest by your description that it shouldn't be brain surgery to\napply your caching to a stable server, so when I'll have a chance to put my\nhands on a busy discussion forum next time, it'd be nice to give it a\nwhirl.\n\n\n--\n\n contaminated fish and microchips\n huge supertankers on Arabian trips\n oily propaganda from the leaders' lips\n all about the future\n there's people over here, people over there\n everybody's looking for a little more air\n crossing all the borders just to take their share\n planning for the future\n\n Rainbow, Difficult to Cure\n", "msg_date": "Wed, 01 Nov 2000 00:26:44 +0000", "msg_from": "KuroiNeko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query cache import?" }, { "msg_contents": "* Karel Zak <[email protected]> [001031 16:18] wrote:\n> \n> On Tue, 31 Oct 2000, Alfred Perlstein wrote:\n> \n> > I never saw much traffic regarding Karel's work on making stored\n> > procedures:\n> >\n> > http://people.freebsd.org/~alfred/karel-pgsql.txt\n> > \n> > What happened with this? It looked pretty interesting. :(\n> \n> It's probably a little about me :-) ... 
well,\n> \n> My query cache is in usable state and it's efficient for all \n> things those motivate me to work on this.\n> \n> some basic features:\n> \n> \t- share parsed plans between backends in shared memory\n> \t- store plans to private backend hash table\n> \t- use parameters for stored queries\n> \t- better design for SPI \n> \t\t\t- memory usage for saved plans\n> \t\t\t- save plans \"by key\"\n> \n> \n> The current query cache code depend on 7.1 memory management. After\n> official 7.1 release I prepare patch with query cache+SPI (if not\n> hit me over head, please ..)\n> \n> All what will doing next time not depend on me, *it's on code developers*.\n> \n> For example Jan has interesting idea about caching all plans which\n> processing backend. But it's far future and IMHO we must go by small\n> steps to Oracle's funeral :-) \n\nWell I'm just hoping that perl's $dbh->prepare() actually does a\ntemporary stored proceedure so that I can shave cycles off of \nmy thousands upon thousands of repeated queries. :)\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Tue, 31 Oct 2000 16:40:33 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query cache import?" }, { "msg_contents": "On Tue, 31 Oct 2000, Alfred Perlstein wrote:\n\n> * Karel Zak <[email protected]> [001031 16:18] wrote:\n> > \n> > On Tue, 31 Oct 2000, Alfred Perlstein wrote:\n> > \n> > All what will doing next time not depend on me, *it's on code developers*.\n\t\t\t\t\t\t\t ^^^^^^^^\n\t\t\t\t\t\tright is \"core\"\n\n> Well I'm just hoping that perl's $dbh->prepare() actually does a\n> temporary stored proceedure so that I can shave cycles off of \n> my thousands upon thousands of repeated queries. :)\n\n IMHO implement good cache for planns is not easy, if is wanted\nuse cached planns in more backend and store it in shared memory. 
I\nwrote a new memory context type for this. My idea is to save/share \nparsed plans not only for PREPARE/EXECUTE statements but for SPI \n(triggers - FK for example) too. It expects support inside the backend;\nit is not possible to write it in some application layer (at the client).\n\n But it's a very good \"investment\", because query parsing in PG is\nvery expensive (everything in queries is dynamic). In my tests, execution\nof stored plans is much faster (90%) for queries that spend more time\nin the parser/planner/rewriter.\n\n\t\t\t\t\t\t\tKarel\n\n", "msg_date": "Wed, 1 Nov 2000 10:52:51 +0100 (CET)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query cache import?" } ]
[ { "msg_contents": "> The first test did not go very well. I did a fresh compile, initdb,\n> started the postmaster, ran 'make installcheck' (sequential regression\n> tests), and sent a kill -QUIT to the postmaster during the \n> numeric test.\n> Then I restarted the postmaster and got a load of lines like\n> \n> REDO @ 0/434072; LSN 0/434100: prev 0/433992; xprev 0/433992; xid\n> 17278: Transaction - commit: 2000-10-31 23:21:29\n> REDO @ 0/434100; LSN 0/434252: prev 0/434072; xprev 0/0; xid \n> 17279: Heap - insert: node 19008/1259; cid 0; tid 1/43\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nIs this *the last* output just before abort?\nI need to know in what stage abort occured.\nCould you look at core file too?\n\n> after which it finished with\n> \n> Startup failed - abort\n\nVadim\n \n", "msg_date": "Tue, 31 Oct 2000 14:45:17 -0800", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: WAL status update" } ]
[ { "msg_contents": "> > The first test did not go very well. I did a fresh compile, initdb,\n> > started the postmaster, ran 'make installcheck' (sequential \n> > regression tests), and sent a kill -QUIT to the postmaster during the \n> > numeric test.\n> > Then I restarted the postmaster and got a load of lines like\n> > \n> > REDO @ 0/434072; LSN 0/434100: prev 0/433992; xprev 0/433992; xid\n> > 17278: Transaction - commit: 2000-10-31 23:21:29\n> > REDO @ 0/434100; LSN 0/434252: prev 0/434072; xprev 0/0; xid \n> > 17279: Heap - insert: node 19008/1259; cid 0; tid 1/43\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> Is this *the last* output just before abort?\n> I need to know in what stage abort occured.\n> Could you look at core file too?\n> \n> > after which it finished with\n> > \n> > Startup failed - abort\n\nFixed. Thanks for pointing to the problem!\n\nVadim\n", "msg_date": "Tue, 31 Oct 2000 15:45:17 -0800", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: WAL status update" } ]
[ { "msg_contents": "It seems postmaster won't restart under WAL. What I have done so far\nwas creating some tables and inserting fairly large amount of tuples\n(100000 tuples) using pgbench.\n\nHere is the test sequence:\n\npg_ctl -w stop\nrm -fr /usr/local/pgsql/data\ninitdb\npg_ctl -w start\ncreatedb test\n./pgbench -i test\npg_ctl -w -m i stop\npg_ctl -w start\n\nand postmaster does not return from automatic recovering job. Here are\nlast 2 lines from log:\n\nNov 1 12:38:46 srapc968-yotsuya postgres[14769]: [7] DEBUG: The DataBase system was not properly shut down ^IAutomatic recovery is in progress...\nNov 1 12:38:46 srapc968-yotsuya postgres[14769]: [8] DEBUG: Redo starts at (0, 287536)\n\nIt seems that the recovery process is waiting for acquiring a lock.\nHere is the backtrace from the process:\n\n(gdb) where\n#0 0x2ac4acae in __select () from /lib/libc.so.6\n#1 0x2ac9f0ac in ?? ()\n#2 0x80ea18c in LockBuffer (buffer=37, mode=2) at xlog_bufmgr.c:1995\n#3 0x8082feb in XLogReadBuffer (extend=0, reln=0x82e92c8, blkno=1)\n at xlogutils.c:215\n#4 0x80781e4 in btree_xlog_delete (redo=1, lsn={xlogid = 0, \n xrecoff = 23741124}, record=0x82980dc) at nbtree.c:1013\n#5 0x80790fe in btree_redo (lsn={xlogid = 0, xrecoff = 23741124}, \n record=0x82980dc) at nbtree.c:1450\n#6 0x80825b2 in StartupXLOG () at xlog.c:1452\n#7 0x80850cd in BootstrapMain (argc=6, argv=0x7fffeffc) at bootstrap.c:349\n#8 0x80e047a in SSDataBase (startup=1 '\\001') at postmaster.c:2187\n#9 0x80deb1a in PostmasterMain (argc=1, argv=0x7ffff694) at postmaster.c:667\n#10 0x80c3646 in main (argc=1, argv=0x7ffff694) at main.c:112\n\nPlease let me know if you need more info.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 01 Nov 2000 12:45:44 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "WAL: postmaster won't start" } ]
[ { "msg_contents": "I want off this list immediately!!!!!!!!!!! If I am not off by the end of\nthe day, I will go directly to the top! Thanks!\n\n-----Original Message-----\nFrom: Oelkers, Phil [mailto:[email protected]]\nSent: Wednesday, November 01, 2000 9:09 AM\nTo: Hackers List; [email protected]\nSubject: [HACKERS] list owner please help me get off the list.\n\n\ntried the normal ways but your pos list software won't let me off.\nHelp Please.\n\n-----Original Message-----\nFrom: Thomas Lockhart [mailto:[email protected]]\nSent: Wednesday, November 01, 2000 9:21 AM\nTo: Lamar Owen; Hackers List; [email protected]\nSubject: [HACKERS] New RPMs for RedHat and Mandrake\n\n\nI've posted RPMs built for Mandrake at\n\n ftp://ftp.postgresql.org/pub/binary/Mandrake-7.1/{RPMS,SRPMS}\n\nwhich are exact copies of the RedHat RPMs (with the version changed from\n\"21\" to \"21mdk\") recently posted by Lamar Owens at\n\n ftp://ftp.postgresql.org/pub/binary/RedHat-6.x/{RPMS,SRPMS}\n\nI've also posted rebuilt mod_php3 RPMs to cope with the libpq library\nversion change between the original Mandrake distro and the current\nPostgreSQL release.\n\nThanks Lamar for all the work on the RPMs!\n\n - Thomas\n\n\n\n\n\nRE: [HACKERS] list owner please help me get off the list.\n\n\nI want off this list immediately!!!!!!!!!!!  If I am not off by the end of the day, I will go directly to the top!  
Thanks!\n-----Original Message-----\nFrom: Oelkers, Phil [mailto:[email protected]]\nSent: Wednesday, November 01, 2000 9:09 AM\nTo: Hackers List; [email protected]\nSubject: [HACKERS] list owner please help me get off the list.\n\n\ntried the normal ways but your pos list software won't let me off.\nHelp Please.\n\n-----Original Message-----\nFrom: Thomas Lockhart [mailto:[email protected]]\nSent: Wednesday, November 01, 2000 9:21 AM\nTo: Lamar Owen; Hackers List; [email protected]\nSubject: [HACKERS] New RPMs for RedHat and Mandrake\n\n\nI've posted RPMs built for Mandrake at\n\n  ftp://ftp.postgresql.org/pub/binary/Mandrake-7.1/{RPMS,SRPMS}\n\nwhich are exact copies of the RedHat RPMs (with the version changed from\n\"21\" to \"21mdk\") recently posted by Lamar Owens at\n\n  ftp://ftp.postgresql.org/pub/binary/RedHat-6.x/{RPMS,SRPMS}\n\nI've also posted rebuilt mod_php3 RPMs to cope with the libpq library\nversion change between the original Mandrake distro and the current\nPostgreSQL release.\n\nThanks Lamar for all the work on the RPMs!\n\n                  - Thomas", "msg_date": "Wed, 1 Nov 2000 09:22:50 -0500 ", "msg_from": "\"Jones, Colin\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: list owner please help me get off the list." }, { "msg_contents": "\nya know, I always love seeing email's like this ... what time do you\nconsider to be the end of the day? and going directly to the top means\ntalking to ... wow, me. and its the end of my day here, and I don't have\nyou off yet, so now you are in a pickle, no? :)\n\n\nOn Wed, 1 Nov 2000, Jones, Colin wrote:\n\n> I want off this list immediately!!!!!!!!!!! If I am not off by the end of\n> the day, I will go directly to the top! 
Thanks!\n> \n> -----Original Message-----\n> From: Oelkers, Phil [mailto:[email protected]]\n> Sent: Wednesday, November 01, 2000 9:09 AM\n> To: Hackers List; [email protected]\n> Subject: [HACKERS] list owner please help me get off the list.\n> \n> \n> tried the normal ways but your pos list software won't let me off.\n> Help Please.\n> \n> -----Original Message-----\n> From: Thomas Lockhart [mailto:[email protected]]\n> Sent: Wednesday, November 01, 2000 9:21 AM\n> To: Lamar Owen; Hackers List; [email protected]\n> Subject: [HACKERS] New RPMs for RedHat and Mandrake\n> \n> \n> I've posted RPMs built for Mandrake at\n> \n> ftp://ftp.postgresql.org/pub/binary/Mandrake-7.1/{RPMS,SRPMS}\n> \n> which are exact copies of the RedHat RPMs (with the version changed from\n> \"21\" to \"21mdk\") recently posted by Lamar Owens at\n> \n> ftp://ftp.postgresql.org/pub/binary/RedHat-6.x/{RPMS,SRPMS}\n> \n> I've also posted rebuilt mod_php3 RPMs to cope with the libpq library\n> version change between the original Mandrake distro and the current\n> PostgreSQL release.\n> \n> Thanks Lamar for all the work on the RPMs!\n> \n> - Thomas\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 1 Nov 2000 18:52:07 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "RE: list owner please help me get off the list." }, { "msg_contents": "\neven better, of course, is the fact that you aren't even on this list:\n\nMajordomo>unsubscribe pgsql-hackers [email protected]\n\n**** Cannot unsubscribe [email protected]: no matching addresses.\n\n\n\nOn Wed, 1 Nov 2000, Jones, Colin wrote:\n\n> I want off this list immediately!!!!!!!!!!! If I am not off by the end of\n> the day, I will go directly to the top! 
Thanks!\n> \n> -----Original Message-----\n> From: Oelkers, Phil [mailto:[email protected]]\n> Sent: Wednesday, November 01, 2000 9:09 AM\n> To: Hackers List; [email protected]\n> Subject: [HACKERS] list owner please help me get off the list.\n> \n> \n> tried the normal ways but your pos list software won't let me off.\n> Help Please.\n> \n> -----Original Message-----\n> From: Thomas Lockhart [mailto:[email protected]]\n> Sent: Wednesday, November 01, 2000 9:21 AM\n> To: Lamar Owen; Hackers List; [email protected]\n> Subject: [HACKERS] New RPMs for RedHat and Mandrake\n> \n> \n> I've posted RPMs built for Mandrake at\n> \n> ftp://ftp.postgresql.org/pub/binary/Mandrake-7.1/{RPMS,SRPMS}\n> \n> which are exact copies of the RedHat RPMs (with the version changed from\n> \"21\" to \"21mdk\") recently posted by Lamar Owens at\n> \n> ftp://ftp.postgresql.org/pub/binary/RedHat-6.x/{RPMS,SRPMS}\n> \n> I've also posted rebuilt mod_php3 RPMs to cope with the libpq library\n> version change between the original Mandrake distro and the current\n> PostgreSQL release.\n> \n> Thanks Lamar for all the work on the RPMs!\n> \n> - Thomas\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 1 Nov 2000 18:53:07 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "RE: list owner please help me get off the list." } ]
[ { "msg_contents": "Hello everyone,\n\nno hope to use PostgreSQL + Mosix, because PostgreSQL uses shared memory, as \nsuch it's not suitable for migration.\n\nMosix isn't able now to give us a single system image, where a cluster of \nmachine acts as a single transparent machine.\n\nI'm trying DIPC+GFS to do this, i've patched the 2.2.17 linux kernel with \nDIPC , changed PostgreSQL sources to create distributed shared memory and \nsemaphores (postgres uses only sem and shm), next steps are to try to \nstart-up (the changed) postgres to use this distibuted ipc and a central \nlocation for datafiles (GFS , NFS?, or a thing that permits a shared fs with \nflock and fcntl) , then do a simple connect script that , for example , do a \nround-robin connection to the different postgresql servers on the machineS.\n\nDIPC has not a failover mechanism now.\n\nIdeas?\n\nbye\nvalter\n\n\n\n>From: Johan Sjöholm <[email protected]>\n>To: \"Mosix List\" <[email protected]>\n>Subject: PSQL\n>Date: Wed, 1 Nov 2000 14:09:34 +0100\n>\n>Hello everyone,\n>\n>Anyone that has tryied PostgreesSQL under Mosix ? How well does it run ?\n>Just checking before I get going whit it;)\n>\n>And I am also going to run Apache on those machines, will that work any \n>good ?\n>\n>Whit friendly Regards\n>\n>- Johan\n>\n>\n>--\n>To unsubscribe, send message to [email protected]\n>with \"unsubscribe\" in the message body.\n>\n\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\nShare information about yourself, create your own public profile at \nhttp://profiles.msn.com.\n\n", "msg_date": "Wed, 01 Nov 2000 15:04:45 GMT", "msg_from": "\"valter m\" <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Re: PSQL, Mosix is unuseful" } ]
[ { "msg_contents": "tried the normal ways but your pos list software won't let me off.\nHelp Please.\n\n-----Original Message-----\nFrom: Thomas Lockhart [mailto:[email protected]]\nSent: Wednesday, November 01, 2000 9:21 AM\nTo: Lamar Owen; Hackers List; [email protected]\nSubject: [HACKERS] New RPMs for RedHat and Mandrake\n\n\nI've posted RPMs built for Mandrake at\n\n ftp://ftp.postgresql.org/pub/binary/Mandrake-7.1/{RPMS,SRPMS}\n\nwhich are exact copies of the RedHat RPMs (with the version changed from\n\"21\" to \"21mdk\") recently posted by Lamar Owens at\n\n ftp://ftp.postgresql.org/pub/binary/RedHat-6.x/{RPMS,SRPMS}\n\nI've also posted rebuilt mod_php3 RPMs to cope with the libpq library\nversion change between the original Mandrake distro and the current\nPostgreSQL release.\n\nThanks Lamar for all the work on the RPMs!\n\n - Thomas\n", "msg_date": "Wed, 1 Nov 2000 07:09:06 -0800 ", "msg_from": "\"Oelkers, Phil\" <[email protected]>", "msg_from_op": true, "msg_subject": "list owner please help me get off the list." } ]
[ { "msg_contents": "I've posted RPMs built for Mandrake at\n\n ftp://ftp.postgresql.org/pub/binary/Mandrake-7.1/{RPMS,SRPMS}\n\nwhich are exact copies of the RedHat RPMs (with the version changed from\n\"21\" to \"21mdk\") recently posted by Lamar Owens at\n\n ftp://ftp.postgresql.org/pub/binary/RedHat-6.x/{RPMS,SRPMS}\n\nI've also posted rebuilt mod_php3 RPMs to cope with the libpq library\nversion change between the original Mandrake distro and the current\nPostgreSQL release.\n\nThanks Lamar for all the work on the RPMs!\n\n - Thomas\n", "msg_date": "Wed, 01 Nov 2000 15:21:22 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "New RPMs for RedHat and Mandrake" } ]
[ { "msg_contents": "set digest\n\n\n", "msg_date": "Wed, 1 Nov 2000 18:07:29 +0100", "msg_from": "Sion Morris <[email protected]>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "Did you turn XLOG_DEBUG off?\nWith XLOG_DEBUG on there should be lines in log like\n\nREDO @ 0/434100; LSN 0/434252: .........\n\nbefore control would reach btree_redo...\n\nOk, anyway btree_xlog_delete is fixed. Thanks, Tatsuo!\nAnd please try again -:)\n\nVadim\n\n> It seems postmaster won't restart under WAL. What I have done so far\n> was creating some tables and inserting fairly large amount of tuples\n> (100000 tuples) using pgbench.\n> \n> Here is the test sequence:\n> \n> pg_ctl -w stop\n> rm -fr /usr/local/pgsql/data\n> initdb\n> pg_ctl -w start\n> createdb test\n> ./pgbench -i test\n> pg_ctl -w -m i stop\n> pg_ctl -w start\n> \n> and postmaster does not return from automatic recovering job. Here are\n> last 2 lines from log:\n> \n> Nov 1 12:38:46 srapc968-yotsuya postgres[14769]: [7] DEBUG: \n> The DataBase system was not properly shut down ^IAutomatic \n> recovery is in progress...\n> Nov 1 12:38:46 srapc968-yotsuya postgres[14769]: [8] DEBUG: \n> Redo starts at (0, 287536)\n> \n> It seems that the recovery process is waiting for acquiring a lock.\n> Here is the backtrace from the process:\n> \n> (gdb) where\n> #0 0x2ac4acae in __select () from /lib/libc.so.6\n> #1 0x2ac9f0ac in ?? 
()\n> #2 0x80ea18c in LockBuffer (buffer=37, mode=2) at xlog_bufmgr.c:1995\n> #3 0x8082feb in XLogReadBuffer (extend=0, reln=0x82e92c8, blkno=1)\n> at xlogutils.c:215\n> #4 0x80781e4 in btree_xlog_delete (redo=1, lsn={xlogid = 0, \n> xrecoff = 23741124}, record=0x82980dc) at nbtree.c:1013\n> #5 0x80790fe in btree_redo (lsn={xlogid = 0, xrecoff = 23741124}, \n> record=0x82980dc) at nbtree.c:1450\n> #6 0x80825b2 in StartupXLOG () at xlog.c:1452\n> #7 0x80850cd in BootstrapMain (argc=6, argv=0x7fffeffc) at \n> bootstrap.c:349\n> #8 0x80e047a in SSDataBase (startup=1 '\\001') at postmaster.c:2187\n> #9 0x80deb1a in PostmasterMain (argc=1, argv=0x7ffff694) at \n> postmaster.c:667\n> #10 0x80c3646 in main (argc=1, argv=0x7ffff694) at main.c:112\n> \n> Please let me know if you need more info.\n> --\n> Tatsuo Ishii\n> \n", "msg_date": "Wed, 1 Nov 2000 12:28:17 -0800 ", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: WAL: postmaster won't start" }, { "msg_contents": "> Did you turn XLOG_DEBUG off?\n> With XLOG_DEBUG on there should be lines in log like\n> \n> REDO @ 0/434100; LSN 0/434252: .........\n> \n> before control would reach btree_redo...\n\nNo. I just started up postmaster with -S (logs are forwarded to syslog\nin my settings).\n\n> Ok, anyway btree_xlog_delete is fixed. Thanks, Tatsuo!\n> And please try again -:)\n\nThanks. I will try again.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 02 Nov 2000 10:39:58 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "RE: WAL: postmaster won't start" } ]
[ { "msg_contents": "In Mandrake 7.2 Postgresql 7.0 is included...\n\nCheck it out on www.rpmfind.net\n\nFranck Martin\nDatabase Development Officer\nSOPAC South Pacific Applied Geoscience Commission\nFiji\nE-mail: [email protected] <mailto:[email protected]> \nWeb site: http://www.sopac.org/ <http://www.sopac.org/> \n\nThis e-mail is intended for its recipients only. Do not forward this\ne-mail without approval. The views expressed in this e-mail may not be\nneccessarily the views of SOPAC.\n\n\n\n-----Original Message-----\nFrom: Thomas Lockhart [mailto:[email protected]]\nSent: Thursday, November 02, 2000 3:21 AM\nTo: Lamar Owen; Hackers List; [email protected]\nSubject: [HACKERS] New RPMs for RedHat and Mandrake\n\n\nI've posted RPMs built for Mandrake at\n\n ftp://ftp.postgresql.org/pub/binary/Mandrake-7.1/{RPMS,SRPMS}\n\nwhich are exact copies of the RedHat RPMs (with the version changed from\n\"21\" to \"21mdk\") recently posted by Lamar Owens at\n\n ftp://ftp.postgresql.org/pub/binary/RedHat-6.x/{RPMS,SRPMS}\n\nI've also posted rebuilt mod_php3 RPMs to cope with the libpq library\nversion change between the original Mandrake distro and the current\nPostgreSQL release.\n\nThanks Lamar for all the work on the RPMs!\n\n - Thomas\n", "msg_date": "Thu, 2 Nov 2000 10:14:53 +1200 ", "msg_from": "Franck Martin <[email protected]>", "msg_from_op": true, "msg_subject": "RE: New RPMs for RedHat and Mandrake" } ]
[ { "msg_contents": "Get me off this list too....\n \nThe F**** interface does not work!\n \[email protected] <mailto:[email protected]> \nand [email protected]\n \n\nFranck Martin\nDatabase Development Officer\nSOPAC South Pacific Applied Geoscience Commission\nFiji\nE-mail: [email protected] <mailto:[email protected]> \nWeb site: <http://www.sopac.org/> http://www.sopac.org/\n\nThis e-mail is intended for its recipients only. Do not forward this e-mail\nwithout approval. The views expressed in this e-mail may not be neccessarily\nthe views of SOPAC.\n\n \n\n-----Original Message-----\nFrom: Jones, Colin [mailto:[email protected]]\nSent: Thursday, November 02, 2000 2:23 AM\nTo: 'Oelkers, Phil'; Hackers List; [email protected]\nSubject: RE: [HACKERS] list owner please help me get off the list.\nImportance: High\n\n\n\nI want off this list immediately!!!!!!!!!!! If I am not off by the end of\nthe day, I will go directly to the top! Thanks!\n\n-----Original Message----- \nFrom: Oelkers, Phil [ mailto:[email protected]\n<mailto:[email protected]> ] \nSent: Wednesday, November 01, 2000 9:09 AM \nTo: Hackers List; [email protected] \nSubject: [HACKERS] list owner please help me get off the list. \n\n\ntried the normal ways but your pos list software won't let me off. \nHelp Please. 
\n\n-----Original Message----- \nFrom: Thomas Lockhart [ mailto:[email protected]\n<mailto:[email protected]> ] \nSent: Wednesday, November 01, 2000 9:21 AM \nTo: Lamar Owen; Hackers List; [email protected] \nSubject: [HACKERS] New RPMs for RedHat and Mandrake \n\n\nI've posted RPMs built for Mandrake at \n\n ftp://ftp.postgresql.org/pub/binary/Mandrake-7.1/\n<ftp://ftp.postgresql.org/pub/binary/Mandrake-7.1/> {RPMS,SRPMS} \n\nwhich are exact copies of the RedHat RPMs (with the version changed from \n\"21\" to \"21mdk\") recently posted by Lamar Owens at \n\n ftp://ftp.postgresql.org/pub/binary/RedHat-6.x/\n<ftp://ftp.postgresql.org/pub/binary/RedHat-6.x/> {RPMS,SRPMS} \n\nI've also posted rebuilt mod_php3 RPMs to cope with the libpq library \nversion change between the original Mandrake distro and the current \nPostgreSQL release. \n\nThanks Lamar for all the work on the RPMs! \n\n - Thomas \n\n", "msg_date": "Thu, 2 Nov 2000 10:16:38 +1200 ", "msg_from": "Franck Martin <[email protected]>", "msg_from_op": true, "msg_subject": "list owner please help me get off the list too." } ]
[ { "msg_contents": "Are there any status and mode applications for postgres? I mean, an \napplication that will tell me the status of the server at the moment, and an \napp to start and stop postgres.\nDoes postgres have \"administration mode\" like a mode to make backups on, \nwithout threads connected?\n\nThanks!!!\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Wed, 1 Nov 2000 20:57:24 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": true, "msg_subject": "status applications" }, { "msg_contents": "On Mié 01 Nov 2000 20:57, Martin A. Marques wrote:\n\nSeeing that nobody responded to my questions, here I go. ;-)\n\nI think one of the poor partes about postgres is the administration tools. I \nam not a PostgreSQL hacker (would like to be one) so I don know if there are \nthings like user threads, locks and all those stuff to check for with some \nsort of administration tool.\n\nI am willing to help in the development of this tool, and I think it would be \nvery important for the PostgreSQL community, adn for PostgreSQL as a whole.\n\nAny comments? Bruce? Tom? Vadim?\n\n> Are there any status and mode applications for postgres? I mean, an\n> application that will tell me the status of the server at the moment, and\n> an app to start and stop postgres.\n> Does postgres have \"administration mode\" like a mode to make backups on,\n> without threads connected?\n>\n> Thanks!!!\n\nSaludos... 
:-)\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Thu, 2 Nov 2000 15:04:21 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: status applications" }, { "msg_contents": "Martin A. Marques writes:\n\n> Are there any status and mode applications for postgres? I mean, an \n> application that will tell me the status of the server at the moment, and an \n> app to start and stop postgres.\n\npg_ctl\n\n> Does postgres have \"administration mode\" like a mode to make backups on, \n> without threads connected?\n\nYou just disallow everyone to connect.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 2 Nov 2000 19:27:34 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: status applications" }, { "msg_contents": "On Jue 02 Nov 2000 15:27, you wrote:\n> Martin A. Marques writes:\n> > Are there any status and mode applications for postgres? 
I mean, an\n> > application that will tell me the status of the server at the moment, \n> > and an app to start and stop postgres.\n>\n> pg_ctl\n\nYes, I have just been checking on that script.\n\n> > Does postgres have \"administration mode\" like a mode to make backups on,\n> > without threads connected?\n>\n> You just disallow everyone to connect.\n\nHow would you do something like that?\n\nThanks.\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Thu, 2 Nov 2000 16:32:54 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: status applications" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> On Mi? 01 Nov 2000 20:57, Martin A. Marques wrote:\n> \n> Seeing that nobody responded to my questions, here I go. ;-)\n> \n> I think one of the poor partes about postgres is the administration tools. I \n> am not a PostgreSQL hacker (would like to be one) so I don know if there are \n> things like user threads, locks and all those stuff to check for with some \n> sort of administration tool.\n> \n> I am willing to help in the development of this tool, and I think it would be \n> very important for the PostgreSQL community, adn for PostgreSQL as a whole.\n> \n> Any comments? Bruce? Tom? Vadim?\n\nI want to write a tcl/tk utility that can monitor database connections\nand server status. I hope to start in in the next month or two.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 4 Nov 2000 11:30:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: status applications" } ]
[ { "msg_contents": "> > Did you turn XLOG_DEBUG off?\n> > With XLOG_DEBUG on there should be lines in log like\n> > \n> > REDO @ 0/434100; LSN 0/434252: .........\n> > \n> > before control would reach btree_redo...\n> \n> No. I just started up postmaster with -S (logs are forwarded to syslog\n> in my settings).\n\nOh. XLOG_DEBUG calls write(2,) - too big output for syslog I think.\n\nVadim\n", "msg_date": "Wed, 1 Nov 2000 17:28:59 -0800 ", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: WAL: postmaster won't start" }, { "msg_contents": "> > No. I just started up postmaster with -S (logs are forwarded to syslog\n> > in my settings).\n> \n> Oh. XLOG_DEBUG calls write(2,) - too big output for syslog I think.\n\nI see. Next time I will turn off -S so that I could send you\nXLOG_DEBUG outputs...\n--\nTatsuo Ishii\n\n", "msg_date": "Thu, 02 Nov 2000 10:45:33 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "RE: WAL: postmaster won't start" } ]
[ { "msg_contents": "I have installed plpgsql procedural language ok,\nI could not get any functions to work. \nI tried the most simple function as documented:\nCREATE FUNCTION sptest3 (int4) RETURNS int4 AS\n\t'BEGIN\n\t\tRETURN $1+1;\n\tEND;' LANGUAGE 'plpgsql';\n\n\nWhen i call the function from sql\nSELECT sptest3(4) AS x;\nI get the error:\n\n\"NOTICE: plpgsql: ERROR during compile of sptest3 near line 1\n\"RROR: parse error at or near \"\n\ncan anybody help?\n", "msg_date": "Thu, 2 Nov 2000 13:46:08 +1100 ", "msg_from": "Pam Withnall <[email protected]>", "msg_from_op": true, "msg_subject": "create function" }, { "msg_contents": "\nWhat version are you using? On a 7.0.2 freebsd machine,\nI cut and paste the below function and query and had\nno problems.\n\nStephan Szabo\[email protected]\n\nOn Thu, 2 Nov 2000, Pam Withnall wrote:\n\n> I have installed plpgsql procedural language ok,\n> I could not get any functions to work. \n> I tried the most simple function as documented:\n> CREATE FUNCTION sptest3 (int4) RETURNS int4 AS\n> \t'BEGIN\n> \t\tRETURN $1+1;\n> \tEND;' LANGUAGE 'plpgsql';\n> \n> \n> When i call the function from sql\n> SELECT sptest3(4) AS x;\n> I get the error:\n> \n> \"NOTICE: plpgsql: ERROR during compile of sptest3 near line 1\n> \"RROR: parse error at or near \"\n> \n> can anybody help?\n> \n\n\n", "msg_date": "Wed, 1 Nov 2000 18:53:25 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: create function" }, { "msg_contents": "Pam Withnall <[email protected]> writes:\n> When i call the function from sql\n> SELECT sptest3(4) AS x;\n> I get the error:\n\n> \"NOTICE: plpgsql: ERROR during compile of sptest3 near line 1\n> \"RROR: parse error at or near \"\n\nThe message looks just like that, eh? 
I bet it's unhappy because your\nfunction text contains DOS-style newlines (\\r\\n) not Unix-style (\\n).\n\n7.1 plpgsql will accept \\r as whitespace, but current releases don't.\nIn the meantime, save your script in a not-so-Microsoft-oriented editor.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Nov 2000 03:50:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: create function " } ]
[ { "msg_contents": "\nsounds great, then hopefully we get v7.0.3 out early next week :) thanks\n...\n\n\nOn Wed, 1 Nov 2000, Bruce Momjian wrote:\n\n> I am back, and will resolve the cvs and brand 7.0.3 tomorrow.\n> \n> \n> > \n> > this week, once I hear from bruce that he's ready ... last I heard, he was\n> > back with his old CVS problem ;)\n> > \n> > \n> > \n> > On Mon, 30 Oct 2000, Lamar Owen wrote:\n> > \n> > > Any idea when 7.0.3 will be released?\n> > > --\n> > > Lamar Owen\n> > > WGCR Internet Radio\n> > > 1 Peter 4:11\n> > > \n> > \n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org \n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> > \n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 1 Nov 2000 23:37:19 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [CORE] 7.0.3 Release date?" }, { "msg_contents": "Yes, sorry about the delay. Also, I will send a report to core about\nthe summit.\n\n> \n> sounds great, then hopefully we get v7.0.3 out early next week :) thanks\n> ...\n> \n> \n> On Wed, 1 Nov 2000, Bruce Momjian wrote:\n> \n> > I am back, and will resolve the cvs and brand 7.0.3 tomorrow.\n> > \n> > \n> > > \n> > > this week, once I hear from bruce that he's ready ... last I heard, he was\n> > > back with his old CVS problem ;)\n> > > \n> > > \n> > > \n> > > On Mon, 30 Oct 2000, Lamar Owen wrote:\n> > > \n> > > > Any idea when 7.0.3 will be released?\n> > > > --\n> > > > Lamar Owen\n> > > > WGCR Internet Radio\n> > > > 1 Peter 4:11\n> > > > \n> > > \n> > > Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> > > Systems Administrator @ hub.org \n> > > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> > > \n> > > \n> > \n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> > \n> > \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 1 Nov 2000 23:52:54 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [CORE] 7.0.3 Release date?" }, { "msg_contents": "On Wed, 1 Nov 2000, Bruce Momjian wrote:\n\n> Yes, sorry about the delay. Also, I will send a report to core about\n> the summit.\n\nis there a reason why -hackers wouldn't be interested as well? *raised\neyebrow*\n\n\n> \n> > \n> > sounds great, then hopefully we get v7.0.3 out early next week :) thanks\n> > ...\n> > \n> > \n> > On Wed, 1 Nov 2000, Bruce Momjian wrote:\n> > \n> > > I am back, and will resolve the cvs and brand 7.0.3 tomorrow.\n> > > \n> > > \n> > > > \n> > > > this week, once I hear from bruce that he's ready ... last I heard, he was\n> > > > back with his old CVS problem ;)\n> > > > \n> > > > \n> > > > \n> > > > On Mon, 30 Oct 2000, Lamar Owen wrote:\n> > > > \n> > > > > Any idea when 7.0.3 will be released?\n> > > > > --\n> > > > > Lamar Owen\n> > > > > WGCR Internet Radio\n> > > > > 1 Peter 4:11\n> > > > > \n> > > > \n> > > > Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> > > > Systems Administrator @ hub.org \n> > > > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> > > > \n> > > > \n> > > \n> > > \n> > > -- \n> > > Bruce Momjian | http://candle.pha.pa.us\n> > > [email protected] | (610) 853-3000\n> > > + If your life is a hard drive, | 830 Blythe Avenue\n> > > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> > > \n> > > \n> > \n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org \n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> > \n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 2 Nov 2000 20:22:44 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [CORE] 7.0.3 Release date?" }, { "msg_contents": "> On Wed, 1 Nov 2000, Bruce Momjian wrote:\n> \n> > Yes, sorry about the delay. Also, I will send a report to core about\n> > the summit.\n> \n> is there a reason why -hackers wouldn't be interested as well? *raised\n> eyebrow*\n\nGood question. I have some analysis of how other open-source database \ndo things differently than us, and I don't think it would be flattering\nto mention this in too public a way. I will post a more version without\nthese details to the announce/general list tonight.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 2 Nov 2000 19:28:16 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [CORE] 7.0.3 Release date?" } ]
[ { "msg_contents": "I have received no replies to this question, so I am assuming there is\nno way to do this in CVS. I will dump out the branch logs in date order\nand just grab the post-7.0.2 stuff. I will look at cvs2cl for 7.1.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nI seem to have trouble again getting cvs logs for just the 7.0.X branch.\nI am running this command from a cvs checkout tree of 7.0.X:\n\n\t$ cvs log -d'>2000-06-07 00:00:00 GMT' -rREL7_0_PATCHES\n\nAnd am seeing entries like below. Can someone please explain why I am\nseeing stuff committed in current?\n\n---------------------------------------------------------------------------\n\n\tRCS file: /home/projects/pgsql/cvsroot/pgsql/COPYRIGHT,v\n\tWorking file: COPYRIGHT\n\thead: 1.5\n\tbranch:\n\tlocks: strict\n\taccess list:\n\tsymbolic names:\n\t\tREL7_0_PATCHES: 1.5.0.2\n\t\tREL7_0: 1.5\n\t\tREL6_5_PATCHES: 1.3.0.4\n\t\tREL6_5: 1.3\n\t\tREL6_4: 1.3.0.2\n\t\trelease-6-3: 1.3\n\t\tSUPPORT: 1.1.1.1\n\t\tPG95-DIST: 1.1.1\n\tkeyword substitution: kv\n\ttotal revisions: 6;\tselected revisions: 0\n\tdescription:\n\t=============================================================================\n\t\n\tRCS file: /home/projects/pgsql/cvsroot/pgsql/GNUmakefile.in,v\n\tWorking file: GNUmakefile.in\n\thead: 1.14\n\tbranch:\n\tlocks: strict\n\taccess list:\n\tsymbolic names:\n\tkeyword substitution: kv\n\ttotal revisions: 14;\tselected revisions: 13\n\tdescription:\n\t----------------------------\n\trevision 1.14\n\tdate: 2000/10/02 22:21:21; author: petere; state: Exp; lines: +4 -2\n\t\"installcheck\" doesn't need to depend on \"all\" since we depend on the user\n\tto start up a postmaster anyway.\n\t----------------------------\n\trevision 1.13\n\tdate: 2000/09/29 17:17:31; author: petere; state: Exp; lines: +3 -1\n\tNew unified regression test 
driver, test/regress makefile cleanup,\n\tadd \"check\" and \"installcheck\" targets, straighten out make variable naming\n\tof host_os, host_cpu, etc.\n\t----------------------------\n\trevision 1.12\n\tdate: 2000/09/21 20:17:41; author: petere; state: Exp; lines: +1 -20\n\tReplace brain-dead Autoconf macros AC_ARG_{ENABLE,WITH} with something\n\tthat's actually useful, robust, consistent.\n\t\n\tBetter plan to generate aclocal.m4 as well: use m4 include directives,\n\trather than cat.\n\t...\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026", "msg_date": "Wed, 1 Nov 2000 23:30:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "More cvs branch problems (fwd)" }, { "msg_contents": "I too have tried to remove myself from this list. I've tried\npgsql-hackers-request, and I've emailed pgsql-hackers-owner\n\nMy email address is [email protected] and also [email protected] . I can only\nemail from the second, but I'd like to remove myself from the first. Can\nsomeone help me out, please?\n\nSorry to bother everyone else.\n\nJade\n\n", "msg_date": "Thu, 2 Nov 2000 09:26:50 -0800 (PST)", "msg_from": "Jade Rubick <[email protected]>", "msg_from_op": false, "msg_subject": "Another remove request" }, { "msg_contents": "Me as well!!!\n\[email protected]\n\n- r\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Jade Rubick\n> Sent: November 2, 2000 9:27 AM\n> To: PostgreSQL-development\n> Subject: [HACKERS] Another remove request\n> \n> \n> I too have tried to remove myself from this list. I've tried\n> pgsql-hackers-request, and I've emailed pgsql-hackers-owner\n> \n> My email address is [email protected] and also [email protected] . I can only\n> email from the second, but I'd like to remove myself from the first. 
Can\n> someone help me out, please?\n> \n> Sorry to bother everyone else.\n> \n> Jade\n> \n", "msg_date": "Thu, 2 Nov 2000 13:47:21 -0800", "msg_from": "\"Rob S.\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Another remove request" } ]
[ { "msg_contents": "Hello,\n\nHaving some experience with catching errors with BLOBs, I realised that it \nis really hard to notice that you forgot to enclose BLOB operations in a \ntransaction...\n\nI would like to add a check for each BLOB operation which will check whether \nwe are in a transaction, and if not it will issue a notice.\n\nThe question is how to correctly check that I am in a transaction.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Thu, 2 Nov 2000 11:09:44 +0600", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "How to check that I am in transaction inside backend" }, { "msg_contents": "Denis Perchine wrote:\n\n> Hello,\n>\n> Having some experience with catching errors with BLOBs, I realised that it\n> is really hard to notice that you forgot to enclose BLOB operations in a\n> transaction...\n>\n> I would like to add a check for each BLOB operation which will check whether\n> we are in a transaction, and if not it will issue a notice.\n>\n> The question is how to correctly check that I am in a transaction.\n\nsimply use IsTransactionBlock()\n\nChristof\n\n\n", "msg_date": "Fri, 03 Nov 2000 16:17:36 +0100", "msg_from": "Christof Petig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to check that I am in transaction inside backend" } ]
[ { "msg_contents": "\n> Well I can re-write and resubmit this patch. Add it as a \n> compile time option\n> is not bad idea. Second possibility is distribute it as patch \n> in the contrib\n> tree. And if it until not good tested not dirty with this main tree...\n> \n> Ok, I next week prepare it... \n\nOne thing that worries me though is, that it extends the sql language,\nand there has been no discussion about the chosen syntax.\n\nImho the standard embedded SQL syntax (prepare ...) could be a \nstarting point.\n\nAndreas \n", "msg_date": "Thu, 2 Nov 2000 09:35:29 +0100 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Re: [GENERAL] Query caching" }, { "msg_contents": "\nOn Thu, 2 Nov 2000, Zeugswetter Andreas SB wrote:\n\n> \n> > Well I can re-write and resubmit this patch. Add it as a \n> > compile time option\n> > is not bad idea. Second possibility is distribute it as patch \n> > in the contrib\n> > tree. And if it until not good tested not dirty with this main tree...\n> > \n> > Ok, I next week prepare it... \n> \n> One thing that worries me though is, that it extends the sql language,\n> and there has been no discussion about the chosen syntax.\n> \n> Imho the standard embedded SQL syntax (prepare ...) could be a \n> starting point.\n\n Yes, you are right... my PREPARE/EXECUTE is not too much ready to SQL92,\nI some old letter I speculate about \"SAVE/EXECUTE PLAN\" instead\nPREPARE/EXECUTE. But don't forget, it will *experimental* patch... we can \nchange it in future ..etc.\n\n\t\t\t\t\tKarel\n\n", "msg_date": "Thu, 2 Nov 2000 15:38:14 +0100 (CET)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Re: [GENERAL] Query caching" }, { "msg_contents": "Karel Zak wrote:\n\n> On Thu, 2 Nov 2000, Zeugswetter Andreas SB wrote:\n>\n> >\n> > > Well I can re-write and resubmit this patch. Add it as a\n> > > compile time option\n> > > is not bad idea. 
Second possibility is distribute it as patch\n> > > in the contrib\n> > > tree. And if it until not good tested not dirty with this main tree...\n> > >\n> > > Ok, I next week prepare it...\n> >\n> > One thing that worries me though is, that it extends the sql language,\n> > and there has been no discussion about the chosen syntax.\n> >\n> > Imho the standard embedded SQL syntax (prepare ...) could be a\n> > starting point.\n>\n> Yes, you are right... my PREPARE/EXECUTE is not too much ready to SQL92,\n> I some old letter I speculate about \"SAVE/EXECUTE PLAN\" instead\n> PREPARE/EXECUTE. But don't forget, it will *experimental* patch... we can\n> change it in future ..etc.\n>\n> Karel\n\n[Sorry, I didn't look into your patch, yet.]\nWhat about parameters? Normally you can prepare a statement and execute it\nusing different parameters. AFAIK postgres' frontend-backend protocol is not\ndesigned to take parameters for statements (e.g. like result presents\nresults). A very long road to go.\nBy the way, I'm somewhat interested in getting this feature in. Perhaps it\nshould be part of a protocol redesign (e.g. binary parameters/results).\nHandling endianness is one aspect, floats are harder (but float->ascii->float\nsometimes fails as well).\n\nChristof\n\n\n", "msg_date": "Fri, 03 Nov 2000 16:47:11 +0100", "msg_from": "Christof Petig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Re: [GENERAL] Query caching" }, { "msg_contents": "\nOn Fri, 3 Nov 2000, Christof Petig wrote:\n\n> Karel Zak wrote:\n> \n> > On Thu, 2 Nov 2000, Zeugswetter Andreas SB wrote:\n> >\n> > >\n> > > > Well I can re-write and resubmit this patch. Add it as a\n> > > > compile time option\n> > > > is not bad idea. Second possibility is distribute it as patch\n> > > > in the contrib\n> > > > tree. 
And if it until not good tested not dirty with this main tree...\n> > > >\n> > > > Ok, I next week prepare it...\n> > >\n> > > One thing that worries me though is, that it extends the sql language,\n> > > and there has been no discussion about the chosen syntax.\n> > >\n> > > Imho the standard embedded SQL syntax (prepare ...) could be a\n> > > starting point.\n> >\n> > Yes, you are right... my PREPARE/EXECUTE is not too much ready to SQL92,\n> > I some old letter I speculate about \"SAVE/EXECUTE PLAN\" instead\n> > PREPARE/EXECUTE. But don't forget, it will *experimental* patch... we can\n> > change it in future ..etc.\n> >\n> > Karel\n> \n> [Sorry, I didn't look into your patch, yet.]\n\n Please, read my old query cache and PREPARE/EXECUTE description...\n\n> What about parameters? Normally you can prepare a statement and execute it\n\n We have in PG parameters, see SPI, but now it's used inside backend only\nand not exist statement that allows to use this feature in be<->fe.\n\n> using different parameters. AFAIK postgres' frontend-backend protocol is not\n> designed to take parameters for statements (e.g. like result presents\n> results). A very long road to go.\n> By the way, I'm somewhat interested in getting this feature in. Perhaps it\n> should be part of a protocol redesign (e.g. binary parameters/results).\n> Handling endianness is one aspect, floats are harder (but float->ascii->float\n> sometimes fails as well).\n\n PREPARE <name> AS <query>\n [ USING type, ... typeN ]\n [ NOSHARE | SHARE | GLOBAL ]\n\n EXECUTE <name>\n [ INTO [ TEMPORARY | TEMP ] [ TABLE ] new_table ]\n [ USING val, ... 
valN ]\n [ NOSHARE | SHARE | GLOBAL ]\n\n DEALLOCATE PREPARE\n [ <name> [ NOSHARE | SHARE | GLOBAL ]]\n [ ALL | ALL INTERNAL ]\n\n\nAn example:\n\n\nPREPARE chris_query AS SELECT * FROM pg_class WHERE relname = $1 USING text;\n\nEXECUTE chris_query USING 'pg_shadow';\n\n\n\tOr mean you something other?\n\t\t\t\t\tKarel\n\n\n\n\n\n", "msg_date": "Mon, 6 Nov 2000 09:15:04 +0100 (CET)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Re: [GENERAL] Query caching" }, { "msg_contents": "Karel Zak wrote:\n\n> On Fri, 3 Nov 2000, Christof Petig wrote:\n>\n> > Karel Zak wrote:\n> >\n> > > On Thu, 2 Nov 2000, Zeugswetter Andreas SB wrote:\n> > >\n> > > >\n> > > > > Well I can re-write and resubmit this patch. Add it as a\n> > > > > compile time option\n> > > > > is not bad idea. Second possibility is distribute it as patch\n> > > > > in the contrib\n> > > > > tree. And if it until not good tested not dirty with this main tree...\n> > > > >\n> > > > > Ok, I next week prepare it...\n> > > >\n> > > > One thing that worries me though is, that it extends the sql language,\n> > > > and there has been no discussion about the chosen syntax.\n> > > >\n> > > > Imho the standard embedded SQL syntax (prepare ...) could be a\n> > > > starting point.\n> > >\n> > > Yes, you are right... my PREPARE/EXECUTE is not too much ready to SQL92,\n> > > I some old letter I speculate about \"SAVE/EXECUTE PLAN\" instead\n> > > PREPARE/EXECUTE. But don't forget, it will *experimental* patch... we can\n> > > change it in future ..etc.\n> > >\n> > > Karel\n> >\n> > [Sorry, I didn't look into your patch, yet.]\n>\n> Please, read my old query cache and PREPARE/EXECUTE description...\n\nSorry I can't find it in my (current) mailbox, do you have a copy around? Or can\nyou give me a keyword?\n\n> > What about parameters? 
Normally you can prepare a statement and execute it\n>\n> We have in PG parameters, see SPI, but now it's used inside backend only\n> and not exist statement that allows to use this feature in be<->fe.\n\nSad. Since ecpg would certainly benefit from this.\n\n> > using different parameters. AFAIK postgres' frontend-backend protocol is not\n> > designed to take parameters for statements (e.g. like result presents\n> > results). A very long road to go.\n> > By the way, I'm somewhat interested in getting this feature in. Perhaps it\n> > should be part of a protocol redesign (e.g. binary parameters/results).\n> > Handling endianness is one aspect, floats are harder (but float->ascii->float\n> > sometimes fails as well).\n>\n> PREPARE <name> AS <query>\n> [ USING type, ... typeN ]\n> [ NOSHARE | SHARE | GLOBAL ]\n>\n> EXECUTE <name>\n> [ INTO [ TEMPORARY | TEMP ] [ TABLE ] new_table ]\n> [ USING val, ... valN ]\n> [ NOSHARE | SHARE | GLOBAL ]\n>\n> DEALLOCATE PREPARE\n> [ <name> [ NOSHARE | SHARE | GLOBAL ]]\n> [ ALL | ALL INTERNAL ]\n>\n> An example:\n>\n> PREPARE chris_query AS SELECT * FROM pg_class WHERE relname = $1 USING text;\n\nI would prefer '?' as a parameter name, since this is in the embedded sql standard\n(do you have a copy of the 94 draft? I can mail mine to you?)\nAlso the standard says a whole lot about guessing the parameter's type.\n\nAlso I vote for ?::type or type(?) or sql's cast(...) (don't know it's syntax)\ninstead of abusing the using keyword.\n\n> EXECUTE chris_query USING 'pg_shadow';\n\nGreat idea of yours to implement this! Since I was thinking about implementing a\nmore decent schema for ecpg but had no mind to touch the backend and be-fe\nprotocol (yet).\nIt would be desirable to do an 'execute immediate using', since using input\nparameters would take a lot of code away from ecpg.\n\nYours\n Christof\n\nPS: I vote for rethinking the always ascii over the wire strategy. 
CORBA was\nproposed as a potential replacement which takes care of endianness and float\nconversions. But I would not go that far (???), perhaps taking encodings (aka\nmarshalling?) from CORBA.\n\n", "msg_date": "Wed, 08 Nov 2000 16:05:50 +0100", "msg_from": "Christof Petig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Re: [GENERAL] Query caching" }, { "msg_contents": "On Wed, Nov 08, 2000 at 04:05:50PM +0100, Christof Petig wrote:\n> Karel Zak wrote:\n> >\n> > Please, read my old query cache and PREPARE/EXECUTE description...\n> \n> Sorry I can't find it in my (current) mailbox, do you have a copy around? Or can\n> you give me a keyword?\n> \n\nIn my archives, there's this one:\n\nDate: Wed, 19 Jul 2000 10:16:13 +0200 (CEST)\nFrom: Karel Zak <[email protected]>\nTo: pgsql-hackers <[email protected]>\nSubject: [HACKERS] The query cache - first snapshot (long)\n\nHere's the URL to the archives:\n\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/2000-07/msg01098.html\n\nRoss\n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.\n", "msg_date": "Wed, 8 Nov 2000 10:07:44 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Re: [GENERAL] Query caching" }, { "msg_contents": "\nOn Wed, 8 Nov 2000, Christof Petig wrote:\n\n> Karel Zak wrote:\n> \n> > > What about parameters? Normally you can prepare a statement and execute it\n> >\n> > We have in PG parameters, see SPI, but now it's used inside backend only\n> > and not exist statement that allows to use this feature in be<->fe.\n> \n> Sad. Since ecpg would certainly benefit from this.\n> \n> > > using different parameters. 
AFAIK postgres' frontend-backend protocol is not\n> > > designed to take parameters for statements (e.g. like result presents\n> > > results). A very long road to go.\n> > > By the way, I'm somewhat interested in getting this feature in. Perhaps it\n> > > should be part of a protocol redesign (e.g. binary parameters/results).\n> > > Handling endianness is one aspect, floats are harder (but float->ascii->float\n> > > sometimes fails as well).\n> >\n> > PREPARE <name> AS <query>\n> > [ USING type, ... typeN ]\n> > [ NOSHARE | SHARE | GLOBAL ]\n> >\n> > EXECUTE <name>\n> > [ INTO [ TEMPORARY | TEMP ] [ TABLE ] new_table ]\n> > [ USING val, ... valN ]\n> > [ NOSHARE | SHARE | GLOBAL ]\n> >\n> > DEALLOCATE PREPARE\n> > [ <name> [ NOSHARE | SHARE | GLOBAL ]]\n> > [ ALL | ALL INTERNAL ]\n> >\n> > An example:\n> >\n> > PREPARE chris_query AS SELECT * FROM pg_class WHERE relname = $1 USING text;\n> \n> I would prefer '?' as a parameter name, since this is in the embedded sql standard\n> (do you have a copy of the 94 draft? I can mail mine to you?)\n\n This not depend on query cache. The '$n' is PostgreSQL query parametr\nkeyword and is defined in standard parser. The PREPARE statement not parsing\nquery it's job for standard parser.\n\n> Also the standard says a whole lot about guessing the parameter's type.\n> \n> Also I vote for ?::type or type(?) or sql's cast(...) (don't know it's syntax)\n> instead of abusing the using keyword.\n\nThe postgresql executor expect types of parametrs in separate input (array).\nI not sure how much expensive/executable is survey it from query.\n\n> > EXECUTE chris_query USING 'pg_shadow';\n> \n> Great idea of yours to implement this! 
Since I was thinking about implementing a\n> more decent schema for ecpg but had no mind to touch the backend and be-fe\n> protocol (yet).\n> It would be desirable to do an 'execute immediate using', since using input\n> parameters would take a lot of code away from ecpg.\n\nBy the way, PREPARE/EXECUTE is face only. More interesting in this period is\nquery-cache-kernel. SQL92 is really a little unlike my PREPARE/EXECUTE.\n\n\t\t\t\t\t\tKarel\n\n", "msg_date": "Thu, 9 Nov 2000 09:23:41 +0100 (CET)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Re: [GENERAL] Query caching" }, { "msg_contents": "Karel Zak wrote:\n\n> On Wed, 8 Nov 2000, Christof Petig wrote:\n>\n> > Karel Zak wrote:\n> >\n> > > > What about parameters? Normally you can prepare a statement and execute it\n> > >\n> > > We have in PG parameters, see SPI, but now it's used inside backend only\n> > > and not exist statement that allows to use this feature in be<->fe.\n> >\n> > Sad. Since ecpg would certainly benefit from this.\n\nPostponed for future improvements ...\n\n> > > PREPARE chris_query AS SELECT * FROM pg_class WHERE relname = $1 USING text;\n> >\n> > I would prefer '?' as a parameter name, since this is in the embedded sql standard\n> > (do you have a copy of the 94 draft? I can mail mine to you?)\n>\n> This not depend on query cache. The '$n' is PostgreSQL query parametr\n> keyword and is defined in standard parser. The PREPARE statement not parsing\n> query it's job for standard parser.\n\nI see.\n\n> > Also the standard says a whole lot about guessing the parameter's type.\n> >\n> > Also I vote for ?::type or type(?) or sql's cast(...) (don't know it's syntax)\n> > instead of abusing the using keyword.\n>\n> The postgresql executor expect types of parametrs in separate input (array).\n> I not sure how much expensive/executable is survey it from query.\n\nThat would involve changing the parser. 
Future project.\n\n> > > EXECUTE chris_query USING 'pg_shadow';\n> >\n> > Great idea of yours to implement this! Since I was thinking about implementing a\n> > more decent schema for ecpg but had no mind to touch the backend and be-fe\n> > protocol (yet).\n> > It would be desirable to do an 'execute immediate using', since using input\n> > parameters would take a lot of code away from ecpg.\n>\n> By the way, PREPARE/EXECUTE is face only. More interesting in this period is\n> query-cache-kernel. SQL92 is really a little unlike my PREPARE/EXECUTE.\n\nI'm looking forward to get first experiences with the query cache kernel. I think it's\nthe right way to go.\n\nChristof\n\n\n\n\n", "msg_date": "Fri, 10 Nov 2000 08:05:38 +0100", "msg_from": "Christof Petig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query caching" }, { "msg_contents": "Did someone think about query costs ? Is you prepare\nquery like SELECT id FROM t1 WHERE type=$1 and\nexecute it with $1=1 and 2. For 1 there is one record\nin t1 a all other have type=2.\nWithout caching, first query will use index, second\nnot.\nShould cached plan use index or not ?\ndevik\n\nChristof Petig wrote:\n> \n> Karel Zak wrote:\n> \n> > On Wed, 8 Nov 2000, Christof Petig wrote:\n> >\n> > > Karel Zak wrote:\n> > >\n> > > > > What about parameters? Normally you can prepare a statement and execute it\n> > > >\n> > > > We have in PG parameters, see SPI, but now it's used inside backend only\n> > > > and not exist statement that allows to use this feature in be<->fe.\n> > >\n> > > Sad. Since ecpg would certainly benefit from this.\n> \n> Postponed for future improvements ...\n> \n> > > > PREPARE chris_query AS SELECT * FROM pg_class WHERE relname = $1 USING text;\n> > >\n> > > I would prefer '?' as a parameter name, since this is in the embedded sql standard\n> > > (do you have a copy of the 94 draft? I can mail mine to you?)\n> >\n> > This not depend on query cache. 
The '$n' is PostgreSQL query parametr\n> > keyword and is defined in standard parser. The PREPARE statement not parsing\n> > query it's job for standard parser.\n> \n> I see.\n> \n> > > Also the standard says a whole lot about guessing the parameter's type.\n> > >\n> > > Also I vote for ?::type or type(?) or sql's cast(...) (don't know it's syntax)\n> > > instead of abusing the using keyword.\n> >\n> > The postgresql executor expect types of parametrs in separate input (array).\n> > I not sure how much expensive/executable is survey it from query.\n> \n> That would involve changing the parser. Future project.\n> \n> > > > EXECUTE chris_query USING 'pg_shadow';\n> > >\n> > > Great idea of yours to implement this! Since I was thinking about implementing a\n> > > more decent schema for ecpg but had no mind to touch the backend and be-fe\n> > > protocol (yet).\n> > > It would be desirable to do an 'execute immediate using', since using input\n> > > parameters would take a lot of code away from ecpg.\n> >\n> > By the way, PREPARE/EXECUTE is face only. More interesting in this period is\n> > query-cache-kernel. SQL92 is really a little unlike my PREPARE/EXECUTE.\n> \n> I'm looking forward to get first experiences with the query cache kernel. I think it's\n> the right way to go.\n> \n> Christof\n\n", "msg_date": "Fri, 10 Nov 2000 11:40:09 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Query caching" }, { "msg_contents": "On Fri, 10 Nov 2000 [email protected] wrote:\n\n> Did someone think about query costs ? Is you prepare\n> query like SELECT id FROM t1 WHERE type=$1 and\n> execute it with $1=1 and 2. For 1 there is one record\n> in t1 a all other have type=2.\n> Without caching, first query will use index, second\n> not.\n> Should cached plan use index or not ?\n> devik\n\n The postgresql already have planns caching. See SPI (saveplan), but\nit's usable for internal stuff (for example triggers..) only. 
The\nPREPARE/EXECUTE pull up it to be<->fe and make new memory type that\nallows save it in shared memory. But else it's *nothing* new. \n\n A validity of cached planns is user problem now. Not some internal\nmethod how check changes that out of date some query (or exist some idea?). \nIt can be more changes like changes in DB schema.\n\n If resolve this anyone clever person it will great for VIEW, SPI too.\n\n Rebuid a query plan in the planner is not a problem, in the cache is \nstored original query tree, but you must known when... or must know\nit a DB user.\n\n\t\t\t\t\tKarel\n\n", "msg_date": "Sun, 12 Nov 2000 12:34:51 +0100 (CET)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query caching" } ]
[ { "msg_contents": "Hello,\n\nMy previous mail about a VACUUM deadlock was silently ignored...\nNow I have a much more interesting problem...\n\nJust a few minutes ago I found out that my usual routine which recreates indices \nhas hung... I see a picture like this:\n\n10907 ? SW 0:01 /home/postgres/bin/postgres 127.0.0.1 \nwebmailstation webmailstation DROP waiting\n\nAnd lots of other backends are also waiting...\n\nThe system is quite heavily loaded. I have > 200.000 queries per day.\nThere are lots of inserts and updates, mostly updates.\n\nPostgreSQL 7.0.3pre.\nLinux 2.2.16\n\nAny comments? Ideas?\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Thu, 2 Nov 2000 19:39:29 +0600", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "DROP hangup..." } ]
[ { "msg_contents": "Hi All,\nI am new here. I have been working alone in my\nunderground lab away from the world, shunning any\ncontact with the PostgreSQL community (not intentional), and\ntesting out my own and PostgreSQL's limits, but I have run\ninto this stumbling block ...... \nEnough kidding..;-)\nYes, I am new and I want to install PL/Perl on my\nversion, postgresql-7.0.2, but when I try to make the\nPerl interpreter it just generates the dummy Makefile,\nwhich says \"Cannot build plperl because libperl is not\na shared library; skipping it.\"\n\nI know this is a very basic thing but can someone\nplease help.\n\nThanks\nNick\n\n__________________________________________________\nDo You Yahoo!?\nFrom homework help to love advice, Yahoo! Experts has your answer.\nhttp://experts.yahoo.com/\n", "msg_date": "Thu, 2 Nov 2000 10:12:29 -0800 (PST)", "msg_from": "Nick Wayne <[email protected]>", "msg_from_op": true, "msg_subject": "Installing PL/Perl" } ]
[ { "msg_contents": " Date: Thursday, November 2, 2000 @ 13:20:12\nAuthor: wieck\n\nUpdate of /home/projects/pgsql/cvsroot/pgsql/contrib/pg_dumpaccounts\n from hub.org:/tmp/cvs-serv16823/pg_dumpaccounts\n\nAdded Files:\n\tMakefile README pg_dumpaccounts.sh \n\n----------------------------- Log Message -----------------------------\n\nAdded utility script pg_dumpaccounts to contrib.\n\nDerived from pg_dumpall it just dumps users and groups.\n\nJan\n\n", "msg_date": "Thu, 2 Nov 2000 13:20:12 -0500 (EST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "pgsql/contrib/pg_dumpaccounts (Makefile README pg_dumpaccounts.sh)" }, { "msg_contents": "> Added utility script pg_dumpaccounts to contrib.\n> \n> Derived from pg_dumpall it just dumps users and groups.\n\nWe can do the same thing with a 5-line change in pg_dumpall. We don't\nneed an extra copy&pasted program for that.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 2 Nov 2000 20:14:36 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile README\n\tpg_dumpaccounts.sh)" }, { "msg_contents": "I think the issue is that we don't want to risk breaking pg_dumpall in a\nminor release.\n\nComments?\n\n> > Added utility script pg_dumpaccounts to contrib.\n> > \n> > Derived from pg_dumpall it just dumps users and groups.\n> \n> We can do the same thing with a 5-line change in pg_dumpall. We don't\n> need an extra copy&pasted program for that.\n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 2 Nov 2000 14:17:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "Bruce Momjian wrote:\n> I think the issue is that we don't want to risk breaking pg_dumpall in a\n> minor release.\n \n> Comments?\n\nFor 7.0.x, let's leave pg_dumpall alone -- it's too important to risk\nbreakage without extensive beta testing prior to release. An added\ncontrib utility is not a problem -- in fact, contrib tree changes, being\nthat they're by nature not a part of the main tree, probably can be\naccomodated even when main tree changes can't.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 02 Nov 2000 14:25:36 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts\n\t(MakefileREADME pg_dumpaccounts.sh)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I think the issue is that we don't want to risk breaking pg_dumpall in a\n> minor release.\n\nNo we don't, but I agree with Peter that pg_dumpall is the place for\nthis feature in the long run. A separate contrib script is not going\nto get maintained.\n\nWhat I want to know is why we are adding features at all in a minor\nrelease. Especially 24 or so hours before release, when there is\ncertainly no time for any testing worthy of the name. Contrib or no\ncontrib, I think this is a bad idea and a bad precedent.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Nov 2000 14:28:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile README\n\tpg_dumpaccounts.sh)" }, { "msg_contents": "Tom, your feelings on this? 
Does Lamar's argument change anything?\n\nI agree this is not optimial, and see arguments against its inclusion\neven in /contrib.\n\n> Bruce Momjian <[email protected]> writes:\n> > I think the issue is that we don't want to risk breaking pg_dumpall in a\n> > minor release.\n> \n> No we don't, but I agree with Peter that pg_dumpall is the place for\n> this feature in the long run. A separate contrib script is not going\n> to get maintained.\n> \n> What I want to know is why we are adding features at all in a minor\n> release. Especially 24 or so hours before release, when there is\n> certainly no time for any testing worthy of the name. Contrib or no\n> contrib, I think this is a bad idea and a bad precedent.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 2 Nov 2000 14:30:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "Well, here in relatively minor form is the First Example of a Great \nBridge Priority (which Tom, Bruce, and Jan have all predicted would \ncome... ;-)\n\nOur feeling is that DBAs will want to have the ability to backup user \nand group info, which you currently can't do with pg_dump. You *can* do \nit with pg_dumpall - but only if you dump every database you've got at \nthe same time. Picture a professional environment where you might have \nmany different databases running 24/7 - and doing a pg_dumpall across \nall of them at once just isn't practical. Most DBAs would prefer to \nstagger their regular backups in such an environment, one database at a \ntime. Indeed, those backups are often on fixed schedules, at different \ntimes, for real business reasons. 
And if you do that, you can't backup \nthe aforementioned system catalogs.\n\nThat's what this pg_dumpaccounts is designed to do. As you've seen, \nit's very simple - it does the same COPY stuff that pg_dumpall does \nbefore calling pg_dump, just without the pg_dump. It's an inelegant \nsolution, and shame on us for not catching the problem sooner. But it \n*is* a problem, albeit perhaps one that current PostgreSQL users haven't \nrun into yet. We're concerned that people might have a false sense of \nsecurity with pg_dump - that they might think if they backup one \ndatabase, they're able to do a full restore. They're not. And like I \nsaid, there are situations when pg_dumpall isn't the appropriate solution.\n\nWe recognize this is a temporary hack - and fully expect it to go away \nin 7.1 We actually think that the final solution might be more \nappropriate in pg_dump itself than pg_dumpall, but that's obviously a \nmuch more breakable proposition (hence the separate utility).\n\nI understand everyone's hesitation about adding a new utility this late \nin the process - and we're happy to be overruled on that (even if it's a \ndiscrete piece of code that wouldn't affect anything else...) I'm not \nwild about putting it in /contrib, but if that's what everyone wants to \ndo, ok.\n\nHave we adequately explained the need for this? Or do people think it's \nnot necessary?\n\nIf it *is* necessary (or at least worthwhile), is it the consensus of \nthe -hackers community that it go in /contrib?\n\nThanks,\nNed\n\n\nTom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> \n>> I think the issue is that we don't want to risk breaking pg_dumpall in a\n>> minor release.\n> \n> No we don't, but I agree with Peter that pg_dumpall is the place for\n> this feature in the long run. A separate contrib script is not going\n> to get maintained.\n> \n> What I want to know is why we are adding features at all in a minor\n> release. 
Especially 24 or so hours before release, when there is\n> certainly no time for any testing worthy of the name. Contrib or no\n> contrib, I think this is a bad idea and a bad precedent.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n----------------------------------------------------\nNed Lilly e: [email protected]\nVice President w: www.greatbridge.com\nEvangelism / Hacker Relations v: 757.233.5523\nGreat Bridge, LLC f: 757.233.5555\n\n", "msg_date": "Thu, 02 Nov 2000 15:01:51 -0500", "msg_from": "Ned Lilly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile README\n\tpg_dumpaccounts.sh)" }, { "msg_contents": "> I understand everyone's hesitation about adding a new utility this late \n> in the process - and we're happy to be overruled on that (even if it's a \n> discrete piece of code that wouldn't affect anything else...) I'm not \n> wild about putting it in /contrib, but if that's what everyone wants to \n> do, ok.\n> \n> Have we adequately explained the need for this? Or do people think it's \n> not necessary?\n> \n> If it *is* necessary (or at least worthwhile), is it the consensus of \n> the -hackers community that it go in /contrib?\n\nSince it is a never-before-asked-for new feature appearing in a minor\nrelease, and it is probably going away in 7.1, it is lucky to be getting\ninto /contrib. :-)\n\nThere is a good argument that it shouldn't be in 7.0.3 at all, and we\nneed the opinion of the hackers group to come to a consensus.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 2 Nov 2000 15:10:07 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "On Thu, 2 Nov 2000, Bruce Momjian wrote:\n\n> > I understand everyone's hesitation about adding a new utility this late\n> > in the process - and we're happy to be overruled on that (even if it's a\n> > discrete piece of code that wouldn't affect anything else...) I'm not\n> > wild about putting it in /contrib, but if that's what everyone wants to\n> > do, ok.\n> >\n> > Have we adequately explained the need for this? Or do people think it's\n> > not necessary?\n> >\n> > If it *is* necessary (or at least worthwhile), is it the consensus of\n> > the -hackers community that it go in /contrib?\n>\n> Since it is a never-before-asked-for new feature appearing in a minor\n> release, and it is probably going away in 7.1, it is lucky to be getting\n> into /contrib. :-)\n>\n> There is a good argument that it shouldn't be in 7.0.3 at all, and we\n> need the opinion of the hackers group to come to a consensus.\n\nI think /contrib is the proper place for it no matter when it shows up.\nWhether it's a patch release or major release, it's not like it's an\naddition that's changing the course of giant rivers or anything like that.\nIt's a simple tool that fills a void. 
Looking at the other stuff that's\nalready in contrib I don't see what all the hoopla's about.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 2 Nov 2000 15:30:17 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "Ned Lilly wrote:\n\n> Our feeling is that DBAs will want to have the ability to backup user\n> and group info, which you currently can't do with pg_dump. You *can* do\n> it with pg_dumpall - but only if you dump every database you've got at\n> the same time. Picture a professional environment where you might have\n> many different databases running 24/7 - and doing a pg_dumpall across\n> all of them at once just isn't practical. Most DBAs would prefer to\n> stagger their regular backups in such an environment, one database at a\n> time. Indeed, those backups are often on fixed schedules, at different\n> times, for real business reasons. And if you do that, you can't backup\n> the aforementioned system catalogs.\n> \n> That's what this pg_dumpaccounts is designed to do. As you've seen,\n> it's very simple - it does the same COPY stuff that pg_dumpall does\n> before calling pg_dump, just without the pg_dump. It's an inelegant\n> solution, and shame on us for not catching the problem sooner. But it\n> *is* a problem, albeit perhaps one that current PostgreSQL users haven't\n> run into yet. 
We're concerned that people might have a false sense of\n> security with pg_dump - that they might think if they backup one\n> database, they're able to do a full restore. They're not. And like I\n> said, there are situations when pg_dumpall isn't the appropriate solution.\n> \n> We recognize this is a temporary hack - and fully expect it to go away\n> in 7.1 We actually think that the final solution might be more\n> appropriate in pg_dump itself than pg_dumpall, but that's obviously a\n> much more breakable proposition (hence the separate utility).\n> \n> I understand everyone's hesitation about adding a new utility this late\n> in the process - and we're happy to be overruled on that (even if it's a\n> discrete piece of code that wouldn't affect anything else...) I'm not\n> wild about putting it in /contrib, but if that's what everyone wants to\n> do, ok.\n> \n> Have we adequately explained the need for this? Or do people think it's\n> not necessary?\n\nAs a user, I think it is necessary. In fact, I was planning to write a version of such a utility myself. It would be a shame to have to duplicate someone else's work because policy was more important than usability. \n\nPutting a short-lived utility in contrib seems fine to me, FWIW. I would certainly prefer that to putting less tested functionality into the release. But I would like it if this functionality could somehow become part of PostgreSQL as soon as is feasible.\n\nJust my $0.02 worth.\n\n-- \nKarl DeBisschop [email protected]\nLearning Network Reference http://www.infoplease.com\nNetsaint Plugin Developer [email protected]\n", "msg_date": "Thu, 02 Nov 2000 15:32:47 -0500", "msg_from": "Karl DeBisschop <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, your feelings on this? Does Lamar's argument change anything?\n\nNot for me. 
I understand Lamar's concern, but the time to be responding\nto it was two weeks ago, not today. 7.0.3 is long overdue already ---\nand in fact would be out now, had you not been out of town earlier this\nweek, no?\n\nI also don't like the fact that a commit change appeared without any\nprior discussion or even notice ... not the way to do things for a\npatch-release, IMHO.\n\nBasically I want to get 7.0.3 out the door so we can focus our full\nattention on getting 7.1 to beta. Last-minute marginal hacks are not\nthe way to go.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Nov 2000 15:42:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile README\n\tpg_dumpaccounts.sh)" }, { "msg_contents": "Ned Lilly writes:\n\n> That's what this pg_dumpaccounts is designed to do. As you've seen, \n> it's very simple - it does the same COPY stuff that pg_dumpall does \n> before calling pg_dump, just without the pg_dump.\n\nI only wonder since when the solution to a problem of the nature \"I need a\nprogram like X, that does A but not B\" is to make a textual copy of X,\nremove all the parts that do B, and sell it as a different program.\n\nI added an option for pg_dumpall now to only dump the users and\ngroups. This whole thing will probably break horribly in semantics once\nwe implement SQL roles, though.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 2 Nov 2000 22:19:06 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "Ned Lilly <[email protected]> writes:\n> Well, here in relatively minor form is the First Example of a Great \n> Bridge Priority (which Tom, Bruce, and Jan have all predicted would \n> come... ;-)\n\nHmm. 
I wasn't aware that Jan had done it at Great Bridge's request,\nand I am going to make a point of not letting that affect my opinion ;-).\n\nWhat really got my ire up was that this change was committed several\ndays *after* core had agreed that 7.0.3 was frozen and ready to go except\nfor updating the changelog, and that it was committed with no prior\nnotice or discussion. The fact that GB asked for it doesn't make that\nbetter; if anything it makes it worse. We wouldn't have accepted such\na patch at this late date from an outside contributor, I believe.\nJan should surely have known better than to handle it in this fashion.\n\nNeed I remind you, also, that GB has been bugging us for several weeks\nto get 7.0.3 released ASAP? Last-minute changes don't further that\ngoal.\n\nThe early returns from pghackers seem to be that people favor just\ndropping the script into /contrib and not worrying about how well\ntested/documented it is. If that's the consensus then I'll shut up\n... but I do *not* like the way this was handled.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Nov 2000 16:26:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile README\n\tpg_dumpaccounts.sh)" }, { "msg_contents": "> Ned Lilly <[email protected]> writes:\n> > Well, here in relatively minor form is the First Example of a Great \n> > Bridge Priority (which Tom, Bruce, and Jan have all predicted would \n> > come... ;-)\n> \n> Hmm. I wasn't aware that Jan had done it at Great Bridge's request,\n> and I am going to make a point of not letting that affect my opinion ;-).\n> \n> What really got my ire up was that this change was committed several\n> days *after* core had agreed that 7.0.3 was frozen and ready to go except\n> for updating the changelog, and that it was committed with no prior\n> notice or discussion. 
The fact that GB asked for it doesn't make that\n> better; if anything it makes it worse. We wouldn't have accepted such\n> a patch at this late date from an outside contributor, I believe.\n> Jan should surely have known better than to handle it in this fashion.\n> \n> Need I remind you, also, that GB has been bugging us for several weeks\n> to get 7.0.3 released ASAP? Last-minute changes don't further that\n> goal.\n> \n> The early returns from pghackers seem to be that people favor just\n> dropping the script into /contrib and not worrying about how well\n> tested/documented it is. If that's the consensus then I'll shut up\n> ... but I do *not* like the way this was handled.\n\nI totally agree with Tom on all his points. If people were worried we\nwould not be objective now that we are employed by GB, they can rest\neasy.\n\nAlso, seems like it is hidden enough in /contrib for it to stay. While\nI would not have added it myself, I do not feel strongly enough to\nremove Jan's commit. However, I am not going to mention it in the 7.0.3\nrelease notes.\n\nI want it removed from 7.1 /contrib. I will do that now myself.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 2 Nov 2000 16:33:40 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I want it removed from 7.1 /contrib. I will do that now myself.\n\nLooks like Peter has already eliminated the need for it for 7.1 ;-).\nWhat remains to discuss is just whether we want it as a contrib item\nin 7.0.3.\n\nAs several people mentioned, it's harmless enough in contrib. 
I'm\nmainly objecting on principle --- this should've been done with more\nattention to protocol.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Nov 2000 16:49:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile README\n\tpg_dumpaccounts.sh)" }, { "msg_contents": "Tom Lane wrote:\n> What really got my ire up was that this change was committed several\n> days *after* core had agreed that 7.0.3 was frozen and ready to go except\n> for updating the changelog, and that it was committed with no prior\n\nNow that I've seen the back story, I must agree.\n \n> The early returns from pghackers seem to be that people favor just\n> dropping the script into /contrib and not worrying about how well\n> tested/documented it is. If that's the consensus then I'll shut up\n> ... but I do *not* like the way this was handled.\n\nBruce I believe has made a good call -- it goes in contrib for now, and\ngets yanked for 7.1, which _might_ have the same functionality as a\npg_dump/pg_dumpall option (which will then get wrung out on beta\ntesting).\n\nBut I agree -- I'm not thrilled with the method.\n\nThe functionality itself sounds nice -- but I know we need a solid 7.0.3\nout. New functionality belongs in 7.1. Until beta -- and then a freeze\nthere.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 02 Nov 2000 16:51:30 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I want it removed from 7.1 /contrib. I will do that now myself.\n> \n> Looks like Peter has already eliminated the need for it for 7.1 ;-).\n> What remains to discuss is just whether we want it as a contrib item\n> in 7.0.3.\n> \n> As several people mentioned, it's harmless enough in contrib. 
I'm\n> mainly objecting on principle --- this should've been done with more\n> attention to protocol.\n\nTotally agree, and I made that point clear to GB staff. Hopefully Jan\nwill read this when he gets online and know about it too. I will phone\nhim now to update him on the issues.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 2 Nov 2000 16:54:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "On Thu, 2 Nov 2000, Ned Lilly wrote:\n\n> We recognize this is a temporary hack - and fully expect it to go away\n> in 7.1 We actually think that the final solution might be more\n> appropriate in pg_dump itself than pg_dumpall, but that's obviously a\n> much more breakable proposition (hence the separate utility).\n\nOkay, because of this paragraph, and this one only, I will agree with Tom\n(and I believe Bruce) that this should be removed. If, as Peter states,\nthis could be put into pg_dump in 5 lines, and as you say, it is a\ntemporary hack, then more appropriate would be to put a link off of the\nweb site and *not* put it into the source distribution ...\n\nI like what it does, since I can relate to the need to dump user/group\ninfo seperate from everything else, but if a permanent fix is as doable as\nPeter states, putting a temporary one, especially into a minor release,\nmakes little to no sense ...\n\nMy vote is to please remove it from the source tree ...\n\n\n> If it *is* necessary (or at least worthwhile), is it the consensus of\n> the -hackers community that it go in /contrib?\n\nAltho this is going to force me to agree with Tom concerning Karel's\npatch, it should not be added to the 7.0.x branch *at all* ... 
7.0.x is a\n*patch* release, new features are for 7.1 and 7.1 only ...\n\n\n", "msg_date": "Thu, 2 Nov 2000 20:29:52 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "On Thu, 2 Nov 2000, Tom Lane wrote:\n\n> Ned Lilly <[email protected]> writes:\n> > Well, here in relatively minor form is the First Example of a Great \n> > Bridge Priority (which Tom, Bruce, and Jan have all predicted would \n> > come... ;-)\n> \n> Hmm. I wasn't aware that Jan had done it at Great Bridge's request,\n> and I am going to make a point of not letting that affect my opinion ;-).\n> \n> What really got my ire up was that this change was committed several\n> days *after* core had agreed that 7.0.3 was frozen and ready to go except\n> for updating the changelog, and that it was committed with no prior\n> notice or discussion. The fact that GB asked for it doesn't make that\n> better; if anything it makes it worse. We wouldn't have accepted such\n> a patch at this late date from an outside contributor, I believe.\n> Jan should surely have known better than to handle it in this fashion.\n> \n> Need I remind you, also, that GB has been bugging us for several weeks\n> to get 7.0.3 released ASAP? Last-minute changes don't further that\n> goal.\n> \n> The early returns from pghackers seem to be that people favor just\n> dropping the script into /contrib and not worrying about how well\n> tested/documented it is. If that's the consensus then I'll shut up\n> ... but I do *not* like the way this was handled.\n\nI will back up Tom on this and vote against even putting it into /contrib\n... the only reason we delayed the release as we did was so that Bruce\ncould finish up the release docs, not to give \"just one more patch\" time\nto get into the tree. \n\nTom, apologies ... 
the Karel issue is the same thing, and I was in err for\neven suggesting we put *that* into contrib. \n\n", "msg_date": "Thu, 2 Nov 2000 20:34:56 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "On Thu, 2 Nov 2000, Bruce Momjian wrote:\n\n> > Ned Lilly <[email protected]> writes:\n> > > Well, here in relatively minor form is the First Example of a Great \n> > > Bridge Priority (which Tom, Bruce, and Jan have all predicted would \n> > > come... ;-)\n> > \n> > Hmm. I wasn't aware that Jan had done it at Great Bridge's request,\n> > and I am going to make a point of not letting that affect my opinion ;-).\n> > \n> > What really got my ire up was that this change was committed several\n> > days *after* core had agreed that 7.0.3 was frozen and ready to go except\n> > for updating the changelog, and that it was committed with no prior\n> > notice or discussion. The fact that GB asked for it doesn't make that\n> > better; if anything it makes it worse. We wouldn't have accepted such\n> > a patch at this late date from an outside contributor, I believe.\n> > Jan should surely have known better than to handle it in this fashion.\n> > \n> > Need I remind you, also, that GB has been bugging us for several weeks\n> > to get 7.0.3 released ASAP? Last-minute changes don't further that\n> > goal.\n> > \n> > The early returns from pghackers seem to be that people favor just\n> > dropping the script into /contrib and not worrying about how well\n> > tested/documented it is. If that's the consensus then I'll shut up\n> > ... but I do *not* like the way this was handled.\n> \n> I totally agree with Tom on all his points. If people were worried we\n> would not be objective now that we are employed by GB, they can rest\n> easy.\n> \n> Also, seems like it is hidden enough in /contrib for it to stay. 
While\n> I would not have added it myself, I do not feel strongly enough to\n> remove Jan's commit. However, I am not going to mention it in the 7.0.3\n> release notes.\n\nI do feel strongly about this ... 7.0.3 was considered in a release state\n*before* it was committed, pending your docs changes ... personally, if we\nleave this in contrib, my vote is to hold off the release a suitable\namount of time for testing purposes ... Jan has added a new feature that\nnobody had any pre-warning about, not even other developers in the same\ncompany as he is in ... not a good precedent :(\n\n\n", "msg_date": "Thu, 2 Nov 2000 20:39:21 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "> On Thu, 2 Nov 2000, Ned Lilly wrote:\n> \n> > We recognize this is a temporary hack - and fully expect it to go away\n> > in 7.1 We actually think that the final solution might be more\n> > appropriate in pg_dump itself than pg_dumpall, but that's obviously a\n> > much more breakable proposition (hence the separate utility).\n> \n> Okay, because of this paragraph, and this one only, I will agree with Tom\n> (and I believe Bruce) that this should be removed. 
If, as Peter states,\n> this could be put into pg_dump in 5 lines, and as you say, it is a\n> temporary hack, then more appropriate would be to put a link off of the\n> web site and *not* put it into the source distribution ...\n> \n> I like what it does, since I can relate to the need to dump user/group\n> info seperate from everything else, but if a permanent fix is as doable as\n> Peter states, putting a temporary one, especially into a minor release,\n> makes little to no sense ...\n> \n> My vote is to please remove it from the source tree ...\n\n\n> \n> \n> > If it *is* necessary (or at least worthwhile), is it the consensus of\n> > the -hackers community that it go in /contrib?\n> \n> Altho this is going to force me to agree with Tom concerning Karel's\n> patch, it should not be added to the 7.0.x branch *at all* ... 7.0.x is a\n> *patch* release, new features are for 7.1 and 7.1 only ...\n\nOK, we have votes from Lamar, Ned, Jan, and someone else to keep it in\n/contrib, votes from Marc and Tom to remove it completely.\n\nOther votes?\n\nIt will not be mentioned in the release notes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 2 Nov 2000 19:40:59 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "> > Also, seems like it is hidden enough in /contrib for it to stay. While\n> > I would not have added it myself, I do not feel strongly enough to\n> > remove Jan's commit. However, I am not going to mention it in the 7.0.3\n> > release notes.\n> \n> I do feel strongly about this ... 7.0.3 was considered in a release state\n> *before* it was committed, pending your docs changes ... 
personally, if we\n> leave this in contrib, my vote is to hold off the release a suitable\n> amount of time for testing purposes ... Jan has added a new feature that\n> nobody had any pre-warning about, not even other developers in the same\n> company as he is in ... not a good precedent :(\n\nThe fact that we are in the same company is pretty meaningless, as you\nhave seen.\n\nHowever, we do have two core developers opposed, one for it, and three\nusers for it. I am not voting because I can see both points.\n\nI think we need more votes or a general core vote. I am branding 7.0.3\nnow, so we have a little time for more votes, perhaps from other core\nmembers.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 2 Nov 2000 19:46:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> I do feel strongly about this ... 7.0.3 was considered in a release state\n> *before* it was committed, pending your docs changes ... personally, if we\n> leave this in contrib, my vote is to hold off the release a suitable\n> amount of time for testing purposes ...\n\nEr, since when do we do pre-release testing of contrib stuff? I'm\ngenerally in agreement that this wasn't a good idea, but I don't see a\nreason to hold off the release to test it. 
Let's wait till tomorrow to\nsee if we get more votes, and then it's either in or out.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Nov 2000 19:51:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile README\n\tpg_dumpaccounts.sh)" }, { "msg_contents": "On Thu, 2 Nov 2000, Bruce Momjian wrote:\n\n> > On Thu, 2 Nov 2000, Ned Lilly wrote:\n> >\n> > > We recognize this is a temporary hack - and fully expect it to go away\n> > > in 7.1 We actually think that the final solution might be more\n> > > appropriate in pg_dump itself than pg_dumpall, but that's obviously a\n> > > much more breakable proposition (hence the separate utility).\n> >\n> > Okay, because of this paragraph, and this one only, I will agree with Tom\n> > (and I believe Bruce) that this should be removed. If, as Peter states,\n> > this could be put into pg_dump in 5 lines, and as you say, it is a\n> > temporary hack, then more appropriate would be to put a link off of the\n> > web site and *not* put it into the source distribution ...\n> >\n> > I like what it does, since I can relate to the need to dump user/group\n> > info seperate from everything else, but if a permanent fix is as doable as\n> > Peter states, putting a temporary one, especially into a minor release,\n> > makes little to no sense ...\n> >\n> > My vote is to please remove it from the source tree ...\n>\n>\n> >\n> >\n> > > If it *is* necessary (or at least worthwhile), is it the consensus of\n> > > the -hackers community that it go in /contrib?\n> >\n> > Altho this is going to force me to agree with Tom concerning Karel's\n> > patch, it should not be added to the 7.0.x branch *at all* ... 
7.0.x is a\n> > *patch* release, new features are for 7.1 and 7.1 only ...\n>\n> OK, we have votes from Lamar, Ned, Jan, and someone else to keep it in\n> /contrib, votes from Marc and Tom to remove it completely.\n>\n> Other votes?\n>\n> It will not be mentioned in the release notes.\n>\n>\n\nSo now I'm only a \"someone else\"? Wait till you see your next link!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 2 Nov 2000 20:11:53 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "> > > Altho this is going to force me to agree with Tom concerning Karel's\n> > > patch, it should not be added to the 7.0.x branch *at all* ... 7.0.x is a\n> > > *patch* release, new features are for 7.1 and 7.1 only ...\n> >\n> > OK, we have votes from Lamar, Ned, Jan, and someone else to keep it in\n> > /contrib, votes from Marc and Tom to remove it completely.\n> >\n> > Other votes?\n> >\n> > It will not be mentioned in the release notes.\n> >\n> >\n> \n> So now I'm only a \"someone else\"? Wait till you see your next link!\n\nNo, actually, I forgot you had voted, which may be worse than forgetting\nyour name. Vince is another vote to keep it in /contrib.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 2 Nov 2000 20:13:00 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "On Thu, 2 Nov 2000, The Hermit Hacker wrote:\n\n> On Thu, 2 Nov 2000, Bruce Momjian wrote:\n>\n> > > Ned Lilly <[email protected]> writes:\n> > > > Well, here in relatively minor form is the First Example of a Great\n> > > > Bridge Priority (which Tom, Bruce, and Jan have all predicted would\n> > > > come... ;-)\n> > >\n> > > Hmm. I wasn't aware that Jan had done it at Great Bridge's request,\n> > > and I am going to make a point of not letting that affect my opinion ;-).\n> > >\n> > > What really got my ire up was that this change was committed several\n> > > days *after* core had agreed that 7.0.3 was frozen and ready to go except\n> > > for updating the changelog, and that it was committed with no prior\n> > > notice or discussion. The fact that GB asked for it doesn't make that\n> > > better; if anything it makes it worse. We wouldn't have accepted such\n> > > a patch at this late date from an outside contributor, I believe.\n> > > Jan should surely have known better than to handle it in this fashion.\n> > >\n> > > Need I remind you, also, that GB has been bugging us for several weeks\n> > > to get 7.0.3 released ASAP? Last-minute changes don't further that\n> > > goal.\n> > >\n> > > The early returns from pghackers seem to be that people favor just\n> > > dropping the script into /contrib and not worrying about how well\n> > > tested/documented it is. If that's the consensus then I'll shut up\n> > > ... but I do *not* like the way this was handled.\n> >\n> > I totally agree with Tom on all his points. 
If people were worried we\n> > would not be objective now that we are employed by GB, they can rest\n> > easy.\n> >\n> > Also, seems like it is hidden enough in /contrib for it to stay. While\n> > I would not have added it myself, I do not feel strongly enough to\n> > remove Jan's commit. However, I am not going to mention it in the 7.0.3\n> > release notes.\n>\n> I do feel strongly about this ... 7.0.3 was considered in a release state\n> *before* it was committed, pending your docs changes ... personally, if we\n> leave this in contrib, my vote is to hold off the release a suitable\n> amount of time for testing purposes ... Jan has added a new feature that\n> nobody had any pre-warning about, not even other developers in the same\n> company as he is in ... not a good precedent :(\n\nWhat am I missing? We're talking about contrib. Most things you find\nin contrib directories don't even work and you're worried about a testing\nphase? Most folks don't even look in contrib directories unless they're\nspecifically looking for something.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 2 Nov 2000 20:15:44 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "On Thu, 2 Nov 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > I do feel strongly about this ... 7.0.3 was considered in a release state\n> > *before* it was committed, pending your docs changes ... 
personally, if we\n> > leave this in contrib, my vote is to hold off the release a suitable\n> > amount of time for testing purposes ...\n> \n> Er, since when do we do pre-release testing of contrib stuff? I'm\n> generally in agreement that this wasn't a good idea, but I don't see a\n> reason to hold off the release to test it. Let's wait till tomorrow to\n> see if we get more votes, and then it's either in or out.\n\nSorry, should have added a smiley at the end of that :)\n\n\n", "msg_date": "Thu, 2 Nov 2000 21:22:25 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "While I see both sides, this looks like an *INTERNAL* *CORE* debate. \n\n From a USER perspective the functionality is useful. From a Software \nDevelopment perspective, the timing stinks. \n\n I'd leave it in /contrib, and make damn sure we get the right\nfunctionality into the 7.1 release. \n\nLER\n\n* Vince Vielhaber <[email protected]> [001102 19:18]:\n> On Thu, 2 Nov 2000, The Hermit Hacker wrote:\n> \n> > On Thu, 2 Nov 2000, Bruce Momjian wrote:\n> >\n> > > > Ned Lilly <[email protected]> writes:\n> > > > > Well, here in relatively minor form is the First Example of a Great\n> > > > > Bridge Priority (which Tom, Bruce, and Jan have all predicted would\n> > > > > come... ;-)\n> > > >\n> > > > Hmm. I wasn't aware that Jan had done it at Great Bridge's request,\n> > > > and I am going to make a point of not letting that affect my opinion ;-).\n> > > >\n> > > > What really got my ire up was that this change was committed several\n> > > > days *after* core had agreed that 7.0.3 was frozen and ready to go except\n> > > > for updating the changelog, and that it was committed with no prior\n> > > > notice or discussion. The fact that GB asked for it doesn't make that\n> > > > better; if anything it makes it worse. 
We wouldn't have accepted such\n> > > > a patch at this late date from an outside contributor, I believe.\n> > > > Jan should surely have known better than to handle it in this fashion.\n> > > >\n> > > > Need I remind you, also, that GB has been bugging us for several weeks\n> > > > to get 7.0.3 released ASAP? Last-minute changes don't further that\n> > > > goal.\n> > > >\n> > > > The early returns from pghackers seem to be that people favor just\n> > > > dropping the script into /contrib and not worrying about how well\n> > > > tested/documented it is. If that's the consensus then I'll shut up\n> > > > ... but I do *not* like the way this was handled.\n> > >\n> > > I totally agree with Tom on all his points. If people were worried we\n> > > would not be objective now that we are employed by GB, they can rest\n> > > easy.\n> > >\n> > > Also, seems like it is hidden enough in /contrib for it to stay. While\n> > > I would not have added it myself, I do not feel strongly enough to\n> > > remove Jan's commit. However, I am not going to mention it in the 7.0.3\n> > > release notes.\n> >\n> > I do feel strongly about this ... 7.0.3 was considered in a release state\n> > *before* it was committed, pending your docs changes ... personally, if we\n> > leave this in contrib, my vote is to hold off the release a suitable\n> > amount of time for testing purposes ... Jan has added a new feature that\n> > nobody had any pre-warning about, not even other developers in the same\n> > company as he is in ... not a good precedent :(\n> \n> What am I missing? We're talking about contrib. Most things you find\n> in contrib directories don't even work and you're worried about a testing\n> phase? 
Most folks don't even look in contrib directories unless they're\n> specifically looking for something.\n> \n> Vince.\n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n> 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Thu, 2 Nov 2000 19:29:39 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile README\n\tpg_dumpaccounts.sh)" }, { "msg_contents": "> I do feel strongly about this ... 7.0.3 was considered in a release state\n> *before* it was committed, pending your docs changes ... personally, if we\n> leave this in contrib, my vote is to hold off the release a suitable\n> amount of time for testing purposes ... Jan has added a new feature that\n> nobody had any pre-warning about, not even other developers in the same\n> company as he is in ... not a good precedent :(\n\nTo me the whole point of the contrib directory is as a place for \nstuff that is not officially part of the release, but that somebody \nmight find interesting or even useful. If we start enforcing \nelaborate rules about what can go in there, then we will need \nanother place to put stuff that doesn't fit those rules but might \nnonetheless be interesting or even useful. Where does it end?\n\nThis is a vote for leaving the addition in place. 
It is also a \nvote for clarifying that contrib is, specifically, the place\nto put potentially useful things that have not been officially\nqualified. The distinction is made to relieve pressure to add \ninsufficiently-considered features to the release proper, a \nfunction contrib can serve only if it is allowed to.\n\nNathan Myers\[email protected]\n\n", "msg_date": "Thu, 2 Nov 2000 17:39:17 -0800", "msg_from": "Nathan Myers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql/contrib/pg_dumpaccounts " }, { "msg_contents": "I agree with leaving it be in contrib. The lesson has been learned,\nand contrib has certainly gone out in _much_ worse shape, with code that\nwouldn't even compile.\n\nRoss\n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.\n", "msg_date": "Fri, 3 Nov 2000 09:29:07 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile README\n\tpg_dumpaccounts.sh)" }, { "msg_contents": "Bruce Momjian writes:\n\n> OK, we have votes from Lamar, Ned, Jan, and someone else to keep it in\n> /contrib, votes from Marc and Tom to remove it completely.\n> \n> Other votes?\n\nWhat part of \"no new features in bug-fix releases\" is giving people\ntrouble?\n\nIf Great Bridge wants this in their platinum certified re-release, nothing\nis stopping them. 
If Great Bridge \"bugs\" people to release ASAP, why\ndon't they release it themselves?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 3 Nov 2000 18:49:42 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > OK, we have votes from Lamar, Ned, Jan, and someone else to keep it in\n> > /contrib, votes from Marc and Tom to remove it completely.\n> > \n> > Other votes?\n> \n> What part of \"no new features in bug-fix releases\" is giving people\n> trouble?\n> \n> If Great Bridge wants this in their platinum certified re-release, nothing\n> is stopping them. If Great Bridge \"bugs\" people to release ASAP, why\n> don't they release it themselves?\n\nCertainly none of this should be based on Great Bridge and their\nsupplying of the patch, though their contacting Jan directly and him\nmaking the change with no discussion was a major mistake. Also, having\nhim install it as a new command in /bin/pg_dump/ was a major mistake\ntoo. \n\nI have talked to GB and they understand their error. Some people want\nto yank it out of /contrib too because of this error, and I can\nunderstand that reaction.\n\nIt just re-illustrates that Great Bridge has some things to learn in\nworking with the open-source community. \n\nThe larger questions is whether we allow feature additions to /contrib\nin minor releases. I think we do. In fact, I think I have even\ninstalled new 3rd party stuff like pgaccess in minor releases. In fact,\n6.5.3 has\n\n\tUpdated version of pgaccess 0.98\n\nand I am not sure if that was a bug fix or feature change to pgaccess,\nbut the conclusion was that it is a 3rd party thing and the person who\nwrote it is responsible for making sure it works. 
I think the same can\nbe said of /contrib.\n\nComments?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 3 Nov 2000 13:02:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> What part of \"no new features in bug-fix releases\" is giving people\n> trouble?\n\nInteresting observation here: the key developers seem to be much more\nexercised about this than the rest of the community. Counting core\nmembers and Peter we have three \"no\" and one \"yes\", whereas as best\nI recall the votes from the rest of pghackers are about 6 to 1 in\nfavor.\n\nMake of that what you will --- but I'm going to yield the point,\nsince the non-core sentiment seems to be very clear.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 13:03:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile README\n\tpg_dumpaccounts.sh)" }, { "msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > What part of \"no new features in bug-fix releases\" is giving people\n> > trouble?\n> \n> Interesting observation here: the key developers seem to be much more\n> exercised about this than the rest of the community. 
Counting core\n> members and Peter we have three \"no\" and one \"yes\", whereas as best\n> I recall the votes from the rest of pghackers are about 6 to 1 in\n> favor.\n> \n> Make of that what you will --- but I'm going to yield the point,\n> since the non-core sentiment seems to be very clear.\n\nI think the core/active group is more negative because of the way this\nwas done, with GB calling Jan, and Jan jamming it in with no discussion.\n\nJan originally had it in /bin/pg_dump as a new command and a new manual\npage. You can imagine the firestorm that would have caused. I called him\nright away and got it into /contrib.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 3 Nov 2000 13:07:53 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "Tom Lane wrote:\n> \n> Peter Eisentraut <[email protected]> writes:\n> > What part of \"no new features in bug-fix releases\" is giving people\n> > trouble?\n> \n> Interesting observation here: the key developers seem to be much more\n> exercised about this than the rest of the community. Counting core\n> members and Peter we have three \"no\" and one \"yes\", whereas as best\n> I recall the votes from the rest of pghackers are about 6 to 1 in\n> favor.\n> \n> Make of that what you will --- but I'm going to yield the point,\n> since the non-core sentiment seems to be very clear.\n\nISTM that developers are correctly concerned about policy. And ISTM that users place a slightly higher premium on administrative ease than they do on consistency. 
Nothing really wrong with that either.\n\nMy guess is that users would be very happy to yield on placing the script in /contrib if some alternate were proposed. I'd be just as happy if there were a link to the file on the download page. Or if it was placed in an \"interim\" directory or something like that. I just want it to be available. I would prefer if it could be done in a way that's consistent with the project policy, but that consistency is just slightly less important to me than short-term usability. \n\n-- \nKarl DeBisschop [email protected]\nLearning Network Reference http://www.infoplease.com\nNetsaint Plugin Developer [email protected]\n", "msg_date": "Fri, 03 Nov 2000 13:18:11 -0500", "msg_from": "Karl DeBisschop <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > What part of \"no new features in bug-fix releases\" is giving people\n> > trouble?\n> \n> Interesting observation here: the key developers seem to be much more\n> exercised about this than the rest of the community. Counting core\n> members and Peter we have three \"no\" and one \"yes\", whereas as best\n> I recall the votes from the rest of pghackers are about 6 to 1 in\n> favor.\n> \n> Make of that what you will --- but I'm going to yield the point,\n> since the non-core sentiment seems to be very clear.\n\nOne final analysis of this. If Jan had come to the list and explained\nthe problem, and suggested adding something to /contrib for 7.0.3, I\nthink most people would have said OK.\n\nMy guess is that many of the negative votes are based on the way this\nwas handled. I know I think about how this was handled and get upset\nmyself.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 3 Nov 2000 13:18:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "On Fri, 3 Nov 2000, Bruce Momjian wrote:\n\n> I have talked to GB and they understand their error.\n\n\tUntil the next time? This isn't the first time you've \"talked to\nthem\" ...\n\n", "msg_date": "Sun, 5 Nov 2000 00:13:54 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh)" }, { "msg_contents": "Bruce Momjian wrote:\n> Also, seems like it is hidden enough in /contrib for it to stay. While\n> I would not have added it myself, I do not feel strongly enough to\n> remove Jan's commit. However, I am not going to mention it in the 7.0.3\n> release notes.\n>\n> I want it removed from 7.1 /contrib. I will do that now myself.\n\n Need to apologize for all the trouble caused.\n\n My approach was to meet the documentations deadline of Mark\n Cotton, while making it as easy, safe and 7.0.3 vs. 7.1\n compatible. I know that it'll not stand the test of time.\n Finally (with roles) we need to develop a real solution, but\n this was IMHO better than void (what we had before).\n\n Due to my current crazy situation, there was no time for a\n discussion, so I decided just not to touch any existing code\n (not a single byte) and put it in as a separate script, MEANT\n to be removed subsequently again.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n\n\n", "msg_date": "Mon, 6 Nov 2000 04:24:04 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile\n\tREADME pg_dumpaccounts.sh))" } ]
[ { "msg_contents": "I too got somehow on the list without subscribing. Something is wrong. But I \nlike it and will stay on. :)\n\nBob Kernell\nResearch Scientist\nSurface Validation Group\nAtmospheric Sciences Competency\nAnalytical Services & Materials, Inc.\nemail: [email protected]\ntel: 757-827-4631\n\n", "msg_date": "Thu, 2 Nov 2000 13:29:03 -0500 (EST)", "msg_from": "Robert Kernell <[email protected]>", "msg_from_op": true, "msg_subject": "me too" }, { "msg_contents": "> I too got somehow on the list without subscribing. Something is wrong. But I \n> like it and will stay on. :)\n\nMarc, not sure what you are doing over there with the mailing lists, but\nkeep it up. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 2 Nov 2000 13:39:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: me too" } ]
[ { "msg_contents": "When I start the postmaster with PGDATA a relative path[*], then I\nreproducibly get things like this:\n\nDEBUG: Data Base System is in production state at Thu Nov 2 20:28:26 2000\nNOTICE: Cannot create init file pg-install/var/data/base/1/pg_internal.init.10037: No such file or directory\n Continuing anyway, but there's something wrong.\nNOTICE: mdopen: couldn't open pg-install/var/data/global/1269: No such file or directory\nNOTICE: mdopen: couldn't open pg-install/var/data/global/1264: No such file or directory\nNOTICE: mdopen: couldn't open pg-install/var/data/global/1269: No such file or directory\nFATAL 1: cannot open relation pg_log\nNOTICE: Cannot create init file pg-install/var/data/base/1/pg_internal.init.10053: No such file or directory\n Continuing anyway, but there's something wrong.\nNOTICE: mdopen: couldn't open pg-install/var/data/global/1269: No such file or directory\nNOTICE: mdopen: couldn't open pg-install/var/data/global/1264: No such file or directory\n\n(hangs here, and then after a while)\n\nFATAL: s_lock(40017029) at spin.c:131, stuck spinlock. Aborting.\n \nFATAL: s_lock(40017029) at spin.c:131, stuck spinlock. Aborting.\nServer process (pid 10053) exited with status 6 at Thu Nov 2 20:29:54 2000\nTerminating any active server processes...\nServer processes were terminated at Thu Nov 2 20:29:54 2000\nReinitializing shared memory and semaphores\n\nIt seems to me that \"Continuing anyway, but there's something wrong.\" is a\nrather inappropriate error handling, and this whole thing seems pretty\nscary for a trivial user mistake.\n\n\n[*] -- The postmaster should refuse to start when the data directory is\nnot an absolute path. I'm on that.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 2 Nov 2000 20:37:51 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Why is failure to find file a \"NOTICE\"?" 
}, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> DEBUG: Data Base System is in production state at Thu Nov 2 20:28:26 2000\n> NOTICE: Cannot create init file pg-install/var/data/base/1/pg_internal.init.10037: No such file or directory\n> Continuing anyway, but there's something wrong.\n> NOTICE: mdopen: couldn't open pg-install/var/data/global/1269: No such file or directory\n\n> It seems to me that \"Continuing anyway, but there's something wrong.\" is a\n> rather inappropriate error handling, and this whole things seems pretty\n> scary for a trivial user mistake.\n\nFor pg_internal.init, which is inessential (it's merely a cache of index\ninfo that can be extracted the hard way), I think the behavior is\nreasonable.\n\nI think the fact that mdopen doesn't treat open failure as a hard error\nis an artifact of the way we currently handle relation deletion (ie,\nphysical file goes away before end of xact, hence relation descriptor\nstill exists and might get referenced). I will try to clean this\nup when I revise DROP TABLE to postpone deletion.\n\n> [*] -- The postmaster should refuse to start when the data directory is\n> not an absolute path. I'm on that.\n\nEither that, or convert it to an absolute path. The problem is that the\nbackends chdir() to their individual databases' data directories, so\nrelative paths that were OK from the postmaster's perspective are no\ngood anymore.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Nov 2000 16:05:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is failure to find file a \"NOTICE\"? " }, { "msg_contents": "Tom Lane writes:\n\n> > [*] -- The postmaster should refuse to start when the data directory is\n> > not an absolute path. I'm on that.\n> \n> Either that, or convert it to an absolute path. 
The problem is that the\n> backends chdir() to their individual databases' data directories, so\n> relative paths that were OK from the postmaster's perspective are no\n> good anymore.\n\nIs there a profound reason for this chdir()?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 3 Nov 2000 21:50:27 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is failure to find file a \"NOTICE\"? " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> Either that, or convert it to an absolute path. The problem is that the\n>> backends chdir() to their individual databases' data directories, so\n>> relative paths that were OK from the postmaster's perspective are no\n>> good anymore.\n\n> Is there a profound reason for this chdir()?\n\nI like it because it keeps coredump files separate for backends in\ndifferent databases, not to mention separate from the postmaster's\nown corefile.\n\nIt used to be true that some places in the backend would use relative\npaths (ie, just \"foo\") to access some files, so that was also forcing\nthe working directory to be the same as the database subdirectory.\nOther places build absolute paths (or what they think are absolute\npaths, anyway) by prepending the -D string. I'm not sure if all the\nuses of relative paths have been removed or not. Just on performance\ngrounds it seems to me that using relative paths is preferable, and we\nought to be removing the prepending of the -D path rather than making\nit essential...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 15:55:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is failure to find file a \"NOTICE\"? " } ]
[ { "msg_contents": "\n> I'm looking at some point in time in the future doing a\n> 'postgresql-upgrade' RPM that would include pre-built postmasters and\n> other binaries necessary to dump any previous version PostgreSQL (since\n> about 6.2.1 or so -- 6.2.1 was the first RedHat official PostgreSQL RPM,\n> although there were 6.1.1 RPM's before that, and there is still a\n> postgres95-1.09 RPM out there), linked to the current libs for that\n> RPM's OS release. It would be a large RPM (and the source RPM for it\n> would be _huge_, containing entire tarballs for at least 6.2.1, 6.3.2,\n> 6.4.2, 6.5.3, and 7.0.3). But, this may be the only way to make this\n> work barring a real migration utility.\n> \n\tHow about instead of one huge utility that does everything for all\ncombinations, you release specific utilities for different upgrades.\nExample, one RPM for 6.5.3 -> 7.0.3 and another RPM for 6.4.2 -> 7.0.2. It\nwould result in more RPMs but they would be smaller and probably easier to\nmaintain.\n", "msg_date": "Thu, 2 Nov 2000 15:51:39 -0600 ", "msg_from": "Matthew <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)" } ]
[ { "msg_contents": "HI,\nI want to return a record from a FUNCTION in plpgsql procedural language.\nThere are very few examples to go by.\nIt doesn't accept RETURN RECORD.\nI've tried\nmaking a record in the declare section and returning OPAQUE.\n\nTYPE temp IS RECORD\n(id int4,\nname varchar(50),\n);\nIt gives the error java.sql.SQLException: ERROR: typeidTypeRelid: Invalid\ntype - oid = 0\ndoes someone have the answer?\nThanks, Pam\n", "msg_date": "Fri, 3 Nov 2000 11:08:45 +1100 ", "msg_from": "Pam Withnall <[email protected]>", "msg_from_op": true, "msg_subject": "FUNCTIONS returning a record?" } ]
[ { "msg_contents": "We've hacked up pg_dump so that it won't dump objects inherited from\ntemplate1. Unfortunately I have realized there are a couple of serious\nproblems:\n\n1. What if the inherited object is a table or a sequence? Its state may\nno longer be the same as it was in template1 (eg, a table may contain\nmore or different rows than it did when copied from template1). It\nseems a perfectly natural use of the template1 functionality to store,\nsay, a table definition in template1 and then add rows to it in\ninherited databases --- that's exactly what the system does with\npg_proc, for example.\n\n2. For that matter, even function definitions might change once we\nsupport ALTER FUNCTION, which we surely will someday. Or, template1\nmight contain data which was not present when some other database was\ncreated. In this case, reloading template1 first will not reproduce\nthe prior state of that database.\n\n3. What if the OID counter wraps around? I've been telling people\nthat's not a fatal scenario ... but it sure is if pg_dump silently omits\ndata on the basis of ordered OID comparisons.\n\nA solution that would work for pg_dumpall is to dump all the user items\nin each database and dump template1 *last*. This won't help much for\npiecemeal pg_dump and restore, however.\n\nThoughts? At the moment I'm afraid that the functionality we have is\nworse than the way prior versions behaved --- not least because anyone\nwho was putting user data in template1 has probably gotten used to the\nprior behavior. Maybe we should give up the whole idea of user data\nin template1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Nov 2000 19:35:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Unhappy thoughts about pg_dump and objects inherited from template1" }, { "msg_contents": "At 19:35 2/11/00 -0500, Tom Lane wrote:\n>We've hacked up pg_dump so that it won't dump objects inherited from\n>template1. 
Unfortunately I have realized there are a couple of serious\n>problems:\n>\n>1. What if the inherited object is a table or a sequence?\n>2. For that matter, even function definitions might change \n\nThe only solution I can think of for this would be to use lastsysoid from\ntemplate1; this is the value set when initdb runs.\n\n\n>3. What if the OID counter wraps around?\n\nCan the code that wraps the OID restart it at 'select max(lastsysoid) from\npg_database'? Is that too complex?\n\n\n>Maybe we should give up the whole idea of user data\n>in template1.\n\nI'm leaning a little this way, but local mods are useful.\n\nThere's also a problem if a db drops a function created by template1, then\ncreates its own version (eg. via (mythical) ALTER FUNCTION). If we restore\ntemplate1 then the db, we get a problem.\n\nPerhaps, for pg_dumpall:\n\n1. Restore vanilla template1 (this is probably not necessary?)\n2. Restore all DBs (dumped using template1->lastsysoid)\n3. Restore local mods to template1\n\n\nAnd for single-db dump we dump using db->lastsysoid (the assumption being\nthat the DB will be restored in the 'right' template1 context). This would\nbe the default behaviour of pg_dump.\n\nThis requires a way of asking pg_dump to use a 'system' (ie. template1) or\n'local' (ie. from the specific database) lastsysoid...( --last-oid\n{S,D}/-L{S,D}). I think this fixes it, but perhaps I'm hallucinating.\n\nDoes this sound OK?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 03 Nov 2000 12:00:48 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n>> 1. What if the inherited object is a table or a sequence?\n\n> The only solution I can think of for this would be to use lastsysoid from\n> template1; this is the value set when initdb runs.\n\nHow does that help? It won't tell you anything about updated or deleted\nrows, nor about sequence advances, nor ALTER FUNCTION changes. You\ncould detect inserted rows, and that's about it.\n\n>> 3. What if the OID counter wraps around?\n\n> Can the code that wraps the OID restart it at 'select max(lastsysoid) from\n> pg_database'? Is that too complex?\n\nWhat if lastsysoid is MAXINT minus just a little? Not very workable,\neven if it were possible for the OID counter to work that way, which\nI don't think it is (the OID allocator is way too low-level to go off\ninvoking arbitrary queries with safety).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Nov 2000 20:12:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" }, { "msg_contents": "At 20:12 2/11/00 -0500, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>>> 1. What if the inherited object is a table or a sequence?\n>\n>> The only solution I can think of for this would be to use lastsysoid from\n>> template1; this is the value set when initdb runs.\n>\n>How does that help? 
It won't tell you anything about updated or deleted\n>rows, nor about sequence advances, nor ALTER FUNCTION changes. You\n>could detect inserted rows, and that's about it.\n\nIn template1, lastsysoid is based on entries in pg_description. So it is\nvery close to restoring the original behaviour of pg_dump. I agree it won't\nfix everything, but it will ensure it is no worse than before.\n\nIn the longer term, OID wrapping will be a problem for *any* oid-based\nrestoration scheme, as will ALTER FUNCTION. This is true for old & new\npg_dump alike.\n\nThe only real solution is to go away from OID-based restore, but I can't\nsee how. An 'add-or-update' method of restoration for everything, including\nsystem tables, would be a disaster for version upgrades.\n\nAny suggestions?\n\n\n>> Can the code that wraps the OID restart it at 'select max(lastsysoid) from\n>> pg_database'? Is that too complex?\n>\n>(the OID allocator is way too low-level to go off\n>invoking arbitrary queries with safety).\n\nThought that might be the case :-(\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 03 Nov 2000 12:24:56 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" }, { "msg_contents": "Tom Lane wrote:\n> We've hacked up pg_dump so that it won't dump objects inherited from\n> template1. Unfortunately I have realized there are a couple of serious\n> problems:\n>\n> 1. What if the inherited object is a table or a sequence? 
Its state may\n> no longer be the same as it was in template1 (eg, a table may contain\n> more or different rows than it did when copied from template1). It\n> seems a perfectly natural use of the template1 functionality to store,\n> say, a table definition in template1 and then add rows to it in\n> inherited databases --- that's exactly what the system does with\n> pg_proc, for example.\n>\n> 2. For that matter, even function definitions might change once we\n> support ALTER FUNCTION, which we surely will someday. Or, template1\n> might contain data which was not present when some other database was\n> created. In this case, reloading template1 first will not reproduce\n> the prior state of that database.\n>\n> 3. What if the OID counter wraps around? I've been telling people\n> that's not a fatal scenario ... but it sure is if pg_dump silently omits\n> data on the basis of ordered OID comparisons.\n>\n> A solution that would work for pg_dumpall is to dump all the user items\n> in each database and dump template1 *last*. This won't help much for\n> piecemeal pg_dump and restore, however.\n>\n> Thoughts? At the moment I'm afraid that the functionality we have is\n> worse than the way prior versions behaved --- not least because anyone\n> who was putting user data in template1 has probably gotten used to the\n> prior behavior. Maybe we should give up the whole idea of user data\n> in template1.\n\n FWIW, what about having another \"template0\" database, where\n nobody can add user data. Initially, template0 and template1\n are identical.
CREATE DATABASE gets a new switch (used by\n the pg_dump output) that tells to create it from the vanilla\n template0 DB (generalized, so someone can setup a couple of\n template<n>'s) and all objects inherited from template1\n (those not in template0) are regularly dumped per database.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Tue, 7 Nov 2000 14:04:11 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" }, { "msg_contents": "At 14:04 7/11/00 -0500, Jan Wieck wrote:\n>> Thoughts? At the moment I'm afraid that the functionality we have is\n>> worse than the way prior versions behaved --- not least because anyone\n>> who was putting user data in template1 has probably gotten used to the\n>> prior behavior. Maybe we should give up the whole idea of user data\n>> in template1.\n>\n> FWIW, what about having another \"template0\" database, where\n> nobody can add user data. Initially, template0 and template1\n> are identical. CREATE DATABASE gets a new switch (used by\n> the pg_dump output) that tells to create it from the vanilla\n> template0 DB (generalized, so someone can setup a couple of\n> template<n>'s) and all objects inherited from template1\n> (those not in template0) are regularly dumped per database.\n\nAll pg_dump really needs is the ability to ask for a 'vanilla' database from\n'CREATE DATABASE' or createdb. It can use lastsysoid for template1/0 to\ndump all database definitions.
Any altered system objects will not be\ndumped, which is probably OK (and may even be the Right Thing).\n\nThe command to create the new database needs to ask for a vanilla database\nsomehow, but extending the SQL doesn't seem like a good idea. *Maybe* we\ncan use a new 'set' command to define the template database for the current\nsession:\n\n set pg_template <db-name>\n create database...\n\nor\n\n createdb --template=<db-name>\n\nIt would also be good to allow some kind of installation-wide default\ntemplate (not necessarily template1/0), which is overridden temporarily by\nthe 'set' command.\n\nIf we can do this, then we create template0 & 1 in the same way we create\ntemplate1 now, then set template1 as the default template.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 09 Nov 2000 00:33:21 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects\n\tinherited from template1" }, { "msg_contents": "> At 14:04 7/11/00 -0500, Jan Wieck wrote:\n>> FWIW, what about having another \"template0\" database, where\n>> nobody can add user data. Initially, template0 and template1\n>> are identically. CREATE DATABASE get's a new switch (used by\n>> the pg_dump output) that tells to create it from the vanilla\n>> template0 DB (generalized, so someone can setup a couple of\n>> template<n>'s) and all objects inherited from template1\n>> (those not in template0) are regularly dumped per database.\n\nI like that a lot. 
Solves the whole problem at a stroke, and even\nadds some extra functionality (alternate templates).\n\nDo we need an actual enforcement mechanism for \"don't modify template0\"?\nI think we could live without that for now. If you're worried about it,\none way would be to not allow connections of any sort to template0...\nin fact template0 needn't be a real database at all, just a $PGDATA/base\nsubdirectory with no pg_database entry. initdb would set it up via\ncp -r from template1, and thereafter it'd just sit there.\n\nPhilip Warner <[email protected]> writes:\n> The command to create the new database needs to ask for a vanilla database\n> somehow, but extending the SQL doesn't seem like a good idea.\n\nWhy not? CREATE DATABASE isn't a standard command in the first place,\nand it's got a couple of even-less-standard options already. I like\n\n\tCREATE DATABASE foo WITH TEMPLATE 'template0'\n\nbetter than a SET command.\n\n> It would also be good to allow some kind of installation-wide default\n> template (not necessarily template1/0),\n\nMaybe, but let's not go overboard here. For one thing, where are you\ngoing to keep that default setting? I think a hard-wired default of\ntemplate1 is a perfectly good choice.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Nov 2000 10:15:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" }, { "msg_contents": "At 10:15 8/11/00 -0500, Tom Lane wrote:\n>I like\n>\n>\tCREATE DATABASE foo WITH TEMPLATE 'template0'\n>\n>better than a SET command.\n\nJust seems like we'd be forcing non-standard syntax on ourselves when/if\nCREATE DATABASE becomes CREATE SCHEMA; I would assume that the two\nstatements would become synonymous? Since this code is only for pg_dump,\npolluting CREATE DATABASE even further seems like a bad idea. No big deal,\nthough. 
\n\n[Minor aside: would 'FROM TEMPLATE' be better?]\n\nQuestion: if I issue a \"CREATE DATABASE foo WITH TEMPLATE 'my-favorite-db'\"\nwill I just get a copy of the specified database, including data?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 09 Nov 2000 02:48:50 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects\n\tinherited from template1" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> Just seems like we'd be forcing non-standard syntax on ourselves when/if\n> CREATE DATABASE becomes CREATE SCHEMA; I would assume that the two\n> statements would become synonymous?\n\nNo, I don't think so --- we already have WITH LOCATION and WITH\nENCODING, neither of which look like schema-level properties to me.\n\n> [Minor aside: would 'FROM TEMPLATE' be better?]\n\nWITH is already embedded in the CREATE DATABASE syntax.\n\n> Question: if I issue a \"CREATE DATABASE foo WITH TEMPLATE 'my-favorite-db'\"\n> will I just get a copy of the specified database, including data?\n\nIf we allow it, that's what would happen. Seems like a potential\nsecurity hole though ... should we restrict the set of clonable\ntemplates somehow?\n\nIt occurs to me that the current implementation of CREATE DATABASE\nassumes that no changes are actively going on in the cloned database;\nfor example, you'd miss copying any pages that are sitting in dirty\nbuffers in shared memory. So trying to copy an active database this\nway is a recipe for trouble. Probably better restrict it to identified\ntemplate databases. 
Maybe only allow cloning from DBs that are named\ntemplateNNN?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Nov 2000 10:56:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" }, { "msg_contents": "At 10:56 8/11/00 -0500, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> Just seems like we'd be forcing non-standard syntax on ourselves when/if\n>> CREATE DATABASE becomes CREATE SCHEMA; I would assume that the two\n>> statements would become synonymous?\n>\n>No, I don't think so --- we already have WITH LOCATION and WITH\n>ENCODING, neither of which look like schema-level properties to me.\n\nCREATE SCHEMA supports character set specification, so I'd guess 'WITH\nENCODING' will apply in some form. It also support a 'schema path name',\nwhich may or may not map to locations.\n\n\n>> Question: if I issue a \"CREATE DATABASE foo WITH TEMPLATE 'my-favorite-db'\"\n>> will I just get a copy of the specified database, including data?\n>\n>If we allow it, that's what would happen. Seems like a potential\n>security hole though ... should we restrict the set of clonable\n>templates somehow?\n\nIt would be nice to have a 'supported' COPY DATABASE (which is what we're\ntalking about, really), so I'd vote for being able to use any DB as a\ntemplate, if possible.\n\nCan we restrict the command to databases that have only one active backend?\nOr add an 'istemplate' flag set in pg_database? I don't really like relying\non specific name formats, if we can avoid it.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 09 Nov 2000 03:06:30 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects\n\tinherited from template1" }, { "msg_contents": "On Thu, Nov 09, 2000 at 02:48:50AM +1100, Philip Warner wrote:\n> At 10:15 8/11/00 -0500, Tom Lane wrote:\n> >I like\n> >\n> >\tCREATE DATABASE foo WITH TEMPLATE 'template0'\n> >\n> >better than a SET command.\n> \n> Just seems like we'd be forcing non-standard syntax on ourselves when/if\n> CREATE DATABASE becomes CREATE SCHEMA; I would assume that the two\n> statements would become synonymous? Since this code is only for pg_dump,\n> polluting CREATE DATABASE even further seems like a bad idea. No big deal,\n> though. \n\nNope, we'll still have databases, with schema inside them. Schema are\nessentially a logical namespace, while a database encompasses all the data\nobjects accessible to one session (via standard SQL), i.e. one backend.\n\nAs Tom said, creating and maintaining those are 'implementation defined'\nin the standard.\n\nRoss\n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.\n", "msg_date": "Wed, 8 Nov 2000 10:11:17 -0600", "msg_from": "\"Ross J. 
Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> It would be nice to have a 'supported' COPY DATABASE (which is what we're\n> talking about, really), so I'd vote for being able to use any DB as a\n> template, if possible.\n\n> Can we restrict the command to databases that have only one active backend?\n\nNo active backends would be more like it. The problem here is that\nthere's a race condition much like the one for DROP DATABASE --- there\nmay be no one connected when you look, but that's no guarantee someone\ncan't connect right after you look.\n\nWe're already overdue for beta, so I really don't want to start\ndesigning/implementing a generalized COPY DATABASE. (We're not\nofficially in feature freeze yet, but inventing new features off the\ntop of our heads doesn't seem like the thing to be doing now.)\nI'd like to see a proper fix for the inherited-data problem, though,\nsince that's clearly a bug in an existing feature.\n\n> Or add an 'istemplate' flag set in pg_database? I don't really like relying\n> on specific name formats, if we can avoid it.\n\nThat's reasonable I guess.\n\nDo we still need the lastsysoid column in pg_database if we do things\nthis way? Seems like what you really want is to suppress all the\nobjects that are in template0, so you really only need one lastsysoid\nvalue, namely template0's. The other entries are useless AFAICS.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Nov 2000 11:13:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" }, { "msg_contents": "At 11:13 8/11/00 -0500, Tom Lane wrote:\n>\n>Do we still need the lastsysoid column in pg_database if we do things\n>this way? 
Seems like what you really want is to suppress all the\n>objects that are in template0, so you really only need one lastsysoid\n>value, namely template0's. The other entries are useless AFAICS.\n\nThat sounds reasonable; although there may be some value in allowing dumps\nrelative to template0 OR template1. Not sure.\n\nWhere would you store the value if not in pg_database?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 09 Nov 2000 03:24:05 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects\n\tinherited from template1" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> Where would you store the value if not in pg_database?\n\nNo other ideas at the moment. I was just wondering whether there was any\nway to delete it entirely, but seems like we want to have the value for\ntemplate0 available. The old way of hardwiring knowledge into pg_dump\nwas definitely not as good.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Nov 2000 11:42:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" }, { "msg_contents": "Tom Lane wrote:\n>\n> Do we still need the lastsysoid column in pg_database if we do things\n> this way? Seems like what you really want is to suppress all the\n> objects that are in template0, so you really only need one lastsysoid\n> value, namely template0's. The other entries are useless AFAICS.\n>\n> regards, tom lane\n\n Right. 
All we dump after having a non-accessible template0 is\n the difference to that. So that a dump will create its\n database from that template0 (no matter wherever it was\n created from originally) and \"patch\" it (i.e. restoring all\n diffs) to look like at dump time.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Thu, 9 Nov 2000 09:36:20 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" }, { "msg_contents": "At 09:47 9/11/00 -0500, Jan Wieck wrote:\n>\n> To make pg_dump failsafe, we'd IMHO need to freeze all\n> objects that come with template0 copying.\n>\n> For now we have oid's 1-16383 hardwired from the bki files.\n> Some 16384-xxxxx get allocated by initdb after bootstrap, so\n> we just need to bump the oid counter at the end of initdb (by\n> some bootstrap interface command) to lets say 32768 and\n> reject any attempt to touch an object with a lower oid.\n>\n\nI'd still like to see this number stored in the pgsql catalog somewhere\n(not just header files).\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N.
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 10 Nov 2000 01:36:46 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects\n\tinherited from template1" }, { "msg_contents": "Tom Lane wrote:\n> Philip Warner <[email protected]> writes:\n> > Where would you store the value if not in pg_database?\n>\n> No other ideas at the moment. I was just wondering whether there was any\n> way to delete it entirely, but seems like we want to have the value for\n> template0 available. The old way of hardwiring knowledge into pg_dump\n> was definitely not as good.\n\n To make pg_dump failsafe, we'd IMHO need to freeze all\n objects that come with template0 copying.\n\n For now we have oid's 1-16383 hardwired from the bki files.\n Some 16384-xxxxx get allocated by initdb after bootstrap, so\n we just need to bump the oid counter at the end of initdb (by\n some bootstrap interface command) to lets say 32768 and\n reject any attempt to touch an object with a lower oid.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n\n\n", "msg_date": "Thu, 9 Nov 2000 09:47:29 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" }, { "msg_contents": "Jan Wieck <[email protected]> writes:\n> For now we have oid's 1-16383 hardwired from the bki files.\n> Some 16384-xxxxx get allocated by initdb after bootstrap, so\n> we just need to bump the oid counter at the end of initdb (by\n> some bootstrap interface command) to lets say 32768 and\n> reject any attempt to touch an object with a lower oid.\n\nWhat do you mean by \"touch\"? The system catalogs certainly can't\nbe made read-only in general.\n\nAFAIK we already have sufficient defenses against unwanted hackery on\nthe system catalogs, and so I don't really see a need for another level\nof checking.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 09 Nov 2000 10:50:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" }, { "msg_contents": "Mark Hollomon <[email protected]> writes:\n> How does this solve the 'ALTER FUNCTION' problem?\n\nWhat's that got to do with it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 09 Nov 2000 21:33:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" }, { "msg_contents": "At 22:24 9/11/00 -0500, Mark Hollomon wrote:\n>On Wednesday 08 November 2000 10:15, Tom Lane wrote:\n>> > At 14:04 7/11/00 -0500, Jan Wieck wrote:\n>> >> FWIW, what about having another \"template0\" database, where\n>> >> nobody can add user data. Initially, template0 and template1\n>> >> are identically. 
CREATE DATABASE gets a new switch (used by\n>> >> the pg_dump output) that tells to create it from the vanilla\n>> >> template0 DB (generalized, so someone can setup a couple of\n>> >> template<n>'s) and all objects inherited from template1\n>> >> (those not in template0) are regularly dumped per database.\n>>\n>> I like that a lot. Solves the whole problem at a stroke, and even\n>> adds some extra functionality (alternate templates).\n>>\n>\n>How does this solve the 'ALTER FUNCTION' problem?\n>\n\nI think both this and the OID-wrap problem will be permanent features until\nwe have a non-oid-based dump procedure. Pretty much every piece of metadata\nneeds some kind of 'I am a system object, don't dump me' flag. \n\nRelying on values of numeric OIDs is definitely clunky, but it's all we can\ndo at the moment.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 10 Nov 2000 13:35:34 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects\n\tinherited from template1" }, { "msg_contents": "On Wednesday 08 November 2000 10:15, Tom Lane wrote:\n> > At 14:04 7/11/00 -0500, Jan Wieck wrote:\n> >> FWIW, what about having another \"template0\" database, where\n> >> nobody can add user data. Initially, template0 and template1\n> >> are identical.
CREATE DATABASE gets a new switch (used by\n> >> the pg_dump output) that tells to create it from the vanilla\n> >> template0 DB (generalized, so someone can setup a couple of\n> >> template<n>'s) and all objects inherited from template1\n> >> (those not in template0) are regularly dumped per database.\n>\n> I like that a lot. Solves the whole problem at a stroke, and even\n> adds some extra functionality (alternate templates).\n>\n\nHow does this solve the 'ALTER FUNCTION' problem?\n\n-- \nMark Hollomon\n", "msg_date": "Thu, 9 Nov 2000 22:24:10 -0500", "msg_from": "Mark Hollomon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" }, { "msg_contents": "Jan Wieck wrote:\n> Tom Lane wrote:\n> > Philip Warner <[email protected]> writes:\n> > > Where would you store the value if not in pg_database?\n> >\n> > No other ideas at the moment. I was just wondering whether there was any\n> > way to delete it entirely, but seems like we want to have the value for\n> > template0 available. The old way of hardwiring knowledge into pg_dump\n> > was definitely not as good.\n> \n> To make pg_dump failsafe, we'd IMHO need to freeze all\n> objects that come with template0 copying.\n\nHere's another (somewhat) unhappy thought: what if there are objects\nin template1 or other databases that one doesn't want to dump or\nrestore?\n\nThis is very much the case for user-defined types that usually consist\nof multiple dozens of components. Currently, pg_dump picks them up\nbased on their oid, whether or not they are sitting in template1, and\ndumps them in a non-restorable and non-portable manner along with the\nuser data. Consequently, I have to write filters to pluck the type\ncode out from the dump. The filters are ugly, unreliable and have to\nbe maintained in sync with the types. \n\nPicture this, though: if int and float were user-defined types --\nwould anyone be happy seeing them in every dump?
Or, even worse,\nresponding to \"object already exists\" kind of problems during restore?\n\nNot that I couldn't get by like this; but since everybody seems\nunhappy too, maybe it's a good moment to consider a special 'dump'\nattribute for every object in the schema? The attribute could be\nlooked at by dump and restore tools and set by whatever rules one may\nfind appropriate.\n\n--Gene\n", "msg_date": "Thu, 09 Nov 2000 22:59:53 -0600", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" }, { "msg_contents": "\nPhilip Warner wrote:\n\n> I think both this and the OID-wrap problem will be permanent features until\n> we have a non-oid-based dump procedure. Pretty much every piece of metadata\n> needs some kind of 'I am a system object, don't dump me' flag. \n\nCuriously enough, Philip, you seem to have been ahead of me by just a\nfew keystrokes, so I didn't read your observation until I sent mine.\n\n> Relying on values of numeric OIDs is definitely clunky, but it's all we can\n> do at the moment.\n\nI held that one up, but now I am wondering: would checking a \"don't\ndump me\" flag involve any more code or would it be any more\ndifficult than the current (oid > n)?
Seems like a straightforward\n>change to me, so what's the reason for this \"all we can do\" sentiment?\n\nThe imminent release of 7.1, the fact that I am not totally sold on the\nidea myself, and the fact that it would require a new attribute on many\nsystem tables. It is *a* solution to the problem, but I'd very much like to\nfind a different one if possible.\n\nI have also mentioned this on two occasions now, and each has met with\ntotal silence. I have come to interpret this to mean either (a) the idea is\ntoo stupid to rate a comment, or (b) go ahead with the proposal. Since I am\nnot really proposing anything, I assume the correct interpretation is (a).\n:-(.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 10 Nov 2000 16:51:19 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects\n\tinherited from template1" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> I have also mentioned this on two occasions now, and each has met with\n> total silence. I have come to interpret this to mean either (a) the idea is\n> too stupid to rate a comment, or (b) go ahead with the proposal.\n\nMore like \"oof ...\"\n\nYou're right, it's *a* solution, but it'd involve a lot of tedious work.\nIt's not just adding a column to all the system tables. If I interpret\ncorrectly what Mark and Gene are concerned about, it'd also mean\nchanging the code so that any update to a system-table row would\nautomatically clear the \"I'm a standard item\" flag. 
That's not just\ntedious, it's also the sort of thing that will break because someone\nforgets to do it someplace.\n\nI think everyone is keeping quiet until they can think of a better\nidea...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 10 Nov 2000 01:21:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" }, { "msg_contents": "Philip Warner writes:\n\n> I think both this and the OID-wrap problem will be permanent features until\n> we have a non-oid-based dump procedure. Pretty much every piece of metadata\n> needs some kind of 'I am a system object, don't dump me' flag. \n\nWhen we implement schemas, then all objects belonging to the\nDEFINITION_SCHEMA will not be dumped, all other objects will be. At least\nI imagine that this might be something to work with.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 10 Nov 2000 17:24:54 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited\n\tfrom template1" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> When we implement schemas, then all objects belonging to the\n> DEFINITION_SCHEMA will not be dumped, all other objects will be. At least\n> I imagine that this might be something to work with.\n\nThat's a thought, although it still doesn't cope with the issue of\n\"what if I've altered a standard system object?\" ... which is what\nI think Mark was getting at yesterday. 
I'm not sure there's any\nreasonable way to handle that, though, short of diff'ing against a\ndump of template1 :-(\n\nTo bring this back from future nice solutions to the reality of what\nto do today, do people like the \"template0\" solution for now (7.1)?\nI can work on it if so.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 10 Nov 2000 11:39:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" }, { "msg_contents": "On Friday 10 November 2000 11:39, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n> > When we implement schemas, then all objects belonging to the\n> > DEFINITION_SCHEMA will not be dumped, all other objects will be. At\n> > least I imagine that this might be something to work with.\n>\n> That's a thought, although it still doesn't cope with the issue of\n> \"what if I've altered a standard system object?\" ... which is what\n> I think Mark was getting at yesterday. I\n\nCorrect. I don't know why anyone would want to change the definition of\n(say) int48eq, but if we are going to allow them to do so, we should be\ncareful to allow them to backup and restore such a change.\n\nThe template0 solution is at least better than what we have. And since I\nhave no other more brilliant suggestions, I would vote for it.\n-- \nMark Hollomon\n", "msg_date": "Fri, 10 Nov 2000 21:52:06 -0500", "msg_from": "Mark Hollomon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" }, { "msg_contents": "At 11:39 10/11/00 -0500, Tom Lane wrote:\n>\n>To bring this back from future nice solutions to the reality of what\n>to do today, do people like the \"template0\" solution for now (7.1)?\n>I can work on it if so.\n>\n\nBeing able to create a vanilla DB is essential to make pg_dump work with\ncustomized templates, and I can't think of a better solution.
So yes, it's\ndefinitely a good idea.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 11 Nov 2000 14:44:29 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects\n\tinherited from template1" }, { "msg_contents": "At 01:21 10/11/00 -0500, Tom Lane wrote:\n>\n>You're right, it's *a* solution, but it'd involve a lot of tedious work.\n>It's not just adding a column to all the system tables. If I interpret\n>correctly what Mark and Gene are concerned about, it'd also mean\n>changing the code so that any update to a system-table row would\n>automatically clear the \"I'm a standard item\" flag. \n\nI appreciate that (I think) I have said the opposite before, but I'd\nactually vote against this; once something is defined as a 'system item',\nit should not be the job of pg_dump to restore it, even if a DBA has\nchanged it. This is the correct behaviour since system objects will, almost\nby definition, depend on the version of PG, and the dumped database needs\nto be as close as possible to version-agnostic. In fact, the reason for the\nrestore may be to go back to a vanilla system after corrupting the old\nsystem catalog...\n\nAs previously observed, we have three things to restore:\n\n1. The base system. This is done by initdb, which creates template0/1.\n\n2. The local extensions to the template database.\n\n3. The local databases. 
We need to be able to restore these one at a time\nin the presence of a localized template1 as well as in the presence of a\nvanilla template1.\n\nImplementing template0 will suffice for the moment, and maybe later we need\nto consider some kind of 'isSystemObject' flag.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 11 Nov 2000 15:26:11 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects\n\tinherited from template1" }, { "msg_contents": "Mark Hollomon wrote:\n\n> Correct. I don't know why anyone would want to change the definition of\n> (say) int48eq, but if we are going to allow them to do so, we should be\n> careful to allow them to backup and restore such a change.\n\nYes, and it is also important that if such weirdos exist, they are\nallowed to backup this type of change separately from the databases.\n\n--Gene\n", "msg_date": "Fri, 10 Nov 2000 23:09:15 -0600", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" }, { "msg_contents": "Tom Lane wrote:\n> To bring this back from future nice solutions to the reality of what\n> to do today, do people like the \"template0\" solution for now (7.1)?\n> I can work on it if so.\n\n Go for it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n\n\n", "msg_date": "Wed, 15 Nov 2000 04:31:28 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unhappy thoughts about pg_dump and objects inherited from\n\ttemplate1" } ]
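[Editor's note: the \"template0\" approach agreed on in this thread is the one that shipped in PostgreSQL 7.1, where restores can target a database cloned from a pristine, never-modified template. A minimal sketch of the intended usage, assuming the TEMPLATE clause of CREATE DATABASE as it exists in 7.1 and later; the database name is made up for illustration:]

```sql
-- Create the restore target from the pristine template, so that any
-- site-local objects added to template1 cannot collide with the dump:
CREATE DATABASE restored_db TEMPLATE template0;
-- Then feed the pg_dump output into it, e.g.:
--   psql restored_db -f dump.sql
```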
[ { "msg_contents": ">> I too got somehow on the list without subscribing.\n>> Something is wrong. But I \n>> like it and will stay on. :)\n\n> Marc, not sure what you are doing over there with the\n> mailing lists, but\n> keep it up. :-)\n\nHmmm! What happened to the digest mode? I can't reset the thing.\nAnd what happened to \"http://www.postgresql.org/\"?\n\nIndividual emails are killing me (when added to all the other mail I get). I have to unsubscribe.\n\nIf someone knows how to reset digest mode and access the developer site, email me directly.\n\n*\nL. S.\n**\n\n----------------------------------------------------------------\nGet your free email from AltaVista at http://altavista.iname.com\n", "msg_date": "Thu, 2 Nov 2000 22:06:35 -0500 (EST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: me too" }, { "msg_contents": "\nokay, to date I've just been manually fixing stuff like this, but its time\nto debug what the problem is here ...\n\nso, what have you tried to do to set it as digest, and what error did you\nget? \n\nOn Thu, 2 Nov 2000 [email protected] wrote:\n\n> >> I too got somehow on the list without subscribing.\n> >> Something is wrong. But I \n> >> like it and will stay on. :)\n> \n> > Marc, not sure what you are doing over there with the\n> > mailing lists, but\n> > keep it up. :-)\n> \n> Hmmm! What happened to the digest mode? I can't reset the thing.\n> And what happened to \"http://www.postgresql.org/\"?\n> \n> Individual emails are killing me (when added to all the other mail I get). I have to unsubscribe.\n> \n> If someone knows how to reset digest mode and access the developer site, email me directly.\n> \n> *\n> L. S.\n> **\n> \n> ----------------------------------------------------------------\n> Get your free email from AltaVista at http://altavista.iname.com\n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 2 Nov 2000 23:30:31 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: me too" }, { "msg_contents": "On Thu, 2 Nov 2000 [email protected] wrote:\n\n> >> I too got somehow on the list without subscribing.\n> >> Something is wrong. But I\n> >> like it and will stay on. :)\n>\n> > Marc, not sure what you are doing over there with the\n> > mailing lists, but\n> > keep it up. :-)\n>\n> Hmmm! What happened to the digest mode? I can't reset the thing.\n> And what happened to \"http://www.postgresql.org/\"?\n>\n> Individual emails are killing me (when added to all the other mail I get). I have to unsubscribe.\n>\n> If someone knows how to reset digest mode and access the developer site, email me directly.\n\nWhat's the matter with the website? I can access it just fine. Perhaps\nyou're finding a bad mirror site?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 3 Nov 2000 05:57:05 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: me too" } ]
[ { "msg_contents": "I have marked 7.0.3 release tree. The new 7.0.3 items are listed below.\nI apologize for the delay.\n\n---------------------------------------------------------------------------\n\t\n\tJdbc fixes (Peter)\n\tLarge object fix (Tom)\n\tFix leak in COPY WITH OIDS (Tom)\n\tFix backwards-index-scan (Tom)\n\tFix SELECT ... FOR UPDATE so it checks for duplicate keys (Hiroshi)\n\tAdd --enable-syslog to configure (Marc)\n\tFix abort transaction at backend exit in rare cases (Tom)\n\tFix for psql \\l+ when multi-byte enabled (Tatsuo)\n\tAllow PL/pgSQL to accept non-ASCII identifiers (Tatsuo)\n\tMake vacuum always flush buffers (Tom)\n\tFix to allow cancel while waiting for a lock (Hiroshi)\n\tFix for memory allocation problem in user authentication code (Tom)\n\tRemove bogus use of int4out() (Tom)\n\tFixes for multiple subqueries in COALESCE or BETWEEN (Tom)\n\tFix for failure of triggers on heap open in certain cases (Jeroen van\n\t Vianen)\n\tFix for erroneous selectivity of not-equals (Tom)\n\tFix for erroneous use of strcmp() (Tom)\n\tFix for bug where storage manager accesses items beyond end of file\n\t (Tom)\n\tFix to include kernel errno message in all smgr elog messages (Tom)\n\tFix for '.' not in PATH at build time (SL Baur)\n\tFix for out-of-file-descriptors error (Tom)\n\tFix to make pg_dump dump 'iscachable' flag for functions (Tom)\n\tFix for subselect in targetlist of Append node (Tom)\n\tFix for mergejoin plans (Tom)\n\tFix TRUNCATE failure on relations with indexes (Tom)\n\tAvoid database-wide restart on write error (Hiroshi)\n\tFix nodeMaterial to honor chgParam by recomputing its output (Tom)\n\tFix VACUUM problem with moving chain of update tuples when source and\n\t destination of a tuple lie on the same page (Tom)\n\tFix user.c CommandCounterIncrement (Tom)\n\tFix for AM/PM boundary problem in to_char() (Karel Zak)\n\tFix TIME aggregate handling (Tom)\n\tFix to_char() to avoid coredump on NULL input. 
(Tom)\n\tBuffer fix (Tom)\n\tFix for inserting/copying longer multibyte strings into bpchar data\n\t types (Tatsuo)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 2 Nov 2000 22:44:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "7.0.3 branded" }, { "msg_contents": "Bruce Momjian writes:\n > I have marked 7.0.3 release tree. The new 7.0.3 items are listed\n > below.\n\nSo have Jason's patches to build on Cygwin not made it in?\n\n\nOn a related note, what tag should I give to cvs to get code from the\n7.0.3 branch? Is it REL7_0_PATCHES?\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWestern Geophysical -./\\.- by myself and does not represent\[email protected] -./\\.- the opinion of Baker Hughes or\nhttp://www.crosswinds.net/~petef -./\\.- its divisions.\n", "msg_date": "Fri, 3 Nov 2000 12:06:58 +0000 (GMT)", "msg_from": "Pete Forman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.3 branded" }, { "msg_contents": "> Bruce Momjian writes:\n> > I have marked 7.0.3 release tree. The new 7.0.3 items are listed\n> > below.\n> \n> So have Jason's patches to build on Cygwin not made it in?\n\nNo, they are too risky for a minor release.\n\n> On a related note, what tag should I give to cvs to get code from the\n> 7.0.3 branch? Is it REL7_0_PATCHES?\n\nIt is. I can grab the cvs using that tag, but can not grab log entries\nusing that tag. No one seems to know why it doesn't work.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 3 Nov 2000 10:11:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0.3 branded" }, { "msg_contents": " From: Bruce Momjian <[email protected]>\n Date: Fri, 3 Nov 2000 10:11:37 -0500 (EST)\n\n > On a related note, what tag should I give to cvs to get code from the\n > 7.0.3 branch? Is it REL7_0_PATCHES?\n\n It is. I can grab the cvs using that tag, but can not grab log entries\n using that tag. No one seems to know why it doesn't work.\n\nWhat are you trying to do? In what way does it not work as you\nexpect?\n\nI just tried to find a file which was changed on the REL7_0_PATCHES\nbranches, but I failed. I was probably doing something wrong.\n\n(I used to be a CVS maintainer--technically, I suppose I still am--so\nI know something about how CVS works.)\n\nIan\n", "msg_date": "3 Nov 2000 12:07:44 -0800", "msg_from": "Ian Lance Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 7.0.3 branded" }, { "msg_contents": "Remember the CVS problems I had grabbing the logs from just one branch\nof cvs?\n\nWell, it turns out it is a cvs bug. CVS is fine with showing logs from\njust a branch iff the file existed when the branch was created. For\npost-branch files like pgsql/GNUmakefile.in, you see all changes.\n\nI received e-mails from the cvs 'log' maintainer, which are attached. \nHe explained the problem and gave me a workaround. He is going to try\nand get the bug fixed.\n\nI have modified pgcvslog to show the workaround, and have attached the\noutput. Seems cvs2cl works around this problem somehow.\n\n---------------------------------------------------------------------------", "msg_date": "Sat, 4 Nov 2000 16:37:20 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "cvs problems solved" } ]
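[Editor's note: to make the branch mechanics in this thread concrete, here is a sketch of the commands involved. The tag REL7_0_PATCHES and the file pgsql/GNUmakefile.in come from the discussion itself, but the exact invocations are illustrative, not quoted from it:]

```
# Fetch the 7.0 stable branch (this part works as expected):
cvs checkout -r REL7_0_PATCHES pgsql

# Restrict log output to the branch; per the thread, this misbehaves
# for files added after the branch point, such as GNUmakefile.in,
# whose full history is shown instead:
cvs log -rREL7_0_PATCHES pgsql/GNUmakefile.in
```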
[ { "msg_contents": "On Oct. 30 and 31 I attended OSDN's rather grandiosely named \"Open Source\nDatabase Summit\" (despite what you might infer from the name, it was just\na small, open-to-the-public conference). Their info about the conference\nis at http://www.osdn.com/conf/osd/conf_index.shtml, though I'm not sure\nhow long that page will remain up. OSDN invited a number of the principal\nsuspects from each major open-source database project to speak, and paid\nfor airfare and hotel rooms for the speakers. The invited speakers were\nBruce Momjian and myself from Postgres, David Axmark and Monty Widenius\nfrom MySQL, Mike Olson and Mike Ubell from Sleepycat (Berkeley DB), and\nAnn Harrison from InterBa^H^H^H^H IBPhoenix; also Britt Johnston, who as\nCTO of NuSphere can fairly be ranked in the MySQL camp; plus Tim Perdue\nand Rob Ferber as representative application-builders. Total attendance\nwas about forty or fifty, so we had a pretty good crowd of interested\npeople. Ned Lilly of Great Bridge was also there (on GB's dime), as well\nas two or three more NuSphere people, but mostly it seemed to be users\nand potential users of open-source databases.\n\nSunday evening, Bruce and Ned and I filtered in at different times.\nOSDN had laid out a spread of free food in one of the meeting rooms,\nbut the hotel staff didn't tell any of us about it, so we ended up\nhanging out in the hotel bar with a number of similarly ill-informed\nsouls. It was particularly interesting to talk to John Scott, who is\nworking for Verizon Wireless on redoing the software for their nationwide\npaging service. It turns out they are looking at using Postgres for the\ncustomer database, and either Postgres or Berkeley DB for the realtime\ndatabase that handles paging messages being pumped through the system.\nThat'll be a feather in our caps if it happens!\n\nMonday, Britt Johnston opened the formal proceedings with what amounted\nto a pep talk for OS DB work. 
I have a fairly interesting table in my\nnotes, giving current total Web-search hits for various databases:\n\tOracle\t\t3.0 mil\n\tMySQL\t\t2.3 mil\n\tPostgres\t0.7 mil\n\tSQL Server\t0.6 mil\n\tIBM DB2\t\t0.5 mil\n\tInterbase\t0.1 mil\n(I had to copy the last half of the table from memory, so it may not be\nexactly what he said, but it's close.) This says that MySQL+PG together\nare *already* as interesting as Oracle for web work. He also announced\nthat NuSphere would be financing substantial work on MySQL --- I have a\nnote about 10000 concurrent transactions on a single server, which'd be\npretty impressive (he didn't say what size server, though).\n\nThe conference format alternated between group-wide sessions and pairs\nof concurrent workshop talks, so after that we split into two groups.\nI went to hear Tim Perdue talk, while Bruce and Ned listened to Mike\nUbell; they'll have to report on what Mike said. Tim's discussion was\nabout building apps atop PHP and a database. He pointed out that for\nmost website builders, the path of least resistance given their existing\nskills is to construct an \"application heavy\" system in which most of\nthe logic is in application code. He contrasted this with \"database\nheavy\" design, in which more reliance is put on database functionality,\nsuch as constraints, triggers, views, etc. Unfortunately (from our\npoint of view) Postgres excels for the database-heavy style, whereas\nMySQL's lean feature set is sufficient or at least self-reinforcing\nfor the application-heavy style. It'll be difficult for PG to achieve\nworld domination until Web developers become more database-savvy ;-).\nTim encouraged a great deal of comment from the audience, and went so far\nas to make everyone introduce themselves first. 
(One interesting thing\nthat emerged at that point was that there were *very* few MySQL users,\nand no MySQL developers, at this talk --- though I guess that just meant\nthat the MySQL people all wanted to hear what Mike Ubell had to say,\nsince he was talking about a directly-MySQL-related subject.) One of the\nlongest-running parts of the discussion had to do with giving good error\nmessages and how it is hard to get friendly messages when you rely on the\ndatabase to do error checking. I thought this pointed up the need we've\nbeen aware of for awhile to overhaul our error reporting. Tim also had\na \"wish list\" for PG that included better admin tools, such as a way to\nsee exactly what queries are running; and a way to retrieve all the\ndatabase-generated items in a just-inserted row, not only the OID.\nBoth of these have also been on the radar screen for awhile.\n\nAfter a fine lunch (all the food was superb BTW; OSDN made an excellent\nchoice of hotel), we reconvened to hear David Axmark talk about the\nhistory and philosophy of MySQL. The only thing that really surprised me\nis that that project is quite young: it started in 1995. Given that Monty\nseems to do the vast majority of the development work, there are not many\nman-years in it, certainly far less than in Postgres. They've done well\nto come as far as they have.\n\nThe subsequent breakout was between Rob Ferber talking about shedding\ndatabase processing load to stateless clients, and me talking about\nPostgres' transaction model. I was quite annoyed that I couldn't go\nhear Rob, because his talk abstract sounded very interesting :-(.\nYou can find the slides from my talks (also Bruce's) at\nhttp://www.postgresql.org/osdn/index.html, so I won't go into detail,\nbut I hope Bruce will report on Rob's talk.\n\nThat evening there was a cocktail hour in the hotel's library (free\nbooze, courtesy of the conference) followed by dinner at the hotel's\nbetter restaurant. 
I spent a good part of the cocktail hour talking\nwith Ann Harrison and several other people about organizing some sort\nof open-source database benchmarking project. It turns out that DEC's\n(now Compaq's) performance measurement group has a nearly-done reference\nimplementation of AS3AP, which they're thinking of releasing as an open\nsource project. Everyone agreed that would be a fine starting point.\nWe also got to hear Ann's version of the InterBase situation --- more\nabout that later. Towards the end of the hour I wandered over and started\nto talk to Monty and David. That stretched into eating dinner with them.\nSince I'd had a couple glasses of wine already, and a couple more during\ndinner, while they'd started with vodka and then joined in on the wine,\nI doubt that either side could repeat much of the conversation word-for-\nword ;-). But it was all pleasant and perhaps will serve to dispel some\nof the bad blood that's existed between the two projects for awhile.\n\nThe next morning, the opening speaker was me, with a presentation on the\ninternals of Postgres (see slides at above URL). The subsequent breakout\nhad Bruce giving a talk on the history and project-management practices\nof Postgres (see slides), while I went to hear Ann Harrison talk about\nintegrity checking in databases. Before she could get into her promised\ntopic, the audience pretty much forced her to give a rundown on the\nInterBase situation. Bottom line: it's a mess. She feels Borland were\nbeing unreasonable (and in her telling of it, they indeed seem to be)\nwhile they felt, or said they felt, that she was. 
She thinks that once\nshe and others had come up with a business plan for doing something with\nInterBase rather than dropping it, Borland/Inprise decided they could\nexecute the business plan without her --- and that may be pretty accurate.\nAnyway, Inprise now has a small in-house development team with few if any\noriginal developers, Ann has only jawbone control over a dozen or so\nopen-source developers (these also with little or no deep knowledge of the\nsource, apparently), and there's a code fork between the Inprise version\nand the \"Firebird\" open-source project. The two groups are apparently\ntalking enough to try to keep their trees from diverging too much, in the\nhopes that the fork might be reunited someday, but Ann didn't sound all\nthat hopeful about it. Things sound mighty bleak to me --- but perhaps\nInterBase is just going through a transition comparable to Postgres'\ntransition from a Berkeley project to an open project.\n\nTo get back to the technical part of Ann's talk, the thing I came away\nwith is a realization that IB did a lot of things pretty similar to\nPostgres. In particular, it sounds like they have a multi-versioning\nmodel nearly identical to Postgres'. They also have some ideas we might\nbe able to adopt --- for example, their indexes point only to the newest\nversion of a row, not all versions. It'd be worth our while to dig\nthrough their code for ideas. However, Ann admitted that they are\nwoefully short on internals documentation, so extracting useful ideas\npromises to be painful :-(\n\nThe final group-wide session featured Mike Olson of Sleepycat as speaker.\nMost of you know that Mike was part of the Berkeley Postgres team years\nago (if you don't, try scanning our sources for the initials mao) so I\ncount him still a Postgres man, even though Sleepycat is currently in bed\nwith MySQL. 
Mike had some *extremely* interesting things to say about\nthe prospects for open-source databases making inroads against commercial\ncompetition. He pointed out that the notion that we have any chance of\ndoing so is mostly founded on the success Linux has been having competing\nwith Windows --- but that success is founded on (a) a cost advantage,\n(b) a reliability advantage, and (c) an advantage in the applications\nspace: Linux runs sendmail, bind, Apache, and all the other core Internet\nserver apps, whereas Windows doesn't run them especially well. Mike\npointed out that Oracle could *easily* afford to give away their software\nfor free and make all their money on support contracts (license fees are\nalready only 1/3rd of their revenue, so it wouldn't be that big a\nswitch). That would make the cost advantage a harder sell. We could\nstill make a good case for open databases on total cost of ownership,\nbut a key ball to keep our eye on is the ease of installation and\nadministration of our servers. Much of the differential comes from the\nfact that qualified Oracle DBAs are scarce and obscenely well-paid.\nWe have to be sure that Joe Average Unix Sysadmin can deal with our\nservers without much trouble. As for point (b), the news is bad:\nwe are *not* up to Oracle standards on reliability. (Mike only said\nthat it's unproven that we are up to commercial standards, but from\nhere in the trenches I'd say we ain't.) We need to keep our noses to\nthe grindstone on this issue, and even so it's unlikely that we'll ever\nhave the same sort of obvious reliability advantage that Linux has over\nWindows, simply because the commercial databases aren't anywhere near\nas bad as Windows. That leaves point (c) --- we have to exploit the\nopen-source nature of our systems to encourage a flowering of compatible\napplications. 
And we'd better make sure that people can make money\nbuilding apps atop open-source databases, or that flowering won't happen.\nA thought-provoking talk indeed; probably the best one at the conference,\nIMHO.\n\nThe final pair of speakers were Monty on the history and\nproject-management practices of MySQL, and Rob Ferber on Open Sales'\n^H^H^H^H Zelerate's way of building distributed transaction processing.\nBruce went to hear Monty, I went to hear Rob. It was pretty interesting:\nbasically, they do not try to replicate state, but instead distribute\n\"events\" --- maybe better called \"actions\", since the events are things\nlike \"decrement available-stock by 1\". Each server in their network is\n\"authoritative\" for events that it originates, and is responsible for\ntransmitting those events to other servers. Each server maintains state\ntables that represent the integral of all the events it knows of so far,\nbut it's explicitly recognized that these state tables may be out of sync\ndue to network latency, communication failures, etc. With appropriate\napplication programming it's possible to build a highly robust distributed\nsystem, sitting atop non-distributed database servers. Their system is\nopen source and all coded in Perl, so you can go have a look if you want\nto learn more.\n\nBruce and John Scott and I wasted most of Tuesday evening in a fruitless\nsearch for the Computer Literacy bookstore that used to exist near Apple\nheadquarters, so I can't say if anything interesting happened around the\nhotel then. But it seemed that things were winding down and a lot of\npeople were departing that evening, so probably not...\n\nOverall it was a very interesting and worthwhile conference. I have to\ncongratulate Mark Stone and Christine Dzierzeski of OSDN on organizing\na great conference on little time and minimal budget. 
If they invite\nme to the next one, I'll be there.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Nov 2000 23:47:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "OSDN Database conference report (long)" }, { "msg_contents": "Hi all,\n\nI'm one of the accidentally-subscribed readers to the Hackers list. I\nmainly wanted to thank Tom for typing this up. I had a great time reading\nit and learned a lot. I also wanted to try and help with some personal\nexperience since God knows I wouldn't be able to code in C or whatever PG is\nin ;)\n\n> database to do error checking. I thought this pointed up the need we've\n> been aware of for awhile to overhaul our error reporting. Tim also had\n> a \"wish list\" for PG that included better admin tools, such as a way to\n> see exactly what queries are running; and a way to retrieve all the\n> database-generated items in a just-inserted row, not only the OID.\n> Both of these have also been on the radar screen for awhile.\n\nI'm not sure exactly where the error checking comes in. I've been using\nPostgres in two places - at home (Apache/Tomcat) and at work (Apache/iASP)\nfor the last 8 months or so. The only gripe I have with error messages is\nthat they could be more specific. \"Error near <some character that occurs\n20+ times in the query>\" is usually pretty useless =) Otherwise, I can't\nrecall a single time where I said, \"man that message should be more clear\".\n\nThen again, I'm positive I'm not the hardcore DB-user that lots of Oracle\nDBAs are, or most all of you! In your collective journeys, I'm sure there\nare some other ~bad ones, or else Tom wouldn't have mentioned that. Just\nwanted to say in my xp, nothing bad to say.\n\n> We have to be sure that Joe Average Unix Sysadmin can deal with our\n> servers without much trouble.\n\nI'm much less competant than your Joe Six-pack *nix admin. Hell, I probably\neven spelled competant wrong. 
I downloaded, compiled, and installed PG in\nno time flat. I was querying via JSP less than 30 minutes after I\ndownloaded the database. However, I've never really done anything past\nthat. Would I be considered your 'typical' PG user? Dunno... I've written\na lot of little apps that use Postgres, but it's all your routine stuff.\nSome INSERTs, some SELECTs, etc. Nothing mind-blowing, that's for sure. I\ndon't use any of the management tools, never any need to. Although, I have\nmade several mental-notes to figure out just where the \"Object\" stuff comes\ninto place w.r.t. \"Object-Relational\" tho' =)\n\nI wish I could offer some better feedback, but from my point of view,\nPostgres has been great. Never crashed, takes up pretty much no system\nresources, great docs, etc.\n\nKeep up the great work guys! There are countless people appreciating it\nevery single day that you'll never hear from. Not my boss for sure. That\nguy's unorganized as hell. Good guy, but you'd never get a thanks from him\n;) Mental note: make sure Dave doesn't read pgsql-hackers.\n\nTake care!\n\n- r\n\n", "msg_date": "Thu, 2 Nov 2000 22:29:50 -0800", "msg_from": "\"Rob S.\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] OSDN Database conference report (long)" }, { "msg_contents": "Hi,\n\nHelp me please.\nHow to get previouse wonth in query?\n\n\n", "msg_date": "Fri, 3 Nov 2000 11:50:36 +0300", "msg_from": "igor <[email protected]>", "msg_from_op": false, "msg_subject": "DateTime functions" }, { "msg_contents": "Very nice paper.\nThanks Tom!\nAnd thanks all the Postgres developer! Great job!\n\nLaser\n\n", "msg_date": "Fri, 03 Nov 2000 18:06:15 +0800", "msg_from": "\"He Weiping (Laser Henry)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OSDN Database conference report (long)" }, { "msg_contents": "\"Rob S.\" <[email protected]> writes:\n>> ... 
I thought this pointed up the need we've\n>> been aware of for awhile to overhaul our error reporting.\n\n> I'm not sure exactly where the error checking comes in. I've been using\n> Postgres in two places - at home (Apache/Tomcat) and at work (Apache/iASP)\n> for the last 8 months or so. The only gripe I have with error messages is\n> that they could be more specific. \"Error near <some character that occurs\n> 20+ times in the query>\" is usually pretty useless =) Otherwise, I can't\n> recall a single time where I said, \"man that message should be more clear\".\n\nThe thing is that any error that the database itself issues is probably\ndatabase-centric; it may be helpful to the person coding the application,\nbut is unlikely to make a lot of sense to an end user. So well-coded\napps typically want to substitute their own error messages --- say,\n\"please enter a positive value\" rather than \"rejected due to CHECK\nconstraint foo\". We need to provide more support for that. A\nconsistent numbering scheme for error codes would help, for instance,\nso that apps could just look at the error number and not be dependent\non pattern-matching against strings that the developers might reword\nfrom time to time.\n\nAs I said, this has been on the todo list for awhile...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 11:08:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] OSDN Database conference report (long) " }, { "msg_contents": "Sorry - I mean the previous _month_ ,\nlike IncMonth(Now(),-1) function in C++ Builder\n\n>>\n>> Help me please.\n>> How to get previouse wonth in query?\nIR> What? О©╫О©╫О©╫ О©╫ О©╫О©╫О©╫О©╫О©╫О©╫? 
\n\n\n", "msg_date": "Fri, 3 Nov 2000 20:25:58 +0300", "msg_from": "Igor Khanjine <[email protected]>", "msg_from_op": false, "msg_subject": "Re[2]: DateTime functions" }, { "msg_contents": "At 11:08 AM 11/3/2000 -0500, Tom Lane wrote:\n>\"Rob S.\" <[email protected]> writes:\n> >> ... I thought this pointed up the need we've\n> >> been aware of for awhile to overhaul our error reporting.\n>\n> > I'm not sure exactly where the error checking comes in. I've been using\n> > Postgres in two places - at home (Apache/Tomcat) and at work (Apache/iASP)\n> > for the last 8 months or so. The only gripe I have with error messages is\n> > that they could be more specific. \"Error near <some character that occurs\n> > 20+ times in the query>\" is usually pretty useless =) Otherwise, I can't\n> > recall a single time where I said, \"man that message should be more clear\".\n\nAs opposed to SQL server which tends to be extrememly cryptic and goes out \nof it's way to hide information from you. Once I was importing some records \nfrom a text file and it kept stopping in the middle with a dialog box that \nsaid \"overflow\" and that's all. It would not tell me the line number or the \nfield it was having trouble with. What I did was to recreate the structure \nof the table in postgres and try to import into there. Postgres told me \nthat I had an invalid date on line number whatever. I guess you cna say I \nused postgres as a debugging tool for ms-sql server.\n\nBTW did you know that it's impossible to store dates before 1700 on ms-sql \nserver? 
Its datetime datatype will not support older dates.\n----------------------------------------------\n Tim Uckun\n Mobile Intelligence Unit.\n----------------------------------------------\n \"There are some who call me TIM?\"\n----------------------------------------------\n", "msg_date": "Fri, 03 Nov 2000 10:45:54 -0800", "msg_from": "Tim Uckun <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] OSDN Database conference report\n (long)" }, { "msg_contents": "Tom Lane wrote:\n> \nThanks for doing it ;)\n\nI have some comments too ...\n\n> One of the\n> longest-running parts of the discussion had to do with giving good error\n> messages and how it is hard to get friendly messages when you rely on the\n> database to do error checking.  I thought this pointed up the need we've\n> been aware of for awhile to overhaul our error reporting. \n\nOne thing I have wished for a long time is _configurable_ error\nreporting, \nso that I could tell the backend to \"set errorlevel to N\" and not\nreceive \nany errors below level N\n\n> Tim also had\n> a \"wish list\" for PG that included better admin tools, such as a way to\n> see exactly what queries are running; and a way to retrieve all the\n> database-generated items in a just-inserted row, not only the OID.\n> Both of these have also been on the radar screen for awhile.\n\nor at least allow something like \"select * from table where\nLAST_INSERTED_BY_ME\"\nto be fast (are we currently smart enough to stop the scan after finding a\nunique item, \nor is it so, that \"select ... where oid=N\" still does a full table scan,\neven though \noids are unique\")\n\n> After a fine lunch (all the food was superb BTW; OSDN made an excellent\n> choice of hotel), we reconvened to hear David Axmark talk about the\n> history and philosophy of MySQL.  The only thing that really surprised me\n> is that that project is quite young: it started in 1995. 
Given that Monty\n> seems to do the vast majority of the development work, there are not many\n> man-years in it, certainly far less than in Postgres.  They've done well\n> to come as far as they have.\n\nIIRC they already had an ISAM library and mSQL as a sample\nimplementation.\n\n> To get back to the technical part of Ann's talk, the thing I came away\n> with is a realization that IB did a lot of things pretty similar to\n> Postgres.  In particular, it sounds like they have a multi-versioning\n> model nearly identical to Postgres'.\n\nI thought that was common knowledge here ;)\n\nAt least I mentioned it on this list several times in early days (maybe\n95-96) \nand I'm pretty sure that also Vadim wrote something about their approach\nback \nthen ?\n\n> They also have some ideas we might\n> be able to adopt --- for example, their indexes point only to the newest\n> version of a row, not all versions. \n\nIt seems to be good as an idea, but would make it somewhat harder to find\ntuples \nin transactions that are started earlier than the newest tuple,\nspecifically in \ncases where they have planned to find the tuple using the index.\n\n> It'd be worth our while to dig\n> through their code for ideas.  However, Ann admitted that they are\n> woefully short on internals documentation, so extracting useful ideas\n> promises to be painful :-(\n\nProbably it would be easier to have someone spy on their mailing lists\n;)\n\n> The final group-wide session featured Mike Olson of Sleepycat as speaker.\n> Most of you know that Mike was part of the Berkeley Postgres team years\n> ago (if you don't, try scanning our sources for the initials mao) so I\n> count him still a Postgres man, even though Sleepycat is currently in bed\n> with MySQL.  Mike had some *extremely* interesting things to say about\n> the prospects for open-source databases making inroads against commercial\n> competition. 
He pointed out that the notion that we have any chance of\n> doing so is mostly founded on the success Linux has been having competing\n> with Windows --- but that success is founded on (a) a cost advantage,\n> (b) a reliability advantage, and (c) an advantage in the applications\n> space: Linux runs sendmail, bind, Apache, and all the other core Internet\n> server apps, whereas Windows doesn't run them especially well.  Mike\n> pointed out that Oracle could *easily* afford to give away their software\n> for free and make all their money on support contracts (license fees are\n> already only 1/3rd of their revenue, so it wouldn't be that big a\n> switch). \n\nThat implies that their software is _not_ easy to install/maintain.\n\nI have a frustrated friend who set up a linux box for a company to\nreplace \na win2000 server which did not work (sendmail, squid, firewall, dns,\n...)\nand who was promised a support contract as well, but as things just work\nfor \nmore than half a year without him, he did not get any contract.\n\n> That would make the cost advantage a harder sell.  We could\n> still make a good case for open databases on total cost of ownership,\n> but a key ball to keep our eye on is the ease of installation and\n> administration of our servers. \n\nIMHO postgres is easy to install, not only from .rpm but also using \nconfigure, make, make install\n\n> Much of the differential comes from the\n> fact that qualified Oracle DBAs are scarce and obscenely well-paid.\n> We have to be sure that Joe Average Unix Sysadmin can deal with our\n> servers without much trouble.  As for point (b), the news is bad:\n> we are *not* up to Oracle standards on reliability.  (Mike only said\n> that it's unproven that we are up to commercial standards, but from\n> here in the trenches I'd say we ain't.) 
\n\nIn my experience \"commercial standards\" here usually means that if you \nhave that 2/3 of price support contract, someone will come running to\nfix it\nif something happens, not that it is bullet-proof.\n\nBut I still need that 7.0.3 release so that my database would allow me to \ndo a vacuum without restarting after some amount of heavy work ;)\n\n> We need to keep our noses to\n> the grindstone on this issue, and even so it's unlikely that we'll ever\n> have the same sort of obvious reliability advantage that Linux has over\n> Windows, simply because the commercial databases aren't anywhere near\n> as bad as Windows. \n\nTrue. Or at least partially true ;) Some of it can be attributed to the \nhuman nature and to the nature of software failures - if you have a\nmission \ncritical application and you have paid humongous amounts of money for\nyour \nsoftware (and hardware) you are more likely to test things very\nthoroughly \nbefore going production. Also you are less likely to blame (or allowed\nto \nblame) an expensive product you may have yourself recommended. \n\nPostgreSQL and other OSS will never have that defence; so we must be\n_more_ \nreliable in order to be perceived as being \"as reliable as\".\n\n-----------------\nHannu\n", "msg_date": "Sat, 04 Nov 2000 20:10:49 +0000", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] OSDN Database conference report (long)" } ]
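[Tom's point in the thread above, that applications should branch on a stable error number instead of pattern-matching the server's message text, can be sketched in a few lines of C. Everything here is illustrative: PostgreSQL had no error-code numbering scheme at the time of this thread, so the ERRCODE_* values and the helper names are invented, not a real API.]

```c
#include <assert.h>
#include <string.h>

/* Invented codes for illustration only; no such numbering existed yet. */
#define ERRCODE_CHECK_VIOLATION   1001
#define ERRCODE_UNIQUE_VIOLATION  1002

/* Branch on a stable numeric code to produce an end-user message,
 * instead of matching the server's (rewordable) message string. */
static const char *
user_message(int errcode)
{
    switch (errcode)
    {
        case ERRCODE_CHECK_VIOLATION:
            return "Please enter a positive value.";
        case ERRCODE_UNIQUE_VIOLATION:
            return "That name is already taken.";
        default:
            return "An unexpected database error occurred.";
    }
}

/* The fragile alternative argued against above: string matching breaks
 * as soon as the developers reword the message text. */
static int
looks_like_check_violation(const char *msg)
{
    return strstr(msg, "CHECK constraint") != NULL;
}
```

[The second function is the status quo the thread complains about: it works only until the wording of the message changes.]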
[ { "msg_contents": "\n> > I'd say that normally you're not using cursors because you intend to throw\n> > away 80% or 90% of the result set, but instead you're using it because\n> > it's convenient in your programming environment (e.g., ecpg). There are\n> > other ways of getting only some rows, this is not it.\n> \n> I didn't say I was assuming that the user would only fetch 10% of the\n> rows. Since what we're really doing is a linear interpolation between\n> startup and total cost, what this is essentially doing is favoring low\n> startup cost, but not to the complete exclusion of total cost.\n> I think that that describes the behavior we want for a cursor pretty well.\n\nI did understand this, but I still disagree. Whether this is what you want\nstrongly depends on what the application does with the resulting rows.\nIt is the correct assumption if the application needs a lot of time \nto process each row. If the application processing for each row is fast,\nwe will still want least total cost. \n\nThere is no way for the backend to know this, thus imho the app needs\nto give a hint. \n\nAndreas\n", "msg_date": "Fri, 3 Nov 2000 09:44:27 +0100 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: LIMIT in DECLARE CURSOR: request for comments " }, { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> I did understand this, but I still disagree. Whether this is what you want\n> strongly depends on what the application does with the resulting rows.\n\nSure ...\n\n> There is no way for the backend to know this, thus imho the app needs\n> to give a hint. \n\nRight. 
So what do you think about a hint that takes the form of a SET\nvariable for the fetch percentage to assume for a DECLARE CURSOR?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 11:10:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: LIMIT in DECLARE CURSOR: request for comments " } ]
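[The rule Tom describes earlier in this thread, a linear interpolation between startup cost and total cost, is small enough to sketch directly. The function below is a toy model only: fetch_fraction stands in for the proposed SET variable, and any default such as 10% is just the kind of figure floated in the discussion, not an implemented parameter.]

```c
#include <assert.h>

typedef double Cost;

/* Estimate a cursor plan's effective cost as a linear interpolation
 * between startup cost and total cost, weighted by the fraction of the
 * result set the application is assumed to actually fetch. */
static Cost
cursor_cost(Cost startup_cost, Cost total_cost, double fetch_fraction)
{
    /* fraction 0.0 optimizes purely for fast startup;
     * fraction 1.0 reduces to ordinary total-cost planning */
    return startup_cost + fetch_fraction * (total_cost - startup_cost);
}
```

[This favors low-startup plans for cursors without ignoring total cost entirely, which is exactly the behavior being debated above.]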
[ { "msg_contents": "\n>so, what have you tried to do to set it as digest, and what error did you\nget?\n\nI had been getting digests up until about a week or so ago. I assumed\nsomething might have changed with the list server so I submitted a request\nfor a digest, but keep getting individual e-mails.\n\nMy request was sent to [email protected] (if I remember correctly). The\nbody of the message was:\n\nset digest [email protected]\n\nI wasn't sure if the mail address was needed, but I included it based on a\nresponse from the previous time I turned on digest.\n\nI got the expected confirmation e-mail from the server and replied with\n\"accept\". At this point I thought I was set, but I did not start getting\ndigests.\n\nLater I got the following e-mail:\n\n\nPlease respond to [email protected]\n\nTo: [email protected]\ncc:\n\nSubject: 5D5F-755B-8775 : CONFIRM from pgsql-hackers (set)\n\n\nSomeone ([email protected]) has made the following request:\n\n \"set pgsql-hackers\ndigest-daily-mime,noeliminatecc,nohide,prefix,replyto,selfcopy,norewritefrom,noackstall,noackdeny,noackpost,noackreject\n\[email protected]\"\n\nIt requires your confirmation for the following reason(s):\n\n [email protected] made a request that affects\nanother address ([email protected]).\n\n\nIf you want this action to be taken, please do one of the following:\n\n1. If you have web browsing capability, visit\n <URL:\nhttp://mail.postgresql.org/cgi-bin/mj_confirm?d=postgresql.org&t=5D5F-755B-8775\n>\n and follow the instructions there.\n\n2. Reply to [email protected]\n with the word \"accept\" (without quotes) on a line by itself.\n\n\nIf you do not respond within 4 days, a reminder will be sent.\nIf you do not respond within 7 days, this token will expire.\n\nIf you did not originate this request, believe it to be in error, or do not\nwant it to be carried out, you may reject it by doing one of the following:\n\n1. 
If you have web browsing capability, visit\n <URL:\nhttp://mail.postgresql.org/cgi-bin/mj_confirm?d=postgresql.org&t=5D5F-755B-8775\n>\n and follow the instructions there.\n\n2. Reply to [email protected]\n with the word \"reject\" (without quotes) on a line by itself.\n\n\nIf you would like to communicate with a person,\nsend mail to [email protected].\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\nAgain I replied with \"accept\" and got the following response:\n\n\n\nPlease respond to [email protected]\n\nTo: [email protected]\ncc:\n\nSubject: Majordomo results: Re: 5D5F-755B-8775 : CONFIRM from pgsql-\n\n\n>>>> accept\n---- Now the request must be approved by the list owner.\n---- The results will be mailed to you after this is done.\n\n1 valid command processed; it is pending.\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\n\nThis is the last administrative message I got and I still have been\nreceiving individual emails.\n\nI hope this helps narrow down the problem.\n\n\n", "msg_date": "Fri, 3 Nov 2000 08:14:55 -0500", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: me too" }, { "msg_contents": "\nsorry, the migration this past weekend was to remove all traces of hub.org\nfrom the list addresses ... we built a 'virtual server' that now houses\nthe postgresql.org mailing lists, so you need to send to\[email protected], and it should work ...\n\nplease try that and let me know if it works or not ...\n\n\nOn Fri, 3 Nov 2000 [email protected] wrote:\n\n> \n> >so, what have you tried to do to set it as digest, and what error did you\n> get?\n> \n> I had been getting digests up until about a week or so ago. I assumed\n> something might have changed with the list server so I submitted a request\n> for a digest, but keep getting individual e-mails.\n> \n> My request was sent to [email protected] (if I remember correctly). 
The\n> body of the message was:\n> \n> set digest [email protected]\n> \n> I wasn't sure if the mail address was needed, but I included it based on a\n> response from the previous time I turned on digest.\n> \n> I got the expected confirmation e-mail from the server and replied with\n> \"accept\". At this point I thought I was set, but I did not start getting\n> digests.\n> \n> Later I got the following e-mail:\n> \n> \n> Please respond to [email protected]\n> \n> To: [email protected]\n> cc:\n> \n> Subject: 5D5F-755B-8775 : CONFIRM from pgsql-hackers (set)\n> \n> \n> Someone ([email protected]) has made the following request:\n> \n> \"set pgsql-hackers\n> digest-daily-mime,noeliminatecc,nohide,prefix,replyto,selfcopy,norewritefrom,noackstall,noackdeny,noackpost,noackreject\n> \n> [email protected]\"\n> \n> It requires your confirmation for the following reason(s):\n> \n> [email protected] made a request that affects\n> another address ([email protected]).\n> \n> \n> If you want this action to be taken, please do one of the following:\n> \n> 1. If you have web browsing capability, visit\n> <URL:\n> http://mail.postgresql.org/cgi-bin/mj_confirm?d=postgresql.org&t=5D5F-755B-8775\n> >\n> and follow the instructions there.\n> \n> 2. Reply to [email protected]\n> with the word \"accept\" (without quotes) on a line by itself.\n> \n> \n> If you do not respond within 4 days, a reminder will be sent.\n> If you do not respond within 7 days, this token will expire.\n> \n> If you did not originate this request, believe it to be in error, or do not\n> want it to be carried out, you may reject it by doing one of the following:\n> \n> 1. If you have web browsing capability, visit\n> <URL:\n> http://mail.postgresql.org/cgi-bin/mj_confirm?d=postgresql.org&t=5D5F-755B-8775\n> >\n> and follow the instructions there.\n> \n> 2. 
Reply to [email protected]\n> with the word \"reject\" (without quotes) on a line by itself.\n> \n> \n> If you would like to communicate with a person,\n> send mail to [email protected].\n> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n> \n> Again I replied with \"accept\" and got the following response:\n> \n> \n> \n> Please respond to [email protected]\n> \n> To: [email protected]\n> cc:\n> \n> Subject: Majordomo results: Re: 5D5F-755B-8775 : CONFIRM from pgsql-\n> \n> \n> >>>> accept\n> ---- Now the request must be approved by the list owner.\n> ---- The results will be mailed to you after this is done.\n> \n> 1 valid command processed; it is pending.\n> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n> \n> \n> This is the last administrative message I got and I still have been\n> receiving individual emails.\n> \n> I hope this helps narrow down the problem.\n> \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 3 Nov 2000 20:08:54 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: me too" } ]
[ { "msg_contents": "Here's the error:\n\n------------------------- snip -------------------------\nDBD::Pg::db table_info failed: ERROR:  Unable to identify an ordering\noperator '<' for type 'unknown'\n        Use an explicit ordering operator or modify the query\nCan't call method \"fetchrow_array\" on an undefined value at test.pl line\n103. \n------------------------- snap -------------------------\n\nand this is the part of the test.pl script referred to by the error\nmessage:\n\n\n------------------------- snip -------------------------\n  ######################### create table\n  85\n  86     $dbh->do(\"CREATE TABLE builtin (\n  87         bool_       bool,\n  88         char_       char,\n  89         char12_     char(12),\n  90         char16_     char(16),\n  91         varchar12_  varchar(12),\n  92         text_       text,\n  93         date_       date,\n  94         int4_       int4,\n  95         int4a_      int4[],\n  96         float8_     float8,\n  97         point_      point,\n  98         lseg_       lseg,\n  99         box_        box\n 100     )\");\n 101\n 102     $sth = $dbh->table_info;\n 103     my @infos = $sth->fetchrow_array; \n------------------------- snap -------------------------\n\nNow, since it works with 6.4, and the developer of DBD::Pg says that\nhe's tested this with 6.5, my hunch is that it has to do with some\nchange or other that was made in 7.1.\n\nHas anyone tried to install or use DBI and DBD::Pg with the current\nsnapshot? The snapshot I used is from October 10.\n\nI haven't tested this with the current version, i.e. release 7, but will\nprobably do that if I get round to it.\n\nCheers Frank\n\n-- \nfrank joerdens \n\njoerdens new media\nurbanstr. 
116\n10967 berlin\ngermany\n\ne: [email protected]\nt: +49 (0)30 69597650\nf: +49 (0)30 7864046 \nh: http://www.joerdens.de\n\npgp public key: http://www.joerdens.de/pgp/frank_joerdens.asc\n", "msg_date": "Fri, 3 Nov 2000 14:34:35 +0100", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "DBD::Pg installation seems to fail with 7.1 libs" }, { "msg_contents": "Frank Joerdens <[email protected]> writes:\n> [ DBD::Pg fails against current sources with ]\n> DBD::Pg::db table_info failed: ERROR: Unable to identify an ordering\n> operator '<' for type 'unknown'\n\nHmm. It looks like this is caused by my reimplementation of UNION to\nuse CASE-style datatype resolution. The DBD code is fetching table\ndata with a UNION that includes a couple of untyped literal constants.\nBoiled down:\n\nregression=# select 'foo' union select 'bar';\nERROR: Unable to identify an ordering operator '<' for type 'unknown'\n Use an explicit ordering operator or modify the query\n\nThis is bad, since the same query used to work pre-7.1.\n\nThe reason it worked is that the old UNION code had an ugly hack to\nforce such constants to be treated as type text. There is no such hack\nin the datatype unification code right now.\n\nThomas, you've been muttering about altering the type resolution rules\nso that \"unknown\" will be treated as \"text\" when all else fails. Are\nyou planning to commit such a thing for 7.1? If not, I'll probably have\nto hack up parse_coerce.c's select_common_type(), along the lines of\n\n \t\t\t}\n \t\t}\n \t}\n+\t/* if all inputs are UNKNOWN, treat as text */\n+ \tif (ptype == UNKNOWNOID)\n+\t\tptype = TEXTOID;\n \treturn ptype;\n }\n\n\nThis might be an appropriate change anyway. 
Comments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 16:55:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] DBD::Pg installation seems to fail with 7.1 libs " }, { "msg_contents": "> Thomas, you've been muttering about altering the type resolution rules\n> so that \"unknown\" will be treated as \"text\" when all else fails. Are\n> you planning to commit such a thing for 7.1? If not, I'll probably have\n> to hack up parse_coerce.c's select_common_type(), along the lines of\n\nI'm *finally* getting several patches together, to do the following\nthings:\n\no Fix the type resolution for unknown function arguments to fall back to\n\"text\" or a string type, if available. Previously discussed.\n\no Implement an AT TIME ZONE clause, per SQL9x. Will handle an INTERVAL\nargument, per standard, and also accept a string containing a time zone\nspec, per existing PostgreSQL extension. Previously discussed.\n\no Fix timestamp/interval math across daylight savings time boundaries.\nPreviously discussed.\n\no Allow interpretation of \"hh:mm:ss\" as INTERVAL input. I can't remember\nif I've mentioned this one before.\n\no Fix output of INTERVAL when sign of year/month is different than sign\nof hour/min/sec. This is accompanied by changes in the \"ISO\" form of\noutput to more closely resemble a \"hh:mm:ss\" format. I just noticed the\nproblem today, so have not discussed it on list yet.\n\no Add some JOIN regression tests. More should be and will be done, but I\ndon't want to keep holding back patches on this. Per Tom Lane's request.\n\n\nThe only one with some potential for user trouble is the INTERVAL format\nchange. 
The old code was wrong, but the format itself has been changed\nto be a little more concise.\n\n - Thomas\n", "msg_date": "Sat, 04 Nov 2000 07:43:19 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] DBD::Pg installation seems to fail with 7.1 libs" }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> > Thomas, you've been muttering about altering the type resolution rules\n> > so that \"unknown\" will be treated as \"text\" when all else fails. Are\n> > you planning to commit such a thing for 7.1? If not, I'll probably have\n> > to hack up parse_coerce.c's select_common_type(), along the lines of\n> \n> I'm *finally* getting several patches together, to do the following\n> things:\n> \n> o Fix the type resolution for unknown function arguments to fall back to\n> \"text\" or a string type, if available. Previously discussed.\n\nCan this be expected to be found in the 7.1 snapshot release anytime\nsoon? Otherwise I would have to continue developing my app on 7.0.x,\nwhich is not so good because eventually I will be requiring TOAST and\nI'd like to run into 7.1 related trouble - should there be such a thing\n;) - as early as possible.\n\nMany Thanks, Frank\n", "msg_date": "Mon, 13 Nov 2000 12:27:13 +0100", "msg_from": "Frank =?iso-8859-1?Q?J=F6rdens?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: DBD::Pg installation seems to fail with\n 7.1 libs" }, { "msg_contents": "Frank J�rdens wrote:\n> \n> Thomas Lockhart wrote:\n> >\n> > > Thomas, you've been muttering about altering the type resolution rules\n> > > so that \"unknown\" will be treated as \"text\" when all else fails. Are\n> > > you planning to commit such a thing for 7.1? 
If not, I'll probably have\n> > > to hack up parse_coerce.c's select_common_type(), along the lines of\n> >\n> > I'm *finally* getting several patches together, to do the following\n> > things:\n> >\n> > o Fix the type resolution for unknown function arguments to fall back to\n> > \"text\" or a string type, if available. Previously discussed.\n> \n> Can this be expected to be found in the 7.1 snapshot release anytime\n> soon? Otherwise I would have to continue developing my app on 7.0.x,\n\nworks a treat with the changes committed last thursday :)\n\n- Frank\n", "msg_date": "Thu, 16 Nov 2000 16:44:08 +0100", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: DBD::Pg installation seems to fail with\n 7.1 libs" }, { "msg_contents": "> > o Fix the type resolution for unknown function arguments to fall back to\n> > \"text\" or a string type, if available. Previously discussed.\n> Can this be expected to be found in the 7.1 snapshot release anytime\n> soon?\n\nIt is already there, and even has had a round of bug fixes already :)\n\nI expect that it will work for you with no problems.\n\n - Thomas\n", "msg_date": "Thu, 16 Nov 2000 17:06:10 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: DBD::Pg installation seems to fail with\n 7.1 libs" } ]
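[Tom's proposed hack to select_common_type() in this thread can be modeled in a few lines of C. This is a toy stand-in for the real parse_coerce.c logic, reduced to two input types: the OID values 705 (unknown) and 25 (text) are the real catalog values, but the resolution function itself is only a sketch of the fallback rule, not the backend code.]

```c
#include <assert.h>

/* Real catalog OIDs for the "unknown" and "text" types. */
#define UNKNOWNOID  705
#define TEXTOID      25

/* Resolve a common type for two UNION branch types: prefer the first
 * known (non-unknown) input, and if every input is an untyped literal,
 * fall back to text instead of failing. */
static unsigned int
common_type_of_pair(unsigned int ltype, unsigned int rtype)
{
    unsigned int ptype = ltype;

    if (ptype == UNKNOWNOID)
        ptype = rtype;

    /* if all inputs are UNKNOWN, treat as text */
    if (ptype == UNKNOWNOID)
        ptype = TEXTOID;
    return ptype;
}
```

[Under this rule, `select 'foo' union select 'bar'` resolves to text rather than raising the "no ordering operator for type 'unknown'" error quoted at the start of the thread.]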
[ { "msg_contents": "\n> > The problem at hand is that\n> > a plan may be invalidated before it is even finished building. Do you\n> > expect the parse-rewrite-plan-execute pipeline to be prepared to back up\n> > and restart if we notice a relation schema change report halfway down the\n> > process? \n\nYes, during the processing \"of one single statement\", (and this includes\nthe parse/plan phase) we will need to hold a shared lock from the first access,\nas by our previous discussion.\n\nAndreas\n", "msg_date": "Fri, 3 Nov 2000 15:05:27 +0100 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: relation ### modified while in use " } ]
[ { "msg_contents": "Hi Bruce, Hi Michael,\n\nhere is the really short patch for shutting out all postgres definitions\nfrom ecpg\nprograms. (e.g. Datum, Pointer, DEBUG, ERROR).\nSomeone really should take a look into libpq and do the same.\nBut I had to copy a small part of c.h (bool,true,false,TRUE,FALSE) into\necpg/include/libecpg.h. And ... there is a possible bug in c.h. You\ncan't check a\ntypedef via #ifndef.\n\ntypedef char bool;\n...\n#ifndef bool\ntypedef char bool;\n#endif\n\nwill fail. But I don't know any decent solution to that problem!\nPerhaps c.h should be broken into separate parts.\n\nChristof\n\nPS: to Jacek: you need this patch to compile libcommon++.a!\n\nBruce Momjian wrote:\n\n> Thanks.\n>\n> > > Yes, leaking into user programs is a bad practice. Is there a\n> > > solution/patch for that?\n> >\n> > A solution would be a simple patch which is not available yet. But I plan on\n> > doing one (some other things still have higher priority).\n> >\n> > Christof", "msg_date": "Fri, 03 Nov 2000 15:53:33 +0100", "msg_from": "Christof Petig <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Leaking definitions to user programs" }, { "msg_contents": "\nYou may find 7.1beta has fixed this.  I know some include files were\nrearranged in 7.1.\n\n> Hi Bruce, Hi Michael,\n> \n> here is the really short patch for shutting out all postgres definitions\n> from ecpg\n> programs. (e.g. Datum, Pointer, DEBUG, ERROR).\n> Someone really should take a look into libpq and do the same.\n> But I had to copy a small part of c.h (bool,true,false,TRUE,FALSE) into\n> ecpg/include/libecpg.h. And ... there is a possible bug in c.h. You\n> can't check a\n> typedef via #ifndef.\n> \n> typedef char bool;\n> ...\n> #ifndef bool\n> typedef char bool;\n> #endif\n> \n> will fail. 
But I don't know any decent solution to that problem!\n> Perhaps c.h should be broken into seperate parts.\n> \n> Christof\n> \n> PS: to Jacek: you need this patch to compile libcommon++.a!\n> \n> Bruce Momjian wrote:\n> \n> > Thanks.\n> >\n> > > > Yes, leaking into user programs is a bad practice. Is there a\n> > > > solution/patch for that?\n> > >\n> > > A solution would be a simple patch which is not available yet. But I plan on\n> > > doing one (some other things still have higher priority).\n> > >\n> > > Christof\n\n[ application/x-gzip is not supported, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Jan 2001 23:50:55 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Leaking definitions to user programs" } ]
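[Christof's observation in this thread deserves a concrete example: `#ifndef` only tests preprocessor macros, so an existing `typedef char bool;` is invisible to it and the typedef would simply be emitted a second time. The usual workaround is to define a guard macro alongside the typedef and test the macro instead. The PG_BOOL_DEFINED guard and pg_bool name below are invented for illustration; they are not what c.h or libecpg.h actually use.]

```c
#include <assert.h>

/* Guard macro defined alongside the typedef: a later header (or a second
 * inclusion) can test PG_BOOL_DEFINED, which #ifndef *can* see, instead of
 * trying to test the typedef name, which it cannot. */
#ifndef PG_BOOL_DEFINED
#define PG_BOOL_DEFINED
typedef char pg_bool;

#define PG_TRUE  ((pg_bool) 1)
#define PG_FALSE ((pg_bool) 0)
#endif   /* PG_BOOL_DEFINED */

/* Demonstrates that the guard, unlike the typedef, is visible to the
 * preprocessor. */
static int
bool_already_defined(void)
{
#ifdef PG_BOOL_DEFINED
    return 1;
#else
    return 0;
#endif
}
```

[With this idiom, the broken pattern quoted in the thread (`#ifndef bool` around `typedef char bool;`) becomes unnecessary: each header tests the guard macro before emitting its own typedef.]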
[ { "msg_contents": "Got this in my inbox today - might be of interest to some in this \ngroup. I don't know anything about them- forwarding it as interesting \ninfo only. Note: It starts in about an hour!\n\n--\n\nJoin us at SearchDatabase.com TODAY, Friday, November 3rd at 11:00 \nEastern (16:00 GMT) for a Live Expert Q&A with Simon Williams, CEO of \nLazy Software. Mr. Williams will answer your questions in real-time on \nthe following topic: \"The Associative Model: An Alternative to \nRelational Databases\".\n\nLog on at:\n\nhttp://www.searchdatabase.com/searchDatabase_Chat_Login_Page/1,281918,,00.html\n\nThe Associative Model is the first new database architecture since the \nadvent of object technology in the 1980's. Mr. Williams will discuss how \nthis new technology overcomes the limitations of the relational and \nobject models.\n\nSimon Williams founded Lazy Software in 1998 to pioneer the Associative \nModel of Data. Previously he was founder and chief executive of Synon \nCorporation, the foremost supplier of application development tools for \nIBM's AS/400 platform.\n\n\n\n\n\n-- \n----------------------------------------------------\nNed Lilly e: [email protected]\nVice President w: www.greatbridge.com\nEvangelism / Hacker Relations v: 757.233.5523\nGreat Bridge, LLC f: 757.233.5555\n\n", "msg_date": "Fri, 03 Nov 2000 10:07:39 -0500", "msg_from": "Ned Lilly <[email protected]>", "msg_from_op": true, "msg_subject": "live chat on associative model databases (starts 11:00 ET today)" } ]
[ { "msg_contents": "I've been working on date/time issues over the last few weeks (at least\none or two from reports on the list, others that I've stumbled across,\nand even one or two planned ones ;)\n\nAnyway, the INTERVAL type output representation has trouble with values\nsuch as\n\n '-1 month +2 hours'\n\nsince it assumes that the *sign* of every field is the same (the current\ndevelopment tree may have other troubles with interpreting this too, but\nI'm fixing that). For years/months, that resolves as those values are\nstored, so, for example, '-1 year +1 month' becomes '-11 months' as it\nis stored, and '-1 day +2 hours -4 minutes' resolves similarly. But\nmonths and days/hours/minutes/seconds are stored separately, so could\nhave different signs.\n\nAny suggestions for how to represent mixed-sign intervals? I'm inclined\nto move away from the \"ago\" convention for representing negative\nintervals, since \"-1 month +2 hours ago\" could be considered a little\nambiguous.\n\nShould we move to signed-only representations? 
Or retain the \"ago\"\nconvention, having it match the sign of the first printed field, with\nsubsequent fields having negative signs if they are positive values?\n\nAt the moment, mixed-sign intervals are stored correctly (so have the\nright results for math) but are *not* represented in the output\ncorrectly.\n\nPossibilities are:\n\n'1 month -2 days ago' is less than a month ago.\n'1 month -2 days +03:04' is three hours more than two days less than a\nmonth from now.\n'-1 month +2 days' is less than a month ago.\n\nComments?\n\n                    - Thomas\n", "msg_date": "Fri, 03 Nov 2000 16:25:23 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "INTERVAL representation" }, { "msg_contents": "On Fri, Nov 03, 2000 at 04:25:23PM +0000, Thomas Lockhart wrote:\n> I've been working on date/time issues over the last few weeks (at least\n> one or two from reports on the list, others that I've stumbled across,\n> and even one or two planned ones ;)\n> \n<snip>\n> \n> Should we move to signed-only representations? Or retain the \"ago\"\n> convention, having it match the sign of the first printed field, with\n> subsequent fields having negative signs if they are positive values?\n> \n> At the moment, mixed-sign intervals are stored correctly (so have the\n> right results for math) but are *not* represented in the output\n> correctly.\n> \n> Possibilities are:\n> \n> '1 month -2 days ago' is less than a month ago.\n> '1 month -2 days +03:04' is three hours more than two days less than a\n> month from now.\n> '-1 month +2 days' is less than a month ago.\n> \n> Comments?\n> \n\nHmm, negative time values always force me to think twice. I guess I think of\ntime as a concrete thing, like a board: it has length, and to speak of a \n'negative length' makes little sense. 
Admittedly, time is inherently vectorial,\nwhich other physical length measurements are not, requiring an arbitrarily\nchosen point as reference.\n\nHmm, I started this reply planning on arguing that _keeping_ the 'ago'\nwas easiest on my ears. Now I find I've talked myself into losing it,\nbecause it implies too much: 'ago' claims that that one end of the\ninterval is 'now' and the other end is in the past. If what you've got\nis actually the difference between next Christmas and New Years:\n\ntemplate1=# select ('25/12/2000'::timestamp - '01/01/2001'::timestamp)\n as \"deadtime\";\n\n deadtime \n-------------\n 7 00:00 ago\n(1 row)\n\nThat seems just wrong.\n\nRoss\n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.\n", "msg_date": "Mon, 6 Nov 2000 10:23:54 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INTERVAL representation" }, { "msg_contents": "> Hmm, I started this reply planning on arguing that _keeping_ the 'ago'\n> was easiest on my ears. Now I find I've talked myself into losing it,\n> because it implies too much: 'ago' claims that that one end of the\n> interval is 'now' and the other end is in the past. If what you've got\n> is actually the difference between next Christmas and New Years:\n> template1=# select ('25/12/2000'::timestamp - '01/01/2001'::timestamp)\n> as \"deadtime\";\n> deadtime\n> -------------\n> 7 00:00 ago\n> (1 row)\n> That seems just wrong.\n\nI've removed the \"ago convention\" from the ISO interval format, but have\nretained it for the \"traditional Postgres\" format. In the latter case,\nthe first numeric field is never negative, and the \"ago\", if present,\nindicates a negative interval. 
Subsequent fields can have a positive or\nnegative sign, and if negative will indicate a sign flip relative to the\nleading \"ago-qualified\" field.\n\nThe input interpretation of all of this is about the same as for 7.0.2,\nthough we now do a better job coping with more variations on the\n\"hh:mm:ss\" style of representation.\n\nTake a look at it and let me know what y'all think!\n\n - Thomas\n", "msg_date": "Tue, 07 Nov 2000 14:45:29 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: INTERVAL representation" } ]
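The sign-handling rule Thomas settles on in the thread above — the leading field is never printed negative, a trailing "ago" marks an overall negative interval, and a minus on a later field flips it relative to the lead — can be sketched in a few lines. This is an illustration of the stated convention only (months/days fields only), not the backend's actual datetime formatting code:

```python
def format_traditional(months, days):
    """Render a (months, days) interval in the 'traditional Postgres'
    style described in the thread: the leading nonzero field is never
    printed negative, 'ago' marks an overall negative interval, and a
    minus on a later field flips it relative to the leading one."""
    if months == 0 and days == 0:
        return "0"
    lead = months if months != 0 else days
    ago = lead < 0
    # Under 'ago' every displayed magnitude is the negation of the
    # stored value, so the leading field always comes out positive.
    shown_m = -months if ago else months
    shown_d = -days if ago else days
    parts = []
    if shown_m != 0:
        parts.append(f"{shown_m} mon")
    if shown_d != 0:
        parts.append(f"{shown_d:+d} days" if parts else f"{shown_d} days")
    return " ".join(parts) + (" ago" if ago else "")
```

So a stored (-1 month, +2 days) renders as "1 mon -2 days ago" — Thomas's "less than a month ago" case — and Ross's Christmas-to-New-Years difference comes out as "7 days ago".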
[ { "msg_contents": "\n> > There is no way for the backend to know this, thus imho the app needs\n> > to give a hint. \n> \n> Right. So what do you think about a hint that takes the form of a SET\n> variable for the fetch percentage to assume for a DECLARE CURSOR?\n\nSince we don't have other hints that are embedded directly into the SQL\nthat sounds perfect. \n\nThe not so offhand question for me is whether to use this percentage for \nnon cursor selects also. Imho both should (at least in default) behave the same.\n\nAndreas\n", "msg_date": "Fri, 3 Nov 2000 17:32:21 +0100 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: LIMIT in DECLARE CURSOR: request for comments " }, { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n>> Right. So what do you think about a hint that takes the form of a SET\n>> variable for the fetch percentage to assume for a DECLARE CURSOR?\n\n> Since we don't have other hints that are embedded directly into the SQL\n> that sounds perfect. \n\n> The not so offhand question for me is whether to use this percentage\n> for non cursor selects also. Imho both should (at least in default)\n> behave the same.\n\nNot at all, since in a non-cursor select you *must* retrieve all the\ndata. I can't see any reason to optimize that on any other basis than\ntotal execution time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 11:59:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: AW: LIMIT in DECLARE CURSOR: request for comments " } ]
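The effect of the proposed SET variable — a fetch percentage the planner would assume for DECLARE CURSOR — can be modelled as interpolating between a plan's startup cost and its total cost. A toy sketch of the idea (the plan shapes and cost numbers are invented; this is not the optimizer's code):

```python
def effective_cost(startup, total, fraction):
    # With only `fraction` of the rows expected to be fetched,
    # weight startup cost heavily and total cost lightly.
    return startup + fraction * (total - startup)

def cheapest(plans, fraction):
    # Pick the plan with the lowest cost under the assumed fetch fraction.
    return min(plans, key=lambda p: effective_cost(p["startup"], p["total"], fraction))

# A fast-start indexscan vs. a slow-start but cheaper-overall sort plan.
plans = [
    {"name": "indexscan", "startup": 1.0, "total": 1000.0},
    {"name": "seqscan+sort", "startup": 500.0, "total": 600.0},
]
```

At an assumed 10% fetch the indexscan wins (about 100.9 vs. 510); at fraction 1.0 the sort plan wins (600 vs. 1000) — which illustrates Tom's point that a non-cursor select, where all rows must be retrieved, should be optimized purely on total execution time.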
[ { "msg_contents": "Hi,\n\nI start yesterday CVS PostgreSQL server, and saw strange thing:\nfrom user postgres:\n# create database test;\nCREATE\n# \\c test;\n#create user bobson with password '1' nocreatedb nocreateuser;\nCREATE\n#create table a (a int4);\nCREATE\n#revoke all on a from public;\nCHANGE\n\nand now from user bobson after conecting to test database:\n#insert into a values ('1');\nINSERT 19104 1\n\nhmmm... looks like bug. Or I miss something?\n\nso next from user postgres:\n#revoke all on a from bobson;\nCHANGE\n\nand from user bobson after connect:\n#delete from a;\nDELETE 1\n\nPostgres ignore access permissions ?\nBTW... in my pg_hba.conf\nlocal\t\t\tpassword\n...\n \n\nregards\nRobert 'BoBsoN' Partyka\n\n", "msg_date": "Fri, 3 Nov 2000 18:57:43 +0100 (CET)", "msg_from": "Partyka Robert <[email protected]>", "msg_from_op": true, "msg_subject": "postgres not use table access permissions ?" }, { "msg_contents": "Partyka Robert <[email protected]> writes:\n> #create user bobson with password '1' nocreatedb nocreateuser;\n> CREATE\n> #create table a (a int4);\n> CREATE\n> #revoke all on a from public;\n> CHANGE\n> and now from user bobson after conecting to test database:\n> #insert into a values ('1');\n> INSERT 19104 1\n\n> hmmm... looks like bug. Or I miss something?\n\nOops. Strange though, this looks like it must be a very long-standing\nbug: aclinsert3 thinks it can delete any zero-permissions item from an\nACL array, whereas aclcheck has a hard-wired assumption that the world\nitem is always there. Could we have missed this for this long?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 13:22:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres not use table access permissions ? 
" }, { "msg_contents": "> Partyka Robert <[email protected]> writes:\n> > #create user bobson with password '1' nocreatedb nocreateuser;\n> > CREATE\n> > #create table a (a int4);\n> > CREATE\n> > #revoke all on a from public;\n> > CHANGE\n> > and now from user bobson after conecting to test database:\n> > #insert into a values ('1');\n> > INSERT 19104 1\n> \n> > hmmm... looks like bug. Or I miss something?\n> \n> Oops. Strange though, this looks like it must be a very long-standing\n> bug: aclinsert3 thinks it can delete any zero-permissions item from an\n> ACL array, whereas aclcheck has a hard-wired assumption that the world\n> item is always there. Could we have missed this for this long?\n\nIn 6.5.3 I've found other strange thing. When I give user INSERT, UPDATE\npermissions such user can do DELETE without DELETE permissions so in fact\nif I do \n# grant UPDATE, INSERT, SELECT on a to user1;\nit was treat as:\n# grant UPDATE, INSERT, DELETE, SELECT on a to user1;\n\nToday I want to test it on lastest CVS, but ... you know ;)\n\nregards\nRobert 'BoBsoN' Partyka\n\n", "msg_date": "Fri, 3 Nov 2000 19:32:55 +0100 (CET)", "msg_from": "Partyka Robert <[email protected]>", "msg_from_op": true, "msg_subject": "Re[2]: postgres not use table access permissions ? " }, { "msg_contents": "I wrote:\n> Oops. Strange though, this looks like it must be a very long-standing\n> bug: aclinsert3 thinks it can delete any zero-permissions item from an\n> ACL array, whereas aclcheck has a hard-wired assumption that the world\n> item is always there. Could we have missed this for this long?\n\nYup, we could've. It looks like \"revoke all from public\" has been\nbroken clear back to Postgres95, if not longer. Amazing that no one\nnoticed. Anyway, fixed now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 15:19:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres not use table access permissions ? 
" }, { "msg_contents": "Partyka Robert <[email protected]> writes:\n> if I do \n> # grant UPDATE, INSERT, SELECT on a to user1;\n> it was treat as:\n> # grant UPDATE, INSERT, DELETE, SELECT on a to user1;\n\nYeah. The underlying permission set is actually \"read, write, append\"\n(where write access also allows append). So UPDATE and DELETE are\ntreated the same, and allowing them also allows INSERT. This is\nsomething that probably oughta be changed some day. That'll doubtless\nbreak some user applications, though, since the true permission set is\nuser-visible (try psql's \\z command for example).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 15:34:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re[2]: postgres not use table access permissions ? " } ]
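The bug Tom fixes in this thread — aclinsert3 deleting zero-permission items while aclcheck hard-wired the assumption that a world entry is always present — comes down to a missing deny-by-default fallback. A toy model of the corrected lookup (not the real aclchk.c code; the entry representation is invented for illustration):

```python
def acl_allows(acl, user, mode):
    """acl is a list of (who, set_of_modes) entries."""
    # A user-specific entry, if present, decides outright.
    for who, modes in acl:
        if who == user:
            return mode in modes
    # Otherwise fall back to the world ('public') entry...
    for who, modes in acl:
        if who == "public":
            return mode in modes
    # ...and if REVOKE ALL FROM PUBLIC removed that entry entirely,
    # the answer must be 'no access', not a free pass.
    return False
```

Under the pre-fix assumption, the empty-ACL case — exactly what Robert's `revoke all on a from public` produced — fell through, and bobson could INSERT and DELETE at will.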
[ { "msg_contents": "I was installing the snapshot version last night, whenever I\ninitialized the database with \"initdb -E EUC_TW -D\n/usr/local/pgsql/data\", I got error message that the EUC_TW was not\nthe valid encoding. Is it a bug in the snapshot version?\n\n\nThanks\nDave\n", "msg_date": "Sat, 04 Nov 2000 03:11:40 +0800", "msg_from": "Dave <[email protected]>", "msg_from_op": true, "msg_subject": "EUC_TW not working in snapshot version" }, { "msg_contents": "> I was installing the snapshot version last night, whenever I\n> initialized the database with \"initdb -E EUC_TW -D\n> /usr/local/pgsql/data\", I got error message that the EUC_TW was not\n> the valid encoding. Is it a bug in the snapshot version?\n\nSorry, I forgot to add EUC_TW encoding. Should be ok now, please grab\nthe new snapshot tonight or apply included patches to\nsrc/interfaces/libpq/fe-connect.c if you are hurry.\n\n*** fe-connect.c\t2000/10/30 10:31:46\t1.143\n--- fe-connect.c\t2000/11/04 02:25:45\n***************\n*** 2712,2717 ****\n--- 2712,2718 ----\n \t\t{EUC_JP, \"EUC_JP\"},\n \t\t{EUC_CN, \"EUC_CN\"},\n \t\t{EUC_KR, \"EUC_KR\"},\n+ \t\t{EUC_TW, \"EUC_TW\"},\n \t\t{UNICODE, \"UNICODE\"},\n \t\t{MULE_INTERNAL, \"MULE_INTERNAL\"},\n \t\t{LATIN1, \"LATIN1\"},\n", "msg_date": "Sat, 04 Nov 2000 11:30:45 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EUC_TW not working in snapshot version" } ]
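Tatsuo's patch is a one-line addition to a name/id table in fe-connect.c: leave an encoding out of the table and its name simply fails to resolve, which is exactly the "not a valid encoding" error Dave saw at initdb time. A rough Python analogue of that lookup (the numeric ids here are placeholders, not the real pg_wchar.h values):

```python
ENCODINGS = {
    "EUC_JP": 0, "EUC_CN": 1, "EUC_KR": 2,
    "EUC_TW": 3,   # the entry Tatsuo's patch adds
    "UNICODE": 4, "MULE_INTERNAL": 5, "LATIN1": 6,
}

def encoding_id(name):
    # An encoding name missing from the table is simply invalid.
    if name not in ENCODINGS:
        raise ValueError(f"{name} is not a valid encoding name")
    return ENCODINGS[name]
```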
[ { "msg_contents": "Forwarded from Mosix mailing-list.\nIs this a crazy stuff?\n\nMosix isn't useful for shared memory processes (like apache and postgresql) \nit simply doesn't migrate such tasks.\nYou can do a simple connect script that , for example , do a\nround-robin connection to the different postgresql servers on the machineS.\n\n-----------------\n $dbh = connect_do_db();\n---------------\nfirst time it connects to machine1, second time to machine2 ...machineN, \nthan you re-start from 1, or you can poll the load of the machines and \nchoose the less loaded machine.\n\nOther points: start only one machine that init the shared mem structures \n(the first), not the others (is there a postgres flag to do this stuff). GFS \ndoesn't implement fcntl that posgres uses a bit.\nThe file to patch is in backend/storage/ipc/ipc.c, just add IPC_DIPC in \nshmget and semget.\n\nThis is are my ideas, i've not yet done all the stuff.\n\nIf this perform ok, also other similar programs will perform ok.\n\n\nbye\nvalter\n\n\n>From: \"Prescott, Richard (EXP)\" <[email protected]>\n>To: [email protected]\n>Subject: RE: PSQL, Mosix is unuseful\n>Date: Fri, 03 Nov 2000 14:07:22 -0500\n>\n>I just though about it.\n>\n>If you run DIPC + GFS + postgresql. Where the clients will connect ? I \n>mean. There is more than one machine answering to postgresql port! You \n>need to distribute the load. 
and that's the whole point of Mosix: one \n>virtual machine!\n>\n>Richard\n>\n> > -----Original Message-----\n> > From:\tvalter m [SMTP:[email protected]]\n> > Sent:\tNovember 01, 2000 9:45 AM\n> > To:\[email protected]; [email protected]\n> > Subject:\tRe: PSQL, Mosix is unuseful\n> >\n> > Hello everyone,\n> >\n> > no hope to use PostgreSQL + Mosix, because PostgreSQL uses shared \n>memory, as\n> > such it's not suitable for migration.\n> >\n> > Mosix isn't able now to give us a single system image, where a cluster \n>of\n> > machine acts as a single transparent machine.\n> >\n> > I'm trying DIPC+GFS to do this, i've patched the 2.2.17 linux kernel \n>with\n> > DIPC , changed PostgreSQL sources to create distributed shared memory \n>and\n> > semaphores (postgres uses only sem and shm), next steps are to try to\n> > start-up (the changed) postgres to use this distibuted ipc and a \n>central\n> > location for datafiles (GFS , NFS?, or a thing that permits a shared fs \n>with\n> > flock and fcntl) , then do a simple connect script that , for example , \n>do a\n> > round-robin connection to the different postgresql servers on the \n>machineS.\n> >\n> > DIPC has not a failover mechanism now.\n> >\n> > Ideas?\n> >\n> > bye\n> > valter\n> >\n> >\n> >\n> > >From: Johan Sj> �holm <[email protected]>\n> > >To: \"Mosix List\" <[email protected]>\n> > >Subject: PSQL\n> > >Date: Wed, 1 Nov 2000 14:09:34 +0100\n> > >\n> > >Hello everyone,\n> > >\n> > >Anyone that has tryied PostgreesSQL under Mosix ? 
How well does it run \n>?\n> > >Just checking before I get going whit it;)\n> > >\n> > >And I am also going to run Apache on those machines, will that work any\n> > >good ?\n> > >\n> > >Whit friendly Regards\n> > >\n> > >- Johan\n> > >\n> > >\n> > >--\n> > >To unsubscribe, send message to [email protected]\n> > >with \"unsubscribe\" in the message body.\n> > >\n> >\n> > \n>_________________________________________________________________________\n> > Get Your Private, Free E-mail from MSN Hotmail at \n>http://www.hotmail.com.\n> >\n> > Share information about yourself, create your own public profile at\n> > http://profiles.msn.com.\n> >\n> >\n> > --\n> > To unsubscribe, send message to [email protected]\n> > with \"unsubscribe\" in the message body.\n> >\n>\n>--\n>To unsubscribe, send message to [email protected]\n>with \"unsubscribe\" in the message body.\n>\n\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\nShare information about yourself, create your own public profile at \nhttp://profiles.msn.com.\n\n", "msg_date": "Fri, 03 Nov 2000 19:42:01 GMT", "msg_from": "\"valter m\" <[email protected]>", "msg_from_op": true, "msg_subject": "fwd: parallel PSQL ?" } ]
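Valter's "simple connect script" — round-robin over the PostgreSQL servers on the machines, or polling load and picking the least-loaded one — might look roughly like this. The hostnames and the load-polling hook are invented for illustration, and this is not tied to any real libpq API:

```python
import itertools

class BackendChooser:
    """Sketch of the connect-script idea from the thread: hand out
    backend hosts round-robin, or pick the least loaded one."""
    def __init__(self, hosts):
        self.hosts = list(hosts)
        self._cycle = itertools.cycle(self.hosts)

    def round_robin(self):
        # First connect goes to machine1, second to machine2,
        # ... machineN, then start again from the first.
        return next(self._cycle)

    def least_loaded(self, load_of):
        # Alternative policy: load_of is a polling hook mapping
        # host -> current load; choose the least loaded machine.
        return min(self.hosts, key=load_of)
```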
[ { "msg_contents": " From: Bruce Momjian <[email protected]>\n Date: Fri, 3 Nov 2000 15:11:00 -0500 (EST)\n\n\t cvs log -rREL7_0_PATCHES\n\n I want just log entries that are part of the branch. I get all entries.\n\nWhat I see when I try this is that for files which have the\nREL7_0_PATCHES tag (i.e., files which are on the branch), I see only\nlog entries for the branch. For files which are do not have the\nREL7_0_PATCHES tag (i.e., are not on the branch), I see all log\nentries.\n\nFor example, in the top pgsql directory,\n cvs log -rREL7_0_PATCHES HISTORY\ngives me only log entries for the branch for the HISTORY file.\nHowever,\n cvs log -rREL7_0_PATCHES GNUmakefile.in\ngives me all log entries. It also gives me this warning:\n cvs server: warning: no revision `REL7_0_PATCHES' in `/home/projects/pgsql/cvsroot/pgsql/GNUmakefile.in,v'\n\nIs this also what you see?\n\nThe natural way to fix this ought to be\n cvs co -rREL7_0_PATCHES pgsql\n cvs log .\nUnfortunately, I tried it, and cvs log, I believe erroneously, seems\nto pick up all files in the directory, even if they have not been\nchecked out.\n\nI can tell you a hideous kludge to avoid this, but I can't claim that\nit is the way to operate. Check out the branch using the -r option as\nabove. Then do this:\n find . -name CVS -type d -exec touch '{}/Entries.Static' \\;\nAfter that, in the same directory, do\n cvs log -rREL7_0_PATCHES\n\nI'd hate to have to explain why that works.\n\nWhich version of CVS are you running on the server? When I find some\ntime I'll see about fixing cvs log.\n\nIan\n", "msg_date": "3 Nov 2000 12:29:32 -0800", "msg_from": "Ian Lance Taylor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 7.0.3 branded" } ]
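The asymmetry Ian describes — `cvs log -rREL7_0_PATCHES` showing only branch revisions for files that carry the tag, but the entire history for files that do not — can be captured in a few lines. This is a toy model of the observed behaviour, not of CVS internals:

```python
def branch_log(files, branch):
    """files maps name -> {'tags': set, 'log': [(rev_branch, msg), ...]}."""
    out = {}
    for name, info in files.items():
        if branch in info["tags"]:
            # On-branch file: only revisions from the branch.
            out[name] = [m for b, m in info["log"] if b == branch]
        else:
            # Untagged file: the whole history leaks through
            # (plus a 'no revision ...' warning in real cvs).
            out[name] = [m for b, m in info["log"]]
    return out
```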
[ { "msg_contents": " From: Bruce Momjian <[email protected]>\n Date: Fri, 3 Nov 2000 15:35:01 -0500 (EST)\n\n > The natural way to fix this ought to be\n > cvs co -rREL7_0_PATCHES pgsql\n > cvs log .\n > Unfortunately, I tried it, and cvs log, I believe erroneously, seems\n > to pick up all files in the directory, even if they have not been\n > checked out.\n > \n > I can tell you a hideous kludge to avoid this, but I can't claim that\n > it is the way to operate. Check out the branch using the -r option as\n > above. Then do this:\n > find . -name CVS -type d -exec touch '{}/Entries.Static' \\;\n > After that, in the same directory, do\n > cvs log -rREL7_0_PATCHES\n > \n > I'd hate to have to explain why that works.\n\n Does this cause any other problems, or does it just affect log?\n\nThe main effect is that a cvs update in that directory will not pick\nup any newly added files. That will catch you by surprise after a\nwhile, so I wouldn't recommend leaving the Entries.Static files around\nforever.\n\n > Which version of CVS are you running on the server? When I find some\n > time I'll see about fixing cvs log.\n\n Concurrent Versions System (CVS) 1.10.3 (client/server)\n\n I couldn't imagine cvs was so broken as to do what it is doing, so I\n concluded I was doing something wrong. Can I share this email with the\n hackers list?\n\nCVS is a long aggregation of hacks. Heck, the first version was a\nbunch of shell scripts. Since there is no theory underlying CVS, it's\neasy to get the corner cases wrong unless you test them. I would\nguess that the author of the current cvs log implementation didn't\ntest this sort of thing. (The author in question was, um, me,\nalthough I think I might be able to blame John Gilmore for this\nparticular feature.)\n\nYes, please go ahead and share these e-mail messages if you like.\n\nBy the way, I gather you spoke with Nathan Meyers at the free database\nsummit. 
I'm co-founder and CTO of Zembu, where he works.\n\nIan\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n--ELM973373840-2705-0_\nContent-Transfer-Encoding: 7bit\nContent-Type: text/plain\nContent-Disposition: inline; filename=\"/tmp/log\"\n\n\n momjian\n/doc/FAQ\n\n Update FAQ.\n\n---\n momjian\n/doc/FAQ_BSDI\n\n Update bsdi faq.\n\n---\n momjian\n/doc/FAQ_DEV\n\n update developers faq\n\n---\n momjian\n/src/interfaces/jdbc/postgresql/jdbc1/DatabaseMetaData.java\n\n Brand 7.1 release. Also update jdbc version in release branch.\n\n---\n momjian\n/src/interfaces/jdbc/postgresql/jdbc2/DatabaseMetaData.java\n\n Brand 7.1 release. Also update jdbc version in release branch.\n\n---\n thomas\n/doc/src/sgml/inherit.sgml\n/doc/src/sgml/query.sgml\n/doc/src/sgml/release.sgml\n\n Fix markup to allow doc building.\n\n---\n thomas\n/doc/src/sgml/sql.sgml\n\n Fix markup to allow doc building.\n\n---\n momjian\n/src/interfaces/jdbc/CHANGELOG\n/src/interfaces/jdbc/Makefile\n/src/interfaces/jdbc/example/basic.java\n/src/interfaces/jdbc/org/postgresql/Connection.java\n/src/interfaces/jdbc/org/postgresql/ResultSet.java\n/src/interfaces/jdbc/org/postgresql/jdbc1/Connection.java\n/src/interfaces/jdbc/org/postgresql/jdbc1/ResultSet.java\n/src/interfaces/jdbc/org/postgresql/jdbc2/Connection.java\n/src/interfaces/jdbc/org/postgresql/jdbc2/ResultSet.java\n\n Backpatch jdbc fixes into 7.0.X.\n\n---\n momjian\n/src/interfaces/jdbc/org/postgresql/jdbc2/Statement.java\n\n Backpatch jdbc fixes into 7.0.X.\n\n---\n tgl\n/src/backend/storage/large_object/inv_api.c\n\n Back-patch large-object fix.\n\n---\n momjian\n/doc/FAQ\n\n Update FAQ.\n\n---\n tgl\n/src/backend/commands/copy.c\n\n Back-patch COPY WITH OIDS leak fix.\n\n---\n tgl\n/src/backend/utils/adt/like.c\n/src/backend/utils/adt/regexp.c\n/src/backend/utils/adt/varchar.c\n\n 
Back-patch StrNCpy fix.\n\n---\n tgl\n/src/backend/optimizer/path/indxpath.c\n\n Backpatch backwards-index-scan fix.\n\n---\n ishii\n/src/backend/utils/time/tqual.c\n\n SELECT ... FOR UPDATE neglects duplicate key checking.\n patches submitted by Hiroshi Inoue.\n\n---\n tgl\n/src/backend/optimizer/plan/planner.c\n\n Back-patch primary fix for planner recursion bug.\n\n---\n scrappy\n/src/configure\n/src/configure.in\n\n \n backpatch the --enable-syslog functionality to REL7_0 branch\n\n---\n scrappy\n/src/configure\n/src/configure.in\n\n \n oops, in v7.x its USE_SYSLOG, not ENABLE_SYSLOG\n modify config.h.in so that it gets set by configure properly\n\n---\n scrappy\n/src/include/config.h.in\n\n \n oops, in v7.x its USE_SYSLOG, not ENABLE_SYSLOG\n modify config.h.in so that it gets set by configure properly\n\n---\n tgl\n/src/backend/tcop/postgres.c\n\n Back-patch fix to ensure we abort any open transaction at backend exit.\n\n---\n ishii\n/src/bin/psql/describe.c\n\n Fix psql crash. If MULTIBYTE is enabled, \\l+ dumps core due to\n SQL buffer in listAllDbs is just too small.\n\n---\n tgl\n/src/include/executor/nodeMaterial.h\n/src/backend/executor/execAmi.c\n/src/backend/executor/nodeMaterial.c\n\n Back-patch fix for bogus plans involving non-mark/restorable plan\n as inner plan of a mergejoin.\n\n---\n tgl\n/src/backend/optimizer/plan/createplan.c\n\n Back-patch fix for bogus plans involving non-mark/restorable plan\n as inner plan of a mergejoin.\n\n---\n ishii\n/src/pl/plpgsql/src/scan.l\n\n Allow PL/pgSQL accept non ascii identifiers\n\n---\n tgl\n/src/backend/commands/vacuum.c\n\n Back-patch fix to ensure that VACUUM always calls FlushRelationBuffers.\n\n---\n inoue\n/src/backend/storage/lmgr/proc.c\n\n Cancel request while waiting for a lock should try to wake\n up sleeping processes.\n\n---\n tgl\n/src/bin/psql/help.c\n\n Back-patch fix for erroneous free() of getpwuid() result.\n\n---\n tgl\n/src/interfaces/odbc/info.c\n\n Back-patch fix to remove bogus 
use of int4out().\n\n---\n tgl\n/src/backend/optimizer/plan/subselect.c\n\n Back-patch fix to copy sub-Query nodes before planning them. This\n fixes problems with subselects appearing in contexts like COALESCE or\n BETWEEN, where parser will make multiple links to same subexpression.\n\n---\n tgl\n/src/backend/utils/adt/ri_triggers.c\n\n Apply Jeroen van Vianen's patch for failure to check heap_openr failure\n in RI triggers. This is fixed in another way in current sources, but\n might as well apply this patch to REL7_0 branch so that 7.0.3 need not\n suffer this crash.\n\n---\n tgl\n/src/backend/utils/adt/selfuncs.c\n\n Back-patch fix for erroneous selectivity of not-equals.\n\n---\n tgl\n/src/backend/utils/adt/ruleutils.c\n\n Back-patch fix for erroneous use of strcmp().\n\n---\n tgl\n/src/backend/storage/smgr/md.c\n\n Back-patch fix for 'Sorcerer's Apprentice' syndrome wherein md.c would\n create a vast quantity of zero-length files if asked to access a block\n number far beyond the actual end of a relation.\n\n---\n tgl\n/src/backend/storage/smgr/smgr.c\n\n Back-patch fix to include kernel errno message in all smgr elog messages.\n\n---\n tgl\n/src/pl/tcl/Makefile\n/src/bin/pgtclsh/Makefile\n\n Back-patch fix for '.' not in PATH at build time, per SL Baur.\n\n---\n tgl\n/src/backend/storage/file/fd.c\n\n Back-patch fix that allows AllocateFile() to return errno=ENFILE/EMFILE\n after we are no longer able to close any more VFDs. 
This is needed to\n avoid postmaster crash under out-of-file-descriptors conditions.\n\n---\n tgl\n/src/bin/pg_dump/pg_dump.c\n/src/bin/pg_dump/pg_dump.h\n\n Back-patch fix to make pg_dump dump 'iscachable' flag for functions.\n\n---\n tgl\n/src/backend/optimizer/plan/setrefs.c\n\n Back-patch fix for subselect in targetlist of Append node.\n\n---\n tgl\n/src/include/optimizer/paths.h\n/src/include/optimizer/planmain.h\n/src/backend/optimizer/path/pathkeys.c\n/src/backend/optimizer/plan/initsplan.c\n/src/backend/optimizer/plan/planmain.c\n\n Back-patch code to deduce implied equalities from transitivity of\n mergejoin clauses, and add these equalities to the given WHERE clauses.\n This is necessary to ensure that sort keys we think are equivalent\n really are equivalent as soon as their rels have been joined. Without\n this, 7.0 may create an incorrect mergejoin plan.\n\n---\n tgl\n/src/backend/storage/buffer/bufmgr.c\n\n Back-patch fix to grab read lock on a buffer while it is written out.\n\n---\n tgl\n/src/backend/catalog/heap.c\n\n Back-patch fix for TRUNCATE failure on relations with indexes.\n\n---\n inoue\n/src/backend/storage/buffer/bufmgr.c\n\n avoid database-wide restart on write error\n\n---\n tgl\n/src/backend/executor/nodeMaterial.c\n\n Back-patch nodeMaterial to honor chgParam by recomputing its output.\n\n---\n tgl\n/src/backend/commands/vacuum.c\n\n Patch VACUUM problem with moving chain of update tuples when source\n and destination of a tuple lie on the same page.\n\n---\n tgl\n/src/backend/commands/user.c\n\n Back-patch CommandCounterIncrement fix.\n\n---\n tgl\n/src/backend/utils/adt/formatting.c\n\n Back-patch fix for AM/PM boundary problem in to_char().\n Fix from Karel Zak, 10/18/00.\n\n---\n tgl\n/src/backend/utils/adt/date.c\n\n Fix time_larger, time_smaller, timetz_larger, timetz_smaller to meet\n nodeAgg.c's expectation that aggregate transition functions never return\n pointers to their input values. 
This is fixed in a much better way in\n current sources, but in 7.0.* it's gotta be done like this.\n\n---\n tgl\n/src/backend/utils/adt/formatting.c\n\n Fix to_char() to avoid coredump on NULL input. Not needed in current\n sources due to fmgr rewrite, but 7.0.3 can use the patch...\n\n---\n tgl\n/src/backend/storage/buffer/bufmgr.c\n\n Back-patch fix for bogus clearing of BufferDirtiedByMe.\n\n---\n ishii\n/src/backend/utils/adt/varchar.c\n\n Fix for inserting/copying longer multibyte strings into bpchar data\n types.\n\n---\n wieck\n/src/bin/pg_dump/Makefile.in\n\n New dump utility script pg_dumpaccounts.\n \n Dumps pg_shadow and pg_group (derived from pg_dumpall).\n \n Jan\n\n---\n wieck\n/src/bin/pg_dump/Makefile.in\n\n Revoked changes for pg_dumpaccounts\n \n Script will go into the contrib directory.\n \n Jan\n\n---\n wieck\n/contrib/pg_dumpaccounts/Makefile\n/contrib/pg_dumpaccounts/README\n\n Added pg_dumpaccounts utility script in contrib.\n \n Derived from pg_dumpall it just dumps the pg_shadow and\n pg_group contents.\n \n Jan\n\n---\n wieck\n/contrib/pg_dumpaccounts/pg_dumpaccounts\n\n Added pg_dumpaccounts utility script in contrib.\n \n Derived from pg_dumpall it just dumps the pg_shadow and\n pg_group contents.\n \n Jan\n\n---\n momjian\n/HISTORY\n/INSTALL\n/README\n/register.txt\n/doc/FAQ\n/doc/TODO\n/doc/bug.template\n/doc/src/FAQ.html\n/doc/src/sgml/install.sgml\n/doc/src/sgml/release.sgml\n/src/include/version.h.in\n/src/interfaces/jdbc/postgresql/jdbc1/DatabaseMetaData.java\n/src/interfaces/jdbc/postgresql/jdbc2/DatabaseMetaData.java\n/src/interfaces/libpq/libpq.rc\n\n Brand 7.0.3.\n\n---\n momjian\n/HISTORY\n/doc/src/sgml/release.sgml\n\n cleanup\n\n---\n\n--ELM973373840-2705-0_--\n", "msg_date": "3 Nov 2000 12:48:07 -0800", "msg_from": "Ian Lance Taylor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 7.0.3 branded" } ]
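Ian declines to explain why his Entries.Static kludge works, but plausibly the mechanism is this: cvs log normally walks every file in the repository directory, checked out or not, while an Entries.Static file declares the working directory's file list fixed, so only the actually checked-out Entries (here, the branch's files) are consulted. A toy model of that conjectured behaviour (not CVS source, and explicitly a guess):

```python
def files_to_log(repo_files, entries, static):
    # Without Entries.Static, cvs log picks up every file in the
    # repository directory; with it, only the checked-out Entries.
    return sorted(entries) if static else sorted(repo_files)
```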
[ { "msg_contents": "With the new oid file naming, the alternative database location feature\nhas disappeared. Not good.\n\nAlso, is there any merit in keeping the #ifdef OLD_FILE_NAMING code path?\n\nI could probably go through and fix this, but I'm not fully aware about\nthe larger plan of table spaces that's apparently sneaking in here (cf.\nRelFileNode.tblNode).\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 3 Nov 2000 22:09:41 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Alternative database locations are broken" } ]
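The OID-based file naming that broke the alternative-location feature builds a relation's path purely from numeric parts — the $PGDATA/base/TABLESPACE/TABLE[.SEGMENT] layout Tom spells out in the follow-up discussion, where the tablespace directory may itself be a symlink to another volume. A sketch of just that naming scheme (path construction only; the example OIDs are arbitrary):

```python
import os

def relation_path(pgdata, tblspace, relnode, segment=0):
    # TABLESPACE and TABLE are numeric strings from RelFileNode;
    # the SEGMENT suffix is appended only for segmented relations.
    name = str(relnode) if segment == 0 else f"{relnode}.{segment}"
    return os.path.join(pgdata, "base", str(tblspace), name)
```

The low-level access code never cares whether `base/18721` is a plain subdirectory or a symlink — which is exactly why the symlink implementation of alternative locations works.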
[ { "msg_contents": "> With the new oid file naming, the alternative database \n> location feature has disappeared. Not good.\n> \n> Also, is there any merit in keeping the #ifdef \n> OLD_FILE_NAMING code path?\n\nNo one. I've removed some of old code but not all, sorry.\n\n> I could probably go through and fix this, but I'm not fully \n> aware about the larger plan of table spaces that's apparently\n> sneaking in here (cf. RelFileNode.tblNode).\n\nThis would be very appreciated. Table spaces will be in 7.2,\nhopefully. For the moment tblNode is just database OID\n(InvalidOid for shared relations). I think that to handle\nlocations we could symlink catalogs -\nln -s path_to_database_in_some_location .../base/DatabaseOid\n\nTIA,\nVadim\n", "msg_date": "Fri, 3 Nov 2000 13:23:58 -0800 ", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Alternative database locations are broken" }, { "msg_contents": "Mikheev, Vadim writes:\n\n> > I could probably go through and fix this, but I'm not fully \n> > aware about the larger plan of table spaces that's apparently\n> > sneaking in here (cf. RelFileNode.tblNode).\n> \n> This would be very appreciated. Table spaces will be in 7.2,\n> hopefully. For the moment tblNode is just database OID\n> (InvalidOid for shared relations).\n\nI think we have a bit of a problem here. In order to restore the\npreviously existing alternative location feature we'd somehow have to\nstick this information into RelFileNode. Firstly, alternative locations\nwere referenced as text strings (usually environment variable names),\nwhich doesn't seem appropriate to stick into RelFileNode. We could make a\nseparate system catalog (as I have suggested several times) to assign oids\nto these locations.\n\nBut RelFileNode already claims to store the identity of the table space,\nbeing the database oid. This doesn't work because a location can contain\nmore than one database. 
So effectively we'd need to redefine RelFileNode\nsomething like 'struct { locationid, dbid, relid }'.\n\nI'm afraid I feel incompetent here. RelFileNode is used in too many\nplaces that I don't understand.\n\n> I think that to handle locations we could symlink catalogs - ln -s\n> path_to_database_in_some_location .../base/DatabaseOid\n\nBut that's a kludge. We ought to discourage people from messing with the\nstorage internals.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 4 Nov 2000 14:37:10 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Alternative database locations are broken" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> But RelFileNode already claims to store the identity of the table space,\n> being the database oid. This doesn't work because a location can contain\n> more than one database. So effectively we'd need to redefine RelFileNode\n> something like 'struct { locationid, dbid, relid }'.\n\nNo, I don't think so. The direction we want to head in is that RelFileNode\nshould identify a tablespace (physical storage location) and a table.\nThere isn't any need for a hardwired association between tablespaces and\ndatabases, at least not at this level.\n\nIIRC, the proposed design that Vadim was basing this on is that the\nactual path to a particular file would be\n\t$PGDATA/base/TABLESPACE/TABLE\nor for a segmented relation\n\t$PGDATA/base/TABLESPACE/TABLE.SEGMENT\nwhere TABLESPACE, TABLE, and SEGMENT are all numeric strings --- the\nfirst two come from RelFileNode and the last from the target block #.\n\nIn this design, the tablespace directories appearing under $PGDATA/base\ncan either be plain subdirectories, or symlinks to directories that live\nelsewhere. 
The low-level file access code doesn't know or care which.\n\nThe questions you are asking seem to concern design of a user interface\nthat lets these directories or symlinks get set up via SQL commands\nrather than direct manual intervention. I agree that's a good thing to\nhave, but it's completely separate from the low-level access code.\n\nThe current implementation has one physical-subdirectory tablespace per\ndatabase, but I don't see any reason that multiple databases couldn't\nshare a tablespace, or that tables in a database couldn't be scattered\nacross multiple tablespaces. We just need to design the commands that\nlet the dbadmin control this.\n\nBTW, we could eliminate special-casing for the shared system relations\nif we treat them as stored in another tablespace. For example, make\n$PGDATA/base/0 be a symlink to ../global, or just move the stuff\ncurrently in $PGDATA/global to a subdirectory of $PGDATA/base.\n\n>> I think that to handle locations we could symlink catalogs - ln -s\n>> path_to_database_in_some_location .../base/DatabaseOid\n\n> But that's a kludge. We ought to discourage people from messing with the\n> storage internals.\n\nIt's not a kluge, it's a perfectly fine implementation. The only kluge\nhere is if people have to reach in and establish such symlinks by hand.\nWe want to set up a user interface that hides the implementation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 04 Nov 2000 12:43:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternative database locations are broken " }, { "msg_contents": "> >> I think that to handle locations we could symlink catalogs - ln -s\n> >> path_to_database_in_some_location .../base/DatabaseOid\n> \n> > But that's a kludge. We ought to discourage people from messing with the\n> > storage internals.\n> \n> It's not a kluge, it's a perfectly fine implementation. 
The only kluge\n> here is if people have to reach in and establish such symlinks by hand.\n> We want to set up a user interface that hides the implementation.\n\nAgreed. And I don't see problems with handling this at CREATE DATABASE\ntime. Create database dir in specified location, create symlink from\nbase dir and remember location name in pg_database.datpath.\n\nVadim\n\n\n", "msg_date": "Sat, 4 Nov 2000 22:09:16 -0800", "msg_from": "\"Vadim Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternative database locations are broken " }, { "msg_contents": "On Sat, Nov 04, 2000 at 10:09:16PM -0800, Vadim Mikheev wrote:\n> > >> I think that to handle locations we could symlink catalogs - ln -s\n> > >> path_to_database_in_some_location .../base/DatabaseOid\n> > \n> > > But that's a kludge. We ought to discourage people from messing with the\n> > > storage internals.\n> > \n> > It's not a kluge, it's a perfectly fine implementation. The only kluge\n> > here is if people have to reach in and establish such symlinks by hand.\n> > We want to set up a user interface that hides the implementation.\n> \n> Agreed. And I don't see problems with handling this at CREATE DATABASE\n> time. Create database dir in specified location, create symlink from\n> base dir and remember location name in pg_database.datpath.\n> \n\nHmm, I know NT's not really a target, supported OS, but enshrining\nsymlinks in a the design of a backend feature makes it really difficult\nto keep even the semblance of support. Vadim's work _finally_ stomped\nthe mixed case tablename bug (\"Test\" and \"test\" would collide because of\nNTFSi being case insensitive). Symlinks are, I think, only supported via\na Cygwin kludge. \n\n'Course, one could argue that running pgsql via Cygwin is all a big\nkludge. I'm not even sure why I keep coming to the defense of the NT\nport: I'm not using it myself. 
I keep getting the feeling that there's a\nreal opportunity there: get pgsql onto developer's NT boxes, when their\nprojects need a real database, rather than springing for an MS-SQL or\nOracle license. Makes moving over to a _real_ operating system (when\nthey start to notice those stability problems) that much easier.\n\nBut seriously, there was a long thread concerning the appropriateness\nof using symlinks to manage storage, which I don't recall as coming\nto a conclusion. Admittedly, the opinion of those who take the bull by\nthe horns and actually write the code matters more (rough consensus and\nworking code, as they say).\n\nRoss\n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.\n", "msg_date": "Mon, 6 Nov 2000 11:08:09 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternative database locations are broken" }, { "msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> Hmm, I know NT's not really a target, supported OS, but enshrining\n> symlinks in the design of a backend feature makes it really difficult\n> to keep even the semblance of support.\n\nAs long as we don't require symlinks to exist for standard setups,\nI have no trouble at all with decreeing that alternate database\nlocations won't work on NT.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 Nov 2000 12:13:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternative database locations are broken " }, { "msg_contents": "On Mon, Nov 06, 2000 at 12:13:31PM -0500, Tom Lane wrote:\n> \"Ross J.
Reedstrom\" <[email protected]> writes:\n> > Hmm, I know NT's not really a target, supported OS, but enshrining\n> > symlinks in the design of a backend feature makes it really difficult\n> > to keep even the semblance of support.\n> \n> As long as we don't require symlinks to exist for standard setups,\n> I have no trouble at all with decreeing that alternate database\n> locations won't work on NT.\n\nJust tested 'ln -s' under Cygwin bash, and at least less and cat follow\nthe link, notepad.exe and wordpad.exe open the link file. Checking the\nCygwin FAQ, I see that this was a red herring: postgres, being a cygwin\napp, should follow symlinks just fine.\n\nRoss\n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.\n", "msg_date": "Mon, 6 Nov 2000 11:40:47 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternative database locations are broken" }, { "msg_contents": "Okay, so we'll do the symlinks.\n\nCREATE DATABASE xxx WITH LOCATION='/else/where';\n\nwill clone ('cp -r') template1 in /else/where/base/<id> and create a\nsymlink to there from $PGDATA/base/<id>. The '/else/where' location will\nbe stored in pg_database.datpath.\n\nHow do we control the allowed paths? Should we continue with the\nenvironment variables? Perhaps a config option listing the allowed\ndirectories?
A system catalog?\n\nSomehow I also get the feeling that pg_dumpall should be saving these\npaths...\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 6 Nov 2000 20:33:55 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternative database locations are broken" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> How do we control the allowed paths? Should we continue with the\n> environment variables? Perhaps a config option listing the allowed\n> directories? A system catalog?\n\nThe environment variables are a pretty sucky mechanism, IMHO;\nan installation-wide catalog would be nicer. HOWEVER: I do not\nthink it's reasonable to try to make that happen for 7.1, considering\nhow close we are to beta. So I recommend that we continue to base\nallowed paths on environment variables for this release.\n\n> Somehow I also get the feeling that pg_dumpall should be saving these\n> paths...\n\nYup, probably so. If you stick the LOCATION string into\npg_database.datpath (which no longer has any other use)\nthen it'd be easy to make pg_dumpall do so.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 Nov 2000 15:08:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternative database locations are broken " } ]
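The CREATE DATABASE ... WITH LOCATION scheme the thread above converges on (real directory under the alternate location, symlink from $PGDATA/base/<id>, location string remembered in pg_database.datpath) boils down to a directory plus a symlink. A minimal sketch of that filesystem layout, as a hypothetical illustration only — the helper name and sample OID are invented, and the real work happens in the backend's C code:

```python
import os
import tempfile

def create_database_at_location(pgdata, location, db_oid):
    # The database directory really lives under the alternate location...
    real_dir = os.path.join(location, "base", str(db_oid))
    os.makedirs(real_dir)
    # ...and $PGDATA/base/<oid> is just a symlink pointing at it.
    link = os.path.join(pgdata, "base", str(db_oid))
    os.symlink(real_dir, link)
    # The location string itself would be recorded in pg_database.datpath.
    return link, real_dir

# Demo on throwaway directories standing in for $PGDATA and the location.
pgdata = tempfile.mkdtemp()
location = tempfile.mkdtemp()
os.makedirs(os.path.join(pgdata, "base"))
link, real_dir = create_database_at_location(pgdata, location, 16385)
```

Tom's point about the low-level code falls out directly: file access always opens paths under $PGDATA/base/<oid>/ and never needs to know whether that path is a plain directory or a symlink.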
[ { "msg_contents": ">What's the matter with the website? I can access it\n>just fine. Perhaps you're finding a bad mirror site?\n\n>Vince.\n\n\nAfter sending a request for \"http://www.postgresql.org/\", I got this:\n\"Error 400 - Proxy Error: Host name not recognized or host not found - URL http://www.404627944.com:81/postgresql/.\nExplanation: The server could not connect to the requested hostname due to bad syntax or an unknown host. \n\nAction: Check to make sure the URL you entered is correct, and then retry your request. \n\nURL: http://www.404627944.com:81/postgresql/ \"\n\nI used to get either this or an Asian site.\n\n\n----------------------------------------------------------------\nGet your free email from AltaVista at http://altavista.iname.com\n", "msg_date": "Fri, 3 Nov 2000 16:47:35 -0500 (EST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: me too" }, { "msg_contents": "On Fri, 3 Nov 2000 [email protected] wrote:\n\n> >What's the matter with the website? I can access it\n> >just fine. Perhaps you're finding a bad mirror site?\n>\n> >Vince.\n>\n>\n> After sending a request for \"http://www.postgresql.org/\", I got this:\n> \"Error 400 - Proxy Error: Host name not recognized or host not found - URL http://www.404627944.com:81/postgresql/.\n> Explanation: The server could not connect to the requested hostname due to bad syntax or an unknown host.\n>\n> Action: Check to make sure the URL you entered is correct, and then retry your request.\n>\n> URL: http://www.404627944.com:81/postgresql/ \"\n>\n> I used to get either this or an Asian site.\n\nLooks like penguinpowered is having problems.
Should be ok now.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 3 Nov 2000 16:57:35 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: me too" } ]
[ { "msg_contents": "Hi\n\nYeach ... I can revoke from public now ;), but .....\nlook at this:\n#create database a\nCREATE\n\n#\\c a\n#create table ala(a int4);\nCREATE\n\n#\\z\nAccess permissions for database \"a\"\n Relation | Access permissions\n----------+--------------------\n ala |\n(1 row)\n\n#revoke all on ala from public;\nCHANGE\n#\\z\nAccess permissions for database \"a\"\n Relation | Access permissions\n----------+-----------------------\n ala | {\"=\",\"postgres=arwR\"}\n(1 row)\n\nhmmm.... is everything work ok ?\n\nregards\nRobert 'BoBsoN' Partyka\n\n", "msg_date": "Fri, 3 Nov 2000 23:33:43 +0100 (CET)", "msg_from": "Partyka Robert <[email protected]>", "msg_from_op": true, "msg_subject": "tables permissions once again" }, { "msg_contents": "Partyka Robert <[email protected]> writes:\n> #create table ala(a int4);\n> CREATE\n\n> #\\z\n> Access permissions for database \"a\"\n> Relation | Access permissions\n> ----------+--------------------\n> ala |\n> (1 row)\n\n> #revoke all on ala from public;\n> CHANGE\n> #\\z\n> Access permissions for database \"a\"\n> Relation | Access permissions\n> ----------+-----------------------\n> ala | {\"=\",\"postgres=arwR\"}\n> (1 row)\n\n> hmmm.... is everything work ok ?\n\nYup, that's the expected behavior. Initially the relacl entry for a new\ntable is NULL, which the system will interpret as default access rights\n(namely, world=no rights, owner=all rights). As soon as you issue a\nGRANT or REVOKE, a real ACL gets installed --- which will consist of the\ndefault access rights made explicit and then modified per your GRANT or\nREVOKE. At that point you see something in \\z, whereas psql doesn't\nshow anything in \\z for a NULL acl entry.\n\nAFAIK it's always worked like that...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 17:56:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tables permissions once again " } ]
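Tom's explanation in the thread above — a NULL relacl is read as the default rights (nothing for PUBLIC, everything for the owner), and the first GRANT or REVOKE materializes those defaults before applying the change — can be modeled in a few lines. This is a toy model, not the backend's ACL code; the entry strings simply mimic the \z output quoted in the thread ("arwR" being the 7.x-era append/read/write/rule bits):

```python
def default_acl(owner):
    # A NULL relacl means: no rights for PUBLIC ("="), all rights for the owner.
    return ["=", owner + "=arwR"]

def revoke_all_from_public(acl, owner):
    # The first GRANT/REVOKE materializes the defaults, then applies the change.
    if acl is None:
        acl = default_acl(owner)
    # Strip every right from the PUBLIC entry (the one with no name before "=").
    return ["=" if entry.startswith("=") else entry for entry in acl]

acl = None                                     # freshly created table: \z shows nothing
acl = revoke_all_from_public(acl, "postgres")  # REVOKE ALL ON ala FROM PUBLIC
```

Since the materialized default already grants PUBLIC nothing, the REVOKE removes no actual rights — which is why the resulting {"=","postgres=arwR"} looks surprising but is exactly the behavior Tom describes.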
[ { "msg_contents": "We've expended a lot of worry and discussion in the past about what\nhappens if the OID generator wraps around. However, there is another\n4-byte counter in the system: the transaction ID (XID) generator.\nWhile OID wraparound is survivable, if XIDs wrap around then we really\ndo have a Ragnarok scenario. The tuple validity checks do ordered\ncomparisons on XIDs, and will consider tuples with xmin > current xact\nto be invalid. Result: after wraparound, your whole database would\ninstantly vanish from view.\n\nThe first thought that comes to mind is that XIDs should be promoted to\neight bytes. However there are several practical problems with this:\n* portability --- I don't believe long long int exists on all the\nplatforms we support.\n* performance --- except on true 64-bit platforms, widening Datum to\neight bytes would be a system-wide performance hit, which is a tad\nunpleasant to fix a scenario that's not yet been reported from the\nfield.\n* disk space --- letting pg_log grow without bound isn't a pleasant\nprospect either.\n\nI believe it is possible to fix these problems without widening XID,\nby redefining XIDs in a way that allows for wraparound. Here's my\nplan:\n\n1. Allow XIDs to range from 0 to WRAPLIMIT-1 (WRAPLIMIT is not\nnecessarily 4G, see discussion below). Ordered comparisons on XIDs\nare no longer simply \"x < y\", but need to be expressed as a macro.\nWe consider x < y if (y - x) % WRAPLIMIT < WRAPLIMIT/2.\nThis comparison will work as long as the range of interesting XIDs\nnever exceeds WRAPLIMIT/2. Essentially, we envision the actual value\nof XID as being the low-order bits of a logical XID that always\nincreases, and we assume that no extant XID is more than WRAPLIMIT/2\ntransactions old, so we needn't keep track of the high-order bits.\n\n2. To keep the system from having to deal with XIDs that are more than\nWRAPLIMIT/2 transactions old, VACUUM should \"freeze\" known-good old\ntuples. 
To do this, we'll reserve a special XID, say 1, that is always\nconsidered committed and is always less than any ordinary XID. (So the\nordered-comparison macro is really a little more complicated than I said\nabove. Note that there is already a reserved XID just like this in the\nsystem, the \"bootstrap\" XID. We could simply use the bootstrap XID, but\nit seems better to make another one.) When VACUUM finds a tuple that\nis committed good and has xmin < XmaxRecent (the oldest XID that might\nbe considered uncommitted by any open transaction), it will replace that\ntuple's xmin by the special always-good XID. Therefore, as long as\nVACUUM is run on all tables in the installation more often than once per\nWRAPLIMIT/2 transactions, there will be no tuples with ordinary XIDs\nolder than WRAPLIMIT/2.\n\n3. At wraparound, the XID counter has to be advanced to skip over the\nInvalidXID value (zero) and the reserved XIDs, so that no real transaction\nis generated with those XIDs. No biggie here.\n\n4. With the wraparound behavior, pg_log will have a bounded size: it\nwill never exceed WRAPLIMIT*2 bits = WRAPLIMIT/4 bytes. Since we will\nrecycle pg_log entries every WRAPLIMIT xacts, during transaction start\nthe xact manager will have to take care to actively clear its pg_log\nentry to zeroes (I'm not sure if it does that already, or just assumes\nthat new pg_log entries will start out zero). As long as that happens\nbefore the xact makes any data changes, it's OK to recycle the entry.\nNote we are assuming that no tuples will remain in the database with\nxmin or xmax equal to that XID from a prior cycle of the universe.\n\nThis scheme allows us to survive XID wraparound at the cost of slight\nadditional complexity in ordered comparisons of XIDs (which is not a\nreally performance-critical task AFAIK), and at the cost that the\noriginal insertion XIDs of all but recent tuples will be lost by\nVACUUM. 
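The ordered comparison in point 1 behaves like this (a minimal editorial sketch, not the actual macro — in particular, equality is excluded explicitly here, a detail the proposal leaves to the real implementation, and WRAPLIMIT uses the 1G default suggested at the end of this message):

```python
WRAPLIMIT = 1 << 30  # 1G, the suggested default

def xid_precedes(x, y):
    # x is logically older than y, assuming no two live XIDs are ever
    # more than WRAPLIMIT/2 transactions apart.  Python's % always
    # yields a non-negative result, standing in for unsigned arithmetic.
    return x != y and (y - x) % WRAPLIMIT < WRAPLIMIT // 2
```

A small post-wraparound XID therefore still compares as newer than a large pre-wraparound one: xid_precedes(WRAPLIMIT - 10, 5) holds, even though 5 < WRAPLIMIT - 10 numerically.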
The system doesn't particularly care about that, but old XIDs\ndo sometimes come in handy for debugging purposes. A possible\ncompromise is to overwrite only XIDs that are older than, say,\nWRAPLIMIT/4 instead of doing so as soon as possible. This would mean\nthe required VACUUM frequency is every WRAPLIMIT/4 xacts instead of\nevery WRAPLIMIT/2 xacts.\n\nWe have a straightforward tradeoff between the maximum size of pg_log\n(WRAPLIMIT/4 bytes) and the required frequency of VACUUM (at least\nevery WRAPLIMIT/2 or WRAPLIMIT/4 transactions). This could be made\nconfigurable in config.h for those who're intent on customization,\nbut I'd be inclined to set the default value at WRAPLIMIT = 1G.\n\nComments? Vadim, is any of this about to be superseded by WAL?\nIf not, I'd like to fix it for 7.1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 17:47:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Transaction ID wraparound: problem and proposed solution" }, { "msg_contents": "At 17:47 3/11/00 -0500, Tom Lane wrote:\n>* portability --- I don't believe long long int exists on all the\n>platforms we support.\n\nAre you sure of this, or is it just a 'last time I looked' statement. If\nthe latter, it might be worth verifying.\n\n\n>* performance --- except on true 64-bit platforms, widening Datum to\n>eight bytes would be a system-wide performance hit, \n\nYes, OIDs are used a lot, but it's not that bad, is it? Are there many\ntight loops with thousands of OID-only operations? I'd guess it's only one\nmore instruction & memory fetch.\n\n\n>* disk space --- letting pg_log grow without bound isn't a pleasant\n>prospect either.\n\nMaybe this can be achieved by wrapping XID for the log file only.\n\n\n>I believe it is possible to fix these problems without widening XID,\n>by redefining XIDs in a way that allows for wraparound. 
Here's my\n>plan:\n\nIt's a cute idea (elegant, even), but maybe we'd be running through hoops\njust for a minor performance gain (which may not exist, since we're adding\nextra comparisons via the macro) and for possible unsupported OSs. Perhaps\nOS's without 8 byte ints have to suffer a performance hit (ie. we declare a\nstruct with appropriate macros).\n\n\n>are no longer simply \"x < y\", but need to be expressed as a macro.\n>We consider x < y if (y - x) % WRAPLIMIT < WRAPLIMIT/2.\n\nYou mean you plan to limit PGSQL to only 1G concurrent transactions. Isn't\nthat a bit short sighted? ;-}\n\n\n>2. To keep the system from having to deal with XIDs that are more than\n>WRAPLIMIT/2 transactions old, VACUUM should \"freeze\" known-good old\n>tuples. \n\nThis is a problem for me; it seems to enshrine VACUUM in perpetuity.\n\n\n>4. With the wraparound behavior, pg_log will have a bounded size: it\n>will never exceed WRAPLIMIT*2 bits = WRAPLIMIT/4 bytes. Since we will\n>recycle pg_log entries every WRAPLIMIT xacts, during transaction start\n\nIs there any was we can use this recycling technique with 8-byte XIDs?\n\nAlso, will there be a problem with backup programs that use XID to\ndetermine newer records and apply/reapply changes?\n\n\n>This scheme allows us to survive XID wraparound at the cost of slight\n>additional complexity in ordered comparisons of XIDs (which is not a\n>really performance-critical task AFAIK)\n\nMaybe I'm really missing the amount of XID manipulation, but I'd be\nsurprised if 16-byte XIDs would slow things down much.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 04 Nov 2000 13:09:22 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction ID wraparound: problem and proposed\n solution" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n>> * disk space --- letting pg_log grow without bound isn't a pleasant\n>> prospect either.\n\n> Maybe this can be achieved by wrapping XID for the log file only.\n\nHow's that going to improve matters? pg_log is ground truth for XIDs;\nif you can't distinguish two XIDs in pg_log, there's no point in\ndistinguishing them elsewhere.\n\n> Maybe I'm really missing the amount of XID manipulation, but I'd be\n> surprised if 16-byte XIDs would slow things down much.\n\nIt's not so much XIDs themselves, as that I think we'd need to widen\ntypedef Datum too, and that affects manipulations of *all* data types.\n\nIn any case, the prospect of a multi-gigabyte, ever-growing pg_log file,\nwith no way to recover the space short of dump/initdb/reload, is\nawfully unappetizing for a high-traffic installation...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 21:29:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Transaction ID wraparound: problem and proposed solution " }, { "msg_contents": "Tom Lane wrote:\n> \n> Philip Warner <[email protected]> writes:\n> >> * disk space --- letting pg_log grow without bound isn't a pleasant\n> >> prospect either.\n> \n> > Maybe this can be achieved by wrapping XID for the log file only.\n> \n> How's that going to improve matters? 
pg_log is ground truth for XIDs;\n> if you can't distinguish two XIDs in pg_log, there's no point in\n> distinguishing them elsewhere.\n> \n> > Maybe I'm really missing the amount of XID manipulation, but I'd be\n> > surprised if 16-byte XIDs would slow things down much.\n> \n> It's not so much XIDs themselves, as that I think we'd need to widen\n> typedef Datum too, and that affects manipulations of *all* data types.\n> \n> In any case, the prospect of a multi-gigabyte, ever-growing pg_log file,\n> with no way to recover the space short of dump/initdb/reload, is\n> awfully unappetizing for a high-traffic installation...\n\nAgreed completely. I'd like to think I could have such an installation\nin the next year or so :) \n\nTo prevent a performance hit to those who don't want, is there a\npossibility of either a compile time option or 'auto-expanding' the\nwidth of the XID's and other items when it becomes appropriate? Start\nwith int4, when that limit is hit goto int8, and should -- quite\nunbelievably so but there are multi-TB databases -- it be necessary jump\nto int12 or int16? Be the first to support Exa-objects in an RDBMS. \nTesting not necessary ;)\n\nCompiletime option would be appropriate however if there's a significant\nperformance hit.\n\nI'm not much of a c coder (obviously), so I don't know of the\nlimitations. plpgsql is my friend that can do nearly anything :)\n\nHmm... After reading the above I should have stuck with lurking.\n", "msg_date": "Fri, 03 Nov 2000 21:41:29 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction ID wraparound: problem and proposed solution" }, { "msg_contents": "One idea I had from this is actually truncating pg_log at some point if\nwe know all the tuples have the special committed xid.
It would prevent\nthe file from growing without bounds.\n\nVadim, can you explain how WAL will make pg_log unnecessary someday?\n\n\n> We've expended a lot of worry and discussion in the past about what\n> happens if the OID generator wraps around. However, there is another\n> 4-byte counter in the system: the transaction ID (XID) generator.\n> While OID wraparound is survivable, if XIDs wrap around then we really\n> do have a Ragnarok scenario. The tuple validity checks do ordered\n> comparisons on XIDs, and will consider tuples with xmin > current xact\n> to be invalid. Result: after wraparound, your whole database would\n> instantly vanish from view.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 4 Nov 2000 13:43:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction ID wraparound: problem and proposed solution" }, { "msg_contents": "> One idea I had from this is actually truncating pg_log at some point if\n> we know all the tuples have the special committed xid. 
It would prevent\n> the file from growing without bounds.\n\nNot truncating, but implementing pg_log as set of files - we could remove\nfiles for old xids.\n\n> Vadim, can you explain how WAL will make pg_log unnecessary someday?\n\nFirst, I mentioned only that having undo we could remove old pg_log after\npostmaster startup because of only committed changes would be in data\nfiles and they would be visible to new transactions (small changes in tqual\nwill be required to take page' startup id into account) which would reuse xids.\nWhile changing a page first time in current startup, server would do exactly\nwhat Tom is going to do at vacuuming - just update xmin/xmax to \"1\" in all items\n(or setting some flag in t_infomask), - and change page' startup id to current.\n\nI understand that this is not complete solution for xids problem, I just wasn't\ngoing to solve it that time. Now after Tom' proposal I see how to reuse xids\nwithout vacuuming (but having undo): we will add XidWrapId (XWI) - xid wrap\ncounter - to pages and set it when we change page. First time we do this for\npage with old XWI we'll mark old items (to know later that they were changed\nby xids with old XWI). 
Each time we change page we can mark old xmin/xmax\nwith xid <= current xid as committed long ago (basing on xact TTL restrictions).\n\nAll above assumes that there will be no xids from aborted transactions in pages,\nso we need not look up in pg_log to know whether a xid is committed/aborted, - there will\nbe only xids from running or committed xactions there.\n\nAnd we need undo for this.\n\nVadim\n\n\n", "msg_date": "Sun, 5 Nov 2000 01:02:01 -0800", "msg_from": "\"Vadim Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction ID wraparound: problem and proposed solution" }, { "msg_contents": "Tom Lane wrote:\n> \n> Philip Warner <[email protected]> writes:\n> >> * disk space --- letting pg_log grow without bound isn't a pleasant\n> >> prospect either.\n> \n> > Maybe this can be achieved by wrapping XID for the log file only.\n> \n> How's that going to improve matters? pg_log is ground truth for XIDs;\n> if you can't distinguish two XIDs in pg_log, there's no point in\n> distinguishing them elsewhere.\n\nOne simple way - start a new pg_log file at each wraparound and encode \nthe high 4 bytes in the filename (or in first four bytes of file)\n\n> > Maybe I'm really missing the amount of XID manipulation, but I'd be\n> > surprised if 16-byte XIDs would slow things down much.\n> \n> It's not so much XIDs themselves, as that I think we'd need to widen\n> typedef Datum too, and that affects manipulations of *all* data types.\n\nDo you mean that each _field_ will take more space, not each _record_ ?\n\n> In any case, the prospect of a multi-gigabyte, ever-growing pg_log file,\n> with no way to recover the space short of dump/initdb/reload, is\n> awfully unappetizing for a high-traffic installation...\n\nThe pg_log should be rotated anyway either with long xids or long-long\nxids.\n\n-----------\nHannu\n", "msg_date": "Sun, 05 Nov 2000 15:48:13 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re:
Transaction ID wraparound: problem and proposed solution" }, { "msg_contents": "Tom Lane wrote:\n> \n> We've expended a lot of worry and discussion in the past about what\n> happens if the OID generator wraps around. However, there is another\n> 4-byte counter in the system: the transaction ID (XID) generator.\n> While OID wraparound is survivable, if XIDs wrap around then we really\n> do have a Ragnarok scenario. The tuple validity checks do ordered\n> comparisons on XIDs, and will consider tuples with xmin > current xact\n> to be invalid. Result: after wraparound, your whole database would\n> instantly vanish from view.\n> \n> The first thought that comes to mind is that XIDs should be promoted to\n> eight bytes. However there are several practical problems with this:\n> * portability --- I don't believe long long int exists on all the\n> platforms we support.\n\nI suspect that gcc at least supports long long on all OS-s we support\n\n> * performance --- except on true 64-bit platforms, widening Datum to\n> eight bytes would be a system-wide performance hit, which is a tad\n> unpleasant to fix a scenario that's not yet been reported from the\n> field.\n\nComplicating compares would also be a performance hit. It's hard to tell \nwhich one would be bigger\n\n> * disk space --- letting pg_log grow without bound isn't a pleasant\n> prospect either.\n\nHow will 2x size increase of xid cause \"boundless\" growth of pg_log ;)\n\n> I believe it is possible to fix these problems without widening XID,\n> by redefining XIDs in a way that allows for wraparound. 
Here's my\n> plan:\n\nI'd hate to let go of any hope of restoring time travel.\n\nI suspect that until postgres starts re-using space, time travel is in \ntheory possible, provided one more file is kept with commit (wall-clock)\ntimes of transactions or adding these times to pg_log.\n\nBTW, is there somewhere a timetable of important changes in basic \nprinciples in postgres, so that I could get a CVS checkout just before \nthem happening ?\n\nI'd especially be interested in the following:\n\nt0: system support for time-travel removed\nt1: no longer a no-overwrite system\nt2: OIDs gone\nt3: got rid of all OO-features ;)\n\n----------\nHannu\n", "msg_date": "Sun, 05 Nov 2000 16:00:31 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction ID wraparound: problem and proposed solution" }, { "msg_contents": "* Peter Eisentraut <[email protected]> [001105 09:39]:\n> Hannu Krosing writes:\n> \n> > > The first thought that comes to mind is that XIDs should be promoted to\n> > > eight bytes. However there are several practical problems with this:\n> > > * portability --- I don't believe long long int exists on all the\n> > > platforms we support.\n> > \n> > I suspect that gcc at least supports long long on all OS-s we support\n> \n> Uh, we don't want to depend on gcc, do we?\nDoesn't C99 *REQUIRE* long long? I know the SCO UDK Compiler has had\nit for a long time. I know it's early in C99's life, but...\n\n\n> \n> But we could make the XID a struct of two 4-byte integers, at the obvious\n> increase in storage size.\nWhat is the difference between a native long long and a struct of 2\nlong's?
\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 5 Nov 2000 09:41:59 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction ID wraparound: problem and proposed solution" }, { "msg_contents": "Hannu Krosing writes:\n\n> > The first thought that comes to mind is that XIDs should be promoted to\n> > eight bytes. However there are several practical problems with this:\n> > * portability --- I don't believe long long int exists on all the\n> > platforms we support.\n> \n> I suspect that gcc at least supports long long on all OS-s we support\n\nUh, we don't want to depend on gcc, do we?\n\nBut we could make the XID a struct of two 4-byte integers, at the obvious\nincrease in storage size.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sun, 5 Nov 2000 16:42:33 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction ID wraparound: problem and proposed solution" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n>> * disk space --- letting pg_log grow without bound isn't a pleasant\n>> prospect either.\n\n> How will 2x size increase of xid cause \"boundless\" growth of pg_log ;)\n\nOK, 2^64 isn't mathematically unbounded, but let's see you buy a disk\nthat will hold it ;-). My point is that if we want to think about\nallowing >4G transactions, part of the answer has to be a way to recycle\npg_log space. 
Otherwise it's still not really practical.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 Nov 2000 13:02:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Transaction ID wraparound: problem and proposed solution " }, { "msg_contents": "Larry Rosenman <[email protected]> writes:\n>> Uh, we don't want to depend on gcc, do we?\n\n> Doesn't C99 *REQUIRE* long long?\n\nWhat difference does that make? It'll be a very long time before\nPostgres can REQUIRE that people have a C99-compliant compiler.\nPortability does not mean \"we work great on just the newest and\nspiffiest platforms\"...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 Nov 2000 13:07:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Transaction ID wraparound: problem and proposed solution " }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Hannu Krosing writes:\n> \n> > > The first thought that comes to mind is that XIDs should be promoted to\n> > > eight bytes. 
However there are several practical problems with this:\n> > > * portability --- I don't believe long long int exists on all the\n> > > platforms we support.\n> >\n> > I suspect that gcc at least supports long long on all OS-s we support\n> \n> Uh, we don't want to depend on gcc, do we?\n\nI suspect that we do on many platforms (like *BSD, Linux and Win32).\n\nWhat platforms we currently support don't have functional gcc ?\n\n> But we could make the XID a struct of two 4-byte integers, at the obvious\n> increase in storage size.\n\nAnd a (hopefully) small performance hit on operations when defined as\nmacros,\nand some more for less data fitting in cache.\n\nwhat operations do we need to be defined ?\n\nwill >, <, ==, !=, >=, <= and ++ be enough ?\n\n-------------\nHannu\n", "msg_date": "Sun, 05 Nov 2000 22:14:24 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction ID wraparound: problem and proposed solution" }, { "msg_contents": "* Tom Lane <[email protected]> [001105 12:07]:\n> Larry Rosenman <[email protected]> writes:\n> >> Uh, we don't want to depend on gcc, do we?\n> \n> > Doesn't C99 *REQUIRE* long long?\n> \n> What difference does that make? It'll be a very long time before\n> Postgres can REQUIRE that people have a C99-compliant compiler.\n> Portability does not mean \"we work great on just the newest and\n> spiffiest platforms\"...\nI understand, but long long should start appearing in mainstream stuff\nnow that the standard is out. I do understand your concern, however.\nI was just making a point that we should start seeing it.
\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 5 Nov 2000 14:59:00 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction ID wraparound: problem and proposed solution" }, { "msg_contents": "* Hannu Krosing <[email protected]> [001105 15:21]:\n> Peter Eisentraut wrote:\n> > \n> I suspect that we do on many platforms (like *BSD, Linux and Win32).\nMany, but not *ALL*. I prefer to build my stuff using the Native\nUnixWare 7 (UDK) Compiler. As of the Feature Supplement (due out any\nweek now, I have a pre-release), we have WORKING C++, and the compiler\nis C99, and the C++ is STD C++ (very very very close). \n\nGCC/G++ for this platform (at least from the SCO Skunkware side) is\nNOT C99 nore STD C++.\n\n> \n> What platforms we currently support don't have functional gcc ?\n> \n> > But we could make the XID a struct of two 4-byte integers, at the obvious\n> > increase in storage size.\n> \n> And a (hopefully) small performance hit on operations when defined as\n> macros,\n> and some more for less data fitting in cache.\n> \n> what operations do we need to be defined ?\n> \n> will >, <, ==, !=, >=, <== and ++ be enough ?\n> \n> -------------\n> Hannu\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 5 Nov 2000 15:24:47 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction ID wraparound: problem and proposed solution" }, { "msg_contents": "On Sunday 05 November 2000 13:02, Tom Lane wrote:\n> OK, 2^64 isn't mathematically unbounded, but let's see you buy a disk\n> that will hold it ;-). 
My point is that if we want to think about\n> allowing >4G transactions, part of the answer has to be a way to recycle\n> pg_log space. Otherwise it's still not really practical.\n\nI kind of like vadim's idea of segmenting pg_log. \n\nSegments in which all the xacts have been commited could be deleted.\n\n-- \nMark Hollomon\n", "msg_date": "Mon, 6 Nov 2000 13:09:19 -0500", "msg_from": "Mark Hollomon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction ID wraparound: problem and proposed solution" }, { "msg_contents": "On Sun, Nov 05, 2000 at 03:48:13PM +0200, Hannu Krosing wrote:\n> Tom Lane wrote:\n> > \n> > Philip Warner <[email protected]> writes:\n> > >> * disk space --- letting pg_log grow without bound isn't a pleasant\n> > >> prospect either.\n> > \n> > > Maybe this can be achieved by wrapping XID for the log file only.\n> > \n> > How's that going to improve matters? pg_log is ground truth for XIDs;\n> > if you can't distinguish two XIDs in pg_log, there's no point in\n> > distinguishing them elsewhere.\n> \n> One simple way - start a new pg_log file at each wraparound and encode \n> the high 4 bytes in the filename (or in first four bytes of file)\n\nProposal:\n\nAnnotate each log file with the current XID value at the time the file \nis created. Before comparing any two XIDs, subtract that value from \neach operand, using unsigned arithmetic. \n\nAt a sustained rate of 10,000 transactions/second, any pair of 32-bit \nXIDs less than 2.5 days apart compare properly.\n\nNathan Myers\[email protected]\n\n", "msg_date": "Fri, 10 Nov 2000 12:01:25 -0800", "msg_from": "Nathan Myers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction ID wraparound: problem and proposed solution" }, { "msg_contents": "\nI have added this email thread to TODO.detail.\n\n> We've expended a lot of worry and discussion in the past about what\n> happens if the OID generator wraps around. 
However, there is another\n> 4-byte counter in the system: the transaction ID (XID) generator.\n> While OID wraparound is survivable, if XIDs wrap around then we really\n> do have a Ragnarok scenario. The tuple validity checks do ordered\n> comparisons on XIDs, and will consider tuples with xmin > current xact\n> to be invalid. Result: after wraparound, your whole database would\n> instantly vanish from view.\n> \n> The first thought that comes to mind is that XIDs should be promoted to\n> eight bytes. However there are several practical problems with this:\n> * portability --- I don't believe long long int exists on all the\n> platforms we support.\n> * performance --- except on true 64-bit platforms, widening Datum to\n> eight bytes would be a system-wide performance hit, which is a tad\n> unpleasant to fix a scenario that's not yet been reported from the\n> field.\n> * disk space --- letting pg_log grow without bound isn't a pleasant\n> prospect either.\n> \n> I believe it is possible to fix these problems without widening XID,\n> by redefining XIDs in a way that allows for wraparound. Here's my\n> plan:\n> \n> 1. Allow XIDs to range from 0 to WRAPLIMIT-1 (WRAPLIMIT is not\n> necessarily 4G, see discussion below). Ordered comparisons on XIDs\n> are no longer simply \"x < y\", but need to be expressed as a macro.\n> We consider x < y if (y - x) % WRAPLIMIT < WRAPLIMIT/2.\n> This comparison will work as long as the range of interesting XIDs\n> never exceeds WRAPLIMIT/2. Essentially, we envision the actual value\n> of XID as being the low-order bits of a logical XID that always\n> increases, and we assume that no extant XID is more than WRAPLIMIT/2\n> transactions old, so we needn't keep track of the high-order bits.\n> \n> 2. To keep the system from having to deal with XIDs that are more than\n> WRAPLIMIT/2 transactions old, VACUUM should \"freeze\" known-good old\n> tuples. 
To do this, we'll reserve a special XID, say 1, that is always\n> considered committed and is always less than any ordinary XID. (So the\n> ordered-comparison macro is really a little more complicated than I said\n> above. Note that there is already a reserved XID just like this in the\n> system, the \"bootstrap\" XID. We could simply use the bootstrap XID, but\n> it seems better to make another one.) When VACUUM finds a tuple that\n> is committed good and has xmin < XmaxRecent (the oldest XID that might\n> be considered uncommitted by any open transaction), it will replace that\n> tuple's xmin by the special always-good XID. Therefore, as long as\n> VACUUM is run on all tables in the installation more often than once per\n> WRAPLIMIT/2 transactions, there will be no tuples with ordinary XIDs\n> older than WRAPLIMIT/2.\n> \n> 3. At wraparound, the XID counter has to be advanced to skip over the\n> InvalidXID value (zero) and the reserved XIDs, so that no real transaction\n> is generated with those XIDs. No biggie here.\n> \n> 4. With the wraparound behavior, pg_log will have a bounded size: it\n> will never exceed WRAPLIMIT*2 bits = WRAPLIMIT/4 bytes. Since we will\n> recycle pg_log entries every WRAPLIMIT xacts, during transaction start\n> the xact manager will have to take care to actively clear its pg_log\n> entry to zeroes (I'm not sure if it does that already, or just assumes\n> that new pg_log entries will start out zero). As long as that happens\n> before the xact makes any data changes, it's OK to recycle the entry.\n> Note we are assuming that no tuples will remain in the database with\n> xmin or xmax equal to that XID from a prior cycle of the universe.\n> \n> This scheme allows us to survive XID wraparound at the cost of slight\n> additional complexity in ordered comparisons of XIDs (which is not a\n> really performance-critical task AFAIK), and at the cost that the\n> original insertion XIDs of all but recent tuples will be lost by\n> VACUUM. 
The system doesn't particularly care about that, but old XIDs\n> do sometimes come in handy for debugging purposes. A possible\n> compromise is to overwrite only XIDs that are older than, say,\n> WRAPLIMIT/4 instead of doing so as soon as possible. This would mean\n> the required VACUUM frequency is every WRAPLIMIT/4 xacts instead of\n> every WRAPLIMIT/2 xacts.\n> \n> We have a straightforward tradeoff between the maximum size of pg_log\n> (WRAPLIMIT/4 bytes) and the required frequency of VACUUM (at least\n> every WRAPLIMIT/2 or WRAPLIMIT/4 transactions). This could be made\n> configurable in config.h for those who're intent on customization,\n> but I'd be inclined to set the default value at WRAPLIMIT = 1G.\n> \n> Comments? Vadim, is any of this about to be superseded by WAL?\n> If not, I'd like to fix it for 7.1.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 Jan 2001 00:00:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction ID wraparound: problem and proposed solution" }, { "msg_contents": "I think the XID wraparound matter might be handled a bit more simply.\n\nGiven a global variable X which is the earliest XID value in use at \nsome event (e.g. startup) you can compare two XIDs x and y, using\nunsigned arithmetic, with just (x-X < y-X). This has the further \nadvantage that old transaction IDs need be \"frozen\" only every 4G \ntransactions, rather than Tom's suggested 256M or 512M transactions. \n\"Freezing\", in this scheme, means to set all older XIDs to equal the \nchosen X, rather than setting them to some constant reserved value. 
\nNo special cases are required for the comparison, even for folded \nvalues; it is (x-X < y-X) for all valid x and y.\n\nI don't know the role of the \"bootstrap\" XID, or how it must be\nfitted into the above.\n\nNathan Myers\[email protected]\n\n------------------------------------------------------------\n> We've expended a lot of worry and discussion in the past about what\n> happens if the OID generator wraps around. However, there is another\n> 4-byte counter in the system: the transaction ID (XID) generator.\n> While OID wraparound is survivable, if XIDs wrap around then we really\n> do have a Ragnarok scenario. The tuple validity checks do ordered\n> comparisons on XIDs, and will consider tuples with xmin > current xact\n> to be invalid. Result: after wraparound, your whole database would\n> instantly vanish from view.\n> \n> The first thought that comes to mind is that XIDs should be promoted to\n> eight bytes. However there are several practical problems with this:\n> * portability --- I don't believe long long int exists on all the\n> platforms we support.\n> * performance --- except on true 64-bit platforms, widening Datum to\n> eight bytes would be a system-wide performance hit, which is a tad\n> unpleasant to fix a scenario that's not yet been reported from the\n> field.\n> * disk space --- letting pg_log grow without bound isn't a pleasant\n> prospect either.\n> \n> I believe it is possible to fix these problems without widening XID,\n> by redefining XIDs in a way that allows for wraparound. Here's my\n> plan:\n> \n> 1. Allow XIDs to range from 0 to WRAPLIMIT-1 (WRAPLIMIT is not\n> necessarily 4G, see discussion below). Ordered comparisons on XIDs\n> are no longer simply \"x < y\", but need to be expressed as a macro.\n> We consider x < y if (y - x) % WRAPLIMIT < WRAPLIMIT/2.\n> This comparison will work as long as the range of interesting XIDs\n> never exceeds WRAPLIMIT/2. 
Essentially, we envision the actual value\n> of XID as being the low-order bits of a logical XID that always\n> increases, and we assume that no extant XID is more than WRAPLIMIT/2\n> transactions old, so we needn't keep track of the high-order bits.\n> \n> 2. To keep the system from having to deal with XIDs that are more than\n> WRAPLIMIT/2 transactions old, VACUUM should \"freeze\" known-good old\n> tuples. To do this, we'll reserve a special XID, say 1, that is always\n> considered committed and is always less than any ordinary XID. (So the\n> ordered-comparison macro is really a little more complicated than I said\n> above. Note that there is already a reserved XID just like this in the\n> system, the \"bootstrap\" XID. We could simply use the bootstrap XID, but\n> it seems better to make another one.) When VACUUM finds a tuple that\n> is committed good and has xmin < XmaxRecent (the oldest XID that might\n> be considered uncommitted by any open transaction), it will replace that\n> tuple's xmin by the special always-good XID. Therefore, as long as\n> VACUUM is run on all tables in the installation more often than once per\n> WRAPLIMIT/2 transactions, there will be no tuples with ordinary XIDs\n> older than WRAPLIMIT/2.\n> \n> 3. At wraparound, the XID counter has to be advanced to skip over the\n> InvalidXID value (zero) and the reserved XIDs, so that no real transaction\n> is generated with those XIDs. No biggie here.\n> \n> 4. With the wraparound behavior, pg_log will have a bounded size: it\n> will never exceed WRAPLIMIT*2 bits = WRAPLIMIT/4 bytes. Since we will\n> recycle pg_log entries every WRAPLIMIT xacts, during transaction start\n> the xact manager will have to take care to actively clear its pg_log\n> entry to zeroes (I'm not sure if it does that already, or just assumes\n> that new pg_log entries will start out zero). 
As long as that happens\n> before the xact makes any data changes, it's OK to recycle the entry.\n> Note we are assuming that no tuples will remain in the database with\n> xmin or xmax equal to that XID from a prior cycle of the universe.\n> \n> This scheme allows us to survive XID wraparound at the cost of slight\n> additional complexity in ordered comparisons of XIDs (which is not a\n> really performance-critical task AFAIK), and at the cost that the\n> original insertion XIDs of all but recent tuples will be lost by\n> VACUUM. The system doesn't particularly care about that, but old XIDs\n> do sometimes come in handy for debugging purposes. A possible\n> compromise is to overwrite only XIDs that are older than, say,\n> WRAPLIMIT/4 instead of doing so as soon as possible. This would mean\n> the required VACUUM frequency is every WRAPLIMIT/4 xacts instead of\n> every WRAPLIMIT/2 xacts.\n> \n> We have a straightforward tradeoff between the maximum size of pg_log\n> (WRAPLIMIT/4 bytes) and the required frequency of VACUUM (at least\n> every WRAPLIMIT/2 or WRAPLIMIT/4 transactions). This could be made\n> configurable in config.h for those who're intent on customization,\n> but I'd be inclined to set the default value at WRAPLIMIT = 1G.\n> \n> Comments? Vadim, is any of this about to be superseded by WAL?\n> If not, I'd like to fix it for 7.1.\n> \n> \t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Jan 2001 00:29:24 -0800", "msg_from": "[email protected] (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Transaction ID wraparound: problem and proposed solution" }, { "msg_contents": "> > The first thought that comes to mind is that XIDs should be promoted to\n> > eight bytes. However there are several practical problems with this:\n> > * portability --- I don't believe long long int exists on all the\n> > platforms we support.\n\n> > \t\t\tregards, tom lane\n\nHow long will such platforms be supported? 
When will 64bit be a\nrequirement?\n\nThe c.h has following lines in case there is not 64 bit ints:\n\n/* Won't actually work, but fall back to long int so that code\n * compiles */\ntypedef long int int64;\ntypedef unsigned long int uint64;\n#define INT64_IS_BUSTED\n\n\nAt the memont the int64 is mostly used in 'int8' case, so its\nnot too bad. But probably there will be more cases where int64\nis useful, so PostgreSQL will start misbehaving on those\nplatforms, which is worse than not supporting them officially.\n\nOr should int64 be avoided at any cost?\n\n-- \nmarko\n\n", "msg_date": "Sat, 20 Jan 2001 17:53:37 +0200", "msg_from": "Marko Kreen <[email protected]>", "msg_from_op": false, "msg_subject": "status of 64bit ints? was: Re: Transaction ID wraparound: problem and\n\tproposed solution" }, { "msg_contents": "Marko Kreen <[email protected]> writes:\n> When will 64bit be a requirement?\n\nAs far as I'm concerned, it will *never* be a requirement,\nat least not for the foreseeable future.\n\nI do want the option to compile with 8-byte OID and/or XID types.\nThat's not a requirement.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Jan 2001 11:03:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: status of 64bit ints? was: Re: Transaction ID wraparound: problem\n\tand proposed solution" } ]
[ { "msg_contents": "\nServer process (pid 13361) exited with status 26 at Fri Nov 3 17:49:44 2000\nTerminating any active server processes...\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am going to terminate your database system connection and exit.\n Please reconnect to the database system and repeat your query.\n\nThis happens fairly regularly. I assume exit code 26 is used to dictate\nthat a specific error has occured.\n\nThe database is a decent size (~3M records) with about 4 indexes.\n\n-Dan\n-- \nMan is a rational animal who always loses his temper when he is called\nupon to act in accordance with the dictates of reason.\n -- Oscar Wilde\n", "msg_date": "Fri, 3 Nov 2000 17:57:51 -0500", "msg_from": "Dan Moschuk <[email protected]>", "msg_from_op": true, "msg_subject": "VACUUM causes violent postmaster death" }, { "msg_contents": "* Dan Moschuk <[email protected]> [001103 14:55] wrote:\n> \n> Server process (pid 13361) exited with status 26 at Fri Nov 3 17:49:44 2000\n> Terminating any active server processes...\n> NOTICE: Message from PostgreSQL backend:\n> The Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n> I have rolled back the current transaction and am going to terminate your database system connection and exit.\n> Please reconnect to the database system and repeat your query.\n> \n> This happens fairly regularly. I assume exit code 26 is used to dictate\n> that a specific error has occured.\n> \n> The database is a decent size (~3M records) with about 4 indexes.\n\nWhat version of postgresql? 
Tom Lane recently fixed some severe problems\nwith vacuum and heavily used databases, the fix should be in the latest\n7.0.2-patches/7.0.3 release.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Fri, 3 Nov 2000 15:03:15 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM causes violent postmaster death" }, { "msg_contents": "Dan Moschuk <[email protected]> writes:\n> Server process (pid 13361) exited with status 26 at Fri Nov 3 17:49:44 2000\n\nWhat's signal 26 on your system? (Look in /usr/include/signal.h or\n/usr/include/signum.h or /usr/include/sys/signal.h)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 18:09:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM causes violent postmaster death " }, { "msg_contents": "\n| > Server process (pid 13361) exited with status 26 at Fri Nov 3 17:49:44 2000\n| \n| What's signal 26 on your system? (Look in /usr/include/signal.h or\n| /usr/include/signum.h or /usr/include/sys/signal.h)\n\ndan@spirit:/home/dan grep 26 /usr/include/sys/signal.h\n#define SIGVTALRM 26 /* virtual time alarm */\n\nCheers,\n-Dan\n-- \nMan is a rational animal who always loses his temper when he is called\nupon to act in accordance with the dictates of reason.\n -- Oscar Wilde\n", "msg_date": "Fri, 3 Nov 2000 18:30:15 -0500", "msg_from": "Dan Moschuk <[email protected]>", "msg_from_op": true, "msg_subject": "Re: VACUUM causes violent postmaster death" }, { "msg_contents": "\n| > This happens fairly regularly. I assume exit code 26 is used to dictate\n| > that a specific error has occured.\n| > \n| > The database is a decent size (~3M records) with about 4 indexes.\n| \n| What version of postgresql? 
Tom Lane recently fixed some severe problems\n| with vacuum and heavily used databases, the fix should be in the latest\n| 7.0.2-patches/7.0.3 release.\n\nIt's 7.0.2-patches from about two or three weeks ago.\n\n-Dan\n-- \nMan is a rational animal who always loses his temper when he is called\nupon to act in accordance with the dictates of reason.\n -- Oscar Wilde\n", "msg_date": "Fri, 3 Nov 2000 18:36:38 -0500", "msg_from": "Dan Moschuk <[email protected]>", "msg_from_op": true, "msg_subject": "Re: VACUUM causes violent postmaster death" }, { "msg_contents": "* Dan Moschuk <[email protected]> [001103 15:32] wrote:\n> \n> | > This happens fairly regularly. I assume exit code 26 is used to dictate\n> | > that a specific error has occured.\n> | > \n> | > The database is a decent size (~3M records) with about 4 indexes.\n> | \n> | What version of postgresql? Tom Lane recently fixed some severe problems\n> | with vacuum and heavily used databases, the fix should be in the latest\n> | 7.0.2-patches/7.0.3 release.\n> \n> It's 7.0.2-patches from about two or three weeks ago.\n\nMake sure pgsql/src/backend/commands/vacuum.c is at:\n\nrevision 1.148.2.1\ndate: 2000/09/19 21:01:04; author: tgl; state: Exp; lines: +37 -19\nBack-patch fix to ensure that VACUUM always calls FlushRelationBuffers.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Fri, 3 Nov 2000 15:57:18 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM causes violent postmaster death" }, { "msg_contents": "Dan Moschuk <[email protected]> writes:\n> | > Server process (pid 13361) exited with status 26 at Fri Nov 3 17:49:44 2000\n> | \n> | What's signal 26 on your system?\n\n> #define SIGVTALRM 26 /* virtual time alarm */\n\nWell, that sure shouldn't be happening. 
You aren't perhaps running it\nunder a ulimit setting that limits total process CPU time, are you?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 19:43:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM causes violent postmaster death " }, { "msg_contents": "I don't think Dan's problem is related to the recently found VACUUM\nbugs. Killing a backend with SIGVTALRM suggests that something thinks\nthe backend's been running too long. ulimit is a likely suspect.\nAnother possibility is some sort of profiling mechanism gone haywire.\nThere's nothing in our source code that would invoke that signal, so\nit's got to be some outside agency, I think.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 20:21:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM causes violent postmaster death " } ]
[ { "msg_contents": "I'm trying to compile the CVS (fresh download) of postgres and I get this \nrunning the configure script:\n\nchecking for tzname... yes\nchecking for union semun... no\nchecking for struct sockaddr_un... yes\nchecking for int timezone... yes\nchecking types of arguments for accept()... configure: error: could not \ndetermine argument types \n\nMy configure options are these:\n\n./configure --prefix=/usr/local/pgsql/ --cache-file=config.cache \n--enable-locale --enable-uniconv --with-maxbackends=128 --without-tk \n--with-openssl=/usr/local/ssl/ --enable-syslog\n\nand I'm running a Solaris 7 with gcc 2.95.2.\n\nAny ideas?\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMart�n Marqu�s\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Fri, 3 Nov 2000 20:10:22 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": true, "msg_subject": "problems with configure" }, { "msg_contents": "Martin A. Marques writes:\n\n> checking types of arguments for accept()... configure: error: could not determine argument types\n\nAccording to the documentation for Solaris 7 it should be 'accept(int,\nstruct sockaddr *, socklen_t *)', which is the same on my system, so the\nproblem is elsewhere. One possibility is that the earlier tests for\nsys/types.h or sys/socket.h failed. Could you check what the file\nconfig.log says?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 4 Nov 2000 00:37:13 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with configure" }, { "msg_contents": "\"Martin A. 
Marques\" <[email protected]> writes:\n> I'm trying to compile the CVS (fresh download) of postgres and I get this \n> running the configure script:\n\n> checking types of arguments for accept()... configure: error: could not \n> determine argument types \n\nHm, how do your system's include files declare accept()?\n\nIt would help to see the part of the config.log file that shows the\nerrors configure gets while trying to find workable input types for\naccept().\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 20:16:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with configure " }, { "msg_contents": "On Vie 03 Nov 2000 22:16, Tom Lane wrote:\n>\n> Hm, how do your system's include files declare accept()?\n>\n> It would help to see the part of the config.log file that shows the\n> errors configure gets while trying to find workable input types for\n> accept().\n\nThe config.log file starts given fails at this point:\n\nconfigure:5383: checking for struct sockaddr_un\nconfigure:5398: gcc -c -g conftest.c 1>&5\nconfigure:5422: checking for int timezone\nconfigure:5434: gcc -o conftest -g conftest.c -lz -lgen -lnsl -lsocket \n-ldl -lm -lreadline -ltermcap -lcurses 1>&5\nconfigure:5454: checking types of arguments for accept()\nconfigure:5481: gcc -c -g conftest.c 1>&5\nconfigure:5475: conflicting types for `accept'\n/usr/include/sys/socket.h:384: previous declaration of `accept'\nconfigure: failed program was:\n#line 5468 \"configure\"\n#include \"confdefs.h\"\n#ifdef HAVE_SYS_TYPES_H\n#include <sys/types.h>\n#endif\n#ifdef HAVE_SYS_SOCKET_H\n#include <sys/socket.h>\n#endif\nextern accept (int, struct sockaddr *, int *);\nint main() {\n \n; return 0; }\n\nI see there that the extern accept function is different from the one that \nPeter Eisentraut wrote in his mail. Could there be somthing wrong in the cvs \ncode, or in one of the Solaris headers?\n\nSaludos... 
:-)\n\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMart�n Marqu�s\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Sat, 4 Nov 2000 17:35:33 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with configure" }, { "msg_contents": "\"Martin A. Marques\" <[email protected]> writes:\n> On Vie 03 Nov 2000 22:16, Tom Lane wrote:\n>> Hm, how do your system's include files declare accept()?\n\n> The config.log file starts given fails at this point:\n> configure:5475: conflicting types for `accept'\n> /usr/include/sys/socket.h:384: previous declaration of `accept'\n\nSo how does /usr/include/sys/socket.h declare accept() ?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 04 Nov 2000 16:54:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with configure " }, { "msg_contents": "On S�b 04 Nov 2000 18:54, Tom Lane wrote:\n> \"Martin A. Marques\" <[email protected]> writes:\n> > On Vie 03 Nov 2000 22:16, Tom Lane wrote:\n> >> Hm, how do your system's include files declare accept()?\n> >\n> > The config.log file starts given fails at this point:\n> > configure:5475: conflicting types for `accept'\n> > /usr/include/sys/socket.h:384: previous declaration of `accept'\n>\n> So how does /usr/include/sys/socket.h declare accept() ?\n\nWell, my socket.h declares accept this way:\n\nextern int accept(int, struct sockaddr *, Psocklen_t);\n\nIt's the same line I have on a Solaris 8 Sparc, in which I have a \nPostgreSQL-7.0.2 compiled (without a problem) and running.\n\nSaludos... 
;-)\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMart�n Marqu�s\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Mon, 6 Nov 2000 08:14:27 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with configure" }, { "msg_contents": "On Vie 03 Nov 2000 20:37, Peter Eisentraut wrote:\n> Martin A. Marques writes:\n> > checking types of arguments for accept()... configure: error: could not\n> > determine argument types\n>\n> According to the documentation for Solaris 7 it should be 'accept(int,\n> struct sockaddr *, socklen_t *)', which is the same on my system, so the\n\nWell, mine looks like:\n\nextern int accept(int, struct sockaddr *, Psocklen_t);\n\n> problem is elsewhere. One possibility is that the earlier tests for\n> sys/types.h or sys/socket.h failed. Could you check what the file\n> config.log says?\n\nEverything looks good. No problems with those checks.\n\nI checked the configure on the lines that give the error and I don't \nunderstand what It's trying to do. It has various variables with I don' t \nknow where they are defined. Can somebody give me a clue?\nThis is what I have in the configure:\n\n#line 5479 \"configure\"\n#include \"confdefs.h\"\n#ifdef HAVE_SYS_TYPES_H\n#include <sys/types.h>\n#endif\n#ifdef HAVE_SYS_SOCKET_H\n#include <sys/socket.h>\n#endif\nextern accept ($ac_cv_func_accept_arg1, $ac_cv_func_accept_arg2, \n$ac_cv_func_accept_arg3 *);\nint main() {\n\n; return 0; }\n\nSaludos... 
:-)\n\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMart���n Marqu���s\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Mon, 6 Nov 2000 09:31:04 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with configure" }, { "msg_contents": "\"Martin A. Marques\" <[email protected]> writes:\n> Well, mine looks like:\n> extern int accept(int, struct sockaddr *, Psocklen_t);\n\n> This is what I have in the configure:\n> extern accept ($ac_cv_func_accept_arg1, $ac_cv_func_accept_arg2, \n> $ac_cv_func_accept_arg3 *);\n\nHmm ... is it possible that his compiler distinguishes between\n\"extern int foo(...)\" and \"extern foo(...)\" ? Why don't we\nhave the return type there, anyway?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 Nov 2000 10:06:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with configure " }, { "msg_contents": "On Lun 06 Nov 2000 12:06, Tom Lane wrote:\n> \"Martin A. Marques\" <[email protected]> writes:\n> > Well, mine looks like:\n> > extern int accept(int, struct sockaddr *, Psocklen_t);\n> >\n> > This is what I have in the configure:\n> > extern accept ($ac_cv_func_accept_arg1, $ac_cv_func_accept_arg2,\n> > $ac_cv_func_accept_arg3 *);\n>\n> Hmm ... is it possible that his compiler distinguishes between\n> \"extern int foo(...)\" and \"extern foo(...)\" ? 
Why don't we\n> have the return type there, anyway?\n\nIf it's of any help, I'm on Solaris 7, SPARC, gcc-2.95.2, latest Postgres CVS.\nAnother question would be, why didn't I have problems of this type when I \ncompiled PostgreSQL 7.0.2 on Solaris 8, with the same version of gcc?\n\nThanks for the patience.\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Mon, 6 Nov 2000 13:19:23 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with configure" }, { "msg_contents": "\"Martin A. Marques\" <[email protected]> writes:\n>> Hmm ... is it possible that his compiler distinguishes between\n>> \"extern int foo(...)\" and \"extern foo(...)\" ? Why don't we\n>> have the return type there, anyway?\n\n> If it's of any help, I'm on Solaris 7, SPARC, gcc-2.95.2, latest\n> Postgres CVS. Another question would be, why didn't I have problems\n> of this type when I compiled PostgreSQL 7.0.2 on Solaris 8, with the\n> same version of gcc?\n\nDifferent header files, likely. I'm starting to wonder if Solaris 7\nhas some header-file dependency for <sys/socket.h> beyond the one that\nthe test is allowing for (<sys/types.h>).\n\nBTW, does 'Psocklen_t' equate to just 'socklen_t *', or is there\nsomething strange hidden there?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 Nov 2000 11:28:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with configure " }, { "msg_contents": "On Lun 06 Nov 2000 13:28, Tom Lane wrote:\n> \"Martin A. Marques\" <[email protected]> writes:\n> >> Hmm ... 
is it possible that his compiler distinguishes between\n> >> \"extern int foo(...)\" and \"extern foo(...)\" ? Why don't we\n> >> have the return type there, anyway?\n> >\n> > If it's of any help, I'm on Solaris 7, SPARC, gcc-2.95.2, latest\n> > Postgres CVS. Another question would be, why didn't I have problems\n> > of this type when I compiled PostgreSQL 7.0.2 on Solaris 8, with the\n> > same version of gcc?\n>\n> Different header files, likely. I'm starting to wonder if Solaris 7\n> has some header-file dependency for <sys/socket.h> beyond the one that\n> the test is allowing for (<sys/types.h>).\n\nIs there any kind of info you would need that I could provide? If you want I \ncan send the config.log, output of the configure execution, etc. Even the \nsocket.h and the types.h.\nBTW, I didn't find diffs between Solaris 7 .h files and Solaris 8 headers.\n\n> BTW, does 'Psocklen_t' equate to just 'socklen_t *', or is there\n> something strange hidden there?\n\nI don't have the slightest idea.\n\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Mon, 6 Nov 2000 15:55:19 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with configure" }, { "msg_contents": "Martin A. Marques writes:\n\n> Is there any kind of info you would need that I could provide? If you want I \n> can send the config.log, output of the configure execution, etc. 
Even the \n> socket.h and the types.h.\n> BTW, I didn't find diffs between Solaris 7 .h files and Solaris 8 headers.\n\nTry one of the attached patches, first patch1, then patch2, preferably\neach one separately.\n\n(To apply the patch, save the file in the same directory as 'configure'\nand run 'patch -p0 < patchx'. Then remove config.cache and rerun\nconfigure.)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/", "msg_date": "Mon, 6 Nov 2000 22:25:33 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with configure" }, { "msg_contents": "On Lun 06 Nov 2000 18:25, Peter Eisentraut wrote:\n\n> > Martin A. Marques writes:\n> > Is there any kind of info you would need that I could provide? If you\n> > want I can send the config.log, output of the configure execution, etc.\n> > Even the socket.h and the types.h.\n> > BTW, I didn't find diffs between Solaris 7 .h files and Solaris 8\n> > headers.\n>\n> Try one of the attached patches, first patch1, then patch2, preferably\n> each one separately.\n\nDidn't work. :-(\nThis was the output:\n\nmartin@ultra208 ~/basura/post-final $ patch -p0 < patch1 Looks like a \ncontext diff to me... Hunk #1 failed at line 5483. 1 out of 1 hunks failed: \nsaving rejects to configure.rej done \n\n\nconfigure.rej contains this:\n\nmartin@ultra208 ~/basura/post-final $ less configure.rej\n***************\n*** 5483,5489 ****\n #ifdef HAVE_SYS_SOCKET_H\n #include <sys/socket.h>\n #endif\n! extern accept ($ac_cv_func_accept_arg1, $ac_cv_func_accept_arg2, \n$ac_cv_func_accept_arg3 *);\n int main() {\n \n ; return 0; }\n--- 5483,5489 ----\n #ifdef HAVE_SYS_SOCKET_H\n #include <sys/socket.h>\n #endif\n! 
extern int accept ($ac_cv_func_accept_arg1, $ac_cv_func_accept_arg2, \n$ac_cv_func_accept_arg3 *);\n int main() {\n \n ; return 0; }\n\n\nTried to apply what the patch said by hand, ran the configure, but I get the \nsame error.\n\nI think this afternoon I will try some new things to see where the problem \nmight be.\n\nSaludos... ;-)\n\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Tue, 7 Nov 2000 08:17:36 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with configure" }, { "msg_contents": "\"Martin A. Marques\" <[email protected]> writes:\n>>>> Is there any kind of info you would need that I could provide?\n>> \n>> If you could put\n>> #include <sys/types.h>\n>> #include <sys/socket.h>\n>> into a file temp.c, and then send the output of \"gcc -E temp.c\",\n>> it might shed some light.\n\n> There it goes!!\n\nWell, that tells the tale all right: the critical lines are\n\n\ttypedef\tuint32_t\tsocklen_t;\n\n\ttypedef\tvoid\t\t*Psocklen_t;\n\n\textern int accept(int, struct sockaddr *, Psocklen_t);\n\nWhat brainless idiot decided it would be a good idea to declare\naccept's last argument as void*, do you suppose? (At least you\nreport that Solaris 8 no longer has this folly, so they did get\na clue eventually.)\n\nNot sure what to do about this. It will clearly not do to define\nACCEPT_TYPE_ARG3 as void. Perhaps we need a special case for\nSolaris 7: if we detect that accept() is declared with \"void *\",\nassume that socklen_t is the thing to use. 
Peter, any thoughts?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Nov 2000 16:01:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with configure " }, { "msg_contents": "On Mié 08 Nov 2000 18:01, Tom Lane wrote:\n>\n> Well, that tells the tale all right: the critical lines are\n>\n> \ttypedef\tuint32_t\tsocklen_t;\n>\n> \ttypedef\tvoid\t\t*Psocklen_t;\n>\n> \textern int accept(int, struct sockaddr *, Psocklen_t);\n>\n> What brainless idiot decided it would be a good idea to declare\n> accept's last argument as void*, do you suppose? (At least you\n> report that Solaris 8 no longer has this folly, so they did get\n> a clue eventually.)\n>\n> Not sure what to do about this. It will clearly not do to define\n> ACCEPT_TYPE_ARG3 as void. Perhaps we need a special case for\n> Solaris 7: if we detect that accept() is declared with \"void *\",\n> assume that socklen_t is the thing to use. Peter, any thoughts?\n\nNo. Forgot to tell my latest experience.\n\n1) postgres 7.0.2 compiles great on Solaris 7 and Solaris 8.\n2) postgres cvs (latest download) doesn't compile (same error on both) on \nSolaris 7 nor Solaris 8.\n\nSo it isn't a Solaris 7 problem, but a Solaris problem. ;-)\nI just wish we could install linux on one of these SPARC to have something \ngood running. ;-)\n\nSaludos... :-)\n\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Wed, 8 Nov 2000 18:11:42 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with configure" }, { "msg_contents": "\"Martin A. Marques\" <[email protected]> writes:\n> No. 
Forgot to tell my latest experience.\n\n> 1) postgres 7.0.2 compiles great on Solaris 7 and Solaris 8.\n> 2) postgres cvs (latest download) doesn't compile (same error on both) on \n> Solaris 7 nor Solaris 8.\n\nAh so. 7.0.*'s configure didn't try to determine the exact datatype of\naccept()'s arguments, which is why it didn't run into this problem.\n\n> So it isn't a Solaris 7 problem, but a Solaris problem. ;-)\n\nI guess we not only need a hack, but a nastygram or three sent off to\nthe Solaris people. void *? What in heaven's name were they thinking?\nThat essentially means you've got no parameter type checking at all\non calls to accept() --- or any other socket function that takes a\nsocklen_t. Pass the wrong-size integer, you're out of luck ... silently.\nSheesh.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Nov 2000 16:17:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with configure " }, { "msg_contents": "On Mié 08 Nov 2000 18:17, Tom Lane wrote:\n>\n> I guess we not only need a hack, but a nastygram or three sent off to\n> the Solaris people. void *? What in heaven's name were they thinking?\n> That essentially means you've got no parameter type checking at all\n> on calls to accept() --- or any other socket function that takes a\n> socklen_t. Pass the wrong-size integer, you're out of luck ... silently.\n> Sheesh.\n\nI have to say that I'm totally with you on the thoughts about Solaris's \nimplementation. It's not the first time I have problems compiling. Trying to \ncompile KDE2-alpha some time ago I had to hack one of the ICE headers which \nhad some sort of problem trying to determine the size of ... I can't remember \nwhat, so even Open Windows has its bugs, which aren't fixed in Solaris 8.\n\nTo finish, what would be the status of all this Solaris + Postgres cvs stuff?\n\nSaludos... 
:-)\n\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Wed, 8 Nov 2000 18:43:36 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with configure" }, { "msg_contents": "Tom Lane writes:\n\n> Not sure what to do about this. It will clearly not do to define\n> ACCEPT_TYPE_ARG3 as void. Perhaps we need a special case for\n> Solaris 7: if we detect that accept() is declared with \"void *\",\n> assume that socklen_t is the thing to use. Peter, any thoughts?\n\nPerhaps we could, in case \"void *\" is discovered, run a similar deal with\nbind() or setsockopt(), i.e., some socket function that takes a\nnon-pointer socklen_t (or whatever), in order to find out the true nature\nof what's behind the \"void *\".\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 8 Nov 2000 23:16:49 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with configure " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> Not sure what to do about this. It will clearly not do to define\n>> ACCEPT_TYPE_ARG3 as void. Perhaps we need a special case for\n>> Solaris 7: if we detect that accept() is declared with \"void *\",\n>> assume that socklen_t is the thing to use. Peter, any thoughts?\n\n> Perhaps we could, in case \"void *\" is discovered, run a similar deal with\n> bind() or setsockopt(), i.e., some socket function that takes a\n> non-pointer socklen_t (or whatever), in order to find out the true nature\n> of what's behind the \"void *\".\n\nWell, maybe. But is it worth the trouble? 
Hard to believe anyone else\ndid the same thing.\n\nIf socklen_t exists, it's presumably the right thing to use, so if we\njust hardwire \"void -> socklen_t\", I think it'd be OK. If we're wrong,\nwe'll hear about it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Nov 2000 17:34:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with configure " }, { "msg_contents": "On Mié 08 Nov 2000 19:34, Tom Lane wrote:\n>\n> Well, maybe. But is it worth the trouble? Hard to believe anyone else\n> did the same thing.\n>\n> If socklen_t exists, it's presumably the right thing to use, so if we\n> just hardwire \"void -> socklen_t\", I think it'd be OK. If we're wrong,\n> we'll hear about it...\n\nWell, I would like to know how this is going to evolve. I will try to \ndownload an update with cvsup in a few hours.\nHope there's something new. Else, please tell me what would be the best \nsolution (even for the moment).\n\nThanks\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Thu, 9 Nov 2000 12:04:54 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with configure" }, { "msg_contents": "Tom Lane writes:\n > If socklen_t exists, it's presumably the right thing to use, so if\n > we just hardwire \"void -> socklen_t\", I think it'd be OK. 
If we're\n > wrong, we'll hear about it...\n\nAh, if only life were that simple ;-/\n\nDepending on the version of Solaris and the compiler flags the third\nargument can be a pointer to socklen_t, void, size_t or int.\n\nFor Solaris 7 & 8 the impression I get is that accept() is an XPG4v2\nthing and so the compile flags should include one of the following\nsets of flags. The first specifies XPG4v2 (UNIX95), the second XPG5\n(UNIX98). Using either will make the third argument socklen_t*.\n\n -D_XOPEN_SOURCE -D_XOPEN_SOURCE_EXTENDED\nor\n -D_XOPEN_SOURCE=500\n\n\nSolaris 2.6 only groks the first of those. Setting the flags for\nXPG4v2 will use size_t* for arg3, otherwise it will be int*. The\nunderlying types are the same width, size_t is unsigned. I'd expect\nthat the program would work with either, give or take warnings about\nthe signedness.\n\nThe only choice of arg3 on Solaris 2.5 is int*.\n\n\nMy bottom line is that flags for XPG4v2 should be set on Solaris.\nI've successfully run configure from the current CVS sources on\nSolaris 7 with the following workaround. I presume that there is a\nbetter place to apply the change.\n\n CPPFLAGS=\"-D_XOPEN_SOURCE -D_XOPEN_SOURCE_EXTENDED\" configure\n\n\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWestern Geophysical -./\\.- by myself and does not represent\[email protected] -./\\.- the opinion of Baker Hughes or\nhttp://www.crosswinds.net/~petef -./\\.- its divisions.\n***== My old email address [email protected] will ==***\n***== not be operational from Fri 10 to Tue 14 Nov 2000. 
==***\n", "msg_date": "Thu, 9 Nov 2000 15:28:13 +0000 (GMT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: problems with configure " }, { "msg_contents": "[email protected] writes:\n\n> Depending on the version of Solaris and the compiler flags the third\n> argument can be a pointer to socklen_t, void, size_t or int.\n\nI think what I'm going to do is this: The argument in question cannot\npossibly be of a different width than int, unless someone is *really* on\ndrugs at Sun. Therefore, if the third argument to accept() is \"void *\"\nthen we just take \"int\". Evidently there will not be a compiler problem\nif you pass an \"int *\" where a \"void *\" is expected. The fact that int\nmay be signed differently than the actual argument should not be a\nproblem, since evidently the true argument type varies with compiler\noptions, but surely the BSD socket layer does not.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 9 Nov 2000 17:34:11 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with configure " }, { "msg_contents": "Peter Eisentraut writes:\n > [email protected] writes:\n > \n > > Depending on the version of Solaris and the compiler flags the\n > > third argument can be a pointer to socklen_t, void, size_t or\n > > int.\n > \n > The argument in question cannot possibly be of a different width\n > than int, unless someone is *really* on drugs at Sun. Therefore,\n > if the third argument to accept() is \"void *\" then we just take\n > \"int\". Evidently there will not be a compiler problem if you pass\n > an \"int *\" where a \"void *\" is expected. 
The fact that int may be\n > signed differently than the actual argument should not be a\n > problem, since evidently the true argument type varies with\n > compiler options, but surely the BSD socket layer does not.\n\nUnless there is more than one library that implements accept, or if\naccept is mapped as a macro to another function.\n\nWhatever, I'd be happier if \"void *\" were mapped to \"unsigned int*\" as\nthat is what the Solaris 7 library is expecting. But it's no big deal\nif you want to go with signed.\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWestern Geophysical -./\\.- by myself and does not represent\[email protected] -./\\.- the opinion of Baker Hughes or\nhttp://www.crosswinds.net/~petef -./\\.- its divisions.\n***== My old email address [email protected] will ==***\n***== not be operational from Fri 10 to Tue 14 Nov 2000. ==***\n", "msg_date": "Thu, 9 Nov 2000 16:51:44 +0000 (GMT)", "msg_from": "Pete Forman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with configure " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> [email protected] writes:\n>> Depending on the version of Solaris and the compiler flags the third\n>> argument can be a pointer to socklen_t, void, size_t or int.\n\n> I think what I'm going to do is this: The argument in question cannot\n> possibly be of a different width than int, unless someone is *really* on\n> drugs at Sun. Therefore, if the third argument to accept() is \"void *\"\n> then we just take \"int\".\n\nSounds like a plan to me.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 09 Nov 2000 12:17:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with configure " } ]
[ { "msg_contents": "is anyone working on the port of PostgreSQL for Alpha FreeBSD ?? I have\nbeen waiting for over a year very very patiently !!!\n\nI really love my Alpha FreeBSD box and I want to use PostgreSQL on it...\nbut postgresql does not build.\n\nIf they need a box I am more than willing to give them complete access\nto my Alpha !\n\nplease let me know\n\nthank you\n\nnathan\n\n", "msg_date": "Fri, 03 Nov 2000 15:33:51 -0800", "msg_from": "Nathan Boeger <[email protected]>", "msg_from_op": true, "msg_subject": "Alpha FreeBSD port of PostgreSQL !!!" }, { "msg_contents": "* Nathan Boeger <[email protected]> [001103 15:43] wrote:\n> is anyone working on the port of PostgreSQL for Alpha FreeBSD ?? I have\n> been waiting for over a year very very patiently !!!\n> \n> I really love my Alpha FreeBSD box and I want to use PostgreSQL on it...\n> but postgresql does not build.\n> \n> If they need a box I am more than willing to give them complete access\n> to my Alpha !\n> \n> please let me know\n\nPart of the problem is that Postgresql assumes FreeBSD == -m486, since\nI have absolutely no 'configure/automake' clue it's where I faltered\nwhen initially trying to compile on FreeBSD.\n\nI have access to a FreeBSD box through the FreeBSD project and would\nlike to have another shot at it, but I was hoping one of the guys\nmore intimate with autoconf could lend me a hand.\n\nthanks,\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Fri, 3 Nov 2000 16:00:41 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alpha FreeBSD port of PostgreSQL !!!" 
}, { "msg_contents": "Nathan Boeger writes:\n\n> I really love my Alpha FreeBSD box and I want to use PostgreSQL on it...\n> but postgresql does not build.\n\nIf I were to take a guess, then you need to add\n\n// snip\n#elif defined(__alpha__)\ntypedef long int slock_t;\n \n#define HAS_TEST_AND_SET\n// snip\n\ninto src/include/port/freebsd.h to get it to build. Whether it runs is\nanother question. But with the amount of detail you provided, no one can\ntell.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 4 Nov 2000 01:06:08 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alpha FreeBSD port of PostgreSQL !!!" }, { "msg_contents": "Nathan Boeger wrote:\n\n> is anyone working on the port of PostgreSQL for Alpha FreeBSD ?? I have\n> been waiting for over a year very very patiently !!!\n>\n> I really love my Alpha FreeBSD box and I want to use PostgreSQL on it...\n> but postgresql does not build.\n>\n> If they need a box I am more than willing to give them complete access\n> to my Alpha !\n>\n> please let me know\n>\n> thank you\n>\n> nathan\n\nWell, I'm new to \"hacking\" but I did get it once to compile by removing the\nCFLAGS entirely (very weak !)\n\nso if you need a hand (you might have to hold mine just a little) then let\nme know. As I said I don't mind giving you access to my box (root) cause I\ncannot use it since I don't have a database on it....\n\n\nlet me know\n\n\nnathan\n\n", "msg_date": "Fri, 03 Nov 2000 16:09:45 -0800", "msg_from": "Nathan Boeger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Alpha FreeBSD port of PostgreSQL !!!" 
}, { "msg_contents": "Alfred Perlstein writes:\n\n> Part of the problem is that Postgresql assumes FreeBSD == -m486,\n\nIf that's all then go into src/template/freebsd and remove it.\n\nThe interesting question is whether the spinlock code, which was written\nfor Alpha/Linux, works (src/include/storage/s_lock.h). All the rest\nshould work out of the box.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 4 Nov 2000 01:21:12 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alpha FreeBSD port of PostgreSQL !!!" }, { "msg_contents": "* Peter Eisentraut <[email protected]> [001103 16:16] wrote:\n> Alfred Perlstein writes:\n> \n> > Part of the problem is that Postgresql assumes FreeBSD == -m486,\n> \n> If that's all then go into src/template/freebsd and remove it.\n\nok, thanks for the pointer, I'll try to have some patches in the\nnear future.\n\n> The interesting question is whether the spinlock code, which was written\n> for Alpha/Linux, works (src/include/storage/s_lock.h). All the rest\n> should work out of the box.\n\nI'll see. :)\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Fri, 3 Nov 2000 16:51:15 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alpha FreeBSD port of PostgreSQL !!!" }, { "msg_contents": "Nathan Boeger <[email protected]> writes:\n> is anyone working on the port of PostgreSQL for Alpha FreeBSD ??\n\nNot that I know about. DEC/Compaq was kind enough to lend the project\nan Alpha for testing, but it's running Linux (RedHat 6.2).\n\n> If they need a box I am more than willing to give them complete access\n> to my Alpha !\n\nLet me get back to you after we finish wringing out the known Alpha\nportability issues on the Linux box. 
What with the fmgr changes,\n7.1 has at least a shot at running cleanly on Alphas ... but there's\nstill mop-up work to do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 20:26:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Alpha FreeBSD port of PostgreSQL !!! " }, { "msg_contents": "On Fri, 3 Nov 2000, Tom Lane wrote:\n\n> Nathan Boeger <[email protected]> writes:\n> > is anyone working on the port of PostgreSQL for Alpha FreeBSD ??\n> \n> Not that I know about. DEC/Compaq was kind enough to lend the project\n> an Alpha for testing, but it's running Linux (RedHat 6.2).\n\nWe've also got a copy of True64 to throw onto the machine, and I *have* to\nwork on getting FreeBSD running on it too ... never enough hours in the\nday :(\n\nJeff, feel like trying out the True64 install and seeing how it\ngoes? Worst case, we have to install Redhat from scratch *shrug*\n\nTom, anything on that machine that you wanna backup? Or it's all safe? \n\n\n", "msg_date": "Fri, 3 Nov 2000 23:18:23 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Alpha FreeBSD port of PostgreSQL !!! " }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Jeff, feel like trying out the True64 install and seeing how it\n> goes? Worst case, we have to install Redhat from scratch *shrug*\n\n> Tom, anything on that machine that you wanna backup? Or it's all safe? \n\nNo problem for me. Just keep us posted on which OS it's running today\n;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 22:25:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Alpha FreeBSD port of PostgreSQL !!! " }, { "msg_contents": "On Fri, 3 Nov 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Jeff, feel like trying out the True64 install and seeing how it\n> > goes? 
Worst case, we have to install Redhat from scratch *shrug*\n> \n> > Tom, anything on that machine that you wanna backup? Or it's all safe? \n> \n> No problem for me. Just keep us posted on which OS it's running today\n> ;-)\n\nWill do, I know that Jeff has been antsy since True64 got in the other\nday, so I don't imagine it's gonna take him long to get that installed :)\n\n\n\n", "msg_date": "Fri, 3 Nov 2000 23:37:23 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Alpha FreeBSD port of PostgreSQL !!! " }, { "msg_contents": "I defined a procedure\n\nCREATE FUNCTION meta_class (varchar) RETURNS varchar AS '\n...\n' LANGUAGE 'pltcl';\n\nThis works fine. But when I want to call it from another tcl procedure I\nget errors:\nbf2=# CREATE FUNCTION foo (varchar) RETURNS varchar AS '\n return [meta_class $1]\n' LANGUAGE 'pltcl';\n\nbf2'# bf2'# CREATE\n\nbf2=# bf2=# select foo(class) from weapon_Types;\nERROR: pltcl: invalid command name \"meta_class\"\n\nThis IS possible -- isn't it?\n\n-Jonathan\n\n", "msg_date": "Sat, 4 Nov 2000 11:31:25 -0700", "msg_from": "\"Jonathan Ellis\" <[email protected]>", "msg_from_op": false, "msg_subject": "How do you call one pltcl procedure from another?" } ]
[ { "msg_contents": "> This comparison will work as long as the range of interesting XIDs\n> never exceeds WRAPLIMIT/2. Essentially, we envision the actual value\n> of XID as being the low-order bits of a logical XID that always\n> increases, and we assume that no extant XID is more than WRAPLIMIT/2\n> transactions old, so we needn't keep track of the high-order bits.\n\nSo, we'll have to abort some long running transaction.\nAnd before after-wrap XIDs will be close to aborted xid you'd better\nensure that vacuum *successfully* run over all tables in database\n(and shared tables) aborted transaction could touch.\n\n> This scheme allows us to survive XID wraparound at the cost of slight\n> additional complexity in ordered comparisons of XIDs (which is not a\n> really performance-critical task AFAIK), and at the cost that the\n> original insertion XIDs of all but recent tuples will be lost by\n> VACUUM. The system doesn't particularly care about that, but old XIDs\n> do sometimes come in handy for debugging purposes. A possible\n\nI wouldn't care about this.\n\n> compromise is to overwrite only XIDs that are older than, say,\n> WRAPLIMIT/4 instead of doing so as soon as possible. This would mean\n> the required VACUUM frequency is every WRAPLIMIT/4 xacts instead of\n> every WRAPLIMIT/2 xacts.\n> \n> We have a straightforward tradeoff between the maximum size of pg_log\n> (WRAPLIMIT/4 bytes) and the required frequency of VACUUM (at least\n\nRequired frequency of *successful* vacuum over *all* tables.\nWe would have to remember something in pg_class/pg_database\nand somehow force vacuum over \"too-long-unvacuumed-tables\"\n*automatically*.\n\n> every WRAPLIMIT/2 or WRAPLIMIT/4 transactions). This could be made\n> configurable in config.h for those who're intent on customization,\n> but I'd be inclined to set the default value at WRAPLIMIT = 1G.\n> \n> Comments? 
Vadim, is any of this about to be superseded by WAL?\n> If not, I'd like to fix it for 7.1.\n\nIf undo would be implemented then we could delete pg_log between\npostmaster startups - startup counter is remembered in pages, so\nseeing old startup id in a page we would know that there are only\nlong ago committed xactions (ie only visible changes) there\nand avoid xid comparison. But ... there will be no undo in 7.1.\nAnd I foresee problems with WAL based BAR implementation if we'll\nfollow proposed solution: redo restores original xmin/xmax - how\nto \"freeze\" xids while restoring DB?\n\n(Sorry, I have to run away now... and have to think more about the issue).\n\nVadim\n", "msg_date": "Fri, 3 Nov 2000 16:24:38 -0800 ", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Transaction ID wraparound: problem and proposed sol\n\tution" }, { "msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> So, we'll have to abort some long running transaction.\n\nWell, yes, some transaction that continues running while ~ 500 million\nother transactions come and go might give us trouble. 
I wasn't really\nplanning to worry about that case ;-)\n\n> Required frequency of *successful* vacuum over *all* tables.\n> We would have to remember something in pg_class/pg_database\n> and somehow force vacuum over \"too-long-unvacuumed-tables\"\n> *automatically*.\n\nI don't think this is a problem now; in practice you couldn't possibly\ngo for half a billion transactions without vacuuming, I'd think.\n\nIf your plans to eliminate regular vacuuming become reality, then this\nscheme might become less reliable, but at present I think there's plenty\nof safety margin.\n\n> If undo would be implemented then we could delete pg_log between\n> postmaster startups - startup counter is remembered in pages, so\n> seeing old startup id in a page we would know that there are only\n> long ago committed xactions (ie only visible changes) there\n> and avoid xid comparison. But ... there will be no undo in 7.1.\n> And I foresee problems with WAL based BAR implementation if we'll\n> follow proposed solution: redo restores original xmin/xmax - how\n> to \"freeze\" xids while restoring DB?\n\nSo, we might eventually have a better answer from WAL, but not for 7.1.\n\nI think my idea is reasonably non-invasive and could be removed without\nmuch trouble once WAL offers a better way. I'd really like to have some\nanswer for 7.1, though. The sort of numbers John Scott was quoting to\nme for Verizon's paging network throughput make it clear that we aren't\ngoing to survive at that level with a limit of 4G transactions per\ndatabase reload. 
Having to vacuum everything on at least a\n1G-transaction cycle is salable, dump/initdb/reload is not ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Nov 2000 20:12:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction ID wraparound: problem and proposed solution" }, { "msg_contents": "> > So, we'll have to abort some long running transaction.\n> \n> Well, yes, some transaction that continues running while ~ 500 million\n> other transactions come and go might give us trouble. I wasn't really\n> planning to worry about that case ;-)\n\nAgreed, I just don't like to rely on assumptions -:)\n\n> > Required frequency of *successful* vacuum over *all* tables.\n> > We would have to remember something in pg_class/pg_database\n> > and somehow force vacuum over \"too-long-unvacuumed-tables\"\n> > *automatically*.\n> \n> I don't think this is a problem now; in practice you couldn't possibly\n> go for half a billion transactions without vacuuming, I'd think.\n\nWhy not?\nAnd once again - assumptions are not good for transaction area.\n\n> If your plans to eliminate regular vacuuming become reality, then this\n> scheme might become less reliable, but at present I think there's plenty\n> of safety margin.\n>\n> > If undo would be implemented then we could delete pg_log between\n> > postmaster startups - startup counter is remembered in pages, so\n> > seeing old startup id in a page we would know that there are only\n> > long ago committed xactions (ie only visible changes) there\n> > and avoid xid comparison. But ... 
there will be no undo in 7.1.\n> > And I foresee problems with WAL based BAR implementation if we'll\n> > follow proposed solution: redo restores original xmin/xmax - how\n> > to \"freeze\" xids while restoring DB?\n> \n> So, we might eventually have a better answer from WAL, but not for 7.1.\n> I think my idea is reasonably non-invasive and could be removed without\n> much trouble once WAL offers a better way. I'd really like to have some\n> answer for 7.1, though. The sort of numbers John Scott was quoting to\n> me for Verizon's paging network throughput make it clear that we aren't\n> going to survive at that level with a limit of 4G transactions per\n> database reload. Having to vacuum everything on at least a\n> 1G-transaction cycle is salable, dump/initdb/reload is not ...\n\nUnderstandable. And probably we can get BAR too but require full\nbackup every WRAPLIMIT/2 (or better /4) transactions.\n\nVadim\n\n\n", "msg_date": "Sat, 4 Nov 2000 21:59:00 -0800", "msg_from": "\"Vadim Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction ID wraparound: problem and proposed solution" }, { "msg_contents": "Tom Lane wrote:\n> \"Mikheev, Vadim\" <[email protected]> writes:\n> > Required frequency of *successful* vacuum over *all* tables.\n> > We would have to remember something in pg_class/pg_database\n> > and somehow force vacuum over \"too-long-unvacuumed-tables\"\n> > *automatically*.\n>\n> I don't think this is a problem now; in practice you couldn't possibly\n> go for half a billion transactions without vacuuming, I'd think.\n\n ISTM you forgot that the XID counter (and usage) is global.\n\n You need to have *any* table of *any* database in the\n instance vacuumed before you are sure. Some low-traffic DB's\n might not get vacuumed for years (for example template1).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. 
#\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Tue, 7 Nov 2000 14:30:31 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction ID wraparound: problem and proposed solution" } ]
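The thread above turns on comparing transaction IDs circularly rather than linearly: once XIDs are allowed to wrap, "older" can only mean "within the trailing half of the ID space". A minimal sketch of that comparison rule, in Python rather than PostgreSQL's C, with `WRAPLIMIT` and `xid_precedes` as placeholder names for the idea being discussed (not actual PostgreSQL identifiers):

```python
# Circular ("modulo") comparison of 32-bit transaction IDs, as sketched
# in the wraparound discussion above. Each XID treats the 2^31 IDs behind
# it as "in the past" and the 2^31 ahead as "in the future"; this only
# stays unambiguous if no visible XID is more than ~2^31 transactions old,
# which is why old tuples must be frozen (vacuumed) before that point.

WRAPLIMIT = 2 ** 32  # size of the XID space (placeholder name)

def xid_precedes(a: int, b: int) -> bool:
    """True if XID a logically precedes XID b under circular arithmetic."""
    diff = (a - b) % WRAPLIMIT
    # Reinterpret the unsigned difference as a signed 32-bit value.
    if diff >= WRAPLIMIT // 2:
        diff -= WRAPLIMIT
    return diff < 0
```

With this rule an XID just below the wrap point correctly precedes a small post-wrap XID, which is the case a plain `a < b` comparison gets wrong.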
[ { "msg_contents": "Seems we call the command create language and drop language, but the\nsyntax requires CREATE PROCEDURAL LANGUAGE and DROP PROCEDURAL LANGUAGE.\n\nI am going to change the docs and grammar so PROCEDURAL is optional. Is\nthis OK?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 4 Nov 2000 15:49:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "DROP [PROCEDURAL] LANGUAGE" } ]
[ { "msg_contents": "Looks like someone changed an error message, but didn't upgrade the\nexpected file...\n(Current sources/multibyte/UnixWare 7.1.1)\n\n\n*** ./expected/geometry.out\tTue Sep 12 16:07:16 2000\n--- ./results/geometry.out\tSat Nov 4 15:55:05 2000\n***************\n*** 127,133 ****\n | (-5,-12) | [(10,-10),(-3,-4)] | (-1.60487804878049,-4.64390243902439)\n | (10,10) | [(10,-10),(-3,-4)] | (2.39024390243902,-6.48780487804878)\n | (0,0) | [(-1000000,200),(300000,-40)] | (0.0028402365895872,15.384614860264)\n! | (-10,0) | [(-1000000,200),(300000,-40)] | (-9.99715942258202,15.3864610140472)\n | (-3,4) | [(-1000000,200),(300000,-40)] | (-2.99789812267519,15.3851688427303)\n | (5.1,34.5) | [(-1000000,200),(300000,-40)] | (5.09647083221496,15.3836744976925)\n | (-5,-12) | [(-1000000,200),(300000,-40)] | (-4.99494420845634,15.3855375281616)\n--- 127,133 ----\n | (-5,-12) | [(10,-10),(-3,-4)] | (-1.60487804878049,-4.64390243902439)\n | (10,10) | [(10,-10),(-3,-4)] | (2.39024390243902,-6.48780487804878)\n | (0,0) | [(-1000000,200),(300000,-40)] | (0.0028402365895872,15.384614860264)\n! | (-10,0) | [(-1000000,200),(300000,-40)] | (-9.99715942258202,15.3864610140473)\n | (-3,4) | [(-1000000,200),(300000,-40)] | (-2.99789812267519,15.3851688427303)\n | (5.1,34.5) | [(-1000000,200),(300000,-40)] | (5.09647083221496,15.3836744976925)\n | (-5,-12) | [(-1000000,200),(300000,-40)] | (-4.99494420845634,15.3855375281616)\n***************\n*** 150,160 ****\n six | box \n -----+----------------------------------------------------------------------------\n | (2.12132034355964,2.12132034355964),(-2.12132034355964,-2.12132034355964)\n! | (71.7106781186548,72.7106781186548),(-69.7106781186548,-68.7106781186548)\n! | (4.53553390593274,6.53553390593274),(-2.53553390593274,-0.535533905932738)\n! | (3.12132034355964,4.12132034355964),(-1.12132034355964,-0.121320343559643)\n | (107.071067811865,207.071067811865),(92.9289321881345,192.928932188135)\n! 
| (170.710678118655,70.7106781186548),(29.2893218813452,-70.7106781186548)\n (6 rows)\n \n -- translation\n--- 150,160 ----\n six | box \n -----+----------------------------------------------------------------------------\n | (2.12132034355964,2.12132034355964),(-2.12132034355964,-2.12132034355964)\n! | (71.7106781186547,72.7106781186547),(-69.7106781186547,-68.7106781186547)\n! | (4.53553390593274,6.53553390593274),(-2.53553390593274,-0.535533905932737)\n! | (3.12132034355964,4.12132034355964),(-1.12132034355964,-0.121320343559642)\n | (107.071067811865,207.071067811865),(92.9289321881345,192.928932188135)\n! | (170.710678118655,70.7106781186547),(29.2893218813453,-70.7106781186547)\n (6 rows)\n \n -- translation\n***************\n*** 443,454 ****\n FROM CIRCLE_TBL;\n six | polygon \n -----+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n! | ((-3,0),(-2.59807621135076,1.50000000000442),(-1.49999999999116,2.59807621135842),(1.53102359078377e-11,3),(1.50000000001768,2.59807621134311),(2.59807621136607,1.4999999999779),(3,-3.06204718156754e-11),(2.59807621133545,-1.50000000003094),(1.49999999996464,-2.59807621137373),(-4.59307077235131e-11,-3),(-1.5000000000442,-2.5980762113278),(-2.59807621138138,-1.49999999995138))\n! 
| ((-99,2),(-85.6025403783588,52.0000000001473),(-48.9999999997054,88.602540378614),(1.00000000051034,102),(51.0000000005893,88.6025403781036),(87.6025403788692,51.9999999992634),(101,1.99999999897932),(87.6025403778485,-48.0000000010313),(50.9999999988214,-84.6025403791243),(0.999999998468976,-98),(-49.0000000014732,-84.6025403775933),(-85.6025403793795,-47.9999999983795))\n | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887966),(-3.33012701896897,0.500000000081028))\n! | ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.49999999996464,-0.59807621137373),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048617))\n | ((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n! 
| ((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239385585e-09),(186.602540377848,-50.0000000010313),(149.999999998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983795))\n (6 rows)\n \n -- convert the circle to an 8-point polygon\n--- 443,454 ----\n FROM CIRCLE_TBL;\n six | polygon \n -----+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n! | ((-3,0),(-2.59807621135076,1.50000000000442),(-1.49999999999116,2.59807621135842),(1.53102359017709e-11,3),(1.50000000001768,2.59807621134311),(2.59807621136607,1.4999999999779),(3,-3.06204718035418e-11),(2.59807621133545,-1.50000000003094),(1.49999999996464,-2.59807621137373),(-4.59307077053127e-11,-3),(-1.5000000000442,-2.5980762113278),(-2.59807621138138,-1.49999999995138))\n! 
| ((-99,2),(-85.6025403783588,52.0000000001473),(-48.9999999997054,88.602540378614),(1.00000000051034,102),(51.0000000005893,88.6025403781036),(87.6025403788692,51.9999999992634),(101,1.99999999897932),(87.6025403778485,-48.0000000010313),(50.9999999988214,-84.6025403791243),(0.999999998468976,-98),(-49.0000000014732,-84.6025403775933),(-85.6025403793795,-47.9999999983794))\n | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887966),(-3.33012701896897,0.500000000081028))\n! | ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.49999999996464,-0.598076211373729),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048617))\n | ((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n! 
| ((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239345139e-09),(186.602540377848,-50.0000000010313),(149.999999998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983794))\n (6 rows)\n \n -- convert the circle to an 8-point polygon\n***************\n*** 456,467 ****\n FROM CIRCLE_TBL;\n six | polygon \n -----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n! | ((-3,0),(-2.12132034355423,2.12132034356506),(1.53102359078377e-11,3),(2.12132034357588,2.1213203435434),(3,-3.06204718156754e-11),(2.12132034353258,-2.12132034358671),(-4.59307077235131e-11,-3),(-2.12132034359753,-2.12132034352175))\n! | ((-99,2),(-69.7106781184743,72.7106781188352),(1.00000000051034,102),(71.710678119196,72.7106781181134),(101,1.99999999897932),(71.7106781177526,-68.7106781195569),(0.999999998468976,-98),(-69.7106781199178,-68.7106781173917))\n | ((-4,3),(-2.53553390592372,6.53553390594176),(1.00000000002552,8),(4.5355339059598,6.53553390590567),(6,2.99999999994897),(4.53553390588763,-0.535533905977846),(0.999999999923449,-2),(-2.53553390599589,-0.535533905869586))\n | ((-2,2),(-1.12132034355423,4.12132034356506),(1.00000000001531,5),(3.12132034357588,4.1213203435434),(4,1.99999999996938),(3.12132034353258,-0.121320343586707),(0.999999999954069,-1),(-1.12132034359753,-0.121320343521752))\n | ((90,200),(92.9289321881526,207.071067811884),(100.000000000051,210),(107.07106781192,207.071067811811),(110,199.999999999898),(107.071067811775,192.928932188044),(99.9999999998469,190),(92.9289321880082,192.928932188261))\n! 
| ((0,0),(29.2893218815257,70.7106781188352),(100.00000000051,100),(170.710678119196,70.7106781181134),(200,-1.02068239385585e-09),(170.710678117753,-70.7106781195569),(99.999999998469,-100),(29.2893218800822,-70.7106781173917))\n (6 rows)\n \n --\n--- 456,467 ----\n FROM CIRCLE_TBL;\n six | polygon \n -----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n! | ((-3,0),(-2.12132034355423,2.12132034356506),(1.53102359017709e-11,3),(2.12132034357588,2.1213203435434),(3,-3.06204718035418e-11),(2.12132034353258,-2.12132034358671),(-4.59307077053127e-11,-3),(-2.12132034359753,-2.12132034352175))\n! | ((-99,2),(-69.7106781184743,72.7106781188352),(1.00000000051034,102),(71.710678119196,72.7106781181135),(101,1.99999999897932),(71.7106781177526,-68.7106781195569),(0.999999998468976,-98),(-69.7106781199178,-68.7106781173917))\n | ((-4,3),(-2.53553390592372,6.53553390594176),(1.00000000002552,8),(4.5355339059598,6.53553390590567),(6,2.99999999994897),(4.53553390588763,-0.535533905977846),(0.999999999923449,-2),(-2.53553390599589,-0.535533905869586))\n | ((-2,2),(-1.12132034355423,4.12132034356506),(1.00000000001531,5),(3.12132034357588,4.1213203435434),(4,1.99999999996938),(3.12132034353258,-0.121320343586707),(0.999999999954069,-1),(-1.12132034359753,-0.121320343521752))\n | ((90,200),(92.9289321881526,207.071067811884),(100.000000000051,210),(107.07106781192,207.071067811811),(110,199.999999999898),(107.071067811775,192.928932188044),(99.9999999998469,190),(92.9289321880082,192.928932188261))\n! 
| ((0,0),(29.2893218815257,70.7106781188352),(100.00000000051,100),(170.710678119196,70.7106781181135),(200,-1.02068239345139e-09),(170.710678117753,-70.7106781195569),(99.999999998469,-100),(29.2893218800822,-70.7106781173917))\n (6 rows)\n \n --\n***************\n*** 503,513 ****\n WHERE (p1.f1 <-> c1.f1) > 0\n ORDER BY distance, circle, point using <<;\n twentyfour | circle | point | distance \n! ------------+----------------+------------+-------------------\n! | <(100,0),100> | (5.1,34.5) | 0.976531926977964\n | <(1,2),3> | (-3,4) | 1.47213595499958\n | <(0,0),3> | (-3,4) | 2\n! | <(100,0),100> | (-3,4) | 3.07764064044151\n | <(100,0),100> | (-5,-12) | 5.68348972285122\n | <(1,3),5> | (-10,0) | 6.40175425099138\n | <(1,3),5> | (10,10) | 6.40175425099138\n--- 503,513 ----\n WHERE (p1.f1 <-> c1.f1) > 0\n ORDER BY distance, circle, point using <<;\n twentyfour | circle | point | distance \n! ------------+----------------+------------+------------------\n! | <(100,0),100> | (5.1,34.5) | 0.97653192697797\n | <(1,2),3> | (-3,4) | 1.47213595499958\n | <(0,0),3> | (-3,4) | 2\n! | <(100,0),100> | (-3,4) | 3.07764064044152\n | <(100,0),100> | (-5,-12) | 5.68348972285122\n | <(1,3),5> | (-10,0) | 6.40175425099138\n | <(1,3),5> | (10,10) | 6.40175425099138\n***************\n*** 519,525 ****\n | <(0,0),3> | (10,10) | 11.142135623731\n | <(1,3),5> | (-5,-12) | 11.1554944214035\n | <(1,2),3> | (-5,-12) | 12.2315462117278\n! | <(1,3),5> | (5.1,34.5) | 26.7657047773224\n | <(1,2),3> | (5.1,34.5) | 29.757594539282\n | <(0,0),3> | (5.1,34.5) | 31.8749193547455\n | <(100,200),10> | (5.1,34.5) | 180.778038568384\n--- 519,525 ----\n | <(0,0),3> | (10,10) | 11.142135623731\n | <(1,3),5> | (-5,-12) | 11.1554944214035\n | <(1,2),3> | (-5,-12) | 12.2315462117278\n! 
| <(1,3),5> | (5.1,34.5) | 26.7657047773223\n | <(1,2),3> | (5.1,34.5) | 29.757594539282\n | <(0,0),3> | (5.1,34.5) | 31.8749193547455\n | <(100,200),10> | (5.1,34.5) | 180.778038568384\n\n======================================================================\n\n*** ./expected/foreign_key.out\tSun Oct 22 18:32:45 2000\n--- ./results/foreign_key.out\tSat Nov 4 15:56:37 2000\n***************\n*** 694,700 ****\n NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\n CREATE TABLE FKTABLE_FAIL1 ( ftest1 int, CONSTRAINT fkfail1 FOREIGN KEY (ftest2) REFERENCES PKTABLE);\n NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n! ERROR: columns referenced in foreign key constraint not found.\n CREATE TABLE FKTABLE_FAIL2 ( ftest1 int, CONSTRAINT fkfail1 FOREIGN KEY (ftest1) REFERENCES PKTABLE(ptest2));\n NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n ERROR: UNIQUE constraint matching given keys for referenced table \"pktable\" not found\n--- 694,700 ----\n NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\n CREATE TABLE FKTABLE_FAIL1 ( ftest1 int, CONSTRAINT fkfail1 FOREIGN KEY (ftest2) REFERENCES PKTABLE);\n NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n! 
ERROR: columns in foreign key table of constraint not found.\n CREATE TABLE FKTABLE_FAIL2 ( ftest1 int, CONSTRAINT fkfail1 FOREIGN KEY (ftest1) REFERENCES PKTABLE(ptest2));\n NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n ERROR: UNIQUE constraint matching given keys for referenced table \"pktable\" not found\n\n======================================================================\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sat, 4 Nov 2000 15:58:49 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "regression failure..." }, { "msg_contents": "Larry Rosenman <[email protected]> writes:\n> Looks like someone changed an error message, but didn't upgrade the\n> expected file...\n\nYup. I just undid the error message change, because the new text did\nnot seem like an improvement...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 04 Nov 2000 19:39:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: regression failure... " }, { "msg_contents": "* Tom Lane <[email protected]> [001104 18:40]:\n> Larry Rosenman <[email protected]> writes:\n> > Looks like someone changed an error message, but didn't upgrade the\n> > expected file...\n> \n> Yup. I just undid the error message change, because the new text did\n> not seem like an improvement...\nI agree. Thanks!\n\nLER\n\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sat, 4 Nov 2000 18:50:34 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: regression failure..." } ]
[ { "msg_contents": "As everyone knows, there was discussion this week about a new utility to\nbe added to 7.0.3. The final decision was that the utility should go\ninto /contrib, and that the utility will not be mentioned in the release\nnotes.\n\nI think we can learn some things from this episode.\n\nCertainly, this had many missteps:\n\n\tCompany requests new feature with no discussion\n\tEmployed core member adds feature without discussion\n\tFeature is added to a subrelease, which usually has no new features\n\tSubrelease is already frozen\n\tFeature is so minor, it is not even on the TODO list\n\tInitially added as a new command that would disappear in 7.1, now\n\t in /contrib\n\nI realize there were extenuating circumstances that caused some of these\nmissteps.\n\nThe good news from all of this is how it was resolved. Each employed\ncore member had a different opinion, showing we still do control our own\nopinions. I believe the final solution was optimal.\n\nHaving gone through this, I am now slightly less concerned about whether\nthe dynamics will change now that companies are involved with\nPostgreSQL. Seems like this was handled just like before. In fact,\nsome employed core members even went beyond their normal inclinations to\nmaintain fairness.\n\nWe do have these goofy issues to resolve occasionally. The interesting\nthing to me is that this company-induced one was really no different\nthan any of the past issues. We discussed it, and we resolved it.\n\nBasically, it seems that while we will have missteps like this from time\nto time, our ability to resolve them is unhampered. \n\nApologies to those who felt this week's discussion was too lengthy. \nThis is the only way we have to make fair, informed decisions that take\neveryone's opinions into account.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 4 Nov 2000 22:29:53 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Thoughts on 7.0.3 pg_dump addition" }, { "msg_contents": "Bruce Momjian wrote:\n> Basically, it seems that while we will have missteps like this from time\n> to time, our ability to resolve them is unhampered.\n \n> Apologies to those who felt this week's discussion was too lengthy.\n> This is the only way we have to make fair, informed decisions that take\n> everyone's opinions into account.\n\nThis discussion just reaffirms to me why the PostgreSQL project is one\nof, if not _the_, best run open source projects out there.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 04 Nov 2000 22:51:31 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thoughts on 7.0.3 pg_dump addition" } ]
[ { "msg_contents": "\nIn order that we can get a few days of testing on these, make sure the\npackaging is right and whatnot, we are holding off on a formal release\nuntil early->mid next week ...\n\nI've just put pre-release tar balls into:\n\n\tftp://ftp.postgresql.org/pub/source/v7.0.3\n\nPlease take a minute to download and test these out, so that when we\nrelease, we don't get a bunch of \"oops, you forgot this\" messages :)\n\nThanks ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 5 Nov 2000 03:02:19 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "v7.0.3 *pre-release* ..." }, { "msg_contents": "The Hermit Hacker writes:\n > \n > \tftp://ftp.postgresql.org/pub/source/v7.0.3\n > \n > Please take a minute to download and test these out, so that when\n > we release, we don't get a bunch of \"oops, you forgot this\"\n > messages :)\n\nI've tried it on a couple of platforms:\n\nIRIX 6.5.5m, MIPSpro 7.3.\n\n Configure detection of accept() is working.\n\n Several regression tests fail. Two patches that I'd submitted to\n fix these on IRIX have not been applied:\n\n 2000-08-18: Update tests/regress/resultmap for IRIX 6\n 2000-10-12: Regression tests - expected file for IRIX geometry test\n\nAIX 4.3.2, xlc 3.6.6.\n\n Same regression test failures as 7.0.2.\n The nasty failures are triggers, misc, and plgpgsql which\n consistently give \"pqReadData() -- backend closed the channel\n unexpectedly.\" at the same point. Also the sanity_check hangs\n during a VACUUM. 
Killing the backend was the only way to continue.\n\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWestern Geophysical -./\\.- by myself and does not represent\[email protected] -./\\.- the opinion of Baker Hughes or\nhttp://www.crosswinds.net/~petef -./\\.- its divisions.\n", "msg_date": "Tue, 7 Nov 2000 09:43:46 +0000 (GMT)", "msg_from": "Pete Forman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: v7.0.3 *pre-release* ..." }, { "msg_contents": "My guess is that your patches are in current, but not in this\nsubrelease. We only put in show-stopper fixes.\n\n> The Hermit Hacker writes:\n> > \n> > \tftp://ftp.postgresql.org/pub/source/v7.0.3\n> > \n> > Please take a minute to download and test these out, so that when\n> > we release, we don't get a bunch of \"oops, you forgot this\"\n> > messages :)\n> \n> I've tried it on a couple of platforms:\n> \n> IRIX 6.5.5m, MIPSpro 7.3.\n> \n> Configure detection of accept() is working.\n> \n> Several regression tests fail. Two patches that I'd submitted to\n> fix these on IRIX have not been applied:\n> \n> 2000-08-18: Update tests/regress/resultmap for IRIX 6\n> 2000-10-12: Regression tests - expected file for IRIX geometry test\n> \n> AIX 4.3.2, xlc 3.6.6.\n> \n> Same regression test failures as 7.0.2.\n> The nasty failures are triggers, misc, and plgpgsql which\n> consistently give \"pqReadData() -- backend closed the channel\n> unexpectedly.\" at the same point. Also the sanity_check hangs\n> during a VACUUM. Killing the backend was the only way to continue.\n> \n> -- \n> Pete Forman -./\\.- Disclaimer: This post is originated\n> Western Geophysical -./\\.- by myself and does not represent\n> [email protected] -./\\.- the opinion of Baker Hughes or\n> http://www.crosswinds.net/~petef -./\\.- its divisions.\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 7 Nov 2000 06:34:04 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: v7.0.3 *pre-release* ..." } ]
[ { "msg_contents": " Date: Sunday, November 5, 2000 @ 17:50:19\nAuthor: vadim\n\nUpdate of /home/projects/pgsql/cvsroot/pgsql/src/backend/access/transam\n from hub.org:/home/projects/pgsql/tmp/cvs-serv21922/src/backend/access/transam\n\nModified Files:\n\txact.c xlog.c \n\n----------------------------- Log Message -----------------------------\n\nNew CHECKPOINT command.\nAuto removing of offline log files and creating new file\nat checkpoint time.\n\n", "msg_date": "Sun, 5 Nov 2000 17:50:19 -0500 (EST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "pgsql/src/backend/access/transam (xact.c xlog.c)" }, { "msg_contents": "[email protected] writes:\n\n> New CHECKPOINT command.\n> Auto removing of offline log files and creating new file\n> at checkpoint time.\n\nIs this the same as a SAVEPOINT?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n\n", "msg_date": "Mon, 6 Nov 2000 17:10:01 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/src/backend/access/transam (xact.c xlog.c)" } ]
[ { "msg_contents": "\nI'm tryin to figure out how to speed up udmsearch when run under\npostgresql, and am being hit by atrocious performance when using a LIKE\nquery ... the query looks like:\n\nSELECT ndict.url_id,ndict.intag \n FROM ndict,url \n WHERE ndict.word_id=1971739852 \n AND url.rec_id=ndict.url_id \n AND (url.url LIKE 'http://www.postgresql.org/%');\n\nTake off the AND ( LIKE ) part of the query, finishes almost as soon as\nyou hit return. Put it back in, and you can go for coffee before it\nfinishes ...\n\n\tIf I do 'SELECT url_id FROM ndict WHERE word_id=1971739852', there\nare 153 records returned ... is there some way, that I'm not thinking, of\nre-writing the above so that it 'resolves' the equality before the LIKE in\norder to reduce the number of tuples that it has to do the LIKE on? Is\nthere some way of writing the above so that it doesn't take forever to\nexecute?\n\n\tI'm running this on a Dual-PIII 450 Server, 512Meg of RAM, zero\nswap space being used ... the database has its indices on one hard drive,\nthe tables themselves are on a second one ... its PgSQL 7.0.2 (Tom,\nanything in v7.0.3 that might improve this?) and startup is as:\n\n#!/bin/tcsh\nsetenv PORT 5432\nsetenv POSTMASTER /pgsql/bin/postmaster\nunlimit\n${POSTMASTER} -B 384 -N 192 \\\n -o \"-F -S 32768\" \\\n -i -p ${PORT} -D/pgsql/data >&\n/pgsql/logs/postmaster.${PORT}.$$ &\n\n\tSo its not like I'm not throwing alot of resources at this ...\n\n\tIs there anything that we can do to improve this? I was trying to\nthink of some way to use a subselect to narrow the search results, or\nsomething ...\n\n\tOh, the above explains down to:\n\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=0.00..1195.14 rows=1 width=10)\n -> Index Scan using url_url on url (cost=0.00..2.73 rows=1 width=4)\n -> Index Scan using n_word on ndict (cost=0.00..1187.99 rows=353 width=6)\n\nEXPLAIN\n\n\tndict: 663018 tuples\n\t url: 29276 tuples\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 5 Nov 2000 21:03:38 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "How to get around LIKE inefficiencies?" }, { "msg_contents": "A brute-force answer would be to remove the url_url index ;-)\ndunno if that would slow down other queries, however.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 Nov 2000 21:28:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get around LIKE inefficiencies? " }, { "msg_contents": "Sorry to be getting in here late. Have you tried CLUSTER? If it is\nusing an index scan, and it is slow, cluster often helps, especially\nwhen there are several duplicate matches, as there is with LIKE. Let me\nknow how that works.\n\n> A brute-force answer would be to remove the url_url index ;-)\n> dunno if that would slow down other queries, however.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 5 Nov 2000 21:36:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get around LIKE inefficiencies?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Sorry to be getting in here late. Have you tried CLUSTER?\n\nProlly won't help much. I think what he's getting burnt by\nis that the planner thinks that an indexscan based on the\nLIKE 'http://www.postgresql.org/%' condition will be extremely\nselective --- it has no idea that most of the URLs in his table\nwill match that prefix. 
It's ye same olde nonuniform-distribution\nproblem; until we have better statistics, there's not much hope\nfor a non-kluge solution.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 Nov 2000 21:47:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get around LIKE inefficiencies? " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Sorry to be getting in here late. Have you tried CLUSTER?\n> \n> Prolly won't help much. I think what he's getting burnt by\n> is that the planner thinks that an indexscan based on the\n> LIKE 'http://www.postgresql.org/%' condition will be extremely\n> selective --- it has no idea that most of the URLs in his table\n> will match that prefix. It's ye same olde nonuniform-distribution\n> problem; until we have better statistics, there's not much hope\n> for a non-kluge solution.\n\nBut I think it will help. There will be lots of index lookups, but they\nwill be sequential in the heap, not random over the heap.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 5 Nov 2000 21:48:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get around LIKE inefficiencies?]" }, { "msg_contents": "Yes, I am waiting to hear back on that.\n\n> At 21:47 5/11/00 -0500, Tom Lane wrote:\n> >It's ye same olde nonuniform-distribution\n> >problem; until we have better statistics, there's not much hope\n> >for a non-kluge solution.\n> \n> Wasn't somebody trying to do something with that a few weeks back?\n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 
75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 5 Nov 2000 21:49:07 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get around LIKE inefficiencies?" }, { "msg_contents": "At 21:28 5/11/00 -0500, Tom Lane wrote:\n>A brute-force answer would be to remove the url_url index ;-)\n>dunno if that would slow down other queries, however.\n\nCould you trick it into not using the index (AND using the other strategy?)\nby using a calculation:\n\nSELECT ndict.url_id,ndict.intag \n FROM ndict,url \n WHERE ndict.word_id=1971739852 \n AND url.rec_id=ndict.url_id \n AND ( (url.url || ' ') LIKE 'http://www.postgresql.org/% ');\n\nit's a bit nasty.\n\nIf you had 7.1, the following might work:\n\nSELECT url_id,intag From \n (Select ndict.url_id,ndict.intag,url \n FROM ndict,url \n WHERE ndict.word_id=1971739852 \n AND url.rec_id=ndict.url_id) as zzz\nWhere \n zzz.url LIKE 'http://www.postgresql.org/%'\n\nBut I don't know how subselect-in-from handles this sort of query - it\nmight be 'clever' enough to fold it somehow.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 06 Nov 2000 13:51:28 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get around LIKE inefficiencies? " }, { "msg_contents": "At 21:47 5/11/00 -0500, Tom Lane wrote:\n>It's ye same olde nonuniform-distribution\n>problem; until we have better statistics, there's not much hope\n>for a non-kluge solution.\n\nWasn't somebody trying to do something with that a few weeks back?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 06 Nov 2000 13:52:59 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get around LIKE inefficiencies? " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> Could you trick it into not using the index (AND using the other strategy?)\n> by using a calculation:\n\n> AND ( (url.url || ' ') LIKE 'http://www.postgresql.org/% ');\n\n> it's a bit nasty.\n\nLooks like a great kluge to me ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 Nov 2000 21:59:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get around LIKE inefficiencies? " }, { "msg_contents": "\nyowch ... 
removing that one index makes my 'test' search (mvcc) come back\nas:\n\n[97366] SQL 0.05s: SELECT ndict.url_id,ndict.intag FROM ndict,url WHERE ndict.word_id=572517542 AND url.rec_id=ndict.url_id AND (url.url LIKE 'http://www.postgresql.org/%')\n\nvs what we were doing before ... now, let's see how increasing the number\nof docs indexed changes this ...\n\nthanks ... \n\n\nOn Sun, 5 Nov 2000, Tom Lane wrote:\n\n> A brute-force answer would be to remove the url_url index ;-)\n> dunno if that would slow down other queries, however.\n> \n> \t\t\tregards, tom lane\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 5 Nov 2000 23:01:37 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to get around LIKE inefficiencies? " }, { "msg_contents": "At 21:59 5/11/00 -0500, Tom Lane wrote:\n>\n>Looks like a great kluge to me ;-)\n>\n\nHmph. I prefer to think of it as a 'user-defined optimizer hint'. ;-}\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 06 Nov 2000 14:10:36 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get around LIKE inefficiencies? " }, { "msg_contents": "On Mon, 6 Nov 2000, Philip Warner wrote:\n\n> At 21:59 5/11/00 -0500, Tom Lane wrote:\n> >\n> >Looks like a great kluge to me ;-)\n> >\n> \n> Hmph. I prefer to think of it as a 'user-defined optimizer hint'. 
;-}\n\nExcept, if we are telling it to get rid of using the index, may as well\nget rid of it altogether, as updates/inserts would be slowed down by\nhaving to update that too ...\n\ni can now do 3 word searches in what looks like no time ... not sure how\nit will look after I have ~100k documents indexed, but at least now its\nnot diead at 22k ...\n\n\n\n", "msg_date": "Sun, 5 Nov 2000 23:12:38 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to get around LIKE inefficiencies? " }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Mon, 6 Nov 2000, Philip Warner wrote:\n>> At 21:59 5/11/00 -0500, Tom Lane wrote:\n>>>> Looks like a great kluge to me ;-)\n>> \n>> Hmph. I prefer to think of it as a 'user-defined optimizer hint'. ;-}\n\n> Except, if we are telling it to get rid of using the index, may as well\n> get rid of it altogether, as updates/inserts would be slowed down by\n> having to update that too ...\n\nSure --- but do you have any other query types where the index *is*\nuseful? If so, Philip's idea will let you suppress use of the index\nfor just this one kind of query.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 Nov 2000 22:17:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get around LIKE inefficiencies? " }, { "msg_contents": "At 23:12 5/11/00 -0400, The Hermit Hacker wrote:\n>\n>Except, if we are telling it to get rid of using the index, may as well\n>get rid of it altogether, as updates/inserts would be slowed down by\n>having to update that too ...\n>\n\nSo long as you don't ever need the index for anything else, then getting\nrid of it is by far the best solution. But, eg, if you want to check if a\npage is already indexed you will probably end up with a sequential search.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. 
|----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 06 Nov 2000 14:18:34 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get around LIKE inefficiencies? " }, { "msg_contents": "The Hermit Hacker wrote:\n> I'm tryin to figure out how to speed up udmsearch when run under\n> postgresql, and am being hit by atrocious performance when using a LIKE\n> query ... the query looks like:\n> SELECT ndict.url_id,ndict.intag\n> FROM ndict,url\n> WHERE ndict.word_id=1971739852\n> AND url.rec_id=ndict.url_id\n> AND (url.url LIKE 'http://www.postgresql.org/%');\n> Take off the AND ( LIKE ) part of the query, finishes almost as soon as\n> you hit return. Put it back in, and you can go for coffee before it\n> finishes ...\n\nThe entire *approach* is wrong. I'm currently in the process of optimizing\na db which is used for logfile mining, and it was originally built with the same\nkludge.... it seems to make sense when there's only a few thousand records,\nbut at 20 million records, yikes!\n\nThe problem is that there's a \"like\" operation for something that is\nfundamentally static (http://www.postgresql.org/) with some varying\ndata *after it*, that you're not using, in any form, for this operation.\nThis can be solved one of two ways:\n\n1. Preprocess your files to strip out the paths and arguments on\na new field for the domain call. You are only setting up that data once,\nso you shouldn't be using a \"like\" operator for every query. It's not\nlike on monday the server is \"http://www.postgresql.org/1221\" and on\ntuesday the server is \"http://www.postgresql.org/12111\". 
It's always\nthe *same server*, so split out that data into it's own column, it's own\nindex.\n\nThis turns your query into:\nSELECT ndict.url_id,ndict.intag\n FROM ndict,url\n WHERE ndict.word_id=1971739852\n AND url.rec_id=ndict.url_id\n AND url.server_url='http://www.postgresql.org/';\n\n2. Trigger to do the above, if you're doing on-the-fly inserts into\nyour db (so you can't pre-process).\n\n-Ronabop\n\n--\nBrought to you from iBop the iMac, a MacOS, Win95, Win98, LinuxPPC machine,\nwhich is currently in MacOS land. Your bopping may vary.\n", "msg_date": "Sun, 05 Nov 2000 20:19:21 -0700", "msg_from": "Ron Chmara <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get around LIKE inefficiencies?" }, { "msg_contents": "On Sun, 5 Nov 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > On Mon, 6 Nov 2000, Philip Warner wrote:\n> >> At 21:59 5/11/00 -0500, Tom Lane wrote:\n> >>>> Looks like a great kluge to me ;-)\n> >> \n> >> Hmph. I prefer to think of it as a 'user-defined optimizer hint'. ;-}\n> \n> > Except, if we are telling it to get rid of using the index, may as well\n> > get rid of it altogether, as updates/inserts would be slowed down by\n> > having to update that too ...\n> \n> Sure --- but do you have any other query types where the index *is*\n> useful? If so, Philip's idea will let you suppress use of the index\n> for just this one kind of query.\n\nActually, it looks like they got a bit smart, and they search for the URL\nin the url table based on the CRC32 value instead of text ...\n\n\n", "msg_date": "Sun, 5 Nov 2000 23:34:11 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to get around LIKE inefficiencies? 
" }, { "msg_contents": "On Mon, 6 Nov 2000, Philip Warner wrote:\n\n> At 23:12 5/11/00 -0400, The Hermit Hacker wrote:\n> >\n> >Except, if we are telling it to get rid of using the index, may as well\n> >get rid of it altogether, as updates/inserts would be slowed down by\n> >having to update that too ...\n> >\n> \n> So long as you don't ever need the index for anything else, then getting\n> rid of it is by far the best solution. But, eg, if you want to check if a\n> page is already indexed you will probably end up with a sequential search.\n\nexcept, it appears that they both store the text of the URL *and* the\nCRC32 value of it, and do other queries based on the CRC32 value of it ...\n\n", "msg_date": "Sun, 5 Nov 2000 23:34:49 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to get around LIKE inefficiencies? " }, { "msg_contents": "I am adding a new TODO item:\n\n\t* Add SET PERFORMANCE_TIPS option to suggest INDEX, VACUUM, VACUUM\n\t ANALYZE, and CLUSTER\n\nSeems we should be able to emit NOTICE messages suggesting performance\nimprovements.\n\n\n\n> Philip Warner <[email protected]> writes:\n> > Could you trick it into not using the index (AND using the other strategy?)\n> > by using a calculation:\n> \n> > AND ( (url.url || ' ') LIKE 'http://www.postgresql.org/% ');\n> \n> > it's a bit nasty.\n> \n> Looks like a great kluge to me ;-)\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 5 Nov 2000 22:53:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get around LIKE inefficiencies?" 
}, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I am adding a new TODO item:\n> > \t* Add SET PERFORMANCE_TIPS option to suggest INDEX, VACUUM, VACUUM\n> > \t ANALYZE, and CLUSTER\n> > Seems we should be able to emit NOTICE messages suggesting performance\n> > improvements.\n> \n> This would be targeted to help those who refuse to read documentation?\n> \n> I'm not following the point here...\n\nWell, I think there is some confusion about when CLUSTER is good, and I\ncan see people turning it on and running their application to look for\nthings they forgot, like indexes or vacuum analyze.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 5 Nov 2000 22:57:31 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get around LIKE inefficiencies?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I am adding a new TODO item:\n> \t* Add SET PERFORMANCE_TIPS option to suggest INDEX, VACUUM, VACUUM\n> \t ANALYZE, and CLUSTER\n> Seems we should be able to emit NOTICE messages suggesting performance\n> improvements.\n\nThis would be targeted to help those who refuse to read documentation?\n\nI'm not following the point here...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 Nov 2000 23:01:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get around LIKE inefficiencies? 
" }, { "msg_contents": "On Sun, 5 Nov 2000, Bruce Momjian wrote:\n\n> > Bruce Momjian <[email protected]> writes:\n> > > I am adding a new TODO item:\n> > > \t* Add SET PERFORMANCE_TIPS option to suggest INDEX, VACUUM, VACUUM\n> > > \t ANALYZE, and CLUSTER\n> > > Seems we should be able to emit NOTICE messages suggesting performance\n> > > improvements.\n> > \n> > This would be targeted to help those who refuse to read documentation?\n> > \n> > I'm not following the point here...\n> \n> Well, I think there is some confusion about when CLUSTER is good, and I\n> can see people turning it on and running their application to look for\n> things they forgot, like indexes or vacuum analyze.\n\nso, we are gonna have an AI built into the database now too? my\nexperience to date is that each scenario is different for what can be done\nto fix something ... as my last problem shows. I could remove the index,\nsince it isn't used anywhere else that I'm aware of, or, as philip pointed\nout, I could change my query ...\n\nnow, this 'PERFORMANCE_TIPS', would it have to be smart enough to think\nabout Philips solution, or only Tom's? How is such a knowledge base\nmaintained? Who is turned off of PgSQL when they enable that, and it\ndoesn't help their performance even though they follow the\nrecommendations?\n\n\n\n", "msg_date": "Mon, 6 Nov 2000 00:13:56 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to get around LIKE inefficiencies?" }, { "msg_contents": "> so, we are gonna have an AI built into the database now too? my\n> experience to date is that each scenario is different for what can be done\n> to fix something ... as my last problem shows. I could remove the index,\n> since it isn't used anywhere else that I'm aware of, or, as philip pointed\n> out, I could change my query ...\n> \n> now, this 'PERFORMANCE_TIPS', would it have to be smart enough to think\n> about Philips solution, or only Tom's? 
How is such a knowledge base\n> maintained? Who is turned off of PgSQL when they enable that, and it\n> doesn't help their performance even though they follow the\n> recommendations?\n\nWell, I think it would be helpful to catch the most obvious things\npeople forget, but if no one thinks its a good idea, I will yank it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 5 Nov 2000 23:14:56 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get around LIKE inefficiencies?" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Well, I think it would be helpful to catch the most obvious things\n> > people forget, but if no one thinks its a good idea, I will yank it.\n> \n> If you've got an idea *how* to do it in any sort of reliable fashion,\n> I'm all ears. But it sounds more like pie-in-the-sky to me.\n\nBut I like pie. :-)\n\nWell, we could throw a message when the optimizer tries to get\nstatistics on a column with no analyze stats, or table stats on a table\nthat has never been vacuumed, or does a sequential scan on a table that\nhas >%50 expired rows. \n\nWe could throw a message when a query does an index scan that bounces\nall over the heap looking for a single value. We could though a message\nwhen a constant is compared to a column, and there is no index on the\ncolumn.\n\nNot perfect, but would help catch some obvious things people forget.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 5 Nov 2000 23:24:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get around LIKE inefficiencies?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Well, I think it would be helpful to catch the most obvious things\n> people forget, but if no one thinks its a good idea, I will yank it.\n\nIf you've got an idea *how* to do it in any sort of reliable fashion,\nI'm all ears. But it sounds more like pie-in-the-sky to me.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 Nov 2000 23:26:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get around LIKE inefficiencies? " }, { "msg_contents": "> \tI'm running this on a Dual-PIII 450 Server, 512Meg of RAM, zero\n> swap space being used ... the database has its indices on one hard drive,\n> the tables themselves are on a second one ... its PgSQL 7.0.2 (Tom,\n> anything in v7.0.3 that might improve this?) and startup is as:\n> \n> #!/bin/tcsh\n> setenv PORT 5432\n> setenv POSTMASTER /pgsql/bin/postmaster\n> unlimit\n> ${POSTMASTER} -B 384 -N 192 \\\n> -o \"-F -S 32768\" \\\n> -i -p ${PORT} -D/pgsql/data >&\n> /pgsql/logs/postmaster.${PORT}.$$ &\n\nBTW, you have a 32MB sort space for each backend, and you allow up to\n192 concurrent backends. So whole postgres requires at least 192*32MB\n= 6144MB memory if all 192 users try to connect to your server at the\nsame time! I would suggest adding enough swap space or descreasing -S\nsetting...\n--\nTatsuo Ishii\n", "msg_date": "Tue, 07 Nov 2000 10:11:41 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get around LIKE inefficiencies?" } ]
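The thread above converges on two schema-level alternatives to the slow prefix `LIKE`: store the server part of each URL in its own equality-indexed column (Ron Chmara's suggestion), and look URLs up by their CRC32 value rather than by text (what udmsearch apparently already does elsewhere). The sketch below is a minimal Python illustration of both ideas — the helper names are hypothetical and this is not udmsearch's actual code; the signed-int4 CRC convention is an assumption about how a C crc32 value would land in a PostgreSQL `int4` column.

```python
import zlib
from urllib.parse import urlsplit

def server_prefix(url):
    """Extract the scheme://host/ part of a URL.

    Stored in its own indexed column, this turns
      url LIKE 'http://www.postgresql.org/%'
    into the exact match
      server_url = 'http://www.postgresql.org/'.
    """
    parts = urlsplit(url)
    return f"{parts.scheme}://{parts.netloc}/"

def url_crc32(url):
    """CRC32 of the URL text, folded into signed 32-bit range
    (assumed convention for storing a C crc32 in an int4 column)."""
    crc = zlib.crc32(url.encode("utf-8"))
    return crc - 0x100000000 if crc >= 0x80000000 else crc

urls = [
    "http://www.postgresql.org/docs/index.html",
    "http://www.postgresql.org/lists.html",
    "http://www.pgsql.com/",
]

# Exact-match lookup keyed by CRC instead of comparing URL text.
by_crc = {url_crc32(u): u for u in urls}

# Prefix filtering via the precomputed server column instead of LIKE.
pg_urls = [u for u in urls
           if server_prefix(u) == "http://www.postgresql.org/"]
```

Either way the planner sees a plain equality on an indexed value, so the nonuniform-distribution problem with the `LIKE` prefix estimate never arises.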
[ { "msg_contents": "Here's what I get doing a commit:\n\nProtocol error: 'Directory' missingE Protocol error: 'Directory' missingE\n...\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Mon, 6 Nov 2000 11:07:24 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "CVS problem" }, { "msg_contents": "\nwhat version of CVS are you running? when was the last time you did\nanything with it? \n\ncvs on hub hasn't been upgraded since Sept 13th, so it isn't an upgrade\nissue ... and just tested from work, and I can checkout no probs ...\n\n\n\nOn Mon, 6 Nov 2000, Michael Meskes wrote:\n\n> Here's what I get doing a commit:\n> \n> Protocol error: 'Directory' missingE Protocol error: 'Directory' missingE\n> ...\n> \n> Michael\n> -- \n> Michael Meskes\n> [email protected]\n> Go SF 49ers! Go Rhein Fire!\n> Use Debian GNU/Linux! Use PostgreSQL!\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 6 Nov 2000 08:59:38 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CVS problem" }, { "msg_contents": "On Mon, Nov 06, 2000 at 08:59:38AM -0400, The Hermit Hacker wrote:\n> what version of CVS are you running? when was the last time you did\n> anything with it? \n\nIt works again. I have no idea what failed.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Tue, 7 Nov 2000 09:52:47 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CVS problem" } ]
[ { "msg_contents": "Due to problems with some parts of the world not being able to access the\nhttp://www.retep.org.uk/ site for the JDBC driver archives (ie: the past\nprecompiled drivers, the current pre-beta driver etc), I'm currently moving\nthat site to a new ISP.\n\nBecause of this, there may be a brief break in email going to any\nretep.org.uk email address, namely [email protected] and to the web site.\n\nHopefully this will be brief, but it may take the DNS a couple of days to\nsee the changes.\n\nIf you need to contact me at the home address, then please use one of the\nfollowing addresses.\n\[email protected]\[email protected]\[email protected]\n\nHopefully everything else will be working properly before long, and the\nretep.org.uk domain will be back online.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support Officer, Maidstone Borough Council\nEmail: [email protected]\nWWW: http://www.maidstone.gov.uk\nAll views expressed within this email are not the views of Maidstone Borough\nCouncil\n\n", "msg_date": "Mon, 6 Nov 2000 12:37:48 -0000 ", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Changes affecting the JDBC archive" } ]
[ { "msg_contents": "I've committed patches to do the following:\n\no Support AT TIME ZONE date/time syntax.\n\no Fix INTERVAL mixed-sign output representation. \"ISO format\" has\nchanged a little, and no longer uses the \"AGO\" qualifier but instead\nuses signed numbers.\n\no Allow functions called with untyped constant strings to fall back to a\nstring type if available. Previously, PostgreSQL refused to chose\nbetween types from different categories (e.g. string vs numeric).\n\no Add tests for OUTER JOIN syntax to the regression test. Needs more\nwork.\n\nSince a couple of functions were added to pg_proc, an initdb is required\n(sorry!).\n\n - Thomas\n", "msg_date": "Mon, 06 Nov 2000 16:41:03 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Committed patches; initdb required" } ]
[ { "msg_contents": "I'm seeing a whole bunch of miscompares in the horology test today.\nIs this stuff supposed to have changed behavior?\n\n\t\t\tregards, tom lane\n\n*** ./expected/horology-no-DST-before-1970.out\tFri Sep 15 15:36:24 2000\n--- ./results/horology.out\tMon Nov 6 13:14:35 2000\n***************\n*** 109,116 ****\n | epoch | @ 1 day 2 hours 3 mins 4 secs | Thu Jan 01 18:03:04 1970 PST\n | epoch | @ 10 days | Sat Jan 10 16:00:00 1970 PST\n | epoch | @ 3 mons | Tue Mar 31 16:00:00 1970 PST\n! | epoch | @ 5 mons | Sun May 31 17:00:00 1970 PDT\n! | epoch | @ 5 mons 12 hours | Mon Jun 01 05:00:00 1970 PDT\n | epoch | @ 6 years | Wed Dec 31 16:00:00 1975 PST\n | Wed Feb 28 17:32:01 1996 PST | @ 14 secs ago | Wed Feb 28 17:31:47 1996 PST\n | Wed Feb 28 17:32:01 1996 PST | @ 1 min | Wed Feb 28 17:33:01 1996 PST\n--- 109,116 ----\n | epoch | @ 1 day 2 hours 3 mins 4 secs | Thu Jan 01 18:03:04 1970 PST\n | epoch | @ 10 days | Sat Jan 10 16:00:00 1970 PST\n | epoch | @ 3 mons | Tue Mar 31 16:00:00 1970 PST\n! | epoch | @ 5 mons | Sun May 31 16:00:00 1970 PDT\n! | epoch | @ 5 mons 12 hours | Mon Jun 01 04:00:00 1970 PDT\n | epoch | @ 6 years | Wed Dec 31 16:00:00 1975 PST\n | Wed Feb 28 17:32:01 1996 PST | @ 14 secs ago | Wed Feb 28 17:31:47 1996 PST\n | Wed Feb 28 17:32:01 1996 PST | @ 1 min | Wed Feb 28 17:33:01 1996 PST\n***************\n*** 127,141 ****\n | Wed Feb 28 17:32:01 1996 PST | @ 10 days | Sat Mar 09 17:32:01 1996 PST\n | Thu Feb 29 17:32:01 1996 PST | @ 10 days | Sun Mar 10 17:32:01 1996 PST\n | Fri Mar 01 17:32:01 1996 PST | @ 10 days | Mon Mar 11 17:32:01 1996 PST\n! | Wed Feb 28 17:32:01 1996 PST | @ 3 mons | Tue May 28 18:32:01 1996 PDT\n! | Thu Feb 29 17:32:01 1996 PST | @ 3 mons | Wed May 29 18:32:01 1996 PDT\n! | Fri Mar 01 17:32:01 1996 PST | @ 3 mons | Sat Jun 01 18:32:01 1996 PDT\n! | Wed Feb 28 17:32:01 1996 PST | @ 5 mons | Sun Jul 28 18:32:01 1996 PDT\n! 
| Wed Feb 28 17:32:01 1996 PST | @ 5 mons 12 hours | Mon Jul 29 06:32:01 1996 PDT\n! | Thu Feb 29 17:32:01 1996 PST | @ 5 mons | Mon Jul 29 18:32:01 1996 PDT\n! | Thu Feb 29 17:32:01 1996 PST | @ 5 mons 12 hours | Tue Jul 30 06:32:01 1996 PDT\n! | Fri Mar 01 17:32:01 1996 PST | @ 5 mons | Thu Aug 01 18:32:01 1996 PDT\n! | Fri Mar 01 17:32:01 1996 PST | @ 5 mons 12 hours | Fri Aug 02 06:32:01 1996 PDT\n | Mon Dec 30 17:32:01 1996 PST | @ 14 secs ago | Mon Dec 30 17:31:47 1996 PST\n | Mon Dec 30 17:32:01 1996 PST | @ 1 min | Mon Dec 30 17:33:01 1996 PST\n | Mon Dec 30 17:32:01 1996 PST | @ 5 hours | Mon Dec 30 22:32:01 1996 PST\n--- 127,141 ----\n | Wed Feb 28 17:32:01 1996 PST | @ 10 days | Sat Mar 09 17:32:01 1996 PST\n | Thu Feb 29 17:32:01 1996 PST | @ 10 days | Sun Mar 10 17:32:01 1996 PST\n | Fri Mar 01 17:32:01 1996 PST | @ 10 days | Mon Mar 11 17:32:01 1996 PST\n! | Wed Feb 28 17:32:01 1996 PST | @ 3 mons | Tue May 28 17:32:01 1996 PDT\n! | Thu Feb 29 17:32:01 1996 PST | @ 3 mons | Wed May 29 17:32:01 1996 PDT\n! | Fri Mar 01 17:32:01 1996 PST | @ 3 mons | Sat Jun 01 17:32:01 1996 PDT\n! | Wed Feb 28 17:32:01 1996 PST | @ 5 mons | Sun Jul 28 17:32:01 1996 PDT\n! | Wed Feb 28 17:32:01 1996 PST | @ 5 mons 12 hours | Mon Jul 29 05:32:01 1996 PDT\n! | Thu Feb 29 17:32:01 1996 PST | @ 5 mons | Mon Jul 29 17:32:01 1996 PDT\n! | Thu Feb 29 17:32:01 1996 PST | @ 5 mons 12 hours | Tue Jul 30 05:32:01 1996 PDT\n! | Fri Mar 01 17:32:01 1996 PST | @ 5 mons | Thu Aug 01 17:32:01 1996 PDT\n! 
| Fri Mar 01 17:32:01 1996 PST | @ 5 mons 12 hours | Fri Aug 02 05:32:01 1996 PDT\n | Mon Dec 30 17:32:01 1996 PST | @ 14 secs ago | Mon Dec 30 17:31:47 1996 PST\n | Mon Dec 30 17:32:01 1996 PST | @ 1 min | Mon Dec 30 17:33:01 1996 PST\n | Mon Dec 30 17:32:01 1996 PST | @ 5 hours | Mon Dec 30 22:32:01 1996 PST\n***************\n*** 148,157 ****\n | Tue Dec 31 17:32:01 1996 PST | @ 10 days | Fri Jan 10 17:32:01 1997 PST\n | Mon Dec 30 17:32:01 1996 PST | @ 3 mons | Sun Mar 30 17:32:01 1997 PST\n | Tue Dec 31 17:32:01 1996 PST | @ 3 mons | Mon Mar 31 17:32:01 1997 PST\n! | Mon Dec 30 17:32:01 1996 PST | @ 5 mons | Fri May 30 18:32:01 1997 PDT\n! | Mon Dec 30 17:32:01 1996 PST | @ 5 mons 12 hours | Sat May 31 06:32:01 1997 PDT\n! | Tue Dec 31 17:32:01 1996 PST | @ 5 mons | Sat May 31 18:32:01 1997 PDT\n! | Tue Dec 31 17:32:01 1996 PST | @ 5 mons 12 hours | Sun Jun 01 06:32:01 1997 PDT\n | Fri Dec 31 17:32:01 1999 PST | @ 14 secs ago | Fri Dec 31 17:31:47 1999 PST\n | Fri Dec 31 17:32:01 1999 PST | @ 1 min | Fri Dec 31 17:33:01 1999 PST\n | Fri Dec 31 17:32:01 1999 PST | @ 5 hours | Fri Dec 31 22:32:01 1999 PST\n--- 148,157 ----\n | Tue Dec 31 17:32:01 1996 PST | @ 10 days | Fri Jan 10 17:32:01 1997 PST\n | Mon Dec 30 17:32:01 1996 PST | @ 3 mons | Sun Mar 30 17:32:01 1997 PST\n | Tue Dec 31 17:32:01 1996 PST | @ 3 mons | Mon Mar 31 17:32:01 1997 PST\n! | Mon Dec 30 17:32:01 1996 PST | @ 5 mons | Fri May 30 17:32:01 1997 PDT\n! | Mon Dec 30 17:32:01 1996 PST | @ 5 mons 12 hours | Sat May 31 05:32:01 1997 PDT\n! | Tue Dec 31 17:32:01 1996 PST | @ 5 mons | Sat May 31 17:32:01 1997 PDT\n! 
| Tue Dec 31 17:32:01 1996 PST | @ 5 mons 12 hours | Sun Jun 01 05:32:01 1997 PDT\n | Fri Dec 31 17:32:01 1999 PST | @ 14 secs ago | Fri Dec 31 17:31:47 1999 PST\n | Fri Dec 31 17:32:01 1999 PST | @ 1 min | Fri Dec 31 17:33:01 1999 PST\n | Fri Dec 31 17:32:01 1999 PST | @ 5 hours | Fri Dec 31 22:32:01 1999 PST\n***************\n*** 189,213 ****\n | Wed Mar 15 08:14:01 2000 PST | @ 10 days | Sat Mar 25 08:14:01 2000 PST\n | Fri Dec 31 17:32:01 1999 PST | @ 3 mons | Fri Mar 31 17:32:01 2000 PST\n | Sat Jan 01 17:32:01 2000 PST | @ 3 mons | Sat Apr 01 17:32:01 2000 PST\n! | Fri Dec 31 17:32:01 1999 PST | @ 5 mons | Wed May 31 18:32:01 2000 PDT\n! | Fri Dec 31 17:32:01 1999 PST | @ 5 mons 12 hours | Thu Jun 01 06:32:01 2000 PDT\n! | Sat Jan 01 17:32:01 2000 PST | @ 5 mons | Thu Jun 01 18:32:01 2000 PDT\n! | Sat Jan 01 17:32:01 2000 PST | @ 5 mons 12 hours | Fri Jun 02 06:32:01 2000 PDT\n! | Wed Mar 15 01:14:05 2000 PST | @ 3 mons | Thu Jun 15 02:14:05 2000 PDT\n! | Wed Mar 15 02:14:03 2000 PST | @ 3 mons | Thu Jun 15 03:14:03 2000 PDT\n! | Wed Mar 15 03:14:04 2000 PST | @ 3 mons | Thu Jun 15 04:14:04 2000 PDT\n! | Wed Mar 15 04:14:02 2000 PST | @ 3 mons | Thu Jun 15 05:14:02 2000 PDT\n! | Wed Mar 15 08:14:01 2000 PST | @ 3 mons | Thu Jun 15 09:14:01 2000 PDT\n! | Wed Mar 15 01:14:05 2000 PST | @ 5 mons | Tue Aug 15 02:14:05 2000 PDT\n! | Wed Mar 15 02:14:03 2000 PST | @ 5 mons | Tue Aug 15 03:14:03 2000 PDT\n! | Wed Mar 15 03:14:04 2000 PST | @ 5 mons | Tue Aug 15 04:14:04 2000 PDT\n! | Wed Mar 15 04:14:02 2000 PST | @ 5 mons | Tue Aug 15 05:14:02 2000 PDT\n! | Wed Mar 15 08:14:01 2000 PST | @ 5 mons | Tue Aug 15 09:14:01 2000 PDT\n! | Wed Mar 15 01:14:05 2000 PST | @ 5 mons 12 hours | Tue Aug 15 14:14:05 2000 PDT\n! | Wed Mar 15 02:14:03 2000 PST | @ 5 mons 12 hours | Tue Aug 15 15:14:03 2000 PDT\n! | Wed Mar 15 03:14:04 2000 PST | @ 5 mons 12 hours | Tue Aug 15 16:14:04 2000 PDT\n! | Wed Mar 15 04:14:02 2000 PST | @ 5 mons 12 hours | Tue Aug 15 17:14:02 2000 PDT\n! 
| Wed Mar 15 08:14:01 2000 PST | @ 5 mons 12 hours | Tue Aug 15 21:14:01 2000 PDT\n | Sun Dec 31 17:32:01 2000 PST | @ 14 secs ago | Sun Dec 31 17:31:47 2000 PST\n | Sun Dec 31 17:32:01 2000 PST | @ 1 min | Sun Dec 31 17:33:01 2000 PST\n | Sun Dec 31 17:32:01 2000 PST | @ 5 hours | Sun Dec 31 22:32:01 2000 PST\n--- 189,213 ----\n | Wed Mar 15 08:14:01 2000 PST | @ 10 days | Sat Mar 25 08:14:01 2000 PST\n | Fri Dec 31 17:32:01 1999 PST | @ 3 mons | Fri Mar 31 17:32:01 2000 PST\n | Sat Jan 01 17:32:01 2000 PST | @ 3 mons | Sat Apr 01 17:32:01 2000 PST\n! | Fri Dec 31 17:32:01 1999 PST | @ 5 mons | Wed May 31 17:32:01 2000 PDT\n! | Fri Dec 31 17:32:01 1999 PST | @ 5 mons 12 hours | Thu Jun 01 05:32:01 2000 PDT\n! | Sat Jan 01 17:32:01 2000 PST | @ 5 mons | Thu Jun 01 17:32:01 2000 PDT\n! | Sat Jan 01 17:32:01 2000 PST | @ 5 mons 12 hours | Fri Jun 02 05:32:01 2000 PDT\n! | Wed Mar 15 01:14:05 2000 PST | @ 3 mons | Thu Jun 15 01:14:05 2000 PDT\n! | Wed Mar 15 02:14:03 2000 PST | @ 3 mons | Thu Jun 15 02:14:03 2000 PDT\n! | Wed Mar 15 03:14:04 2000 PST | @ 3 mons | Thu Jun 15 03:14:04 2000 PDT\n! | Wed Mar 15 04:14:02 2000 PST | @ 3 mons | Thu Jun 15 04:14:02 2000 PDT\n! | Wed Mar 15 08:14:01 2000 PST | @ 3 mons | Thu Jun 15 08:14:01 2000 PDT\n! | Wed Mar 15 01:14:05 2000 PST | @ 5 mons | Tue Aug 15 01:14:05 2000 PDT\n! | Wed Mar 15 02:14:03 2000 PST | @ 5 mons | Tue Aug 15 02:14:03 2000 PDT\n! | Wed Mar 15 03:14:04 2000 PST | @ 5 mons | Tue Aug 15 03:14:04 2000 PDT\n! | Wed Mar 15 04:14:02 2000 PST | @ 5 mons | Tue Aug 15 04:14:02 2000 PDT\n! | Wed Mar 15 08:14:01 2000 PST | @ 5 mons | Tue Aug 15 08:14:01 2000 PDT\n! | Wed Mar 15 01:14:05 2000 PST | @ 5 mons 12 hours | Tue Aug 15 13:14:05 2000 PDT\n! | Wed Mar 15 02:14:03 2000 PST | @ 5 mons 12 hours | Tue Aug 15 14:14:03 2000 PDT\n! | Wed Mar 15 03:14:04 2000 PST | @ 5 mons 12 hours | Tue Aug 15 15:14:04 2000 PDT\n! | Wed Mar 15 04:14:02 2000 PST | @ 5 mons 12 hours | Tue Aug 15 16:14:02 2000 PDT\n! 
| Wed Mar 15 08:14:01 2000 PST | @ 5 mons 12 hours | Tue Aug 15 20:14:01 2000 PDT\n | Sun Dec 31 17:32:01 2000 PST | @ 14 secs ago | Sun Dec 31 17:31:47 2000 PST\n | Sun Dec 31 17:32:01 2000 PST | @ 1 min | Sun Dec 31 17:33:01 2000 PST\n | Sun Dec 31 17:32:01 2000 PST | @ 5 hours | Sun Dec 31 22:32:01 2000 PST\n***************\n*** 219,229 ****\n | Sun Dec 31 17:32:01 2000 PST | @ 10 days | Wed Jan 10 17:32:01 2001 PST\n | Mon Jan 01 17:32:01 2001 PST | @ 10 days | Thu Jan 11 17:32:01 2001 PST\n | Sun Dec 31 17:32:01 2000 PST | @ 3 mons | Sat Mar 31 17:32:01 2001 PST\n! | Mon Jan 01 17:32:01 2001 PST | @ 3 mons | Sun Apr 01 18:32:01 2001 PDT\n! | Sun Dec 31 17:32:01 2000 PST | @ 5 mons | Thu May 31 18:32:01 2001 PDT\n! | Sun Dec 31 17:32:01 2000 PST | @ 5 mons 12 hours | Fri Jun 01 06:32:01 2001 PDT\n! | Mon Jan 01 17:32:01 2001 PST | @ 5 mons | Fri Jun 01 18:32:01 2001 PDT\n! | Mon Jan 01 17:32:01 2001 PST | @ 5 mons 12 hours | Sat Jun 02 06:32:01 2001 PDT\n | Wed Feb 28 17:32:01 1996 PST | @ 6 years | Thu Feb 28 17:32:01 2002 PST\n | Thu Feb 29 17:32:01 1996 PST | @ 6 years | Thu Feb 28 17:32:01 2002 PST\n | Fri Mar 01 17:32:01 1996 PST | @ 6 years | Fri Mar 01 17:32:01 2002 PST\n--- 219,229 ----\n | Sun Dec 31 17:32:01 2000 PST | @ 10 days | Wed Jan 10 17:32:01 2001 PST\n | Mon Jan 01 17:32:01 2001 PST | @ 10 days | Thu Jan 11 17:32:01 2001 PST\n | Sun Dec 31 17:32:01 2000 PST | @ 3 mons | Sat Mar 31 17:32:01 2001 PST\n! | Mon Jan 01 17:32:01 2001 PST | @ 3 mons | Sun Apr 01 17:32:01 2001 PDT\n! | Sun Dec 31 17:32:01 2000 PST | @ 5 mons | Thu May 31 17:32:01 2001 PDT\n! | Sun Dec 31 17:32:01 2000 PST | @ 5 mons 12 hours | Fri Jun 01 05:32:01 2001 PDT\n! | Mon Jan 01 17:32:01 2001 PST | @ 5 mons | Fri Jun 01 17:32:01 2001 PDT\n! 
| Mon Jan 01 17:32:01 2001 PST | @ 5 mons 12 hours | Sat Jun 02 05:32:01 2001 PDT\n | Wed Feb 28 17:32:01 1996 PST | @ 6 years | Thu Feb 28 17:32:01 2002 PST\n | Thu Feb 29 17:32:01 1996 PST | @ 6 years | Thu Feb 28 17:32:01 2002 PST\n | Fri Mar 01 17:32:01 1996 PST | @ 6 years | Fri Mar 01 17:32:01 2002 PST\n***************\n*** 299,310 ****\n | Wed Mar 15 08:14:01 2000 PST | @ 6 years | Tue Mar 15 08:14:01 1994 PST\n | Sun Dec 31 17:32:01 2000 PST | @ 6 years | Sat Dec 31 17:32:01 1994 PST\n | Mon Jan 01 17:32:01 2001 PST | @ 6 years | Sun Jan 01 17:32:01 1995 PST\n! | Wed Feb 28 17:32:01 1996 PST | @ 5 mons 12 hours | Thu Sep 28 06:32:01 1995 PDT\n! | Wed Feb 28 17:32:01 1996 PST | @ 5 mons | Thu Sep 28 18:32:01 1995 PDT\n! | Thu Feb 29 17:32:01 1996 PST | @ 5 mons 12 hours | Fri Sep 29 06:32:01 1995 PDT\n! | Thu Feb 29 17:32:01 1996 PST | @ 5 mons | Fri Sep 29 18:32:01 1995 PDT\n! | Fri Mar 01 17:32:01 1996 PST | @ 5 mons 12 hours | Sun Oct 01 06:32:01 1995 PDT\n! | Fri Mar 01 17:32:01 1996 PST | @ 5 mons | Sun Oct 01 18:32:01 1995 PDT\n | Wed Feb 28 17:32:01 1996 PST | @ 3 mons | Tue Nov 28 17:32:01 1995 PST\n | Thu Feb 29 17:32:01 1996 PST | @ 3 mons | Wed Nov 29 17:32:01 1995 PST\n | Fri Mar 01 17:32:01 1996 PST | @ 3 mons | Fri Dec 01 17:32:01 1995 PST\n--- 299,310 ----\n | Wed Mar 15 08:14:01 2000 PST | @ 6 years | Tue Mar 15 08:14:01 1994 PST\n | Sun Dec 31 17:32:01 2000 PST | @ 6 years | Sat Dec 31 17:32:01 1994 PST\n | Mon Jan 01 17:32:01 2001 PST | @ 6 years | Sun Jan 01 17:32:01 1995 PST\n! | Wed Feb 28 17:32:01 1996 PST | @ 5 mons 12 hours | Thu Sep 28 05:32:01 1995 PDT\n! | Wed Feb 28 17:32:01 1996 PST | @ 5 mons | Thu Sep 28 17:32:01 1995 PDT\n! | Thu Feb 29 17:32:01 1996 PST | @ 5 mons 12 hours | Fri Sep 29 05:32:01 1995 PDT\n! | Thu Feb 29 17:32:01 1996 PST | @ 5 mons | Fri Sep 29 17:32:01 1995 PDT\n! | Fri Mar 01 17:32:01 1996 PST | @ 5 mons 12 hours | Sun Oct 01 05:32:01 1995 PDT\n! 
| Fri Mar 01 17:32:01 1996 PST | @ 5 mons | Sun Oct 01 17:32:01 1995 PDT\n | Wed Feb 28 17:32:01 1996 PST | @ 3 mons | Tue Nov 28 17:32:01 1995 PST\n | Thu Feb 29 17:32:01 1996 PST | @ 3 mons | Wed Nov 29 17:32:01 1995 PST\n | Fri Mar 01 17:32:01 1996 PST | @ 3 mons | Fri Dec 01 17:32:01 1995 PST\n***************\n*** 323,334 ****\n | Fri Mar 01 17:32:01 1996 PST | @ 5 hours | Fri Mar 01 12:32:01 1996 PST\n | Fri Mar 01 17:32:01 1996 PST | @ 1 min | Fri Mar 01 17:31:01 1996 PST\n | Fri Mar 01 17:32:01 1996 PST | @ 14 secs ago | Fri Mar 01 17:32:15 1996 PST\n! | Mon Dec 30 17:32:01 1996 PST | @ 5 mons 12 hours | Tue Jul 30 06:32:01 1996 PDT\n! | Mon Dec 30 17:32:01 1996 PST | @ 5 mons | Tue Jul 30 18:32:01 1996 PDT\n! | Tue Dec 31 17:32:01 1996 PST | @ 5 mons 12 hours | Wed Jul 31 06:32:01 1996 PDT\n! | Tue Dec 31 17:32:01 1996 PST | @ 5 mons | Wed Jul 31 18:32:01 1996 PDT\n! | Mon Dec 30 17:32:01 1996 PST | @ 3 mons | Mon Sep 30 18:32:01 1996 PDT\n! | Tue Dec 31 17:32:01 1996 PST | @ 3 mons | Mon Sep 30 18:32:01 1996 PDT\n | Mon Dec 30 17:32:01 1996 PST | @ 10 days | Fri Dec 20 17:32:01 1996 PST\n | Tue Dec 31 17:32:01 1996 PST | @ 10 days | Sat Dec 21 17:32:01 1996 PST\n | Mon Dec 30 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Sun Dec 29 15:28:57 1996 PST\n--- 323,334 ----\n | Fri Mar 01 17:32:01 1996 PST | @ 5 hours | Fri Mar 01 12:32:01 1996 PST\n | Fri Mar 01 17:32:01 1996 PST | @ 1 min | Fri Mar 01 17:31:01 1996 PST\n | Fri Mar 01 17:32:01 1996 PST | @ 14 secs ago | Fri Mar 01 17:32:15 1996 PST\n! | Mon Dec 30 17:32:01 1996 PST | @ 5 mons 12 hours | Tue Jul 30 05:32:01 1996 PDT\n! | Mon Dec 30 17:32:01 1996 PST | @ 5 mons | Tue Jul 30 17:32:01 1996 PDT\n! | Tue Dec 31 17:32:01 1996 PST | @ 5 mons 12 hours | Wed Jul 31 05:32:01 1996 PDT\n! | Tue Dec 31 17:32:01 1996 PST | @ 5 mons | Wed Jul 31 17:32:01 1996 PDT\n! | Mon Dec 30 17:32:01 1996 PST | @ 3 mons | Mon Sep 30 17:32:01 1996 PDT\n! 
| Tue Dec 31 17:32:01 1996 PST | @ 3 mons | Mon Sep 30 17:32:01 1996 PDT\n | Mon Dec 30 17:32:01 1996 PST | @ 10 days | Fri Dec 20 17:32:01 1996 PST\n | Tue Dec 31 17:32:01 1996 PST | @ 10 days | Sat Dec 21 17:32:01 1996 PST\n | Mon Dec 30 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Sun Dec 29 15:28:57 1996 PST\n***************\n*** 339,360 ****\n | Tue Dec 31 17:32:01 1996 PST | @ 5 hours | Tue Dec 31 12:32:01 1996 PST\n | Tue Dec 31 17:32:01 1996 PST | @ 1 min | Tue Dec 31 17:31:01 1996 PST\n | Tue Dec 31 17:32:01 1996 PST | @ 14 secs ago | Tue Dec 31 17:32:15 1996 PST\n! | Fri Dec 31 17:32:01 1999 PST | @ 5 mons 12 hours | Sat Jul 31 06:32:01 1999 PDT\n! | Fri Dec 31 17:32:01 1999 PST | @ 5 mons | Sat Jul 31 18:32:01 1999 PDT\n! | Sat Jan 01 17:32:01 2000 PST | @ 5 mons 12 hours | Sun Aug 01 06:32:01 1999 PDT\n! | Sat Jan 01 17:32:01 2000 PST | @ 5 mons | Sun Aug 01 18:32:01 1999 PDT\n! | Fri Dec 31 17:32:01 1999 PST | @ 3 mons | Thu Sep 30 18:32:01 1999 PDT\n! | Sat Jan 01 17:32:01 2000 PST | @ 3 mons | Fri Oct 01 18:32:01 1999 PDT\n! | Wed Mar 15 01:14:05 2000 PST | @ 5 mons 12 hours | Thu Oct 14 14:14:05 1999 PDT\n! | Wed Mar 15 02:14:03 2000 PST | @ 5 mons 12 hours | Thu Oct 14 15:14:03 1999 PDT\n! | Wed Mar 15 03:14:04 2000 PST | @ 5 mons 12 hours | Thu Oct 14 16:14:04 1999 PDT\n! | Wed Mar 15 04:14:02 2000 PST | @ 5 mons 12 hours | Thu Oct 14 17:14:02 1999 PDT\n! | Wed Mar 15 08:14:01 2000 PST | @ 5 mons 12 hours | Thu Oct 14 21:14:01 1999 PDT\n! | Wed Mar 15 01:14:05 2000 PST | @ 5 mons | Fri Oct 15 02:14:05 1999 PDT\n! | Wed Mar 15 02:14:03 2000 PST | @ 5 mons | Fri Oct 15 03:14:03 1999 PDT\n! | Wed Mar 15 03:14:04 2000 PST | @ 5 mons | Fri Oct 15 04:14:04 1999 PDT\n! | Wed Mar 15 04:14:02 2000 PST | @ 5 mons | Fri Oct 15 05:14:02 1999 PDT\n! 
| Wed Mar 15 08:14:01 2000 PST | @ 5 mons | Fri Oct 15 09:14:01 1999 PDT\n | Wed Mar 15 01:14:05 2000 PST | @ 3 mons | Wed Dec 15 01:14:05 1999 PST\n | Wed Mar 15 02:14:03 2000 PST | @ 3 mons | Wed Dec 15 02:14:03 1999 PST\n | Wed Mar 15 03:14:04 2000 PST | @ 3 mons | Wed Dec 15 03:14:04 1999 PST\n--- 339,360 ----\n | Tue Dec 31 17:32:01 1996 PST | @ 5 hours | Tue Dec 31 12:32:01 1996 PST\n | Tue Dec 31 17:32:01 1996 PST | @ 1 min | Tue Dec 31 17:31:01 1996 PST\n | Tue Dec 31 17:32:01 1996 PST | @ 14 secs ago | Tue Dec 31 17:32:15 1996 PST\n! | Fri Dec 31 17:32:01 1999 PST | @ 5 mons 12 hours | Sat Jul 31 05:32:01 1999 PDT\n! | Fri Dec 31 17:32:01 1999 PST | @ 5 mons | Sat Jul 31 17:32:01 1999 PDT\n! | Sat Jan 01 17:32:01 2000 PST | @ 5 mons 12 hours | Sun Aug 01 05:32:01 1999 PDT\n! | Sat Jan 01 17:32:01 2000 PST | @ 5 mons | Sun Aug 01 17:32:01 1999 PDT\n! | Fri Dec 31 17:32:01 1999 PST | @ 3 mons | Thu Sep 30 17:32:01 1999 PDT\n! | Sat Jan 01 17:32:01 2000 PST | @ 3 mons | Fri Oct 01 17:32:01 1999 PDT\n! | Wed Mar 15 01:14:05 2000 PST | @ 5 mons 12 hours | Thu Oct 14 13:14:05 1999 PDT\n! | Wed Mar 15 02:14:03 2000 PST | @ 5 mons 12 hours | Thu Oct 14 14:14:03 1999 PDT\n! | Wed Mar 15 03:14:04 2000 PST | @ 5 mons 12 hours | Thu Oct 14 15:14:04 1999 PDT\n! | Wed Mar 15 04:14:02 2000 PST | @ 5 mons 12 hours | Thu Oct 14 16:14:02 1999 PDT\n! | Wed Mar 15 08:14:01 2000 PST | @ 5 mons 12 hours | Thu Oct 14 20:14:01 1999 PDT\n! | Wed Mar 15 01:14:05 2000 PST | @ 5 mons | Fri Oct 15 01:14:05 1999 PDT\n! | Wed Mar 15 02:14:03 2000 PST | @ 5 mons | Fri Oct 15 02:14:03 1999 PDT\n! | Wed Mar 15 03:14:04 2000 PST | @ 5 mons | Fri Oct 15 03:14:04 1999 PDT\n! | Wed Mar 15 04:14:02 2000 PST | @ 5 mons | Fri Oct 15 04:14:02 1999 PDT\n! 
| Wed Mar 15 08:14:01 2000 PST | @ 5 mons | Fri Oct 15 08:14:01 1999 PDT\n | Wed Mar 15 01:14:05 2000 PST | @ 3 mons | Wed Dec 15 01:14:05 1999 PST\n | Wed Mar 15 02:14:03 2000 PST | @ 3 mons | Wed Dec 15 02:14:03 1999 PST\n | Wed Mar 15 03:14:04 2000 PST | @ 3 mons | Wed Dec 15 03:14:04 1999 PST\n***************\n*** 395,406 ****\n | Wed Mar 15 04:14:02 2000 PST | @ 14 secs ago | Wed Mar 15 04:14:16 2000 PST\n | Wed Mar 15 08:14:01 2000 PST | @ 1 min | Wed Mar 15 08:13:01 2000 PST\n | Wed Mar 15 08:14:01 2000 PST | @ 14 secs ago | Wed Mar 15 08:14:15 2000 PST\n! | Sun Dec 31 17:32:01 2000 PST | @ 5 mons 12 hours | Mon Jul 31 06:32:01 2000 PDT\n! | Sun Dec 31 17:32:01 2000 PST | @ 5 mons | Mon Jul 31 18:32:01 2000 PDT\n! | Mon Jan 01 17:32:01 2001 PST | @ 5 mons 12 hours | Tue Aug 01 06:32:01 2000 PDT\n! | Mon Jan 01 17:32:01 2001 PST | @ 5 mons | Tue Aug 01 18:32:01 2000 PDT\n! | Sun Dec 31 17:32:01 2000 PST | @ 3 mons | Sat Sep 30 18:32:01 2000 PDT\n! | Mon Jan 01 17:32:01 2001 PST | @ 3 mons | Sun Oct 01 18:32:01 2000 PDT\n | Sun Dec 31 17:32:01 2000 PST | @ 10 days | Thu Dec 21 17:32:01 2000 PST\n | Mon Jan 01 17:32:01 2001 PST | @ 10 days | Fri Dec 22 17:32:01 2000 PST\n | Sun Dec 31 17:32:01 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Sat Dec 30 15:28:57 2000 PST\n--- 395,406 ----\n | Wed Mar 15 04:14:02 2000 PST | @ 14 secs ago | Wed Mar 15 04:14:16 2000 PST\n | Wed Mar 15 08:14:01 2000 PST | @ 1 min | Wed Mar 15 08:13:01 2000 PST\n | Wed Mar 15 08:14:01 2000 PST | @ 14 secs ago | Wed Mar 15 08:14:15 2000 PST\n! | Sun Dec 31 17:32:01 2000 PST | @ 5 mons 12 hours | Mon Jul 31 05:32:01 2000 PDT\n! | Sun Dec 31 17:32:01 2000 PST | @ 5 mons | Mon Jul 31 17:32:01 2000 PDT\n! | Mon Jan 01 17:32:01 2001 PST | @ 5 mons 12 hours | Tue Aug 01 05:32:01 2000 PDT\n! | Mon Jan 01 17:32:01 2001 PST | @ 5 mons | Tue Aug 01 17:32:01 2000 PDT\n! | Sun Dec 31 17:32:01 2000 PST | @ 3 mons | Sat Sep 30 17:32:01 2000 PDT\n! 
| Mon Jan 01 17:32:01 2001 PST | @ 3 mons | Sun Oct 01 17:32:01 2000 PDT\n | Sun Dec 31 17:32:01 2000 PST | @ 10 days | Thu Dec 21 17:32:01 2000 PST\n | Mon Jan 01 17:32:01 2001 PST | @ 10 days | Fri Dec 22 17:32:01 2000 PST\n | Sun Dec 31 17:32:01 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Sat Dec 30 15:28:57 2000 PST\n\n======================================================================\n\n", "msg_date": "Mon, 06 Nov 2000 13:17:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Horology regress test changed?" }, { "msg_contents": "> I'm seeing a whole bunch of miscompares in the horology test today.\n> Is this stuff supposed to have changed behavior?\n\nYup. RTFCVSL :)\n\nIt is a result of fixing the date/time math across daylight savings time\nboundaries. You will find that there is no longer an artifical one hour\noffset in results when you add days, months, or years to a date or\ntimestamp.\n\nI looked fairly carefully in the regression results before committing\nthe changes, but another pair of eyes always helps so let me know if you\nnotice anything funny...\n\n - Thomas\n", "msg_date": "Tue, 07 Nov 2000 06:37:28 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horology regress test changed?" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> I'm seeing a whole bunch of miscompares in the horology test today.\n>> Is this stuff supposed to have changed behavior?\n\n> Yup. RTFCVSL :)\n\nI did, but the log didn't say anything about unfixed regression test\ncases. If you're going to leave some platform-specific comparison\nfiles un-updated, I think it'd be polite to warn people about that\nexplicitly... 
probably on pghackers, not just committers...\n\nI have updated horology-no-DST-before-1970.out, but that still leaves\nus needing updates for horology-1947-PDT.out and\nhorology-solaris-1947.out.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Nov 2000 01:53:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Horology regress test changed? " }, { "msg_contents": "> I did, but the log didn't say anything about unfixed regression test\n> cases. If you're going to leave some platform-specific comparison\n> files un-updated, I think it'd be polite to warn people about that\n> explicitly... probably on pghackers, not just committers...\n\n*sigh*\n\nI'll probably always leave some platform-specific comparison files\nunupdated, and I would expect others to have to do that also. I\napparently did not meet your expectations on this, but it is quite in\nline with our accepted practices.\n\nSorry for the heart stoppage when you saw your regression tests suddenly\nfailing...\n\n> I have updated horology-no-DST-before-1970.out, but that still leaves\n> us needing updates for horology-1947-PDT.out and\n> horology-solaris-1947.out.\n\nI guess that we have already heard from someone on one of those, and the\nothers will come with time.\n\nRegards.\n\n - Thomas\n", "msg_date": "Wed, 08 Nov 2000 02:57:48 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horology regress test changed?" } ]
[ { "msg_contents": "> > New CHECKPOINT command.\n> > Auto removing of offline log files and creating new file\n> > at checkpoint time.\n> \n> Is this the same as a SAVEPOINT?\n\nNo. Checkpoints are to speedup after crash recovery and\nto remove/archive log files. With WAL server doesn't write\nany datafiles on commit, only commit record goes to log\n(and log fsync-ed). Dirty buffers remains in memory long\ntime - only when some transaction is going to use unpinned\ndirty buffer its content will be written (but not fsync-ed)\nto system buffer cache. So, at any time some changes made by\ntransactions would be saved on disk, others would be in system\ncache and some of them in server buffer pool. In the event of\ncrash recoverer should know what changes are on disk, ie -\nfrom what position in log it should try to start redo\noperation ((re-)applying changes). Obviously, it's not\ngood to start from first log record -:) For this purposes\ncheckpoints are used. At checkpoint time (each ~ 3-5 minutes)\n*all* dirty buffers is forced to disk and checkpoint record\nis written to log, so recoverer will know that up to the\nlast record in log made before checkpoint started all changes\nare on disk and redo is not required for previous records.\n\nVadim\n", "msg_date": "Mon, 6 Nov 2000 12:56:10 -0800 ", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [COMMITTERS] pgsql/src/backend/access/transam (xact.c xlog.c)" }, { "msg_contents": "> > > New CHECKPOINT command.\n> > > Auto removing of offline log files and creating new file\n> > > at checkpoint time.\n\nCan you tell me how to use CHECKPOINT please?\n\n> > Is this the same as a SAVEPOINT?\n> \n> No. Checkpoints are to speedup after crash recovery and\n> to remove/archive log files. With WAL server doesn't write\n> any datafiles on commit, only commit record goes to log\n> (and log fsync-ed). 
Dirty buffers remains in memory long\n\nIs log fsynced even if I turn off -F?\n--\nTatsuo Ishii\n", "msg_date": "Tue, 07 Nov 2000 10:11:33 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: [COMMITTERS] pgsql/src/backend/access/transam (xact.c xlog.c)" } ]
[ { "msg_contents": "The long option emulation getopt(argc, argv, \"...xy:-:\") doesn't work on\nFreeBSD (and who knows where else), it throws away options of the form\n'--foo'. That means you cannot specify postmaster options like\n--log-pids, etc. on the command line. Also, the --version option doesn't\nwork, which may cause initdb to fail for you.\n\nSo apparently we'll have to use an actual option letter for passing\nruntime configuration parameters. Any suggestions? Already in use are\n\na A b B C d D e E f F i l L m M n N o O p P Q s S t v W x\n\n'V' will be used as short form for \"version\". My early favourite is '-c'. \n(Using non-letters is probably not portable either.)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 6 Nov 2000 22:18:23 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Darn, long option emulation doesn't work on FreeBSD" }, { "msg_contents": "On Mon, 6 Nov 2000, Peter Eisentraut wrote:\n\n> So apparently we'll have to use an actual option letter for passing\n> runtime configuration parameters. Any suggestions? Already in use are\n> \n> a A b B C d D e E f F i l L m M n N o O p P Q s S t v W x\n\nQ is not used (or broken):\n\nmorannon:~>postmaster -Q\npostmaster: invalid option -- Q\nTry -? for help.\nmorannon:~>postgres -Q\npostgres: invalid option -- Q\nUsage: postgres [options] [dbname]\n\n... snip ...\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n", "msg_date": "Mon, 6 Nov 2000 19:21:04 -0600 (CST)", "msg_from": "\"Dominic J. Eidson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Darn, long option emulation doesn't work on FreeBSD" } ]
[ { "msg_contents": "> > OK, 2^64 isn't mathematically unbounded, but let's see you \n> > buy a disk that will hold it ;-). My point is that if we want\n> > to think about allowing >4G transactions, part of the answer\n> > has to be a way to recycle pg_log space. Otherwise it's still\n> > not really practical.\n> \n> I kind of like vadim's idea of segmenting pg_log. \n> \n> Segments in which all the xacts have been committed could be deleted.\n\nWithout undo we have to ensure that all tables are vacuumed after\nall transactions related to a segment were committed/aborted.\n\nVadim\n", "msg_date": "Mon, 6 Nov 2000 14:12:07 -0800 ", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Transaction ID wraparound: problem and proposed sol\n\tution" } ]
[ { "msg_contents": "\n> AIX 4.3.2, xlc 3.6.6.\n> \n> Same regression test failures as 7.0.2.\n> The nasty failures are triggers, misc, and plgpgsql which\n> consistently give \"pqReadData() -- backend closed the channel\n> unexpectedly.\" at the same point. Also the sanity_check hangs\n> during a VACUUM. Killing the backend was the only way to continue.\n\nThis should not be so. Your setup should definitely work without \nregression failures. There is something wrong with your dynamic loading\nof shared libs. Can you give more details, e.g. did you add optimization\nwhich does not work yet ?\n\nAndreas\n", "msg_date": "Tue, 7 Nov 2000 10:56:47 +0100 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: v7.0.3 *pre-release* ..." }, { "msg_contents": "Zeugswetter Andreas SB writes:\n > \n > > AIX 4.3.2, xlc 3.6.6.\n > > \n > > Same regression test failures as 7.0.2.\n > > The nasty failures are triggers, misc, and plgpgsql which\n > > consistently give \"pqReadData() -- backend closed the channel\n > > unexpectedly.\" at the same point. Also the sanity_check hangs\n > > during a VACUUM. Killing the backend was the only way to continue.\n > \n > This should not be so. Your setup should definitely work without\n > regression failures. There is something wrong with your dynamic\n > loading of shared libs. Can you give more details, e.g. did you add\n > optimization which does not work yet ?\n\nNo extra flags were added by me. The only build warnings were\nduplicate symbol errors. (There were a couple of warnings about\n0.0/0.0 used to represent a NaN.) 
Here are a couple of extracts from\nmy build log.\n\n xlc -I../include -I../backend -qmaxmem=16384 -qhalt=w -qsrcmsg\n -qlanglvl=extended -qlonglong -o postgres access/SUBSYS.o\n bootstrap/SUBSYS.o catalog/SUBSYS.o commands/SUBSYS.o\n executor/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o\n parser/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o port/SUBSYS.o\n postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o\n storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o ../utils/version.o\n -lPW -lcrypt -lld -lnsl -ldl -lm -lcurses\n Making postgres.imp\n ../backend/port/aix/mkldexport.sh postgres /usr/local/pgsql/bin >\n postgres.imp\n xlc -Wl,-bE:../backend/postgres.imp -o postgres access/SUBSYS.o\n bootstrap/SUBSYS.o catalog/SUBSYS.o commands/SUBSYS.o\n executor/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o\n parser/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o port/SUBSYS.o\n postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o\n storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o ../utils/version.o\n -lPW -lcrypt -lld -lnsl -ldl -lm -lcurses\n\n\n xlc -I../../include -I../../backend -qmaxmem=16384 -qhalt=w\n -qsrcmsg -qlanglvl=extended -qlonglong -DFRONTEND -c pqsignal.c -o\n pqsignal.o\n ar crs libpq.a fe-auth.o fe-connect.o fe-exec.o fe-misc.o\n fe-print.o fe-lobj.o pqexpbuffer.o dllist.o pqsignal.o\n touch libpq.a\n ../../backend/port/aix/mkldexport.sh libpq.a /usr/local/pgsql/lib\n > libpq.exp\n ld -H512 -bM:SRE -bI:../../backend/postgres.imp -bE:libpq.exp -o\n libpq.so libpq.a -lPW -lcrypt -lld -lnsl -ldl -lm -lcurses -lcrypt\n -lc\n ld: 0711-224 WARNING: Duplicate symbol: .crypt\n ld: 0711-224 WARNING: Duplicate symbol: crypt\n ld: 0711-224 WARNING: Duplicate symbol: .strlen\n ld: 0711-224 WARNING: Duplicate symbol: strlen\n ld: 0711-224 WARNING: Duplicate symbol: .PQuntrace\n ld: 0711-224 WARNING: Duplicate symbol: .PQtrace\n ld: 0711-224 WARNING: Duplicate symbol: .setsockopt\n ld: 0711-224 WARNING: Duplicate symbol: setsockopt\n [and others]\n\n-- \nPete Forman 
-./\\.- Disclaimer: This post is originated\nWestern Geophysical -./\\.- by myself and does not represent\[email protected] -./\\.- the opinion of Baker Hughes or\nhttp://www.crosswinds.net/~petef -./\\.- its divisions.\n", "msg_date": "Tue, 7 Nov 2000 10:54:32 +0000 (GMT)", "msg_from": "Pete Forman <[email protected]>", "msg_from_op": false, "msg_subject": "AW: v7.0.3 *pre-release* ..." } ]
[ { "msg_contents": "\n> I have updated horology-no-DST-before-1970.out, but that still leaves\n> us needing updates for horology-1947-PDT.out and\n\nThat one is mine (AIX), I'll do it.\n\nAndreas\n", "msg_date": "Tue, 7 Nov 2000 10:57:30 +0100 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Re: Horology regress test changed? " } ]
[ { "msg_contents": "This somehow gets moot. Is there a way to make gcc reject those comments ?\n\nAndreas\n\n*** ./src/backend/utils/adt/varbit.c.orig\tWed Nov 1 10:00:22 2000\n--- ./src/backend/utils/adt/varbit.c\tTue Nov 7 11:07:28 2000\n***************\n*** 1212,1218 ****\n \t\t\t\tis_match = ((cmp ^ *p) & mask1) == 0;\n \t\t\t\tif (!is_match)\n \t\t\t\t\tbreak;\n! \t\t\t\t// Move on to the next byte\n \t\t\t\tp++;\n \t\t\t\tif (p == VARBITEND(arg)) {\n \t\t\t\t\tmask2 = end_mask << (BITS_PER_BYTE - is);\n--- 1212,1218 ----\n \t\t\t\tis_match = ((cmp ^ *p) & mask1) == 0;\n \t\t\t\tif (!is_match)\n \t\t\t\t\tbreak;\n! \t\t\t\t/* Move on to the next byte */\n \t\t\t\tp++;\n \t\t\t\tif (p == VARBITEND(arg)) {\n \t\t\t\t\tmask2 = end_mask << (BITS_PER_BYTE - is);\n\n\n", "msg_date": "Tue, 7 Nov 2000 11:17:13 +0100 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "Again please no // comments !!!!!!!!" }, { "msg_contents": "Applied.\n\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> This somehow gets moot. Is there a way to make gcc reject those comments ?\n> \n> Andreas\n> \n> *** ./src/backend/utils/adt/varbit.c.orig\tWed Nov 1 10:00:22 2000\n> --- ./src/backend/utils/adt/varbit.c\tTue Nov 7 11:07:28 2000\n> ***************\n> *** 1212,1218 ****\n> \t\t\t\tis_match = ((cmp ^ *p) & mask1) == 0;\n> \t\t\t\tif (!is_match)\n> \t\t\t\t\tbreak;\n> ! \t\t\t\t// Move on to the next byte\n> \t\t\t\tp++;\n> \t\t\t\tif (p == VARBITEND(arg)) {\n> \t\t\t\t\tmask2 = end_mask << (BITS_PER_BYTE - is);\n> --- 1212,1218 ----\n> \t\t\t\tis_match = ((cmp ^ *p) & mask1) == 0;\n> \t\t\t\tif (!is_match)\n> \t\t\t\t\tbreak;\n> ! 
\t\t\t\t/* Move on to the next byte */\n> \t\t\t\tp++;\n> \t\t\t\tif (p == VARBITEND(arg)) {\n> \t\t\t\t\tmask2 = end_mask << (BITS_PER_BYTE - is);\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 7 Nov 2000 06:36:24 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Again please no // comments !!!!!!!!" }, { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> This somehow gets moot. Is there a way to make gcc reject those comments ?\n\nIt looks like -ansi would cause gcc to reject // comments, but -ansi has\nenough not-so-desirable side effects that I would vote against making\nthat switch part of our standard switch set for gcc.\n\nI think we should continue to rely on our faithful community of testers\nto find portability glitches like this one ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Nov 2000 10:53:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Again please no // comments !!!!!!!! " } ]
[ { "msg_contents": "Hi\n\nWhile examining recursive use of catalog cache,I found\na refcnt leak of relations.\nAfter further investigation,I found that the following seems\nto be the cause.\n\n[ in EndAppend() in nodeAppend.c ]\n\nappendstate->as_result_relation_info_list = NIL;\n /*\n * This next step is critical to prevent EndPlan() from trying to close\n\n * an already-closed-and-deleted RelationInfo ---\nes_result_relation_info\n * is pointing at one of the nodes we just zapped above.\n */\n estate->es_result_relation_info = NULL;\n\nThis seems to cause a refcnt leak when\nappendstate->as_result_relation_info_list is NIL from the first\ne.g. in the case INSERT INTO .. SELECT ...\n\nComments ?\n\nBTW,doesn't EndAppend() neglect to call ExecCloseIndices()\nfor RelationInfos of appendstate->as_result_relation_info_list ?\n\nRegards.\nHiroshi Inoue\n\n", "msg_date": "Tue, 07 Nov 2000 19:35:17 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": true, "msg_subject": "refcnt leak ?" }, { "msg_contents": "Hiroshi Inoue <[email protected]> writes:\n> While examining recursive use of catalog cache,I found\n> a refcnt leak of relations.\n> After further investigation,I found that the following seems\n> to be the cause.\n\n> [ in EndAppend() in nodeAppend.c ]\n\nappendstate-> as_result_relation_info_list = NIL;\n\nThat doesn't look like a problem to me --- the result relations *have*\nbeen closed, just above this line.\n\n> BTW,doesn't EndAppend() neglect to call ExecCloseIndices()\n> for RelationInfos of appendstate->as_result_relation_info_list ?\n\nComparing nodeAppend to EndPlan(), I think you are right --- each\nresultinfo should have ExecCloseIndices applied too, in the loop just\nabove the line you quote. This did not use to be a problem because\nAppend plans were readonly, but now that we have UPDATE/DELETE on\ninheritance hierarchies, there's a missing step here. 
Was your test\nquery of that kind?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Nov 2000 10:15:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: refcnt leak ? " }, { "msg_contents": "\nOn Tue, 7 Nov 2000, Tom Lane wrote:\n\n> Hiroshi Inoue <[email protected]> writes:\n> > While examining recursive use of catalog cache,I found\n> > a refcnt leak of relations.\n> > After further investigation,I found that the following seems\n> > to be the cause.\n> \n> > [ in EndAppend() in nodeAppend.c ]\n> \n> appendstate-> as_result_relation_info_list = NIL;\n> \n> That doesn't look like a problem to me --- the result relations *have*\n> been closed, just above this line.\n> \n> > BTW,doesn't EndAppend() neglect to call ExecCloseIndices()\n> > for RelationInfos of appendstate->as_result_relation_info_list ?\n> \n> Comparing nodeAppend to EndPlan(), I think you are right --- each\n> resultinfo should have ExecCloseIndices applied too, in the loop just\n> above the line you quote. This did not use to be a problem because\n> Append plans were readonly, but now that we have UPDATE/DELETE on\n> inheritance hierarchies, there's a missing step here. Was your test\n> query of that kind?\n\n\n Does configure's switch --enable-cassert show anything? IMHO a real leak will \n*probably* be visible with this compile option in 7.1 (I hope :-).\n\n\t\t\t\t\tKarel\n\n", "msg_date": "Tue, 7 Nov 2000 16:29:50 +0100 (CET)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: refcnt leak ? 
" }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> Hiroshi Inoue <[email protected]> writes:\n> > While examining recursive use of catalog cache,I found\n> > a refcnt leak of relations.\n> > After further investigation,I found that the following seems\n> > to be the cause.\n> \n> > [ in EndAppend() in nodeAppend.c ]\n> \n> appendstate-> as_result_relation_info_list = NIL;\n> \n\nOh no,my point isn't on this line but on the line\n\n estate->es_result_relation_info = NULL;\n\nAs the comment says,it depends on the assumption that \nestate->es_result_relation_info points to one of the node\nof appendstate->as_result_relation_info_list(before set to\nNIL). However ISTM appendstate->as_result_relation_info\n_list is for inheritance and in the case \"INSERT INTO ..\nSELECT .. FROM ..\" it's not used. \n\n> That doesn't look like a problem to me --- the result relations *have*\n> been closed, just above this line.\n> \n> > BTW,doesn't EndAppend() neglect to call ExecCloseIndices()\n> > for RelationInfos of appendstate->as_result_relation_info_list ?\n> \n> Comparing nodeAppend to EndPlan(), I think you are right --- each\n> resultinfo should have ExecCloseIndices applied too, in the loop just\n> above the line you quote. This did not use to be a problem because\n> Append plans were readonly, but now that we have UPDATE/DELETE on\n> inheritance hierarchies, there's a missing step here. Was your test\n> query of that kind?\n>\n\nI first changed this part but rd_refcnt leak didn't disappaear.\nI have no refcnt leak example which is caused due to this flaw(?).\nAfter that I found \" estate->es_result_relation_info = NULL; \"\nin EndAppend() . 
I changed it to not do so when appendstate->\nas_result_relation_info_list is NIL and rd_refcnt leak disappeared.\n\nRegards.\nHiroshi Inoue\n", "msg_date": "Wed, 8 Nov 2000 01:55:05 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: refcnt leak ? " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Oh no,my point isn't on this line but on the line\n> estate->es_result_relation_info = NULL;\n\nOh, I see --- this is mistakenly assuming that es_result_relation_info\n*always* points at one of the Append's relations. So there are actually\ntwo rel-refcnt-leaking bugs here, this one and the lack of index close.\n\nI've fixed both. Thanks for the report!\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 09 Nov 2000 13:20:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: refcnt leak ? " } ]
[ { "msg_contents": "How do you run the regression tests ?\n\ngmake all\ngmake install\ninitdb\nstart postmaster\n\ncd src/test/regress\ngmake runtest\n\n?????\n\nThe build looks ok, but remember, that all shared libs and postmaster will only \nwork in the configured location. (gmake runcheck will fail, since it installs into a \ndifferent location)\n\nAndreas\n\n> -----Ursprüngliche Nachricht-----\n> Von: Pete Forman [mailto:[email protected]]\n> Gesendet: Dienstag, 07. November 2000 11:55\n> An: [email protected]\n> Betreff: AW: [HACKERS] v7.0.3 *pre-release* ...\n> \n> \n> Zeugswetter Andreas SB writes:\n> > \n> > > AIX 4.3.2, xlc 3.6.6.\n> > > \n> > > Same regression test failures as 7.0.2.\n> > > The nasty failures are triggers, misc, and plgpgsql which\n> > > consistently give \"pqReadData() -- backend closed the channel\n> > > unexpectedly.\" at the same point. Also the sanity_check hangs\n> > > during a VACUUM. Killing the backend was the only way \n> to continue.\n> > \n> > This should not be so. Your setup should definitely work without\n> > regression failures. There is something wrong with your dynamic\n> > loading of shared libs. Can you give more details, e.g. did you add\n> > optimization which does not work yet ?\n> \n> No extra flags were added by me. The only build warnings were\n> duplicate symbol errors. (There were a couple of warnings about\n> 0.0/0.0 used to represent a NaN.) Here are a couple of extracts from\n> my build log.\n> \n> xlc -I../include -I../backend -qmaxmem=16384 -qhalt=w -qsrcmsg\n> -qlanglvl=extended -qlonglong -o postgres access/SUBSYS.o\n> bootstrap/SUBSYS.o catalog/SUBSYS.o commands/SUBSYS.o\n> executor/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o\n> parser/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o port/SUBSYS.o\n> postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o\n> storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o ../utils/version.o\n> -lPW -lcrypt -lld -lnsl -ldl -lm -lcurses\n> Making postgres.imp\n> ../backend/port/aix/mkldexport.sh postgres /usr/local/pgsql/bin >\n> postgres.imp\n> xlc -Wl,-bE:../backend/postgres.imp -o postgres access/SUBSYS.o\n> bootstrap/SUBSYS.o catalog/SUBSYS.o commands/SUBSYS.o\n> executor/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o\n> parser/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o port/SUBSYS.o\n> postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o\n> storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o ../utils/version.o\n> -lPW -lcrypt -lld -lnsl -ldl -lm -lcurses\n> \n> \n> xlc -I../../include -I../../backend -qmaxmem=16384 -qhalt=w\n> -qsrcmsg -qlanglvl=extended -qlonglong -DFRONTEND -c pqsignal.c -o\n> pqsignal.o\n> ar crs libpq.a fe-auth.o fe-connect.o fe-exec.o fe-misc.o\n> fe-print.o fe-lobj.o pqexpbuffer.o dllist.o pqsignal.o\n> touch libpq.a\n> ../../backend/port/aix/mkldexport.sh libpq.a /usr/local/pgsql/lib\n> > libpq.exp\n> ld -H512 -bM:SRE -bI:../../backend/postgres.imp -bE:libpq.exp -o\n> libpq.so libpq.a -lPW -lcrypt -lld -lnsl -ldl -lm -lcurses -lcrypt\n> -lc\n> ld: 0711-224 WARNING: Duplicate symbol: .crypt\n> ld: 0711-224 WARNING: Duplicate symbol: crypt\n> ld: 0711-224 WARNING: Duplicate symbol: .strlen\n> ld: 0711-224 WARNING: Duplicate symbol: strlen\n> ld: 0711-224 WARNING: Duplicate symbol: .PQuntrace\n> ld: 0711-224 WARNING: Duplicate symbol: .PQtrace\n> ld: 0711-224 WARNING: Duplicate symbol: .setsockopt\n> ld: 0711-224 WARNING: Duplicate symbol: setsockopt\n> [and others]\n> \n> -- \n> Pete Forman -./\\.- Disclaimer: This post is originated\n> Western Geophysical -./\\.- by myself and does not represent\n> [email protected] -./\\.- the opinion of Baker Hughes or\n> http://www.crosswinds.net/~petef -./\\.- its divisions.\n> \n", "msg_date": "Tue, 7 Nov 2000 12:08:17 +0100 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: v7.0.3 *pre-release* ..." }, { "msg_contents": "Zeugswetter Andreas SB writes:\n > How do you run the regression tests ?\n > \n > gmake all\n > gmake install\n > initdb\n > start postmaster\n > \n > cd src/test/regress\n > gmake runtest\n > \n > ?????\n > \n > The build looks ok, but remember, that all shared libs and\n > postmaster will only work in the configured location. (gmake\n > runcheck will fail, since it installs into a different location)\n\nIndeed. runtest has passed on 7.0.2 and 7.0.3.\n\nThe only remaining failure is geometry. The results I got were nearly\nidentical to geometry-powerpc-aix4.out. The only differences were the\norder of rows returned by three of the tables. I'll submit the\nresults file to pgsql-patches.\n\nI was unable to get runcheck to pass even when I altered all the\nLD_LIBRARY_PATH entries in run_check.sh to LIBPATH for the benefit of\nAIX. If this cannot be fixed there ought to be an entry added to the\nfaq-aix.\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWestern Geophysical -./\\.- by myself and does not represent\[email protected] -./\\.- the opinion of Baker Hughes or\nhttp://www.crosswinds.net/~petef -./\\.- its divisions.\n", "msg_date": "Tue, 7 Nov 2000 13:16:31 +0000 (GMT)", "msg_from": "Pete Forman <[email protected]>", "msg_from_op": false, "msg_subject": "AW: v7.0.3 *pre-release* ..." }, { "msg_contents": "Pete Forman writes:\n > The only remaining failure is geometry. The results I got were\n > nearly identical to geometry-powerpc-aix4.out. The only\n > differences were the order of rows returned by three of the tables.\n > I'll submit the results file to pgsql-patches.\n\nI've submitted a one line patch on resultmap.\n\nThere was an oddity, on that one runtest on 7.0.3 the geometry.out had\nthe rows in a different order from three of the select statements.\nRepeating the runtest five times passed consistently (with the new\nresultmap). Now I realise that in an RDB the set of results have no\nintrinsic order but find it a bit surprising that the regression tests\nare not consistent.\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWestern Geophysical -./\\.- by myself and does not represent\[email protected] -./\\.- the opinion of Baker Hughes or\nhttp://www.crosswinds.net/~petef -./\\.- its divisions.\n", "msg_date": "Tue, 7 Nov 2000 14:21:12 +0000 (GMT)", "msg_from": "Pete Forman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: v7.0.3 *pre-release* ..." }, { "msg_contents": "Pete Forman <[email protected]> writes:\n> The only remaining failure is geometry. The results I got were nearly\n> identical to geometry-powerpc-aix4.out. The only differences were the\n> order of rows returned by three of the tables. I'll submit the\n> results file to pgsql-patches.\n\nRather than making still another results file, let's fix the .sql file\nto do an explicit ORDER BY for those queries. The regress tests are\nmostly pretty lazy about ensuring a platform-independent ordering of\nquery results. In many places we can get away with that, but every so\noften we notice another place where we can't. Looks like you've just\nidentified another.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Nov 2000 10:09:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: v7.0.3 *pre-release* ... " }, { "msg_contents": "\nIf its that easy to fix the regress test so that it passes, can we get it\ncommitted and build a new tarball so that ppl doing regression on v7.0.3\nsee a clean regress?\n\nOn Tue, 7 Nov 2000, Tom Lane wrote:\n\n> Pete Forman <[email protected]> writes:\n> > The only remaining failure is geometry. The results I got were nearly\n> > identical to geometry-powerpc-aix4.out. The only differences were the\n> > order of rows returned by three of the tables. I'll submit the\n> > results file to pgsql-patches.\n> \n> Rather than making still another results file, let's fix the .sql file\n> to do an explicit ORDER BY for those queries. The regress tests are\n> mostly pretty lazy about ensuring a platform-independent ordering of\n> query results. In many places we can get away with that, but every so\n> often we notice another place where we can't. Looks like you've just\n> identified another.\n> \n> \t\t\tregards, tom lane\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 7 Nov 2000 16:56:15 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: v7.0.3 *pre-release* ... " }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> If its that easy to fix the regress test so that it passes, can we get it\n> committed and build a new tarball so that ppl doing regression on v7.0.3\n> see a clean regress?\n\nThe way I want to fix it will probably require getting new geometry\nfiles for all the platform variants (or at least, those that are still\ndistinct after controlling for sort order). Not worth holding up 7.0.3\nfor that.\n\nBut I thought Pete was having trouble reproducing the sort-order issue\nanyway, so there may still be some investigation to do beforehand.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Nov 2000 16:57:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: v7.0.3 *pre-release* ... " } ]
[ { "msg_contents": "> I was unable to get runcheck to pass even when I altered all the\n> LD_LIBRARY_PATH entries in run_check.sh to LIBPATH for the benefit of\n> AIX. If this cannot be fixed there ought to be an entry added to the\n> faq-aix.\n\nThe fix for AIX below 4.3 would be to relink both postmaster and the libs\nwith altered paths in the imp and exp files, which is imho impractical.\n\nFor AIX 4.3 there might be a fix with new options in the first line of the imp\nand exp files that ld understands. If time permits I will take a go at that.\n\nAndreas\n", "msg_date": "Tue, 7 Nov 2000 15:44:43 +0100 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: v7.0.3 *pre-release* ..." } ]
[ { "msg_contents": "\n> Pete Forman <[email protected]> writes:\n> > The only remaining failure is geometry. The results I got \n> were nearly\n> > identical to geometry-powerpc-aix4.out. The only \n> differences were the\n> > order of rows returned by three of the tables. I'll submit the\n> > results file to pgsql-patches.\n> \n> Rather than making still another results file, let's fix the .sql file\n> to do an explicit ORDER BY for those queries. The regress tests are\n> mostly pretty lazy about ensuring a platform-independent ordering of\n> query results. In many places we can get away with that, but every so\n> often we notice another place where we can't. Looks like you've just\n> identified another.\n\nHas there been a change to geometry.out that was not incorporated into \nthe platform specific geometry-powerpc-aix4.out between 7.0.2 and 7.0.3 ?\nThis looks like a different plan is chosen now. I don't beleive this can be \nplatform dependent.\n\nAndreas\n", "msg_date": "Tue, 7 Nov 2000 16:29:11 +0100 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: v7.0.3 *pre-release* ... " }, { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> This looks like a different plan is chosen now. I don't beleive this can be \n> platform dependent.\n\nWell, the plan choice *could* be platform-dependent, given that the\nplanner uses comparisons of floating-point cost estimates. But I agree\nthat's pretty unlikely. What seems to be more common is platform-\nspecific differences of behavior of qsort(). If you have any equal\nkeys in the row set being sorted, then the output ordering depends on\nthe whim of the qsort implementor...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Nov 2000 10:37:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: AW: v7.0.3 *pre-release* ... " } ]
[ { "msg_contents": "Kevin O'Gorman pointed out to me that this example fails in current\nsources:\n\nselect a from foo union all (select b from foo except select a from foo);\n\nThe problem is that the result of the EXCEPT has a resjunk column (which\nis added by the EXCEPT code so that it can tell lefthand input rows from\nrighthand input rows in its merged-and-sorted datastream). But the\nresult of the leftmost SELECT doesn't. The UNION ALL part is done by\na simple Append plan, which means that one of Append's inputs has just\nthe desired data while the other one has the data plus a resjunk column.\n\nAs things stand, the Append blindly claims that its result targetlist\nis the same as its first input plan --- so execMain doesn't see any junk\ncolumns, doesn't run a junk-column-removal filter, and when the tuples\nwith junk in them hit the printtup stage, everything goes to pieces.\n\nA quick and dirty hack that would fix this (or at least this example)\nis to make Append return the longest of its subplan targetlists rather\nthan the first one. execMain doesn't especially care what the junk\ncolumns are stated to be, just whether there are any.\n\nI think a cleaner fix in the long run would be to pull out junkfiltering\nfrom the executor top level and make it a plan node type (in fact, it's\nlikely that we don't even need a new plan node type; Result could\ncompute the cleaned-up targetlist just fine). Then the junk columns\ncould be removed individually from the subplans of Append, leaving clean\ndata coming out. However, I don't really have time to do that for 7.1.\n\nMore generally, this example points up what may be a death blow for\nChris Bitmead's hopes of having a query that can return all the columns\nout of each subclass of an inheritance hierarchy. If we can't safely\nreturn varying tuple structures from an Append, that's never gonna work.\nBut the whole concept of plan nodes with associated targetlists seems\nto depend on the assumption of fixed tuple structures passing through\nany one plan level.\n\nComments, better ideas?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Nov 2000 10:32:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Append plans and resjunk columns" } ]
[ { "msg_contents": "I am working on eliminating the \"relation NNN modified while in use\"\nmisfeature by instead grabbing a lock on each relation at first use\nin a statement, and holding that lock till end of transaction. The\nmain trick here is to make sure that the first lock grabbed is adequate\n--- for example, it won't do to grab AccessShareLock and then have to\nraise that to AccessExclusiveLock, because there will be a deadlock if\ntwo backends do this concurrently.\n\nTo help debug this, I'm planning to add a little bit of code to the\nlock manager that detects a request for a lock on an object on which\nwe already hold a lock of a lower level. What I'm wondering about is\nwhether to make the report be elog(DEBUG) --- ie, send to postmaster\nlog only --- or elog(NOTICE), so that users would see it by default.\n\nA NOTICE might be useful to users since it would complain about\ndeadlock-prone user-level coding practices too, such as\n\n\tbegin;\n\tselect * from foo;\t-- grabs read lock\n\tlock table foo;\t\t-- grabs exclusive lock\n\nHowever, it might not be *very* useful, because the lock manager isn't\nin a position to issue a message that's much more intelligible than\nthis:\n\nNOTICE: Deadlock risk: raising lock level from 1 to 4 on object 85372/5732\n\n(The lock level could be printed symbolically, but I doubt that very\nmuch can be done with the object identifier --- it's not safe for the\nlock manager to try to resolve relation OIDs to names, for example.)\n\nRight now I'm thinking that this sort of notice would just create more\nconfusion than enlightenment for most users, so I'm inclined to make it\na DEBUG message. But that's a judgment call, so I thought I'd throw\nthe issue out for discussion. Any contrary opinions?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Nov 2000 11:26:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Issue NOTICE for attempt to raise lock level?" }, { "msg_contents": "Tom Lane writes:\n\n> To help debug this, I'm planning to add a little bit of code to the\n> lock manager that detects a request for a lock on an object on which\n> we already hold a lock of a lower level. What I'm wondering about is\n> whether to make the report be elog(DEBUG) --- ie, send to postmaster\n> log only --- or elog(NOTICE), so that users would see it by default.\n\nTo me this seems to be a little like the much-disputed notice for adding\nimplicit range-table entries: Either it's an error, then you abort, or\nit's legal, then you leave the user alone and perhaps explain failure\nscenarios in the documentation. At least until we have something like a\nuser-configurable warning level.\n\nelog(DEBUG) might be okay, but only with a positive DebugLvl, IMHO.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 7 Nov 2000 19:23:33 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue NOTICE for attempt to raise lock level?" }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane\n> Sent: Wednesday, November 08, 2000 1:26 AM\n> To: [email protected]\n> Subject: [HACKERS] Issue NOTICE for attempt to raise lock level?\n> \n> \n> I am working on eliminating the \"relation NNN modified while in use\"\n> misfeature by instead grabbing a lock on each relation at first use\n> in a statement, and holding that lock till end of transaction. \n\nIsn't \"relation NNN modified while in use\" itself coming from heap_\nopen(r) 's LockRelation_after_allocate sequence ?\nOr from a rd_refcnt leak,of cource.\nI'm thinking that RelationCacheInvalidate() should ignore relations\nwhich are while in use. IMHO allocate_after_lock sequence is\nneeded for heap_open(r). \n\n> The\n> main trick here is to make sure that the first lock grabbed is adequate\n> --- for example, it won't do to grab AccessShareLock and then have to\n> raise that to AccessExclusiveLock, because there will be a deadlock if\n> two backends do this concurrently.\n> \n\nI object to you if it also includes parse_rewrite_plan stage.\nIf there's a long transation it would also hold a AccessShareLock\non system tables for a long time. Then vacuum for system tables\nwould be blocked. Other transactions would be blocked......\n\nRegards.\nHiroshi Inoue \n", "msg_date": "Wed, 8 Nov 2000 07:17:26 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Issue NOTICE for attempt to raise lock level?" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> I am working on eliminating the \"relation NNN modified while in use\"\n>> misfeature by instead grabbing a lock on each relation at first use\n>> in a statement, and holding that lock till end of transaction. \n\n> Isn't \"relation NNN modified while in use\" itself coming from heap_\n> open(r) 's LockRelation_after_allocate sequence ?\n\nRight. I intend to eliminate that test entirely, and simply let the\nrelcache update happen. With appropriate start-to-end locking, it\nshouldn't be possible for a schema update to sneak in at an unsafe\npoint.\n\nThe only reason that there is a \"modified while in use\" test at all\nis that I put it in awhile back as a stopgap solution until we did\na better job with the end-to-end locking problem. The reports coming\nback on 7.0.* make it clear that the stopgap answer isn't good enough,\nso I want to fix it right for 7.1.\n\n> I'm thinking that RelationCacheInvalidate() should ignore relations\n> which are while in use.\n\nWon't work unless you somehow store an \"update needed\" flag to make the\nupdate happen later --- you can't just discard a shared-inval\nnotification. And if you did that, you'd have synchronization issues\nto contend with. Since we use SnapshotNow for reading catalogs, catalog\nfetches may see data that is inconsistent with the current state of the\nrelcache. Not good. Forcing the schema update to be held off in the\nfirst place seems the right answer.\n\n> I object to you if it also includes parse_rewrite_plan stage.\n> If there's a long transation it would also hold a AccessShareLock\n> on system tables for a long time.\n\nNo, I'm going to leave locking of system catalogs as-is. This basically\nmeans that we don't support concurrent alteration of schemas for system\ntables. Seems like an OK tradeoff to me ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Nov 2000 17:28:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issue NOTICE for attempt to raise lock level? " }, { "msg_contents": "\n\nTom Lane wrote:\n\n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> I am working on eliminating the \"relation NNN modified while in use\"\n> >> misfeature by instead grabbing a lock on each relation at first use\n> >> in a statement, and holding that lock till end of transaction.\n>\n> > Isn't \"relation NNN modified while in use\" itself coming from heap_\n> > open(r) 's LockRelation_after_allocate sequence ?\n>\n> Right. I intend to eliminate that test entirely, and simply let the\n> relcache update happen. With appropriate start-to-end locking, it\n> shouldn't be possible for a schema update to sneak in at an unsafe\n> point.\n>\n> The only reason that there is a \"modified while in use\" test at all\n> is that I put it in awhile back as a stopgap solution until we did\n> a better job with the end-to-end locking problem. The reports coming\n> back on 7.0.* make it clear that the stopgap answer isn't good enough,\n> so I want to fix it right for 7.1.\n>\n> > I'm thinking that RelationCacheInvalidate() should ignore relations\n> > which are while in use.\n>\n> Won't work unless you somehow store an \"update needed\" flag to make the\n> update happen later --- you can't just discard a shared-inval\n> notification.\n\nWhat I mean is to change heap_open(r) like\n\n LockRelationId(Name) -> shared-inval-handling ->\n allocate the relation descriptor and increment rd_refcnt\n\nThis would ensure that relations with rd_refcnt > 0\nacquire some lock. Could any shared-inval-noti\nfication arrive for such relations under the me-\nchanism ? However 'reset system cache' message\ncould arrive at any time. I've examined the error\n'recursive use of cache' for some time. It seems\nvery difficult to avoid the error if we reconstruct\nrelation descriptors whose rd_refcnt > 0 in\nRelationCacheInvalidate().\n\nComments ?\n\nRegards.\nHiroshi Inoue\n\n", "msg_date": "Wed, 08 Nov 2000 15:51:28 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue NOTICE for attempt to raise lock level?" }, { "msg_contents": "Hiroshi Inoue <[email protected]> writes:\n> What I mean is to change heap_open(r) like\n> LockRelationId(Name) -> shared-inval-handling ->\n> allocate the relation descriptor and increment rd_refcnt\n> This would ensure that relations with rd_refcnt > 0\n> acquire some lock. Could any shared-inval-noti\n> fication arrive for such relations under the me-\n> chanism ?\n\nYes, because the system doesn't make any attempt to ensure that relcache\nentries are held open throughout a statement or transaction. (If they\nwere, we largely wouldn't have a problem.) So we can't use relcache\nrefcount going from 0 to 1 as the sole criterion for when to acquire\na lock.\n\nI did look at using the relcache to control holding locks throughout\nstatements, but it seems that it doesn't have enough information\nto grab the right kind of lock. For example, I had to modify the\nparser to ensure that the right kind of lock is grabbed on the\ninitial relcache access, depending on whether the table involved is\naccessed for plain SELECT, SELECT FOR UPDATE, or INSERT/UPDATE/DELETE.\nI still have to make a similar change in the rewriter for table\nreferences that are added to a query by rewrite. The code that is\ndoing this stuff knows full well that it is making the first reference\nto a table, and so the relcache doesn't really have anything to\ncontribute.\n\n> However 'reset system cache' message\n> could arrive at any time. I've examined the error\n> 'recursive use of cache' for some time. It seems\n> very difficult to avoid the error if we reconstruct\n> relation descriptors whose rd_refcnt > 0 in\n> RelationCacheInvalidate().\n\nI haven't had time to look at that yet, but one possible answer is just\nto disable the 'recursive use of cache' test. It's only a debugging\nsanity-check anyway, not essential functionality.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Nov 2000 10:00:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issue NOTICE for attempt to raise lock level? " } ]