[ { "msg_contents": "Hello. Hopefully, this is the right mailing list to send\nthis type of question too.\n\nSystem:\nI am running the newest 7.03 build on a dual 866 Pentium III\nwith a 128M raid card.\n\nI have found an error that is quite odd.\nI have a table that is supposed to keep a map between\nurls and ids. Each url in the table should be unique.\nThus I have\n\nCreate table \"urlmap\" (\n \"url\" text not NULL,\n \"id\" int4 not NULL,\n PRIMARY KEY (\"url\"),\n UNIQUE (\"id\",\"url\") );\n\nAfter inserting a number of urls (via spidering) i did the following.\n\nI vacuumed the db : vacuum verbose analyze.\n\nFirst:\nselect * from urlmap where url='blah blah';\n\nHere I got back only one row. Good.\n\nThen i went ahead reindexed the table: I recieved the error:\nCannot create unique index. Table contains non-unique values.\n\nSame problem occurs if I drop the indicies and try to recreate them.\n\nI then :\nselect * from urlmap u1,urlmap u2 where u1.url=u2.url and u1.oid!=u2.oid\n\nI then got back two rows where the url was indeed the same and the\nassociated id\ndifferent. Why, would this ever occur?\n\n-Aditya\n\n", "msg_date": "Wed, 17 Jan 2001 22:20:52 -0500", "msg_from": "Aditya Damle <[email protected]>", "msg_from_op": true, "msg_subject": "A bug with unique indicies" }, { "msg_contents": "Aditya Damle <[email protected]> writes:\n> Then i went ahead reindexed the table: I recieved the error:\n> Cannot create unique index. Table contains non-unique values.\n\nCurious. When you say \"reindexed\" do you mean you used the REINDEX\ncommand? That's new in 7.0.* and possibly not fully debugged. It\nwould be useful to see a complete script for reproducing the problem,\nif you think you can regenerate it from scratch ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jan 2001 16:48:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A bug with unique indicies " } ]
[ { "msg_contents": "\nI'm logging traffic to a database, so that I can do analysis on usage and\nwhatnot, and I need something bigger then int8 :(\n\n/tmp/psql.edit.70.79087: 6 lines, 222 characters.\n ip | maxbytes | port | runtime\n---------------+-------------+------+------------------------\n 216.126.84.28 | 2169898055 | 80 | 2001-01-16 00:00:00-05\n 216.126.84.28 | 160579228 | 873 | 2001-01-16 00:00:00-05\n 216.126.84.28 | 365270 | 20 | 2001-01-16 00:00:00-05\n 216.126.84.28 | 196256 | 21 | 2001-01-16 00:00:00-05\n 216.126.84.28 | 195238 | 22 | 2001-01-16 00:00:00-05\n 216.126.84.28 | 182492 | 1024 | 2001-01-16 00:00:00-05\n 216.126.84.28 | 171155 | 143 | 2001-01-16 00:00:00-05\n 216.126.84.28 | -1392384544 | 80 | 2001-01-13 00:00:00-05\n 216.126.84.28 | -1392384544 | 80 | 2001-01-04 00:00:00-05\n 216.126.84.28 | -1392384544 | 80 | 2001-01-05 00:00:00-05\n 216.126.84.28 | -1392384544 | 80 | 2001-01-06 00:00:00-05\n 216.126.84.28 | -1392384544 | 80 | 2001-01-07 00:00:00-05\n 216.126.84.28 | -1392384544 | 80 | 2001-01-08 00:00:00-05\n 216.126.84.28 | -1392384544 | 80 | 2001-01-14 00:00:00-05\n 216.126.84.28 | -1452855018 | 80 | 2001-01-15 00:00:00-05\n 216.126.84.28 | -1452855018 | 80 | 2001-01-10 00:00:00-05\n 216.126.84.28 | -1452855018 | 80 | 2001-01-09 00:00:00-05\n 216.126.84.28 | -1513325492 | 80 | 2001-01-03 00:00:00-05\n 216.126.84.28 | -1694736914 | 80 | 2001-01-12 00:00:00-05\n 216.126.84.28 | -1815677862 | 80 | 2001-01-11 00:00:00-05\n\nhub_traf_stats=# \\d daily_stats\n Table \"daily_stats\"\n Attribute | Type | Modifier\n-----------+-----------+----------\n ip | inet |\n port | integer |\n bytes | bigint |\n runtime | timestamp |\n\ndo we have anything larger to work with? I've checked docs, but that\nlooks like about it :(\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Thu, 18 Jan 2001 00:55:42 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Nothing larger then int8?" }, { "msg_contents": "\nhrrmm ... ignore this ... I'm suspecting that what I did was copied in\nsum() data from an old table that had bytes declared as int4, without\ncasting it to int8 before storing it to the new table ...\n\nif anyone is interested, here is one days worth of http traffic for the\nmain PostgreSQL.Org server ... 
this doesn't include the traffic that the\nmirror sites absorb:\n\n1160643846 / ( 1024 * 1024 * 1024 )\n1.08gig\n\n\n\nOn Thu, 18 Jan 2001, The Hermit Hacker wrote:\n\n>\n> I'm logging traffic to a database, so that I can do analysis on usage and\n> whatnot, and I need something bigger then int8 :(\n>\n> /tmp/psql.edit.70.79087: 6 lines, 222 characters.\n> ip | maxbytes | port | runtime\n> ---------------+-------------+------+------------------------\n> 216.126.84.28 | 2169898055 | 80 | 2001-01-16 00:00:00-05\n> 216.126.84.28 | 160579228 | 873 | 2001-01-16 00:00:00-05\n> 216.126.84.28 | 365270 | 20 | 2001-01-16 00:00:00-05\n> 216.126.84.28 | 196256 | 21 | 2001-01-16 00:00:00-05\n> 216.126.84.28 | 195238 | 22 | 2001-01-16 00:00:00-05\n> 216.126.84.28 | 182492 | 1024 | 2001-01-16 00:00:00-05\n> 216.126.84.28 | 171155 | 143 | 2001-01-16 00:00:00-05\n> 216.126.84.28 | -1392384544 | 80 | 2001-01-13 00:00:00-05\n> 216.126.84.28 | -1392384544 | 80 | 2001-01-04 00:00:00-05\n> 216.126.84.28 | -1392384544 | 80 | 2001-01-05 00:00:00-05\n> 216.126.84.28 | -1392384544 | 80 | 2001-01-06 00:00:00-05\n> 216.126.84.28 | -1392384544 | 80 | 2001-01-07 00:00:00-05\n> 216.126.84.28 | -1392384544 | 80 | 2001-01-08 00:00:00-05\n> 216.126.84.28 | -1392384544 | 80 | 2001-01-14 00:00:00-05\n> 216.126.84.28 | -1452855018 | 80 | 2001-01-15 00:00:00-05\n> 216.126.84.28 | -1452855018 | 80 | 2001-01-10 00:00:00-05\n> 216.126.84.28 | -1452855018 | 80 | 2001-01-09 00:00:00-05\n> 216.126.84.28 | -1513325492 | 80 | 2001-01-03 00:00:00-05\n> 216.126.84.28 | -1694736914 | 80 | 2001-01-12 00:00:00-05\n> 216.126.84.28 | -1815677862 | 80 | 2001-01-11 00:00:00-05\n>\n> hub_traf_stats=# \\d daily_stats\n> Table \"daily_stats\"\n> Attribute | Type | Modifier\n> -----------+-----------+----------\n> ip | inet |\n> port | integer |\n> bytes | bigint |\n> runtime | timestamp |\n>\n> do we have anything larger to work with? I've checked docs, but that\n> looks like about it :(\n>\n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n>\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Thu, 18 Jan 2001 01:02:27 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Nothing larger then int8?" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> I'm logging traffic to a database, so that I can do analysis on usage and\n> whatnot, and I need something bigger then int8 :(\n\nThose \"maxbytes\" values shure look like they're only int4. How are\nyou calculating 'em, exactly?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jan 2001 00:08:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nothing larger then int8? " }, { "msg_contents": "The Hermit Hacker wrote:\n> if anyone is interested, here is one days worth of http traffic for the\n> main PostgreSQL.Org server ...
this doesn't include the traffic that the\n> mirror sites absorb:\n \n> 1160643846 / ( 1024 * 1024 * 1024 )\n> 1.08gig\n\nNot a bad day.\n\nI've seen 100MB per day out of my http (backed by PostgreSQL since late\n1997!), but the 2.5GB a day out the RealServer is the big hit....\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 18 Jan 2001 00:10:12 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nothing larger then int8?" }, { "msg_contents": "On Thu, 18 Jan 2001, Lamar Owen wrote:\n\n> The Hermit Hacker wrote:\n> > if anyone is interested, here is one days worth of http traffic for the\n> > main PostgreSQL.Org server ... this doesn't include the traffic that the\n> > mirror sites absorb:\n>\n> > 1160643846 / ( 1024 * 1024 * 1024 )\n> > 1.08gig\n>\n> Not a bad day.\n>\n> I've seen 100MB per day out of my http (backed by PostgreSQL since late\n> 1997!), but the 2.5GB a day out the RealServer is the big hit....\n\nmy *big* site:\n\n11395533772/ ( 1024 * 1024 * 1024 )\n10.61gig/day :)\n\n bytes | port\n-------------+------\n 11298475398 | 81\n 94925095 | 80\n 1982130 | 20\n 122043 | 21\n 26766 | 22\n 2340 | 137\n\njust a small site :)\n\n", "msg_date": "Thu, 18 Jan 2001 01:17:34 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Nothing larger then int8?" }, { "msg_contents": "To answer your question, wouldn't numeric(30,0) be the correct?\n\n-alex\nOn Thu, 18 Jan 2001, The Hermit Hacker wrote:\n\n> \n> hrrmm ... ignore this ... I'm suspecting that what I did was copied in\n> sum() data from an old table that had bytes declared as int4, without\n> casting it to int8 before storing it to the new table ...\n> \n> if anyone is interested, here is one days worth of http traffic for the\n> main PostgreSQL.Org server ...
this doesn't include the traffic that the\n> mirror sites absorb:\n> \n> 1160643846 / ( 1024 * 1024 * 1024 )\n> 1.08gig\n> \n> \n> \n> On Thu, 18 Jan 2001, The Hermit Hacker wrote:\n> \n> >\n> > I'm logging traffic to a database, so that I can do analysis on usage and\n> > whatnot, and I need something bigger then int8 :(\n> >\n> > /tmp/psql.edit.70.79087: 6 lines, 222 characters.\n> > ip | maxbytes | port | runtime\n> > ---------------+-------------+------+------------------------\n> > 216.126.84.28 | 2169898055 | 80 | 2001-01-16 00:00:00-05\n> > 216.126.84.28 | 160579228 | 873 | 2001-01-16 00:00:00-05\n> > 216.126.84.28 | 365270 | 20 | 2001-01-16 00:00:00-05\n> > 216.126.84.28 | 196256 | 21 | 2001-01-16 00:00:00-05\n> > 216.126.84.28 | 195238 | 22 | 2001-01-16 00:00:00-05\n> > 216.126.84.28 | 182492 | 1024 | 2001-01-16 00:00:00-05\n> > 216.126.84.28 | 171155 | 143 | 2001-01-16 00:00:00-05\n> > 216.126.84.28 | -1392384544 | 80 | 2001-01-13 00:00:00-05\n> > 216.126.84.28 | -1392384544 | 80 | 2001-01-04 00:00:00-05\n> > 216.126.84.28 | -1392384544 | 80 | 2001-01-05 00:00:00-05\n> > 216.126.84.28 | -1392384544 | 80 | 2001-01-06 00:00:00-05\n> > 216.126.84.28 | -1392384544 | 80 | 2001-01-07 00:00:00-05\n> > 216.126.84.28 | -1392384544 | 80 | 2001-01-08 00:00:00-05\n> > 216.126.84.28 | -1392384544 | 80 | 2001-01-14 00:00:00-05\n> > 216.126.84.28 | -1452855018 | 80 | 2001-01-15 00:00:00-05\n> > 216.126.84.28 | -1452855018 | 80 | 2001-01-10 00:00:00-05\n> > 216.126.84.28 | -1452855018 | 80 | 2001-01-09 00:00:00-05\n> > 216.126.84.28 | -1513325492 | 80 | 2001-01-03 00:00:00-05\n> > 216.126.84.28 | -1694736914 | 80 | 2001-01-12 00:00:00-05\n> > 216.126.84.28 | -1815677862 | 80 | 2001-01-11 00:00:00-05\n> >\n> > hub_traf_stats=# \\d daily_stats\n> > Table \"daily_stats\"\n> > Attribute | Type | Modifier\n> > -----------+-----------+----------\n> > ip | inet |\n> > port | integer |\n> > bytes | bigint |\n> > runtime | timestamp |\n> >\n> > do we have anything larger to work with? I've checked docs, but that\n> > looks like about it :(\n> >\n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org\n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n> >\n> >\n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n> \n> \n\n", "msg_date": "Thu, 18 Jan 2001 00:39:56 -0500 (EST)", "msg_from": "Alex Pilosov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nothing larger then int8?" } ]
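The negative maxbytes values are classic int4 wraparound, which matches Marc's own diagnosis: the sums came from a table whose bytes column was int4 and were never cast up. A hedged sketch of the repair, with raw_log standing in as a hypothetical name for the old int4 table; cast before aggregating, or use numeric as Alex suggests if even int8 could someday overflow.

    -- int4 arithmetic wraps past 2^31-1; cast the column up front so the
    -- aggregate is carried in int8 (or numeric(30,0) for truly huge totals):
    insert into daily_stats (ip, port, bytes, runtime)
    select ip, port, sum(bytes::int8), runtime
      from raw_log
     group by ip, port, runtime;

    -- Spot-check for wrapped values already stored:
    select * from daily_stats where bytes < 0;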
[ { "msg_contents": "Tom Lane wrote:\n> \n> Your last commit seems to have broken timestamp, interval, reltime,\n> and horology regress tests on HPUX. Minus signs are showing up in\n> a lot of unexpected-looking places...\n> Is this now the intended behavior?\n\nUh, no. Believe it or not, I had just noticed this myself, and have\nprepared patches to fix it up.\n\nThe problem is with \"traditional Postgres\" interval output. The behavior\nbefore my recent patches was not correct when there was sign-mixing\nbetween fields, but the patches didn't do anything better, and as you\nnoticed some of the regression test looks terrible.\n\nAnyway, I was just getting ready to send a note to the list to this\neffect. I'll try committing patches in the next few minutes, and I think\nthe result is the cleanest interval representation we've had. I've\nincluded a few changes to the \"leading sign\" inclusion for the\n\"ISO-style\" interval also.\n\nThere is a small chance that I won't be able to prepare good patches,\nsince I'm currently sitting behind a firewall and can't update my CVS\ntree locally, but I expect to be able to do this on postgresql.org. Wish\nme luck ;)\n\n - Thomas\n", "msg_date": "Thu, 18 Jan 2001 05:58:56 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Datetime regression tests are all failing" }, { "msg_contents": "Fixes committed.\n\n - Thomas\n\nFix up \"Postgres-style\" time interval representation when fields have\n mixed-signs. Previous effort left way too many minus signs, and was at\n least as broken as the one before that :(\nClean up \"ISO-style\" time interval representation to omit zero fields if\n there is at least one non-zero field. Supress some leading plus signs\n when not necessary for clarity.\nReplace every #ifdef __CYGWIN__ block with a cleaner TIMEZONE_GLOBAL\nmacro\n defined in datetime.h.\n", "msg_date": "Thu, 18 Jan 2001 06:24:57 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Datetime regression tests are all failing" }, { "msg_contents": "Your last commit seems to have broken timestamp, interval, reltime,\nand horology regress tests on HPUX. Minus signs are showing up in\na lot of unexpected-looking places, eg\n\n*** ./expected/timestamp.out\tSat Nov 25 11:05:59 2000\n--- ./results/timestamp.out\tThu Jan 18 01:28:28 2001\n***************\n*** 631,638 ****\n SELECT '' AS \"53\", d1 - timestamp '1997-01-02' AS diff\n FROM TIMESTAMP_TBL WHERE d1 BETWEEN '1902-01-01' AND '2038-01-01';\n 53 | diff \n! ----+----------------------------------------\n! | @ 9863 days 8 hours ago\n | @ 39 days 17 hours 32 mins 1 sec\n | @ 39 days 17 hours 32 mins 1.00 secs\n | @ 39 days 17 hours 32 mins 2.00 secs\n--- 631,638 ----\n SELECT '' AS \"53\", d1 - timestamp '1997-01-02' AS diff\n FROM TIMESTAMP_TBL WHERE d1 BETWEEN '1902-01-01' AND '2038-01-01';\n 53 | diff \n! ----+--------------------------------------------\n! | @ -9863 days -8 hours ago\n | @ 39 days 17 hours 32 mins 1 sec\n | @ 39 days 17 hours 32 mins 1.00 secs\n | @ 39 days 17 hours 32 mins 2.00 secs\n\nIs this now the intended behavior?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jan 2001 01:35:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Datetime regression tests are all failing" } ]
[ { "msg_contents": "Does anyone know if this feature exists? If so, what version or where \ncan a patch be obtained?\n\nThanks\n\n------- Forwarded message follows -------\nDate sent: \tMon, 15 Jan 2001 08:44:46 +0100\nFrom: \t\"J.H.M. Dassen (Ray)\" <[email protected]>\nTo: \[email protected]\nSubject: \tRe: getting number of rows updated within a procedure\n\nOn Sun, Jan 14, 2001 at 23:27:06 +1300, Dan Langille wrote:\n> I'm writing some stuff in PL/pgsql (actually, a lot of stuff). I have a\n> question: At various times, it does UPDATEs. Is there a way to tell if\n> the UPDATE actually affected any rows or not? I couldn't see how to get\n> UPDATE to return anything.\n \nQuoting a recent message by Jan Wieck <[email protected]>:\n:Do a\n:\n: GET DIAGNOSTICS SELECT PROCESSED INTO <int4_variable>; \n:\n:directly after an INSERT, UPDATE or DELETE statement and you'll know\n:how many rows have been hit.\n:\n:Also you can get the OID of an inserted row with\n:\n: GET DIAGNOSTICS SELECT RESULT INTO <int4_variable>;\n\nHTH,\nRay\n-- \n\"The software `wizard' is the single greatest obstacle to computer literacy\nsince the Mac.\"\n\thttp://www.osopinion.com/Opinions/MichaelKellen/MichaelKellen1.html\n------- End of forwarded message -------\n\n--\nDan Langille\nThe FreeBSD Diary - http://freebsddiary.org/\n FreshPorts - http://freshports.org/\n NZ Broadband - http://unixathome.org/broadband/\n", "msg_date": "Thu, 18 Jan 2001 20:15:39 +1300", "msg_from": "\"Dan Langille\" <[email protected]>", "msg_from_op": true, "msg_subject": "GET DIAGNOSTICS SELECT PROCESSED INTO <int4_variable>" }, { "msg_contents": "On Thu, Jan 18, 2001 at 20:15:39 +1300, Dan Langille wrote:\n> Does anyone know if this feature exists? If so, what version or where \n> can a patch be obtained?\n\nApparently you missed Jan's follow-up to my message: \"New functionality in\n7.1 - sorry.\" So you'll have to use the beta.\n\nRay\n-- \n[Open Source] is the finest expression of the free market. Ideas are\nencouraged to proliferate and the best thinking wins. By contrast, most\ncorporations today operate in a central planning straitjacket. \n\thttp://www.thestandard.com/article/display/0,1151,15772,00.html\n", "msg_date": "Thu, 18 Jan 2001 09:05:56 +0100", "msg_from": "\"J.H.M. Dassen (Ray)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GET DIAGNOSTICS SELECT PROCESSED INTO <int4_variable>" }, { "msg_contents": "On 18 Jan 2001, at 9:05, J.H.M. Dassen (Ray) wrote:\n\n> On Thu, Jan 18, 2001 at 20:15:39 +1300, Dan Langille wrote:\n> > Does anyone know if this feature exists? If so, what version or where \n> > can a patch be obtained?\n> \n> Apparently you missed Jan's follow-up to my message: \"New functionality in\n> 7.1 - sorry.\" So you'll have to use the beta.\n\nYes, I did. Thank you for clearing that up. I have been searching the \nmailing list archives, but they seem to be out of date. I could only find \nmy original post and no replies.\n\nOf course, my next two questions are:\n\nWhen will 7.1 leave beta (roughly)?\n\nHow stable is it? 
FWIW, I'll be using it for development, not production \nand basically the only feature not in pre 7.1 I'm looking for is the above.\n\n--\nDan Langille\nThe FreeBSD Diary - http://freebsddiary.org/\n FreshPorts - http://freshports.org/\n NZ Broadband - http://unixathome.org/broadband/\n", "msg_date": "Thu, 18 Jan 2001 22:30:48 +1300", "msg_from": "\"Dan Langille\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GET DIAGNOSTICS SELECT PROCESSED INTO <int4_variable>" }, { "msg_contents": "On Thu, Jan 18, 2001 at 22:30:48 +1300, Dan Langille wrote:\n> When will 7.1 leave beta (roughly)?\n\n From what I've been reading on the lists, \"soon\". I suspect within the next\nmonth or two.\n\n> How stable is it? FWIW, I'll be using it for development, not production \n\nI've not tested it extensively myself, but it passed my lithmus test (import\na database of a couple of million records in one of its tables, with\nnumerous constraints and indexes, drop some data based on columns which\naren't indexed, then vacuum it [*]).\n\nHTH,\nRay\n\n[*] I've had infinite loops VACUUMing this database under 7.0.3.\n-- \n\"The problem with the global village is all the global village idiots.\"\n\tPaul Ginsparg\n", "msg_date": "Thu, 18 Jan 2001 11:30:34 +0100", "msg_from": "\"J.H.M. Dassen (Ray)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GET DIAGNOSTICS SELECT PROCESSED INTO <int4_variable>" }, { "msg_contents": "It is in 7.1beta, and is not documented yet.\n\n\nit> Does anyone know if this feature exists? If so, what version or where \n> can a patch be obtained?\n> \n> Thanks\n> \n> ------- Forwarded message follows -------\n> Date sent: \tMon, 15 Jan 2001 08:44:46 +0100\n> From: \t\"J.H.M. Dassen (Ray)\" <[email protected]>\n> To: \[email protected]\n> Subject: \tRe: getting number of rows updated within a procedure\n> \n> On Sun, Jan 14, 2001 at 23:27:06 +1300, Dan Langille wrote:\n> > I'm writing some stuff in PL/pgsql (actually, a lot of stuff). I have a\n> > question: At various times, it does UPDATEs. Is there a way to tell if\n> > the UPDATE actually affected any rows or not? I couldn't see how to get\n> > UPDATE to return anything.\n> \n> Quoting a recent message by Jan Wieck <[email protected]>:\n> :Do a\n> :\n> : GET DIAGNOSTICS SELECT PROCESSED INTO <int4_variable>; \n> :\n> :directly after an INSERT, UPDATE or DELETE statement and you'll know\n> :how many rows have been hit.\n> :\n> :Also you can get the OID of an inserted row with\n> :\n> : GET DIAGNOSTICS SELECT RESULT INTO <int4_variable>;\n> \n> HTH,\n> Ray\n> -- \n> \"The software `wizard' is the single greatest obstacle to computer literacy\n> since the Mac.\"\n> \thttp://www.osopinion.com/Opinions/MichaelKellen/MichaelKellen1.html\n> ------- End of forwarded message -------\n> \n> --\n> Dan Langille\n> The FreeBSD Diary - http://freebsddiary.org/\n> FreshPorts - http://freshports.org/\n> NZ Broadband - http://unixathome.org/broadband/\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Jan 2001 13:01:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] GET DIAGNOSTICS SELECT PROCESSED INTO <int4_variable>" } ]
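A minimal plpgsql sketch of the syntax Jan quoted, as it stands in the 7.1 beta; the table, column, and function names here are made up for illustration:

    create function update_and_count() returns int4 as '
    declare
        hits int4;
    begin
        update mytab set touched = ''t'' where touched = ''f'';
        get diagnostics select processed into hits;
        return hits;  -- number of rows the UPDATE actually hit
    end;
    ' language 'plpgsql';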
[ { "msg_contents": "\n> > > I do not have the original thread where Andreas describes \n> the behavior\n> > > of mktime() on his machine. Andreas, can you suggest a \n> simple configure\n> > > test to be used?\n> >\n> > #include <time.h>\n> > int main()\n> > {\n> > struct tm tt, *tm=&tt;\n> > int i = -50000000;\n> > tm = localtime (&i);\n> > i = mktime (tm);\n> > if (i != -50000000) /* on AIX this check could also \n> be (i == -1) */\n> > {\n> > printf(\"ERROR: mktime(3) does not correctly support \n> datetimes before 1970\\n\");\n> > return(1);\n> > }\n> > }\n> \n> You don't need to put this check into configure, you can just \n> do the check after mktime() is used.\n\nNo, we need that info for the output functions that only use localtime.\nThe intent is, to not use DST before 1970 on platforms that don't have\nmktime for dates before 1970. \n\nAndreas\n", "msg_date": "Thu, 18 Jan 2001 09:40:44 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: AW: AW: AW: Re: tinterval - operator problems o\n\tn AI X" }, { "msg_contents": "Zeugswetter Andreas SB writes:\n\n> > You don't need to put this check into configure, you can just\n> > do the check after mktime() is used.\n>\n> No, we need that info for the output functions that only use localtime.\n> The intent is, to not use DST before 1970 on platforms that don't have\n> mktime for dates before 1970.\n\nYou can't do execution time checks in configure. You're going to have to\ndo it at run-time.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 19 Jan 2001 00:48:06 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: AW: AW: Re: tinterval - operator problems\n o n AI X" } ]
[ { "msg_contents": "Hi.\n\nDoes any one know if getCrossReference (jdbc method) works with postgresql 7.0 ?\n\nI`m using jdbc V 7.0-1.2\n\nany example ?\n\nThanks.\n\nFelipe Diaz Cardona\n\n\n\n\n\n\n\n\n\nHi.\n \nDoes any one know if getCrossReference (jdbc \nmethod) works with postgresql 7.0 ?\n \nI`m using jdbc V 7.0-1.2\n \nany example ?\n \nThanks.\n \nFelipe Diaz Cardona", "msg_date": "Thu, 18 Jan 2001 04:57:29 -0500", "msg_from": "\"Felipe Diaz Cardona\" <[email protected]>", "msg_from_op": true, "msg_subject": "getCrossReference" } ]
[ { "msg_contents": "Welcome to psql, the PostgreSQL interactive terminal.\n \nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n \nstage=# alter table dartists add column darank integer ;\nALTER\nstage=# update dartists set darank = 100 ;\nUPDATE 56240\nstage=# vacuum dartists ;\nVACUUM\nstage=# alter table zsong add column zsrank integer ;\nALTER\nstage=# update zsong set zsrank = 100 ;\nFATAL 1: Memory exhausted in AllocSetAlloc()\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Succeeded.\nstage=# \\q \n\nThere are 2273429 records in the table. \nPostgres as started as:\nsu -l $PGUSER -c \"$POSTMASTER -i -S -B 8192 -D$PGDIR -o '-Ffs -S 8192'\"\nThe Machine has 512M RAM. \nThe block size is 32K.\n\nA dump of zsong schema looks like:\n\n Table \"zsong\"\n Attribute | Type | Modifier\n-----------+-----------+----------\n muzenbr | integer |\n disc | integer |\n trk | integer |\n song | varchar() |\n artistid | integer |\n acd | varchar() |\n trackid | integer |\n zsrank | integer |\nIndices: zsong_artistid_ndx,\n zsong_lsong_ndx,\n zsong_muzenbr_ndx,\n zsong_song_ndx,\n \nzsong_trackid_ndx \n\nI have seen this behavior a couple times, in fact I normally write a\nscript that does multiple conditional updates, followed by a vacuum. I\njust forgot this time. I can work around this, but I thought you might\nbe interested. It has also seemed to cause some database corruption:\nNOTICE: Rel zsong: TID 7389/275: OID IS INVALID. TUPGONE 1. \n\nP.S. This was a staging database, it was created as \"pg_dump dbname |\npsql stage\" just prior to the alters.\n\n \n-- \nhttp://www.mohawksoft.com\n", "msg_date": "Thu, 18 Jan 2001 07:17:41 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": true, "msg_subject": "What's with update, 7.0.2?" } ]
[ { "msg_contents": "I'm compiling beta 1 of 7.1 and I have a par of questions.\n\nFirst I see things like this in the compilation output:\n\ngcc -g -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error \n-I/usr/local/ssl//include -I../../../src/include -c analyze.c -o analyze.o\nanalyze.c: In function `transformInsertStmt':\nanalyze.c:425: warning: unused variable `resnode' \n\nI know it's nothing serious, but..... Is it because it's in beta, and some \ntrashed code hasn't been taken off?\n\nThe other question is if I have to do something special (dump and restore) \nwhen upgrading from 7.1-beta1 to 7.1-final (or any of the other betas)?\n\nSaludos... :-)\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMart�n Marqu�s\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Thu, 18 Jan 2001 09:58:16 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": true, "msg_subject": "compiling 7.1-beta1" }, { "msg_contents": "\"Martin A. Marques\" <[email protected]> writes:\n> I'm compiling beta 1 of 7.1 and I have a par of questions.\n\n> analyze.c: In function `transformInsertStmt':\n> analyze.c:425: warning: unused variable `resnode' \n\nFixed in current sources. (I think there is still one unused-var\ncomplaint left in the XLOG code; I've been waiting on Vadim to do\nsomething about it, because it looks like there is code still to be\nwritten there.)\n\n> The other question is if I have to do something special (dump and restore) \n> when upgrading from 7.1-beta1 to 7.1-final (or any of the other betas)?\n\nYou will need an initdb to go from beta1 to beta3. Sorry about that;\nwe try to avoid forced initdb after beta cycle starts, but sometimes\nit's not possible.\n\nYou might want to skip testing beta1 and just start with beta3, or even\na current nightly snapshot.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jan 2001 13:15:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compiling 7.1-beta1 " }, { "msg_contents": "El Jue 18 Ene 2001 15:15, Tom Lane escribi�:\n>\n> > The other question is if I have to do something special (dump and\n> > restore) when upgrading from 7.1-beta1 to 7.1-final (or any of the other\n> > betas)?\n>\n> You will need an initdb to go from beta1 to beta3. Sorry about that;\n> we try to avoid forced initdb after beta cycle starts, but sometimes\n> it's not possible.\n>\n> You might want to skip testing beta1 and just start with beta3, or even\n> a current nightly snapshot.\n\nWell, my problem was that a downgrade had to be made to the only Solaris8 \nSPARC back to Solaris7, and postgres broke. So I recompiled it, in a new \ndirectory, and copied the data directory to the new postgres instaltion \ndirectory, but when I connect with pgaccess to and db, I see the postgres \ntables (like pg_aggregate, pg_group, pg_scripts, pg_trigger, etc). Did I do \nsomething wrong? 
Any way to fix it?\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told me I had to do it.\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Fri, 19 Jan 2001 09:37:44 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: compiling 7.1-beta1" } ]
[ { "msg_contents": "> Could anyone please tell me what changed in some of the include files. I\n> just noticed that ecpg won't compile anymore.\n\nI haven't been able to update my cvs sources for a few days, but on my\nmachine preproc.y produces 321 shift/reduce errors. Does it still, or is\nthat patched up?\n\n - Thomas\n", "msg_date": "Thu, 18 Jan 2001 13:58:20 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Changes to include files" }, { "msg_contents": "Could anyone please tell me what changed in some of the include files. I\njust noticed that ecpg won't compile anymore.\n\nAlso I think we have yet to agree on the libpq/ecpg problem, or did I miss a\nmail yet again?\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Thu, 18 Jan 2001 15:23:10 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Changes to include files" }, { "msg_contents": "> > Could anyone please tell me what changed in some of the include files. I\n> > just noticed that ecpg won't compile anymore.\n> \n> I haven't been able to update my cvs sources for a few days, but on my\n> machine preproc.y produces 321 shift/reduce errors. Does it still, or is\n> that patched up?\n\nIn /pg/pl/plpgsql/src, I get:\n\n bison -y -d gram.y\n sed -e 's/yy/plpgsql_yy/g' -e 's/YY/PLPGSQL_YY/g' < y.tab.c > ./pl_gram.c\n sed -e 's/yy/plpgsql_yy/g' -e 's/YY/PLPGSQL_YY/g' < y.tab.h > ./pl.tab.h\n rm -f y.tab.c y.tab.h\n\nLooks OK to me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Jan 2001 13:12:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changes to include files" }, { "msg_contents": "That is strange. It is all compiling here.\n\n> Could anyone please tell me what changed in some of the include files. I\n> just noticed that ecpg won't compile anymore.\n> \n> Also I think we have yet to agree on the libpq/ecpg problem, or did I miss a\n> mail yet again?\n> \n> Michael\n> -- \n> Michael Meskes\n> [email protected]\n> Go SF 49ers! Go Rhein Fire!\n> Use Debian GNU/Linux! Use PostgreSQL!\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Jan 2001 13:13:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changes to include files" }, { "msg_contents": "Michael Meskes <[email protected]> writes:\n> Could anyone please tell me what changed in some of the include files. I\n> just noticed that ecpg won't compile anymore.\n\nIt was building for me as of last night (last cvs update Jan 18 00:03 EST).\nDid someone break something since then?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jan 2001 14:06:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changes to include files " }, { "msg_contents": "On Thu, Jan 18, 2001 at 01:58:20PM +0000, Thomas Lockhart wrote:\n> I haven't been able to update my cvs sources for a few days, but on my\n> machine preproc.y produces 321 shift/reduce errors. 
Does it still, or is\n> that patched up?\n\nHmm, I never saw that:\n\npostgres@feivel:~/pgsql/src/interfaces/ecpg.mm/preproc$ make preproc.c\nbison -y -d preproc.y\nmv y.tab.c ./preproc.c\nmv y.tab.h ./preproc.h\n\nSeems to work fine here. But it's pgc.c that does not work anymore:\n\ngcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations\n-I../../../../src/include -I./../include -DMAJOR_VERSION=2 -DMINOR_VERSION=8\n-DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/usr/local/pgsql/include\\\" -c -o pgc.o\npgc.c\nIn file included from ../../../../src/include/miscadmin.h:26,\n from pgc.l:24:\n../../../../src/include/storage/ipc.h:43: parse error before `IpcSemaphoreKey'\n../../../../src/include/storage/ipc.h:43: warning: type defaults to `int' in declaration of `IpcSemaphoreKey'\n...\n\nAfter that I get lots of warnings and parse errors in several include files.\n\nMichael\n\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Fri, 19 Jan 2001 10:28:09 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changes to include files" }, { "msg_contents": "Michael Meskes <[email protected]> writes:\n> Seems to work fine here. But it's pgc.c that does not work anymore:\n\n> gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations\n> -I../../../../src/include -I./../include -DMAJOR_VERSION=2 -DMINOR_VERSION=8\n> -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/usr/local/pgsql/include\\\" -c -o pgc.o\n> pgc.c\n> In file included from ../../../../src/include/miscadmin.h:26,\n> from pgc.l:24:\n> ../../../../src/include/storage/ipc.h:43: parse error before `IpcSemaphoreKey'\n> ../../../../src/include/storage/ipc.h:43: warning: type defaults to `int' in declaration of `IpcSemaphoreKey'\n> ...\n\nDid you get the update I made on 1/14 to #include \"postgres.h\" at the\nhead of pgc.l ?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Jan 2001 10:29:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changes to include files " }, { "msg_contents": "On Fri, Jan 19, 2001 at 10:29:15AM -0500, Tom Lane wrote:\n> Did you get the update I made on 1/14 to #include \"postgres.h\" at the\n> head of pgc.l ?\n\nNo, that one is missing. Thanks. I'll add it immediately.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Fri, 19 Jan 2001 16:34:07 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changes to include files" } ]
[ { "msg_contents": "> Is it possible to go from text->inet in v7.0.3? if not, is it in v7.1?\n\nSeems to not be in 7.1 either.\n\n - Thomas\n", "msg_date": "Thu, 18 Jan 2001 14:03:25 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: converting from text -> inet ... possible?" }, { "msg_contents": "\nI scanned the archives, and one person asked it back in July, but there\ndoesn't appear to be any followup ...\n\nIs it possible to go from text->inet in v7.0.3? if not, is it in v7.1?\n\nthe following doesn't work:\n\ntemplate1=# select '216.126.84.1'::text::inet;\nERROR: Cannot cast type 'text' to 'inet'\n\nbut I could be missing something in the docs?\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Thu, 18 Jan 2001 10:43:17 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "converting from text -> inet ... possible?" }, { "msg_contents": "\nhow hard would it be to rectify before beta4 is put out? i'm doing a\nmanual dump/restore to get my data from text->inet ... not elegant, but it\nworks ... ::)\n\nOn Thu, 18 Jan 2001, Thomas Lockhart wrote:\n\n> > Is it possible to go from text->inet in v7.0.3? if not, is it in v7.1?\n>\n> Seems to not be in 7.1 either.\n>\n> - Thomas\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Thu, 18 Jan 2001 11:17:03 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: converting from text -> inet ... possible?" }, { "msg_contents": "Is this a TODO item?\n\n> \n> I scanned the archives, and one person asked it back in July, but there\n> doesn't appear to be any followup ...\n> \n> Is it possible to go from text->inet in v7.0.3? if not, is it in v7.1?\n> \n> the following doesn't work:\n> \n> template1=# select '216.126.84.1'::text::inet;\n> ERROR: Cannot cast type 'text' to 'inet'\n> \n> but I could be missing something in the docs?\n> \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Jan 2001 13:14:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: converting from text -> inet ... possible?" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> I scanned the archives, and one person asked it back in July, but there\n> doesn't appear to be any followup ...\n> \n> Is it possible to go from text->inet in v7.0.3? if not, is it in v7.1?\n> \n> the following doesn't work:\n> \n> template1=# select '216.126.84.1'::text::inet;\n> ERROR: Cannot cast type 'text' to 'inet'\n\nYou could use something like that:\nhannu=# select inet_in(textout('127.0.0.1/24'::text)) as n;\n n \n--------------\n 127.0.0.1/24\n(1 row)\n-------------------------------\nHannu\n", "msg_date": "Thu, 18 Jan 2001 19:46:38 +0000", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: converting from text -> inet ... possible?" 
}, { "msg_contents": "> Is this a TODO item?\n\nYup. Somehow there is a conversion from inet to text, but not the other\nway around.\n\n - Thomas\n", "msg_date": "Fri, 19 Jan 2001 00:53:52 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: converting from text -> inet ... possible?" }, { "msg_contents": "Added to TODO:\n\n\t* Add conversion function from text to inet\n\n> > Is this a TODO item?\n> \n> Yup. Somehow there is a conversion from inet to text, but not the other\n> way around.\n> \n> - Thomas\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Jan 2001 21:06:02 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: converting from text -> inet ... possible?" } ]
[ { "msg_contents": "I think I know what the problem is:\n\nI have a couple indexes created with a \"lower\" function to index on\nlowercase.\n\nTo return a lowercase text object, I use the \"lower\" function, as copied\nfrom the postgres source (Oddly enough varchar does not work):\n\ntext * lower(text *string)\n{\n text *ret;\n char *ptr,\n *ptr_ret;\n int m;\n \n if ((string == (text *) NULL) || ((m = VARSIZE(string) -\nVARHDRSZ) <= 0))\n return string;\n \n ret = (text *) palloc(VARSIZE(string));\n VARSIZE(ret) = VARSIZE(string);\n \n ptr = VARDATA(string);\n ptr_ret = VARDATA(ret);\n \n while (m--)\n *ptr_ret++ = tolower((unsigned char) *ptr++);\n \n return ret;\n} \n\nDuring a long update, the indexes must also be updated.\n\nI bet, the memory is not freed until after the update is completed, and\nthat during the update all the previous results of \"lower\" remain in\nRAM. This explains why it is slow, because I begin to hit swap. This\nexplains why it crashes, can't get memory.\n\nIs this a reasonable conclusion given the source and the circumstances?\nIf so, if I alter the text* passed to me, would/could it affect the\nsystem? i.e. will it affect the disk image or other processes currently\naccessing the record? If I return the text pointer passed to me after\nmodification, will postgres attempt to free it twice?\n", "msg_date": "Thu, 18 Jan 2001 09:18:34 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": true, "msg_subject": "palloc? (re: What's with update?)" }, { "msg_contents": "mlw <[email protected]> writes:\n> I think I know what the problem is:\n> I have a couple indexes created with a \"lower\" function to index on\n> lowercase.\n\nAh. You're correct, functional indexes leak memory in existing PG\nreleases. The memory is reclaimed at end of statement, which is not\nsoon enough if you insert/update a large number of rows.\n\nI think this is fixed in 7.1 though, if you want to try it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jan 2001 13:28:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: palloc? (re: What's with update?) " } ]
[ { "msg_contents": "I'm trying to compile postgres on a Solaris 7 SPARC machine and I get this \nerror:\n\nmake[4]: Leaving directory \n`/space/pruebas/postgresql-7.1beta1/src/interfaces/libpq'\ngcc -g -Wall -Wmissing-prototypes -Wmissing-declarations -fPIC \n-I../../../src/include -I../../../src/interfaces/libpq -c pgtcl.c -o pgtcl.o\nIn file included from pgtcl.c:19:\nlibpgtcl.h:19: tcl.h: No such file or directory\nIn file included from pgtcl.c:20:\npgtclCmds.h:17: tcl.h: No such file or directory\nmake[3]: *** [pgtcl.o] Error 1\nmake[3]: Leaving directory \n`/space/pruebas/postgresql-7.1beta1/src/interfaces/libpgtcl'\nmake[2]: *** [all] Error 2\nmake[2]: Leaving directory `/space/pruebas/postgresql-7.1beta1/src/interfaces'\nmake[1]: *** [all] Error 2\nmake[1]: Leaving directory `/space/pruebas/postgresql-7.1beta1/src'\nmake: *** [all] Error 2\n*** Error code 2\n\nNow, tcl.h is in /usr/local/include/.\n\nWhat can I do?\n\nSaludos... :-)\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMart�n Marqu�s\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Thu, 18 Jan 2001 12:06:29 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": true, "msg_subject": "compilation error" } ]
[ { "msg_contents": "\n> I'm trying to compile postgres on a Solaris 7 SPARC machine \n> and I get this error:\n\n> gcc -g -Wall -Wmissing-prototypes -Wmissing-declarations -fPIC \n> -I../../../src/include -I../../../src/interfaces/libpq -c pgtcl.c -o pgtcl.o\n> In file included from pgtcl.c:19:\n> libpgtcl.h:19: tcl.h: No such file or directory\n\n> Now, tcl.h is in /usr/local/include\n\nRun configure with:\n./configure --with-includes=/usr/local/include --with-libraries=/usr/local/lib\n\nAndreas\n", "msg_date": "Thu, 18 Jan 2001 16:24:10 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] compilation error" } ]
[ { "msg_contents": "Hi,\n\nI am working on a project that will integrate data from an Oracle database\nwith data from a Postgresql database. Is there a gateway to Oracle from\nPostgresql? I know that in Oracle there are gateways to other database such\nas DB2.\n\nThanks for the help.\n\nWenjin Zheng\n\n", "msg_date": "Thu, 18 Jan 2001 10:01:50 -0800", "msg_from": "\"wenjin.zheng\" <[email protected]>", "msg_from_op": true, "msg_subject": "Gateway" }, { "msg_contents": "I wish there was, but no gateway exists yet.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> Hi,\n> \n> I am working on a project that will integrate data from an Oracle database\n> with data from a Postgresql database. Is there a gateway to Oracle from\n> Postgresql? I know that in Oracle there are gateways to other database such\n> as DB2.\n> \n> Thanks for the help.\n> \n> Wenjin Zheng\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Jan 2001 13:22:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Gateway" } ]
[ { "msg_contents": "Just tried it for the first time:\n% cd src/test/locale\n% gmake all \ngmake: Circular test-pgsql-locale <- all dependency dropped.\ncd: can't cd to pgsql-locale\ngmake: *** [test-pgsql-locale] Error 2\n\nI think the next stage is gmake test-koi8..\n\nCheers,\n\nPatrick\n", "msg_date": "Thu, 18 Jan 2001 18:24:40 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": true, "msg_subject": "test/locale broken" }, { "msg_contents": "Patrick Welche writes:\n\n> Just tried it for the first time:\n> % cd src/test/locale\n> % gmake all\n> gmake: Circular test-pgsql-locale <- all dependency dropped.\n> cd: can't cd to pgsql-locale\n> gmake: *** [test-pgsql-locale] Error 2\n\nI think it should work now.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 19 Jan 2001 20:52:13 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: test/locale broken" }, { "msg_contents": "On Fri, Jan 19, 2001 at 08:52:13PM +0100, Peter Eisentraut wrote:\n> Patrick Welche writes:\n> \n> > Just tried it for the first time:\n> > % cd src/test/locale\n> > % gmake all\n> > gmake: Circular test-pgsql-locale <- all dependency dropped.\n> > cd: can't cd to pgsql-locale\n> > gmake: *** [test-pgsql-locale] Error 2\n> \n> I think it should work now.\n\nYes, Thanks!\n\nPatrick\n", "msg_date": "Sat, 20 Jan 2001 16:08:29 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": true, "msg_subject": "Re: test/locale broken" } ]
[ { "msg_contents": ">From http://www.postgresql.org/devel-corner/docs/admin/charset.htm\n\n Once you have chosen a set of localization rules this way you must\n keep them fixed for any particular database cluster. That means that\n the locales that were active when you ran initdb must be kept the same\n when you start the postmaster.\n\nIs that still true? I seem to remember something about the postmaster using\nwhatever initdb set..\n\nCheers,\n\nPatrick\n", "msg_date": "Thu, 18 Jan 2001 18:30:34 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": true, "msg_subject": "charset.htm" }, { "msg_contents": "Patrick Welche <[email protected]> writes:\n>> From http://www.postgresql.org/devel-corner/docs/admin/charset.htm\n> Once you have chosen a set of localization rules this way you must\n> keep them fixed for any particular database cluster. That means that\n> the locales that were active when you ran initdb must be kept the same\n> when you start the postmaster.\n\n> Is that still true?\n\nYup, it's out of date. Working on a fix now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jan 2001 22:17:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: charset.htm " } ]
[ { "msg_contents": "> Fixed in current sources. (I think there is still one unused-var\n> complaint left in the XLOG code; I've been waiting on Vadim to do\n> something about it, because it looks like there is code still to be\n> written there.)\n\nJust commented out for now.\n\nVadim\n", "msg_date": "Thu, 18 Jan 2001 10:34:18 -0800", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: compiling 7.1-beta1 " } ]
[ { "msg_contents": "Doesn't appear that you can use OLD.oid in triggers (actually does more, =\nbut thats good enough):\n\nCREATE FUNCTION history_update_delete() RETURNS OPAQUE AS '\nDECLARE\n v_tablename varchar(40);\n v_history_tablename varchar(48);\n v_operation varchar(6);\nBEGIN\n v_operation :=3D TG_OP;\n v_tablename :=3D TG_RELNAME;\n v_history_tablename :=3D ''history_'' || v_tablename;\n\n INSERT INTO v_history_tablename SELECT ''v_operation'' as =\nhistory_change_type\n , OLD.oid as orignal_oid\n , *\n FROM v_tablename\n WHERE oid =3D OLD.oid;\n\n RETURN NEW;\n\nEND;\n' LANGUAGE 'plpgsql';\n--\nERROR: record old has no field oid\n\n\nThe below also fails.\n\ncreate table example (\n original_oid oid REFERENCES table(oid)\n ON UPDATE CASCADE\n ON DELETE SET NULL\n);\n--\nERROR: UNIQUE constraint matching given keys for reference table \"table\" not found\n\nPostgresql 7.1beta3. I'd consider these to be bugs myself but I've not =\ntried them in previous versions to know if it's really just a new =\nfeature :)\n\n--\nRod Taylor\n\nThere are always four sides to every story: your side, their side, the truth, and what really happened.", "msg_date": "Thu, 18 Jan 2001 14:42:20 -0500", "msg_from": "\"Rod Taylor\" <[email protected]>", "msg_from_op": true, "msg_subject": "OLD.oid issues..." }, { "msg_contents": "On Thu, 18 Jan 2001, Rod Taylor wrote:\n\n> create table example (\n> original_oid oid REFERENCES table(oid)\n> ON UPDATE CASCADE\n> ON DELETE SET NULL\n> );\n> --\n> ERROR: UNIQUE constraint matching given keys for reference table \"table\" not found\n> \n> Postgresql 7.1beta3. I'd consider these to be bugs myself but I've not =\n> tried them in previous versions to know if it's really just a new =\n> feature :)\n\nActually I know the latter never really should have worked in past\nversions. It may have let you define it before, but I believe it would\nhave errored on the actual fk constraint when used, now it just won't let\nyou define it. I think referencing oids is on the todo list (although\nyou have to give ref actions like the ones you have or the constraint is\npretty ugly).\n\n", "msg_date": "Thu, 18 Jan 2001 12:58:17 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OLD.oid issues..." } ]
[ { "msg_contents": "I have attached a simple change to src/pl/plperl/plperl.c to\nenable the :bash_math opcodes. Currently plperl.c only \nenables the :default opcodes. This leave out about five of six\nmath functions including sqrt().\n\nIt might be worth considering allowing the user's to enable\nother packages on the command line. However, most of the other\npackages allow you to do things like access the underlying file\nsystem (as the owner of the backend process), make system calls, \nand perform network operations. \n\nThe patch is off of the 7.0.3 released code.\n\n\n-- \n----------------------------------------------------------------\nTravis Bauer | CS Grad Student | IU |www.cs.indiana.edu/~trbauer\n----------------------------------------------------------------", "msg_date": "Thu, 18 Jan 2001 16:08:34 -0500", "msg_from": "Travis Bauer <[email protected]>", "msg_from_op": true, "msg_subject": "PlPerl.c patch" }, { "msg_contents": "Travis Bauer <[email protected]> writes:\n> I have attached a simple change to src/pl/plperl/plperl.c to\n> enable the :bash_math opcodes. Currently plperl.c only \n> enables the :default opcodes. This leave out about five of six\n> math functions including sqrt().\n\nThis seems like a reasonable change, but could we trouble you to\nsubmit it in diff -c format? Without the context it's way too\nerror-prone to apply.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jan 2001 23:33:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PlPerl.c patch " }, { "msg_contents": "Attached. \n\nTom Lane ([email protected]) wrote:\n\n> Travis Bauer <[email protected]> writes:\n> > I have attached a simple change to src/pl/plperl/plperl.c to\n> > enable the :bash_math opcodes. Currently plperl.c only \n> > enables the :default opcodes. This leave out about five of six\n> > math functions including sqrt().\n> \n> This seems like a reasonable change, but could we trouble you to\n> submit it in diff -c format? Without the context it's way too\n> error-prone to apply.\n> \n> \t\t\tregards, tom lane\n\n\n\n-- \n----------------------------------------------------------------\nTravis Bauer | CS Grad Student | IU |www.cs.indiana.edu/~trbauer\n----------------------------------------------------------------", "msg_date": "Fri, 19 Jan 2001 06:08:32 -0500", "msg_from": "Travis Bauer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PlPerl.c patch" }, { "msg_contents": "Thanks. Applied. (Tom already approved it.)\n\n> Attached. \n> \n> Tom Lane ([email protected]) wrote:\n> \n> > Travis Bauer <[email protected]> writes:\n> > > I have attached a simple change to src/pl/plperl/plperl.c to\n> > > enable the :bash_math opcodes. Currently plperl.c only \n> > > enables the :default opcodes. This leave out about five of six\n> > > math functions including sqrt().\n> > \n> > This seems like a reasonable change, but could we trouble you to\n> > submit it in diff -c format? Without the context it's way too\n> > error-prone to apply.\n> > \n> > \t\t\tregards, tom lane\n> \n> \n> \n> -- \n> ----------------------------------------------------------------\n> Travis Bauer | CS Grad Student | IU |www.cs.indiana.edu/~trbauer\n> ----------------------------------------------------------------\n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Jan 2001 11:14:38 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PlPerl.c patch" } ]
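For what it's worth, the Opcode tag in stock Perl is spelled :base_math (sqrt, sin, cos, exp, log and friends), so :bash_math above is presumably shorthand or a typo for that. With the extra opcodes enabled, a function like this stops failing on sqrt(); the function itself is invented for illustration:

    create function perl_hypot(float8, float8) returns float8 as '
        my ($a, $b) = @_;
        return sqrt($a * $a + $b * $b);
    ' language 'plperl';

    select perl_hypot(3.0, 4.0);   -- 5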
[ { "msg_contents": "\"Stefan Waidele jun.\" wrote:\n >At 13:37 18.01.2001 -0500, Tom Lane wrote:\n >>\"Stefan Waidele jun.\" <[email protected]> writes:\n >> > How can I tell Postgres to return an interval value in an format like \n >> hhh:mm?\n >>\n >>See to_char(),\n >>http://www.postgresql.org/devel-corner/docs/postgres/functions-formatting.h\n >tm\n\nto_char() can't take an interval, even in 7.1:\n\nbray=# select proname,pronargs,proargtypes from pg_proc where proname = 'to_char';\n proname | pronargs | proargtypes \n---------+----------+-------------\n to_char | 2 | 20 25\n to_char | 2 | 23 25\n to_char | 2 | 700 25\n to_char | 2 | 701 25\n to_char | 2 | 1184 25\n to_char | 2 | 1700 25\n(6 rows)\n\nand date_part() merely extracts the requested part, thus losing data:\n\nbray=# select date_part('hour','3 days 10:23'::INTERVAL);\n date_part \n-----------\n 10\n(1 row)\n\nCan to_char be extended?\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"For the eyes of the LORD run to and fro throughout the\n whole earth, to show himself strong in the behalf of \n them whose heart is perfect toward him...\" \n II Chronicles 16:9 \n\n\n", "msg_date": "Fri, 19 Jan 2001 00:02:41 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [NOVICE] Re: Interval output format " } ]
[ { "msg_contents": "I've completed a paper design for the reimplementation of DeadLockCheck.\nNotes attached, in case anyone wants to kibitz. (This will appear in\nstorage/lmgr/README when I commit the code.)\n\n\t\t\tregards, tom lane\n\n\n---------------------------------------------------------------------------\n\nThe deadlock detection algorithm:\n\nSince we allow user transactions to request locks in any order, deadlock\nis possible. We use a deadlock detection/breaking algorithm that is\nfairly standard in essence, but there are many special considerations\nneeded to deal with Postgres' generalized locking model.\n\nA key design consideration is that we want to make routine operations\n(lock grant and release) run quickly when there is no deadlock, and avoid\nthe overhead of deadlock handling as much as possible. We do this using\nan \"optimistic waiting\" approach: if a process cannot acquire the lock\nit wants immediately, it goes to sleep without any deadlock check. But\nit also sets a delay timer, with a delay of DeadlockTimeout milliseconds\n(typically set to one second). If the delay expires before the process is\ngranted the lock it wants, it runs the deadlock detection/breaking code.\nNormally this code will determine that there is no deadlock condition,\nand then the process will go back to sleep and wait quietly until it is\ngranted the lock. But if a deadlock condition does exist, it will be\nresolved, usually by aborting the detecting process' transaction. In this\nway, we avoid deadlock handling overhead whenever the wait time for a lock\nis less than DeadlockTimeout, while not imposing an unreasonable delay in\ndetecting a real deadlock when one does occur.\n\nLock acquisition (routines LockAcquire and ProcSleep) follows these rules:\n\n1. A lock request is granted immediately if it does not conflict with any\nexisting or waiting lock request, or if the process already holds an\ninstance of the same lock type (eg, there's no penalty to acquire a read\nlock twice). Note that a process never conflicts with itself, eg one can\nobtain read lock when one already holds exclusive lock.\n\n2. Otherwise the process joins the lock's wait queue. Normally it will be\nadded to the end of the queue, but there is an exception: if the process\nalready holds locks on this same lockable object that conflict with the\nrequest of any pending waiter, then the process will be inserted in the\nwait queue just ahead of the first such waiter. (If we did not make this\ncheck, the deadlock detection code would adjust the queue order to resolve\nthe conflict, but it's relatively cheap to make the check in ProcSleep and\navoid a deadlock timeout delay in this case.) Note special case: if the\nprocess holds locks that conflict with the first waiter, so that it would\ngo at the front of the queue, and its request does not conflict with the\nalready-granted locks, then the process will be granted the lock without\ngoing to sleep at all.\n\nWhen a lock is released, the lock release routine (ProcLockWakeup) scans\nthe lock object's wait queue. Each waiter is awoken if (a) its request\ndoes not conflict with already-granted locks, and (b) its request does\nnot conflict with the requests of prior un-wakable waiters. 
Rule (b)\nensures that conflicting requests are granted in order of arrival.\nThere are cases where a later waiter must be allowed to go in front of\nconflicting earlier waiters to avoid deadlock, but it is not\nProcLockWakeup's responsibility to recognize these cases; instead, the\ndeadlock detection code re-orders the wait queue when necessary.\n\nTo perform deadlock checking, we use the standard method of viewing the\nvarious processes as nodes in a directed graph (the waits-for graph or\nWFG). There is a graph edge leading from process A to process B if A\nwaits for B, ie, A is waiting for some lock and B holds a conflicting\nlock. There is a deadlock condition if and only if the WFG contains\na cycle. We detect cycles by searching outward along waits-for edges\nto see if we return to our starting point. There are three possible\noutcomes:\n\n1. All outgoing paths terminate at a running process (which has no\noutgoing edge).\n\n2. A deadlock is detected by looping back to the start point. We resolve\nsuch a deadlock by canceling the start point's lock request and reporting\nan error in that transaction, which normally leads to transaction abort\nand release of that transaction's held locks. Note that it's sufficient\nto cancel one request to remove the cycle; we don't need to kill all the\ntransactions involved.\n\n3. Some path(s) loop back to a node other than the start point. This\nindicates a deadlock, but one that does not involve our starting process.\nWe ignore this condition on the grounds that resolving such a deadlock\nis the responsibility of the processes involved --- killing our start-\npoint process would not resolve the deadlock. So, cases 1 and 3 both\nreport \"no deadlock\".\n\nPostgres' situation is a little more complex than the standard discussion\nof deadlock detection, for two reasons:\n\n1. A process can be waiting for more than one other process, since there\nmight be multiple holders of (nonconflicting) lock types that all conflict\nwith the waiter's request. This creates no real difficulty however; we\nsimply need to be prepared to trace more than one outgoing edge.\n\n2. If a process A is behind a process B in some lock's wait queue, and\ntheir requested locks conflict, then we must say that A waits for B, since\nProcLockWakeup will never awaken A before B. This creates additional\nedges in the WFG. We call these \"soft\" edges, as opposed to the \"hard\"\nedges induced by locks already held. Note that if B already holds any\nlocks conflicting with A's request, then their relationship is a hard edge\nnot a soft edge.\n\nA \"soft\" block, or wait-priority block, has the same potential for\ninducing deadlock as a hard block. However, we may be able to resolve\na soft block without aborting the transactions involved: we can instead\nrearrange the order of the wait queue. This rearrangement reverses the\ndirection of the soft edge between two processes with conflicting requests\nwhose queue order is reversed. 
If we can find a rearrangement that\neliminates a cycle without creating new ones, then we can avoid an abort.\nChecking for such possible rearrangements is the trickiest part of the\nalgorithm.\n\nThe workhorse of the deadlock detector is a routine FindLockCycle() which\nis given a starting point process (which must be a waiting process).\nIt recursively scans outwards across waits-for edges as discussed above.\nIf it finds no cycle involving the start point, it returns \"false\".\n(As discussed above, we can ignore cycles not involving the start point.)\nWhen such a cycle is found, FindLockCycle() returns \"true\", and as it\nunwinds it also builds a list of any \"soft\" edges involved in the cycle.\nIf the resulting list is empty then there is a hard deadlock and the\nconfiguration cannot succeed. However, if the list is not empty, then\nreversing any one of the listed edges through wait-queue rearrangement\nwill eliminate that cycle. Since such a reversal might create cycles\nelsewhere, we may need to try every possibility. Therefore, we need to\nbe able to invoke FindLockCycle() on hypothetical configurations (wait\norders) as well as the current real order.\n\nThe easiest way to handle this seems to be to have a lookaside table that\nshows the proposed new queue order for each wait queue that we are\nconsidering rearranging. This table is passed to FindLockCycle, and it\nbelieves the given queue order rather than the \"real\" order for each lock\nthat has an entry in the lookaside table.\n\nWe build a proposed new queue order by doing a \"topological sort\" of the\nexisting entries. Each soft edge that we are currently considering\nreversing is a property of the partial order that the topological sort\nhas to enforce. We must use a sort method that preserves the input\nordering as much as possible, so as not to gratuitously break arrival\norder for processes not involved in a deadlock. (This is not true of the\ntsort method shown in Knuth, for example, but it's easily done by a simple\ndoubly-nested-loop method that emits the first legal candidate at each\nstep. Fortunately, we don't need a highly efficient sort algorithm, since\nthe number of partial order constraints is not likely to be large.) Note\nthat failure of the topological sort tells us we have conflicting ordering\nconstraints, and therefore that the last-added soft edge reversal\nconflicts with a prior edge reversal. We need to detect this case to\navoid an infinite loop in the case where no possible rearrangement will\nwork: otherwise, we might try a reversal, find that it still leads to\na cycle, then try to un-reverse the reversal while trying to get rid of\nthat cycle, etc etc. Topological sort failure tells us the un-reversal\nis not a legitimate move in this context.\n\nSo, the basic step in our rearrangement method is to take a list of\nsoft edges in a cycle (as returned by FindLockCycle()) and successively\ntry the reversal of each one as a topological-sort constraint added to\nwhatever constraints we are already considering. We recursively search\nthrough all such sets of constraints to see if any one eliminates all\nthe deadlock cycles at once. Although this might seem impossibly\ninefficient, it shouldn't be a big problem in practice, because there\nwill normally be very few, and not very large, deadlock cycles --- if\nany at all. 
So the combinatorial inefficiency isn't going to hurt us.\nBesides, it's better to spend some time to guarantee that we've checked\nall possible escape routes than to abort a transaction when we didn't\nreally have to.\n\nEach edge reversal constraint can be viewed as requesting that the waiting\nprocess A be moved to before the blocking process B in the wait queue they\nare both in. This action will reverse the desired soft edge, as well as\nany other soft edges between A and other processes it is advanced over.\nNo other edges will be affected (note this is actually a constraint on our\ntopological sort method to not re-order the queue more than necessary.)\nTherefore, we can be sure we have not created any new deadlock cycles if\nneither FindLockCycle(A) nor FindLockCycle(B) discovers any cycle. Given\nthe above-defined behavior of FindLockCycle, each of these searches is\nnecessary as well as sufficient, since FindLockCycle starting at the\noriginal start point will not complain about cycles that include A or B\nbut not the original start point.\n\nIn short then, a proposed rearrangement of the wait queue(s) is determined\nby one or more broken soft edges A->B, fully specified by the output of\ntopological sorts of each wait queue involved, and then tested by invoking\nFindLockCycle() starting at the original start point as well as each of\nthe mentioned processes (A's and B's). If none of the tests detect a\ncycle, then we have a valid configuration and can implement it by\nreordering the wait queues per the sort outputs (and then applying\nProcLockWakeup on each reordered queue, in case a waiter has become wakable).\nIf any test detects a soft cycle, we can try to resolve it by adding each\nsoft link in that cycle, in turn, to the proposed rearrangement list.\nThis is repeated recursively until we either find a workable rearrangement\nor determine that none exists. In the latter case, the outer level\nresolves the deadlock by aborting the original start-point transaction.\n\nThe particular order in which rearrangements are tried depends on the\norder FindLockCycle() happens to scan in, so if there are multiple\nworkable rearrangements of the wait queues, then it is unspecified which\none will be chosen. What's more important is that we guarantee to try\nevery queue rearrangement that could lead to success. (For example,\nif we have A before B before C and the needed order constraints are\nC before A and B before C, we would first discover that A before C\ndoesn't work and try the rearrangement C before A before B. This would\neventually lead to the discovery of the additional constraint B before C.)\n\nGot that?\n\nMiscellaneous notes:\n\n1. It is easily proven that no deadlock will be missed due to our\nasynchronous invocation of deadlock checking. A deadlock cycle in the WFG\nis formed when the last edge in the cycle is added; therefore the last\nprocess in the cycle to wait (the one from which that edge is outgoing) is\ncertain to detect and resolve the cycle when it later runs HandleDeadLock.\nThis holds even if that edge addition created multiple cycles; the process\nmay indeed abort without ever noticing those additional cycles, but we\ndon't particularly care. The only other possible creation of deadlocks is\nduring deadlock resolution's rearrangement of wait queues, and we already\nsaw that that algorithm will prove that it creates no new deadlocks before\nit attempts to actually execute any rearrangement.\n\n2. 
It is not certain that a deadlock will be resolved by aborting the\nlast-to-wait process. If earlier waiters in the cycle have not yet run\nHandleDeadLock, then the first one to do so will be the victim.\n\n3. No live (wakable) process can be missed by ProcLockWakeup, since it\nexamines every member of the wait queue (this was not true in the 7.0\nimplementation, BTW). Therefore, if ProcLockWakeup is always invoked\nafter a lock is released or a wait queue is rearranged, there can be no\nfailure to wake a wakable process. One should also note that\nLockWaitCancel (abort a waiter due to outside factors) must run\nProcLockWakeup, in case the cancelled waiter was soft-blocking other\nwaiters.\n\n4. We can minimize excess rearrangement-trial work by being careful to scan\nthe wait queue from the front when looking for soft edges. For example,\nif we have queue order A,B,C and C has deadlock conflicts with both A and B,\nwe want to generate the \"C before A\" constraint first, rather than wasting\ntime with \"C before B\", which won't move C far enough up. So we look for\nsoft edges outgoing from C starting at the front of the wait queue.\n\n5. The working data structures needed by the deadlock detection code can\nbe proven not to need more than MAXBACKENDS entries. Therefore the\nworking storage can be statically allocated instead of depending on\npalloc(). This is a good thing, since if the deadlock detector could\nfail for extraneous reasons, all the above safety proofs fall down.\n", "msg_date": "Thu, 18 Jan 2001 20:20:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Deadlock reimplementation notes (kinda long)" } ]
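[ Sketch for context: the order-preserving "topological sort" described
above is easy to get wrong, so here is a minimal standalone illustration
of the doubly-nested-loop idea -- emit the earliest input item whose
required predecessors have all been emitted. Names and the fixed-size
limit are illustrative; this is not the actual lmgr code:

    #include <stdbool.h>

    #define MAXITEMS 64             /* assume n <= MAXITEMS */

    typedef struct
    {
        int     before;             /* this item ... */
        int     after;              /* ... must precede this item */
    } Constraint;

    /*
     * Copy input[] (arrival order) into output[] so that all the
     * constraints hold, disturbing arrival order as little as possible.
     * Returns false if the constraints are contradictory, which is how
     * a conflicting edge reversal would be detected.
     */
    static bool
    TopoSortPreservingOrder(const int *input, int n,
                            const Constraint *cons, int ncons,
                            int *output)
    {
        bool    emitted[MAXITEMS] = {false};
        int     slot, i, j, c;

        for (slot = 0; slot < n; slot++)
        {
            int     chosen = -1;

            /* scan from the front: the first legal candidate wins */
            for (i = 0; i < n && chosen < 0; i++)
            {
                bool    blocked = false;

                if (emitted[i])
                    continue;
                /* blocked if a not-yet-emitted item must precede it */
                for (c = 0; c < ncons; c++)
                {
                    if (cons[c].after != input[i])
                        continue;
                    for (j = 0; j < n; j++)
                        if (!emitted[j] && input[j] == cons[c].before)
                            blocked = true;
                }
                if (!blocked)
                    chosen = i;
            }
            if (chosen < 0)
                return false;       /* no legal candidate: sort fails */
            emitted[chosen] = true;
            output[slot] = input[chosen];
        }
        return true;
    }

For the example in the text (queue A,B,C with constraints "C before A"
and "B before C"), this emits B,C,A; given contradictory constraints it
returns false rather than looping. ]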
[ { "msg_contents": "It seems to me your test program does the same thing as the PostgreSQL backend\ndoes. So I would say, if the program works well, why doesn't PostgreSQL\nwork?\n\nWhat do you think, Oleg?\n\nFrom: Maneerat Sappaso <[email protected]>\nSubject: Program test strcoll().\nDate: Thu, 18 Jan 2001 19:52:39 -0700 (GMT)\nMessage-ID: <[email protected]>\n\n> \tDear sir,\n> \n> \tProgram collsort.c is a test program in the THAI locale version\n> \tth_TH-2.1.1-5.src.tar.gz for testing strcoll(). It sorts by \n> \tThai dictionary order, not by ASCII. I have tested this program with \n> \tThai or English (or Thai+English) words and it sorted correctly. \n> \t\n> \tI found that before running this program we must\n> \tsetlocale(LC_COLLATE,\"th_TH\");\n> \n> \tWhen I installed PostgreSQL with locale I tried to test the data with\n> \tan SQL command like this: \" select * from table order by name\".\n> \tThe results are sorted by ASCII.\n> \n> \tregards,\n> \tmaneerat sappaso \n\n[the test program from Maneerat]\n\n/*\n * collsort.c - a word list sorting tool using strcoll()\n * Created: 26 Nov 1998\n * Author: Theppitak Karoonboonyanan\n */\n\n/*\n Copyright (C) 1999 Theppitak Karoonboonyanan\n \n collsort.c is free software; you can redistribute it and/or\n modify it under the terms of the GNU General Public License as\n published by the Free Software Foundation; either version 2 of\n the License, or (at your option) any later version.\n*/\n\n#include <locale.h>\n#include <stdio.h>\n#include <string.h>\n#include <stdlib.h>\n\ntypedef unsigned char tchar;\n\n/* for qsort() */\ntypedef int (*CMPFUNC)(const void *, const void *);\n\nstatic size_t readData(FILE *dataFile, tchar *data[], int maxData)\n{\n size_t nData = 0;\n static char wordBuf[128];\n\n while (nData < maxData && fgets(wordBuf, sizeof wordBuf, dataFile) != NULL)\n {\n int len = strlen(wordBuf);\n if (len == 0) { return nData; }\n /* eliminate terminating '\\n' */\n wordBuf[--len] = 0;\n\n /* allocate & copy the line */\n data[nData] = (tchar*)malloc(len+1);\n if (data[nData] == NULL) {\n printf(\"Warning: Only %d items were read\\n\", nData);\n return nData;\n }\n strcpy((char*)data[nData], wordBuf);\n nData++;\n }\n\n return nData;\n}\n\nstatic void freeData(tchar *data[], size_t nItems)\n{\n size_t i;\n\n for (i=0; i<nItems; i++) {\n free(data[i]);\n }\n}\n\nstatic int dataCmp(const char **pStr1, const char **pStr2)\n{\n return strcoll(*pStr1, *pStr2);\n}\n\nstatic void sortData(tchar *data[], size_t nItems)\n{\n qsort(data, nItems, sizeof data[0], (CMPFUNC)dataCmp);\n}\n\nstatic void writeData(FILE *outFile, tchar *data[], size_t nItems)\n{\n size_t i;\n\n for (i = nItems; i > 0; i--) {\n fprintf(outFile, \"%s\\n\", *data);\n data++;\n }\n}\n\n#define MAX_DATA 40000\nstatic tchar *data[MAX_DATA];\n\nint main(int argc, char *argv[])\n{\n FILE *dataFile;\n FILE *outFile;\n size_t dataRead;\n char DataFileName[64];\n char OutFileName[64];\n const char* pPrevLocale;\n\n pPrevLocale = setlocale(LC_COLLATE, \"\");\n if (pPrevLocale == 0) {\n fprintf(stderr, \"Cannot set locale\\n\");\n exit(1);\n }\n\n if (argc == 3) {\n strcpy(DataFileName, argv[1]);\n strcpy(OutFileName, argv[2]);\n } else {\n fprintf(stderr, \"Usage: collsort <input file> <output file>\\n\");\n return 1;\n }\n\n dataFile = fopen(DataFileName, \"rt\");\n if (dataFile == NULL) {\n fprintf(stderr, \"Can't open file %s\\n\", DataFileName);\n perror(\"fopen\");\n return 1;\n }\n\n outFile = fopen(OutFileName, \"wt\");\n if (outFile == NULL) {\n fprintf(stderr, \"Can't open file %s for write\\n\", 
OutFileName);\n perror(\"fopen\");\n return 1;\n }\n\n dataRead = readData(dataFile, data, MAX_DATA);\n sortData(data, dataRead);\n writeData(outFile, data, dataRead);\n freeData(data, dataRead);\n\n fclose(outFile);\n fclose(dataFile);\n\n setlocale(LC_COLLATE, pPrevLocale);\n\n return 0;\n}\n\n", "msg_date": "Fri, 19 Jan 2001 11:31:45 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Program test strcoll()." } ]
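[ Sketch of the comparison under discussion, with illustrative file and
table names. This assumes a th_TH locale is installed, and that the
backend was built with --enable-locale and initdb was run under that
locale:

    $ gcc -o collsort collsort.c
    $ LC_COLLATE=th_TH ./collsort words.txt words.sorted

then, in psql, against a table loaded from the same word list:

    SELECT name FROM words ORDER BY name;

If the backend honors LC_COLLATE, the two orderings should agree; the
report above is that the backend falls back to byte (ASCII) order. ]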
[ { "msg_contents": "In looking at the VAX ASM problem, I realized that the ASM in s_lock.h\nis all formatted differently, making it even more confusing. I have\napplied the following patch to s_lock.h to try and clean it up.\n\nThe new standard format is:\n\n\t/*\n\t * Standard __asm__ format:\n\t *\n\t * __asm__(\n\t * \"command;\"\n\t * \"command;\"\n\t * \"command;\"\n\t * : \"=r\"(_res) return value, in register\n\t * : \"r\"(lock) argument, 'lock pointer', in register\n\t * : \"r0\"); inline code uses this register\n\t */\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n? config.log\n? config.cache\n? config.status\n? GNUmakefile\n? src/Makefile.custom\n? src/GNUmakefile\n? src/Makefile.global\n? src/log\n? src/crtags\n? src/backend/postgres\n? src/backend/catalog/global.bki\n? src/backend/catalog/global.description\n? src/backend/catalog/template1.bki\n? src/backend/catalog/template1.description\n? src/backend/port/Makefile\n? src/bin/initdb/initdb\n? src/bin/initlocation/initlocation\n? src/bin/ipcclean/ipcclean\n? src/bin/pg_config/pg_config\n? src/bin/pg_ctl/pg_ctl\n? src/bin/pg_dump/pg_dump\n? src/bin/pg_dump/pg_restore\n? src/bin/pg_dump/pg_dumpall\n? src/bin/pg_id/pg_id\n? src/bin/pg_passwd/pg_passwd\n? src/bin/pgaccess/pgaccess\n? src/bin/pgtclsh/Makefile.tkdefs\n? src/bin/pgtclsh/Makefile.tcldefs\n? src/bin/pgtclsh/pgtclsh\n? src/bin/pgtclsh/pgtksh\n? src/bin/psql/psql\n? src/bin/scripts/createlang\n? src/include/config.h\n? src/include/stamp-h\n? src/interfaces/ecpg/lib/libecpg.so.3.2.0\n? src/interfaces/ecpg/preproc/ecpg\n? src/interfaces/libpgeasy/libpgeasy.so.2.1\n? src/interfaces/libpgtcl/libpgtcl.so.2.1\n? src/interfaces/libpq/libpq.so.2.1\n? src/interfaces/perl5/blib\n? src/interfaces/perl5/Makefile\n? src/interfaces/perl5/pm_to_blib\n? src/interfaces/perl5/Pg.c\n? src/interfaces/perl5/Pg.bs\n? src/pl/plperl/blib\n? src/pl/plperl/Makefile\n? src/pl/plperl/pm_to_blib\n? src/pl/plperl/SPI.c\n? src/pl/plperl/plperl.bs\n? src/pl/plpgsql/src/libplpgsql.so.1.0\n? src/pl/tcl/Makefile.tcldefs\nIndex: src/include/storage/s_lock.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/storage/s_lock.h,v\nretrieving revision 1.78\ndiff -c -r1.78 s_lock.h\n*** src/include/storage/s_lock.h\t2001/01/18 23:40:26\t1.78\n--- src/include/storage/s_lock.h\t2001/01/19 02:52:49\n***************\n*** 35,41 ****\n *\n *\tint TAS(slock_t *lock)\n *\t\tAtomic test-and-set instruction. Attempt to acquire the lock,\n! *\t\tbut do *not* wait. Returns 0 if successful, nonzero if unable\n *\t\tto acquire the lock.\n *\n *\tTAS() is a lower-level part of the API, but is used directly in a\n--- 35,41 ----\n *\n *\tint TAS(slock_t *lock)\n *\t\tAtomic test-and-set instruction. Attempt to acquire the lock,\n! *\t\tbut do *not* wait.\tReturns 0 if successful, nonzero if unable\n *\t\tto acquire the lock.\n *\n *\tTAS() is a lower-level part of the API, but is used directly in a\n***************\n*** 48,56 ****\n *\t\tunsigned\tspins = 0;\n *\n *\t\twhile (TAS(lock))\n- *\t\t{\n *\t\t\tS_LOCK_SLEEP(lock, spins++);\n- *\t\t}\n *\t}\n *\n *\twhere S_LOCK_SLEEP() checks for timeout and sleeps for a short\n--- 48,54 ----\n***************\n*** 87,96 ****\n \n /* Platform-independent out-of-line support routines */\n extern void s_lock(volatile slock_t *lock,\n! 
\t\t\t\t const char *file, const int line);\n extern void s_lock_sleep(unsigned spins, int microsec,\n! \t\t\t\t\t\t volatile slock_t *lock,\n! \t\t\t\t\t\t const char *file, const int line);\n \n \n #if defined(HAS_TEST_AND_SET)\n--- 85,94 ----\n \n /* Platform-independent out-of-line support routines */\n extern void s_lock(volatile slock_t *lock,\n! \t const char *file, const int line);\n extern void s_lock_sleep(unsigned spins, int microsec,\n! \t\t\t volatile slock_t *lock,\n! \t\t\t const char *file, const int line);\n \n \n #if defined(HAS_TEST_AND_SET)\n***************\n*** 101,106 ****\n--- 99,116 ----\n * All the gcc inlines\n */\n \n+ /*\n+ * Standard __asm__ format:\n+ *\n+ *\t__asm__(\n+ *\t\t\t\"command;\"\n+ *\t\t\t\"command;\"\n+ *\t\t\t\"command;\"\n+ *\t\t:\t\"=r\"(_res)\t\t\treturn value, in register\n+ *\t\t:\t\"r\"(lock)\t\t\targument, 'lock pointer', in register\n+ *\t\t:\t\"r0\");\t\t\t\tinline code uses this register\n+ */\n+ \n \n #if defined(__i386__)\n #define TAS(lock) tas(lock)\n***************\n*** 110,116 ****\n {\n \tregister slock_t _res = 1;\n \n! __asm__(\"lock; xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(_res));\n \treturn (int) _res;\n }\n \n--- 120,130 ----\n {\n \tregister slock_t _res = 1;\n \n! \t__asm__(\n! \t\t\t\"lock;\"\n! \t\t\t\"xchgb %0,%1;\"\n! :\t\t\t\"=q\"(_res), \"=m\"(*lock)\n! :\t\t\t\"0\"(_res));\n \treturn (int) _res;\n }\n \n***************\n*** 121,139 ****\n #define TAS(lock) tas(lock)\n \n static __inline__ int\n! tas (volatile slock_t *lock)\n {\n! long int ret;\n \n! __asm__ __volatile__(\n! \"xchg4 %0=%1,%2\"\n! : \"=r\"(ret), \"=m\"(*lock)\n! : \"r\"(1), \"1\"(*lock)\n! : \"memory\");\n \n! return (int) ret;\n }\n! #endif /* __ia64__ */\n \n \n #if defined(__arm__) || defined(__arm__)\n--- 135,154 ----\n #define TAS(lock) tas(lock)\n \n static __inline__ int\n! tas(volatile slock_t *lock)\n {\n! \tlong int\tret;\n \n! \t__asm__\t\t__volatile__(\n! \t\t\t\t\t\t\t\t\t\t \"xchg4 %0=%1,%2;\"\n! \t\t\t\t\t\t\t :\t\t\t \"=r\"(ret), \"=m\"(*lock)\n! \t\t\t\t\t\t\t :\t\t\t \"r\"(1), \"1\"(*lock)\n! \t\t\t\t\t\t\t :\t\t\t \"memory\");\n \n! \treturn (int) ret;\n }\n! \n! #endif\t /* __ia64__ */\n \n \n #if defined(__arm__) || defined(__arm__)\n***************\n*** 142,180 ****\n static __inline__ int\n tas(volatile slock_t *lock)\n {\n! register slock_t _res = 1;\n \n! __asm__(\"swpb %0, %0, [%3]\": \"=r\"(_res), \"=m\"(*lock):\"0\"(_res), \"r\" (lock));\n! return (int) _res;\n }\n \n! #endif /* __arm__ */\n \n #if defined(__s390__)\n /*\n * S/390 Linux\n */\n! #define TAS(lock) tas(lock)\n \n static inline int\n tas(volatile slock_t *lock)\n {\n! int _res;\n \n! __asm__ __volatile(\" la 1,1\\n\"\n! \" l 2,%2\\n\"\n! \" slr 0,0\\n\"\n! \" cs 0,1,0(2)\\n\"\n! \" lr %1,0\"\n! : \"=m\" (lock), \"=d\" (_res)\n! : \"m\" (lock)\n! : \"0\", \"1\", \"2\");\n \n! return (_res);\n }\n- #endif /* __s390__ */\n \n \n #if defined(__sparc__)\n #define TAS(lock) tas(lock)\n \n--- 157,200 ----\n static __inline__ int\n tas(volatile slock_t *lock)\n {\n! \tregister slock_t _res = 1;\n \n! \t__asm__(\n! \t\t\t\"swpb %0, %0, [%3];\"\n! :\t\t\t\"=r\"(_res), \"=m\"(*lock)\n! :\t\t\t\"0\"(_res), \"r\"(lock));\n! \treturn (int) _res;\n }\n \n! #endif\t /* __arm__ */\n \n #if defined(__s390__)\n /*\n * S/390 Linux\n */\n! #define TAS(lock)\t tas(lock)\n \n static inline int\n tas(volatile slock_t *lock)\n {\n! \tint\t\t\t_res;\n \n! \t__asm__\t\t__volatile(\n! \t\t\t\t\t\t\t\t\t \"la 1,1;\"\n! \t\t\t\t\t\t\t\t\t \"l 2,%2;\"\n! 
\t\t\t\t\t\t\t\t\t \"slr 0,0;\"\n! \t\t\t\t\t\t\t\t\t \"cs 0,1,0(2);\"\n! \t\t\t\t\t\t\t\t\t \"lr %1,0;\"\n! \t\t\t\t\t\t :\t\t \"=m\"(lock), \"=d\"(_res)\n! \t\t\t\t\t\t :\t\t \"m\"(lock)\n! \t\t\t\t\t\t :\t\t \"0\", \"1\", \"2\");\n \n! \treturn (_res);\n }\n \n+ #endif\t /* __s390__ */\n \n+ \n #if defined(__sparc__)\n #define TAS(lock) tas(lock)\n \n***************\n*** 183,190 ****\n {\n \tregister slock_t _res = 1;\n \n! \t__asm__(\"ldstub [%2], %0\" \\\n! :\t\t\t\"=r\"(_res), \"=m\"(*lock) \\\n :\t\t\t\"r\"(lock));\n \treturn (int) _res;\n }\n--- 203,211 ----\n {\n \tregister slock_t _res = 1;\n \n! \t__asm__(\n! \t\t\t\"ldstub [%2], %0;\"\n! :\t\t\t\"=r\"(_res), \"=m\"(*lock)\n :\t\t\t\"r\"(lock));\n \treturn (int) _res;\n }\n***************\n*** 199,214 ****\n tas(volatile slock_t *lock)\n {\n \tregister int rv;\n! \t\n! \t__asm__ __volatile__ (\n! \t\t\"tas %1; sne %0\"\n! \t\t: \"=d\" (rv), \"=m\"(*lock)\n! \t\t: \"1\" (*lock)\n! \t\t: \"cc\" );\n \treturn rv;\n }\n \n! #endif /* defined(__mc68000__) && defined(__linux__) */\n \n \n #if defined(NEED_VAX_TAS_ASM)\n--- 220,237 ----\n tas(volatile slock_t *lock)\n {\n \tregister int rv;\n! \n! \t__asm__\t\t__volatile__(\n! \t\t\t\t\t\t\t\t\t\t \"tas %1;\"\n! \t\t\t\t\t\t\t\t\t\t \"sne %0;\"\n! \t\t\t\t\t\t\t :\t\t\t \"=d\"(rv), \"=m\"(*lock)\n! \t\t\t\t\t\t\t :\t\t\t \"1\"(*lock)\n! \t\t\t\t\t\t\t :\t\t\t \"cc\");\n! \n \treturn rv;\n }\n \n! #endif\t /* defined(__mc68000__) && defined(__linux__) */\n \n \n #if defined(NEED_VAX_TAS_ASM)\n***************\n*** 225,237 ****\n {\n \tregister\t_res;\n \n! \t__asm__(\"\tmovl $1, r0 \\\n! \t\t\tbbssi $0, (%1), 1f \\\n! \t\t\tclrl r0 \\\n! 1:\t\t\tmovl r0, %0 \"\n! :\t\t\t\"=r\"(_res)\t\t\t/* return value, in register */\n! :\t\t\t\"r\"(lock)\t\t\t/* argument, 'lock pointer', in register */\n! :\t\t\t\"r0\");\t\t\t\t/* inline code uses this register */\n \treturn (int) _res;\n }\n \n--- 248,261 ----\n {\n \tregister\t_res;\n \n! \t__asm__(\n! \t\t\t\"movl $1, r0;\"\n! \t\t\t\"bbssi $0, (%1), 1f;\"\n! \t\t\t\"clrl r0;\"\n! \t\t\t\"1: movl r0, %0;\"\n! :\t\t\t\"=r\"(_res)\n! :\t\t\t\"r\"(lock)\n! :\t\t\t\"r0\");\n \treturn (int) _res;\n }\n \n***************\n*** 244,257 ****\n static __inline__ int\n tas(volatile slock_t *lock)\n {\n! register _res;\n! __asm__(\"sbitb 0, %0 \\n\\\n! \tsfsd %1\"\n! \t: \"=m\"(*lock), \"=r\"(_res));\n! return (int) _res; \n }\n \n! #endif /* NEED_NS32K_TAS_ASM */\n \n \n \n--- 268,283 ----\n static __inline__ int\n tas(volatile slock_t *lock)\n {\n! \tregister\t_res;\n! \n! \t__asm__(\n! \t\t\t\"sbitb 0, %0;\"\n! \t\t\t\"sfsd %1;\"\n! :\t\t\t\"=m\"(*lock), \"=r\"(_res));\n! \treturn (int) _res;\n }\n \n! #endif\t /* NEED_NS32K_TAS_ASM */\n \n \n \n***************\n*** 268,274 ****\n tas(volatile slock_t *s_lock)\n {\n /* UNIVEL wants %mem in column 1, so we don't pg_indent this file */\n! %mem s_lock\n \tpushl %ebx\n \tmovl s_lock, %ebx\n \tmovl $255, %eax\n--- 294,300 ----\n tas(volatile slock_t *s_lock)\n {\n /* UNIVEL wants %mem in column 1, so we don't pg_indent this file */\n! \t%mem s_lock\n \tpushl %ebx\n \tmovl s_lock, %ebx\n \tmovl $255, %eax\n***************\n*** 277,283 ****\n \tpopl %ebx\n }\n \n! #endif /* defined(NEED_I386_TAS_ASM) && defined(USE_UNIVEL_CC) */\n \n #endif\t /* defined(__GNUC__) */\n \n--- 303,309 ----\n \tpopl %ebx\n }\n \n! #endif\t /* defined(NEED_I386_TAS_ASM) && defined(USE_UNIVEL_CC) */\n \n #endif\t /* defined(__GNUC__) */\n \n***************\n*** 300,329 ****\n #if defined(__GNUC__)\n \n #define TAS(lock) tas(lock)\n! 
#define S_UNLOCK(lock) do { __asm__ volatile (\"mb\"); *(lock) = 0; } while (0)\n \n static __inline__ int\n tas(volatile slock_t *lock)\n {\n \tregister slock_t _res;\n \n! \t__asm__ volatile\n! (\"\t\tldq $0, %0\t\t\\n\\\n! \t\tbne $0, 2f\t\t\\n\\\n! \t\tldq_l %1, %0\t\t\\n\\\n! \t\tbne %1, 2f\t\t\\n\\\n! \t\tmov 1, $0\t\t\t\\n\\\n! \t\tstq_c $0, %0\t\t\\n\\\n! \t\tbeq $0, 2f\t\t\\n\\\n! \t\tmb\t\t\t\t\t\\n\\\n! \t\tbr 3f\t\t\t\\n\\\n! \t 2: mov 1, %1\t\t\t\\n\\\n! \t 3: \\n\" : \"=m\"(*lock), \"=r\"(_res) : : \"0\");\n \n \treturn (int) _res;\n }\n \n! #else /* !defined(__GNUC__) */\n \n /*\n * The Tru64 compiler doesn't support gcc-style inline asm, but it does\n--- 326,358 ----\n #if defined(__GNUC__)\n \n #define TAS(lock) tas(lock)\n! #define S_UNLOCK(lock)\tdo { __asm__ volatile (\"mb\"); *(lock) = 0; } while (0)\n \n static __inline__ int\n tas(volatile slock_t *lock)\n {\n \tregister slock_t _res;\n \n! \t__asm__\t\tvolatile(\n! \t\t\t\t\t\t\t\t\t \"ldq $0, %0;\"\n! \t\t\t\t\t\t\t\t\t \"bne $0, 2f;\"\n! \t\t\t\t\t\t\t\t\t \"ldq_l %1, %0;\"\n! \t\t\t\t\t\t\t\t\t \"bne %1, 2f;\"\n! \t\t\t\t\t\t\t\t\t \"mov 1, $0;\"\n! \t\t\t\t\t\t\t\t\t \"stq_c $0, %0;\"\n! \t\t\t\t\t\t\t\t\t \"beq $0, 2f;\"\n! \t\t\t\t\t\t\t\t\t \"mb;\"\n! \t\t\t\t\t\t\t\t\t \"br 3f;\"\n! \t\t\t\t\t\t\t\t\t \"2: mov 1, %1;\"\n! \t\t\t\t\t\t\t\t\t \"3:\"\n! \t\t\t\t\t\t :\t\t\t \"=m\"(*lock), \"=r\"(_res)\n! \t\t\t\t\t\t :\n! \t\t\t\t\t\t :\t\t\t \"0\");\n \n \treturn (int) _res;\n }\n \n! #else\t\t\t\t\t\t\t/* !defined(__GNUC__) */\n \n /*\n * The Tru64 compiler doesn't support gcc-style inline asm, but it does\n***************\n*** 337,348 ****\n #include <alpha/builtins.h>\n \n #define S_INIT_LOCK(lock) (*(lock) = 0)\n! #define TAS(lock) (__LOCK_LONG_RETRY((lock), 1) == 0)\n! #define S_UNLOCK(lock) __UNLOCK_LONG(lock)\n \n! #endif /* defined(__GNUC__) */\n \n! #endif /* __alpha */\n \n \n #if defined(__hpux)\n--- 366,377 ----\n #include <alpha/builtins.h>\n \n #define S_INIT_LOCK(lock) (*(lock) = 0)\n! #define TAS(lock)\t\t (__LOCK_LONG_RETRY((lock), 1) == 0)\n! #define S_UNLOCK(lock)\t __UNLOCK_LONG(lock)\n \n! #endif\t /* defined(__GNUC__) */\n \n! #endif\t /* __alpha */\n \n \n #if defined(__hpux)\n***************\n*** 373,390 ****\n *\n * Note that slock_t under QNX is sem_t instead of char\n */\n! #define TAS(lock) (sem_trywait((lock)) < 0)\n! #define S_UNLOCK(lock) sem_post((lock))\n! #define S_INIT_LOCK(lock) sem_init((lock), 1, 1)\n! #define S_LOCK_FREE(lock) ((lock)->value)\n! #endif /* __QNX__ */\n \n \n #if defined(__sgi)\n /*\n * SGI IRIX 5\n * slock_t is defined as a unsigned long. We use the standard SGI\n! * mutex API. \n *\n * The following comment is left for historical reasons, but is probably\n * not a good idea since the mutex ABI is supported.\n--- 402,419 ----\n *\n * Note that slock_t under QNX is sem_t instead of char\n */\n! #define TAS(lock)\t\t(sem_trywait((lock)) < 0)\n! #define S_UNLOCK(lock)\tsem_post((lock))\n! #define S_INIT_LOCK(lock)\t\tsem_init((lock), 1, 1)\n! #define S_LOCK_FREE(lock)\t\t((lock)->value)\n! #endif\t /* __QNX__ */\n \n \n #if defined(__sgi)\n /*\n * SGI IRIX 5\n * slock_t is defined as a unsigned long. We use the standard SGI\n! * mutex API.\n *\n * The following comment is left for historical reasons, but is probably\n * not a good idea since the mutex ABI is supported.\n***************\n*** 402,408 ****\n \n #if defined(sinix)\n /*\n! * SINIX / Reliant UNIX \n * slock_t is defined as a struct abilock_t, which has a single unsigned long\n * member. 
(Basically same as SGI)\n *\n--- 431,437 ----\n \n #if defined(sinix)\n /*\n! * SINIX / Reliant UNIX\n * slock_t is defined as a struct abilock_t, which has a single unsigned long\n * member. (Basically same as SGI)\n *\n***************\n*** 412,418 ****\n #define S_INIT_LOCK(lock)\tinit_lock(lock)\n #define S_LOCK_FREE(lock)\t(stat_lock(lock) == UNLOCKED)\n #endif\t /* sinix */\n! \n \n #if defined(_AIX)\n /*\n--- 441,447 ----\n #define S_INIT_LOCK(lock)\tinit_lock(lock)\n #define S_LOCK_FREE(lock)\t(stat_lock(lock) == UNLOCKED)\n #endif\t /* sinix */\n! \n \n #if defined(_AIX)\n /*\n***************\n*** 440,446 ****\n \n \n \n! #else\t /* !HAS_TEST_AND_SET */\n \n /*\n * Fake spinlock implementation using SysV semaphores --- slow and prone\n--- 469,475 ----\n \n \n \n! #else\t\t\t\t\t\t\t/* !HAS_TEST_AND_SET */\n \n /*\n * Fake spinlock implementation using SysV semaphores --- slow and prone\n***************\n*** 451,469 ****\n typedef struct\n {\n \t/* reference to semaphore used to implement this spinlock */\n! \tIpcSemaphoreId\tsemId;\n! \tint\t\t\t\tsem;\n } slock_t;\n \n extern bool s_lock_free_sema(volatile slock_t *lock);\n extern void s_unlock_sema(volatile slock_t *lock);\n extern void s_init_lock_sema(volatile slock_t *lock);\n! extern int tas_sema(volatile slock_t *lock);\n \n! #define S_LOCK_FREE(lock) s_lock_free_sema(lock)\n! #define S_UNLOCK(lock) s_unlock_sema(lock)\n! #define S_INIT_LOCK(lock) s_init_lock_sema(lock)\n! #define TAS(lock) tas_sema(lock)\n \n #endif\t /* HAS_TEST_AND_SET */\n \n--- 480,498 ----\n typedef struct\n {\n \t/* reference to semaphore used to implement this spinlock */\n! \tIpcSemaphoreId semId;\n! \tint\t\t\tsem;\n } slock_t;\n \n extern bool s_lock_free_sema(volatile slock_t *lock);\n extern void s_unlock_sema(volatile slock_t *lock);\n extern void s_init_lock_sema(volatile slock_t *lock);\n! extern int\ttas_sema(volatile slock_t *lock);\n \n! #define S_LOCK_FREE(lock)\ts_lock_free_sema(lock)\n! #define S_UNLOCK(lock)\t s_unlock_sema(lock)\n! #define S_INIT_LOCK(lock)\ts_init_lock_sema(lock)\n! #define TAS(lock)\ttas_sema(lock)\n \n #endif\t /* HAS_TEST_AND_SET */", "msg_date": "Thu, 18 Jan 2001 21:58:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "s_lock.h cleanup" }, { "msg_contents": "As long as we're cleaning things up, I would suggest that all the ports\nthat use gcc assembler be made to declare it uniformly, as\n\n\t__asm__ __volatile__ ( ... );\n\nAs I read the GCC manual, there's some risk of the asm sections getting\nmoved around in the program flow if they are not marked volatile. Also\nwe oughta be consistent about using the double-underscore keywords IMHO.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jan 2001 22:49:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: s_lock.h cleanup " }, { "msg_contents": "Done and applied.\n\n> As long as we're cleaning things up, I would suggest that all the ports\n> that use gcc assembler be made to declare it uniformly, as\n> \n> \t__asm__ __volatile__ ( ... );\n> \n> As I read the GCC manual, there's some risk of the asm sections getting\n> moved around in the program flow if they are not marked volatile. Also\n> we oughta be consistent about using the double-underscore keywords IMHO.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Jan 2001 22:58:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: s_lock.h cleanup" }, { "msg_contents": "Bruce Momjian writes:\n\n> In looking at the VAX ASM problem, I realized that the ASM in s_lock.h\n> is all formatted differently, making it even more confusing. I have\n> applied the following patch to s_lock.h to try and clean it up.\n\nI don't believe in this patch at all. It makes the assumption that all\nassemblers have equally forgiving lexical rules as a certain subset of\nsaid assemblers. For example, the VAX code does not look at all like the\none back when it still worked.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 19 Jan 2001 17:16:12 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] s_lock.h cleanup" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > In looking at the VAX ASM problem, I realized that the ASM in s_lock.h\n> > is all formatted differently, making it even more confusing. I have\n> > applied the following patch to s_lock.h to try and clean it up.\n> \n> I don't believe in this patch at all. It makes the assumption that all\n> assemblers have equally forgiving lexical rules as a certain subset of\n> said assemblers. For example, the VAX code does not look at all like the\n> one back when it still worked.\n\nI agree the VAX code was changed in the patch, but the VAX person sent\nemail that he had to add the semicolons to make it work on his platform,\nand that the original \" \\n\\\" code did not compile at all.\n\nI believe the formatting problem was that some code had\n\"command;command; : lkjasfd : asldfk\" while some had them spread over\nseparate lines, and others used \\n\\, all very randomly. Now at least\nthey are all consistent and use similar formatting.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Jan 2001 11:40:54 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [PATCHES] s_lock.h cleanup" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Bruce Momjian writes:\n>> In looking at the VAX ASM problem, I realized that the ASM in s_lock.h\n>> is all formatted differently, making it even more confusing. I have\n>> applied the following patch to s_lock.h to try and clean it up.\n\n> I don't believe in this patch at all. It makes the assumption that all\n> assemblers have equally forgiving lexical rules as a certain subset of\n> said assemblers. For example, the VAX code does not look at all like the\n> one back when it still worked.\n\nGood point. I think it's safe to use the split-up-string-literal\nfeature, but assuming that ';' can replace '\\n' is sheer folly, and so\nis assuming that whitespace doesn't matter (ie, that opcodes starting\nin column 1 are OK). 
Bruce, I'd suggest a format more like\n\n\t\"[label] opcode operands \\n\"\n\nfor each line of assembly code.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Jan 2001 13:24:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] s_lock.h cleanup " }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I believe the formatting problem was that some code had\n> \"command;command; : lkjasfd : asldfk\" while some had them spread over\n> separate lines, and others used \\n\\, all very randomly. Now at least\n> they are all consistent and use similar formatting.\n\nAnd they may all be broken, except for the one(s) you have tested.\nYou shouldn't be assuming that a platform that uses gcc necessarily\nalso uses gas.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Jan 2001 13:37:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] s_lock.h cleanup " }, { "msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > Bruce Momjian writes:\n> >> In looking at the VAX ASM problem, I realized that the ASM in s_lock.h\n> >> is all formatted differently, making it even more confusing. I have\n> >> applied the following patch to s_lock.h to try and clean it up.\n> \n> > I don't believe in this patch at all. It makes the assumption that all\n> > assemblers have equally forgiving lexical rules as a certain subset of\n> > said assemblers. For example, the VAX code does not look at all like the\n> > one back when it still worked.\n> \n> Good point. I think it's safe to use the split-up-string-literal\n> feature, but assuming that ';' can replace '\\n' is sheer folly, and so\n> is assuming that whitespace doesn't matter (ie, that opcodes starting\n> in column 1 are OK). Bruce, I'd suggest a format more like\n> \n> \t\"[label] opcode operands \\n\"\n> \n> for each line of assembly code.\n\nInterestingly, we have very few non-gcc ASM entries in s_lock.h. The\nonly non-gcc one I see is Univel/i386, and I didn't touch that. Isn't\nthe semicolon the standard command terminator for all gcc assemblers?\n\nI see non-gcc stuff in s_lock.c, but I didn't touch that. I also see\nvolatile missing in s_lock.c, which I will add for GCC entries.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Jan 2001 13:39:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [PATCHES] s_lock.h cleanup" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I believe the formatting problem was that some code had\n> > \"command;command; : lkjasfd : asldfk\" while some had them spread over\n> > separate lines, and others used \\n\\, all very randomly. Now at least\n> > they are all consistent and use similar formatting.\n> \n> And they may all be broken, except for the one(s) you have tested.\n> You shouldn't be assuming that a platform that uses gcc necessarily\n> also uses gas.\n\nOh, wow, I never suspected gcc could work without gas. Can it?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Jan 2001 13:40:16 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [PATCHES] s_lock.h cleanup" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I believe the formatting problem was that some code had\n> > \"command;command; : lkjasfd : asldfk\" while some had them spread over\n> > separate lines, and others used \\n\\, all very randomly. Now at least\n> > they are all consistent and use similar formatting.\n> \n> And they may all be broken, except for the one(s) you have tested.\n> You shouldn't be assuming that a platform that uses gcc necessarily\n> also uses gas.\n> \n> \t\t\tregards, tom lane\n\nI can tell you that they all used __asm__, and all used the colon\nterminators for each __asm__ block:\n\n * __asm__ __volatile__(\n * \"command;\"\n * \"command;\"\n * \"command;\"\n * : \"=r\"(_res) return value, in register\n * : \"r\"(lock) argument, 'lock pointer', in register\n * : \"r0\"); inline code uses this register\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Jan 2001 13:42:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [PATCHES] s_lock.h cleanup" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> And they may all be broken, except for the one(s) you have tested.\n>> You shouldn't be assuming that a platform that uses gcc necessarily\n>> also uses gas.\n\n> I can tell you that they all used __asm__, and all used the colon\n> terminators for each __asm__ block:\n\n> * __asm__ __volatile__(\n> * \"command;\"\n> * \"command;\"\n> * \"command;\"\n> * : \"=r\"(_res) return value, in register\n> * : \"r\"(lock) argument, 'lock pointer', in register\n> * : \"r0\"); inline code uses this register\n\nThe __asm__ and splitting up the assembly code into multiple string\nliterals and the consistent formatting of the register addendums are\nall fine, because those are read by gcc and this whole code block is\ngcc-only. But the assembly code string literal will be spit out\nessentially verbatim by gcc to the assembler, and the assembler may\nnot be nearly as forgiving as you think.\n\n> Oh, wow, I never suspected gcc could work without gas. Can it?\n\nGcc with platform-specific as used to be the standard configuration\non HPUX, and may still be standard on some platforms.\n\nBottom line: I see no point in taking any risks, especially not this\nlate in beta, with code that you cannot test for yourself, and\n*especially* not when the change is only for cosmetic reasons.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Jan 2001 13:49:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] s_lock.h cleanup " }, { "msg_contents": "* Tom Lane <[email protected]> [010119 13:08]:\n> \n> > Oh, wow, I never suspected gcc could work without gas. Can it?\n> \n> Gcc with platform-specific as used to be the standard configuration\n> on HPUX, and may still be standard on some platforms.\nStill is the standard on UnixWare with GCC. The standard assembler\nand linker are used. 
\n\n\n> \n> Bottom line: I see no point in taking any risks, especially not this\n> late in beta, with code that you cannot test for yourself, and\n> *especially* not when the change is only for cosmetic reasons.\nI agree with this sentiment. \n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Fri, 19 Jan 2001 13:24:10 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] s_lock.h cleanup" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> Interestingly, we have very few non-gcc ASM entries in s_lock.h. The\n> only non-gcc one I see is Univel/i386, and I didn't touch that. Isn't\n> the semicolon the standard command terminator for all gcc assemblers?\n\nNo.\n\nIt is for most, but not for the a29k, AVR, CRIS, d10v, d30v, FR30,\nH8/300, HP/PA, TIC30, TIC54x, or TIC80.\n\nAren't you glad you know that now?\n\nIan\nFormer GNU binutils maintainer\n", "msg_date": "19 Jan 2001 12:05:08 -0800", "msg_from": "Ian Lance Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] s_lock.h cleanup" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> > Bruce Momjian <[email protected]> writes:\n> > > I believe the formatting problem was that some code had\n> > > \"command;command; : lkjasfd : asldfk\" while some had them spread over\n> > > separate lines, and others used \\n\\, all very randomly. Now at least\n> > > they are all consistent and use similar formatting.\n> > \n> > And they may all be broken, except for the one(s) you have tested.\n> > You shouldn't be assuming that a platform that uses gcc necessarily\n> > also uses gas.\n> \n> Oh, wow, I never suspected gcc could work without gas. Can it?\n\nYes.\n\nIn fact, I don't think there is any Unix system on which gcc requires\ngas. There used to be at least one, but I think they have all been\ncleaned up at this point.\n\nIan\n", "msg_date": "19 Jan 2001 12:06:15 -0800", "msg_from": "Ian Lance Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] s_lock.h cleanup" }, { "msg_contents": "> The __asm__ and splitting up the assembly code into multiple string\n> literals and the consistent formatting of the register addendums are\n> all fine, because those are read by gcc and this whole code block is\n> gcc-only. But the assembly code string literal will be spit out\n> essentially verbatim by gcc to the assembler, and the assembler may\n> not be nearly as forgiving as you think.\n> \n> > Oh, wow, I never suspected gcc could work without gas. Can it?\n> \n> Gcc with platform-specific as used to be the standard configuration\n> on HPUX, and may still be standard on some platforms.\n> \n> Bottom line: I see no point in taking any risks, especially not this\n> late in beta, with code that you cannot test for yourself, and\n> *especially* not when the change is only for cosmetic reasons.\n\nOK, remove semicolons and put back the \\n at the end of each line. \nPatch attached.\n\nI wasn't going to mess with this while in beta, but when I found the VAX\ncode broken, it seemed worth making sure they were all OK. The VAX\nstuff was broken because in 7.0.3 it shows:\n\n __asm__(\" movl $1, r0 \\\n bbssi $0, (%1), 1 f \\\n clrl r0 \\\n1: movl r0, %0 \"\n\nThe '1 f' was broken, but also the thing missing here is \\n\\. 
With \\, it\njust makes one long line, which certainly can't be assembled. The VAX\nguy added semicolons, but I can see that \\n\\ is safer, and have done\nthat.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n? config.log\n? config.cache\n? config.status\n? GNUmakefile\n? src/Makefile.custom\n? src/GNUmakefile\n? src/Makefile.global\n? src/log\n? src/crtags\n? src/backend/postgres\n? src/backend/catalog/global.bki\n? src/backend/catalog/global.description\n? src/backend/catalog/template1.bki\n? src/backend/catalog/template1.description\n? src/backend/port/Makefile\n? src/bin/initdb/initdb\n? src/bin/initlocation/initlocation\n? src/bin/ipcclean/ipcclean\n? src/bin/pg_config/pg_config\n? src/bin/pg_ctl/pg_ctl\n? src/bin/pg_dump/pg_dump\n? src/bin/pg_dump/pg_restore\n? src/bin/pg_dump/pg_dumpall\n? src/bin/pg_id/pg_id\n? src/bin/pg_passwd/pg_passwd\n? src/bin/pgaccess/pgaccess\n? src/bin/pgtclsh/Makefile.tkdefs\n? src/bin/pgtclsh/Makefile.tcldefs\n? src/bin/pgtclsh/pgtclsh\n? src/bin/pgtclsh/pgtksh\n? src/bin/psql/psql\n? src/bin/scripts/createlang\n? src/include/config.h\n? src/include/stamp-h\n? src/interfaces/ecpg/lib/libecpg.so.3.2.0\n? src/interfaces/ecpg/preproc/ecpg\n? src/interfaces/libpgeasy/libpgeasy.so.2.1\n? src/interfaces/libpgtcl/libpgtcl.so.2.1\n? src/interfaces/libpq/libpq.so.2.1\n? src/interfaces/perl5/blib\n? src/interfaces/perl5/Makefile\n? src/interfaces/perl5/pm_to_blib\n? src/interfaces/perl5/Pg.c\n? src/interfaces/perl5/Pg.bs\n? src/pl/plperl/blib\n? src/pl/plperl/Makefile\n? src/pl/plperl/pm_to_blib\n? src/pl/plperl/SPI.c\n? src/pl/plperl/plperl.bs\n? src/pl/plpgsql/src/libplpgsql.so.1.0\n? src/pl/tcl/Makefile.tcldefs\nIndex: src/backend/storage/buffer/s_lock.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/storage/buffer/s_lock.c,v\nretrieving revision 1.29\ndiff -c -r1.29 s_lock.c\n*** src/backend/storage/buffer/s_lock.c\t2001/01/14 05:08:15\t1.29\n--- src/backend/storage/buffer/s_lock.c\t2001/01/19 20:28:20\n***************\n*** 115,123 ****\n \t}\n }\n \n- \n- \n- \n /*\n * Various TAS implementations that cannot live in s_lock.h as no inline\n * definition exists (yet).\n--- 115,120 ----\n***************\n*** 136,153 ****\n tas_dummy()\t\t\t\t\t\t/* really means: extern int tas(slock_t\n \t\t\t\t\t\t\t\t * **lock); */\n {\n! \t__asm__(\"\t\t\\n\\\n! .global\t\t_tas\t\t\\n\\\n! _tas:\t\t\t\t\\n\\\n! \tmovel sp@(0x4),a0\t\\n\\\n! \ttas a0@\t\t\t\\n\\\n! \tbeq _success\t\t\\n\\\n! \tmoveq #-128,d0\t\\n\\\n! \trts\t\t\t\\n\\\n! _success:\t\t\t\\n\\\n! \tmoveq #0,d0\t\t\\n\\\n! \trts\t\t\t\\n\\\n! \t\");\n }\n \n #endif\t /* __m68k__ */\n--- 133,150 ----\n tas_dummy()\t\t\t\t\t\t/* really means: extern int tas(slock_t\n \t\t\t\t\t\t\t\t * **lock); */\n {\n! \t__asm__ __volatile__(\n! \"\\\n! .global\t\t_tas\t\t\t\t\\n\\\n! _tas:\t\t\t\t\t\t\t\\n\\\n! \t\t\tmovel\tsp@(0x4),a0\t\\n\\\n! \t\t\ttas \ta0@\t\t\t\\n\\\n! \t\t\tbeq \t_success\t\\n\\\n! \t\t\tmoveq \t#-128,d0\t\\n\\\n! \t\t\trts\t\t\t\t\t\\n\\\n! _success:\t\t\t\t\t\t\\n\\\n! \t\t\tmoveq \t#0,d0\t\t\\n\\\n! \t\t\trts\");\n }\n \n #endif\t /* __m68k__ */\n***************\n*** 160,181 ****\n static void\n tas_dummy()\n {\n! __asm__(\" \\n\\\n! .globl tas \\n\\\n! .globl _tas \\n\\\n! _tas: \\n\\\n! tas: \\n\\\n! lwarx r5,0,r3 \\n\\\n! cmpwi r5,0 \\n\\\n! bne fail \\n\\\n! addi r5,r5,1 \\n\\\n! stwcx. r5,0,r3 \\n\\\n! beq success \\n\\\n! fail: li r3,1 \\n\\\n! blr \\n\\\n! success: \\n\\\n! li r3,0 \\n\\\n! blr \\n\\\n \t\");\n }\n \n--- 157,179 ----\n static void\n tas_dummy()\n {\n! __asm__ __volatile__(\n! \"\\\n! \t\t\t.globl tas\t\t\t\\n\\\n! \t\t\t.globl _tas\t\t\t\\n\\\n! _tas:\t\t\t\t\t\t\t\\n\\\n! tas:\t\t\t\t\t\t\t\\n\\\n! \t\t\tlwarx\tr5,0,r3\t\t\\n\\\n! \t\t\tcmpwi \tr5,0\t\t\\n\\\n! \t\t\tbne \tfail\t\t\\n\\\n! 
addi \tr5,r5,1\t\t\\n\\\n! \t\t\tstwcx. \tr5,0,r3\t\t\\n\\\n! \t\t\tbeq \tsuccess\t\t\\n\\\n! fail:\t\tli \t\tr3,1\t\t\\n\\\n! \t\t\tblr \t\t\t\t\\n\\\n! success:\t\t\t\t\t\t\\n\\\n! \t\t\tli \t\tr3,0\t\t\\n\\\n! \t\t\tblr \t\t\t\t\\n\\\n \t\");\n }\n \n***************\n*** 186,206 ****\n static void\n tas_dummy()\n {\n! \t__asm__(\"\t\t\\n\\\n! .global\t\ttas\t\t\\n\\\n! tas:\t\t\t\t\\n\\\n! \t\tlwarx\t5,0,3\t\\n\\\n! \t\tcmpwi\t5,0\t\\n\\\n! \t\tbne\tfail\t\\n\\\n! \t\taddi\t5,5,1\t\\n\\\n! \tstwcx. 5,0,3\t\\n\\\n! \t\tbeq\tsuccess\t\\n\\\n! fail:\t\tli\t3,1\t\\n\\\n! \t\tblr\t\t\\n\\\n! success:\t\t\t\\n\\\n! \t\tli 3,0\t\t\\n\\\n! \tblr\t\t\\n\\\n! \t\");\n }\n \n #endif\t /* __powerpc__ */\n--- 184,204 ----\n static void\n tas_dummy()\n {\n! \t__asm__ __volatile__(\n! \"\\\n! .global tas \t\t\t\t\t\\n\\\n! tas:\t\t\t\t\t\t\t\\n\\\n! \t\t\tlwarx\t5,0,3\t\t\\n\\\n! \t\t\tcmpwi \t5,0 \t\t\\n\\\n! \t\t\tbne \tfail\t\t\\n\\\n! \t\t\taddi \t5,5,1\t\t\\n\\\n! \t\t\tstwcx.\t5,0,3\t\t\\n\\\n! \t\t\tbeq \tsuccess \t\\n\\\n! fail:\t\tli\t\t3,1 \t\t\\n\\\n! \t\t\tblr \t\t\t\t\\n\\\n! success:\t\t\t\t\t\t\\n\\\n! \t\t\tli \t\t3,0\t\t\t\\n\\\n! \t\t\tblr\");\n }\n \n #endif\t /* __powerpc__ */\n***************\n*** 209,230 ****\n static void\n tas_dummy()\n {\n! \t__asm__(\"\t\t\\n\\\n! .global\ttas\t\t\t\\n\\\n! tas:\t\t\t\t\\n\\\n! \t.frame\t$sp, 0, $31\t\\n\\\n! \tll\t$14, 0($4)\t\\n\\\n! \tor\t$15, $14, 1\t\\n\\\n! \tsc\t$15, 0($4)\t\\n\\\n! \tbeq\t$15, 0, fail\t\\n\\\n! \tbne\t$14, 0, fail\t\\n\\\n! \tli\t$2, 0\t\t\\n\\\n! \t.livereg 0x2000FF0E,0x00000FFF\t\\n\\\n! \tj $31\t\t\\n\\\nfail:\t\t\t\t\\n\\\n! \tli\t$2, 1\t\t\\n\\\n! \tj $31\t\t\\n\\\n! \t\");\n }\n \n #endif\t /* __mips__ */\n--- 207,228 ----\n static void\n tas_dummy()\n {\n! \t__asm__ __volatile__(\n! \"\\\n! .global\ttas\t\t\t\t\t\t\\n\\\n! tas:\t\t\t\t\t\t\t\\n\\\n! \t\t\t.frame\t$sp, 0, $31\t\\n\\\n! \t\t\tll\t\t$14, 0($4)\t\\n\\\n! \t\t\tor\t\t$15, $14, 1\t\\n\\\n! \t\t\tsc\t\t$15, 0($4)\t\\n\\\n! \t\t\tbeq\t\t$15, 0, fail\\n\\\n! \t\t\tbne\t\t$14, 0, fail\\n\\\n! \t\t\tli\t\t$2, 0\t\t\\n\\\n! \t\t\t.livereg 0x2000FF0E,0x00000FFF\t\\n\\\n! \t\t\tj\t\t$31\t\t\t\\n\\\nfail:\t\t\t\t\t\t\t\\n\\\n! \t\t\tli\t\t$2, 1\t\t\\n\\\n! \t\t\tj \t$31\");\n }\n \n #endif\t /* __mips__ */\nIndex: src/include/storage/s_lock.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/storage/s_lock.h,v\nretrieving revision 1.82\ndiff -c -r1.82 s_lock.h\n*** src/include/storage/s_lock.h\t2001/01/19 07:03:53\t1.82\n--- src/include/storage/s_lock.h\t2001/01/19 20:28:21\n***************\n*** 103,111 ****\n * Standard _asm format:\n *\n *\t__asm__ __volatile__(\n! *\t\t\t\"command;\"\n! *\t\t\t\"command;\"\n! *\t\t\t\"command;\"\n *\t\t:\t\"=r\"(_res)\t\t\treturn value, in register\n *\t\t:\t\"r\"(lock)\t\t\targument, 'lock pointer', in register\n *\t\t:\t\"r0\");\t\t\t\tinline code uses this register\n--- 103,111 ----\n * Standard _asm format:\n *\n *\t__asm__ __volatile__(\n! 
*\t\t\t\"command\t\\n\"\n! *\t\t\t\"command\t\\n\"\n *\t\t:\t\"=r\"(_res)\t\t\treturn value, in register\n *\t\t:\t\"r\"(lock)\t\t\targument, 'lock pointer', in register\n *\t\t:\t\"r0\");\t\t\t\tinline code uses this register\n***************\n*** 121,128 ****\n \tregister slock_t _res = 1;\n \n \t__asm__ __volatile__(\n! \t\t\t\t\t\t\"lock;\"\n! \t\t\t\t\t\t\"xchgb %0,%1;\"\n \t\t\t:\t\t\t\"=q\"(_res), \"=m\"(*lock)\n \t\t\t:\t\t\t\"0\"(_res));\n \treturn (int) _res;\n--- 121,128 ----\n \tregister slock_t _res = 1;\n \n \t__asm__ __volatile__(\n! \t\t\t\t\t\t\"lock\t\t\t\\n\"\n! \t\t\t\t\t\t\"xchgb\t%0,%1\t\\n\"\n \t\t\t:\t\t\t\"=q\"(_res), \"=m\"(*lock)\n \t\t\t:\t\t\t\"0\"(_res));\n \treturn (int) _res;\n***************\n*** 140,146 ****\n \tlong int\tret;\n \n \t__asm__ __volatile__(\n! \t\t\t\t\t\t\"xchg4 %0=%1,%2;\"\n \t\t\t :\t\t\t\"=r\"(ret), \"=m\"(*lock)\n \t\t\t :\t\t\t\"r\"(1), \"1\"(*lock)\n \t\t\t :\t\t\t\"memory\");\n--- 140,146 ----\n \tlong int\tret;\n \n \t__asm__ __volatile__(\n! \t\t\t\t\t\t\"xchg4 \t%0=%1,%2\t\t\\n\"\n \t\t\t :\t\t\t\"=r\"(ret), \"=m\"(*lock)\n \t\t\t :\t\t\t\"r\"(1), \"1\"(*lock)\n \t\t\t :\t\t\t\"memory\");\n***************\n*** 160,166 ****\n \tregister slock_t _res = 1;\n \n \t__asm__ __volatile__(\n! \t\t\t\t\t\t\"swpb %0, %0, [%3];\"\n \t\t\t:\t\t\t\"=r\"(_res), \"=m\"(*lock)\n \t\t\t:\t\t\t\"0\"(_res), \"r\"(lock));\n \treturn (int) _res;\n--- 160,166 ----\n \tregister slock_t _res = 1;\n \n \t__asm__ __volatile__(\n! \t\t\t\t\t\t\"swpb \t%0, %0, [%3]\t\\n\"\n \t\t\t:\t\t\t\"=r\"(_res), \"=m\"(*lock)\n \t\t\t:\t\t\t\"0\"(_res), \"r\"(lock));\n \treturn (int) _res;\n***************\n*** 180,190 ****\n \tint\t\t\t_res;\n \n \t__asm__\t__volatile__(\n! \t\t\t\t\t\t\"la 1,1;\"\n! \t\t\t\t\t\t\"l 2,%2;\"\n! \t\t\t\t\t\t\"slr 0,0;\"\n! \t\t\t\t\t\t\"cs 0,1,0(2);\"\n! \t\t\t\t\t\t\"lr %1,0;\"\n \t\t :\t\t\t\"=m\"(lock), \"=d\"(_res)\n \t\t :\t\t\t\"m\"(lock)\n \t\t :\t\t\t\"0\", \"1\", \"2\");\n--- 180,190 ----\n \tint\t\t\t_res;\n \n \t__asm__\t__volatile__(\n! \t\t\t\t\t\t\"la\t1,1\t\t\t\\n\"\n! \t\t\t\t\t\t\"l \t2,%2\t\t\t\\n\"\n! \t\t\t\t\t\t\"slr 0,0\t\t\\n\"\n! \t\t\t\t\t\t\"cs 0,1,0(2)\t\\n\"\n! \t\t\t\t\t\t\"lr %1,0\t\t\\n\"\n \t\t :\t\t\t\"=m\"(lock), \"=d\"(_res)\n \t\t :\t\t\t\"m\"(lock)\n \t\t :\t\t\t\"0\", \"1\", \"2\");\n***************\n*** 204,210 ****\n \tregister slock_t _res = 1;\n \n \t__asm__ __volatile__(\n! \t\t\t\t\t\t\"ldstub [%2], %0;\"\n \t\t\t:\t\t\t\"=r\"(_res), \"=m\"(*lock)\n \t\t\t:\t\t\t\"r\"(lock));\n \treturn (int) _res;\n--- 204,210 ----\n \tregister slock_t _res = 1;\n \n \t__asm__ __volatile__(\n! \t\t\t\t\t\t\"ldstub\t[%2], %0\t\t\\n\"\n \t\t\t:\t\t\t\"=r\"(_res), \"=m\"(*lock)\n \t\t\t:\t\t\t\"r\"(lock));\n \treturn (int) _res;\n***************\n*** 222,229 ****\n \tregister int rv;\n \n \t__asm__\t__volatile__(\n! \t\t\t\t\t\t\"tas %1;\"\n! \t\t\t\t\t\t\"sne %0;\"\n \t\t\t :\t\t\t\"=d\"(rv), \"=m\"(*lock)\n \t\t\t :\t\t\t\"1\"(*lock)\n \t\t\t :\t\t\t\"cc\");\n--- 222,229 ----\n \tregister int rv;\n \n \t__asm__\t__volatile__(\n! \t\t\t\t\t\t\"tas %1\t\t\\n\"\n! \t\t\t\t\t\t\"sne %0\t\t\\n\"\n \t\t\t :\t\t\t\"=d\"(rv), \"=m\"(*lock)\n \t\t\t :\t\t\t\"1\"(*lock)\n \t\t\t :\t\t\t\"cc\");\n***************\n*** 249,258 ****\n \tregister\t_res;\n \n \t__asm__ __volatile__(\n! \t\t\t\t\t\t\"movl $1, r0;\"\n! \t\t\t\t\t\t\"bbssi $0, (%1), 1f;\"\n! \t\t\t\t\t\t\"clrl r0;\"\n! 
\t\t\t\t\t\t\"1: movl r0, %0;\"\n \t\t\t:\t\t\t\"=r\"(_res)\n \t\t\t:\t\t\t\"r\"(lock)\n \t\t\t:\t\t\t\"r0\");\n--- 249,258 ----\n \tregister\t_res;\n \n \t__asm__ __volatile__(\n! \t\t\t\t\t\t\"movl \t$1, r0\t\t\t\\n\"\n! \t\t\t\t\t\t\"bbssi \t$0, (%1), 1f\t\\n\"\n! \t\t\t\t\t\t\"clrl \tr0\t\t\t\t\\n\"\n! \t\t\t\t\t\t\"1: movl r0, %0\t\t\t\\n\"\n \t\t\t:\t\t\t\"=r\"(_res)\n \t\t\t:\t\t\t\"r\"(lock)\n \t\t\t:\t\t\t\"r0\");\n***************\n*** 271,278 ****\n \tregister\t_res;\n \n \t__asm__ __volatile__(\n! \t\t\t\t\t\t\"sbitb 0, %0;\"\n! \t\t\t\t\t\t\"sfsd %1;\"\n \t\t\t:\t\t\t\"=m\"(*lock), \"=r\"(_res));\n \treturn (int) _res;\n }\n--- 271,278 ----\n \tregister\t_res;\n \n \t__asm__ __volatile__(\n! \t\t\t\t\t\t\"sbitb \t0, %0\t\\n\"\n! \t\t\t\t\t\t\"sfsd \t%1\t\t\\n\"\n \t\t\t:\t\t\t\"=m\"(*lock), \"=r\"(_res));\n \treturn (int) _res;\n }\n***************\n*** 339,354 ****\n \tregister slock_t _res;\n \n \t__asm__\t__volatile__(\n! \t\t\t\t\t\t\"ldq $0, %0;\"\n! \t\t\t\t\t\t\"bne $0, 2f;\"\n! \t\t\t\t\t\t\"ldq_l %1, %0;\"\n! \t\t\t\t\t\t\"bne %1, 2f;\"\n! \t\t\t\t\t\t\"mov 1, $0;\"\n! \t\t\t\t\t\t\"stq_c $0, %0;\"\n! \t\t\t\t\t\t\"beq $0, 2f;\"\n! \t\t\t\t\t\t\"mb;\"\n! \t\t\t\t\t\t\"br 3f;\"\n! \t\t\t\t\t\t\"2: mov 1, %1;\"\n \t\t\t\t\t\t\"3:\"\n \t\t\t :\t\t\t\"=m\"(*lock), \"=r\"(_res)\n \t\t\t :\n--- 339,354 ----\n \tregister slock_t _res;\n \n \t__asm__\t__volatile__(\n! \t\t\t\t\t\t\"ldq $0, %0\t\\n\"\n! \t\t\t\t\t\t\"bne $0, 2f\t\\n\"\n! \t\t\t\t\t\t\"ldq_l %1, %0\t\\n\"\n! \t\t\t\t\t\t\"bne %1, 2f\t\\n\"\n! \t\t\t\t\t\t\"mov 1, $0\t\\n\"\n! \t\t\t\t\t\t\"stq_c $0, %0\t\\n\"\n! \t\t\t\t\t\t\"beq $0, 2f\t\\n\"\n! \t\t\t\t\t\t\"mb\t\t\t\t\\n\"\n! \t\t\t\t\t\t\"br 3f\t\t\t\\n\"\n! \t\t\t\t\t\t\"2: mov 1, %1\t\\n\"\n \t\t\t\t\t\t\"3:\"\n \t\t\t :\t\t\t\"=m\"(*lock), \"=r\"(_res)\n \t\t\t :", "msg_date": "Fri, 19 Jan 2001 15:44:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [PATCHES] s_lock.h cleanup" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Patrick Dunford [mailto:[email protected]] \n> Sent: Friday, 19 January 2001 15:19\n> To: [email protected]\n> Subject: User management\n> \n> \n> What commands in SQL enable administrators to view user / group \n> information?\n> Are there special SQL commands? Or are there special system tables?\n> \n> =======================================================================\n> Patrick Dunford, Christchurch, NZ - http://pdunford.godzone.net.nz/\n> \n> Not only so, but we also rejoice in our sufferings, because we\n> know that suffering produces perseverance; perseverance, character;\n> and character, hope.\n> -- Romans 5:3-4\n> http://www.heartlight.org/cgi-shl/todaysverse.cgi?day=20010118\n> =======================================================================\n> Created by Mail2Sig - http://pdunford.godzone.net.nz/software/mail2sig/\n> \n> \n", "msg_date": "Fri, 19 Jan 2001 17:10:22 +1300", "msg_from": "\"Patrick Dunford\" <[email protected]>", "msg_from_op": true, "msg_subject": "FW: User management" } ]
[ { "msg_contents": "Here are the URL's for the SQL standards. Where should these go? Are\nthey legal?\n\n\nhttp://www.ansi.org\nhttp://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt\nftp://gatekeeper.dec.com/pub/standards/sql\nftp://jerry.ece.umassd.edu/isowg3/x3h2/Standards/\n\nANSI PDF $20\nISO PDF $310\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Jan 2001 23:54:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Standards URL's" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Here are the URL's for the SQL standards. Where should these go? Are\n> they legal?\n\n> http://www.ansi.org\n> http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt\n> ftp://gatekeeper.dec.com/pub/standards/sql\n> ftp://jerry.ece.umassd.edu/isowg3/x3h2/Standards/\n\n> ANSI PDF $20\n> ISO PDF $310\n\nThe document at CMU is a draft version of SQL92, which is legal to\ndistribute free AFAIK (at least that's been ISO's practice with other\ndraft standards). The documents at DEC are some intermediate version\nthat is probably best ignored. The documents at umassd are SQL99, and\nnot marked as drafts; if they are final text then they might well be\nconsidered pirate copies by ISO. But more likely they are late drafts.\n\nOf course, ANSI will be happy to sell you a certifiably legal copy.\n\nWhat I want to know is what's the difference between the \"ANSI\" and\n\"ISO\" PDF versions that ANSI sells, other than a factor of 15 in price?\nI sent an email inquiry about that to ANSI a month ago, and have not\ngotten an answer. Anyone know?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Jan 2001 00:14:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Standards URL's " }, { "msg_contents": "In one of the emails I deleted from my mailbox, the person stated they\nare identical, except in price.\n\n> Bruce Momjian <[email protected]> writes:\n> > Here are the URL's for the SQL standards. Where should these go? Are\n> > they legal?\n> \n> > http://www.ansi.org\n> > http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt\n> > ftp://gatekeeper.dec.com/pub/standards/sql\n> > ftp://jerry.ece.umassd.edu/isowg3/x3h2/Standards/\n> \n> > ANSI PDF $20\n> > ISO PDF $310\n> \n> The document at CMU is a draft version of SQL92, which is legal to\n> distribute free AFAIK (at least that's been ISO's practice with other\n> draft standards). The documents at DEC are some intermediate version\n> that is probably best ignored. The documents at umassd are SQL99, and\n> not marked as drafts; if they are final text then they might well be\n> considered pirate copies by ISO. But more likely they are late drafts.\n> \n> Of course, ANSI will be happy to sell you a certifiably legal copy.\n> \n> What I want to know is what's the difference between the \"ANSI\" and\n> \"ISO\" PDF versions that ANSI sells, other than a factor of 15 in price?\n> I sent an email inquiry about that to ANSI a month ago, and have not\n> gotten an answer. Anyone know?\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Jan 2001 00:23:41 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Standards URL's" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> In one of the emails I deleted from my mailbox, the person stated they\n> are identical, except in price.\n\nThat's what I kinda suspected, but I'd like to see an authoritative\nstatement. Who was this person?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Jan 2001 00:38:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Standards URL's " }, { "msg_contents": "Tom Lane wrote:\n> \n> \n> What I want to know is what's the difference between the \"ANSI\" and\n> \"ISO\" PDF versions that ANSI sells, other than a factor of 15 in price?\n> I sent an email inquiry about that to ANSI a month ago, and have not\n> gotten an answer. Anyone know?\n>\nIIRC the SQL92 drafts had in every second paragraph notions about \n\"not in ANSI\" and \"not in ISO\" so maybe 5% of things were different.\n\nI hope that they have been able to agree on more things since.\n\n---------------\nHannu\n", "msg_date": "Fri, 19 Jan 2001 11:12:06 +0000", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Standards URL's" }, { "msg_contents": "\nI have just added the following developer's FAQ item. The SQL99 URL in\nthe attached email, ftp://jerry.ece.umassd.edu/isowg3/x3h2/Standards/,\nis no longer active so I did not mention it.\n\n\n---------------------------------------------------------------------------\n\n\n1.12) Where can I get a copy of the SQL standards?\n\nThere are two pertinent standards, SQL92 and SQL99. These standards are\nendorsed by ANSI and ISO. A draft of the SQL92 standard is available at\nhttp://www.contrib.andrew.cmu.edu/~shadow/. The SQL99 standard must be\npurchased from ANSI at http://www.ansi.org/. The main standards document\nis ANSI/ISO/IEC 9075-2-1999.\n\nA summary of these standards is at\nhttp://dbs.uni-leipzig.de/en/lokal/standards.pdf and\nhttp://db.konkuk.ac.kr/present/SQL3.pdf.\n\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Here are the URL's for the SQL standards. Where should these go? Are\n> > they legal?\n> \n> > http://www.ansi.org\n> > http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt\n> > ftp://gatekeeper.dec.com/pub/standards/sql\n> > ftp://jerry.ece.umassd.edu/isowg3/x3h2/Standards/\n> \n> > ANSI PDF $20\n> > ISO PDF $310\n> \n> The document at CMU is a draft version of SQL92, which is legal to\n> distribute free AFAIK (at least that's been ISO's practice with other\n> draft standards). The documents at DEC are some intermediate version\n> that is probably best ignored. The documents at umassd are SQL99, and\n> not marked as drafts; if they are final text then they might well be\n> considered pirate copies by ISO. But more likely they are late drafts.\n> \n> Of course, ANSI will be happy to sell you a certifiably legal copy.\n> \n> What I want to know is what's the difference between the \"ANSI\" and\n> \"ISO\" PDF versions that ANSI sells, other than a factor of 15 in price?\n> I sent an email inquiry about that to ANSI a month ago, and have not\n> gotten an answer. 
Anyone know?\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Apr 2002 01:01:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Standards URL's" } ]
[ { "msg_contents": "\n> > > You don't need to put this check into configure, you can just\n> > > do the check after mktime() is used.\n> >\n> > No, we need that info for the output functions that only use localtime.\n> > The intent is, to not use DST before 1970 on platforms that don't have\n> > mktime for dates before 1970.\n> \n> You can't do execution time checks in configure. You're going to have to\n> do it at run-time.\n\nWe do not need any execution time checks for this at all. The objective is\nto determine whether mktime works for any results that would be negative.\nOn AIX and IRIX all calls to mktime for dates before 1970 lead to a result of \n-1, and the configure test is supposed to give a define for exactly that behavior.\n\nAndreas\n", "msg_date": "Fri, 19 Jan 2001 09:15:01 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: AW: AW: AW: AW: Re: tinterval - operator proble\n\tms o n AI X" }, { "msg_contents": "Zeugswetter Andreas SB writes:\n\n> We do not need any execution time checks for this at all. The objective is\n> to determine whether mktime works for any results that would be negative.\n> On AIX and IRIX all calls to mktime for dates before 1970 lead to a result of\n> -1, and the configure test is supposed to give a define for exactly that behavior.\n\nOkay, so you call mktime with a pre-1970 date once when the system starts\nup or when the particular function is first used and then save the result\nin a static variable.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 19 Jan 2001 17:24:27 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: AW: AW: AW: Re: tinterval - operator proble\n\tms o n AI X" } ]
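For illustration, a minimal sketch of the runtime probe Peter describes above — call mktime() once with a pre-1970 date and cache the verdict in a static variable. The function name and probe date are invented for this sketch; it is not code from any patch in this thread:

#include <string.h>
#include <time.h>

/* Returns 1 if mktime() accepts pre-1970 dates, 0 if it always fails
 * with -1 for them (as reported for AIX and IRIX).  Probes the C
 * library only on the first call; later calls reuse the cached result. */
static int
mktime_handles_pre1970(void)
{
    static int result = -1;        /* -1 means "not yet probed" */

    if (result < 0)
    {
        struct tm tm;

        memset(&tm, 0, sizeof(tm));
        tm.tm_year = 60;           /* 1960 */
        tm.tm_mday = 1;
        tm.tm_isdst = -1;
        result = (mktime(&tm) != (time_t) -1);
    }
    return result;
}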
[ { "msg_contents": "Hi all,\n\nif this is already known, I'm sorry. My search in the Mailing list archives\nvia the web interface yielded nothing.\n\nIn my setup, which is RedHat 7.0, libc-2.2, glibc 2.96 (yes, the bad one)\nand perl 5.6.0 with DBI-1.14 and DBD-Pg-0.95, libpq.so.2.1 segfaults due to\na null pointer dereference in printfPQExpBuffer.\n\n\nThis is my gdb output so far:\n\n[hekker@rincewind dnsadmin2]$ gdb /usr/bin/perl core\nGNU gdb 5.0\nCopyright 2000 Free Software Foundation, Inc.\nGDB is free software, covered by the GNU General Public License, and you\nare\nwelcome to change it and/or distribute copies of it under certain\nconditions.\nType \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB. Type \"show warranty\" for details.\nThis GDB was configured as \"i386-redhat-linux\"...\n(no debugging symbols found)...\nCore was generated by `perl zonemod.pl addzone nudl.bla'.\nProgram terminated with signal 11, Segmentation fault.\nReading symbols from /lib/libnsl.so.1...done.\nLoaded symbols for /lib/libnsl.so.1\nReading symbols from /lib/libdl.so.2...done.\nLoaded symbols for /lib/libdl.so.2\nReading symbols from /lib/libm.so.6...done.\nLoaded symbols for /lib/libm.so.6\nReading symbols from /lib/libc.so.6...done.\nLoaded symbols for /lib/libc.so.6\nReading symbols from /lib/libcrypt.so.1...done.\nLoaded symbols for /lib/libcrypt.so.1\nReading symbols from /lib/ld-linux.so.2...done.\nLoaded symbols for /lib/ld-linux.so.2\nReading symbols from\n/usr/lib/perl5/site_perl/5.6.0/i386-linux/auto/DBI/DBI.so..\n.done.\nLoaded symbols for\n/usr/lib/perl5/site_perl/5.6.0/i386-linux/auto/DBI/DBI.so\nReading symbols from\n/usr/lib/perl5/site_perl/5.6.0/i386-linux/auto/DBD/Pg/Pg.so\n...done.\nLoaded symbols for\n/usr/lib/perl5/site_perl/5.6.0/i386-linux/auto/DBD/Pg/Pg.so\nReading symbols from /opt/postgres/lib/libpq.so.2.1...done.\nLoaded symbols for /opt/postgres/lib/libpq.so.2.1\n#0 _IO_vsnprintf (string=0x0, maxlen=255,\n format=0x401f5ce0 \"PQsendQuery() -- There is no connection to the\nbackend.\\n\n---Type <return> to continue, or q <return> to quit---q\nQuit\n) at vsnprintf.c:127\n127 vsnprintf.c: No such file or directory.\n(gdb) bt\n#0 _IO_vsnprintf (string=0x0, maxlen=255,\n format=0x401f5ce0 \"PQsendQuery() -- There is no connection to the\nbackend.\\n\n\", args=0xbffff620) at vsnprintf.c:127\n#1 0x401f4c2f in printfPQExpBuffer () from /opt/postgres/lib/libpq.so.2.1\n#2 0x401f0307 in PQsendQuery () from /opt/postgres/lib/libpq.so.2.1\n#3 0x401f0dc9 in PQexec () from /opt/postgres/lib/libpq.so.2.1\n#4 0x401e455c in dbd_db_commit ()\n from /usr/lib/perl5/site_perl/5.6.0/i386-linux/auto/DBD/Pg/Pg.so\n#5 0x401e10c6 in XS_DBD__Pg__db_commit ()\n from /usr/lib/perl5/site_perl/5.6.0/i386-linux/auto/DBD/Pg/Pg.so\n#6 0x401d57f7 in XS_DBI_dispatch ()\n from /usr/lib/perl5/site_perl/5.6.0/i386-linux/auto/DBI/DBI.so\n#7 0x809ddae in Perl_pp_entersub ()\n#8 0x809865a in Perl_runops_standard ()\n#9 0x805bfbe in perl_run ()\n#10 0x805bd22 in perl_run ()\n#11 0x8059a11 in main ()\n#12 0x40074a7c in __libc_start_main (main=0x80599a0 <main>, argc=4,\n ubp_av=0xbffffa4c, init=0x8058b80 <_init>, fini=0x80df51c\n<_fini>,\n rtld_fini=0x4000d684 <_dl_fini>, stack_end=0xbffffa44)\n at ../sysdeps/generic/libc-start.c:111\n\n\nIn this case, I called $dbh->commit on the database handler in Perl.\nApparently the connection was not valid at that point. 
I noticed segfaults\nin several such conditions (performing operations on a database handler\nwith an invalid connection).\n\nI was able to trace it back (not too hard anyway, given the above output). A\nsimple check avoids these segfaults, although now an operation on an\nillegal dbh just returns an error without a message:\n\n*** pqexpbuffer.c Fri Jan 19 13:10:48 2001\n--- pqexpbuffer.c.new Fri Jan 19 13:10:42 2001\n***************\n*** 167,172 ****\n--- 167,173 ----\n size_t avail;\n int nprinted;\n\n+ if (str->data == NULL) return;\n resetPQExpBuffer(str);\n\n for (;;)\n\n\nAs this was my first peek into the source of PostgreSQL I can't provide\nmore details. \n\n\nGreets,\n\n-- \nHeinz Ekker, Operations Center \nNetway Communications AG, Hollandstraße 11-13, A-1020 Wien\nphone +43 1 99 599 200/fax +43 1 99 599 191\nhttp://www.netway.at/ mailto:[email protected]\n\n\n", "msg_date": "Fri, 19 Jan 2001 13:35:23 +0100", "msg_from": "\"Heinz Ekker\" <[email protected]>", "msg_from_op": true, "msg_subject": "SegFault in 7.0.3 libpq.so.2.1" }, { "msg_contents": "\"Heinz Ekker\" <[email protected]> writes:\n> In my setup, which is RedHat 7.0, libc-2.2, glibc 2.96 (yes, the bad one)\n> and perl 5.6.0 with DBI-1.14 and DBD-Pg-0.95 libpq.so.2.1 segfaults due to\n> a null pointer dereference in printfPQExpBuffer.\n\n> (gdb) bt\n> #0 _IO_vsnprintf (string=0x0, maxlen=255,\n> format=0x401f5ce0 \"PQsendQuery() -- There is no connection to the\n> backend.\\n\n> \", args=0xbffff620) at vsnprintf.c:127\n> #1 0x401f4c2f in printfPQExpBuffer () from /opt/postgres/lib/libpq.so.2.1\n> #2 0x401f0307 in PQsendQuery () from /opt/postgres/lib/libpq.so.2.1\n> #3 0x401f0dc9 in PQexec () from /opt/postgres/lib/libpq.so.2.1\n> #4 0x401e455c in dbd_db_commit ()\n> from /usr/lib/perl5/site_perl/5.6.0/i386-linux/auto/DBD/Pg/Pg.so\n> #5 0x401e10c6 in XS_DBD__Pg__db_commit ()\n> from /usr/lib/perl5/site_perl/5.6.0/i386-linux/auto/DBD/Pg/Pg.so\n> #6 0x401d57f7 in XS_DBI_dispatch ()\n> from /usr/lib/perl5/site_perl/5.6.0/i386-linux/auto/DBI/DBI.so\n> #7 0x809ddae in Perl_pp_entersub ()\n> #8 0x809865a in Perl_runops_standard ()\n> #9 0x805bfbe in perl_run ()\n> #10 0x805bd22 in perl_run ()\n> #11 0x8059a11 in main ()\n\nI am going to guess that the root problem is in the Perl code: I suspect\nthat PQexec is being handed a bogus PGconn pointer --- possibly a\npointer to a connection object that had already been closed. Can't\nprove it on this amount of data, however.\n\nIt does look like libpq will behave ungracefully in that case :-(.\nWill fix that part.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Jan 2001 10:40:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SegFault in 7.0.3 libpq.so.2.1 " } ]
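Until libpq itself is hardened, a caller-side guard along the lines of Tom's diagnosis is cheap insurance. A sketch using only documented libpq calls (PQstatus, PQexec); the wrapper name is invented here:

#include <stdio.h>
#include <libpq-fe.h>

/* Run a query only if the PGconn still looks like a live connection,
 * instead of letting a NULL or cleanly-closed handle reach PQexec(). */
static PGresult *
safe_exec(PGconn *conn, const char *query)
{
    if (conn == NULL || PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "safe_exec: no usable connection\n");
        return NULL;
    }
    return PQexec(conn, query);
}

Note this only catches NULL or properly-closed handles; a pointer into already-freed memory, as suspected above, cannot be detected from the caller side.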
[ { "msg_contents": "Hi, \n\nI've done some tests with large objects and it works just fine under\nLinux, but when I try the same code under Windows (I use the libpq\nfrontend) it fails for some reason with lo_read() always returning 0.\n\nThe test program I'm using is basically a modified version of the one\nlisted in chapter 3 of the programmer's guide (testlo.c). I first\ntried to compile it using MSVC 6.0, but found that I had to replace\nthe open() calls in lo_import() and lo_export() in the\ninterfaces/libpq/fe-lobj.c file with the native Windows calls. The\nfiles were not opened in the correct mode for some reason.\n\nI then tried to compile the example with gcc and the cygwin tools -\nthe test program worked just fine without modifications to the\nfe-lobj.c file. However, my lo_read() calls still return 0 no matter\nwhat I do.\n\nI've tried this with both 6.5.1, 7.0.2 and 7.0.3 - but they all give\nthe same results.\n\nDoes anyone have any experience with this under Windows? I would\ngreatly appreciate any help in getting to the bottom of this problem.\n\nThanks,\n\n-- \nTrond K.\n", "msg_date": "19 Jan 2001 13:58:26 +0100", "msg_from": "Trond Kjernaasen <[email protected]>", "msg_from_op": true, "msg_subject": "Problems with BLOBs under Windows?" }, { "msg_contents": "Sorry for posting followups on my own mails, but I've noticed that\nI can actually use lo_read() if I read the BLOBs in chunks of \n32760 bytes. If I try to read 32761 bytes it fails for some reason.\n\nThanks,\n\n-- \nTrond K.\n\nTrond Kjernaasen <[email protected]> writes:\n\n> Hi, \n> \n> I've done some tests with large objects and it works just fine under\n> Linux, but when I try the same code under Windows (I use the libpq\n> frontend) it fails for some reason with lo_read() always returning 0.\n> \n> The test program I'm using is basically a modified version of the one\n> listed in chapter 3 of the programmer's guide (testlo.c). I first\n> tried to compile it using MSVC 6.0, but found that I had to replace\n> the open() calls in lo_import() and lo_export() in the\n> interfaces/libpq/fe-lobj.c file with the native Windows calls. The\n> files were not opened in the correct mode for some reason.\n> \n> I then tried to compile the example with gcc and the cygwin tools -\n> the test program worked just fine without modifications to the\n> fe-lobj.c file. However, my lo_read() calls still return 0 no matter\n> what I do.\n> \n> I've tried this with both 6.5.1, 7.0.2 and 7.0.3 - but they all give\n> the same results.\n> \n> Does anyone have any experience with this under Windows? I would\n> greatly appreciate any help in getting to the bottom of this problem.\n", "msg_date": "19 Jan 2001 17:31:37 +0100", "msg_from": "Trond Kjernaasen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with BLOBs under Windows?" }, { "msg_contents": "Trond Kjernaasen <[email protected]> writes:\n> Sorry for posting followups on my own mails, but I've noticed that\n> I can actually use lo_read() if I read the BLOBs in chunks of \n> 32760 bytes. If I try to read 32761 bytes it fails for some reason.\n\nI'm betting that something is rounding up to the next multiple of 8\nbytes, and then something else is trying to fit the result in a short\ninteger. Dunno where though --- AFAIR, all the LO-related code uses\ninteger counts, and I can't think of a good reason for rounding off\nto an alignment multiple either.\n\nCan you pursue this further and identify the culprit? 
I have no time\nfor it at the moment.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Jan 2001 13:43:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Problems with BLOBs under Windows? " } ]
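Pending a real fix, the 32760-byte observation above suggests an easy client-side workaround: read large objects in chunks no bigger than that. A rough sketch with the documented libpq large-object calls; the chunk size comes from the report in this thread, not from any documented limit, and error handling is abbreviated:

#include <stdio.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>    /* for INV_READ */

#define LO_CHUNK 32760         /* largest read size reported to work on Windows */

/* Copy a large object to a stdio file in LO_CHUNK-sized pieces.
 * Must be called inside a transaction (BEGIN/END), since all large
 * object operations require one. */
static void
dump_blob(PGconn *conn, Oid lobjId, FILE *out)
{
    char buf[LO_CHUNK];
    int  fd = lo_open(conn, lobjId, INV_READ);
    int  nbytes;

    if (fd < 0)
        return;                /* real code should report the failure */
    while ((nbytes = lo_read(conn, fd, buf, LO_CHUNK)) > 0)
        fwrite(buf, 1, nbytes, out);
    lo_close(conn, fd);
}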
[ { "msg_contents": "\nFrom: Maneerat Sappaso <[email protected]>\nSubject: install locale.\nDate: Fri, 19 Jan 2001 17:40:56 -0700 (GMT)\nMessage-ID: <[email protected]>\n\n> \t\n> \tDear sir,\n> \t\n> \tI found something wrong when I test the program in\n> \t/usr/src/test/locale/test-pgsql-locale.c\n> \tIt shows \"PostgreSQL compiled without locale support\"\n> \n> \tMy locale data are in /usr/share/locale/th_TH, composed of\n> \t\tLC_CTYPE\n> \t\tLC_COLLATE\n> \t\tLC_MONETARY\n> \t\tLC_NUMERIC\n> \t\tLC_TIME\n> \n> \tI install by these steps\n> \t./configure --enable-locale\n> \t./configure --with-perl\n> \t./configure --with-odbc\n> \t./configure --with-tcl\n> \t./configure --with-x\n\nYou should specify these options at once, i.e.:\n\n./configure --enable-locale --with-perl --with-odbc --with-tcl\n(I don't think you need --with-x at all)\n\n> \tgmake\n> \tgmake install\t\t\n> \n> \tMy OS is Linux (developed from Slackware, kernel 2.2.12).\n> \tWhat is the mistake?\n", "msg_date": "Fri, 19 Jan 2001 23:19:17 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: install locale." } ]
[ { "msg_contents": "I'm trying to make a select which involves two tables within a\nfunction....if the query is written this way: (this is just an example,\nnot my query)\n\na := (select count(*) from xx);\n\nit works fine, but if I type the query like this\n\nselect count(*) from xx;\n\nit throws a message that says unexpected query in exec_stmt_execsql.\nIf anyone knows how to fix it, it would be great.\nThanx\n\n", "msg_date": "Fri, 19 Jan 2001 15:26:53 +0100", "msg_from": "=?iso-8859-1?Q?Sinuh=E9?= Arroyo <[email protected]>", "msg_from_op": true, "msg_subject": "select within a fucntion" }, { "msg_contents": "Sinuhé Arroyo wrote:\n> I'm trying to make a select which involves two tables within a\n> function....if the query is written this way: (this is just an example,\n> not my query)\n>\n> a := (select count(*) from xx);\n>\n> it works fine, but if I type the query like this\n>\n> select count(*) from xx;\n>\n> it throws a message that says unexpected query in exec_stmt_execsql.\n> If anyone knows how to fix it, it would be great.\n> Thanx\n\n What should this \"select count(*) from xx;\" be good for, if\n you don't want to use the result? You can of course do\n \"perform select ...\" because that'd use another PL/pgSQL\n executor construct that doesn't complain about getting an\n unused return value, but I still wonder why you want to waste\n CPU and IO (bought an oversized system?).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Fri, 19 Jan 2001 14:47:59 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select within a fucntion" }, { "msg_contents": "The thing is that I'm not interested in wasting CPU nor is my system\noversized, and of course, \"my friend\", I will use the results of the\nselect, because as a matter of fact it was a select .. into statement the\none I was trying, but to make it easier to understand (I now see you were so\nsmart that this was a waste of time), I just wrote a select statement which,\nby the way, throws the same exception.\nThanks\n\n\n\"Jan Wieck\" <[email protected]> wrote in message\nnews:[email protected]...\n> Sinuhé Arroyo wrote:\n> > I'm trying to make a select which involves two tables within a\n> > function....if the query is written this way: (this is just an example,\n> > not my query)\n> >\n> > a := (select count(*) from xx);\n> >\n> > it works fine, but if I type the query like this\n> >\n> > select count(*) from xx;\n> >\n> > it throws a message that says unexpected query in exec_stmt_execsql.\n> > If anyone knows how to fix it, it would be great.\n> > Thanx\n>\n> What should this \"select count(*) from xx;\" be good for, if\n> you don't want to use the result? You can of course do\n> \"perform select ...\" because that'd use another PL/pgSQL\n> executor construct that doesn't complain about getting an\n> unused return value, but I still wonder why you want to waste\n> CPU and IO (bought an oversized system?).\n>\n>\n> Jan\n>\n> --\n>\n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. 
#\n> # Let's break this rule - forgive me. #\n> #================================================== [email protected] #\n>\n>\n>\n> _________________________________________________________\n> Do You Yahoo!?\n> Get your free @yahoo.com address at http://mail.yahoo.com\n>\n\n\n", "msg_date": "Tue, 23 Jan 2001 10:19:08 +0100", "msg_from": "\"Sinuhé Arroyo Gómez\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select within a fucntion" }, { "msg_contents": "\"Sinuhé Arroyo Gómez\" wrote:\n> \n> The thing is that I'm not interested in wasting CPU nor is my system\n> oversized, and of course, \"my friend\", I will use the results of the\n> select, because as a matter of fact it was a select .. into statement the\n> one I was trying,\n\nThere was probably a syntax error that made it into a SELECT statement \n(which SELECT .. INTO is not)\n\n> but to make it easier to understand (I now see you were so\n> smart that this was a waste of time), I just wrote a select statement which,\n> by the way, throws the same exception.\n\nWhen writing for help or to report a bug, _always_ include the _actual_\ncode that misbehaves, not some other code. It just confuses people.\n\n------------------\nHannu\n", "msg_date": "Wed, 24 Jan 2001 17:43:36 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select within a fucntion" } ]
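To make Jan's and Hannu's points concrete, here is a minimal PL/pgSQL sketch of the two constructs — SELECT ... INTO where the result is kept, PERFORM where it is deliberately discarded. The table, variable, and function names are invented for illustration:

CREATE FUNCTION count_xx() RETURNS int4 AS '
DECLARE
    a int4;
BEGIN
    -- keep the result: SELECT ... INTO (or an assignment, as in the
    -- working example at the top of the thread)
    SELECT count(*) INTO a FROM xx;

    -- discard the result on purpose: PERFORM, otherwise PL/pgSQL
    -- raises the ''unexpected query in exec_stmt_execsql'' error
    PERFORM count(*) FROM xx;

    RETURN a;
END;
' LANGUAGE 'plpgsql';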
[ { "msg_contents": "\n> > I agree that configure is the way to go. What if someone has\n> > installed a third party library to provide a better mktime() and\n> > localtime()?\n> \n> Exactly. What if someone has a binary PostgreSQL package installed, then\n> updates his time library to something supposedly binary compatible and\n> finds out that PostgreSQL still doesn't use the enhanced capabilities?\n> Runtime behaviour checks are done at runtime, it's as simple as that.\n\nSo, you are suggesting to call mktime after every call to localtime,\njust to check whether to output the time in DST before 1970 on all platforms?\nBelieve me, we need a configure check.\n\nBesides, mktime is in libc on AIX and all available versions show the same behavior,\nthus I do not think that it is going to change soon. The only way would be to link\nwith a third party lib, and that would definitely need a new configure run anyway.\n\nThis configure check is needed to avoid an otherwise necessary \n#if defined(_AIX) || defined(__sgi) \nIf, and only if, you say the above is better, then we don't need a configure check.\n\nAndreas\n", "msg_date": "Fri, 19 Jan 2001 17:28:12 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: AW: AW: Re: tinterval - operator problems on AI\n\tX" } ]
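A sketch of the configure-side probe being argued for here: a tiny test program that configure (for example via AC_TRY_RUN) could compile and run, defining a symbol when mktime() rejects all pre-1970 dates. The approach is an illustration, not the actual test from any patch, and the resulting macro name would be whatever the build chooses:

/* exits 0 where mktime() handles pre-1970 dates, 1 where it
 * returns -1 for them (the reported AIX/IRIX behavior) */
#include <time.h>
#include <string.h>

int
main(void)
{
    struct tm tm;

    memset(&tm, 0, sizeof(tm));
    tm.tm_year = 60;    /* 1960: any correct result is negative */
    tm.tm_mday = 1;
    tm.tm_isdst = -1;
    return mktime(&tm) == (time_t) -1 ? 1 : 0;
}

A cross-compile would of course need a hard-wired default, which is one of the usual objections to run-test configure checks.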
[ { "msg_contents": "\n> > We do not need any execution time checks for this at all. The objective is\n> > to determine whether mktime works for any results that would be negative.\n> > On AIX and IRIX all calls to mktime for dates before 1970 lead to a result of\n> > -1, and the configure test is supposed to give a define for exactly that behavior.\n> \n> Okay, so you call mktime with a pre-1970 date once when the system starts\n> up or when the particular function is first used and then save the result\n> in a static variable.\n\nCan anybody else give an OK to this approach, which affects all platforms?\nI am not convinced that this is the way to go.\n\nAndreas\n\nPS: next response not before Monday, I am off now :-)\n", "msg_date": "Fri, 19 Jan 2001 17:38:28 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: AW: AW: AW: AW: AW: Re: tinterval - operator pr\n\toble ms o n AIX" }, { "msg_contents": "> > Okay, so you call mktime with a pre-1970 date once when the system starts\n> > up or when the particular function is first used and then save the result\n> > in a static variable.\n> Can anybody else give an OK to this approach, which affects all platforms?\n> I am not convinced that this is the way to go.\n\nNope. So far we have consensus that #ifdef <something> is the way to go\n(I just made some changes to the date/time stuff to isolate the #ifdef\n__CYGWIN__ garbage, and would like to avoid more cruft), but we are not\nunanimous on the way to get that set.\n\n - Thomas\n", "msg_date": "Fri, 19 Jan 2001 19:58:42 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: AW: AW: AW: AW: Re: tinterval - operator proble ms o\n\tn AIX" } ]
[ { "msg_contents": "Due to a number of interrelated factors, the 7.1beta3-2 source rpm will\nlikely not build properly on any build machine in its present state. It\nwill build just fine on mine -- but mine isn't exactly 'clean' in the\nsense that it has PostgreSQL 7.1beta3 RPMS _installed_ already. \n\nI will be fixing this tomorrow, and will upload a -3 RPMset this weekend\n(provided I don't lose power at home, as a snow/sleet/ice event is\nlikely tomorrow in my area).\n\nThe workaround is to skip building the perl subpackage, by using:\nrpm --define 'perl 0' -ba .....\nor\nrpm --define 'perl 0' --rebuild .....\n\nThanks to Tom Lane for finding this.\n\nSorry for the inconvenience.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 19 Jan 2001 12:46:43 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "RPMset 7.1beta3-2 partially broken." } ]
[ { "msg_contents": "> > Tom, did we ever test this? I think we did and found that \n> > it was the same or worse, right?\n> \n> I tried it and didn't see any noticeable improvement on the particular\n> test case I was using, so I got discouraged and didn't pursue the idea\n> further. I'd like to come back to it someday, though.\n\nI don't know how useful LRU-2 could be, but with WAL we should try\nto reuse undirty free buffers first, not dirty ones, just to postpone\nwrites as long as we can. (BTW, this is what Oracle does.)\nSo, we probably should put new free dirty buffer just before first\nundirty one in LRU.\n\nVadim\n", "msg_date": "Fri, 19 Jan 2001 10:07:27 -0800", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Possible performance improvement: buffer replacemen\n\tt policy" } ]
[ { "msg_contents": "> So, we probably should put new free dirty buffer just before first\n> undirty one in LRU.\n\nOops - new free UNdirty buffer before first DIRTY one in LRU,\nsorry -:)\n\nVadim\n", "msg_date": "Fri, 19 Jan 2001 11:13:54 -0800", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Possible performance improvement: buffer replacemen\n\tt policy" }, { "msg_contents": "Got it. Corrected TODO.detail.\n\n> > So, we probably should put new free dirty buffer just before first\n> > undirty one in LRU.\n> \n> Oops - new free UNdirty buffer before first DIRTY one in LRU,\n> sorry -:)\n> \n> Vadim\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Jan 2001 14:50:02 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible performance improvement: buffer replacemen t\n policy" } ]
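A pseudo-code sketch of the corrected policy — a freed clean buffer is inserted just ahead of the first dirty buffer in the LRU freelist, so clean buffers are victimized first and writes of dirty ones are postponed. All helper names are invented for illustration; only BM_DIRTY and BufferDesc are real bufmgr names, and this is not the actual code:

/* Put a freed buffer back on the LRU freelist (victims are taken
 * from the head).  Dirty buffers go to the tail; clean buffers go
 * in front of the first dirty one, so clean ones are reused first. */
static void
freelist_insert(BufferDesc *buf)
{
    BufferDesc *cur;

    if (buf->flags & BM_DIRTY)
    {
        freelist_append(buf);           /* reuse as late as possible */
        return;
    }
    for (cur = freelist_head(); cur != NULL; cur = cur->freeNext)
        if (cur->flags & BM_DIRTY)
            break;                      /* found the first dirty buffer */
    freelist_insert_before(cur, buf);   /* cur == NULL appends at tail */
}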
[ { "msg_contents": "I had to re-compile and re-install postgresql-7.1-beta1.\nI changed the directory where it was installed from /usr/local/pgsql to \n/dbs/postgres/. After re-installing I copied the data/ directory that was \nunder the old installation to where I have the new installation.\nAfter a bit of work I got it to work, but when I connect to any of the \ndatabases I see tables that don't belong there:\n\npostgres@ultra31:~ > psql horde\nWelcome to psql, the PostgreSQL interactive terminal.\n \nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n \nhorde=# \\dt\n List of relations\n Name | Type | Owner\n-----------------+-------+----------\n active_sessions | table | postgres\n imp_addr | table | postgres\n imp_pref | table | postgres\n pga_forms | table | postgres\n pga_queries | table | postgres\n pga_reports | table | postgres\n pga_schema | table | postgres\n pga_scripts | table | postgres\n(8 rows)\n\n\nAny ideas why those pg* tables are there?\n\nTIA.\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nSystems administrator at math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Fri, 19 Jan 2001 17:23:22 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": true, "msg_subject": "re-instalation" }, { "msg_contents": "On Fri, 19 Jan 2001, Martin A. Marques wrote:\n\n> I had to re-compile and re-install postgresql-7.1-beta1.\n> I changed the directory where it was installed from /usr/local/pgsql to\n> /dbs/postgres/. 
After re-installing I copied the data/ directory that was\n> under the old installation to where I have the new installation.\n> After a bit of work I got it to work, but when I connect to any of the\n> databases I see tables that don't belong there:\n\nYou probably should have used pg_dumpall to backup the databases and\nrestore them after the rebuild. That's a more reliable way of migrating\nyour data.\n\n> horde=# \\dt\n> List of relations\n> Name | Type | Owner\n> -----------------+-------+----------\n> active_sessions | table | postgres\n> imp_addr | table | postgres\n> imp_pref | table | postgres\n> pga_forms | table | postgres\n> pga_queries | table | postgres\n> pga_reports | table | postgres\n> pga_schema | table | postgres\n> pga_scripts | table | postgres\n> (8 rows)\n>\n> Any ideas why those pg* tables are there?\n\nThose are system tables created and used by pgAccess.\n\n-- Brett\n http://www.chapelperilous.net/~bmccoy/\n---------------------------------------------------------------------------\nNever look a gift horse in the mouth.\n\t\t-- Saint Jerome\n\n", "msg_date": "Fri, 19 Jan 2001 20:32:41 -0500 (EST)", "msg_from": "\"Brett W. McCoy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: re-instalation" }, { "msg_contents": "On Fri 19 Jan 2001 22:32, Brett W. McCoy wrote:\n> On Fri, 19 Jan 2001, Martin A. Marques wrote:\n> > I had to re-compile and re-install postgresql-7.1-beta1.\n> > I changed the directory where it was installed from /usr/local/pgsql to\n> > /dbs/postgres/. After re-installing I copied the data/ directory that was\n> > under the old installation to where I have the new installation.\n> > After a bit of work I got it to work, but when I connect to any of the\n> > databases I see tables that don't belong there:\n>\n> You probably should have used pg_dumpall to backup the databases and\n> restore them after the rebuild. That's a more reliable way of migrating\n> your data.\n\nThe problem was that the server got downgraded from Solaris 8 to Solaris 7 \nand the binaries didn't work, so I recompiled. There was no way of using \npg_dump because I couldn't get the postmaster up.\n\n> > horde=# \\dt\n> > List of relations\n> > Name | Type | Owner\n> > -----------------+-------+----------\n> > active_sessions | table | postgres\n> > imp_addr | table | postgres\n> > imp_pref | table | postgres\n> > pga_forms | table | postgres\n> > pga_queries | table | postgres\n> > pga_reports | table | postgres\n> > pga_schema | table | postgres\n> > pga_scripts | table | postgres\n> > (8 rows)\n> >\n> > Any ideas why those pg* tables are there?\n>\n> Those are system tables created and used by pgAccess.\n\nThey never appeared before.\n\nAnd by the way, I see all the system tables when looking at any database with \npgaccess (another thing that didn't happen before).\n\nRegards... :-)\n\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nSystems administrator at math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Sat, 20 Jan 2001 16:29:39 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: re-instalation" }, { "msg_contents": "On Sat, 20 Jan 2001, Martin A. Marques wrote:\n\n> > You probably should have used pg_dumpall to backup the databases and\n> > restore them after the rebuild. That's a more reliable way of migrating\n> > your data.\n>\n> The problem was that the server got downgraded from Solaris 8 to Solaris 7\n> and the binaries didn't work, so I recompiled. There was no way of using\n> pg_dump because I couldn't get the postmaster up.\n\nDoh! You should smack the admin who downgraded without warning you so you\ncould take appropriate measures! :-) Why did you downgrade in the\nfirst place? Just curious...\n\nThe problem with just moving your database to the new location is that\nthere are location dependencies built into it when you use initdb to\ninitialize it, so it's not reliable. Of course, if you have no other\nchoice, you have no other choice... but this may be why hidden tables are\nsuddenly showing up everywhere.\n\n-- Brett\n http://www.chapelperilous.net/~bmccoy/\n---------------------------------------------------------------------------\nI'm a Lisp variable -- bind me!\n\n", "msg_date": "Sat, 20 Jan 2001 14:41:51 -0500 (EST)", "msg_from": "\"Brett W. McCoy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: re-instalation" }, { "msg_contents": "\"Brett W. McCoy\" <[email protected]> writes:\n> On Sat, 20 Jan 2001, Martin A. Marques wrote:\n>> The problem was that the server got downgraded from Solaris 8 to Solaris 7\n>> and the binaries didn't work, so I recompiled. 
There was no way of using\n>> pg_dump because I couldn't get the postmaster up.\n\n> The problem with just moving your database to the new location is that\n> there are location dependencies built into it when you use initdb to\n> initialize it, so it's not reliable.\n\nAFAIK, there are *not* any location dependencies in a standard PG\ndatabase (other than path names associated with alternate database\nlocations, if you are so foolish as to use hard-wired alternate-location\npathnames). So in theory Martin should have been able to tar up the\nentire $PGDATA tree and move it somewhere else. I'm not sure why he's\nreporting a problem, but I don't think that's it.\n\nA more plausible line of thought is that there is some environmental\ndifference between the old and new setup --- perhaps PG was compiled\nwith different options (with/without locale or multibyte support), or\nis being run under a different LOCALE environment, etc. I'm not sure\nexactly why that sort of thing would manifest itself as re-appearance\nof tables that had been deleted in the old setup, but that's my bet.\n\nBefore you write this off as unreasonable, consider that changing\nlocales renders indexes on text columns effectively corrupt, if there\nare any entries whose sort order changes in the new locale. I don't\nquite see the chain of events from corrupt indexes on pg_class to\nunexpected table entries, but it doesn't seem as unlikely as blaming\nthe problem on a physical move of the database directory tree.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Jan 2001 16:09:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: re-instalation " }, { "msg_contents": "On Sat, 20 Jan 2001, Tom Lane wrote:\n\n> > The problem with just moving your database to the new location is that\n> > there are location dependencies built into it when you use initdb to\n> > initialize it, so it's not reliable.\n>\n> AFAIK, there are *not* any location dependencies in a standard PG\n> database (other than path names associated with alternate database\n> locations, if you are so foolish as to use hard-wired alternate-location\n> pathnames). So in theory Martin should have been able to tar up the\n> entire $PGDATA tree and move it somewhere else. I'm not sure why he's\n> reporting a problem, but I don't think that's it.\n\nAh, right... I misunderstood and thought he had also changed to a\ndifferent version because of OS incompatibilities. Never mind. :-)\n\n-- Brett\n http://www.chapelperilous.net/~bmccoy/\n---------------------------------------------------------------------------\nVax Vobiscum\n\n", "msg_date": "Sat, 20 Jan 2001 16:34:05 -0500 (EST)", "msg_from": "\"Brett W. McCoy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: re-instalation " }, { "msg_contents": "Martin A. 
Marques writes:\n\n> > > Any ideas why those pg* tables are there?\n> >\n> > Those are system tables created and used by pgAccess.\n>\n> They never appeared before.\n\nYou probably never ran pgaccess before.\n\n> And by the way, I see all the system tables when looking at any database with\n> pgaccess (another thing that didn't happen before).\n\nGo to the menu Databases -> Preferences and check off \"View system\ntables\".\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 20 Jan 2001 22:43:37 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: re-instalation" }, { "msg_contents": "On Sat 20 Jan 2001 18:43, Peter Eisentraut wrote:\n> Martin A. Marques writes:\n> > > > Any ideas why those pg* tables are there?\n> > >\n> > > Those are system tables created and used by pgAccess.\n> >\n> > They never appeared before.\n>\n> You probably never ran pgaccess before.\n\nNo, I used to run pgaccess daily.\n\n> > And by the way, I see all the system tables when looking at any database\n> > with pgaccess (another thing that didn't happen before).\n>\n> Go to the menu Databases -> Preferences and check off \"View system\n> tables\".\n\nGreat Peter!! That did it! \n(I'm clueless!!!! :-P )\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nSystems administrator at math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Sat, 20 Jan 2001 18:58:52 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: re-instalation" } ]
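For the record, the pg_dumpall route recommended earlier in the thread looks roughly like this; the paths are illustrative, and the dump must be taken with the old binaries while the old postmaster is still running:

# with the old server still running:
pg_dumpall > /tmp/pg_backup.sql

# after installing the new binaries:
initdb -D /dbs/postgres/data
postmaster -D /dbs/postgres/data &
psql -f /tmp/pg_backup.sql template1

When the old postmaster cannot be started at all, as happened here, moving the intact $PGDATA tree (or restoring it from a file-level backup) is the remaining option, as Tom notes above.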
[ { "msg_contents": "> pga_forms | table | postgres\n> pga_queries | table | postgres\n> pga_reports | table | postgres\n> pga_schema | table | postgres\n> pga_scripts | table | postgres\n>(8 rows)\n>\n>\n>Any ideas why those pg* tables are there?\n>\n\nYou have apparently used pgaccess at least once. It creates these tables\nfor itself.\n\nLen Morgan\n\n", "msg_date": "Fri, 19 Jan 2001 14:36:11 -0600", "msg_from": "\"Len Morgan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: re-instalation" }, { "msg_contents": "On Fri 19 Jan 2001 17:36, Len Morgan wrote:\n> > pga_forms | table | postgres\n> > pga_queries | table | postgres\n> > pga_reports | table | postgres\n> > pga_schema | table | postgres\n> > pga_scripts | table | postgres\n> >(8 rows)\n> >\n> >\n> >Any ideas why those pg* tables are there?\n>\n> You have apparently used pgaccess at least once. It creates these tables\n> for itself.\n\nYes, but they shouldn't be visible (am I wrong?), just like I shouldn't be \nseeing, when connecting to the same db with pgaccess, tables like:\npg_proc\npg_inherits\npg_type, etc.\n\nI didn't have these problems before doing this re-installation.\n\nTIA\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nSystems administrator at math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Fri, 19 Jan 2001 17:40:41 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: re-instalation" } ]
[ { "msg_contents": "Vadim,\n\nI have committed changes to separate the notions of critical sections\nand sections that just want to hold off cancel/die interrupts, as we\ndiscussed. While I was doing that, I noticed a couple of places that\nI think you should take a second look at:\n\n1. src/backend/access/nbtree/nbtinsert.c, line 867: shouldn't this\nEND_CRIT_SECTION be moved up to before the _bt_wrtbuf call? It seems\nto me that an elog during the wrtbuf is not a critical failure. If\nthis code is correct, then all the other crit sections are wrong,\nbecause all of them release the crit section before writing buffers,\nnot after.\n\n2. src/backend/commands/vacuum.c, line 1907: does this\nSTART_CRIT_SECTION really have to be here, and not down at line 1935,\njust before PageRepairFragmentation()? I really don't like the idea of\nturning those elogs that are inside the loop into reasons to force a\nsystem-wide restart.\n\n3. src/backend/access/transam/xlog.c, routine CreateCheckPoint:\ndoes this *entire* routine need to be a critical section? Again,\nI fear a shotgun approach will mean a net decrease in reliability,\nnot an improvement. How much of this code really has to be critical?\nDo you really want a failure in, say, MoveOfflineLogs to take down the\nwhole database?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Jan 2001 17:24:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "A couple of fishy-looking critical sections" } ]
[ { "msg_contents": "> 1. src/backend/access/nbtree/nbtinsert.c, line 867: shouldn't this\n> END_CRIT_SECTION be moved up to before the _bt_wrtbuf call? It seems\n> to me that an elog during the wrtbuf is not a critical failure. If\n> this code is correct, then all the other crit sections are wrong,\n> because all of them release the crit section before writing buffers,\n> not after.\n\nOk to move it up.\n\n> 2. src/backend/commands/vacuum.c, line 1907: does this\n> START_CRIT_SECTION really have to be here, and not down at line 1935,\n> just before PageRepairFragmentation()? I really don't like the idea\n> of turning those elogs that are inside the loop into reasons to force a\n> system-wide restart.\n\nAgreed.\n\n> 3. src/backend/access/transam/xlog.c, routine CreateCheckPoint:\n> does this *entire* routine need to be a critical section? Again,\n> I fear a shotgun approach will mean a net decrease in reliability,\n> not an improvement. How much of this code really has to be critical?\n\nWhen postmaster has to create Checkpoint this routine is called from\nbootstrap.c:BootstrapMain() - ie without normal initialization, so\nI don't know result of elog(ERROR) in this case -:(\nProbably I should spend more time in this area in attempt to make\nCheckpoints rollback-able, but ...\nAnyway it seems that the real source of elog(ERROR) there is\nFlushBufferPool(). Maybe we could just initialize Warn_restart in\nthat point? All other places are mostly related to WAL itself\n- they are more critical and less likely subject to elog(ERROR).\n\n> Do you really want a failure in, say, MoveOfflineLogs to take down the\n> whole database?\n\nWell, this one could be changed at least (ie STOP --> LOG).\n\nVadim\n", "msg_date": "Fri, 19 Jan 2001 14:53:09 -0800", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: A couple of fishy-looking critical sections" }, { "msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n>> 3. src/backend/access/transam/xlog.c, routine CreateCheckPoint:\n>> does this *entire* routine need to be a critical section? Again,\n>> I fear a shotgun approach will mean a net decrease in reliability,\n>> not an improvement. How much of this code really has to be critical?\n\n> When postmaster has to create Checkpoint this routine is called from\n> bootstrap.c:BootstrapMain() - ie without normal initialization, so\n> I don't know result of elog(ERROR) in this case -:(\n\nI believe elog(ERROR) will be treated like FATAL in this case (because\nWarn_restart isn't set). So the checkpoint process will clean up and\nexit, but there wouldn't be a system-wide restart were it not for the\ncritical section.\n\nThe question that's bothering me is whether a system-wide restart is\nactually going to make things better, rather than worse, if the\ncheckpoint process has a problem ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Jan 2001 18:00:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A couple of fishy-looking critical sections " } ]
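To make the placement question concrete, a schematic of the pattern Tom is arguing for in point 1 — keep the critical section tight around the WAL-logged page change and leave it before the buffer write, so a failure inside the write is an ordinary elog(ERROR) instead of a forced system-wide restart. This is an illustration of the pattern, not actual nbtree code:

START_CRIT_SECTION();       /* cancel/die held off; an elog(ERROR) here forces restart */

/* ... modify the page and insert the matching WAL record ... */

END_CRIT_SECTION();         /* failures become ordinary errors again */

_bt_wrtbuf(rel, buf);       /* write the buffer outside the critical section */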
[ { "msg_contents": "I am a Comp. Sci. student at Ryerson Polytechnic University in toronto. I am in the midst of a software engineering project that involves the development of a (possibly) relational database on a RedHat 6.2 development environment, we are coding in C. now my question is, how closely related are Postgre and MySQL, and are the necessary PostgreSQL libraries included in RedHat 6.2?\n\nthanks,\n\nCameron Laird", "msg_date": "Fri, 19 Jan 2001 20:28:53 -0500", "msg_from": "\"Cameron Laird\" <[email protected]>", "msg_from_op": true, "msg_subject": "question" }, { "msg_contents": "On Friday 19 January 2001 20:28, Cameron Laird wrote:\n\n> > I am a Comp. Sci. student at Ryerson Polytechnic University in toronto. I\n> am in the midst of a software engineering project that involves the\n> development of a (possibly) relational database on a RedHat 6.2 development\n> environment, we are coding in C. now my question is, how closely related\n> are Postgre and MySQL, and are the necessary PostgreSQL libraries included\n> in RedHat 6.2?\n\nAFAIK, PostgreSQL and MySQL are from totally different codebases (never \nshared any code). PostgreSQL is BSD license and MySQL is now GNU GPL. They \nboth implement SQL to varying levels of conformance. PostgreSQL has some \nobject-oriented features, like table inheritance. Try them both and see what \nyou like, but I think you'll find PostgreSQL more interesting. For instance, \nPostgres can load C functions from shared objects and use them as functions \nin SQL, user defined aggregates, procedural language call handlers, and to \ncreate user defined data types (and possibly other things). The 7.1 beta has \nimplemented some great new features, like write-ahead logging (WAL) and \ncomplete support for SQL table joins, among other things. A C project can do \na lot with Postgres.\n\nRPM packages of PostgreSQL are available at:\n\n\thttp://www.postgresql.org/sites.html\n\nYou'll have to check redhat.com or do an rpm query to see if it should be or \nis installed on RedHat 6.2.\n\n>\n> thanks,\n>\n> Cameron Laird\n\n-- \n-------- Robert B. Easter [email protected] ---------\n-- CompTechNews Message Board http://www.comptechnews.com/ --\n-- CompTechServ Tech Services http://www.comptechserv.com/ --\n---------- http://www.comptechnews.com/~reaster/ ------------\n", "msg_date": "Tue, 23 Jan 2001 04:09:49 -0500", "msg_from": "\"Robert B. Easter\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question" }, { "msg_contents": "\"Robert B. Easter\" <[email protected]> writes:\n> You'll have to check redhat.com or do an rpm query to see if it should be or \n> is installed on RedHat 6.2.\n\nI believe redhat does ship Postgres RPMs, but they're PG version\n6.5.something, which is pretty old --- ie, fewer features and more bugs\nthan later versions. 
You really ought to install PG 7.0.3 (use RPMs\nfrom www.postgresql.org) or if you're feeling bleeding edge, try out the\n7.1 beta distribution.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Jan 2001 11:12:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question " }, { "msg_contents": "Tom Lane wrote:\n> \"Robert B. Easter\" <[email protected]> writes:\n> > You'll have to check redhat.com or do an rpm query to see if it should be or\n> > is installed on RedHat 6.2.\n\n> I believe redhat does ship Postgres RPMs, but they're PG version\n> 6.5.something, which is pretty old --- ie, fewer features and more bugs\n> than later versions. You really ought to install PG 7.0.3 (use RPMs\n> from www.postgresql.org) or if you're feeling bleeding edge, try out the\n> 7.1 beta distribution.\n\nRH 6.2 shipped with PostgreSQL 6.5.3, RPM release 6. PostgreSQL 7.0 was\nin beta at the time.\n\nPostgreSQL 7.0 was first shipped as 7.0.2, release 17, in RedHat 7.0.\n\nRPMS for PostgreSQL 7.0.3 for RedHat 6.2 are available on\nftp.postgresql.org, as Tom mentioned, in\n/pub/binary/v7.0.3/RPMS/RedHat-6.2 \n\nThe upgrade from 6.5.3 RPM to 7.0.3 RPM is not the easiest in the world\n-- please be sure to read the README.rpm-dist file in the main\npostgresql RPM.\n\nAlso, you will need to read this file to see which packages you want --\nfor a full client-server install, install postgresql and\npostgresql-server. Pick and choose the other clients and development\nRPM's you need from there.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 23 Jan 2001 11:37:21 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question" }, { "msg_contents": "Ned Lilly wrote:\n> \n> Great Bridge makes PostgreSQL 7.0.3 RPMs for 8 different Linux distros\n> at http://www.greatbridge.com/download ...\n\nFor the record (with permission of Great Bridge a few months back), I\nwant to thank Great Bridge for helping with the development of the\ncurrent Official RPMs, including financial assistance (:-)), servers\nrunning the distributions in question for building/testing, and top-tier\nprofessional feedback (when they say this release has been\nprofessionally QA tested, they _mean_ it!) on my little project.\n\nKudos to GreatBridge!\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 23 Jan 2001 12:14:55 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "GreatBridge RPMs (was: Re: question)" }, { "msg_contents": "Great Bridge makes PostgreSQL 7.0.3 RPMs for 8 different Linux distros \nat http://www.greatbridge.com/download ...\n\n\n\nTom Lane wrote:\n\n> \"Robert B. Easter\" <[email protected]> writes:\n> \n>> You'll have to check redhat.com or do an rpm query to see if it should be or \n>> is installed on RedHat 6.2.\n> \n> \n> I believe redhat does ship Postgres RPMs, but they're PG version\n> 6.5.something, which is pretty old --- ie, fewer features and more bugs\n> than later versions. 
You really ought to install PG 7.0.3 (use RPMs\n> from www.postgresql.org) or if you're feeling bleeding edge, try out the\n> 7.1 beta distribution.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n-- \n----------------------------------------------------\nNed Lilly e: [email protected]\nVice President w: www.greatbridge.com\nEvangelism / Hacker Relations v: 757.233.5523\nGreat Bridge, LLC f: 757.233.5555\n\n", "msg_date": "Tue, 23 Jan 2001 11:38:13 -0600", "msg_from": "Ned Lilly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question" }, { "msg_contents": "I've just moved from Redhat to Mandrake.\n\nBut do I have to use the Mandrake RPM? Doesn't the standard RPM work on\nmandrake? \n\nWhat is the difference between these two RPM's?\n\nI'd hate to wait for the Mandrake specific RPM for every release.\n\nPoul L. Christiansen\n\nOn Tue, 23 Jan 2001, Lamar Owen wrote:\n\n> Ned Lilly wrote:\n> > \n> > Great Bridge makes PostgreSQL 7.0.3 RPMs for 8 different Linux distros\n> > at http://www.greatbridge.com/download ...\n> \n> For the record (with permission of Great Bridge a few months back), I\n> want to thank Great Bridge for helping with the development of the\n> current Official RPMs, including financial assistance (:-)), servers\n> running the distributions in question for building/testing, and top-tier\n> professional feedback (when they say this release has been\n> professionally QA tested, they _mean_ it!) on my little project.\n> \n> Kudos to GreatBridge!\n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n> \n\n", "msg_date": "Tue, 23 Jan 2001 19:30:19 +0100 (MET)", "msg_from": "Poul Laust Christiansen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GreatBridge RPMs (was: Re: question)" }, { "msg_contents": "> I'd hate to wait for the Mandrake specific RPM for every release.\n\nI've been building the Mandrake RPMs, and there is currently a small\nproblem in the build which I haven't had time to pursue (yet). The\nMandrake distro should be available on the postgresql.org ftp site very\nsoon after release.\n\n - Thomas\n", "msg_date": "Tue, 23 Jan 2001 19:18:00 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GreatBridge RPMs (was: Re: question)" }, { "msg_contents": "Poul Laust Christiansen writes:\n\n> I've just moved from Redhat to Mandrake.\n>\n> But do I have to use the Mandrake RPM? Doesn't the standard RPM work on\n> mandrake?\n\nIn general, RPMs only work on systems that are the same as the one they\nwere built on, for various degrees of \"same\". If you're not picking up\nthe RPMs from your distributor or you're sure that the builder used the\nsame version as you have, it's always prudent to rebuild from the source\nRPM. That should work, unless the package spec makes some unportable\nassumptions, such as different file system layouts. But that is often\nonly an annoyance, not a real problem.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 23 Jan 2001 20:51:39 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GreatBridge RPMs (was: Re: question)" }, { "msg_contents": "On Tue, 23 Jan 2001, Peter Eisentraut wrote:\n\n> In general, RPMs only work on systems that are the same as the one they\n> were built on, for various degrees of \"same\". 
If you're not picking up\n> the RPMs from your distributor or you're sure that the builder used the\n> same version as you have, it's always prudent to rebuild from the source\n> RPM. That should work, unless the package spec makes some unportable\n> assumptions, such as different file system layouts. But that is often\n> only an annoyance, not a real problem.\n\nWhile trying to get the FrontPage Extensions installed on a RedHat/Apache\nsystem I ran into to different version numbering systems between RedHat\nand Mandrake. Major pain. One called for perl 5.6.0-xxx and the other\nperl 5.60-xxx. After several hours of screwing around with it I took a\nbreak. Fortunately before I spent any more time on it the client I was\ngoing to do it for decided to not run them with Apache.\n\nI'm glad to see GreatBridge will be providing RPM's for many\ndistributions. Though I do tend to re-compile from source I've found that\nthose mdk's don't work too good with RHL.\n\nRod\n-- \n\n\n", "msg_date": "Tue, 23 Jan 2001 12:57:09 -0800 (PST)", "msg_from": "\"Roderick A. Anderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GreatBridge RPMs (was: Re: question)" }, { "msg_contents": "\"Roderick A. Anderson\" wrote:\n> On Tue, 23 Jan 2001, Peter Eisentraut wrote:\n> > RPM. That should work, unless the package spec makes some unportable\n> > assumptions, such as different file system layouts. But that is often\n> > only an annoyance, not a real problem.\n\n> I'm glad to see GreatBridge will be providing RPM's for many\n> distributions. Though I do tend to re-compile from source I've found that\n> those mdk's don't work too good with RHL.\n\nAnd I _love_ to get feedback about the nonportable things I do in the\nspec files (right, Peter ? :-)).\n\nI am trying (and Great Bridge helped) to get a fully\ndistribution-independent source RPM working. I am closer than I was --\nthe same spec file now works on RedHat, Mandrake, Turbo, and (to a\nlesser extent) Caldera, and soon will work seamlessly on SuSE. It may\nvery well work on others. The hooks are there now for SuSE -- just some\nfill-in work left to be done.\n\nPortability is hard. C programmers have known this for some time -- but\nthe RPM specfile doesn't really lend itself to vast portability. \nAlthough, I am learning some real tricks that really help.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 23 Jan 2001 16:55:14 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GreatBridge RPMs (was: Re: question)" }, { "msg_contents": "Ypu can always use the source to build and install it instead of the RPM.\n\nAt 07:18 PM 1/23/2001 +0000, Thomas Lockhart wrote:\n>> I'd hate to wait for the Mandrake specific RPM for every release.\n>\n>I've been building the Mandrake RPMs, and there is currently a small\n>problem in the build which I haven't had time to pursue (yet). The\n>Mandrake distro should be available on the postgresql.org ftp site very\n>soon after release.\n>\n> - Thomas\n>\n", "msg_date": "Wed, 24 Jan 2001 07:30:18 +0000", "msg_from": "Samy Elashmawy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: GreatBridge RPMs (was: Re: question)" }, { "msg_contents": "Thomas Lockhart ([email protected]) wrote:\n\n> > I'd hate to wait for the Mandrake specific RPM for every release.\n> \n> I've been building the Mandrake RPMs, and there is currently a small\n> problem in the build which I haven't had time to pursue (yet). 
The\n> Mandrake distro should be available on the postgresql.org ftp site very\n> soon after release.\n> \n> - Thomas\n\nI use Mandrake 7.2 and since we are talking about mandrake RPMS . . . \nThe psql application that shipped with 7.2 was not compiled with ncurses\nsupport, so the up-arrow key did not work. Could this be changed\nwith the next release? I recompiled postgres to upgrade to 7.0.3 \n(and compile perl support) and when I manually compiled it, the\nup arrow worked in psql. \n-- \n----------------------------------------------------------------\nTravis Bauer | CS Grad Student | IU |www.cs.indiana.edu/~trbauer\n----------------------------------------------------------------\n", "msg_date": "Wed, 24 Jan 2001 08:23:26 -0500", "msg_from": "Travis Bauer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GreatBridge RPMs (was: Re: question)" } ]
[ { "msg_contents": "Can someone comment on this? I believe we were waiting for a comment\nfrom Jan.\n\n> I suggest that the CREATE RULE syntax be relaxed so that\n> it is legal to have a list of SELECT commands in a rule.\n> \n> I'll argue that:\n> 1) The change is simple (2 lines in gram.y). Diff is\n> attached.\n> 2) It breaks nothing (more things become legal)\n> 3) It makes the parser agree with the published syntax,\n> which currently makes no distinction about SELECT\n> commands.\n> 4) It makes the language more \"regular\" in that SELECT\n> commands are no longer special.:\n> \n> Diff is an attachment because of line-wrapping. I'm new\n> here\n> so I don't know if this works. But as I said it's only 2\n> lines.\n> \n> ++ kevin \n> \n> \n> \n> --- \n> Kevin O'Gorman (805) 650-6274 mailto:[email protected]\n> Permanent e-mail forwarder: \n> mailto:Kevin.O'[email protected]\n> At school: mailto:[email protected]\n> Web: http://www.cs.ucsb.edu/~kogorman/index.html\n> Web: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n> \n> \"There is a freedom lying beyond circumstance,\n> derived from the direct intuition that life can\n> be grounded upon its absorption in what is\n> changeless amid change\" \n> -- Alfred North Whitehead\n\n> --- gram.y.orig\tThu May 25 15:42:17 2000\n> +++ gram.y\tThu Oct 19 14:34:47 2000\n> @@ -2585,7 +2585,6 @@\n> \t\t;\n> \n> RuleActionList: NOTHING\t\t\t\t{ $$ = NIL; }\n> -\t\t| SelectStmt\t\t\t\t\t{ $$ = lcons($1, NIL); }\n> \t\t| RuleActionStmt\t\t\t\t{ $$ = lcons($1, NIL); }\n> \t\t| '[' RuleActionMulti ']'\t\t{ $$ = $2; }\n> \t\t| '(' RuleActionMulti ')'\t\t{ $$ = $2; } \n> @@ -2607,6 +2606,7 @@\n> \t\t;\n> \n> RuleActionStmt:\tInsertStmt\n> +\t\t| SelectStmt\n> \t\t| UpdateStmt\n> \t\t| DeleteStmt\n> \t\t| NotifyStmt\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Jan 2001 22:06:56 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Rules and SELECT" } ]
[ { "msg_contents": "I see now that this has been applied. Thanks for pointing it out.\n\t\n\tRuleActionStmt:\tSelectStmt\n\t\t\t| InsertStmt\n\t\t\t| UpdateStmt\n\t\t\t| DeleteStmt\n\t\t\t| NotifyStmt\n\t\t\t;\n\n> I suggest that the CREATE RULE syntax be relaxed so that\n> it is legal to have a list of SELECT commands in a rule.\n> \n> I'll argue that:\n> 1) The change is simple (2 lines in gram.y). Diff is\n> attached.\n> 2) It breaks nothing (more things become legal)\n> 3) It makes the parser agree with the published syntax,\n> which currently makes no distinction about SELECT\n> commands.\n> 4) It makes the language more \"regular\" in that SELECT\n> commands are no longer special.:\n> \n> Diff is an attachment because of line-wrapping. I'm new\n> here\n> so I don't know if this works. But as I said it's only 2\n> lines.\n> \n> ++ kevin \n> \n> \n> \n> --- \n> Kevin O'Gorman (805) 650-6274 mailto:[email protected]\n> Permanent e-mail forwarder: \n> mailto:Kevin.O'[email protected]\n> At school: mailto:[email protected]\n> Web: http://www.cs.ucsb.edu/~kogorman/index.html\n> Web: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n> \n> \"There is a freedom lying beyond circumstance,\n> derived from the direct intuition that life can\n> be grounded upon its absorption in what is\n> changeless amid change\" \n> -- Alfred North Whitehead\n\n> --- gram.y.orig\tThu May 25 15:42:17 2000\n> +++ gram.y\tThu Oct 19 14:34:47 2000\n> @@ -2585,7 +2585,6 @@\n> \t\t;\n> \n> RuleActionList: NOTHING\t\t\t\t{ $$ = NIL; }\n> -\t\t| SelectStmt\t\t\t\t\t{ $$ = lcons($1, NIL); }\n> \t\t| RuleActionStmt\t\t\t\t{ $$ = lcons($1, NIL); }\n> \t\t| '[' RuleActionMulti ']'\t\t{ $$ = $2; }\n> \t\t| '(' RuleActionMulti ')'\t\t{ $$ = $2; } \n> @@ -2607,6 +2606,7 @@\n> \t\t;\n> \n> RuleActionStmt:\tInsertStmt\n> +\t\t| SelectStmt\n> \t\t| UpdateStmt\n> \t\t| DeleteStmt\n> \t\t| NotifyStmt\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Jan 2001 22:45:27 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Rule/SELECT" } ]
[ { "msg_contents": "It is reported that building C++ interface on FreeBSD 4.2 fails.\nCan someone comment on this?\n\nc++ -pipe -O3 -mpentiumpro -Wall -fpic -DPIC -I/usr/local/ssl/include\n-I../../../src/include -I../../../src/interfaces/libpq -c -o pgconnec\ntion.o pgconnection.cc\nIn file included from ../../../src/include/postgres.h:40,\n from pgconnection.h:41,\n from pgconnection.cc:18:\n../../../src/include/c.h:997: conflicting types for `int sys_nerr'\n/usr/include/stdio.h:224: previous declaration as `const int sys_nerr'\ngmake[3]: *** [pgconnection.o] Error 1\ngmake[3]: Leaving directory `/home/ichimura/Work/postgresql-snapshot/s\nrc/interfaces/libpq++'\ngmake[2]: *** [all] Error 2\ngmake[2]: Leaving directory `/home/ichimura/Work/postgresql-snapshot/s\nrc/interfaces'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ichimura/Work/postgresql-snapshot/s\nrc'\ngmake: *** [all] Error 2\n", "msg_date": "Sat, 20 Jan 2001 13:12:27 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "C++ interface build on FreeBSD 4.2 broken?" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> It is reported that building C++ interface on FreeBSD 4.2 fails.\n> Can someone comment on this?\n\n> In file included from ../../../src/include/postgres.h:40,\n> from pgconnection.h:41,\n> from pgconnection.cc:18:\n> ../../../src/include/c.h:997: conflicting types for `int sys_nerr'\n> /usr/include/stdio.h:224: previous declaration as `const int sys_nerr'\n> gmake[3]: *** [pgconnection.o] Error 1\n\nA quick look at hub.org confirms:\n\n> uname -a \nFreeBSD hub.org 4.2-STABLE FreeBSD 4.2-STABLE #0: Tue Dec 19 07:52:31 EST 2000 [email protected]:/home/projects/pgsql/operating_system/obj/home/projects/pgsql/operating_system/src/sys/kernel i386\n\n> grep sys_nerr /usr/include/*.h\n/usr/include/stdio.h:extern __const int sys_nerr; /* perror(3) external variables */\n\nSigh. Looks like we need yet another configure test. Or maybe it's\nbetter to just cut and run on the check for in-range errno; although\nI put that in to begin with, I'm certainly starting to wonder if it's\nworth the trouble. Comments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Jan 2001 02:01:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: C++ interface build on FreeBSD 4.2 broken? " }, { "msg_contents": "Tatsuo Ishii writes:\n\n> c++ -pipe -O3 -mpentiumpro -Wall -fpic -DPIC -I/usr/local/ssl/include\n> -I../../../src/include -I../../../src/interfaces/libpq -c -o pgconnec\n> tion.o pgconnection.cc\n> In file included from ../../../src/include/postgres.h:40,\n> from pgconnection.h:41,\n> from pgconnection.cc:18:\n> ../../../src/include/c.h:997: conflicting types for `int sys_nerr'\n> /usr/include/stdio.h:224: previous declaration as `const int sys_nerr'\n> gmake[3]: *** [pgconnection.o] Error 1\n\nC++ apparently doesn't allow this, but C does. So you have to put #ifndef\n__cplusplus at the appropriate place in c.h.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 20 Jan 2001 16:59:14 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: C++ interface build on FreeBSD 4.2 broken?" 
}, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> ../../../src/include/c.h:997: conflicting types for `int sys_nerr'\n>> /usr/include/stdio.h:224: previous declaration as `const int sys_nerr'\n\n> C++ apparently doesn't allow this, but C does. So you have to put #ifndef\n> __cplusplus at the appropriate place in c.h.\n\nEr, what will you ifdef exactly, and what are the odds that it will fail\non some other platform? (On my machine, for example, sys_nerr is\ndefinitely NOT declared const by the system header files, and so adding\na const decoration to our declaration would fail.)\n\nI think our choices are to do a configure test or to stop using sys_nerr\naltogether. At the moment I'm kind of leaning towards the latter.\nI suspect machines whose strerror() is without a rangecheck are history,\nand even if they're not, it's unproven that we ever pass a bogus errno\nto strerror anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Jan 2001 11:36:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: C++ interface build on FreeBSD 4.2 broken? " }, { "msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <[email protected]> writes:\n> >> ../../../src/include/c.h:997: conflicting types for `int sys_nerr'\n> >> /usr/include/stdio.h:224: previous declaration as `const int sys_nerr'\n>\n> > C++ apparently doesn't allow this, but C does. So you have to put #ifndef\n> > __cplusplus at the appropriate place in c.h.\n>\n> Er, what will you ifdef exactly,\n\n+ #ifdef __cplusplus\n #ifdef HAVE_SYS_NERR\n extern int sys_nerr;\n #endif\n+ #endif\n\n> and what are the odds that it will fail on some other platform?\n\nI don't see how it would fail. At least it won't add more possible\nfailure cases.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 20 Jan 2001 19:14:23 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: C++ interface build on FreeBSD 4.2 broken? " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> Er, what will you ifdef exactly,\n\n> + #ifdef __cplusplus\n> #ifdef HAVE_SYS_NERR\n> extern int sys_nerr;\n> #endif\n> + #endif\n\n>> and what are the odds that it will fail on some other platform?\n\n> I don't see how it would fail. At least it won't add more possible\n> failure cases.\n\nIf that can't fail, why do we need to provide a declaration of sys_nerr\nat all?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Jan 2001 13:14:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: C++ interface build on FreeBSD 4.2 broken? " }, { "msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <[email protected]> writes:\n> > Tom Lane writes:\n> >> Er, what will you ifdef exactly,\n>\n> > + #ifdef __cplusplus\n\n#ifndef, I meant. I.e., only declare it when in C, since libpq++ does not\nuse it. libpq doesn't use it either, but it uses tons of strerror().\n\n> > #ifdef HAVE_SYS_NERR\n> > extern int sys_nerr;\n> > #endif\n> > + #endif\n>\n> >> and what are the odds that it will fail on some other platform?\n>\n> > I don't see how it would fail. 
At least it won't add more possible\n> > failure cases.\n>\n> If that can't fail, why do we need to provide a declaration of sys_nerr\n> at all?\n>\n> \t\t\tregards, tom lane\n>\n>\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 20 Jan 2001 19:57:00 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: C++ interface build on FreeBSD 4.2 broken? " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> libpq doesn't use it either, but it uses tons of strerror().\n\nAnd also there are quite a few places in the backend that use strerror()\ndirectly. This lack of consistency seems like another reason to forget\nabout testing errno against sys_nerr in elog() ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Jan 2001 14:18:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: C++ interface build on FreeBSD 4.2 broken? " }, { "msg_contents": "On Sat, Jan 20, 2001 at 02:18:55PM -0500, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n> > libpq doesn't use it either, but it uses tons of strerror().\n> \n> And also there are quite a few places in the backend that use strerror()\n> directly. This lack of consistency seems like another reason to forget\n> about testing errno against sys_nerr in elog() ...\n\nAll relevant standards discourage use of sys_nerr too. There\nwas a discussion on cygwin lists once ... *searching*\n\nin <http://cygwin.com/ml/cygwin/1999-11/msg00097.html>\n\nFrom: \"J. J. Farrell\" <jjf at bcs dot org dot uk>\n\n[ about using sys_nerr & sys_errlist ]\n\n> Nothing written in the last 10 years or so should be using these\n> anyway; strerror() was introduced in C89 and POSIX.1 to replace\n> them, as Mumit pointed out.\n\n...\n\n-- \nmarko\n\n", "msg_date": "Sat, 20 Jan 2001 23:43:09 +0200", "msg_from": "Marko Kreen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: C++ interface build on FreeBSD 4.2 broken?" }, { "msg_contents": "What I've done to solve the immediate C++ problem is to take the\ndeclaration of sys_nerr out of c.h entirely, and put it into the\ntwo C modules that actually need it. However, I'm still wondering\nwhether we should not drop the rangecheck on errno completely.\n\nOne interesting thing I discovered while wandering the web is that\non at least some flavors of Windows, there are valid negative values\nof errno --- which our code will not convert to a useful string,\nas it stands...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Jan 2001 20:06:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: C++ interface build on FreeBSD 4.2 broken? " }, { "msg_contents": "On Sat, Jan 20, 2001 at 08:06:51PM -0500, Tom Lane wrote:\n> What I've done to solve the immediate C++ problem is to take the\n> declaration of sys_nerr out of c.h entirely, and put it into the\n> two C modules that actually need it. However, I'm still wondering\n> whether we should not drop the rangecheck on errno completely.\n\nProbably not useful, but in our <errno.h>, sys_nerr is defined\n\n#if !defined(_ANSI_SOURCE) && !defined(_POSIX_C_SOURCE) && \\\n !defined(_XOPEN_SOURCE)\n\n\nP\n", "msg_date": "Sun, 21 Jan 2001 12:11:47 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: C++ interface build on FreeBSD 4.2 broken?" } ]
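The direction this thread converges on -- dropping the sys_nerr range check and leaning on strerror() -- can be sketched in portable C. This is only an illustration of the idea, not the routine PostgreSQL adopted. The fallback branch matters on C libraries that return NULL or an empty string for values they do not recognize (glibc instead returns an "Unknown error" text, so it never triggers there), and it also covers the negative errno values reported on some Windows flavors:

#include <stdio.h>
#include <string.h>

/* strerror() without any sys_nerr range check: trust the library,
 * but cope with implementations that return NULL or "" for values
 * they do not know about. */
static const char *
safe_strerror(int errnum)
{
    static char buf[48];
    const char *msg = strerror(errnum);

    if (msg == NULL || *msg == '\0')
    {
        snprintf(buf, sizeof(buf), "unrecognized error %d", errnum);
        return buf;
    }
    return msg;
}

int
main(void)
{
    printf("%s\n", safe_strerror(2));       /* ENOENT on most systems */
    printf("%s\n", safe_strerror(-12345));  /* unknown: text or number */
    return 0;
}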
[ { "msg_contents": "Hello!\n\nHere is a patch to make the current snapshot compile on Win32 (native, libpq\nand psql) again. Changes are:\n1) psql requires the includes of \"io.h\" and \"fcntl.h\" in command.c in order\nto make a call to open() work (io.h for _open(), fcntl.h for the O_xxx)\n2) PG_VERSION is no longer defined in version.h[.in], but in configure.in.\nSince we don't do configure on native win32, we need to put it in\nconfig.h.win32 :-(\n3) Added define of SYSCONFDIR to config.h.win32 - libpq won't compile\nwithout it. This functionality is *NOT* tested - it's just defined as \"\" for\nnow. May work, may not.\n4) DEF_PGPORT renamed to DEF_PGPORT_STR\n\nI have done the \"basic tests\" on it - it connects to a database, and I can\nrun queries. Haven't tested any of the fancier functions (yet).\n\nHowever, I stepped on a much bigger problem when fixing psql to work. It no\nlonger works when linked against the .DLL version of libpq (which the\nMakefile does for it). I have left it linked against this version anyway,\npending the comments I get on this mail :-)\nThe problem is that there are strings being allocated from libpq.dll using\nPQExpBuffers (for example, initPQExpBuffer() on line 92 of input.c). These\nare being allocated using the malloc function used by libpq.dll. This\nfunction *may* be different from the malloc function used by psql.exe - only\nthe resulting pointer must be valid. And with the default linking methods,\nit *WILL* be different. Later, psql.exe tries to free() this string, at\nwhich point it crashes because the free() function can't find the allocated\nblock (it's on the allocated blocks list used by the runtime lib of\nlibpq.dll).\n\nShouldn't the right thing to do be to have psql call termPQExpBuffer() on\nthe data instead? As it is now, gets_fromFile() will just return the pointer\nreceived from the PQExpBuffer.data (this may well be present at several\nplaces - this is the one I was bitten by so far). Isn't that kind of\n\"accessing the internals of the PQExpBuffer structure\" wrong? Instead,\nperhaps it shuold make a copy of the string, adn then termPQExpBuffer() it?\nIn that case, the string will have been allocated from within the same\nlibrary as the free() is called.\n\nI can get it to work just fine by doing this - changing from (around line\n100 of input.c):\n if (buffer.data[buffer.len - 1] == '\\n')\n {\n buffer.data[buffer.len - 1] = '\\0';\n return buffer.data;\n }\nto\n\t\tif (buffer.data[buffer.len - 1] == '\\n')\n\t\t{\n\t\t\tchar *tmps;\n\t\t\tbuffer.data[buffer.len - 1] = '\\0';\n\t\t\ttmps = strdup(buffer.data);\n\t\t\ttermPQExpBuffer(&buffer);\n\t\t\treturn tmps;\n\t\t}\n\nand the same a bit further down in the same function.\n\nBut, as I said above, this may be at more places in the code? Perhaps\nsomeone more familiar to it could comment on that?\n\n\nWhat do you think shuld be done about this? Personally, I go by the \"If you\nallocate a piece of memory using an interface, use the same interface to\nfree it\", but the question is how to make it work :-)\n\n\nAlso, AFAIK this only affects psql.exe, so the changes made to the libpq\nfiles by this patch are required no matter how the other issue is handled.\n\nRegards,\n Magnus\n\n\n <<pgsql-win32.patch>>", "msg_date": "Sat, 20 Jan 2001 14:28:45 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql on win32" }, { "msg_contents": "\nThanks. Applied.\n\n[ Charset ISO-8859-1 unsupported, converting... 
]\n> Hello!\n> \n> Here is a patch to make the current snapshot compile on Win32 (native, libpq\n> and psql) again. Changes are:\n> 1) psql requires the includes of \"io.h\" and \"fcntl.h\" in command.c in order\n> to make a call to open() work (io.h for _open(), fcntl.h for the O_xxx)\n> 2) PG_VERSION is no longer defined in version.h[.in], but in configure.in.\n> Since we don't do configure on native win32, we need to put it in\n> config.h.win32 :-(\n> 3) Added define of SYSCONFDIR to config.h.win32 - libpq won't compile\n> without it. This functionality is *NOT* tested - it's just defined as \"\" for\n> now. May work, may not.\n> 4) DEF_PGPORT renamed to DEF_PGPORT_STR\n> \n> I have done the \"basic tests\" on it - it connects to a database, and I can\n> run queries. Haven't tested any of the fancier functions (yet).\n> \n> However, I stepped on a much bigger problem when fixing psql to work. It no\n> longer works when linked against the .DLL version of libpq (which the\n> Makefile does for it). I have left it linked against this version anyway,\n> pending the comments I get on this mail :-)\n> The problem is that there are strings being allocated from libpq.dll using\n> PQExpBuffers (for example, initPQExpBuffer() on line 92 of input.c). These\n> are being allocated using the malloc function used by libpq.dll. This\n> function *may* be different from the malloc function used by psql.exe - only\n> the resulting pointer must be valid. And with the default linking methods,\n> it *WILL* be different. Later, psql.exe tries to free() this string, at\n> which point it crashes because the free() function can't find the allocated\n> block (it's on the allocated blocks list used by the runtime lib of\n> libpq.dll).\n> \n> Shouldn't the right thing to do be to have psql call termPQExpBuffer() on\n> the data instead? As it is now, gets_fromFile() will just return the pointer\n> received from the PQExpBuffer.data (this may well be present at several\n> places - this is the one I was bitten by so far). Isn't that kind of\n> \"accessing the internals of the PQExpBuffer structure\" wrong? Instead,\n> perhaps it shuold make a copy of the string, adn then termPQExpBuffer() it?\n> In that case, the string will have been allocated from within the same\n> library as the free() is called.\n> \n> I can get it to work just fine by doing this - changing from (around line\n> 100 of input.c):\n> if (buffer.data[buffer.len - 1] == '\\n')\n> {\n> buffer.data[buffer.len - 1] = '\\0';\n> return buffer.data;\n> }\n> to\n> \t\tif (buffer.data[buffer.len - 1] == '\\n')\n> \t\t{\n> \t\t\tchar *tmps;\n> \t\t\tbuffer.data[buffer.len - 1] = '\\0';\n> \t\t\ttmps = strdup(buffer.data);\n> \t\t\ttermPQExpBuffer(&buffer);\n> \t\t\treturn tmps;\n> \t\t}\n> \n> and the same a bit further down in the same function.\n> \n> But, as I said above, this may be at more places in the code? Perhaps\n> someone more familiar to it could comment on that?\n> \n> \n> What do you think shuld be done about this? Personally, I go by the \"If you\n> allocate a piece of memory using an interface, use the same interface to\n> free it\", but the question is how to make it work :-)\n> \n> \n> Also, AFAIK this only affects psql.exe, so the changes made to the libpq\n> files by this patch are required no matter how the other issue is handled.\n> \n> Regards,\n> Magnus\n> \n> \n> <<pgsql-win32.patch>> \n\n[ Attachment, skipping... 
]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Jan 2001 22:42:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql on win32" } ]
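The psql.exe crash described in this thread is the classic cross-runtime heap problem: a block obtained from malloc() inside libpq.dll cannot safely be handed to the free() linked into psql.exe, because each module may carry its own allocator state. The convention that avoids it -- whichever module allocates also frees -- in a standalone C sketch (the lib_* names are illustrative, not part of the libpq API):

#include <stdlib.h>
#include <string.h>

/* --- "library side": memory handed out here must come back here --- */

static char *
lib_copy_string(const char *s)
{
    char *copy = malloc(strlen(s) + 1);     /* library's own runtime */

    if (copy != NULL)
        strcpy(copy, s);
    return copy;
}

static void
lib_free_string(char *s)
{
    free(s);    /* released by the same runtime that allocated it */
}

/* --- "application side" --- */

int
main(void)
{
    char *line = lib_copy_string("select 1;");

    if (line != NULL)
        lib_free_string(line);  /* never a bare free() across the boundary */
    return 0;
}

termPQExpBuffer() plays the lib_free_string() role in the rewrite above: since the PQExpBuffer code is compiled into libpq, the buffer is released by the same runtime that grew it, while the strdup() copy belongs to psql's own heap and can be freed there.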
[ { "msg_contents": "I haven't tried everything to recover from this yet, but will quickly try to document the\ncrash before I lose track of what exactly went into it and what I did: Basically I deleted\na table and then ran vacuum verbose, with the net result that I cannot connect to this\ndatabase anymore with the error:\n\nfrank@kelis:/usr/local/httpd/htdocs > psql mpi\npsql: FATAL 1: Index 'pg_trigger_tgrelid_index' does not exist\n\nThis is, fortunately, not the production system but my development machine. I was going to\ngo live with this in a couple of week's time on beta3. Should I reconsider and move back\nto 7.03 (I'd hate to cuz I'll have rows bigger than 32K, potentially . . . )?\n\nThe vacuum went like this:\n\n------------------------------- begin vacuum -------------------------------\nmpi=# drop table wimis;\nDROP\nmpi=# vacuum verbose;\nNOTICE: --Relation pg_type--\nNOTICE: Pages 3: Changed 2, reaped 2, Empty 0, New 0; Tup 159: Vac 16, Keep/VTL 0/0,\nCrash 0, UnUsed 0, MinLen 106, MaxLen 109; Re-using: Free/Avail. Space 6296/156;\nEndEmpty/Avail. Pages 0/1. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_type_oid_index: Pages 2; Tuples 159: Deleted 16. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_type_typname_index: Pages 2; Tuples 159: Deleted 16. CPU 0.00s/0.00u\nsec.\nNOTICE: Rel pg_type: Pages: 3 --> 3; Tuple(s) moved: 1. CPU 0.01s/0.00u sec.\nNOTICE: Index pg_type_oid_index: Pages 2; Tuples 159: Deleted 1. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_type_typname_index: Pages 2; Tuples 159: Deleted 1. CPU 0.00s/0.00u sec.\nNOTICE: --Relation pg_attribute--\nNOTICE: Pages 16: Changed 9, reaped 8, Empty 0, New 0; Tup 1021: Vac 160, Keep/VTL 0/0,\nCrash 0, UnUsed 0, MinLen 98, MaxLen 98; Re-using: Free/Avail. Space 16480/16480;\nEndEmpty/Avail. Pages 0/8. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_attribute_relid_attnam_index: Pages 16; Tuples 1021: Deleted 160. CPU\n0.00s/0.01u sec.\nNOTICE: Index pg_attribute_relid_attnum_index: Pages 8; Tuples 1021: Deleted 160. CPU\n0.00s/0.00u sec.\nNOTICE: Rel pg_attribute: Pages: 16 --> 14; Tuple(s) moved: 43. CPU 0.01s/0.01u sec.\nNOTICE: Index pg_attribute_relid_attnam_index: Pages 16; Tuples 1021: Deleted 43. CPU\n0.00s/0.00u sec.\nNOTICE: Index pg_attribute_relid_attnum_index: Pages 8; Tuples 1021: Deleted 43. CPU\n0.00s/0.00u sec.\nNOTICE: --Relation pg_class--\nNOTICE: Pages 7: Changed 1, reaped 7, Empty 0, New 0; Tup 136: Vac 257, Keep/VTL 0/0,\nCrash 0, UnUsed 0, MinLen 115, MaxLen 160; Re-using: Free/Avail. Space 38880/31944;\nEndEmpty/Avail. Pages 0/6. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_class_oid_index: Pages 2; Tuples 136: Deleted 257. CPU 0.00s/0.01u sec.\nNOTICE: Index pg_class_relname_index: Pages 6; Tuples 136: Deleted 257. CPU 0.00s/0.00u\nsec.\nNOTICE: Rel pg_class: Pages: 7 --> 3; Tuple(s) moved: 76. CPU 0.01s/0.01u sec.\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!# \\q\n------------------------------- end vacuum -------------------------------\n\nThe log says (I'm running the backend with -d 2):\n\n------------------------------- begin log -------------------------------\nDEBUG: query: vacuum verbose;\nDEBUG: ProcessUtility: vacuum verbose;\nNOTICE: --Relation pg_type--\nNOTICE: Pages 3: Changed 2, reaped 2, Empty 0, New 0; Tup 159: Vac 16, Keep/VTL 0/0,\nCrash 0, UnUsed 0, MinLen 106, MaxLen 109; Re-using: Free/Avail. Space 6296/156;\nEndEmpty/Avail. 
Pages 0/1. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_type_oid_index: Pages 2; Tuples 159: Deleted 16. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_type_typname_index: Pages 2; Tuples 159: Deleted 16. CPU 0.00s/0.00u\nsec.\nNOTICE: Rel pg_type: Pages: 3 --> 3; Tuple(s) moved: 1. CPU 0.01s/0.00u sec.\nNOTICE: Index pg_type_oid_index: Pages 2; Tuples 159: Deleted 1. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_type_typname_index: Pages 2; Tuples 159: Deleted 1. CPU 0.00s/0.00u sec.\nNOTICE: --Relation pg_attribute--\nNOTICE: Pages 16: Changed 9, reaped 8, Empty 0, New 0; Tup 1021: Vac 160, Keep/VTL 0/0,\nCrash 0, UnUsed 0, MinLen 98, MaxLen 98; Re-using: Free/Avail. Space 16480/16480;\nEndEmpty/Avail. Pages 0/8. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_attribute_relid_attnam_index: Pages 16; Tuples 1021: Deleted 160. CPU\n0.00s/0.01u sec.\nNOTICE: Index pg_attribute_relid_attnum_index: Pages 8; Tuples 1021: Deleted 160. CPU\n0.00s/0.00u sec.\nNOTICE: Rel pg_attribute: Pages: 16 --> 14; Tuple(s) moved: 43. CPU 0.01s/0.01u sec.\nNOTICE: Index pg_attribute_relid_attnam_index: Pages 16; Tuples 1021: Deleted 43. CPU\n0.00s/0.00u sec.\nNOTICE: Index pg_attribute_relid_attnum_index: Pages 8; Tuples 1021: Deleted 43. CPU\n0.00s/0.00u sec.\nNOTICE: --Relation pg_class--\nNOTICE: Pages 7: Changed 1, reaped 7, Empty 0, New 0; Tup 136: Vac 257, Keep/VTL 0/0,\nCrash 0, UnUsed 0, MinLen 115, MaxLen 160; Re-using: Free/Avail. Space 38880/31944;\nEndEmpty/Avail. Pages 0/6. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_class_oid_index: Pages 2; Tuples 136: Deleted 257. CPU 0.00s/0.01u sec.\nNOTICE: Index pg_class_relname_index: Pages 6; Tuples 136: Deleted 257. CPU 0.00s/0.00u\nsec.\nNOTICE: Rel pg_class: Pages: 7 --> 3; Tuple(s) moved: 76. CPU 0.01s/0.01u sec.\n/usr/local/pgsql/bin/postmaster: reaping dead processes...\n/usr/local/pgsql/bin/postmaster: CleanupProc: pid 29832 exited with status 11\nServer process (pid 29832) exited with status 11 at Sat Jan 20 21:46:22 2001\nTerminating any active server processes...\n/usr/local/pgsql/bin/postmaster: CleanupProc: sending SIGUSR1 to process 29335\n/usr/local/pgsql/bin/postmaster: CleanupProc: sending SIGUSR1 to process 29330\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died abnormally and\npossibly corrupted shared memory.\n I have rolled back the current transaction and am going to terminate your database\nsystem connection and exit.\n Please reconnect to the database system and repeat your query.\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading 5\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading 5\nThe Data Base System is in recovery mode\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling writing 5\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died abnormally and\npossibly corrupted shared memory.\n I have rolled back the current transaction and am going to terminate your database\nsystem connection and exit.\n Please reconnect to the database system and repeat your query.\n/usr/local/pgsql/bin/postmaster: reaping dead processes...\n/usr/local/pgsql/bin/postmaster: CleanupProc: pid 29335 exited with status 256\n/usr/local/pgsql/bin/postmaster: CleanupProc: sending SIGUSR1 to process 29330\n/usr/local/pgsql/bin/postmaster: CleanupProc: pid 29330 exited with status 256\nServer processes were terminated at Sat Jan 20 21:46:22 2001\nReinitializing shared memory and semaphores\ninvoking IpcMemoryCreate(size=17334272)\n/usr/local/pgsql/bin/postmaster: 
reaping dead processes...\nDEBUG: starting up\nDEBUG: database system was interrupted at 2001-01-20 21:43:20\nDEBUG: CheckPoint record at (0, 9309416)\nDEBUG: Redo record at (0, 9309416); Undo record at (0, 0); Shutdown FALSE\nDEBUG: NextTransactionId: 36262; NextOid: 38319\nDEBUG: database system was not properly shut down; automatic recovery in progress...\nDEBUG: redo starts at (0, 9309472)\nDEBUG: redo done at (0, 9391148)\nDEBUG: database system is in production state\n------------------------------- end log -------------------------------\n\n- Frank\n", "msg_date": "Sat, 20 Jan 2001 20:59:12 +0100", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "beta3 vacuum crash" } ]
[ { "msg_contents": "I haven't tried everything to recover from this yet, but will quickly try to document the\ncrash before I lose track of what exactly went into it and what I did: Basically I deleted\na table and then ran vacuum verbose, with the net result that I cannot connect to this\ndatabase anymore with the error:\n\nfrank@kelis:/usr/local/httpd/htdocs > psql mpi\npsql: FATAL 1: Index 'pg_trigger_tgrelid_index' does not exist\n\nThis is, fortunately, not the production system but my development machine. I was going to\ngo live with this in a couple of week's time on beta3. Should I reconsider and move back\nto 7.03 (I'd hate to cuz I'll have rows bigger than 32K, potentially . . . )?\n\nThe vacuum went like this:\n\n------------------------------- begin vacuum -------------------------------\nmpi=# drop table wimis;\nDROP\nmpi=# vacuum verbose;\nNOTICE: --Relation pg_type--\nNOTICE: Pages 3: Changed 2, reaped 2, Empty 0, New 0; Tup 159: Vac 16, Keep/VTL 0/0,\nCrash 0, UnUsed 0, MinLen 106, MaxLen 109; Re-using: Free/Avail. Space 6296/156;\nEndEmpty/Avail. Pages 0/1. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_type_oid_index: Pages 2; Tuples 159: Deleted 16. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_type_typname_index: Pages 2; Tuples 159: Deleted 16. CPU 0.00s/0.00u\nsec.\nNOTICE: Rel pg_type: Pages: 3 --> 3; Tuple(s) moved: 1. CPU 0.01s/0.00u sec.\nNOTICE: Index pg_type_oid_index: Pages 2; Tuples 159: Deleted 1. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_type_typname_index: Pages 2; Tuples 159: Deleted 1. CPU 0.00s/0.00u sec.\nNOTICE: --Relation pg_attribute--\nNOTICE: Pages 16: Changed 9, reaped 8, Empty 0, New 0; Tup 1021: Vac 160, Keep/VTL 0/0,\nCrash 0, UnUsed 0, MinLen 98, MaxLen 98; Re-using: Free/Avail. Space 16480/16480;\nEndEmpty/Avail. Pages 0/8. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_attribute_relid_attnam_index: Pages 16; Tuples 1021: Deleted 160. CPU\n0.00s/0.01u sec.\nNOTICE: Index pg_attribute_relid_attnum_index: Pages 8; Tuples 1021: Deleted 160. CPU\n0.00s/0.00u sec.\nNOTICE: Rel pg_attribute: Pages: 16 --> 14; Tuple(s) moved: 43. CPU 0.01s/0.01u sec.\nNOTICE: Index pg_attribute_relid_attnam_index: Pages 16; Tuples 1021: Deleted 43. CPU\n0.00s/0.00u sec.\nNOTICE: Index pg_attribute_relid_attnum_index: Pages 8; Tuples 1021: Deleted 43. CPU\n0.00s/0.00u sec.\nNOTICE: --Relation pg_class--\nNOTICE: Pages 7: Changed 1, reaped 7, Empty 0, New 0; Tup 136: Vac 257, Keep/VTL 0/0,\nCrash 0, UnUsed 0, MinLen 115, MaxLen 160; Re-using: Free/Avail. Space 38880/31944;\nEndEmpty/Avail. Pages 0/6. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_class_oid_index: Pages 2; Tuples 136: Deleted 257. CPU 0.00s/0.01u sec.\nNOTICE: Index pg_class_relname_index: Pages 6; Tuples 136: Deleted 257. CPU 0.00s/0.00u\nsec.\nNOTICE: Rel pg_class: Pages: 7 --> 3; Tuple(s) moved: 76. CPU 0.01s/0.01u sec.\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!# \\q\n------------------------------- end vacuum -------------------------------\n\nThe log says (I'm running the backend with -d 2):\n\n------------------------------- begin log -------------------------------\nDEBUG: query: vacuum verbose;\nDEBUG: ProcessUtility: vacuum verbose;\nNOTICE: --Relation pg_type--\nNOTICE: Pages 3: Changed 2, reaped 2, Empty 0, New 0; Tup 159: Vac 16, Keep/VTL 0/0,\nCrash 0, UnUsed 0, MinLen 106, MaxLen 109; Re-using: Free/Avail. Space 6296/156;\nEndEmpty/Avail. 
Pages 0/1. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_type_oid_index: Pages 2; Tuples 159: Deleted 16. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_type_typname_index: Pages 2; Tuples 159: Deleted 16. CPU 0.00s/0.00u\nsec.\nNOTICE: Rel pg_type: Pages: 3 --> 3; Tuple(s) moved: 1. CPU 0.01s/0.00u sec.\nNOTICE: Index pg_type_oid_index: Pages 2; Tuples 159: Deleted 1. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_type_typname_index: Pages 2; Tuples 159: Deleted 1. CPU 0.00s/0.00u sec.\nNOTICE: --Relation pg_attribute--\nNOTICE: Pages 16: Changed 9, reaped 8, Empty 0, New 0; Tup 1021: Vac 160, Keep/VTL 0/0,\nCrash 0, UnUsed 0, MinLen 98, MaxLen 98; Re-using: Free/Avail. Space 16480/16480;\nEndEmpty/Avail. Pages 0/8. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_attribute_relid_attnam_index: Pages 16; Tuples 1021: Deleted 160. CPU\n0.00s/0.01u sec.\nNOTICE: Index pg_attribute_relid_attnum_index: Pages 8; Tuples 1021: Deleted 160. CPU\n0.00s/0.00u sec.\nNOTICE: Rel pg_attribute: Pages: 16 --> 14; Tuple(s) moved: 43. CPU 0.01s/0.01u sec.\nNOTICE: Index pg_attribute_relid_attnam_index: Pages 16; Tuples 1021: Deleted 43. CPU\n0.00s/0.00u sec.\nNOTICE: Index pg_attribute_relid_attnum_index: Pages 8; Tuples 1021: Deleted 43. CPU\n0.00s/0.00u sec.\nNOTICE: --Relation pg_class--\nNOTICE: Pages 7: Changed 1, reaped 7, Empty 0, New 0; Tup 136: Vac 257, Keep/VTL 0/0,\nCrash 0, UnUsed 0, MinLen 115, MaxLen 160; Re-using: Free/Avail. Space 38880/31944;\nEndEmpty/Avail. Pages 0/6. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_class_oid_index: Pages 2; Tuples 136: Deleted 257. CPU 0.00s/0.01u sec.\nNOTICE: Index pg_class_relname_index: Pages 6; Tuples 136: Deleted 257. CPU 0.00s/0.00u\nsec.\nNOTICE: Rel pg_class: Pages: 7 --> 3; Tuple(s) moved: 76. CPU 0.01s/0.01u sec.\n/usr/local/pgsql/bin/postmaster: reaping dead processes...\n/usr/local/pgsql/bin/postmaster: CleanupProc: pid 29832 exited with status 11\nServer process (pid 29832) exited with status 11 at Sat Jan 20 21:46:22 2001\nTerminating any active server processes...\n/usr/local/pgsql/bin/postmaster: CleanupProc: sending SIGUSR1 to process 29335\n/usr/local/pgsql/bin/postmaster: CleanupProc: sending SIGUSR1 to process 29330\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died abnormally and\npossibly corrupted shared memory.\n I have rolled back the current transaction and am going to terminate your database\nsystem connection and exit.\n Please reconnect to the database system and repeat your query.\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading 5\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading 5\nThe Data Base System is in recovery mode\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling writing 5\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died abnormally and\npossibly corrupted shared memory.\n I have rolled back the current transaction and am going to terminate your database\nsystem connection and exit.\n Please reconnect to the database system and repeat your query.\n/usr/local/pgsql/bin/postmaster: reaping dead processes...\n/usr/local/pgsql/bin/postmaster: CleanupProc: pid 29335 exited with status 256\n/usr/local/pgsql/bin/postmaster: CleanupProc: sending SIGUSR1 to process 29330\n/usr/local/pgsql/bin/postmaster: CleanupProc: pid 29330 exited with status 256\nServer processes were terminated at Sat Jan 20 21:46:22 2001\nReinitializing shared memory and semaphores\ninvoking IpcMemoryCreate(size=17334272)\n/usr/local/pgsql/bin/postmaster: 
reaping dead processes...\nDEBUG: starting up\nDEBUG: database system was interrupted at 2001-01-20 21:43:20\nDEBUG: CheckPoint record at (0, 9309416)\nDEBUG: Redo record at (0, 9309416); Undo record at (0, 0); Shutdown FALSE\nDEBUG: NextTransactionId: 36262; NextOid: 38319\nDEBUG: database system was not properly shut down; automatic recovery in progress...\nDEBUG: redo starts at (0, 9309472)\nDEBUG: redo done at (0, 9391148)\nDEBUG: database system is in production state\n------------------------------- end log -------------------------------\n\n- Frank\n\nBTW: I posted this earlier today from a different account, which is not\nsubscribed, and the message did not come through. I am assuming now that\nnon-subscribers can't post. If this is incorrect, and it eventually\ncomes through twice, I apologize and promise not to do it again!\n", "msg_date": "Sat, 20 Jan 2001 23:07:07 +0100", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "beta3 vacuum crash" }, { "msg_contents": "Frank Joerdens <[email protected]> writes:\n> I haven't tried everything to recover from this yet, but will quickly\n> try to document the crash before I lose track of what exactly went\n> into it and what I did: Basically I deleted a table and then ran\n> vacuum verbose, with the net result that I cannot connect to this\n> database anymore with the error:\n\nAny chance of a backtrace from the core dump file? The log only tells\nus that the vacuum process crashed, which isn't much to go on ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Jan 2001 17:35:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 vacuum crash " }, { "msg_contents": "On Sat, Jan 20, 2001 at 05:35:41PM -0500, Tom Lane wrote:\n> Frank Joerdens <[email protected]> writes:\n> > I haven't tried everything to recover from this yet, but will quickly\n> > try to document the crash before I lose track of what exactly went\n> > into it and what I did: Basically I deleted a table and then ran\n> > vacuum verbose, with the net result that I cannot connect to this\n> > database anymore with the error:\n> \n> Any chance of a backtrace from the core dump file? The log only tells\n> us that the vacuum process crashed, which isn't much to go on ...\n\nSilly me, this was in fact a December 13 snapshot. I have 3 machines\nwith an identical setup, two of which I already had moved to beta3. On\nthose, the error does not occur. On the December 13 version, I can\nreproduce the error, which obviously has been fixed.\n\nSorry about the confusion.\n\n- Frank\n", "msg_date": "Sun, 21 Jan 2001 18:46:50 +0100", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "Re: beta3 vacuum crash" } ]
[ { "msg_contents": "Hello, I didn't know pgsql-sources close,\nso I wrote this code just as example of idea.\nCan somebody review and make patch for pgsql?\n(if this idea is good, of cource).\n\nlike-optimization is working only with ASCII, but it is simple to fix.\nThis programm makes greater string(I tested with KOI8-R):\n\n-- \nBye\nJuriy Goloveshkin", "msg_date": "Sun, 21 Jan 2001 01:57:11 +0300", "msg_from": "Juriy Goloveshkin <[email protected]>", "msg_from_op": true, "msg_subject": "like and optimization" }, { "msg_contents": "Juriy Goloveshkin <[email protected]> writes:\n> Hello, I didn't know pgsql-sources close,\n> so I wrote this code just as example of idea.\n> Can somebody review and make patch for pgsql?\n\nAFAICT this only deals with the issue of single-byte characters that\nsort in an order different from their numeric order. The existing\nmake_greater_string() code already deals with that case. Where it\nfalls down is cases where sorting is context-dependent (multi-pass\nsort rules, digraphs, things like that). But I don't see anything\nhere that would make such cases work.\n\nIf you're trying to tell us that the 7.0.* code works correctly for\nKOI8-R locale, we'd be glad to re-enable LIKE optimization for that\nlocale ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Jan 2001 19:22:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: like and optimization " }, { "msg_contents": "On Sun, 21 Jan 2001 00:25:17 +0000 (UTC), Tom Lane <[email protected]> wrote:\n>Juriy Goloveshkin <[email protected]> writes:\n>> Hello, I didn't know pgsql-sources close,\n>> so I wrote this code just as example of idea.\n>> Can somebody review and make patch for pgsql?\n>\n>AFAICT this only deals with the issue of single-byte characters that\n>sort in an order different from their numeric order. The existing\n>make_greater_string() code already deals with that case. Where it\n>falls down is cases where sorting is context-dependent (multi-pass\n>sort rules, digraphs, things like that). But I don't see anything\n>here that would make such cases work.\n>\n>If you're trying to tell us that the 7.0.* code works correctly for\n>KOI8-R locale, we'd be glad to re-enable LIKE optimization for that\n>locale ...\n\nHello,\n\nI have no knowledge of postgres internals at all (yet !), and I'm not\nquite sure what this thread is exactly about.\n\nBut if anybody thinks that selects with LIKE on indexed columns with\nsingle-byte non-ASCII characters are working OK: they are not !! See my\nposting and following thread \"7.0.3 reproduceable serious select error\"\nfrom a couple of days ago.\n\nI made a reproduceable example of things going wrong with a \"en_US\"\nlocale which is the widely-used (single-byte) ISO-8859-1 Latin 1 charset.\n\nPlease excuse me if this has nothing to do with what you are talking\nabout. I'm just very eager to get rid of this (for our application)\nextremely nasty bug !\n\n\tfriendly greetings,\n\tRob van Nieuwkerk\n", "msg_date": "Sun, 21 Jan 2001 01:19:40 +0000 (UTC)", "msg_from": "[email protected] (Rob van Nieuwkerk)", "msg_from_op": false, "msg_subject": "Re: like and optimization" }, { "msg_contents": "[email protected] (Rob van Nieuwkerk) writes:\n> But if anybody thinks that selects with LIKE on indexed columns with\n> single-byte non-ASCII characters are working OK: they are not !! See my\n> posting and following thread \"7.0.3 reproduceable serious select error\"\n> from a couple of days ago.\n\nYes, we know :-(. 
That's why that optimization is currently disabled\nfor non-ASCII locales in 7.1. Juriy appears to be saying that it does\nwork OK in KOI8-R locale.\n\n> I made a reproduceable example of things going wrong with a \"en_US\"\n> locale which is the widely-used (single-byte) ISO-8859-1 Latin 1 charset.\n\nen_US uses multi-pass collation rules. It's those collation rules, not\nthe charset per se, that causes the problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Jan 2001 21:52:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: like and optimization " } ]
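The optimization this thread is about converts the fixed prefix of a LIKE pattern into an index range: col >= prefix AND col < successor(prefix). A standalone C sketch of the successor step in raw byte order (a simplified stand-in for make_greater_string(), not the backend routine itself):

#include <stdio.h>
#include <string.h>

/* Turn prefix into a string just greater, in byte order, than every
 * string starting with it, or return 0 if that is impossible (all
 * bytes already 0xFF).  The real routine also has to worry about
 * things like multibyte encodings. */
static int
greater_string(char *prefix)
{
    size_t len = strlen(prefix);

    while (len > 0)
    {
        if ((unsigned char) prefix[len - 1] < 0xFF)
        {
            prefix[len - 1]++;
            prefix[len] = '\0';
            return 1;
        }
        len--;      /* last byte maxed out: truncate and retry */
    }
    return 0;
}

int
main(void)
{
    char bound[16];

    strcpy(bound, "abc");
    if (greater_string(bound))
        printf("col >= 'abc' AND col < '%s'\n", bound);   /* abd */
    return 0;
}

The sketch also makes the locale problem visible: incrementing bytes yields a valid upper bound only when strcoll() ordering agrees with byte ordering. That holds in C locale, and per this thread apparently in KOI8-R, but not under multi-pass collation rules such as en_US -- which is why the optimization stays disabled there.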
[ { "msg_contents": "[ Charset KOI8-R unsupported, converting... ]\n> On Saturday 20 January 2001 10:05, you wrote:\n> > I just wanted to confirm that this patch was applied.\n> \n> Yes, it is. But the following patch is not applied. But I sure that it is \n> neccessary, otherwise we will get really strange errors (see discussion in \n> the thread).\n> \n> http://www.postgresql.org/mhonarc/pgsql-patches/2000-11/msg00013.html\n\nCan people comment on the following patch that Dennis says is needed?\nIt prevents BLOB operations outside transactions. Dennis, can you\nexplain why BLOB operations have to be done inside transactions?\n\n---------------------------------------------------------------------------\nHello,\n\nhere is the patch attached which do check in each BLOB operation, if we are \nin transaction, and raise an error otherwise. This will prevent such\nmistakes.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n<!-- MHonArc v2.4.7 -->\n<!--X-Subject: Patch to check whether we are in TX when to lo_* -->\n<!--X-From-R13: Rravf Brepuvar <qlcNcrepuvar.pbz> -->\n<!--X-Date: Fri, 3 Nov 2000 11:59:39 &#45;0500 (EST)(envelope&#45;from [email protected]) -->\n<!--X-Message-Id: [email protected] -->\n<!--X-Content-Type: multipart/mixed -->\n<!--X-Head-End-->\n<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML//EN\">\n<HTML> \n<HEAD> \n<META NAME=\"robots\" CONTENT=\"all\">\n<TITLE>Patch to check whether we are in TX when to lo_*</TITLE> \n</HEAD>\n<BODY BGCOLOR=\"#FFFDEC\">\n<CENTER><A HREF=\"http://ads.pgsql.com/cgi-bin/redirect.cgi\"><IMG SRC=\"http://ads.pgsql.com/cgi-bin/display_image.cgi\" BORDER=0 HEIGHT=60 WIDTH=468></A></CENTER>\n<P><HR WIDTH=40% SIZE=3 NOSHADE><P>\n<!--X-Body-Begin-->\n<!--X-User-Header-->\n<!--X-User-Header-End-->\n<!--X-TopPNI-->\n<p>\n\n<!--X-TopPNI-End-->\n<!--X-MsgBody-->\n<!--X-Subject-Header-Begin-->\n<h2>Patch to check whether we are in TX when to lo_*</h2>\n<HR SIZE=3 WIDTH=40% NOSHADE>\n<!--X-Subject-Header-End-->\n<!--X-Head-of-Message-->\n<ul>\n<li><strong>From</strong>: <strong>Denis Perchine <<A HREF=\"mailto:[email protected]\">[email protected]</A>></strong></li>\n<li><strong>To</strong>: <strong><A HREF=\"mailto:[email protected]\">[email protected]</A></strong></li>\n<li><strong>Subject</strong>: <strong>Patch to check whether we are in TX when to lo_*</strong></li>\n<li>Date: Fri, 3 Nov 2000 21:59:47 +0600</li>\n</ul>\n<!--X-Head-of-Message-End-->\n<!--X-Head-Body-Sep-Begin-->\n<hr>\n<!--X-Head-Body-Sep-End-->\n<!--X-Body-of-Message-->\n<PRE>\nHello,\n\nhere is the patch attached which do check in each BLOB operation, if we are \nin transaction, and raise an error otherwise. 
This will prevent such mistakes.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: <A HREF=\"http://www.perchine.com/dyp/\">http://www.perchine.com/dyp/</A>\nFidoNet: 2:5000/120.5\n----------------------------------\n</PRE>\n<PRE>\nIndex: inv_api.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/storage/large_object/inv_api.c,v\nretrieving revision 1.80\ndiff -u -r1.80 inv_api.c\n--- inv_api.c\t2000/11/02 23:52:06\t1.80\n+++ inv_api.c\t2000/11/03 16:57:57\n@@ -64,6 +64,9 @@\n \tOid\t\t\tfile_oid;\n \tLargeObjectDesc *retval;\n \n+\tif (!IsTransactionBlock())\n+\t\telog(ERROR, \"inv_create: Not in transaction. BLOBs should be used inside transaction.\");\n+\n \t/*\n \t * Allocate an OID to be the LO's identifier.\n \t */\n@@ -117,6 +120,9 @@\n {\n \tLargeObjectDesc *retval;\n \n+\tif (!IsTransactionBlock())\n+\t\telog(ERROR, \"inv_open: Not in transaction. BLOBs should be used inside transaction.\");\n+\n \tif (! LargeObjectExists(lobjId))\n \t\telog(ERROR, \"inv_open: large object %u not found\", lobjId);\n \t\n@@ -145,6 +151,9 @@\n void\n inv_close(LargeObjectDesc *obj_desc)\n {\n+\tif (!IsTransactionBlock())\n+\t\telog(ERROR, \"inv_close: Not in transaction. BLOBs should be used inside transaction.\");\n+\n \tAssert(PointerIsValid(obj_desc));\n \n \tif (obj_desc->flags & IFS_WRLOCK)\n@@ -164,6 +173,9 @@\n int\n inv_drop(Oid lobjId)\n {\n+\tif (!IsTransactionBlock())\n+\t\telog(ERROR, \"inv_drop: Not in transaction. BLOBs should be used inside transaction.\");\n+\n \tLargeObjectDrop(lobjId);\n \n \t/*\n@@ -248,6 +260,9 @@\n int\n inv_seek(LargeObjectDesc *obj_desc, int offset, int whence)\n {\n+\tif (!IsTransactionBlock())\n+\t\telog(ERROR, \"inv_seek: Not in transaction. BLOBs should be used inside transaction.\");\n+\n \tAssert(PointerIsValid(obj_desc));\n \n \tswitch (whence)\n@@ -280,6 +295,9 @@\n int\n inv_tell(LargeObjectDesc *obj_desc)\n {\n+\tif (!IsTransactionBlock())\n+\t\telog(ERROR, \"inv_tell: Not in transaction. BLOBs should be used inside transaction.\");\n+\n \tAssert(PointerIsValid(obj_desc));\n \n \treturn obj_desc->offset;\n@@ -303,6 +321,9 @@\n \tbytea\t\t *datafield;\n \tbool\t\t\tpfreeit;\n \n+\tif (!IsTransactionBlock())\n+\t\telog(ERROR, \"inv_read: Not in transaction. BLOBs should be used inside transaction.\");\n+\n \tAssert(PointerIsValid(obj_desc));\n \tAssert(buf != NULL);\n \n@@ -414,6 +435,9 @@\n \tchar\t\t\treplace[Natts_pg_largeobject];\n \tbool\t\t\twrite_indices;\n \tRelation\t\tidescs[Num_pg_largeobject_indices];\n+\n+\tif (!IsTransactionBlock())\n+\t\telog(ERROR, \"inv_write: Not in transaction. 
BLOBs should be used inside transaction.\");\n \n \tAssert(PointerIsValid(obj_desc));\n \tAssert(buf != NULL);\n</PRE>\n\n<!--X-Body-of-Message-End-->\n<!--X-MsgBody-End-->\n<!--X-Follow-Ups-->\n<hr>\n<!--X-Follow-Ups-End-->\n<!--X-References-->\n<!--X-References-End-->\n<!--X-BotPNI-->\n<ul>\n<li>Prev by Date:\n<strong><a href=\"msg00012.html\">Re: Small fix for inv_getsize</a></strong>\n</li>\n<li>Next by Date:\n<strong><a href=\"msg00014.html\">Re: Patch to check whether we are in TX when to lo_*</a></strong>\n</li>\n<li>Prev by thread:\n<strong><a href=\"msg00018.html\">Re: Inherited column patches</a></strong>\n</li>\n<li>Next by thread:\n<strong><a href=\"msg00014.html\">Re: Patch to check whether we are in TX when to lo_*</a></strong>\n</li>\n<li>Index(es):\n<ul>\n<li><a href=\"index.html#00013\"><strong>Date</strong></a></li>\n<li><a href=\"threads.html#00013\"><strong>Thread</strong></a></li>\n</ul>\n</li>\n</ul>\n\n<!--X-BotPNI-End-->\n<!--X-User-Footer-->\n<strong>\n<a href=\"http://www.postgresql.org/mhonarc/\">Home</a> |\n<a href=\"index.html\">Main Index</a> |\n<a href=\"threads.html\">Thread Index</a>\n</strong>\n<!--X-User-Footer-End-->\n</body>\n</html>", "msg_date": "Sat, 20 Jan 2001 22:51:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [PATCHES] Patch to support transactions with BLOBs\n\tfor current CVS" }, { "msg_contents": "Sorry, here is a clean version of the patch.\n\n> > http://www.postgresql.org/mhonarc/pgsql-patches/2000-11/msg00013.html\n> \n> Can people comment on the following patch that Dennis says is needed?\n> It prevents BLOB operations outside transactions. Dennis, can you\n> explain why BLOB operations have to be done inside transactions?\n> \n> ---------------------------------------------------------------------------\n> Hello,\n> \n> here is the patch attached which do check in each BLOB operation, if we are \n> in transaction, and raise an error otherwise. This will prevent such\n> mistakes.\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: inv_api.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/storage/large_object/inv_api.c,v\nretrieving revision 1.80\ndiff -u -r1.80 inv_api.c\n--- inv_api.c\t2000/11/02 23:52:06\t1.80\n+++ inv_api.c\t2000/11/03 16:57:57\n@@ -64,6 +64,9 @@\n \tOid\t\t\tfile_oid;\n \tLargeObjectDesc *retval;\n \n+\tif (!IsTransactionBlock())\n+\t\telog(ERROR, \"inv_create: Not in transaction. BLOBs should be used inside transaction.\");\n+\n \t/*\n \t * Allocate an OID to be the LO's identifier.\n \t */\n@@ -117,6 +120,9 @@\n {\n \tLargeObjectDesc *retval;\n \n+\tif (!IsTransactionBlock())\n+\t\telog(ERROR, \"inv_open: Not in transaction. BLOBs should be used inside transaction.\");\n+\n \tif (! LargeObjectExists(lobjId))\n \t\telog(ERROR, \"inv_open: large object %u not found\", lobjId);\n \t\n@@ -145,6 +151,9 @@\n void\n inv_close(LargeObjectDesc *obj_desc)\n {\n+\tif (!IsTransactionBlock())\n+\t\telog(ERROR, \"inv_close: Not in transaction. BLOBs should be used inside transaction.\");\n+\n \tAssert(PointerIsValid(obj_desc));\n \n \tif (obj_desc->flags & IFS_WRLOCK)\n@@ -164,6 +173,9 @@\n int\n inv_drop(Oid lobjId)\n {\n+\tif (!IsTransactionBlock())\n+\t\telog(ERROR, \"inv_drop: Not in transaction. 
BLOBs should be used inside transaction.\");\n+\n \tLargeObjectDrop(lobjId);\n \n \t/*\n@@ -248,6 +260,9 @@\n int\n inv_seek(LargeObjectDesc *obj_desc, int offset, int whence)\n {\n+\tif (!IsTransactionBlock())\n+\t\telog(ERROR, \"inv_seek: Not in transaction. BLOBs should be used inside transaction.\");\n+\n \tAssert(PointerIsValid(obj_desc));\n \n \tswitch (whence)\n@@ -280,6 +295,9 @@\n int\n inv_tell(LargeObjectDesc *obj_desc)\n {\n+\tif (!IsTransactionBlock())\n+\t\telog(ERROR, \"inv_tell: Not in transaction. BLOBs should be used inside transaction.\");\n+\n \tAssert(PointerIsValid(obj_desc));\n \n \treturn obj_desc->offset;\n@@ -303,6 +321,9 @@\n \tbytea\t\t *datafield;\n \tbool\t\t\tpfreeit;\n \n+\tif (!IsTransactionBlock())\n+\t\telog(ERROR, \"inv_read: Not in transaction. BLOBs should be used inside transaction.\");\n+\n \tAssert(PointerIsValid(obj_desc));\n \tAssert(buf != NULL);\n \n@@ -414,6 +435,9 @@\n \tchar\t\t\treplace[Natts_pg_largeobject];\n \tbool\t\t\twrite_indices;\n \tRelation\t\tidescs[Num_pg_largeobject_indices];\n+\n+\tif (!IsTransactionBlock())\n+\t\telog(ERROR, \"inv_write: Not in transaction. BLOBs should be used inside transaction.\");\n \n \tAssert(PointerIsValid(obj_desc));\n \tAssert(buf != NULL);", "msg_date": "Sat, 20 Jan 2001 22:58:40 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [PATCHES] Patch to support transactions with BLOBs\n\tfor current CVS" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Can people comment on the following patch that Dennis says is needed?\n\nI object strongly. As given, this would break lo_creat, lo_unlink,\nlo_import, and lo_export --- none of which need to be in a transaction\nblock --- not to mention possibly causing gratuitous failures during\nlo_commit.\n\nI'm not convinced that we need such a check at all; I don't see anything\nespecially wrong with the existing behavior. But if we do want it, this\nis the wrong abstraction level. be-fsstubs.c is the place to do it,\nand only in the routines that take or return an open-LO descriptor.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Jan 2001 23:39:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] Patch to support transactions with BLOBs for\n\tcurrent CVS" }, { "msg_contents": "> > On Saturday 20 January 2001 10:05, you wrote:\n> > > I just wanted to confirm that this patch was applied.\n> >\n> > Yes, it is. But the following patch is not applied. But I sure that it is\n> > neccessary, otherwise we will get really strange errors (see discussion\n> > in the thread).\n> >\n> > http://www.postgresql.org/mhonarc/pgsql-patches/2000-11/msg00013.html\n>\n> Can people comment on the following patch that Dennis says is needed?\n> It prevents BLOB operations outside transactions. Dennis, can you\n> explain why BLOB operations have to be done inside transactions?\n\nIf you forget to put BLOB in TX, you will get errors like 'lo_read: invalid \nlarge obj descriptor (0)'. The problem is that in be-fsstubs.c in lo_commit \nall descriptors are removed. And if you did not opened TX, it will be \ncommited after each function call. And for the next call there will be no \nsuch fd in the tables.\n\nTom later wrote:\n> I object strongly. 
As given, this would break lo_creat, lo_unlink,\n> lo_import, and lo_export --- none of which need to be in a transaction\n> block --- not to mention possibly causing gratuitous failures during\n> lo_commit.\n\nFirst of all, it will not break lo_creat, lo_unlink for sure. But we can \nremove the checks from inv_create, and inv_drop. They are not important. At least \nthere will be no strange errors issued.\n\nI do not know why you think there will be any problems with lo_commit. I \ncannot find such reasons.\n\nI cannot say anything about lo_import/lo_export, as I do not know why they \nare not inside TX themselves.\n\nI am not sure, maybe Tom is right, and we should fix be-fsstubs.c instead. \nBut I do not see any reasons why we should not put lo_import, and lo_export in TX. \nAt least this will prevent other backends from reading partially imported \nBLOBs...\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Sun, 21 Jan 2001 12:48:17 +0600", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] Patch to support transactions with BLOBs for\n\tcurrent CVS" }, { "msg_contents": "Denis Perchine <[email protected]> writes:\n> First of all, it will not break lo_creat, lo_unlink for sure.\n\nlo_creat depends on inv_create followed by inv_close; your patch\nproposed to disable both of those outside transaction blocks.\nlo_unlink depends on inv_drop, which ditto. Your patch therefore\nrestricts lo_creat and lo_unlink to be done inside transaction blocks,\nwhich is a new and completely unnecessary restriction that will\ndoubtless break many existing applications.\n\n> But I do not see any reasons why we should not put lo_import, and lo_export in TX. \n> At least this will prevent other backends from reading partially imported \n> BLOBs...\n\nlo_import and lo_export always execute in a transaction, just like any\nother backend operation. There is no need to force them to be done in\na transaction block. If you're not clear about this, perhaps you need\nto review the difference between transactions and transaction blocks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Jan 2001 02:08:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] Patch to support transactions with BLOBs for\n\tcurrent CVS" }, { "msg_contents": "> > First of all, it will not break lo_creat, lo_unlink for sure.\n>\n> lo_creat depends on inv_create followed by inv_close; your patch\n> proposed to disable both of those outside transaction blocks.\n> lo_unlink depends on inv_drop, which ditto. Your patch therefore\n> restricts lo_creat and lo_unlink to be done inside transaction blocks,\n> which is a new and completely unnecessary restriction that will\n> doubtless break many existing applications.\n\nOK. As I already said, we can remove the checks from inv_create/inv_drop. They are \nnot needed there.\n\n> > But I do not see any reasons why we should not put lo_import, and lo_export in\n> > TX. At least this will prevent other backends from reading partially\n> > imported BLOBs...\n>\n> lo_import and lo_export always execute in a transaction, just like any\n> other backend operation. There is no need to force them to be done in\n> a transaction block. 
If you're not clear about this, perhaps you need\n> to review the difference between transactions and transaction blocks.\n\nHmmm... Where can I read about it? At least which source/header?\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Sun, 21 Jan 2001 13:09:46 +0600", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] Patch to support transactions with BLOBs for\n\tcurrent CVS" }, { "msg_contents": "Denis Perchine <[email protected]> writes:\n>> lo_import and lo_export always execute in a transaction, just like any\n>> other backend operation. There is no need to force them to be done in\n>> a transaction block. If you're not clear about this, perhaps you need\n>> to review the difference between transactions and transaction blocks.\n\n> Hmmm... Where can I read about it? At least which source/header?\n\nTry src/backend/access/transam/xact.c. The point is that you need a\ntransaction block only if you need to combine multiple SQL commands\ninto a single transaction. A standalone command or function call is\nstill done inside a transaction.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Jan 2001 13:34:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] Patch to support transactions with BLOBs for\n\tcurrent CVS" } ]
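To make the failure mode Denis describes concrete, here is a hedged psql sketch. The OID 17001 and the descriptor number are invented, lo_open/loread are assumed to be reachable as the SQL-callable wrappers that be-fsstubs.c provides, and 262144 is the INV_READ flag value:

SELECT lo_open(17001, 262144);  -- outside a transaction block the implicit
                                -- transaction commits immediately, and
                                -- lo_commit throws away the open descriptor
SELECT loread(0, 100);          -- so this fails: 'lo_read: invalid large
                                -- obj descriptor (0)'

BEGIN;
SELECT lo_open(17001, 262144);  -- returns a descriptor, e.g. 0
SELECT loread(0, 100);          -- the descriptor is still valid here
COMMIT;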
[ { "msg_contents": "Helo all,\n\n Please try this script, it will crash the current connection.\n I'm using the 01/18/2001 PostgreSQL v7.1 beta3 snapshot.\n\n-- Script begin -------------------------------------\ncreate table blah(\n var_field varchar(8),\n n1 integer default 23,\n n2 integer,\n arr_str varchar[],\n m money,\n s text\n);\n\ncreate rule blah_update as\n on update to blah\n do\n notify TestEvent;\n\nINSERT INTO blah (var_field, n1, n2, arr_str, m, s) VALUES ('aaa', 1, 2,\nNULL, NULL, NULL);\nUPDATE blah SET n1=n1+1; -- Won't crash the connection\nUPDATE blah SET n1=2 WHERE var_field='aaa' AND n1=1 AND n2=2 AND arr_str IS\nNULL AND m IS NULL; -- Will crash the connection\n\n-- Script end -------------------------------------\n psql will print :\n\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!#\n\nAny comments ?\n\nI need this kind of code working for a demo for ZeosDBO users...\n\n\nBest Regards,\nSteve Howe\n\n\n", "msg_date": "Sun, 21 Jan 2001 03:04:41 -0200", "msg_from": "\"Steve Howe\" <[email protected]>", "msg_from_op": true, "msg_subject": "This script will crash the connection" }, { "msg_contents": "\"Steve Howe\" <[email protected]> writes:\n> Please try this script, it will crash the current connection.\n\nCrash confirmed. Thanks for the report --- I'm on it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Jan 2001 00:06:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: This script will crash the connection " }, { "msg_contents": "\"Steve Howe\" <[email protected]> writes:\n> create rule blah_update as\n> on update to blah\n> do\n> notify TestEvent;\n\n> UPDATE blah SET n1=n1+1; -- Won't crash the connection\n> UPDATE blah SET n1=2 WHERE var_field='aaa' AND n1=1 AND n2=2 AND arr_str IS\n> NULL AND m IS NULL; -- Will crash the connection\n\nThe problem here is that the query rewriter tries to hang the query's\nqualification (WHERE clause) onto the rule's action query, so that\nthe action query won't be done unless the query finds at least one\nrow to update.\n\nNOTIFY commands, being utility statements, don't have qualifications.\nIn 7.0 and before, the qual clause just vanished into the ether, and\nso in this example the NOTIFY would execute whether the UPDATE updated\nany rows or not. In 7.1 there is physically noplace to hang the qual\n(no jointree) and thus a crash.\n\nNot sure what to do here. Adding quals to utility statements is right\nout, however --- even if we weren't late in beta, the concept doesn't\nmake any sense to me. For one reason, utility statements don't have\nFROM clauses against which to evaluate the quals. I am leaning to the\nidea that we should forbid NOTIFY in rules altogether. Jan, what's your\nthought?\n\nSteve, your immediate move is to use a trigger rather than a rule to\nexecute the NOTIFY. 
Meanwhile, we have to think about what to do...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Jan 2001 00:57:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: This script will crash the connection " }, { "msg_contents": "\nIs there a TODO item here, Tom?\n\n> \"Steve Howe\" <[email protected]> writes:\n> > create rule blah_update as\n> > on update to blah\n> > do\n> > notify TestEvent;\n> \n> > UPDATE blah SET n1=n1+1; -- Won't crash the connection\n> > UPDATE blah SET n1=2 WHERE var_field='aaa' AND n1=1 AND n2=2 AND arr_str IS\n> > NULL AND m IS NULL; -- Will crash the connection\n> \n> The problem here is that the query rewriter tries to hang the query's\n> qualification (WHERE clause) onto the rule's action query, so that\n> the action query won't be done unless the query finds at least one\n> row to update.\n> \n> NOTIFY commands, being utility statements, don't have qualifications.\n> In 7.0 and before, the qual clause just vanished into the ether, and\n> so in this example the NOTIFY would execute whether the UPDATE updated\n> any rows or not. In 7.1 there is physically noplace to hang the qual\n> (no jointree) and thus a crash.\n> \n> Not sure what to do here. Adding quals to utility statements is right\n> out, however --- even if we weren't late in beta, the concept doesn't\n> make any sense to me. For one reason, utility statements don't have\n> FROM clauses against which to evaluate the quals. I am leaning to the\n> idea that we should forbid NOTIFY in rules altogether. Jan, what's your\n> thought?\n> \n> Steve, your immediate move is to use a trigger rather than a rule to\n> execute the NOTIFY. Meanwhile, we have to think about what to do...\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 07:08:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: This script will crash the connection" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Is there a TODO item here, Tom?\n\nHopefully we can just decide what to do and do it. I'm waiting to hear\nJan's opinion ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Jan 2001 11:40:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: This script will crash the connection " }, { "msg_contents": "Tom Lane wrote:\n> \"Steve Howe\" <[email protected]> writes:\n> > create rule blah_update as\n> > on update to blah\n> > do\n> > notify TestEvent;\n>\n> > UPDATE blah SET n1=n1+1; -- Won't crash the connection\n> > UPDATE blah SET n1=2 WHERE var_field='aaa' AND n1=1 AND n2=2 AND arr_str IS\n> > NULL AND m IS NULL; -- Will crash the connection\n>\n> The problem here is that the query rewriter tries to hang the query's\n> qualification (WHERE clause) onto the rule's action query, so that\n> the action query won't be done unless the query finds at least one\n> row to update.\n>\n> NOTIFY commands, being utility statements, don't have qualifications.\n> In 7.0 and before, the qual clause just vanished into the ether, and\n> so in this example the NOTIFY would execute whether the UPDATE updated\n> any rows or not. In 7.1 there is physically noplace to hang the qual\n> (no jointree) and thus a crash.\n>\n> Not sure what to do here. 
Adding quals to utility statements is right\n> out, however --- even if we weren't late in beta, the concept doesn't\n> make any sense to me. For one reason, utility statements don't have\n> FROM clauses against which to evaluate the quals. I am leaning to the\n> idea that we should forbid NOTIFY in rules altogether. Jan, what's your\n> thought?\n>\n> Steve, your immediate move is to use a trigger rather than a rule to\n> execute the NOTIFY. Meanwhile, we have to think about what to do...\n\n Would be something for a STATEMENT trigger. We don't have 'em\n yet and I'm not sure what kind of information they will\n receive if we finally implement them. But the number of rows\n affected by the statement is a good candidate.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 24 Jan 2001 17:35:54 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: This script will crash the connection" }, { "msg_contents": "Jan Wieck <[email protected]> writes:\n> Tom Lane wrote:\n>> The problem here is that the query rewriter tries to hang the query's\n>> qualification (WHERE clause) onto the rule's action query, so that\n>> the action query won't be done unless the query finds at least one\n>> row to update.\n>> NOTIFY commands, being utility statements, don't have qualifications.\n>> In 7.0 and before, the qual clause just vanished into the ether, and\n>> so in this example the NOTIFY would execute whether the UPDATE updated\n>> any rows or not. In 7.1 there is physically noplace to hang the qual\n>> (no jointree) and thus a crash.\n\n> Would be something for a STATEMENT trigger. We don't have 'em\n> yet and I'm not sure what kind of information they will\n> receive if we finally implement them. But the number of rows\n> affected by the statement is a good candidate.\n\nThat's no help for a 7.1 solution however. We can't start inventing\na new feature at this stage.\n\nWhat I am inclined to do is have the rewriter reject conditional rules\nthat contain NOTIFY. That seems like the minimal restriction that will\nprevent a crash or incorrect behavior. Comments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Jan 2001 13:30:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: This script will crash the connection " }, { "msg_contents": "> > Would be something for a STATEMENT trigger. We don't have 'em\n> > yet and I'm not sure what kind of information they will\n> > receive if we finally implement them. But the number of rows\n> > affected by the statement is a good candidate.\n> \n> That's no help for a 7.1 solution however. We can't start inventing\n> a new feature at this stage.\n> \n> What I am inclined to do is have the rewriter reject conditional rules\n> that contain NOTIFY. That seems like the minimal restriction that will\n> prevent a crash or incorrect behavior. 
Comments?\n\nOK, added to TODO:\n\n\t* Allow NOTIFY in rules\n\nand removed from open items list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 26 Jan 2001 15:35:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: This script will crash the connection" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> OK, added to TODO:\n> \t* Allow NOTIFY in rules\n\nUh, what does that have to do with the problem? It's certainly not\nan accurate rendering of either the current or proposed status ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Jan 2001 16:07:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: This script will crash the connection " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > OK, added to TODO:\n> > \t* Allow NOTIFY in rules\n> \n> Uh, what does that have to do with the problem? It's certainly not\n> an accurate rendering of either the current or proposed status ...\n\nOops, can you give me a line. What was the issue?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 26 Jan 2001 23:56:14 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: This script will crash the connection" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> OK, added to TODO:\n> * Allow NOTIFY in rules\n>> \n>> Uh, what does that have to do with the problem? It's certainly not\n>> an accurate rendering of either the current or proposed status ...\n\n> Oops, can you give me a line. What was the issue?\n\n\"Allow NOTIFY in conditional rules\" would be an approximation. It's\nnot the whole story though, because presently we also have to fail\nif the rule is applied to a query with conditions, even if the rule\nitself is unconditional. As of my last commit:\n\nregression=# create rule r1 as on update to int4_tbl do notify foo;\nCREATE\nregression=# update int4_tbl set f1 = f1;\nUPDATE 5\nregression=# update int4_tbl set f1 = f1 where f1 < 0;\nERROR: Conditional NOTIFY is not implemented\n\nwhich is pretty ugly but at least it doesn't pretend to do something\nit can't, which was the 7.0 behavior. (In 7.0 you'd have gotten a\nNOTIFY whether the update updated any rows or not.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 27 Jan 2001 00:31:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: This script will crash the connection " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > OK, added to TODO:\n> > * Allow NOTIFY in rules\n> >> \n> >> Uh, what does that have to do with the problem? It's certainly not\n> >> an accurate rendering of either the current or proposed status ...\n> \n> > Oops, can you give me a line. What was the issue?\n> \n> \"Allow NOTIFY in conditional rules\" would be an approximation. It's\n> not the whole story though, because presently we also have to fail\n> if the rule is applied to a query with conditions, even if the rule\n> itself is unconditional. 
As of my last commit:\n> \n> regression=# create rule r1 as on update to int4_tbl do notify foo;\n> CREATE\n> regression=# update int4_tbl set f1 = f1;\n> UPDATE 5\n> regression=# update int4_tbl set f1 = f1 where f1 < 0;\n> ERROR: Conditional NOTIFY is not implemented\n> \n> which is pretty ugly but at least it doesn't pretend to do something\n> it can't, which was the 7.0 behavior. (In 7.0 you'd have gotten a\n> NOTIFY whether the update updated any rows or not.)\n\nAdded to TODO:\n\n\t* Allow NOTIFY in rules involving conditionals \n\nThis covers both cases of conditionals in the rule or the query.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 27 Jan 2001 00:40:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: This script will crash the connection" } ]
[ { "msg_contents": "\nHi,\nI'm using ODBC/MS Access and every time my client shuts down\nI get the log message:\n\npq_recvbuf: unexpected EOF on client connection\n\nIs there something I'm doing wrong here? I haven't been too bothered by\nthis but I do wonder...\n\nMany thanks for any tips!\n\n-Cedar\n\n(yes I did copy the JDBC post.. ;)\n\n", "msg_date": "Sun, 21 Jan 2001 12:47:09 +0200 (IST)", "msg_from": "Cedar Cox <[email protected]>", "msg_from_op": true, "msg_subject": "ODBC gives pq_recvbuf: unexpected EOF on client connection" }, { "msg_contents": "Cedar Cox <[email protected]> writes:\n> I'm using ODBC/MS Access and every time my client shuts down\n> I get the log message:\n> pq_recvbuf: unexpected EOF on client connection\n> Is there something I'm doing wrong here?\n\nNot you, the ODBC driver --- it's just unceremoniously closing the\nsocket connection without being polite enough to send the disconnect\nmessage (a single 'X', I think) first.\n\nSomebody ought to fix that, but it's not a real high priority.\nThere's no bad side-effects other than cluttering the postmaster log.\n\n> (yes I did copy the JDBC post.. ;)\n\nI haven't looked at the JDBC code, but evidently it's equally impolite.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Jan 2001 13:39:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ODBC gives pq_recvbuf: unexpected EOF on client connection " }, { "msg_contents": "At 13:39 21/01/01 -0500, Tom Lane wrote:\n>Cedar Cox <[email protected]> writes:\n> > I'm using ODBC/MS Access and every time my client shuts down\n> > I get the log message:\n> > pq_recvbuf: unexpected EOF on client connection\n> > Is there something I'm doing wrong here?\n>\n>Not you, the ODBC driver --- it's just unceremoniously closing the\n>socket connection without being polite enough to send the disconnect\n>message (a single 'X', I think) first.\n>\n>Somebody ought to fix that, but it's not a real high priority.\n>There's no bad side-effects other than cluttering the postmaster log.\n>\n> > (yes I did copy the JDBC post.. ;)\n>\n>I haven't looked at the JDBC code, but evidently it's equally impolite.\n\nThis was fixed in 6.5.3 as long as the client doesn't kill the JVM \nprematurely. As long as client code calls the Connection's close() method \nthen the backend will get the X message. Problem is most people \nconveniently forget to put close() in their code.\n\nPeter\n\n", "msg_date": "Mon, 22 Jan 2001 20:45:55 +0000", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ODBC gives pq_recvbuf: unexpected EOF on\n client connection" }, { "msg_contents": "\nCan someone research this? I heard JDBC has the same problem.\n\n\n> Cedar Cox <[email protected]> writes:\n> > I'm using ODBC/MS Access and every time my client shuts down\n> > I get the log message:\n> > pq_recvbuf: unexpected EOF on client connection\n> > Is there something I'm doing wrong here?\n> \n> Not you, the ODBC driver --- it's just unceremoniously closing the\n> socket connection without being polite enough to send the disconnect\n> message (a single 'X', I think) first.\n> \n> Somebody ought to fix that, but it's not a real high priority.\n> There's no bad side-effects other than cluttering the postmaster log.\n> \n> > (yes I did copy the JDBC post.. 
;)\n> \n> I haven't looked at the JDBC code, but evidently it's equally impolite.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 13:47:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] ODBC gives pq_recvbuf: unexpected EOF on client\n\tconnection" } ]
[ { "msg_contents": "Hi all,\n\nI'm experiencing some strange behaviour with postgresql 7.0.3 on Red Hat \nLinux 7. I'm sending lots of insert statements to the postgresql server \nfrom another machine via JDBC. During that process postgresql continues to \ntake up more and more memory and seemingly never returns it to the system. \nOddly if I watch the postmaster and it's sub processes in ktop, I can't see \nwhich process takes up this memory. ktop shows that the postgresql related \nprocesses have a constant memory usage but the overall memory usage always \nincreases as long as I continue to send insert statements.\n\nWhen the database connection is closed, no memory is reclaimed, the overall \nmemory usage stays the same. And when I close down all postgresql processes \nincluding postmaster, it's the same.\nI'm rather new to Linux and postgresql so I'm not sure if I should call \nthis a memory leak :-)\nHas anybody experienced a similar thing?\n\nthanks,\nAlexander Jerusalemvknn \n\n", "msg_date": "Sun, 21 Jan 2001 13:18:54 +0100", "msg_from": "Alexander Jerusalem <[email protected]>", "msg_from_op": true, "msg_subject": "postgres memory management" }, { "msg_contents": "Thank you for your answer Mark!\n\nNow I have updated glibc to the latest version (2.2) and it's still the \nsame. I don't have the time to change to a different Linux version just to \ntry if that solves the problem. What else could I do?\n\nthanks,\nAlexander Jerusalem\[email protected]\n\n\nAt 15:49 21.01.01, you wrote:\n>First Things First. I would not use a .0 version of Redhat for anything. The\n>7.0 version is very buggy. Switch to Redhat 6.2 or another distribution like\n>Slackware 7.2.\n>\n> > Hi all,\n> >\n> > I'm experiencing some strange behaviour with postgresql 7.0.3 on Red Hat\n> > Linux 7. I'm sending lots of insert statements to the postgresql server\n> > from another machine via JDBC. During that process postgresql continues to\n> > take up more and more memory and seemingly never returns it to the system.\n> > Oddly if I watch the postmaster and it's sub processes in ktop, I can't\n>see\n> > which process takes up this memory. ktop shows that the postgresql related\n> > processes have a constant memory usage but the overall memory usage always\n> > increases as long as I continue to send insert statements.\n> >\n> > When the database connection is closed, no memory is reclaimed, the\n>overall\n> > memory usage stays the same. And when I close down all postgresql\n>processes\n> > including postmaster, it's the same.\n> > I'm rather new to Linux and postgresql so I'm not sure if I should call\n> > this a memory leak :-)\n> > Has anybody experienced a similar thing?\n> >\n> > thanks,\n> > Alexander Jerusalemvknn\n> >\n> >\n\n", "msg_date": "Sun, 21 Jan 2001 17:01:34 +0100", "msg_from": "Alexander Jerusalem <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres memory management" }, { "msg_contents": "On Sun, Jan 21, 2001 at 13:18:54 +0100, Alexander Jerusalem wrote:\n> During that process postgresql continues to take up more and more memory\n> and seemingly never returns it to the system. Oddly if I watch the\n> postmaster and it's sub processes in ktop, I can't see which process takes\n> up this memory. 
ktop shows that the postgresql related processes have a\n> constant memory usage but the overall memory usage always increases as\n> long as I continue to send insert statements.\n> \n> When the database connection is closed, no memory is reclaimed, the\n> overall memory usage stays the same. And when I close down all postgresql\n> processes including postmaster, it's the same.\n\nLinux uses memory that wouldn't otherwise be used as buffer/cache space\n(watch the \"cached\" entry in \"top\"). This is nothing to worry about.\n\nHTH,\nRay\n-- \nUSDoJ/Judge Jackson: \"Microsoft has performed an illegal operation and will\nbe shut down.\"\n\tJames Turinsky in alt.sysadmin.recovery\n", "msg_date": "Sun, 21 Jan 2001 18:46:38 +0100", "msg_from": "\"J.H.M. Dassen (Ray)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres memory management" }, { "msg_contents": "On Sun, Jan 21, 2001 at 01:18:54PM +0100, Alexander Jerusalem wrote:\n> When the database connection is closed, no memory is reclaimed, the overall \n> memory usage stays the same. And when I close down all postgresql processes \n> including postmaster, it's the same.\n> I'm rather new to Linux and postgresql so I'm not sure if I should call \n> this a memory leak :-)\n\nHow much memory is being used? Do you ever go into swap? If not,\nwhat's probably happening is Linux is using free memory to cache data\nlike I/O. Linux should automatically release this memory if it's\nneeded by a process. So as long as you have some free memory, I'd\nsay don't worry about it -- but if you start going into swap and\nthis memory isn't released, then you might have a problem.\n\nBTW, you're using 'ktop', the KDE front end to 'top'? If you're\nconcerned about memory usage, I'd definately recommend not running\nKDE, X, or any other GUI stuff.\n\nHTH,\n\nNeil\n\n-- \nNeil Conway <[email protected]>\nGet my GnuPG key from: http://klamath.dyndns.org/mykey.asc\nEncrypted mail welcomed\n\nViolence is to dictatorship as propaganda is to democracy.\n -- Noam Chomsky\n", "msg_date": "Sun, 21 Jan 2001 12:49:49 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres memory management" }, { "msg_contents": "Neil, thank you for your answer,\n\nI thought about that possibility and it is possible since that computer has \n512 MB RAM. But when I start and stop other programs like emacs the memory \nis freed as soon as I stop them. As to KDE: I'm not concerned about a lack \nof memory in general but I'm about to deploy an application on a server \nthat I hope will be running for a long time without me having to restart it \nevery two days because of a memory leak in some software. Anyway, I hope \nyou're right, I'll just try it :-)\n\nthanks,\n\nAlexander Jerusalem\[email protected]\nvknn\n\n\nAt 18:49 21.01.01, Neil Conway wrote:\n>On Sun, Jan 21, 2001 at 01:18:54PM +0100, Alexander Jerusalem wrote:\n> > When the database connection is closed, no memory is reclaimed, the \n> overall\n> > memory usage stays the same. And when I close down all postgresql \n> processes\n> > including postmaster, it's the same.\n> > I'm rather new to Linux and postgresql so I'm not sure if I should call\n> > this a memory leak :-)\n>\n>How much memory is being used? Do you ever go into swap? If not,\n>what's probably happening is Linux is using free memory to cache data\n>like I/O. Linux should automatically release this memory if it's\n>needed by a process. 
So as long as you have some free memory, I'd\nsay don't worry about it -- but if you start going into swap and\nthis memory isn't released, then you might have a problem.\n\nBTW, you're using 'ktop', the KDE front end to 'top'? If you're\nconcerned about memory usage, I'd definitely recommend not running\nKDE, X, or any other GUI stuff.\n\nHTH,\n\nNeil\n\n-- \nNeil Conway <[email protected]>\nGet my GnuPG key from: http://klamath.dyndns.org/mykey.asc\nEncrypted mail welcomed\n\nViolence is to dictatorship as propaganda is to democracy.\n -- Noam Chomsky\n", "msg_date": "Sun, 21 Jan 2001 12:49:49 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres memory management" }, { "msg_contents": "Neil, thank you for your answer,\n\nI thought about that possibility and it is possible since that computer has \n512 MB RAM. But when I start and stop other programs like emacs the memory \nis freed as soon as I stop them. As to KDE: I'm not concerned about a lack \nof memory in general but I'm about to deploy an application on a server \nthat I hope will be running for a long time without me having to restart it \nevery two days because of a memory leak in some software. Anyway, I hope \nyou're right, I'll just try it :-)\n\nthanks,\n\nAlexander Jerusalem\[email protected]\nvknn\n\n\nAt 18:49 21.01.01, Neil Conway wrote:\n>On Sun, Jan 21, 2001 at 01:18:54PM +0100, Alexander Jerusalem wrote:\n> > When the database connection is closed, no memory is reclaimed, the \n> overall\n> > memory usage stays the same. And when I close down all postgresql \n> processes\n> > including postmaster, it's the same.\n> > I'm rather new to Linux and postgresql so I'm not sure if I should call\n> > this a memory leak :-)\n>\n>How much memory is being used? Do you ever go into swap? If not,\n>what's probably happening is Linux is using free memory to cache data\n>like I/O. Linux should automatically release this memory if it's\n>needed by a process. So as long as you have some free memory, I'd\n>say don't worry about it -- but if you start going into swap and\n>this memory isn't released, then you might have a problem.\n>\n>BTW, you're using 'ktop', the KDE front end to 'top'? If you're\n>concerned about memory usage, I'd definitely recommend not running\n>KDE, X, or any other GUI stuff.\n>\n>HTH,\n>\n>Neil\n>\n>--\n>Neil Conway <[email protected]>\n>Get my GnuPG key from: http://klamath.dyndns.org/mykey.asc\n>Encrypted mail welcomed\n>\n>Violence is to dictatorship as propaganda is to democracy.\n> -- Noam Chomsky\n\n", "msg_date": "Sun, 21 Jan 2001 22:27:45 +0100", "msg_from": "Alexander Jerusalem <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres memory management" }, { "msg_contents": "Hello,\n\nI'm converting a mysql database to postgres. Is there an equivalent \nfor the enum data type?\n\nThanks,\n\nSteve L\n", "msg_date": "Sun, 21 Jan 2001 22:33:02 -0500", "msg_from": "Steve Leibel <[email protected]>", "msg_from_op": false, "msg_subject": "'enum' equivalent?" }, { "msg_contents": "\nOn Sun, Jan 21, 2001 at 10:33:02PM -0500, Steve Leibel wrote:\n> Hello,\n> \n> I'm converting a mysql database to postgres. Is there an equivalent \n> for the enum data type?\n\nIf you want to make a column limited to a few certain values, you can\ndefine it something like:\n\n mycolumn varchar(3) check (mycolumn in ('foo', 'bar', 'baz'))\n\nThat would be somewhat similar in effect to mysql's\n\n mycolumn enum('foo', 'bar', 'baz')\n\nZach \n-- \[email protected] Zachary Beane http://www.xach.com/\n", "msg_date": "Sun, 21 Jan 2001 23:01:04 -0500", "msg_from": "Zachary Beane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 'enum' equivalent?" }, { "msg_contents": "Steve Leibel writes:\n> I'm converting a mysql database to postgres. Is there an equivalent \n> for the enum data type?\n\nNo, but you can put the enum data in a separate table and join\nthem. This also makes the operation of adding entries to the enum list\nbetter defined.\n\nDan\n", "msg_date": "Mon, 22 Jan 2001 10:50:32 -0800 (PST)", "msg_from": "Dan Lyke <[email protected]>", "msg_from_op": false, "msg_subject": "'enum' equivalent?" }, { "msg_contents": "At 13:18 21/01/01 +0100, Alexander Jerusalem wrote:\n>Hi all,\n>\n>I'm experiencing some strange behaviour with postgresql 7.0.3 on Red Hat \n>Linux 7. I'm sending lots of insert statements to the postgresql server \n>from another machine via JDBC. During that process postgresql continues to \n>take up more and more memory and seemingly never returns it to the system. \n>Oddly if I watch the postmaster and its sub processes in ktop, I can't \n>see which process takes up this memory. ktop shows that the postgresql \n>related processes have a constant memory usage but the overall memory \n>usage always increases as long as I continue to send insert statements.\n>\n>When the database connection is closed, no memory is reclaimed, the \n>overall memory usage stays the same. And when I close down all postgresql \n>processes including postmaster, it's the same.\n>I'm rather new to Linux and postgresql so I'm not sure if I should call \n>this a memory leak :-)\n>Has anybody experienced a similar thing?\n\nI'm not sure myself. You can rule out JDBC (or Java) here as you say you \nare connecting from another machine.\n\nWhen your JDBC app closes, does it call the connection's close() method? 
\nDo any messages like \"Unexpected EOF from client\" appear on the server side?\n\nThe only other thing that comes to mind is possibly something weird is \nhappening with IPC. After you closed down postgres, does ipcclean free up \nany memory?\n\nI'm cc'ing the hackers list and the new jdbc list.\n\nPeter\n\n\n>thanks,\n>Alexander Jerusalem\n>vknn\n\n", "msg_date": "Mon, 22 Jan 2001 20:40:53 +0000", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres memory management" }, { "msg_contents": "* Peter Mount <[email protected]> [010122 13:21] wrote:\n> At 13:18 21/01/01 +0100, Alexander Jerusalem wrote:\n> >Hi all,\n> >\n> >I'm experiencing some strange behaviour with postgresql 7.0.3 on Red Hat \n> >Linux 7. I'm sending lots of insert statements to the postgresql server \n> >from another machine via JDBC. During that process postgresql continues to \n> >take up more and more memory and seemingly never returns it to the system. \n> >Oddly if I watch the postmaster and its sub processes in ktop, I can't \n> >see which process takes up this memory. ktop shows that the 
ktop shows that the \n>>postgresql related processes have a constant memory usage but the overall \n>>memory usage always increases as long as I continue to send insert statements.\n>>\n>>When the database connection is closed, no memory is reclaimed, the \n>>overall memory usage stays the same. And when I close down all postgresql \n>>processes including postmaster, it's the same.\n>>I'm rather new to Linux and postgresql so I'm not sure if I should call \n>>this a memory leak :-)\n>>Has anybody experienced a similar thing?\n>\n>I'm not sure myself. You can rule out JDBC (or Java) here as you say you \n>are connecting from another machine.\n>\n>When your JDBC app closes, does it call the connection's close() method? \n>Does any messages like \"Unexpected EOF from client\" appear on the server side?\n>\n>The only other thing that comes to mine is possibly something weird is \n>happening with IPC. After you closed down postgres, does ipcclean free up \n>any memory?\n>\n>I'm cc'in the hackers list and the new jdbc list.\n>\n>Peter\n\nThanks for your answer!\n\nYes I'm calling Connection.close(). I don't get any error messages but \nmaybe I just don't see them because postgresql is started automatically at \nrun level 3. I'm not sure where the output goes. (pg_log contains only \ngarbage or maybe it's a binary file) I tried ipcclean right now and it \ndoesn't free the memory but it gives me some messages that I cannot interpret:\n\nShared memory 0 ... skipped. Process still exists (pid ).\nShared memory 1 ... skipped. Process still exists (pid ).\nShared memory 2 ... skipped. Process still exists (pid ).\nShared memory 3 ... skipped. Process still exists (pid ).\nSemaphore 0 ... resource(s) deleted\nSemaphore 1 ... resource(s) deleted\n\nOddly, when I try to run ipcclean a second time, it says: ipcclean: You \nstill have a postmaster running. Which is not the case as ps -e proves.\n\nAlexander Jerusalem\[email protected]\nvknn\n\n", "msg_date": "Tue, 23 Jan 2001 00:53:54 +0100", "msg_from": "Alexander Jerusalem <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres memory management" }, { "msg_contents": "At 22:29 22.01.01, Alfred Perlstein wrote:\n>* Peter Mount <[email protected]> [010122 13:21] wrote:\n> > At 13:18 21/01/01 +0100, Alexander Jerusalem wrote:\n> > >Hi all,\n> > >\n> > >I'm experiencing some strange behaviour with postgresql 7.0.3 on Red Hat\n> > >Linux 7. I'm sending lots of insert statements to the postgresql server\n> > >from another machine via JDBC. During that process postgresql \n> continues to\n> > >take up more and more memory and seemingly never returns it to the \n> system.\n> > >Oddly if I watch the postmaster and it's sub processes in ktop, I can't\n> > >see which process takes up this memory. ktop shows that the postgresql\n> > >related processes have a constant memory usage but the overall memory\n> > >usage always increases as long as I continue to send insert statements.\n> > >\n> > >When the database connection is closed, no memory is reclaimed, the\n> > >overall memory usage stays the same. And when I close down all postgresql\n> > >processes including postmaster, it's the same.\n> > >I'm rather new to Linux and postgresql so I'm not sure if I should call\n> > >this a memory leak :-)\n> > >Has anybody experienced a similar thing?\n> >\n> > I'm not sure myself. 
You can rule out JDBC (or Java) here as you say you \n>are connecting from another machine.\n>\n>When your JDBC app closes, does it call the connection's close() method? \n>Do any messages like \"Unexpected EOF from client\" appear on the server side?\n>\n>The only other thing that comes to mind is possibly something weird is \n>happening with IPC. After you closed down postgres, does ipcclean free up \n>any memory?\n>\n>I'm cc'ing the hackers list and the new jdbc list.\n>\n>Peter\n\nThanks for your answer!\n\nYes I'm calling Connection.close(). I don't get any error messages but \nmaybe I just don't see them because postgresql is started automatically at \nrun level 3. I'm not sure where the output goes. (pg_log contains only \ngarbage or maybe it's a binary file) I tried ipcclean right now and it \ndoesn't free the memory but it gives me some messages that I cannot interpret:\n\nShared memory 0 ... skipped. Process still exists (pid ).\nShared memory 1 ... skipped. Process still exists (pid ).\nShared memory 2 ... skipped. Process still exists (pid ).\nShared memory 3 ... skipped. Process still exists (pid ).\nSemaphore 0 ... resource(s) deleted\nSemaphore 1 ... resource(s) deleted\n\nOddly, when I try to run ipcclean a second time, it says: ipcclean: You \nstill have a postmaster running. Which is not the case as ps -e proves.\n\nAlexander Jerusalem\[email protected]\nvknn\n\n", "msg_date": "Tue, 23 Jan 2001 00:53:54 +0100", "msg_from": "Alexander Jerusalem <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres memory management" }, { "msg_contents": "At 22:29 22.01.01, Alfred Perlstein wrote:\n>* Peter Mount <[email protected]> [010122 13:21] wrote:\n> > At 13:18 21/01/01 +0100, Alexander Jerusalem wrote:\n> > >Hi all,\n> > >\n> > >I'm experiencing some strange behaviour with postgresql 7.0.3 on Red Hat\n> > >Linux 7. I'm sending lots of insert statements to the postgresql server\n> > >from another machine via JDBC. During that process postgresql continues to\n> > >take up more and more memory and seemingly never returns it to the system.\n> > >Oddly if I watch the postmaster and its sub processes in ktop, I can't\n> > >see which process takes up this memory. ktop shows that the postgresql\n> > >related processes have a constant memory usage but the overall memory\n> > >usage always increases as long as I continue to send insert statements.\n> > >\n> > >When the database connection is closed, no memory is reclaimed, the\n> > >overall memory usage stays the same. And when I close down all postgresql\n> > >processes including postmaster, it's the same.\n> > >I'm rather new to Linux and postgresql so I'm not sure if I should call\n> > >this a memory leak :-)\n> > >Has anybody experienced a similar thing?\n> >\n> > I'm not sure myself. 
You can rule out JDBC (or Java) here as you say you\n> > are connecting from another machine.\n> >\n> > When your JDBC app closes, does it call the connection's close() method?\n> > Do any messages like \"Unexpected EOF from client\" appear on the \n> server side?\n> >\n> > The only other thing that comes to mind is possibly something weird is\n> > happening with IPC. After you closed down postgres, does ipcclean free up\n> > any memory?\n>\n>I don't know if this is valid for Linux, but it is how FreeBSD\n>works, for the most part used memory is never free'd, it is only\n>marked as reclaimable. This is so the system can cache more data.\n>On a freshly booted FreeBSD box you'll have a lot of 'free' memory,\n>after the box has been running for a long time the 'free' memory\n>will probably never go higher than 10megs, the rest is being used\n>as cache.\n>\n>The main things you have to worry about are:\n>a) really running out of memory (are you using a lot of swap?)\n>b) not cleaning up IPC as Peter suggested.\n\nThanks for your answer!\n\nI'm rather new to Linux, so I can't tell if it's that way on Linux. But I \nnoticed that other programs free some memory when I quit them. As to KDE: I'm not concerned about a lack of memory in general, but it's \ntrue that I'm not running out of memory. I have 300 MB of free RAM and no \nswap space is used. As I wrote in reply to Peter's mail, ipcclean doesn't \nchange anything.\n\nAlexander Jerusalem\[email protected]\nvknn\n\n", "msg_date": "Tue, 23 Jan 2001 00:58:49 +0100", "msg_from": "Alexander Jerusalem <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: postgres memory management" }, { "msg_contents": "I just looked at the code in the close() method and all it is doing is\nclosing the socket connection to the server. However in looking at the\ndoc on the backend/frontend protocol it appears that the client (JDBC in\nthis case) is supposed to send a connection termination message first,\nthen close the socket. I believe that the termination message is\nsupposed to be a one byte value of 'X'. If I read this correctly, then\nit does appear that the JDBC connection class does have a bug.\n\nthanks,\n--Barry\n\n\nAlexander Jerusalem wrote:\n> \n> At 21:40 22.01.01, Peter Mount wrote:\n> >At 13:18 21/01/01 +0100, Alexander Jerusalem wrote:\n> >>Hi all,\n> >>\n> >>I'm experiencing some strange behaviour with postgresql 7.0.3 on Red Hat\n> >>Linux 7. I'm sending lots of insert statements to the postgresql server\n> >>from another machine via JDBC. During that process postgresql continues to\n> >>take up more and more memory and seemingly never returns it to the\n> >>system. Oddly if I watch the postmaster and its sub processes in ktop, I\n> >>can't see which process takes up this memory. ktop shows that the\n> >>postgresql related processes have a constant memory usage but the overall\n> >>memory usage always increases as long as I continue to send insert statements.\n> >>\n> >>When the database connection is closed, no memory is reclaimed, the\n> >>overall memory usage stays the same. And when I close down all postgresql\n> >>processes including postmaster, it's the same.\n> >>I'm rather new to Linux and postgresql so I'm not sure if I should call\n> >>this a memory leak :-)\n> >>Has anybody experienced a similar thing?\n> >\n> >I'm not sure myself. You can rule out JDBC (or Java) here as you say you\n> >are connecting from another machine.\n> >\n> >When your JDBC app closes, does it call the connection's close() method?\n> >Do any messages like \"Unexpected EOF from client\" appear on the server side?\n> >\n> >The only other thing that comes to mind is possibly something weird is\n> >happening with IPC. After you closed down postgres, does ipcclean free up\n> >any memory?\n> >\n> >I'm cc'ing the hackers list and the new jdbc list.\n> >\n> >Peter\n> \n> Thanks for your answer!\n> \n> Yes I'm calling Connection.close(). I don't get any error messages but\n> maybe I just don't see them because postgresql is started automatically at\n> run level 3. I'm not sure where the output goes. (pg_log contains only\n> garbage or maybe it's a binary file) I tried ipcclean right now and it\n> doesn't free the memory but it gives me some messages that I cannot interpret:\n> \n> Shared memory 0 ... skipped. Process still exists (pid ).\n> Shared memory 1 ... skipped. Process still exists (pid ).\n> Shared memory 2 ... skipped. Process still exists (pid ).\n> Shared memory 3 ... skipped. Process still exists (pid ).\n> Semaphore 0 ... resource(s) deleted\n> Semaphore 1 ... resource(s) deleted\n> \n> Oddly, when I try to run ipcclean a second time, it says: ipcclean: You\n> still have a postmaster running. Which is not the case as ps -e proves.\n> \n> Alexander Jerusalem\n> [email protected]\n> vknn\n", "msg_date": "Mon, 22 Jan 2001 16:48:20 -0800", "msg_from": "Barry Lind <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] postgres memory management" }, { "msg_contents": "Hi Alexander,\n\nI've noticed that the PG 7.03 ipcclean script uses \"ps x | grep -s\n'postmaster'\" to determine if a postmaster daemon is still running,\nwhich at least for Mandrake Linux 7.2 doesn't work as expected. With\nthis version of linux, the ps & grep combination will find itself and\nthen ipcclean will complain about an existing postmaster.\n\nI found the solution to this being to edit the ipcclean script and\nchange the \"ps x | grep -s 'postmaster'\" part to \"ps -e | grep -s\n'postmaster'\". This then works correctly with Mandrake 7.2.\n\nRegards and best wishes,\n\nJustin Clift\n\n<snip>\n> \n> Oddly, when I try to run ipcclean a second time, it says: ipcclean: You\n> still have a postmaster running. Which is not the case as ps -e proves.\n> \n> Alexander Jerusalem\n> [email protected]\n> vknn\n", "msg_date": "Tue, 23 Jan 2001 13:03:58 +1100", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres memory management" }, { "msg_contents": "Justin Clift writes:\n > I found the solution to this being to edit the ipcclean script and\n > change the \"ps x | grep -s 'postmaster'\" part to \"ps -e | grep -s\n > 'postmaster'\". This then works correctly with Mandrake 7.2.\n\nA standard way of finding a process by name without the grep itself\nappearing is to use something like \"grep '[p]ostmaster'\".\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWesternGeco -./\\.- by myself and does not represent\[email protected] -./\\.- opinion of Schlumberger, Baker\nhttp://www.crosswinds.net/~petef -./\\.- Hughes or their divisions.\n", "msg_date": "Tue, 23 Jan 2001 10:59:28 +0000", "msg_from": "Pete Forman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres memory management" }, { "msg_contents": "Hi Clift,\n\nyou are right, I have the same problem on RedHat. 
After I inserted -e it \nworks so far. But there's something else that seems strange to me. I'm not \nquite sure if I'm reading this right, since I understand only half of what \nhappens in this script. After the comment that says \"Don't do anything if \nprocess still running...\", there is the following sequence of lines:\n\nps hj$ipcs_pid >/dev/null 2>&1\nif [ $? -eq 0 ]; then\n echo \"skipped....\"\n\nAs I understand it the if statement tests the exit status of the previous ps \nstatement. The strange thing is that the variable $ipcs_pid is never used \nanywhere before this line, so I think it's always null (or whatever this \nscripting language defaults to). There are three other variables ipcs_id, \nipcs_cpid and ipcs_lpid but no ipcs_pid. If I'm right here, it seems that \nthis script does effectively nothing in terms of shared memory.\n\nPlease tell me if I'm on a completely wrong track :-)\n\nAlexander Jerusalem\[email protected]\nvknn\n\n\n\nAt 03:03 23.01.01, Justin Clift wrote:\n>Hi Alexander,\n>\n>I've noticed that the PG 7.03 ipcclean script uses \"ps x | grep -s\n>'postmaster'\" to determine if a postmaster daemon is still running,\n>which at least for Mandrake Linux 7.2 doesn't work as expected. With\n>this version of linux, the ps & grep combination will find itself and\n>then ipcclean will complain about an existing postmaster.\n>\n>I found the solution to this being to edit the ipcclean script and\n>change the \"ps x | grep -s 'postmaster'\" part to \"ps -e | grep -s\n>'postmaster'\". This then works correctly with Mandrake 7.2.\n>\n>Regards and best wishes,\n>\n>Justin Clift\n>\n><snip>\n> >\n> > Oddly, when I try to run ipcclean a second time, it says: ipcclean: You\n> > still have a postmaster running. Which is not the case as ps -e proves.\n> >\n> > Alexander Jerusalem\n> > [email protected]\n> > vknn\n\n", "msg_date": "Tue, 23 Jan 2001 12:19:50 +0100", "msg_from": "Alexander Jerusalem <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres memory management" }, { "msg_contents": "Alexander Jerusalem <[email protected]> writes:\n> you are right, I have the same problem on RedHat. After I inserted -e it \n> works so far. But there's something else that seems strange to me. I'm not \n> quite sure if I'm reading this right, since I understand only half of what \n> happens in this script. After the comment that says \"Don't do anything if \n> process still running...\", there is the following sequence of lines:\n\n> ps hj$ipcs_pid >/dev/null 2>&1\n> if [ $? -eq 0 ]; then\n> echo \"skipped....\"\n\n> As I understand it the if statement tests the exit status of the previous ps \n> statement. The strange thing is that the variable $ipcs_pid is never used \n> anywhere before this line, so I think it's always null (or whatever this \n> scripting language defaults to). There are three other variables ipcs_id, \n> ipcs_cpid and ipcs_lpid but no ipcs_pid. If I'm right here, it seems that \n> this script does effectively nothing in terms of shared memory.\n\nI think you are right --- the Linux portion of this script is broken.\nAside from the bogus variable, the awk call at the top of the loop is\nwrong (printf has three arguments and only two percents). Given those\ntwo typos, there are probably more.\n\nFeel free to submit a patch to make it actually work ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Jan 2001 11:47:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: postgres memory management " } ]
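Pulling together the two suggestions from the 'enum' subthread above, both emulations look like this in SQL (table and column names are invented for illustration):

-- 1. Zachary's CHECK constraint: the value list is baked into the schema.
CREATE TABLE widget (
    color varchar(5) CHECK (color IN ('red', 'green', 'blue'))
);

-- 2. Dan's lookup table: the value list is data, so adding an entry is an
--    INSERT rather than a schema change.
CREATE TABLE color (name varchar(5) PRIMARY KEY);
INSERT INTO color VALUES ('red');
INSERT INTO color VALUES ('green');
INSERT INTO color VALUES ('blue');
CREATE TABLE gadget (
    color varchar(5) REFERENCES color (name)
);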
[ { "msg_contents": "\nFinally found a log analyzer that could parse/handle the logs for\nhttp://www.postgresql.org ... the following is the results. It doesn't\ninclude the mirrors, which I'd be curious as to how much they get on top\nof this, but 1,548,354 Requests in last 7 days ...\n\n\thttp://www.postgresql.org/stats/reportmagic\n\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Sun, 21 Jan 2001 14:52:45 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Tangent ... For those that like stats ..." } ]
[ { "msg_contents": "\nI would like to do a interface change in pgcrypto. (Good\ntiming, I know :)) At the moment the digest() function returns\nhexadecimal coded hash, but I want it to return pure binary. I\nhave also included functions encode() and decode() which support \n'base64' and 'hex' encodings, so if anyone needs digest() in hex\nhe can do encode(digest(...), 'hex').\n\nMain reason for it is \"to do one thing and do it well\" :)\n\nAnother reason is if someone needs really lot of digesting, in\nthe end he wants to store the binary not the hexadecimal result.\nIt is really silly to convert it to hex then back to binary\nagain. As I said if someone needs hex he can get it.\n\nWell, and the real reason that I am doing encrypt()/decrypt()\nfunctions and _they_ return binary. For testing I like to see\nit in hex occasionally, but it is really wrong to let them\nreturn hex. Only now it caught my eye that hex-coding in\ndigest() is wrong. When doing digest() I thought about 'common\ncase' but hacking with psql is probably _not_ the common case :)\n\n-- \nmarko\n\n\ndiff -urNX /home/marko/misc/diff-exclude contrib/pgcrypto.orig/Makefile contrib/pgcrypto/Makefile\n--- contrib/pgcrypto.orig/Makefile\tTue Oct 31 15:11:28 2000\n+++ contrib/pgcrypto/Makefile\tSun Jan 21 00:14:54 2001\n@@ -34,7 +34,7 @@\n endif\n \n NAME\t:= pgcrypto\n-SRCS\t+= pgcrypto.c\n+SRCS\t+= pgcrypto.c encode.c\n OBJS\t:= $(SRCS:.c=.o)\n SO_MAJOR_VERSION = 0\n SO_MINOR_VERSION = 1\ndiff -urNX /home/marko/misc/diff-exclude contrib/pgcrypto.orig/README.pgcrypto contrib/pgcrypto/README.pgcrypto\n--- contrib/pgcrypto.orig/README.pgcrypto\tTue Oct 31 15:11:28 2000\n+++ contrib/pgcrypto/README.pgcrypto\tSun Jan 21 00:21:29 2001\n@@ -1,14 +1,21 @@\n \n DESCRIPTION\n \n- Here is a implementation of crypto hashes for PostgreSQL.\n- It exports 2 functions to SQL level:\n+ Here are various cryptographic and otherwise useful\n+ functions for PostgreSQL.\n+\n+ encode(data, type)\n+ encodes binary data into ASCII-only representation.\n+\tTypes supported are 'hex' and 'base64'.\n+\n+ decode(data, type)\n+ \tdecodes the data processed by encode()\n \n digest(data::text, hash_name::text)\n-\twhich returns hexadecimal coded hash over data by\n+\twhich returns cryptographic checksum over data by\n \tspecified algorithm. eg\n \n-\t> select digest('blah', 'sha1');\n+\t> select encode(digest('blah', 'sha1'), 'hex');\n \t5bf1fd927dfb8679496a2e6cf00cbe50c1c87145\n \n digest_exists(hash_name::text)::bool\ndiff -urNX /home/marko/misc/diff-exclude contrib/pgcrypto.orig/encode.c contrib/pgcrypto/encode.c\n--- contrib/pgcrypto.orig/encode.c\tThu Jan 1 03:00:00 1970\n+++ contrib/pgcrypto/encode.c\tSun Jan 21 23:48:55 2001\n@@ -0,0 +1,345 @@\n+/*\n+ * encode.c\n+ *\t\tVarious data encoding/decoding things.\n+ * \n+ * Copyright (c) 2001 Marko Kreen\n+ * All rights reserved.\n+ *\n+ * Redistribution and use in source and binary forms, with or without\n+ * modification, are permitted provided that the following conditions\n+ * are met:\n+ * 1. Redistributions of source code must retain the above copyright\n+ * notice, this list of conditions and the following disclaimer.\n+ * 2. 
Redistributions in binary form must reproduce the above copyright\n+ * notice, this list of conditions and the following disclaimer in the\n+ * documentation and/or other materials provided with the distribution.\n+ *\n+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND\n+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n+ * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE\n+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n+ * SUCH DAMAGE.\n+ *\n+ * $Id$\n+ */\n+\n+#include <postgres.h>\n+#include <fmgr.h>\n+\n+#include \"encode.h\"\n+\n+/*\n+ * NAMEDATALEN is used for hash names\n+ */\n+#if NAMEDATALEN < 16\n+#error \"NAMEDATALEN < 16: too small\"\n+#endif\n+\n+static pg_coding *\n+find_coding(pg_coding *hbuf, text *name, int silent);\n+static pg_coding *\n+pg_find_coding(pg_coding *res, char *name);\n+\n+\n+/* SQL function: encode(bytea, text) returns text */\n+PG_FUNCTION_INFO_V1(encode);\n+\n+Datum\n+encode(PG_FUNCTION_ARGS)\n+{\n+\ttext *arg;\n+\ttext *name;\n+\tuint len, rlen, rlen0;\n+\tpg_coding *c, cbuf;\n+\ttext *res;\n+\t\n+\tif (PG_ARGISNULL(0) || PG_ARGISNULL(1))\n+\t\tPG_RETURN_NULL();\n+\t\n+\tname = PG_GETARG_TEXT_P(1);\t\n+\tc = find_coding(&cbuf, name, 0); /* will give error if fails */\n+\n+\targ = PG_GETARG_TEXT_P(0);\n+\tlen = VARSIZE(arg) - VARHDRSZ;\n+\t\n+\trlen0 = c->encode_len(len);\n+\t\n+\tres = (text *)palloc(rlen0 + VARHDRSZ);\n+\t\n+\trlen = c->encode(VARDATA(arg), len, VARDATA(res));\n+\tVARATT_SIZEP(res) = rlen + VARHDRSZ;\n+\n+\tif (rlen > rlen0)\n+\t\telog(FATAL, \"pg_encode: overflow, encode estimate too small\");\n+\t\n+\tPG_FREE_IF_COPY(arg, 0);\n+\tPG_FREE_IF_COPY(name, 0);\n+\t\n+\tPG_RETURN_TEXT_P(res);\n+}\n+\n+/* SQL function: decode(text, text) returns bytea */\n+PG_FUNCTION_INFO_V1(decode);\n+\n+Datum\n+decode(PG_FUNCTION_ARGS)\n+{\n+\ttext *arg;\n+\ttext *name;\n+\tuint len, rlen, rlen0;\n+\tpg_coding *c, cbuf;\n+\ttext *res;\n+\t\n+\tif (PG_ARGISNULL(0) || PG_ARGISNULL(1))\n+\t\tPG_RETURN_NULL();\n+\t\n+\tname = PG_GETARG_TEXT_P(1);\t\n+\tc = find_coding(&cbuf, name, 0); /* will give error if fails */\n+\n+\targ = PG_GETARG_TEXT_P(0);\n+\tlen = VARSIZE(arg) - VARHDRSZ;\n+\t\n+\trlen0 = c->decode_len(len);\n+\t\n+\tres = (text *)palloc(rlen0 + VARHDRSZ);\n+\t\n+\trlen = c->decode(VARDATA(arg), len, VARDATA(res));\n+\tVARATT_SIZEP(res) = rlen + VARHDRSZ;\n+\n+\tif (rlen > rlen0)\n+\t\telog(FATAL, \"pg_decode: overflow, decode estimate too small\");\n+\t\n+\tPG_FREE_IF_COPY(arg, 0);\n+\tPG_FREE_IF_COPY(name, 0);\n+\t\n+\tPG_RETURN_TEXT_P(res);\n+}\n+\n+static pg_coding *\n+find_coding(pg_coding *dst, text *name, int silent)\n+{\n+\tpg_coding *p;\n+\tchar buf[NAMEDATALEN];\n+\tuint len;\n+\t\n+\tlen = VARSIZE(name) - VARHDRSZ;\n+\tif (len >= NAMEDATALEN) {\n+\t\tif (silent)\n+\t\t\treturn NULL;\n+\t\telog(ERROR, \"Encoding type does not exist (name too long)\");\n+\t}\n+\t\t\n+\tmemcpy(buf, VARDATA(name), len);\n+\tbuf[len] = 0;\n+\t\n+\tp = pg_find_coding(dst, buf);\n+\n+\tif (p == NULL && 
!silent)\n+\t\telog(ERROR, \"Encoding type does not exist: '%s'\", buf);\n+\treturn p;\n+}\n+\n+static char *hextbl = \"0123456789abcdef\";\n+\n+uint\n+hex_encode(uint8 *src, uint len, uint8 *dst)\n+{\n+\tuint8 *end = src + len;\n+\twhile (src < end) {\n+\t\t*dst++ = hextbl[(*src >> 4) & 0xF];\n+\t\t*dst++ = hextbl[*src & 0xF];\n+\t\tsrc++;\n+\t}\n+\treturn len*2;\n+}\n+\n+/* probably should use lookup table */\n+static uint8\n+get_hex(char c)\n+{\n+\tuint8 res = 0;\n+\t\n+\tif (c >= '0' && c <= '9')\n+\t\tres = c - '0';\n+\telse if (c >= 'a' && c <= 'f')\n+\t\tres = c - 'a' + 10;\n+\telse if (c >= 'A' && c <= 'F')\n+\t\tres = c - 'A' + 10;\n+\telse\n+\t\telog(ERROR, \"Bad hex code: '%c'\", c);\n+\t\n+\treturn res;\n+}\n+\n+uint\n+hex_decode(uint8 *src, uint len, uint8 *dst)\n+{\n+\tuint8 *s, *srcend, v1, v2, *p = dst;\n+\t\n+\tsrcend = src + len;\n+\ts = src; p = dst;\n+\twhile (s < srcend) {\n+\t\tif (*s == ' ' || *s == '\\n' || *s == '\\t' || *s == '\\r') {\n+\t\t\ts++;\n+\t\t\tcontinue;\n+\t\t}\n+\t\tv1 = get_hex(*s++) << 4;\n+\t\tif (s >= srcend)\n+\t\t\telog(ERROR, \"hex_decode: invalid data\");\n+\t\tv2 = get_hex(*s++);\n+\t\t*p++ = v1 | v2;\n+\t}\n+\t\n+\treturn p - dst;\n+}\n+\n+\n+static unsigned char _base64[] =\n+\t\"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/\";\n+\n+uint\n+b64_encode(uint8 *src, uint len, uint8 *dst)\n+{\n+\tuint8 *s, *p, *end = src + len, *lend = dst + 76;\n+\tint pos = 2;\n+\tunsigned long buf = 0;\n+\n+\ts = src; p = dst;\n+\t\n+\twhile (s < end) {\n+\t\tbuf |= *s << (pos << 3);\n+\t\tpos--;\n+\t\ts++;\n+\t\t\n+\t\t/* write it out */\n+\t\tif (pos < 0) {\n+\t\t\t*p++ = _base64[(buf >> 18) & 0x3f];\n+\t\t\t*p++ = _base64[(buf >> 12) & 0x3f];\n+\t\t\t*p++ = _base64[(buf >> 6) & 0x3f];\n+\t\t\t*p++ = _base64[buf & 0x3f];\n+\n+\t\t\tpos = 2;\n+\t\t\tbuf = 0;\n+\t\t}\n+\t\tif (p >= lend) {\n+\t\t\t*p++ = '\\n';\n+\t\t\tlend = p + 76;\n+\t\t}\n+\t}\n+\tif (pos != 2) {\n+\t\t*p++ = _base64[(buf >> 18) & 0x3f];\n+\t\t*p++ = _base64[(buf >> 12) & 0x3f];\n+\t\t*p++ = (pos == 0) ? 
_base64[(buf >> 6) & 0x3f] : '=';\n+\t\t*p++ = '=';\n+\t}\n+\n+\treturn p - dst;\n+}\n+\n+/* probably should use lookup table */\n+uint\n+b64_decode(uint8 *src, uint len, uint8 *dst)\n+{\n+\tchar *srcend = src + len, *s = src;\n+\tuint8 *p = dst;\n+\tchar c;\n+\tuint b = 0;\n+\tunsigned long buf = 0;\n+\tint pos = 0, end = 0;\n+\t\n+\twhile (s < srcend) {\n+\t\tc = *s++;\n+\t\tif (c >= 'A' && c <= 'Z')\n+\t\t\tb = c - 'A';\n+\t\telse if (c >= 'a' && c <= 'z')\n+\t\t\tb = c - 'a' + 26;\n+\t\telse if (c >= '0' && c <= '9')\n+\t\t\tb = c - '0' + 52;\n+\t\telse if (c == '+')\n+\t\t\tb = 62;\n+\t\telse if (c == '/')\n+\t\t\tb = 63;\n+\t\telse if (c == '=') {\n+\t\t\t/* end sequence */\n+\t\t\tif (!end) {\n+\t\t\t\tif (pos == 2) end = 1;\n+\t\t\t\telse if (pos == 3) end = 2;\n+\t\t\t\telse\n+\t\t\t\t\telog(ERROR, \"base64: unexpected '='\");\n+\t\t\t}\n+\t\t\tb = 0;\n+\t\t} else if (c == ' ' || c == '\\t' || c == '\\n' || c == '\\r')\n+\t\t\tcontinue;\n+\t\telse\n+\t\t\telog(ERROR, \"base64: Invalid symbol\");\n+\n+\t\t/* add it to buffer */\n+\t\tbuf = (buf << 6) + b;\n+\t\tpos++;\n+\t\tif (pos == 4) {\n+\t\t\t*p++ = (buf >> 16) & 255;\n+\t\t\tif (end == 0 || end > 1)\n+\t\t\t\t*p++ = (buf >> 8) & 255;\n+\t\t\tif (end == 0 || end > 2)\n+\t\t\t\t*p++ = buf & 255;\n+\t\t\tbuf = 0;\n+\t\t\tpos = 0;\n+\t\t}\n+\t}\n+\n+\tif (pos != 0)\n+\t\telog(ERROR, \"base64: invalid end sequence\");\n+\n+\treturn p - dst;\n+}\n+\n+\n+uint\n+hex_enc_len(uint srclen)\n+{\n+\treturn srclen << 1;\n+}\n+\n+uint\n+hex_dec_len(uint srclen)\n+{\n+\treturn srclen >> 1;\n+}\n+\n+uint\n+b64_enc_len(uint srclen)\n+{\n+\treturn srclen + (srclen / 3) + (srclen / (76 / 2));\n+}\n+\n+uint\n+b64_dec_len(uint srclen)\n+{\n+\treturn (srclen * 3) >> 2;\n+}\n+\n+static pg_coding\n+encoding_list [] = {\n+\t{ \"hex\", hex_enc_len, hex_dec_len, hex_encode, hex_decode},\n+\t{ \"base64\", b64_enc_len, b64_dec_len, b64_encode, b64_decode},\n+\t{ NULL, NULL, NULL, NULL, NULL}\n+};\n+\n+\n+static pg_coding *\n+pg_find_coding(pg_coding *res, char *name)\n+{\n+\tpg_coding *p;\n+\tfor (p = encoding_list; p->name; p++) {\n+\t\tif (!strcasecmp(p->name, name))\n+\t\t\treturn p;\n+\t}\n+\treturn NULL;\n+}\n+\ndiff -urNX /home/marko/misc/diff-exclude contrib/pgcrypto.orig/encode.h contrib/pgcrypto/encode.h\n--- contrib/pgcrypto.orig/encode.h\tThu Jan 1 03:00:00 1970\n+++ contrib/pgcrypto/encode.h\tSun Jan 21 20:01:01 2001\n@@ -0,0 +1,60 @@\n+/*\n+ * pg_encode.h\n+ *\t\tencode.c\n+ * \n+ * Copyright (c) 2001 Marko Kreen\n+ * All rights reserved.\n+ *\n+ * Redistribution and use in source and binary forms, with or without\n+ * modification, are permitted provided that the following conditions\n+ * are met:\n+ * 1. Redistributions of source code must retain the above copyright\n+ * notice, this list of conditions and the following disclaimer.\n+ * 2. Redistributions in binary form must reproduce the above copyright\n+ * notice, this list of conditions and the following disclaimer in the\n+ * documentation and/or other materials provided with the distribution.\n+ *\n+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND\n+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n+ * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE\n+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n+ * SUCH DAMAGE.\n+ *\n+ * $Id$\n+ */\n+\n+#ifndef __PG_ENCODE_H\n+#define __PG_ENCODE_H\n+\n+/* exported functions */\n+Datum encode(PG_FUNCTION_ARGS);\n+Datum decode(PG_FUNCTION_ARGS);\n+\n+typedef struct _pg_coding pg_coding;\n+struct _pg_coding {\n+\tchar *name;\n+\tuint (*encode_len)(uint dlen);\n+\tuint (*decode_len)(uint dlen);\n+\tuint (*encode)(uint8 *data, uint dlen, uint8 *res);\n+\tuint (*decode)(uint8 *data, uint dlen, uint8 *res);\n+};\n+\n+/* They are for outside usage in C code, if needed */\n+uint hex_encode(uint8 *src, uint len, uint8 *dst);\n+uint hex_decode(uint8 *src, uint len, uint8 *dst);\n+uint b64_encode(uint8 *src, uint len, uint8 *dst);\n+uint b64_decode(uint8 *src, uint len, uint8 *dst);\n+\n+uint hex_enc_len(uint srclen);\n+uint hex_dec_len(uint srclen);\n+uint b64_enc_len(uint srclen);\n+uint b64_dec_len(uint srclen);\n+\n+#endif /* __PG_ENCODE_H */\n+\nBinary files contrib/pgcrypto.orig/libpgcrypto.so.0 and contrib/pgcrypto/libpgcrypto.so.0 differ\ndiff -urNX /home/marko/misc/diff-exclude contrib/pgcrypto.orig/pgcrypto.c contrib/pgcrypto/pgcrypto.c\n--- contrib/pgcrypto.orig/pgcrypto.c\tWed Jan 10 08:23:22 2001\n+++ contrib/pgcrypto/pgcrypto.c\tSun Jan 21 19:59:38 2001\n@@ -35,11 +35,6 @@\n #include \"pgcrypto.h\"\n \n /*\n- * maximum length of digest for internal buffers\n- */\n-#define MAX_DIGEST_LENGTH\t128\n-\n-/*\n * NAMEDATALEN is used for hash names\n */\n #if NAMEDATALEN < 16\n@@ -52,8 +47,6 @@\n Datum digest_exists(PG_FUNCTION_ARGS);\n \n /* private stuff */\n-static char *\n-to_hex(uint8 *src, uint len, char *dst);\n static pg_digest *\n find_digest(pg_digest *hbuf, text *name, int silent);\n \n@@ -66,7 +59,6 @@\n {\n \ttext *arg;\n \ttext *name;\n-\tuint8 *p, buf[MAX_DIGEST_LENGTH];\n \tuint len, hlen;\n \tpg_digest *h, _hbuf;\n \ttext *res;\n@@ -78,17 +70,14 @@\n \th = find_digest(&_hbuf, name, 0); /* will give error if fails */\n \n \thlen = h->length(h);\n-\tif (hlen > MAX_DIGEST_LENGTH)\n-\t\telog(ERROR, \"Hash length overflow: %d\", hlen);\n \t\n-\tres = (text *)palloc(hlen*2 + VARHDRSZ);\n-\tVARATT_SIZEP(res) = hlen*2 + VARHDRSZ;\n+\tres = (text *)palloc(hlen + VARHDRSZ);\n+\tVARATT_SIZEP(res) = hlen + VARHDRSZ;\n \t\n \targ = PG_GETARG_TEXT_P(0);\n \tlen = VARSIZE(arg) - VARHDRSZ;\n \t\n-\tp = h->digest(h, VARDATA(arg), len, buf);\n-\tto_hex(p, hlen, VARDATA(res));\n+\th->digest(h, VARDATA(arg), len, VARDATA(res));\n \t\n \tPG_FREE_IF_COPY(arg, 0);\n \tPG_FREE_IF_COPY(name, 0);\n@@ -141,19 +130,5 @@\n \tif (p == NULL && !silent)\n \t\telog(ERROR, \"Hash type does not exist: '%s'\", buf);\n \treturn p;\n-}\n-\n-static unsigned char *hextbl = \"0123456789abcdef\";\n-\n-/* dumps binary to hex... 
Note that it does not null-terminate */\n-static char *\n-to_hex(uint8 *buf, uint len, char *dst)\n-{\n-\tuint i;\n-\tfor (i = 0; i < len; i++) {\n-\t\tdst[i*2] = hextbl[(buf[i] >> 4) & 0xF];\n-\t\tdst[i*2 + 1] = hextbl[buf[i] & 0xF];\n-\t}\n-\treturn dst;\n }\n \ndiff -urNX /home/marko/misc/diff-exclude contrib/pgcrypto.orig/pgcrypto.sql.in contrib/pgcrypto/pgcrypto.sql.in\n--- contrib/pgcrypto.orig/pgcrypto.sql.in\tMon Nov 20 22:36:56 2000\n+++ contrib/pgcrypto/pgcrypto.sql.in\tSun Jan 21 21:27:48 2001\n@@ -1,6 +1,9 @@\n \n -- drop function digest(text, text);\n -- drop function digest_exists(text);\n+-- drop function encode(text, text);\n+-- drop function decode(text, text);\n+\n \n CREATE FUNCTION digest(text, text) RETURNS text\n AS '@MODULE_FILENAME@',\n@@ -9,4 +12,12 @@\n CREATE FUNCTION digest_exists(text) RETURNS bool\n AS '@MODULE_FILENAME@',\n 'digest_exists' LANGUAGE 'C';\n+\n+CREATE FUNCTION encode(text, text) RETURNS text\n+ AS '@MODULE_FILENAME@',\n+ 'encode' LANGUAGE 'C';\n+\n+CREATE FUNCTION decode(text, text) RETURNS text\n+ AS '@MODULE_FILENAME@',\n+ 'decode' LANGUAGE 'C';\n", "msg_date": "Mon, 22 Jan 2001 00:28:22 +0200", "msg_from": "Marko Kreen <[email protected]>", "msg_from_op": true, "msg_subject": "update to contrib/pgcrypto" }, { "msg_contents": "\nThanks. Applied.\n\n> \n> I would like to do a interface change in pgcrypto. (Good\n> timing, I know :)) At the moment the digest() function returns\n> hexadecimal coded hash, but I want it to return pure binary. I\n> have also included functions encode() and decode() which support \n> 'base64' and 'hex' encodings, so if anyone needs digest() in hex\n> he can do encode(digest(...), 'hex').\n> \n> Main reason for it is \"to do one thing and do it well\" :)\n> \n> Another reason is if someone needs really lot of digesting, in\n> the end he wants to store the binary not the hexadecimal result.\n> It is really silly to convert it to hex then back to binary\n> again. As I said if someone needs hex he can get it.\n> \n> Well, and the real reason that I am doing encrypt()/decrypt()\n> functions and _they_ return binary. For testing I like to see\n> it in hex occasionally, but it is really wrong to let them\n> return hex. Only now it caught my eye that hex-coding in\n> digest() is wrong. 
When doing digest() I thought about the 'common\n> case', but hacking with psql is probably _not_ the common case :)\n> \n> -- \n> marko\n> \n> <snip>\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Jan 2001 22:46:14 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] update to contrib/pgcrypto" } ]
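For readers trying the reworked pgcrypto interface, a quick psql illustration of how the functions compose. The sha1 value is the one from the README diff above; the base64 strings are the standard encoding of 'hello', but the exact psql output formatting shown in the comments is an assumption:

    select encode('hello', 'base64');
    -- aGVsbG8=
    select decode('aGVsbG8=', 'base64');
    -- hello
    select encode(digest('blah', 'sha1'), 'hex');
    -- 5bf1fd927dfb8679496a2e6cf00cbe50c1c87145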
[ { "msg_contents": "Seems this one got lost along the way somewhere. At least, I didn't get it\nback... Trying a resend.\n\n//Magnus\n\n> -----Original Message-----\n> From: \tMagnus Hagander \n> Sent:\tden 20 januari 2001 14:29\n> To:\t'[email protected]'\n> Subject:\tPostgresql on win32\n> \n> Hello!\n> \n> Here is a patch to make the current snapshot compile on Win32 \n> (native, libpq and psql) again. Changes are:\n> 1) psql requires the includes of \"io.h\" and \"fcntl.h\" in \n> command.c in order to make a call to open() work (io.h for \n> _open(), fcntl.h for the O_xxx)\n> 2) PG_VERSION is no longer defined in version.h[.in], but in \n> configure.in. Since we don't do configure on native win32, we \n> need to put it in config.h.win32 :-(\n> 3) Added define of SYSCONFDIR to config.h.win32 - libpq won't \n> compile without it. This functionality is *NOT* tested - it's \n> just defined as \"\" for now. May work, may not.\n> 4) DEF_PGPORT renamed to DEF_PGPORT_STR\n> \n> I have done the \"basic tests\" on it - it connects to a \n> database, and I can run queries. Haven't tested any of the \n> fancier functions (yet).\n> \n> However, I stepped on a much bigger problem when fixing psql \n> to work. It no longer works when linked against the .DLL \n> version of libpq (which the Makefile does for it). I have \n> left it linked against this version anyway, pending the \n> comments I get on this mail :-)\n> The problem is that there are strings being allocated from \n> libpq.dll using PQExpBuffers (for example, initPQExpBuffer() \n> on line 92 of input.c). These are being allocated using the \n> malloc function used by libpq.dll. This function *may* be \n> different from the malloc function used by psql.exe - only \n> the resulting pointer must be valid. And with the default \n> linking methods, it *WILL* be different. Later, psql.exe \n> tries to free() this string, at which point it crashes \n> because the free() function can't find the allocated block \n> (it's on the allocated blocks list used by the runtime lib of \n> libpq.dll).\n> \n> Shouldn't the right thing to do be to have psql call \n> termPQExpBuffer() on the data instead? As it is now, \n> gets_fromFile() will just return the pointer received from \n> the PQExpBuffer.data (this may well be present at several \n> places - this is the one I was bitten by so far). Isn't that \n> kind of \"accessing the internals of the PQExpBuffer \n> structure\" wrong? Instead, perhaps it shuold make a copy of \n> the string, adn then termPQExpBuffer() it? In that case, the \n> string will have been allocated from within the same library \n> as the free() is called.\n> \n> I can get it to work just fine by doing this - changing from \n> (around line 100 of input.c):\n> if (buffer.data[buffer.len - 1] == '\\n')\n> {\n> buffer.data[buffer.len - 1] = '\\0';\n> return buffer.data;\n> }\n> to\n> \t\tif (buffer.data[buffer.len - 1] == '\\n')\n> \t\t{\n> \t\t\tchar *tmps;\n> \t\t\tbuffer.data[buffer.len - 1] = '\\0';\n> \t\t\ttmps = strdup(buffer.data);\n> \t\t\ttermPQExpBuffer(&buffer);\n> \t\t\treturn tmps;\n> \t\t}\n> \n> and the same a bit further down in the same function.\n> \n> But, as I said above, this may be at more places in the code? \n> Perhaps someone more familiar to it could comment on that?\n> \n> \n> What do you think shuld be done about this? 
Personally, I go \n> by the \"If you allocate a piece of memory using an interface, \n> use the same interface to free it\", but the question is how \n> to make it work :-)\n> \n> \n> Also, AFAIK this only affects psql.exe, so the changes made \n> to the libpq files by this patch are required no matter how \n> the other issue is handled.\n> \n> Regards,\n> Magnus\n> \n> \n> <<pgsql-win32.patch>>", "msg_date": "Mon, 22 Jan 2001 09:45:24 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "FW: Postgresql on win32" }, { "msg_contents": "Magnus Hagander <[email protected]> writes:\n>> 2) PG_VERSION is no longer defined in version.h[.in], but in \n>> configure.in. Since we don't do configure on native win32, we \n>> need to put it in config.h.win32 :-(\n\nPutting\n\n> ! #define PG_VERSION 7.1\n> ! #define PG_VERSION_STR \"7.1 (win32)\"\n\ninto config.h.win32 is most certainly *not* an acceptable solution.\nWe are not going to start maintaining this file's idea of the version\nby hand, now that we've centralized the version info otherwise.\nFind another way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Jan 2001 10:33:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Postgresql on win32 " }, { "msg_contents": "Tom Lane writes:\n\n> Putting\n>\n> > ! #define PG_VERSION 7.1\n> > ! #define PG_VERSION_STR \"7.1 (win32)\"\n>\n> into config.h.win32 is most certainly *not* an acceptable solution.\n> We are not going to start maintaining this file's idea of the version\n> by hand, now that we've centralized the version info otherwise.\n\nWe're losing this battle anyway. Look into src/interfaces/libpq/libpq.rc.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 22 Jan 2001 18:43:09 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Postgresql on win32 " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> We're losing this battle anyway. Look into src/interfaces/libpq/libpq.rc.\n\nUgh. Magnus, is there any reasonable way to generate that thing on the\nfly on Win32?\n\nOne could imagine fixing this in configure --- have configure generate\nlibpq.rc from libpq.rc.in, and then treat libpq.rc as part of the\ndistribution the same as we do for gram.c and so forth. The version\ninfo could get substituted into config.h.win32 the same way, I suppose.\n\nThis is pretty ugly, but you could look at it as being no different\nfrom providing gram.c for those without bison: ship those dependent\nfiles that can't be remade without tools that may not exist on the\ntarget platform.\n\nYou'll probably say \"that's more trouble than it's worth\", but version\ninfo in a file that's only used by a marginally-supported platform is\njust the kind of thing that humans will forget to update.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Jan 2001 13:01:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Postgresql on win32 " }, { "msg_contents": "> Tom Lane writes:\n> \n> > Putting\n> >\n> > > ! #define PG_VERSION 7.1\n> > > ! #define PG_VERSION_STR \"7.1 (win32)\"\n> >\n> > into config.h.win32 is most certainly *not* an acceptable solution.\n> > We are not going to start maintaining this file's idea of the version\n> > by hand, now that we've centralized the version info otherwise.\n> \n> We're losing this battle anyway. 
Look into src/interfaces/libpq/libpq.rc.\n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n> \n> \n\nYes, I update this file for every release.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Jan 2001 17:03:59 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Postgresql on win32" }, { "msg_contents": "At 13:01 22/01/01 -0500, Tom Lane wrote:\n>Peter Eisentraut <[email protected]> writes:\n> > We're losing this battle anyway. Look into src/interfaces/libpq/libpq.rc.\n>\n>Ugh. Magnus, is there any reasonable way to generate that thing on the\n>fly on Win32?\n\n\nWhile on this subject, does anyone have a version of cygipc that works with \nthe current version of CygWin? What's on postgresql.org doesn't work, and \nthe link on the site was broken.\n\nIf I can get it working under NT4, then I can get on with JDBC testing \nwhile the Linux box is down (installed but no networking).\n\nPeter\n\n\n\n", "msg_date": "Mon, 22 Jan 2001 22:19:00 +0000", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Postgresql on win32 " }, { "msg_contents": "On Mon, Jan 22, 2001 at 10:19:00PM +0000, Peter Mount wrote:\n> While on this subject, does anyone have a version of cygipc that works with \n> the current version of CygWin? What's on postgresql.org doesn't work, and \n> the link on the site was broken.\n\nThe latest cygipc distribution (source and binary) seems to be at\n<http://www.neuro.gatech.edu/users/cwilson/cygutils/V1.1/cygipc/index.html>.\nVersion 1.08 works fine for me, with the HEAD version of PostgreSQL\nand DLL version 1.1.7 of cygwin.\n\nI've been messing with ipc-daemon so that it can run as an NT service\nall by itself, with no funky wrappers like 'invoker' or 'srvany'.\nIt's working pretty well, and even knows how to install and remove\nitself as a service. I'd be happy to make the patch available if\nanyone is interested in shaking it down. I plan to submit it to the\nguy who's currently maintaining cygipc.\n\nAnd then I'd like to get postmaster itself also running as an NT\nservice, able to shut down cleanly when the service is stopped.\n\n-- \nFred Yankowski [email protected] tel: +1.630.879.1312\nPrincipal Consultant www.OntoSys.com fax: +1.630.879.1370\nOntoSys, Inc 38W242 Deerpath Rd, Batavia, IL 60510, USA\n", "msg_date": "Mon, 22 Jan 2001 21:30:45 -0600", "msg_from": "Fred Yankowski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Postgresql on win32" }, { "msg_contents": "Quoting Fred Yankowski <[email protected]>:\n\n> On Mon, Jan 22, 2001 at 10:19:00PM +0000, Peter Mount wrote:\n> > While on this subject, does anyone have a version of cygipc that works\n> with \n> > the current version of CygWin? What's on postgresql.org doesn't work,\n> and \n> > the link on the site was broken.\n> \n> The latest cygipc distribution (source and binary) seems to be at\n> <http://www.neuro.gatech.edu/users/cwilson/cygutils/V1.1/cygipc/index.html>.\n> Version 1.08 works fine for me, with the HEAD version of PostgreSQL\n> and DLL version 1.1.7 of cygwin.\n\nThanks, I'll see if it's going to work for me.
Hopefully it's going to help \ngetting JDBC debugged while my Linux box is down, and also since I only have NT at \nwork now.\n\n> I've been messing with ipc-daemon so that it can run as an NT service\n> all by itself, with no funky wrappers like 'invoker' or 'srvany'.\n> It's working pretty well, and even knows how to install and remove\n> itself as a service. I'd be happy to make the patch available if\n> anyone is interested in shaking it down. I plan to submit it to the\n> guy who's currently maintaining cygipc.\n\nI've used srvany before with cygwin. Nice little gotchas like remembering to \nmount with the -s flag, etc. ;-)\n\n> And then I'd like to get postmaster itself also running as an NT\n> service, able to shut down cleanly when the service is stopped.\n\nNow that would be useful.\n\nPeter\n\n-- \nPeter Mount [email protected]\nPostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/\nRetepPDF PDF library for Java: http://www.retep.org.uk/pdf/\n", "msg_date": "Tue, 23 Jan 2001 03:52:50 -0500 (EST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Postgresql on win32" } ]
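The allocate-in-the-DLL/free-in-the-EXE crash Magnus describes earlier in this thread has a second possible remedy besides relinking everything with the same runtime (compare the /MD compiler-switch suggestion elsewhere in this archive): have the DLL export its own deallocation wrapper, so memory is always released by the same C runtime that allocated it. A hypothetical C sketch, not actual libpq API; mylib_free and mylib_gets_fromFile are illustrative names only:

    /* in the DLL: exported wrapper around the DLL's own free() */
    __declspec(dllexport) void
    mylib_free(void *ptr)
    {
        free(ptr);              /* runs against the DLL's heap */
    }

    /* in the EXE: release DLL-allocated strings through the wrapper */
    char *line = mylib_gets_fromFile(fp);   /* allocated inside the DLL */
    /* ... use the string ... */
    mylib_free(line);                       /* instead of a bare free() */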
[ { "msg_contents": "\n> The other category is run-time behaviour, which is the present case of\n> testing whether mktime() behaves a certain way when given certain\n> arguments. Another item that has been thrown around over the years is\n> whether the system supports Unix domain sockets (essentially a run-time\n> behaviour check of socket()). You do not need to check these things in\n> configure; you can just do it when the program runs and adjust\n> accordingly.\n\nI think in PostgreSQL there is also a third category: \"features\" that affect on disk\nstorage of data. Those can definitely not be changed at runtime, because\na change would corrupt that database. It is very hard to judge what \"feature\"\nreally has an on disk effect. It can e.g. be an application that selected a \nrow that was inserted before the feature existed, do some calculations \nand insert a new row with the feature now available. Thus I think available\nfeatures should be as immune to OS feature changes as possible, but let us\npostpone that discussion until after 7.1 release.\n\n> So I would currently suggest that you define\n> \n> #ifdef AIX || IRIX\n> # define NO_DST_BEFORE_1970\n> #endif\n\nThat would be OK for me. Where do we put that ? Imho a good place would be \nconfig.h.\n\nOne problem though is, that I don't have a patch, that really turns off DST before\n1970 (which would be considerably more work). I only have a patch that makes\nplatforms, that don't have a working mktime for values before 1970, behave like \nNO_DST_... Thus more wanting a #define NO_MKTIME_BEFORE_1970.\n\nAndreas\n", "msg_date": "Mon, 22 Jan 2001 11:07:07 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: AW: AW: Re: tinterval - operator problems on AI\n\tX" }, { "msg_contents": "Zeugswetter Andreas SB writes:\n\n> I think in PostgreSQL there is also a third category: \"features\" that affect on disk\n> storage of data.\n\nWe have those already, for example the locale collation order. They are\ndetermined when initdb is run and are then fixed for the database cluster.\n\n> > So I would currently suggest that you define\n> >\n> > #ifdef AIX || IRIX\n> > # define NO_DST_BEFORE_1970\n> > #endif\n>\n> That would be OK for me. Where do we put that ? Imho a good place would be\n> config.h.\n\nThat or maybe c.h. Don't care.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 22 Jan 2001 18:24:27 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: AW: Re: tinterval - operator problems on\n AI X" } ]
[ { "msg_contents": "rfb=# insert into person (id,surname) values (2274,'Unknown!');\nERROR: Relation 'subject' does not exist\n\nCorrect - where does subject come from?!\n\nrfb=# \\d person\n Table \"person\"\n Attribute | Type | Modifier \n-----------+---------------+----------\n id | bigint | \n surname | character(20) | \n firstname | character(30) | \n email | character(30) | \n phone | character(16) | \n rfbdate | date | \nIndex: name_idx\n\n(in fact no 'suject' in any table anywhere)\n\nAm I going spare?\n\nCheers,\n\nPatrick\n\n PostgreSQL 7.1beta3 on i386-unknown-netbsdelf1.5Q, compiled by GCC egcs-1.1.2\n\n", "msg_date": "Mon, 22 Jan 2001 10:29:33 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": true, "msg_subject": "Strange.." } ]
[ { "msg_contents": "\n> > I made a reproduceable example of things going wrong with a \"en_US\"\n> > locale which is the widely-used (single-byte) ISO-8859-1 Latin 1 charset.\n> \n> en_US uses multi-pass collation rules. It's those collation rules, not\n> the charset per se, that causes the problem.\n\nJust to understand things correctly. Is the Like optimization disabled\nfor all non-ASCII char sets, or (imho correctly) for non charset ordered \ncollations (LC_COLLATE) ?\n\nThus can you enable index optimization by simply setting\nLC_COLLATE to C if your LANG is not set to C ?\n\nAndreas\n", "msg_date": "Mon, 22 Jan 2001 11:50:20 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: like and optimization " }, { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> Just to understand things correctly. Is the Like optimization disabled\n> for all non-ASCII char sets, or (imho correctly) for non charset ordered \n> collations (LC_COLLATE) ?\n\nCurrently it's disabled whenever LC_COLLATE is neither C nor POSIX.\nWe can add other names to the \"OK\" list as we verify that they are safe\n(see locale_is_like_safe() in src/backend/utils/adt/selfuncs.c).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Jan 2001 10:18:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: like and optimization " }, { "msg_contents": "Tom Lane writes:\n\n> Zeugswetter Andreas SB <[email protected]> writes:\n> > Just to understand things correctly. Is the Like optimization disabled\n> > for all non-ASCII char sets, or (imho correctly) for non charset ordered\n> > collations (LC_COLLATE) ?\n>\n> Currently it's disabled whenever LC_COLLATE is neither C nor POSIX.\n> We can add other names to the \"OK\" list as we verify that they are safe\n> (see locale_is_like_safe() in src/backend/utils/adt/selfuncs.c).\n\nI have pretty severe doubts that any locale for a language that uses the\nLatin, Cyrillic, or Greek alphabets (i.e., those that are conceptually\nsimilar to English) is like-optimization safe (for the optimization\nalgorithm in its current state), at least across all platforms.\nSomewhere a vendor is going to adhere to some ISO standard and implement\nthe same multi-pass \"letters first\" rules that we observed in en_US.\n\nThere should be some extensive \"stress test\" that a locale should have to\npass before being labelled safe.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 22 Jan 2001 17:09:36 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: like and optimization " }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Tom Lane writes:\n> \n> > Zeugswetter Andreas SB <[email protected]> writes:\n> > > Just to understand things correctly. 
Is the LIKE optimization disabled\n> > > for all non-ASCII char sets, or (imho correctly) for non-charset-ordered\n> > > collations (LC_COLLATE)?\n> >\n> > Currently it's disabled whenever LC_COLLATE is neither C nor POSIX.\n> > We can add other names to the \"OK\" list as we verify that they are safe\n> > (see locale_is_like_safe() in src/backend/utils/adt/selfuncs.c).\n> \n> I have pretty severe doubts that any locale for a language that uses the\n> Latin, Cyrillic, or Greek alphabets (i.e., those that are conceptually\n> similar to English) is like-optimization safe (for the optimization\n> algorithm in its current state), at least across all platforms.\n> Somewhere a vendor is going to adhere to some ISO standard and implement\n> the same multi-pass \"letters first\" rules that we observed in en_US.\n\nIs there any possibility to use, in a portable way, only our own locale \ndefinition files, without reimplementing all the sorts, uppercases, etc.?\n\nIf we had control over the locale definition contents we would be much\nbetter off when optimizing as well.\n\nAnd IIRC SQL9x prescribes support for multiple locales (or at least\nmultiple collating sequences) within one database simultaneously.\n\n> There should be some extensive \"stress test\" that a locale should have to\n> pass before being labelled safe.\n\nSure.\n\n-------------\nHannu\n", "msg_date": "Tue, 23 Jan 2001 00:01:38 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: like and optimization" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Is there any possibility to use, in a portable way, only our own locale \n> definition files, without reimplementing all the sorts, uppercases, etc.?\n\nAFAIK there is not --- the standard C library APIs do not specify how to\nrepresent this information. Thus, we'd have to provide our own complete\nimplementation of locale-specific comparisons, etc., etc. Not to mention\nacquiring all the raw data for the locale definitions.\n\nI think we'd be nuts to try to develop and maintain our own\nimplementation of that. What we should probably think about is somehow\npiggybacking on someone else's i18n library work, with just enough\ntweaking of the source so that it can cope efficiently with N different\nlocales at runtime, instead of only one.\n\nThe situation is not too much different for timezones, BTW. Might make\nsense to deal with both of those problems in the same way.\n\nAre there any BSD-license locale and/or timezone libraries that we might\nassimilate in this way? We could use an LGPL'd library if there is no\nother alternative, but I'd just as soon not open up the license issue.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Jan 2001 17:46:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: like and optimization " }, { "msg_contents": "On Mon, Jan 22, 2001 at 05:46:09PM -0500, Tom Lane wrote:\n> Hannu Krosing <[email protected]> writes:\n> > Is there any possibility to use, in a portable way, only our own locale \n> > definition files, without reimplementing all the sorts, uppercases, etc.?\n> \n> The situation is not too much different for timezones, BTW. Might make\n> sense to deal with both of those problems in the same way.\n\nThe timezone situation is much better, in that there is a separate\norganization which maintains a timezone database and code to operate\non it.
It wouldn't be necessary to include the package with PG, \nbecause it can be obtained from a standard place. You would only need \nscripts to download, build, and integrate it.\n\n> Are there any BSD-license locale and/or timezone libraries that we might\n> assimilate in this way? We could use an LGPL'd library if there is no\n> other alternative, but I'd just as soon not open up the license issue.\n\nPosix systems include a set of commands for dumping locales in a standard \nformat, and building from them. Instead of shipping locales and code to \noperate on them, one might include a script to run these tools (where \nthey exist) to dump an existing locale, edit it a bit, and build a more \nPG-friendly locale.\n\nNathan Myers\[email protected]\n", "msg_date": "Mon, 22 Jan 2001 15:09:03 -0800", "msg_from": "[email protected] (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: AW: like and optimization" }, { "msg_contents": "> And IIRC SQL9x prescribes support for multiple locales (or at least\n> multiple collating sequences) within one database simultaneously.\n\nSounds like the SQL92/99 COLLATE facility is the way we should go, IMHO.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 23 Jan 2001 10:54:53 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: like and optimization" }, { "msg_contents": "On Mon, Jan 22, 2001 at 05:46:09PM -0500, Tom Lane wrote:\n... \n> Are there any BSD-license locale and/or timezone libraries that we might\n> assimilate in this way? We could use an LGPL'd library if there is no\n> other alternative, but I'd just as soon not open up the license issue.\n\nThe \"Citrus Project\" is coming up with i18n for BSD.\n\nFYI\n\nPatrick\n", "msg_date": "Tue, 23 Jan 2001 20:24:26 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: like and optimization" }, { "msg_contents": "On Mon, Jan 22, 2001 at 03:09:03PM -0800, Nathan Myers wrote:\n... \n> Posix systems include a set of commands for dumping locales in a standard \n> format, and building from them. Instead of shipping locales and code to \n> operate on them, one might include a script to run these tools (where \n> they exist) to dump an existing locale, edit it a bit, and build a more \n> PG-friendly locale.\n\nIs there really a standard format for locales?
[ { "msg_contents": "\n> > The problem is that there are strings being allocated from \n> > libpq.dll using PQExpBuffers (for example, initPQExpBuffer() \n> > on line 92 of input.c). These are being allocated using the \n> > malloc function used by libpq.dll. This function *may* be \n> > different from the malloc function used by psql.exe - only \n> > the resulting pointer must be valid. And with the default \n> > linking methods, it *WILL* be different. Later, psql.exe \n> > tries to free() this string, at which point it crashes \n> > because the free() function can't find the allocated block \n> > (it's on the allocated blocks list used by the runtime lib of \n> > libpq.dll).\n\nIt is possible to make the above work (at least on MSVC).\nThe switch is /MD that needs to be used for both the psql.exe and \nlibpq.dll. This forces the use of Multithreaded DLL runtime libraries.\nThe problem at hand is, that it uses different runtime libs for dll and exe\nper default, if both use the same runtime libs it is possible to malloc in \nthe dll and free in the exe.\n\nAndreas\n", "msg_date": "Mon, 22 Jan 2001 12:31:10 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: FW: Postgresql on win32" }, { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> It is possible to make the above work (at least on MSVC).\n> The switch is /MD that needs to be used for both the psql.exe and \n> libpq.dll. This forces the use of Multithreaded DLL runtime libraries.\n\nI like this answer. We should be trying to make the Win32 environment\nmore like Unix, rather than catering to its gratuitous differences.\n\nThe malloc/free problem that Magnus discovered is certainly not the only\none, anyway. Notify processing comes to mind immediately, and who's to\nsay what else there may be, today or in the future? Let's solve the\nproblem at a systemic level, not try to patch our way to working code.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Jan 2001 10:39:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: FW: Postgresql on win32 " } ]
[ { "msg_contents": "\nImho the behavior of timestamp code is somewhat awkward \nfor dates that do not fit into a time_t (>2038 or < 1901).\n\nTimes in the time_t range are displayed in local time including DST.\nTimes outside that range are displayed in UTC. I would have expected\nUTC plus local offset not taking DST into account. \n\nWhen setting datestyle to ISO I get a timezone offset, even when SQL99\nsays timezone is only available if the column was defined as \n\"timestamp with timezone\". Dealing with the stupid mktime issue\nI think the standard intended to not involve any issues with \ntimezones or DST for the timestamp datatype. \n\nThe timestamp implementation is not SQL99 conformant.\n\nAndreas\n", "msg_date": "Mon, 22 Jan 2001 12:37:16 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "timestamp (mis)behaviors" }, { "msg_contents": "> Imho the behavior of timestamp code is somewhat awkward\n> for dates that do not fit into a time_t (>2038 or < 1901).\n> Times in the time_t range are displayed in local time including DST.\n> Times outside that range are displayed in UTC. I would have expected\n> UTC plus local offset not taking DST into account.\n\nFor times before 1901, any assumption about time zone behavior being\nsimilar to the years around 2000 are likely to be completely wrong .\nTime zones are a 1800's phenomemon (in the US, they came about when the\nrailroads were built across the continent) but the conventions did not\nsettle down until the 1900's. Not sure what the conventions are for\nAustria, but I'd be interested in hearing about your regional history\nfor this.\n\nIn the US, the current conventions seem to be from post-WWII, and there\nare a few years in the 1970's where DST was declared to be year-round.\nSo which years should we use as a model for appropriate 1800's behavior?\nimho there is no reasonable assumption one can make about this without\nhaving specific info.\n\nNot sure why we should assume that the time zone behavior of 2050 is the\nsame as the years near 2000, especially since some years even recently\nhave variations in time zones.\n\n> When setting datestyle to ISO I get a timezone offset, even when SQL99\n> says timezone is only available if the column was defined as\n> \"timestamp with timezone\". Dealing with the stupid mktime issue\n> I think the standard intended to not involve any issues with\n> timezones or DST for the timestamp datatype.\n> The timestamp implementation is not SQL99 conformant.\n\nRight. Eventually, we can separate out \"timestamp\" and \"timestamp with\ntime zone\", but for now they are the same. Not sure *when* we can make\nthem unique, since there is a backward-compatibility issue, but it would\nbe nice to do this before the 8.0 release.\n\n - Thomas\n", "msg_date": "Mon, 22 Jan 2001 15:43:23 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: timestamp (mis)behaviors" } ]
[ { "msg_contents": "Hello !\n\nI'm using postgres quiet succsessfully on RedHat 6.2\n\nI tried to install PostgresQL on the distribution Gentoo\n(aviable for free from www.gentoo.org) because I\nwas curious how PostgresQL would perform on a\n2.4.0 based linux system with ReiserFs as file system.\n(I tested also with 2.4.1-pre8 kernel with the same results)\n\nAll compiled nicely (I used the tarball source version \n7.1beta3, i didn't gave any parameters to configure), \nbut i could not run initdb like in my privious \ninstallations under RedHat 6.2.\n\nThis is what happend:\n\nbash-2.04# rm -rf /var/lib/pgsql/*\nbash-2.04# su postgres\nbash-2.04$ /usr/local/pgsql/bin/initdb -D /var/lib/pgsql\nThis database system will be initialized with username \"postgres\".\nThis user will own all the data files and must also own the server process.\n\nFixing permissions on existing directory /var/lib/pgsql\nCreating directory /var/lib/pgsql/base\nCreating directory /var/lib/pgsql/global\nCreating directory /var/lib/pgsql/pg_xlog\nCreating template1 database in /var/lib/pgsql/base/1\nDEBUG: starting up\nDEBUG: database system was shut down at 2001-01-22 12:36:25\nDEBUG: CheckPoint record at (0, 8)\nDEBUG: Redo record at (0, 8); Undo record at (0, 8); Shutdown TRUE\nDEBUG: NextTransactionId: 514; NextOid: 16384\nDEBUG: database system is in production state\nERROR: Error: unknown type 'ame'.\n\nCreating global relations in /var/lib/pgsql/global\nDEBUG: starting up\nDEBUG: database system was interrupted at 2001-01-22 12:36:25\nDEBUG: CheckPoint record at (0, 8)\nDEBUG: Redo record at (0, 8); Undo record at (0, 8); Shutdown TRUE\nDEBUG: NextTransactionId: 514; NextOid: 16384\nDEBUG: database system was not properly shut down; automatic recovery in \nprogress...\nFATAL 2: cannot open pg_log: No such file or directory\n\ninitdb failed.\nRemoving temp file /tmp/initdb.8499.\nbash-2.04$\n", "msg_date": "Mon, 22 Jan 2001 12:47:46 +0100", "msg_from": "Robert Schrem <[email protected]>", "msg_from_op": true, "msg_subject": "Strange initdb error with kernel 2.4: unknown type 'ame'." } ]
[ { "msg_contents": "By comparing backups, I found\n\nCREATE CONSTRAINT TRIGGER \"<unnamed>\" AFTER INSERT OR UPDATE ON \"person\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE \"RI_FKey_check_ins\" ('<unnamed>', 'person', 'subject', 'UNSPECIFIED', 'subjectid', 'id');\n\nDon't know where that came from, but probably operator error.. There isn't\nan easy way of scrubbing an unnamed trigger is there? (I dump/edit/reloaded)\n\nCheers,\n\nPatrick\n", "msg_date": "Mon, 22 Jan 2001 12:57:48 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": true, "msg_subject": "Strange.. solved" }, { "msg_contents": "\nActually, if you look in pg_trigger, <unnamed> is technically the\nconstraint name and it should have a system generated constraint name\nwhich you probably can use drop trigger on. It looks like part of\na FK constraint so I'm not sure how you got just 1/2 of it since\ndropping subject should have dropped it (unless you did a partial\ndump and restore).\n\n\nOn Mon, 22 Jan 2001, Patrick Welche wrote:\n\n> By comparing backups, I found\n> \n> CREATE CONSTRAINT TRIGGER \"<unnamed>\" AFTER INSERT OR UPDATE ON \"person\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE \"RI_FKey_check_ins\" ('<unnamed>', 'person', 'subject', 'UNSPECIFIED', 'subjectid', 'id');\n> \n> Don't know where that came from, but probably operator error.. There isn't\n> an easy way of scrubbing an unnamed trigger is there? (I dump/edit/reloaded)\n> \n> Cheers,\n> \n> Patrick\n> \n\n", "msg_date": "Wed, 24 Jan 2001 11:01:14 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange.. solved" }, { "msg_contents": "There seems to be a bug in the 'REFERENCES' statement. You can create\nforeign key references to fields that do not exist, that then cause odd (ie.\nhard to resolve) error messages.\n\nThe operator error below (that should not be possible) is in creating a\nreference to a column that does not exist users(id).\n\nMy example:\n\ntest=# select version();\n version\n-----------------------------------------------------------------\n PostgreSQL 7.0.3 on i386-unknown-freebsdelf4.2, compiled by cc\n(1 row)\n\ntest=# create table users(userid int4);\nCREATE\ntest=# create table newsletter(user_id int4 references users(id));\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\ncheck(s)\nCREATE\ntest=# insert into newsletter values (4);\nERROR: constraint <unnamed>: table users does not have an attribute id\ntest=#\n\nWhen we got this error message we spent an hour trying to figure out what\nthe heck the problem was! In the end we simply deleted the bad trigger by\noid and just recreated it using CREATE CONSTRAINT TRIGGER.\n\nI have not yet checked whether table foreign key constraints, or the CREATE\nCONSTRAINT TRIGGER functionality has the same bug.\n\nChris\n\n", "msg_date": "Thu, 25 Jan 2001 09:22:42 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Bad REFERENCES behaviour" }, { "msg_contents": "On Thu, 25 Jan 2001, Christopher Kings-Lynne wrote:\n\n> There seems to be a bug in the 'REFERENCES' statement. 
You can create\n> foreign key references to fields that do not exist, that then cause odd (ie.\n> hard to resolve) error messages.\n> \n> The operator error below (that should not be possible) is in creating a\n> reference to a column that does not exist users(id).\n> \n> My example:\n> \n> test=# select version();\n> version\n> -----------------------------------------------------------------\n> PostgreSQL 7.0.3 on i386-unknown-freebsdelf4.2, compiled by cc\n> (1 row)\n> \n> test=# create table users(userid int4);\n> CREATE\n> test=# create table newsletter(user_id int4 references users(id));\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\n> check(s)\n> CREATE\n> test=# insert into newsletter values (4);\n> ERROR: constraint <unnamed>: table users does not have an attribute id\n> test=#\n> \n> When we got this error message we spent an hour trying to figure out what\n> the heck the problem was! In the end we simply deleted the bad trigger by\n> oid and just recreated it using CREATE CONSTRAINT TRIGGER.\n> \n> I have not yet checked whether table foreign key constraints, or the CREATE\n> CONSTRAINT TRIGGER functionality has the same bug.\n\nThey all did. In 7.1 you should be safe from invalid column names in the\nactual constraint definitions but create constraint trigger doesn't check\n(because it has no real way of knowing what its parameters are supposed to\nmean).\n\n", "msg_date": "Wed, 24 Jan 2001 17:49:44 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad REFERENCES behaviour" } ]
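For anyone bitten by the same trap before upgrading, the workaround at the SQL level is simply to reference a column that really exists (and the referenced column needs a unique or primary key constraint anyway). A sketch based on the example above:

    CREATE TABLE users (userid int4 PRIMARY KEY);
    -- Reference the real column; per the reply above, 7.1 validates
    -- this at CREATE TABLE time instead of failing on the first INSERT:
    CREATE TABLE newsletter (user_id int4 REFERENCES users (userid));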
[ { "msg_contents": "\nIs anyone looking at doing this? Is this purely a MySQL-ism, or is it\nsomething that everyone else has except us?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n---------- Forwarded message ----------\nDate: Mon, 22 Jan 2001 09:03:58 -0600\nFrom: Dave Glowacki <[email protected]>\nTo: The Hermit Hacker <[email protected]>\nCc: Radovan Gibala <[email protected]>, [email protected]\nSubject: Re: MySQL and BerkleyDB\n\nThe Hermit Hacker wrote:\n> On Mon, 22 Jan 2001, Radovan Gibala wrote:\n> > Is there any possibility to get a port for MySQL with BerkleyDB support?\n> > I realy need the transaction support and I'd like to build MySQL from a\n> > port.\n>\n> why not just build PgSQL, and have transaction support *with* subselects\n> and everything else that mySQL doesn't have?\n\nI'd *love* to use PgSQL, but it doesn't support cross-DB joins (or at\nleast I couldn't figure out how to do it.) MySQL handles this, so\nI'm using MySQL and would also like to have transaction support...\n\n\n", "msg_date": "Mon, 22 Jan 2001 11:30:17 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MySQL and BerkleyDB (fwd)" }, { "msg_contents": "On Mon, Jan 22, 2001 at 11:30:17AM -0400, The Hermit Hacker wrote:\n> \n> Is anyone looking at doing this? Is this purely a MySQL-ism, or is it\n> something that everyone else has except us?\n\nAfaik either Informix or Sybase (both?) has it too. Dunno about\nanything else.\n\n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n> \n> ---------- Forwarded message ----------\n> Date: Mon, 22 Jan 2001 09:03:58 -0600\n> From: Dave Glowacki <[email protected]>\n> To: The Hermit Hacker <[email protected]>\n> Cc: Radovan Gibala <[email protected]>, [email protected]\n> Subject: Re: MySQL and BerkleyDB\n> \n> The Hermit Hacker wrote:\n> > On Mon, 22 Jan 2001, Radovan Gibala wrote:\n> > > Is there any possibility to get a port for MySQL with BerkleyDB support?\n> > > I realy need the transaction support and I'd like to build MySQL from a\n> > > port.\n> >\n> > why not just build PgSQL, and have transaction support *with* subselects\n> > and everything else that mySQL doesn't have?\n> \n> I'd *love* to use PgSQL, but it doesn't support cross-DB joins (or at\n> least I couldn't figure out how to do it.) MySQL handles this, so\n> I'm using MySQL and would also like to have transaction support...\n> \n> \n\n-- \nmarko\n\n", "msg_date": "Mon, 22 Jan 2001 17:52:33 +0200", "msg_from": "Marko Kreen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: MySQL and BerkleyDB (fwd)" }, { "msg_contents": "The Hermit Hacker writes:\n\n> Is anyone looking at doing this? Is this purely a MySQL-ism, or is it\n> something that everyone else has except us?\n\nIt's not required by SQL, that's for sure. I think in 7.2 we'll tackle\nschema support, which will accomplish the same thing. Many people\n(including myself) are of the opinion that not allowing cross-db access is\nin fact a feature.\n\n>\n> Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n>\n> ---------- Forwarded message ----------\n> Date: Mon, 22 Jan 2001 09:03:58 -0600\n> From: Dave Glowacki <[email protected]>\n> To: The Hermit Hacker <[email protected]>\n> Cc: Radovan Gibala <[email protected]>, [email protected]\n> Subject: Re: MySQL and BerkleyDB\n>\n> The Hermit Hacker wrote:\n> > On Mon, 22 Jan 2001, Radovan Gibala wrote:\n> > > Is there any possibility to get a port for MySQL with BerkleyDB support?\n> > > I realy need the transaction support and I'd like to build MySQL from a\n> > > port.\n> >\n> > why not just build PgSQL, and have transaction support *with* subselects\n> > and everything else that mySQL doesn't have?\n>\n> I'd *love* to use PgSQL, but it doesn't support cross-DB joins (or at\n> least I couldn't figure out how to do it.) MySQL handles this, so\n> I'm using MySQL and would also like to have transaction support...\n>\n>\n>\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 22 Jan 2001 17:12:43 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: MySQL and BerkleyDB (fwd)" }, { "msg_contents": "> On Mon, Jan 22, 2001 at 11:30:17AM -0400, The Hermit Hacker wrote:\n>> Is anyone looking at doing this? Is this purely a MySQL-ism, or is it\n>> something that everyone else has except us?\n\nThink \"schema\". I don't foresee supporting joins across multiple\ndatabases any time soon (unless someone wants to resurrect the old\nMariposa code), but once we have schemas you can put things into\ndifferent schemas of one database and get most of the user-level\nbenefit of multiple databases, while still being able to join to\nthings that are in other schemas.\n\nI haven't looked hard at what it will take to do schemas, but it's\nhigh on my priority list for 7.2. Ross Reedstrom has already done\nsome preliminary work ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Jan 2001 11:27:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: MySQL and BerkleyDB (fwd) " }, { "msg_contents": "> It's not required by SQL, that's for sure. I think in 7.2 we'll tackle\n> schema support, which will accomplish the same thing. Many people\n> (including myself) are of the opinion that not allowing cross-db access is\n> in fact a feature.\n\nI am hoping that when we get to query tree redesign we will have the\nhooks to do distributed databases etc. Then \"cross-db access\" will come\nnearly for free, which the user can choose to use or not.\n\n - Thomas\n", "msg_date": "Mon, 22 Jan 2001 17:00:49 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL and BerkleyDB (fwd)" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> The Hermit Hacker writes:\n> \n> > Is anyone looking at doing this? Is this purely a MySQL-ism, or is it\n> > something that everyone else has except us?\n\n\n\n> It's not required by SQL, that's for sure. I think in 7.2 we'll tackle\n> schema support, which will accomplish the same thing. 
Many people\n> (including myself) are of the opinion that not allowing cross-db access is\n> in fact a feature.\n\nI am of the opposite opinion: cross-DB joining is the only reasonable\nway to cope with the unfortunate, disgraceful, unreasonable, but quite\ninescapable real-life fact that all data do not live on the same server\nin any but the smallest sites ...\n\nI recently made a plea on this list (\"A post-7.1 wishlist\") in this\ndirection, and got an answer (Peter Eisentraut?) that was more or less\nalong the lines of \"over our dead bodies!\" ... Sigh ...\n\nHowever, I *think* that it could be done by another tool, such as\nEasysoft's (Nick Gorham's, I think) SQL Engine, which allows for joins\nbetween any ODBC-reachable tools. This tool is unreasonably expensive\nfor private use ($800 + $200/year mandatory maintenance). A PostgreSQL\nalternative would be, IMSAO, a huge benefit, even huger if able to\ncross-join with ODBC data sources ...\n\nM$ Access has had this since version 1, and that's a hell of a handy\nfeature for a lot of cases involving management of multiple data sources\n...\n\n> > > why not just build PgSQL, and have transaction support *with* subselects\n> > > and everything else that mySQL doesn't have?\n> >\n> > I'd *love* to use PgSQL, but it doesn't support cross-DB joins (or at\n> > least I couldn't figure out how to do it.) MySQL handles this, so\n> > I'm using MySQL and would also like to have transaction support...\n\nI have to say that my daily work involves this kind of problem, with\ndata sources ranging from SAS datasets under MVS/XA to Excel files to\nOracle databases to you name it ... That's the kind of problem I would\n*love* to have PostgreSQL cope with, and *not* M$ Access ...\n\n[ Back to lurking mode ... ]\n\n\t\t\t\t\tE. Charpentier\n\n--\nEmmanuel Charpentier\n", "msg_date": "Mon, 22 Jan 2001 21:18:34 +0100", "msg_from": "Emmanuel Charpentier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: MySQL and BerkleyDB (fwd)" }, { "msg_contents": "Emmanuel Charpentier wrote:\n> \n> Peter Eisentraut wrote:\n>\n\nRe: cross-database joins\n> \n> However, I *think* that it could be done by another tool, such as\n> Easysoft's (Nick Gorham's, I think) SQL Engine, which allows for joins\n> between any ODBC-reachable tools. This tool is unreasonably expensive\n> for private use ($800 + $200/year mandatory maintenance). A PostgreSQL\n> alternative would be, IMSAO, a huge benefit, even huger if able to\n> cross-join with ODBC data sources ...\n> \n> M$ Access has had this since version 1, and that's a hell of a handy\n> feature for a lot of cases involving management of multiple data sources\n\nYou should probably use some front-end tools for most of it (I'd\nrecommend \nZope - http://www.zope.org/ )\n\nOr you could try to make something up starting from Gadfly (an\nall-python \nSQL database engine) that is included with zope and also available\nseparately.\n\n...\n\n> I have to say that my daily work involves this kind of problem, with\n> data sources ranging from SAS datasets under MVS/XA to Excel files to\n> Oracle databases to you name it ... 
That's the kind of problem I would\n*love* to have PostgreSQL cope with, and *not* M$ Access ...\n\nOTOH, much of it could be done if postgres functions could return\ndatasets, which is planned for the not-too-distant future.\n\n--------------------\nHannu\n", "msg_date": "Wed, 24 Jan 2001 11:30:22 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: MySQL and BerkleyDB (fwd)" } ]
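To make the schema-based answer in this thread concrete: once schema support lands, the two-packages-one-database case could look roughly like this. The schema and table names below are invented, and the qualified-name syntax follows the SQL9x style discussed above:

    -- Each package keeps its objects in its own schema:
    SELECT s.email, p.permission
    FROM maillist.subscribers AS s
    JOIN webauth.permissions AS p ON p.email = s.email;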
[ { "msg_contents": "> Zeugswetter Andreas SB <[email protected]> writes:\n> > It is possible to make the above work (at least on MSVC).\n> > The switch is /MD that needs to be used for both the psql.exe and \n> > libpq.dll. This forces the use of Multithreaded DLL runtime \n> libraries.\n> \n> I like this answer. We should be trying to make the Win32 environment\n> more like Unix, rather than catering to its gratuitous differences.\n\nDefinitly, me too. I'll try this as soon as I get time on it, and update my\npatch with it. Unless somebody beats me to it, that is.\n\n//Magnus\n", "msg_date": "Mon, 22 Jan 2001 16:44:01 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "RE: AW: FW: Postgresql on win32 " }, { "msg_contents": "\nMagnus, is this done?\n\n> > Zeugswetter Andreas SB <[email protected]> writes:\n> > > It is possible to make the above work (at least on MSVC).\n> > > The switch is /MD that needs to be used for both the psql.exe and \n> > > libpq.dll. This forces the use of Multithreaded DLL runtime \n> > libraries.\n> > \n> > I like this answer. We should be trying to make the Win32 environment\n> > more like Unix, rather than catering to its gratuitous differences.\n> \n> Definitly, me too. I'll try this as soon as I get time on it, and update my\n> patch with it. Unless somebody beats me to it, that is.\n> \n> //Magnus\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 22:33:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "7.1 question" } ]
[ { "msg_contents": "> Magnus Hagander <[email protected]> writes:\n> >> 2) PG_VERSION is no longer defined in version.h[.in], but in \n> >> configure.in. Since we don't do configure on native win32, we \n> >> need to put it in config.h.win32 :-(\n> \n> Putting\n> \n> > ! #define PG_VERSION 7.1\n> > ! #define PG_VERSION_STR \"7.1 (win32)\"\n> \n> into config.h.win32 is most certainly *not* an acceptable solution.\n> We are not going to start maintaining this file's idea of the version\n> by hand, now that we've centralized the version info otherwise.\n> Find another way.\n\nI definitly did not like that either.. Hmm.\n\nQuestion: Can I assume that configure.in stays the way it is now? In the way\nthat if I could just extract the line \"VERSION='xyz'\" from it, that will\ncontinue to work? It might be possible to do something around that. If not,\nthen does anybody else have an idea of how to do it since autoconf won't\nwork?\n\nI thought it *was* already centralised in 7.0, but back then it was in a\nheader file (version.h), which made it cross-platform... I'm sure there is\nsome advantage of having it in configure.in, but it does make it a *lot*\nharder to support it on Win32, with it's very limited scripting environment\n(by default).\n\n\n//Magnus\n", "msg_date": "Mon, 22 Jan 2001 16:48:14 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "RE: FW: Postgresql on win32 " }, { "msg_contents": "Magnus Hagander <[email protected]> writes:\n> Question: Can I assume that configure.in stays the way it is now? In the way\n> that if I could just extract the line \"VERSION='xyz'\" from it, that will\n> continue to work? It might be possible to do something around that.\n\nThat'd be OK with me. I don't suppose Win32 has \"sed\" though :-(\n\n> I thought it *was* already centralised in 7.0, but back then it was in a\n> header file (version.h), which made it cross-platform... I'm sure there is\n> some advantage of having it in configure.in, but it does make it a *lot*\n> harder to support it on Win32, with it's very limited scripting environment\n> (by default).\n\nActually, it might be easier to go back to keeping it in a file\nversion.h (NOT .in) which configure could read it out of. I never\nfigured out why Peter put it directly in configure.in in the first\nplace; that means it is actually hard-coded in two files (configure.in\nand configure) which is a recipe for trouble. Peter?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Jan 2001 11:01:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Postgresql on win32 " }, { "msg_contents": "Tom Lane writes:\n > That'd be OK with me. I don't suppose Win32 has \"sed\" though :-(\n\nCygwin does. We can worry about native support for PostgreSQL later ;-) \n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWesternGeco -./\\.- by myself and does not represent\[email protected] -./\\.- opinion of Schlumberger, Baker\nhttp://www.crosswinds.net/~petef -./\\.- Hughes or their divisions.\n", "msg_date": "Mon, 22 Jan 2001 16:19:36 +0000", "msg_from": "Pete Forman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Postgresql on win32 " }, { "msg_contents": "Pete Forman <[email protected]> writes:\n> Tom Lane writes:\n>>> That'd be OK with me. I don't suppose Win32 has \"sed\" though :-(\n\n> Cygwin does. We can worry about native support for PostgreSQL later ;-) \n\nThis is the native support we are talking about. 
The Cygwin port uses\nconfigure, no?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Jan 2001 11:30:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Postgresql on win32 " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> Actually, it might be easier to go back to keeping it in a file\n>> version.h (NOT .in) which configure could read it out of. I never\n>> figured out why Peter put it directly in configure.in in the first\n>> place; that means it is actually hard-coded in two files (configure.in\n>> and configure) which is a recipe for trouble. Peter?\n\n> The original reason was to make it available to non-C programs.\n\nSure, but we can do that by having configure read the data from\nversion.h and insert it into wherever else it needs to be. This\nputs the sed hackery into configure, which depends on sed anyway,\nand not into the native-Win32 compilation process where there's\nno easy way to do it.\n\n> I think you can just define it empty since the only way it will be used\n> (in the subset of things Win32 builds) is for psql --version output.\n\nI don't much care for the idea of being unable to determine the version\nof a Win32 psql. Psql's backslash commands are sufficiently\nversion-specific that you can't really treat it as being the same\nacross all versions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Jan 2001 12:08:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Postgresql on win32 " }, { "msg_contents": "Tom Lane writes:\n\n> Actually, it might be easier to go back to keeping it in a file\n> version.h (NOT .in) which configure could read it out of. I never\n> figured out why Peter put it directly in configure.in in the first\n> place; that means it is actually hard-coded in two files (configure.in\n> and configure) which is a recipe for trouble. Peter?\n\nThe original reason was to make it available to non-C programs. The\nreason it was put in configure.in in particular was that the next Autoconf\nupgrade would require that anyway. (Well, nothing is required, but that's\nkind of the way things were supposed to work.)\n\nI think you can just define it empty since the only way it will be used\n(in the subset of things Win32 builds) is for psql --version output.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 22 Jan 2001 18:10:46 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Postgresql on win32 " } ]
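Whichever build-time mechanism ends up carrying PG_VERSION_STR, the compiled-in string also stays discoverable at runtime, as other threads in this archive show; a trivial check from psql:

    SELECT version();
    -- e.g.:  PostgreSQL 7.1beta3 on i386-..., compiled by ...
    -- (the exact wording is whatever version string the server was built with)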
[ { "msg_contents": "\n> Is anyone looking at doing this? Is this purely a MySQL-ism, or is it\n> something that everyone else has except us?\n\nWe should not only support access to all db's under one postmaster,\nbut also remote access to other postmaster's databases.\nAll biggie db's allow this in one way or another (synonyms, \nqualified object names) including 2-phase commit.\nIdeally this includes access to other db manufacturers, flat files, bdb ...\nMeaning, that this is a problem needing a generic approach.\n\nAndreas\n\n> > > Is there any possibility to get a port for MySQL with BerkleyDB support?\n> > > I realy need the transaction support and I'd like to build MySQL from a\n> > > port.\n> >\n> > why not just build PgSQL, and have transaction support *with* subselects\n> > and everything else that mySQL doesn't have?\n> \n> I'd *love* to use PgSQL, but it doesn't support cross-DB joins (or at\n> least I couldn't figure out how to do it.) MySQL handles this, so\n> I'm using MySQL and would also like to have transaction support...\n", "msg_date": "Mon, 22 Jan 2001 17:01:13 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Re: MySQL and BerkleyDB (fwd)" }, { "msg_contents": "\nsounds like something that should be handled at the application level\nthough ... at least the concept of 'access to other db manufacturers' ...\nno?\n\n\nOn Mon, 22 Jan 2001, Zeugswetter Andreas SB wrote:\n\n>\n> > Is anyone looking at doing this? Is this purely a MySQL-ism, or is it\n> > something that everyone else has except us?\n>\n> We should not only support access to all db's under one postmaster,\n> but also remote access to other postmaster's databases.\n> All biggie db's allow this in one way or another (synonyms,\n> qualified object names) including 2-phase commit.\n> Ideally this includes access to other db manufacturers, flat files, bdb ...\n> Meaning, that this is a problem needing a generic approach.\n>\n> Andreas\n>\n> > > > Is there any possibility to get a port for MySQL with BerkleyDB support?\n> > > > I realy need the transaction support and I'd like to build MySQL from a\n> > > > port.\n> > >\n> > > why not just build PgSQL, and have transaction support *with* subselects\n> > > and everything else that mySQL doesn't have?\n> >\n> > I'd *love* to use PgSQL, but it doesn't support cross-DB joins (or at\n> > least I couldn't figure out how to do it.) MySQL handles this, so\n> > I'm using MySQL and would also like to have transaction support...\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Mon, 22 Jan 2001 12:31:19 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Re: MySQL and BerkleyDB (fwd)" }, { "msg_contents": "On Mon, 22 Jan 2001, Zeugswetter Andreas SB wrote:\n\n> \n> > Is anyone looking at doing this? 
Is this purely a MySQL-ism, or is it\n> > something that everyone else has except us?\n> \n> We should not only support access to all db's under one postmaster,\n> but also remote access to other postmaster's databases.\n> All biggie db's allow this in one way or another (synonyms, \n> qualified object names) including 2-phase commit.\n> Ideally this includes access to other db manufacturers, flat files, bdb ...\n> Meaning, that this is a problem needing a generic approach.\n\nOf course, a generic, powerful approach would be great.\n\nHowever, a simple, limited approach would be a solution for (I\nsuspect) 97% of the cases, which is that one software package creates a\ndatabase to store mailing list names, and another creates a database to\nstore web permissions, and you want to write a query that encompasses\nboth, w/o semi-tedious COPY TO FILEs to temporarily move a table back and\nforth. And of course, a simple solution might be completed faster :-)\n\nHow could this be handled? \n\n* a syntax for db-table names, such as mydb.myfield or something like\nthat. (do we have any unused punctuation? :-) )\n\n* aliases, so that tblFoo in dbA can be called as ToFoo in dbB\n\n* other ways?\n\nThe second might be easier from a conversion point of view: the user wouldn't have\nto understand that this was a 'link', but it might prove complicated when\nthere are many links to keep track of, etc.\n\n\n-- \nJoel Burton <[email protected]>\nDirector of Information Systems, Support Center of Washington\n\n", "msg_date": "Mon, 22 Jan 2001 12:18:54 -0500 (EST)", "msg_from": "Joel Burton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Re: MySQL and BerkleyDB (fwd)" }, { "msg_contents": "On Mon, Jan 22, 2001 at 12:18:54PM -0500, Joel Burton wrote:\n> On Mon, 22 Jan 2001, Zeugswetter Andreas SB wrote:\n> \n> > \n> > > Is anyone looking at doing this? Is this purely a MySQL-ism, or is it\n> > > something that everyone else has except us?\n> > \n> > We should not only support access to all db's under one postmaster,\n> > but also remote access to other postmaster's databases.\n> > All biggie db's allow this in one way or another (synonyms, \n> > qualified object names) including 2-phase commit.\n> > Ideally this includes access to other db manufacturers, flat files, bdb ...\n> > Meaning, that this is a problem needing a generic approach.\n> \n> Of course, a generic, powerful approach would be great.\n> \n> However, a simple, limited approach would be a solution for (I\n> suspect) 97% of the cases, which is that one software package creates a\n> database to store mailing list names, and another creates a database to\n> store web permissions, and you want to write a query that encompasses\n> both, w/o semi-tedious COPY TO FILEs to temporarily move a table back and\n> forth. And of course, a simple solution might be completed faster :-)\n> \n> How could this be handled? \n> \n\nAnd this case can be handled within one database by having multiple\nschema, one for each package. It's not there yet, but it's a simpler\nsolution than the generic solution. The problem (as others have mentioned)\nis that we don't want to open the door to remote access until we have a\ntwo-phase transaction commit mechanism in place. Doing it any other way\nis not a 'partial solution', it's a corrupt database waiting to happen.\n\n\n> * a syntax for db-table names, such as mydb.myfield or something like\n> that. (do we have any unused punctuation? 
:-) )\n\nThis is the sort of syntax that SQL9* specify for cross schema access.\nSo far, it fits into the parser just fine.\n\n> * aliases, so that tblFoo in dbA can be called as ToFoo in dbB\n\nThis can be done with views, once schema are in place.\n\nRoss\n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.\n", "msg_date": "Mon, 22 Jan 2001 11:32:31 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: AW: Re: MySQL and BerkleyDB (fwd)" }, { "msg_contents": "On Mon, 22 Jan 2001, Ross J. Reedstrom wrote:\n\n> And this case can be handled within one database by having multiple\n> schema, one for each package. It's not there yet, but it's a simpler\n> solution than the generic solution. The problem (as others have mentioned)\n> is that we don't want to open the door to remote access until we have a\n> two-phase transaction commit mechanism in place. Doing it any other way\n> is not a 'partial solution', it's a corrupt database waiting to happen.\n\nWhat does '2-phase transaction commit mechanism' mean in this case?\n\n-- \nJoel Burton <[email protected]>\nDirector of Information Systems, Support Center of Washington\n\n", "msg_date": "Mon, 22 Jan 2001 12:41:38 -0500 (EST)", "msg_from": "Joel Burton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: AW: Re: MySQL and BerkleyDB (fwd)" }, { "msg_contents": "On Mon, Jan 22, 2001 at 12:41:38PM -0500, Joel Burton wrote:\n> On Mon, 22 Jan 2001, Ross J. Reedstrom wrote:\n> \n> > And this case can be handled within one database by having multiple\n> > schema, one for each package. It's not there yet, but it's a simpler\n> > solution than the generic solution. The problem (as others have mentioned)\n> > is that we don't want to open the door to remote access until we have a\n> > two-phase transaction commit mechanism in place. Doing it any other way\n> > is not a 'partial solution', it's a corrupt database waiting to happen.\n> \n> What does '2-phase transaction commit mechanism' mean in this case?\n\nSame thing it means elsewhere. Typing \"two phase commit\" into Google gets me\nthis url:\n\nhttp://webopedia.internet.com/Computer_Science/Transaction_Processing/two_phase_commit.html\n\nWhich says:\n\n A feature of transaction processing systems that enables databases\n to be returned to the pre-transaction state if some error condition\n occurs. A single transaction can update many different databases. The\n two-phase commit strategy is designed to ensure that either all the\n databases are updated or none of them, so that the databases remain\n synchronized.\n\n Database changes required by a transaction are initially stored\n temporarily by each database. The transaction monitor then\n issues a \"pre-commit\" command to each database which requires an\n acknowledgment. If the monitor receives the appropriate response from\n each database, the monitor issues the \"commit\" command, which causes\n all databases to simultaneously make the transaction changes permanent.\n\n\nThis 'pre-commit' 'really commit' two-step (get 'yer cowboy hats, right\nhere) is what's needed, and is currently missing from pgsql. \n\n\nRoss\n", "msg_date": "Mon, 22 Jan 2001 11:55:36 -0600", "msg_from": "\"Ross J. 
Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: AW: Re: MySQL and BerkleyDB (fwd)" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> sounds like something that should be handled at the application level\n> though ... at least the concept of 'access to other db manufacturers' ...\n> no?\n\nIf and when we will get functions that can return rowsets (IIRC Oracle's \nRETURN AND CONTINUE)the simplest case can be easily implemented by\nhaving \na user-defined method that just does the query for the whole table (or \nfor rows where \"field in (x,y,z)\"\n\nThen putting this in a view and then using it as a table should also \nbe quite simple (or at least possible ;).\n\nOnly after that should we start optimizing ...\n\n--------------\nHannu\n", "msg_date": "Tue, 23 Jan 2001 00:10:19 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Re: MySQL and BerkleyDB (fwd)" }, { "msg_contents": "> This 'pre-commit' 'really commit' two-step (get 'yer cowboy hats, right\n> here) is what's needed, and is currently missing from pgsql. \n\n Hello,\n\n I'm very interested in this topic since I am involved in a\ndistributed, several-PostgreSQLs-backed, open-source,\nbuzzword-compliant database replication middleware (still in the draft\nstage though --- this is not an announcement :-).\n I had thought that the pre-commit information could be stored in an\nauxiliary table by the middleware program ; we would then have\nto re-implement some sort of higher-level WAL (I thought of the list\nof the commands performed in the current transaction, with a sequence\nnumber for each of them that would guarantee correct ordering between\nconcurrent transactions in case of a REDO). But I fear I am missing\na number of important issues there ; so could you please comment on my\nidea ? \n * what should I try not to forget to record in the higher-level WAL\n if I want consistency ?\n * how could one collect consistent ordering information without\n impacting performance too much ? Will ordering suffice to guarantee\n correctness of the REDO ? (I mean, are there sources of\n nondeterminism in PostgreSQL such as resource exhaustion etc. that I\n should be aware of ?)\n * would it be easier or harder to help implement 2-phase commit\n inside PostgreSQL (but I am not quite a PostgreSQL hacker yet !)\n\n Many thanks in advance !\n\n-- \n<< Tout n'y est pas parfait, mais on y honore certainement les jardiniers >>\n\n\t\t\tDominique Quatravaux <[email protected]>\n", "msg_date": "Tue, 23 Jan 2001 20:45:54 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Re: AW: Re: MySQL and BerkleyDB (fwd)" }, { "msg_contents": " [ sorry to repost this, but I didn't receive my mail back... Anything\n wrong with the mailserver ? ]\n\n I am involved in a project of open-source, PostgreSQL-backed,\nbuzzword-compliant replication/high availability software that would\nact as an SQL � one-to-many � gateway (but still in the design phase\n--- this is *not* an announcement :-). Of course, the topic of 2-phase\ncommit is important to us ; we currently plan to record all write\ncommands issued during the transaction in an auxiliary table (some\nsort of higher-level WAL). First commit phase would then consist in\nclosing this record and ensuring it can be REDOne in the case of a\ncrash (UNDO would be done by just rolling back the current\ntransaction). 
But this is quite complicated and may require serializing\nall accesses (both read and write) to a given database so as\nto guarantee that REDO will yield the very same result.\n\n I understand it would certainly be better and more profitable for\nthe community if I could help implement 2-phase commit inside\nPostgreSQL. But I am not much of a PostgreSQL hacker yet. What do you\nthink?\n\n-- \n<< Not everything there is perfect, but the gardeners are certainly honored there >>\n\n\t\t\tDominique Quatravaux <[email protected]>\n", "msg_date": "Wed, 24 Jan 2001 13:00:25 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: 2-phase commit" } ]
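Since the thread ends on the question of implementing two-phase commit, a sketch of the protocol in hypothetical SQL may help. None of this syntax exists in PostgreSQL at this point; the statement names are modeled on the textbook protocol and are purely illustrative:

    -- Phase 1, driven by the coordinator on every participant:
    BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    PREPARE TRANSACTION 'txn_42';  -- promise: this can always be committed

    -- Phase 2, only after *all* participants acknowledged phase 1:
    COMMIT PREPARED 'txn_42';
    -- If any participant failed phase 1, the coordinator instead issues:
    ROLLBACK PREPARED 'txn_42';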
[ { "msg_contents": "Hi all!\n\nI'm using the PostgreSQL 7.03 and PHP 4.x to create a DB driven site, which\nis intended to work under RedHat.\n\nAnd I often get very strange errors like\n\"Warning: PostgreSQL query failed: pqReadData() -- read() failed:\nerrno=32 Broken pipe in /usr/local/home/httpd/htdocs/zyx/xyz.php on line xx\"\n\nWould somebody please write me what is the problem?\n\nP.S.\n In some docs I've found that there were an error reported and fixed in\nearly years (1994,...) in HTTPD when using POST methods for forms caused to\nalmost the same situation (SIGPIPE error).\n\nThanks in advance,\nDmitri.\n\n\n", "msg_date": "Mon, 22 Jan 2001 18:13:08 +0200", "msg_from": "\"Dmitri E. Gurevich\" <[email protected]>", "msg_from_op": true, "msg_subject": "Strange error in PHP/Postgre on RadHat?" } ]
[ { "msg_contents": "\n> > Is anyone looking at doing this? Is this purely a MySQL-ism, or is it\n> > something that everyone else has except us?\n> \n> It's not required by SQL, that's for sure. I think in 7.2 we'll tackle\n> schema support, which will accomplish the same thing.\n\nIt does not (e.g. remote access). \n\n> Many people\n> (including myself) are of the opinion that not allowing cross-db access is\n> in fact a feature.\n\nCan you tell me what that \"feature\" gains you other than mere inconvenience ?\nAnd are you implying, that all the other db's are misfeatured in this regard?\n\nAndreas\n", "msg_date": "Mon, 22 Jan 2001 17:28:49 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Re: MySQL and BerkleyDB (fwd)" }, { "msg_contents": "Zeugswetter Andreas SB writes:\n\n> > It's not required by SQL, that's for sure. I think in 7.2 we'll tackle\n> > schema support, which will accomplish the same thing.\n>\n> It does not (e.g. remote access).\n\nMaybe this is handled better by an external corba server or some such\nthing.\n\n> > Many people\n> > (including myself) are of the opinion that not allowing cross-db access is\n> > in fact a feature.\n>\n> Can you tell me what that \"feature\" gains you other than mere inconvenience ?\n> And are you implying, that all the other db's are misfeatured in this regard?\n\nIt's a safety/security measure to me. As long as one backend process can\nonly touch one database you can control things much better. This could be\novercome of course, but I'm quite happy with it.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 22 Jan 2001 18:14:32 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Re: MySQL and BerkleyDB (fwd)" } ]
[ { "msg_contents": "Hey guys, I am just getting back into the swing of things (after a nice\nvacation and a not-so-nice move across country).. I just installed 7.1 Beta\n3 and am playing with it (I'm impressed with the speed increase I've seen\nwith virtually no tweaking BTW).. I see a lot of this when I'm importing\ndata :\n\nDEBUG: copy: line 2865, XLogWrite: new log file created - try to increase\nWAL_FILES\n\nIs that anything to be concerned about? Do I need to increase WAL_FILES? If\nso, how?\n\nThanks!\n\n-Mitch\n\n", "msg_date": "Mon, 22 Jan 2001 12:36:37 -0500", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": true, "msg_subject": "Beta 3 question(s)" } ]
[ { "msg_contents": "> I see a lot of this when I'm importing data :\n> \n> DEBUG: copy: line 2865, XLogWrite: new log file created - \n> try to increase WAL_FILES\n> \n> Is that anything to be concerned about?\n\nNo, WAL_FILES affects performance only.\n\n> Do I need to increase WAL_FILES? If so, how?\n\nYou may ignore this for data import, but it's good\nto have WAL_FILES >= 1 for database use under high load.\nSorry, Oliver Elphick and me write WAL docs right now -\nshould be ready in 1-2 days.\n\nVadim\n", "msg_date": "Mon, 22 Jan 2001 10:03:36 -0800", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Beta 3 question(s)" } ]
[ { "msg_contents": "Is there any easy method for getting a hold of the OID and XID for a\ngiven row within a plpgsql (or another type) of function?\n\nStatements like NEW.oid appear to fail, and xid isn't to be found.\nI'd like to use these to create the history of an application which\ncan be rolled back (by the application) which is stored in another\ntable for easy stats generation.\n\nThe reason for having the database do the work is that we want to keep\nthe actions of the DBA's available to our clients to scrutinize,\notherwise the wrappers could do the extra required dirty work to\naccomplish this.\n\nIs there an easier way of doing this? Since there's no environment\nvariable type entity (variables common for an entire transaction) I\nwas wondering if a cursor can flow between multiple functions?\nEach function uses FETCH -1 FROM transaction_id; to get information.\nPassing the transaction_id as a parameter isn't really an option as\nmost of the functions() would be run via automated triggers.\n\nBEGIN WORK;\nDECLARE transaction_id CURSOR FOR SELECT 8329;\nFETCH 1 FROM transaction_id;\n\nselect function1();\nselect function2();\nselect function3();\n\nCLOSE transaction_id;\nCOMMIT WORK;\n\nThanks for your help in advance.\n\n--\nRod Taylor\n\nThere are always four sides to every story: your side, their side, the\ntruth, and what really happened.", "msg_date": "Mon, 22 Jan 2001 14:53:50 -0500", "msg_from": "\"Rod Taylor\" <[email protected]>", "msg_from_op": true, "msg_subject": "Generating History" }, { "msg_contents": "\"Rod Taylor\" <[email protected]> writes:\n> Is there any easy method for getting a hold of the OID and XID for a\n> given row within a plpgsql (or another type) of function?\n\n> Statements like NEW.oid appear to fail, and xid isn't to be found.\n\nThe reason this doesn't work in plpgsql is that the underlying \"SPI\"\ntuple-access routines don't support access to the system attributes\nof a tuple. It'd be relatively straightforward to extend them to\ndo so, if someone cared to work on that.\n\nI thought you could do it in an SQL-language function, but that doesn't\nseem to work either, for reasons that may be strictly historical at\nthis point --- the relevant error message is coming out of this:\n\n attno = get_attnum(relid, attname);\n\n /* XXX Is there still a reason for this restriction? */\n if (attno < 0)\n elog(ERROR, \"Cannot reference attribute '%s'\"\n \" of tuple params/return values for functions\", attname);\n\nThink I'll look and see if this restriction could be dropped now.\n\nFor 7.0.* though, it seems the only way to get this data in a PL\nfunction is to write a C-language function to extract it ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Jan 2001 20:56:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Generating History " } ]
[ { "msg_contents": "\n> sounds like something that should be handled at the application level\n> though ... at least the concept of 'access to other db \n> manufacturers' ...\n> no?\n\nWell I know of Informix for one, that allows transparent access to flat files\nor other dbs. And this is a feature that often shows very handy.\n\nAndreas\n", "msg_date": "Mon, 22 Jan 2001 21:49:29 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: Re: MySQL and BerkleyDB (fwd)" } ]
[ { "msg_contents": "\n Hi Tom,\n \n I again a little look at aset code and I probably found small performance \nreserve in small chunks (chunk <= ALLOC_CHUNK_LIMIT) reallocation.\n \n Current code for *all* small chunks call AllocSetAlloc() for new larger\nchunk, memcpy() for data copy, AllocSetFree() for old chunk. But this\nexpensive solution is not needful for all chunks in all instances.\n\n Sometimes is AllocSetRealloc() called for chunk that is *last* chunk\nin actual block and after this chunk is free space (block->freeptr). If\nafter this reallocated chunk is sufficient memory not is needful create \nnew chunk and copy data, but is possible only increase chunk->size and move\nset->block->freeptr to new position. \n \n Current code always create new chunk and old add to freelist. If you \nwill call often prealloc (with proper size) over new context block this \nblock will spit to free not used chunks and only last chulk will used.\n\nI test this idea via folloving code taht I add to AllocSetRealloc() to\n\"Normal small-chunk case\".\n \nfprintf(stderr, \"REALLOC-SMALL-CHUNK\\n\");\n\t\t\n{\n\tchar *chunk_end = (char *) chunk + \n\t\t\tchunk->size + ALLOC_CHUNKHDRSZ;\n\t\t\n\tif (chunk_end == set->blocks->freeptr)\n\t{ \n\t\t/*\n\t\t * Cool, prealloc is called for last chunk \n\t\t * in actual block, try check space after this\n\t\t * chunk.\n\t\t */\n\t\tint\tsz = set->blocks->endptr - \n\t\t\t\tset->blocks->freeptr;\n\t\t\t\t\n\t\tfprintf(stderr, \"REALLOC-LAST-CHUNK\\n\");\n\t\t\t\t\n\t\tif (sz + chunk->size >= size)\n\t\t\t/*\n\t\t\t * After chunk is sufficient memory \n\t\t\t */\n\t\t\tfprintf(stderr, \"NEED-EXPAND-CHUNK-ONLY\\n\");\n\t}\n} \n \n \n For example if I run regress test I found \"NEED-EXPAND-CHUNK-ONLY\"\nin 38% of all AllocSetRealloc() usage! - in my other tests with own\nDB dumps it was less (10%). It's more often than I was expect :-)\n\n My suggestion is expand chunk if after chunk is in block free space \nand not by brutal forge create always new chunks and call expensive \nAllocSetAlloc(), AllocSetFree() and memcpy(). \n \n Comments?\n \n \t\t\tKarel\n \nPS. anyway .. needful is check if new (wanted) size is not large than\n ALLOC_CHUNK_LIMIT too. In this situation will probably better use\n current brutal forge solution.\n \n \n \n\n", "msg_date": "Tue, 23 Jan 2001 00:22:11 +0100 (CET)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "realloc suggestion" }, { "msg_contents": "Karel Zak <[email protected]> writes:\n> I again a little look at aset code and I probably found small performance \n> reserve in small chunks (chunk <= ALLOC_CHUNK_LIMIT) reallocation.\n\nHmm. I wouldn't have thought that realloc got called often enough to be\nworth optimizing, but it does seem to get called a few hundred times\nduring the regress tests, so maybe it's worth a little more code to do\nthis. (Looks like most of the realloc calls come from enlargeStringInfo\nwhile dealing with long query strings --- since in this case the string\nbuffer is the only thing yet allocated in QueryContext, the special-case\ncheck wins.)\n\nI've committed this change. 
Thanks for the suggestion!\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Jan 2001 20:06:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: realloc suggestion " }, { "msg_contents": "\nOn Mon, 22 Jan 2001, Tom Lane wrote:\n\n> Karel Zak <[email protected]> writes:\n> > I took another little look at the aset code and I probably found a small \n> > performance reserve in small-chunk (chunk <= ALLOC_CHUNK_LIMIT) reallocation.\n> \n> Hmm. I wouldn't have thought that realloc got called often enough to be\n> worth optimizing, but it does seem to get called a few hundred times\n> during the regress tests, so maybe it's worth a little more code to do\n> this. (Looks like most of the realloc calls come from enlargeStringInfo\n> while dealing with long query strings --- since in this case the string\n> buffer is the only thing yet allocated in QueryContext, the special-case\n> check wins.)\n> \n> I've committed this change. Thanks for the suggestion!\n\n I love the open-source and CVS source distribution model - only a couple of\nhours between idea and official source change :-)\n\n\tKarel\n\n", "msg_date": "Tue, 23 Jan 2001 16:59:50 +0100 (CET)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: realloc suggestion " } ]
[ { "msg_contents": "\nI have begun perusing the source, eventually hoping to\ncontribute to the development effort. I have noticed that\nlong ago the include files for a \"component\" were in the\ncomponent directory, eg src/backend/catalog/Attic/catalog.h,v\n(here I'm assuming that Attic means a storage place for\nitems that use to be there but are no longer there...)\nand now this file resides in src/include/catalog/catalog.h,v. A quick\nscan seems that this was a common transformation.\n\nThis seems to be a mighty step backwards, especially if\none considers the recommendations on large systems development\nwhere you want to minimize dependancies...\n\nCould someone point out the history of code relocation and rationale.\n\nThanks for any pointers.\n\nBest regards,\n\n.. Otto\n\nOtto Hirr\nOLAB Inc.\[email protected]\n503 / 617-6595\n\n", "msg_date": "Mon, 22 Jan 2001 21:44:30 -0800", "msg_from": "\"Otto A. Hirr, Jr.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Newbe hacker, interface question on includes..." }, { "msg_contents": "\"Otto A. Hirr, Jr.\" wrote:\n> \n> I have begun perusing the source, eventually hoping to\n> contribute to the development effort.\n\nWelcome!\n\n> I have noticed that\n> long ago the include files for a \"component\" were in the\n> component directory, eg src/backend/catalog/Attic/catalog.h,v\n> (here I'm assuming that Attic means a storage place for\n> items that use to be there but are no longer there...)\n\nActually, you are looking at CVS directory structures, not the\nchecked-out tree. CVS uses the attic to retain the history of deleted\nfiles and of files which are on branches. Check out the cvs docs for\nmore info.\n\n> and now this file resides in src/include/catalog/catalog.h,v. A quick\n> scan seems that this was a common transformation.\n> This seems to be a mighty step backwards, especially if\n> one considers the recommendations on large systems development\n> where you want to minimize dependancies...\n\nWe support well over 20 platforms now, and the platform dependencies are\npretty well isolated, though not perfectly.\n\n> Could someone point out the history of code relocation and rationale.\n\nWe have a *long* history. But since you have access to a copy of the CVS\ntree, you can check the comments and status on every file and every\nchange made since circa 1996.\n\n - Thomas\n", "msg_date": "Tue, 23 Jan 2001 06:07:39 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Newbe hacker, interface question on includes..." } ]
[ { "msg_contents": "> My current version of PostgreSQL (v6.5.3) does not support Chinese\n> characters (what I intented to store in the database != what I retrieved\n> from it). I wonder where I can get the patch, which allow it to support\n> Chinese. Thx in advance.\n> \n> Peter Cheung\n> http://www.borderfree.com\n> \n\nWhat kind of encoding are you using for Chinese exactly?\n--\nTatsuo Ishii\n", "msg_date": "Tue, 23 Jan 2001 18:08:58 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL v6.5.3" } ]
[ { "msg_contents": "\n> There were only a few to fix, so I fixed them.\n> \n> > Peter Eisentraut <[email protected]> writes:\n> > > Which one of these should we use?\n> > > int4 is a data type, int32 isn't. c.h has DatumGetInt8, but no\n> > > DatumGetInt64; it also has DatumGetInt32 but no \n> DatumGetInt4. fmgr has\n\nWait a sec !\nThe patch to timestamp.h and date.h replaces int4 with int instead of int32.\nAt least the timestamp.h struct is on disk stuff, thus the patch is not so good :-)\n\nAndreas\n", "msg_date": "Tue, 23 Jan 2001 10:15:14 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: int4 or int32" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> \n> > There were only a few to fix, so I fixed them.\n> > \n> > > Peter Eisentraut <[email protected]> writes:\n> > > > Which one of these should we use?\n> > > > int4 is a data type, int32 isn't. c.h has DatumGetInt8, but no\n> > > > DatumGetInt64; it also has DatumGetInt32 but no \n> > DatumGetInt4. fmgr has\n> \n> Wait a sec !\n> The patch to timestamp.h and date.h replaces int4 with int instead of int32.\n> At least the timestamp.h struct is on disk stuff, thus the patch is not so good :-)\n\nFixed to int32 now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Jan 2001 09:33:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: int4 or int32" } ]
[ { "msg_contents": "I am faced with the task of installing, configuring, and tuning my\ndatabase, which is currently running under Linux, under Solaris 7 on a\nbrand-new and shiny Sun UltraSPARC (3 CPUs, 768 MB RAM), because the sysadmin at the site\nhasn't used or installed PostgreSQL and would rather have me do it. Is\nthis actually supported? The FAQ (the one bundled with the 7.1 beta3\nwhich I'll be using) lists only:\n\nsparc_solaris - SUN SPARC on Solaris 2.4, 2.5, 2.5.1\n\nIf it is supported (I don't suppose a little OS version number increment\nwould make a differnce here), I've never used Solaris or anything other\nthan Intel-based hardware and am looking for some info on what to watch\nout for and consider when installing and tuning PostgreSQL on Solaris on\na SPARC plattform. Aside from the shared memory stuff in the Admin\nGuide, I haven't found anything so far. Particularly, I would expect\nthat you could gain a significant performance boost from running the\ndatabase on a 64 bit plattform (without knowing exactly why, only\npicking up on word-of-mouth and assorted hype on 64 bit architectures).\nHow do you get the most out of it? Would I use gcc or the native Sun\ncompiler (how do you control that anyway)?\n\nAny pointers would be much appreciated.\n\nThanks, Frank\n", "msg_date": "Tue, 23 Jan 2001 16:38:31 +0100", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "Looking for info on Solaris 7 (SPARC) specific considerations" }, { "msg_contents": "First, you read the installation instructions. Then, if specific problems\ncome up you send specific problem reports. But I think we have this\nplatform pretty much covered. Good luck.\n\n\nFrank Joerdens writes:\n\n> I am faced with the task of installing, configuring, and tuning my\n> database, which is currently running under Linux, under Solaris 7 on a\n> brand-new and shiny Sun UltraSPARC (3 CPUs, 768 MB RAM), because the sysadmin at the site\n> hasn't used or installed PostgreSQL and would rather have me do it. Is\n> this actually supported? The FAQ (the one bundled with the 7.1 beta3\n> which I'll be using) lists only:\n>\n> sparc_solaris - SUN SPARC on Solaris 2.4, 2.5, 2.5.1\n>\n> If it is supported (I don't suppose a little OS version number increment\n> would make a differnce here), I've never used Solaris or anything other\n> than Intel-based hardware and am looking for some info on what to watch\n> out for and consider when installing and tuning PostgreSQL on Solaris on\n> a SPARC plattform. Aside from the shared memory stuff in the Admin\n> Guide, I haven't found anything so far. Particularly, I would expect\n> that you could gain a significant performance boost from running the\n> database on a 64 bit plattform (without knowing exactly why, only\n> picking up on word-of-mouth and assorted hype on 64 bit architectures).\n> How do you get the most out of it? 
Would I use gcc or the native Sun\n> compiler (how do you control that anyway)?\n>\n> Any pointers would be much appreciated.\n>\n> Thanks, Frank\n>\n>\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 23 Jan 2001 17:37:39 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for info on Solaris 7 (SPARC) specific\n considerations" }, { "msg_contents": "Frank Joerdens <[email protected]> writes:\n> I am faced with the task of installing, configuring, and tuning my\n> database, which is currently running under Linux, under Solaris 7 on a\n> brand-new and shiny Sun UltraSPARC (3 CPUs, 768 MB RAM), because the\n> sysadmin at the site hasn't used or installed PostgreSQL and would\n> rather have me do it. Is this actually supported? The FAQ (the one\n> bundled with the 7.1 beta3 which I'll be using) lists only:\n\n> sparc_solaris - SUN SPARC on Solaris 2.4, 2.5, 2.5.1\n\nAfter you build PG and test it, send us a port report, and we'll add\nSolaris 7 to the list of recently tested platforms. That's how it\nworks ...\n\n> Would I use gcc or the native Sun\n> compiler (how do you control that anyway)?\n\nTry 'em both --- set the CC environment variable before running configure\nto control configure's choice of compiler.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Jan 2001 11:57:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for info on Solaris 7 (SPARC) specific considerations " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> > sparc_solaris - SUN SPARC on Solaris 2.4, 2.5, 2.5.1\n> \n> After you build PG and test it, send us a port report, and we'll add\n> Solaris 7 to the list of recently tested platforms. That's how it\n> works ...\n> \n\nWe've had a client running pgsql 7.0 on Solaris 7 since early May 2000. No\nproblems at all. \n\n> > Would I use gcc or the native Sun\n> > compiler (how do you control that anyway)?\n> \n> Try 'em both --- set the CC environment variable before running configure\n> to control configure's choice of compiler.\n\nMost Solaris 7 installations I've seen come without the Sun compiler as\nstandard, so gcc is probably your safest bet. You probably have to add\n/usr/ccs/bin to your path to get ar and friends.\n\nregards, \n\n\tGunnar\n", "msg_date": "23 Jan 2001 19:07:28 +0100", "msg_from": "Gunnar R|nning <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for info on Solaris 7 (SPARC) specific considerations" }, { "msg_contents": "On Tue 23 Jan 2001 12:38, Frank Joerdens wrote:\n> I am faced with the task of installing, configuring, and tuning my\n> database, which is currently running under Linux, under Solaris 7 on a\n> brand-new and shiny Sun UltraSPARC (3 CPUs, 768 MB RAM), because the\n> sysadmin at the site hasn't used or installed PostgreSQL and would rather\n> have me do it. Is this actually supported? The FAQ (the one bundled with\n> the 7.1 beta3 which I'll be using) lists only:\n>\n> sparc_solaris - SUN SPARC on Solaris 2.4, 2.5, 2.5.1\n\nNo problem for me. Solaris 7 and 8.\n\n> If it is supported (I don't suppose a little OS version number increment\n> would make a difference here), I've never used Solaris or anything other\n> than Intel-based hardware and am looking for some info on what to watch\n> out for and consider when installing and tuning PostgreSQL on Solaris on\n> a SPARC platform. 
Aside from the shared memory stuff in the Admin\n> Guide, I haven't found anything so far. Particularly, I would expect\n> that you could gain a significant performance boost from running the\n> database on a 64 bit platform (without knowing exactly why, only\n> picking up on word-of-mouth and assorted hype on 64 bit architectures).\n> How do you get the most out of it? Would I use gcc or the native Sun\n> compiler (how do you control that anyway)?\n\nWell, maybe I'm wrong, but I guess the 64-bit push is something you get when \ncompiling. I mean, memory pages of the OS should be bigger, and the int \nshould also be a 64 bit int, and not a 32 bit int. Maybe there is more than \nthat, but it's all I have at the moment.\nTalking about compilers, I use gcc (compiled with the pre-compiled gcc \nbinaries) with the Solaris binutils, especially because it makes life \neasier.\n\nNow, it would be a good idea to try Linux or NetBSD on the SPARC instead of \nSolaris. I am at this moment getting info on the installation of Linux distros \nand the BSD distros for SPARC, and I really think we can get a boost from this.\n\nHope this helps. ;-)\n\n\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told me I had to do it.\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nSystems administrator at math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Tue, 23 Jan 2001 18:59:16 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for info on Solaris 7 (SPARC) specific considerations" }, { "msg_contents": "On Tue, Jan 23, 2001 at 06:59:16PM -0300, Martin A. Marques wrote:\n[ . . . ]\n> Now, it would be a good idea to try Linux or NetBSD on the SPARC instead of \n> Solaris. I am at this moment getting info on the installation of Linux distros \n\nNot my decision, unfortunately. Otherwise I'd certainly stick with\nLinux.\n\nRegards, Frank\n", "msg_date": "Wed, 24 Jan 2001 10:29:11 +0100", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Looking for info on Solaris 7 (SPARC) specific considerations" }, { "msg_contents": "On Tue, Jan 23, 2001 at 11:57:52AM -0500, Tom Lane wrote:\n[ . . . ]\n> After you build PG and test it, send us a port report, and we'll add\n> Solaris 7 to the list of recently tested platforms. That's how it\n> works ...\n\nThe installation, by simply running configure, make, and make install, went\ncompletely smoothly, no hassle whatsoever (except for the\nflex-is-not-present warning which I think you can ignore)! \n\nThe system is, to be precise:\n\n$ uname -a \n\nSunOS [hostname] 5.7 Generic_106541-12 sun4u sparc SUNW,Ultra-4\n\nI did encounter some _weird_ stuff with the regression tests. Does that\nnot work via make check (the 'standalone' variety) when you've already\ntyped make install (on Linux it does!)?? Make installcheck seems to\nproduce non-failures semi-reliably (why does the random test not fail on\nthe 1st try, but on the 2nd, and then again not on the 3rd???). 
Below\nare the dirty details.\n\nAs to what is mentioned in the Admin Guide about Solaris' default\nsettings for shared memore being too low, at least on the machine I am\ntesting on it is set to 4 GB!\n\n$ cat /etc/system |grep shm\n* exclude: sys/shmsys\nset shmsys:shminfo_shmmax = 4294967295\nset shmsys:shminfo_shmmin = 1\nset shmsys:shminfo_shmmni = 100\nset shmsys:shminfo_shmseg = 10\n\n\nCheers, Frank\n\n------------------ begin dirty details ------------------\nI can start, connect, create databases etc.. However, running the\nregression tests gives 4 failed out of 76:\n\n reltime ... FAILED\n tinterval ... FAILED\ntest horology ... FAILED\ntest misc ... FAILED\n\nI checked the timezone issue mentioned in the src/test/regress/README\nfile. The command\n\n$ env TZ=PST8PDT date\n\nreturns 'Wed Jan 24 11:19:02 PST 2001', 9 hrs back, which is the time\ndifference between here and California, so I guess that is OK.\n\nRunning the tests on my Linux box gives no failed tests. Must I assume\nthat those failed tests indicate some issue that is is detrimental to\nthe proper functioning of the server on this Solaris installation? Do\nyou want the regression.diffs?\n\nI also tried using the Sun compiler, which didn't work at all. \n\n . . . [ goes away to do more testing ] . . .\n\nWhat's really weird, I just ran ./configure, make, make install, make\ncheck again, again with 4 failed, but different ones! \n\n\n tinterval ... FAILED\n inet ... FAILED\n comments ... FAILED\ntest misc ... FAILED\n\n\n2 things were different: a) I set the compiler explicitly to\n/usr/local/bin/gcc via the CC environment variable and b) I used the\ndefault prefix this time. I'll try again with the old settings. \n\n . . . [ goes away to do more testing ] . . .\n\nmake distclean\n./configure --prefix=/usr/db/pgsql\nmake\nmake check\n\nproduces 6 out of 76 this time! They are:\n\n date ... FAILED\n type_sanity ... FAILED\n opr_sanity ... FAILED\n arrays ... FAILED\n btree_index ... FAILED\ntest misc ... FAILED\n\nIt looks progressively worse. I'll remove the source tree and start from scratch.\n\n . . . [ goes away to do more testing ] . . .\n\n6 out of 76 again, but different ones . . .\n\n interval ... FAILED\n abstime ... FAILED\n comments ... FAILED\n oidjoins ... FAILED\ntest horology ... FAILED\ntest misc ... FAILED\n\n . . . [ goes away to do more testing ] . . .\n\nThis time with the already installed database after initdb:\n\n$ make installcheck\n\nnow I get scary stuff like:\n\n----------------------- begin scary stuff -----------------------\ntest int2 ... ERROR: pg_atoi: error in \"34.5\": can't\nparse \".5\"\nERROR: pg_atoi: error reading \"100000\": Result too large\nERROR: pg_atoi: error in \"asdf\": can't parse \"asdf\"\nok\ntest int4 ... ERROR: pg_atoi: error in \"34.5\": can't\nparse \".5\"\nERROR: pg_atoi: error reading \"1000000000000\": Result too large\nERROR: pg_atoi: error in \"asdf\": can't parse \"asdf\"\nok\ntest int8 ... ok\ntest oid ... ERROR: oidin: error in \"asdfasd\": can't\nparse \"asdfasd\"\nERROR: oidin: error in \"99asdfasd\": can't parse \"asdfasd\"\nok\ntest float4 ... ERROR: Bad float4 input format --\noverflow\n----------------------- end scary stuff -----------------------\n\nHowever, it works! All 76 tests pass.\n\n . . . [ goes away to do more testing ] . . .\n\nrunning make installcheck again gives:\n\ntest random ... failed (ignored)\n\n . . . [ goes away to do more testing ] . . 
.\n\nAll 76 tests pass.\n------------------ end dirty details ------------------\n", "msg_date": "Wed, 24 Jan 2001 21:55:31 +0100", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "beta3 Solaris 7 (SPARC) port report [ Was: Looking for . . . ]" }, { "msg_contents": "Frank Joerdens writes:\n\n[randomly varying set of regression tests fail]\n\n> Running the tests on my Linux box gives no failed tests. Must I assume\n> that those failed tests indicate some issue that is is detrimental to\n> the proper functioning of the server on this Solaris installation? Do\n> you want the regression.diffs?\n\nCould you go into src/test/regress/pg_regress.sh and edit around line 162\n\n#case $host_platform in\n# *-*-qnx* | *beos*)\n unix_sockets=no;;\n# *)\n# unix_sockets=yes;;\n#esac\n\n(i.e., ensure that unix_sockets is set to 'no'), and rerun 'make check'.\n\nI have experienced before that Unix sockets will cause random connection\nabortions on Solaris, which will cause the regression tests to fail\narbitrarily.\n\n> I also tried using the Sun compiler, which didn't work at all.\n\ndetails on \"didn't work\" requested...\n\n> now I get scary stuff like:\n>\n> ----------------------- begin scary stuff -----------------------\n> test int2 ... ERROR: pg_atoi: error in \"34.5\": can't\n> parse \".5\"\n> ERROR: pg_atoi: error reading \"100000\": Result too large\n> ERROR: pg_atoi: error in \"asdf\": can't parse \"asdf\"\n\nThis is normal. The regression tests sometimes involve intentional\ninvalid input.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 25 Jan 2001 00:42:45 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report [ Was: Looking\n for . . . ]" }, { "msg_contents": "On Thu, Jan 25, 2001 at 12:42:45AM +0100, Peter Eisentraut wrote:\n> Frank Joerdens writes:\n> \n> [randomly varying set of regression tests fail]\n> \n> > Running the tests on my Linux box gives no failed tests. Must I assume\n> > that those failed tests indicate some issue that is is detrimental to\n> > the proper functioning of the server on this Solaris installation? Do\n> > you want the regression.diffs?\n> \n> Could you go into src/test/regress/pg_regress.sh and edit around line 162\n> \n> #case $host_platform in\n> # *-*-qnx* | *beos*)\n> unix_sockets=no;;\n> # *)\n> # unix_sockets=yes;;\n> #esac\n> \n> (i.e., ensure that unix_sockets is set to 'no'), and rerun 'make check'.\n\nI just did that and ran make check 4 times. 3 times went completely\nsmoothly, once I had random fail. This is the same behaviour that I saw\nwhen running make installcheck (76 successful most of the time,\nsometimes you get 75 out of 76 with random being the one that fails).\n> \n> I have experienced before that Unix sockets will cause random connection\n> abortions on Solaris [ . . . ]\n\nIsn't that _really_ bad? Random connection abortions when going over\nUnix sockets?? My app does _all_ the connecting over Unix sockets?!\n\n> > I also tried using the Sun compiler, which didn't work at all.\n> \n> details on \"didn't work\" requested...\n\n------------------ begin details ------------------\n$ export CC=CC\n$ echo $CC\nCC\n$ ./configure\ncreating cache ./config.cache\nchecking host system type... sparc-sun-solaris2.7\nchecking which template to use... solaris\nchecking whether to build with locale support... no\nchecking whether to build with recode support... 
no\nchecking whether to build with multibyte character support... no\nchecking whether to build with Unicode conversion support... no\nchecking for default port number... 5432\nchecking for default soft limit on number of connections... 32\nchecking for gcc... CC\nchecking whether the C compiler (CC ) works... yes\nchecking whether the C compiler (CC ) is a cross-compiler... no\nchecking whether we are using GNU C... no\nchecking whether CC accepts -g... yes\nusing CFLAGS=-v\nchecking whether the C compiler (CC -Xa -v ) works... no\nconfigure: error: installation or configuration problem: C compiler\ncannot create executables.\n------------------ end details ------------------\n\nCheers, Frank\n", "msg_date": "Thu, 25 Jan 2001 13:35:05 +0100", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report [ Was: Looking for . . . ]" }, { "msg_contents": "Worked fine for me...\n\n% uname -a\n\nSunOS lancelot 5.7 Generic_106541-14 sun4m sparc SUNW,SPARCstation-4\n\n% ls -l\n\n-rw-r--r-- 1 bpalmer staff 32860160 Jan 23 16:45\npostgresql-snapshot.tar\n\n...\n...\n...\n transactions ... ok\n random ... failed (ignored)\n portals ... ok\n...\n...\n...\n\n==================================================\n 75 of 76 tests passed, 1 failed test(s) ignored.\n==================================================\n\n\n\nOn Thu, 25 Jan 2001, Peter Eisentraut wrote:\n\n> Frank Joerdens writes:\n>\n> [randomly varying set of regression tests fail]\n>\n> > Running the tests on my Linux box gives no failed tests. Must I assume\n> > that those failed tests indicate some issue that is is detrimental to\n> > the proper functioning of the server on this Solaris installation? Do\n> > you want the regression.diffs?\n>\n> Could you go into src/test/regress/pg_regress.sh and edit around line 162\n>\n> #case $host_platform in\n> # *-*-qnx* | *beos*)\n> unix_sockets=no;;\n> # *)\n> # unix_sockets=yes;;\n> #esac\n>\n> (i.e., ensure that unix_sockets is set to 'no'), and rerun 'make check'.\n>\n> I have experienced before that Unix sockets will cause random connection\n> abortions on Solaris, which will cause the regression tests to fail\n> arbitrarily.\n>\n> > I also tried using the Sun compiler, which didn't work at all.\n>\n> details on \"didn't work\" requested...\n>\n> > now I get scary stuff like:\n> >\n> > ----------------------- begin scary stuff -----------------------\n> > test int2 ... ERROR: pg_atoi: error in \"34.5\": can't\n> > parse \".5\"\n> > ERROR: pg_atoi: error reading \"100000\": Result too large\n> > ERROR: pg_atoi: error in \"asdf\": can't parse \"asdf\"\n>\n> This is normal. The regression tests sometimes involve intentional\n> invalid input.\n>\n> --\n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n>\n>\n>\n\n\nb. palmer, [email protected]\npgp: www.crimelabs.net/bpalmer.pgp5\n\n\n", "msg_date": "Thu, 25 Jan 2001 09:42:45 -0500 (EST)", "msg_from": "bpalmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report [ Was: Looking\n for . . . ]" }, { "msg_contents": "Frank Joerdens writes:\n\n> > I have experienced before that Unix sockets will cause random connection\n> > abortions on Solaris [ . . . ]\n>\n> Isn't that _really_ bad? Random connection abortions when going over\n> Unix sockets?? My app does _all_ the connecting over Unix sockets?!\n\nThat's bad, for sure. 
Maybe you can check for odd conditions surrounding\nthe /tmp directory, like is it on NFS, permission problems, mount options.\nOr is there something odd in the kernel configuration? If I'm counting\ncorrectly this is the third independent report of this problem, which is\nscary.\n\n> > > I also tried using the Sun compiler, which didn't work at all.\n> >\n> > details on \"didn't work\" requested...\n>\n> ------------------ begin details ------------------\n> $ export CC=CC\n\nUsing a C++ compiler to compile C code won't work. You probably meant\nCC=cc and CXX=CC.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 25 Jan 2001 17:12:02 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report [ Was: Looking\n for . . . ]" }, { "msg_contents": "On Thu, Jan 25, 2001 at 05:12:02PM +0100, Peter Eisentraut wrote:\n> Frank Joerdens writes:\n> \n> > > I have experienced before that Unix sockets will cause random connection\n> > > abortions on Solaris [ . . . ]\n> >\n> > Isn't that _really_ bad? Random connection abortions when going over\n> > Unix sockets?? My app does _all_ the connecting over Unix sockets?!\n> \n> That's bad, for sure. Maybe you can check for odd conditions surrounding\n> the /tmp directory, like is it on NFS, permission problems, mount options.\n\nI don't have neither root nor physical access to this machine, hence my\noptions are kinda limited. However, the sysadmin told me that most of\nthe storage space on this box is mounted over a fibre channel (I only\nhave a very hazy notion of what exactly that might be) from a \"storage\nserver\" which is allegedly as fast as a local SCSI disk.\n\n> Or is there something odd in the kernel configuration? If I'm counting\n> correctly this is the third independent report of this problem, which is\n> scary.\n\nI'll question the sysadmin about that. But why does make installcheck\nwork? Because it goes over TCP/IP sockets by default?\n\n> \n> > > > I also tried using the Sun compiler, which didn't work at all.\n> > >\n> > > details on \"didn't work\" requested...\n> >\n> > ------------------ begin details ------------------\n> > $ export CC=CC\n> \n> Using a C++ compiler to compile C code won't work. 
You probably meant\n> CC=cc and CXX=CC.\n\nWhen I do that, make fails with the following error (after giving lots\nof warnings):\n\n\"pg_dump.c\", line 1063: warning: Function has no return statement : main\ncc -Xa -v -I../../../src/include -I../../../src/interfaces/libpq -c -o\ncommon.o common.c\ncc -Xa -v -I../../../src/include -I../../../src/interfaces/libpq -c -o\npg_backup_archiver.o pg_backup_archiver.c\ncc -Xa -v -I../../../src/include -I../../../src/interfaces/libpq -c -o\npg_backup_db.o pg_backup_db.c\ncc -Xa -v -I../../../src/include -I../../../src/interfaces/libpq -c -o\npg_backup_custom.o pg_backup_custom.c\ncc -Xa -v -I../../../src/include -I../../../src/interfaces/libpq -c -o\npg_backup_files.o pg_backup_files.c\ncc -Xa -v -I../../../src/include -I../../../src/interfaces/libpq -c -o\npg_backup_null.o pg_backup_null.c\n\"pg_backup_null.c\", line 90: controlling expressions must have scalar\ntype\ncc: acomp failed for pg_backup_null.c\nmake[3]: *** [pg_backup_null.o] Error 2\nmake[3]: Leaving directory\n`/usr/users/fjoerde/postgres/postgresql-7.1beta3_test/src/bin/pg_dump'\nmake[2]: *** [all] Error 2\nmake[2]: Leaving directory\n`/usr/users/fjoerde/postgres/postgresql-7.1beta3_test/src/bin'\nmake[1]: *** [all] Error 2\nmake[1]: Leaving directory\n`/usr/users/fjoerde/postgres/postgresql-7.1beta3_test/src'\nmake: *** [all] Error 2\n\nRegards, Frank\n", "msg_date": "Thu, 25 Jan 2001 19:22:55 +0100", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report [ Was: Looking for . . . ]" }, { "msg_contents": "On Thu, Jan 25, 2001 at 05:12:02PM +0100, Peter Eisentraut wrote:\n> Frank Joerdens writes:\n> \n> > > I have experienced before that Unix sockets will cause random connection\n> > > abortions on Solaris [ . . . ]\n> >\n> > Isn't that _really_ bad? Random connection abortions when going over\n> > Unix sockets?? My app does _all_ the connecting over Unix sockets?!\n> \n> That's bad, for sure. Maybe you can check for odd conditions surrounding\n> the /tmp directory, like is it on NFS, permission problems, mount options.\n\nI just typed\n\n$ mount\n\nand I get\n\n/tmp on swap read/write/setuid on Mon Jan 22 16:39:32 2001\n\nfor the /tmp directory, which looks distinctly odd to me. What kind of\ndevice is swap (I know what swap is normally but I didn't know you could\nmount stuff there . . . )??\n\nRegards, Frank\n", "msg_date": "Thu, 25 Jan 2001 19:53:55 +0100", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report [ Was: Looking for . . . ]" }, { "msg_contents": "Frank Joerdens writes:\n\n> > That's bad, for sure. Maybe you can check for odd conditions surrounding\n> > the /tmp directory, like is it on NFS, permission problems, mount options.\n>\n> I don't have neither root nor physical access to this machine, hence my\n> options are kinda limited.\n\nEntering 'mount' should tell you.\n\n> I'll question the sysadmin about that. But why does make installcheck\n> work? Because it goes over TCP/IP sockets by default?\n\nNo. 
Presumably because it does not run more than one test in parallel.\n\n> \"pg_backup_null.c\", line 90: controlling expressions must have scalar type\n> cc: acomp failed for pg_backup_null.c\n\nLine 90 has a comment in my copy.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 25 Jan 2001 20:02:50 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report [ Was: Looking\n for . . . ]" }, { "msg_contents": "Frank Joerdens writes:\n\n> I just typed\n>\n> $ mount\n>\n> and I get\n>\n> /tmp on swap read/write/setuid on Mon Jan 22 16:39:32 2001\n\nThat's sufficiently suspicious.\n\nPerhaps you could try to change the definition of DEFAULT_PGSOCKET_DIR in\nsrc/include/config.h[.in] to something that's on a real disk.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 25 Jan 2001 21:02:34 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report [ Was: Looking\n for . . . ]" }, { "msg_contents": "Frank Joerdens <[email protected]> writes:\n\n> I just typed\n> \n> $ mount\n> \n> and I get\n> \n> /tmp on swap read/write/setuid on Mon Jan 22 16:39:32 2001\n> \n> for the /tmp directory, which looks distinctly odd to me. What kind of\n> device is swap (I know what swap is normally but I didn't know you could\n> mount stuff there . . . )??\n\nThat is a tmpfs file system which uses swap space for /tmp storage.\nBoth swap usage and /tmp compete for the same partition on the disk.\nIf you have a lot of swapping programs, you don't get to put much in\n/tmp. If you have a lot of files in /tmp, you don't get to run many\nprograms.\n\nAs far as I can recall, this is a Sun specific thing.\n\nIt's a reasonable idea on a stable system. It's a pretty crummy idea\non a development system, or one with unpredictable loads. My\nexperience is that either something goes crazy and fills up /tmp and\nthen you can't run anything else and you have to reboot, or something\ngoes crazy and fills up swap and then you can't write any /tmp files\nand daemon processes start to silently die and you have to reboot.\n\nIan\n", "msg_date": "25 Jan 2001 12:04:40 -0800", "msg_from": "Ian Lance Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report [ Was: Looking for . . . ]" }, { "msg_contents": "On Thu, Jan 25, 2001 at 12:04:40PM -0800, Ian Lance Taylor wrote:\n[ . . . ]\n> > for the /tmp directory, which looks distinctly odd to me. What kind of\n> > device is swap (I know what swap is normally but I didn't know you could\n> > mount stuff there . . . )??\n> \n> That is a tmpfs file system which uses swap space for /tmp storage.\n> Both swap usage and /tmp compete for the same partition on the disk.\n> If you have a lot of swapping programs, you don't get to put much in\n> /tmp. If you have a lot of files in /tmp, you don't get to run many\n> programs.\n> \n> As far as I can recall, this is a Sun specific thing.\n> \n> It's a reasonable idea on a stable system. It's a pretty crummy idea\n> on a development system, or one with unpredictable loads. 
My\n> experience is that either something goes crazy and fills up /tmp and\n> then you can't run anything else and you have to reboot, or something\n> goes crazy and fills up swap and then you can't write any /tmp files\n> and daemon processes start to silently die and you have to reboot.\n\nVery peculiar, or crummy, indeed. This system is not used by anyone\nelse besides myself at the moment (cuz it's just being built up), as far\nas I can tell, and is ludicrously overpowered (3 CPUs, 768 MB RAM) for\nthe mundane uses I am subjecting it to (installing and testing\nPostgresql).\n\nRegards, Frank \n", "msg_date": "Thu, 25 Jan 2001 21:47:16 +0100", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report [ Was: Looking for . . . ]" }, { "msg_contents": "On Thu, Jan 25, 2001 at 09:47:16PM +0100, Frank Joerdens wrote:\n> On Thu, Jan 25, 2001 at 12:04:40PM -0800, Ian Lance Taylor wrote:\n> [ . . . ]\n> > > for the /tmp directory, which looks distinctly odd to me. What kind of\n> > > device is swap (I know what swap is normally but I didn't know you could\n> > > mount stuff there . . . )??\n> > \n> > That is a tmpfs file system which uses swap space for /tmp storage.\n> > Both swap usage and /tmp compete for the same partition on the disk.\n> > If you have a lot of swapping programs, you don't get to put much in\n> > /tmp. If you have a lot of files in /tmp, you don't get to run many\n> > programs.\n> > \n> > As far as I can recall, this is a Sun specific thing.\n> > \n> > It's a reasonable idea on a stable system. It's a pretty crummy idea\n> > on a development system, or one with unpredictable loads. My\n> > experience is that either something goes crazy and fills up /tmp and\n> > then you can't run anything else and you have to reboot, or something\n> > goes crazy and fills up swap and then you can't write any /tmp files\n> > and daemon processes start to silently die and you have to reboot.\n> \n> Very peculiar, or crummy, indeed. This system is not used by anyone\n> else besides myself at the moment (cuz it's just being built up), as far\n> as I can tell, and is ludicrously overpowered (3 CPUs, 768 MB RAM) for\n> the mundane uses I am subjecting it to (installing and testing\n> Postgresql).\n\nI doubt you can blame any problems on tmpfs here. tmpfs has been \nin Solaris for many years, and has had plenty of time to stabilize.\nWith 768M of RAM and running only PG you are not using any swap space at \nall, and unix sockets don't use any appreciable space either, so the \nconflicts Ian describes are impossible in your case. \n\nNathan Myers\[email protected]\n", "msg_date": "Thu, 25 Jan 2001 13:20:23 -0800", "msg_from": "[email protected] (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report" }, { "msg_contents": "Frank Joerdens <[email protected]> writes:\n> I just did that and ran make check 4 times. 3 times went completely\n> smoothly, once I had random fail. This is the same behaviour that I saw\n> when running make installcheck (76 successful most of the time,\n> sometimes you get 75 out of 76 with random being the one that fails).\n\nEr, you do realize that the random test is *supposed* to fail every so\noften? (Else it'd not be random...) 
See the pages on interpreting\nregression test results in the admin guide.\n\nWhat troubles me is the nonrepeatable failures you saw on other tests.\nAs Peter says, if \"make installcheck\" (serial tests) is perfectly solid\nand \"make check\" (parallel tests) is not, that suggests some kind of\ninterprocess locking problem. But we haven't heard about any such issue\non Solaris.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Jan 2001 22:13:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report [ Was: Looking for . . . ] " }, { "msg_contents": "Peter Eisentraut writes:\n > Frank Joerdens writes:\n > \n > > > I have experienced before that Unix sockets will cause random\n > > > connection abortions on Solaris [ . . . ]\n > >\n > > Isn't that _really_ bad? Random connection abortions when going\n > > over Unix sockets?? My app does _all_ the connecting over Unix\n > > sockets?!\n > \n > That's bad, for sure. Maybe you can check for odd conditions\n > surrounding the /tmp directory, like is it on NFS, permission\n > problems, mount options. Or is there something odd in the kernel\n > configuration? If I'm counting correctly this is the third\n > independent report of this problem, which is scary.\n\nI'm not sure if you counted me. I also observed that Unix sockets\ncause the parallel tests to fail in random places on Solaris.\n\n\nWe had a similar problem porting a product that uses a lot of IPC to\nSolaris. There were failures involving the overloading of the Unix\ndomain sockets. We took the code to Sun and they were unable to\nresolve the problems. It should have been possible to tune the kernel\nto provide more resources. However it turns out that some of the\nparameters that we wanted to tune were ignored in favour of hard coded\nvalues. In the end we rewrote our code to use Internet domain sockets\n(AF_INET).\n\n\n\nBTW, owing to a DNS error email to me has bounced over the last couple\nof days. It should be okay now if anything needs to be resent.\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWesternGeco -./\\.- by myself and does not represent\[email protected] -./\\.- opinion of Schlumberger, Baker\nhttp://www.crosswinds.net/~petef -./\\.- Hughes or their divisions.\n", "msg_date": "Fri, 26 Jan 2001 11:12:32 +0000", "msg_from": "Pete Forman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report [ Was: Looking\n for . . . ]" }, { "msg_contents": "On Thu, Jan 25, 2001 at 10:13:29PM -0500, Tom Lane wrote:\n> Frank Joerdens <[email protected]> writes:\n> > I just did that and ran make check 4 times. 3 times went completely\n> > smoothly, once I had random fail. This is the same behaviour that I saw\n> > when running make installcheck (76 successful most of the time,\n> > sometimes you get 75 out of 76 with random being the one that fails).\n> \n> Er, you do realize that the random test is *supposed* to fail every so\n> often? (Else it'd not be random...) See the pages on interpreting\n> regression test results in the admin guide.\n> \n> What troubles me is the nonrepeatable failures you saw on other tests.\n> As Peter says, if \"make installcheck\" (serial tests) is perfectly solid\n> and \"make check\" (parallel tests) is not, that suggests some kind of\n> interprocess locking problem. But we haven't heard about any such issue\n> on Solaris.\n\nOr simply running out of processes - check maxproc? 
(Deleted beginning of\nthis thread, so may have missed something)\n\nCheers,\n\nPatrick\n", "msg_date": "Fri, 26 Jan 2001 15:29:59 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report [ Was: Looking for . . . ]" }, { "msg_contents": "On Fri, Jan 26, 2001 at 03:29:59PM +0000, Patrick Welche wrote:\n> On Thu, Jan 25, 2001 at 10:13:29PM -0500, Tom Lane wrote:\n> > Frank Joerdens <[email protected]> writes:\n> > > I just did that and ran make check 4 times. 3 times went completely\n> > > smoothly, once I had random fail. This is the same behaviour that I saw\n> > > when running make installcheck (76 successful most of the time,\n> > > sometimes you get 75 out of 76 with random being the one that fails).\n> > \n> > Er, you do realize that the random test is *supposed* to fail every so\n> > often?\n\nI do. I just included the info for completeness' sake.\n\n> > What troubles me is the nonrepeatable failures you saw on other tests.\n> > As Peter says, if \"make installcheck\" (serial tests) is perfectly solid\n> > and \"make check\" (parallel tests) is not, that suggests some kind of\n> > interprocess locking problem. But we haven't heard about any such issue\n> > on Solaris.\n> \n> Or simply running out of processes - check maxproc? (Deleted beginning of\n> this thread, so may have missed something)\n\nThere is no load at all on this server at the moment. The sysadmin and\nmyself are currently the only people accessing a brand new UltraSPARC with 3\nCPUs and 3/4 GB of RAM to install stuff.\n\nWhatever the reason for it, Peter's suggestion at least seems to\nmitigate the issue with the regression tests. I've set DEFAULT_PGSOCKET_DIR\nin src/include/config.h.in to /usr/db/pgsql/tmp (/usr/db/pgsql is the\npostgres user's home dir and the install dir for Postgres). Running make\ncheck after that gives:\n\n1: none failed\n2: random ... failed (ignored)\n3: Oh. What's the expression (in German you'd say 'Zu frueh gefreut.')\nhere. Now I get:\n\n select_distinct_on ... FAILED\n select_implicit ... FAILED\n random ... failed (ignored)\n portals ... FAILED\ntest misc ... FAILED\n\nTyping \n\n$ ps -a \n\nI can see that 2 postgres processes are still active . . . ?? And\n/usr/db/pgsql/tmp does not contain any lock file??? I killed those 2 and\nran make check again:\n\n4: none failed\n5: random ... failed (ignored)\n6: none failed\n7: random ... failed (ignored)\n8: none failed\n9: none failed\n9: comments ... FAILED\n\nHm. Bizarre. The issue isn't solved but it definitely looks better than\nbefore (also, the sysadmin just told me that /tmp is cleaned out\nnightly anyway by cron). I'm gonna test it over TCP/IP sockets again,\nand if that works, stick with those:\n\nWhen setting unix_sockets=no; for any plattform in\nsrc/test/regress/pg_regress.sh, 7 consecutive tests showed no errors.\nI'll just connect to the server over TCP/IP.\n\nRegards, Frank\n", "msg_date": "Fri, 26 Jan 2001 17:03:13 +0100", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report" }, { "msg_contents": "Frank Joerdens <[email protected]> writes:\n> Now I get:\n\n> select_distinct_on ... FAILED\n> select_implicit ... FAILED\n> random ... failed (ignored)\n> portals ... FAILED\n> test misc ... FAILED\n\nReporting a regression failure this way is pretty unhelpful. What are\nthe actual diffs (regression.diffs)? 
What shows up in the postmaster\nlog (logs/postmaster.log)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Jan 2001 11:15:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report " }, { "msg_contents": "On Fri, Jan 26, 2001 at 05:03:13PM +0100, Frank Joerdens wrote:\n> \n> There is no load at all on this server at the moment. The sysadmin and\n> myself are currently the only people accessing a brand new UltraSPARC with 3\n> CPUs and 3/4 GB of RAM to install stuff.\n\nHmm, multiple processors, and lots of IPC: I've got a bad feeling\nabout this. Nothing solid (don't do a lot with Solaris), but there are\na _lot_ of gotchas in getting that combo right, many of which _kill_\nperformance for the normal case to get correct behavior in an edge\ncase. I could imagine Sun missing one or two, and not catching it (or\nactively ignoring it, to get better CPU utilization)\n\nSince it seems to hit only when using Unix domain sockets, I'd take a\nwild guess that explicit use of shared memory and Unix domain sockets\nare stepping on each other in a multiprocessor environment. Invoking\nInet sockets gets more of the networking code in play, which is usually\nmore heavily tested in such an environment.\n\nSince it's just you and the sysadmin: any chance you could bring the\nsystem up uniprocessor (I don't even know if this is _possible_ with\nSun hardware, let alone how hard) and run the regressions some more?\nIf that makes it go away, I'd say it pretty well points straight into\nthe Solaris kernel.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Fri, 26 Jan 2001 10:36:11 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report" }, { "msg_contents": "On Fri, Jan 26, 2001 at 11:15:45AM -0500, Tom Lane wrote:\n> Frank Joerdens <[email protected]> writes:\n> > Now I get:\n> \n> > select_distinct_on ... FAILED\n> > select_implicit ... FAILED\n> > random ... failed (ignored)\n> > portals ... FAILED\n> > test misc ... FAILED\n> \n> Reporting a regression failure this way is pretty unhelpful. \n\nSorry. My thinking was that the bottom line here is the very\nnon-reproducability of particular results. No two regression test\nfailures where identical of the couple dozen or so I conducted, and\nhence it wouldn't make all that much sense to analyze any single test\nall by itself.\n\nAs I wrote earlier, I don't have neither physical nor root access to\nthis box. Moreover, the sysadmin tells me that he didn't install the OS\nhimself, a friend of his did, because he himself was on holiday. There\nmay well be something very fishy about the OSs configuration, but I\nwouldn't have the first notion as to where to start looking. It\n_appears_ that setting DEFAULT_PGSOCKET_DIR somewhere else besides /tmp\nhas some positive effect, but that ain't conclusive.\n\n> What are\n> the actual diffs (regression.diffs)? 
What shows up in the postmaster\n> log (logs/postmaster.log)?\n\nThose results were overwritten by the last 10 tests that didn't show any\nerrors, so I can't retrieve them, now.\n\nRegards, Frank\n", "msg_date": "Fri, 26 Jan 2001 17:39:51 +0100", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report" }, { "msg_contents": "Ross J. Reedstrom writes:\n\n> Hmm, multiple processors, and lots of IPC: I've got a bad feeling\n> about this.\n\nAlthough I'm not absolutely certain, the systems on which I had this\nproblem were not multi-processor, they were just plain-old workstations in\na university computer lab. At the time (7.0 beta) I had attributed this\nproblem to the possibly supicious nature of the /tmp partition, since Marc\ndidn't have any such problems with his Solaris boxes.\n\nAfter reading Pete Forman's anecdote I looked around some more and found\nthis:\n\nhttp://www.cise.ufl.edu/depot/doc/postfix/HISTORY\n\n19990321\n\n Workaround: from now on, Postfix on Solaris uses stream\n pipes instead of UNIX-domain sockets. Despite workarounds,\n the latter were causing more trouble than anything else on\n all systems combined.\n\n\nThere are also some reports that indicate problems in this direction at\nhttp://www.landfield.com/faqs/usenet/software/inn-faq/part2/.\n\n\nConclusion: Don't use it.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 26 Jan 2001 19:41:54 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report" }, { "msg_contents": "Ross J. Reedstrom writes:\n > Hmm, multiple processors, and lots of IPC:\n > [snip]\n > Since it's just you and the sysadmin: any chance you could bring\n > the system up uniprocessor (I don't even know if this is _possible_\n > with Sun hardware, let alone how hard) and run the regressions some\n > more? If that makes it go away, I'd say it pretty well points\n > straight into the Solaris kernel.\n\nMy observations of Solaris UNIX domain socket problems were on single\nprocessor machines.\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWesternGeco -./\\.- by myself and does not represent\[email protected] -./\\.- opinion of Schlumberger, Baker\nhttp://www.crosswinds.net/~petef -./\\.- Hughes or their divisions.\n", "msg_date": "Mon, 29 Jan 2001 08:39:22 +0000", "msg_from": "Pete Forman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 Solaris 7 (SPARC) port report" } ]
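To summarize the workaround that fell out of this thread in runnable form: either relocate the socket directory at build time, or sidestep Unix sockets entirely and connect over TCP/IP. The socket path below is only an example; -i is the postmaster switch that accepts TCP/IP connections:

/* in src/include/config.h, edited before building */
#define DEFAULT_PGSOCKET_DIR "/usr/db/pgsql/tmp"    /* example path */

$ postmaster -i -D $PGDATA &    # accept TCP/IP connections
$ psql -h localhost template1   # AF_INET connection instead of a Unix socket

Clients that set PGHOST (or pass -h/host= explicitly) will then never touch the problematic AF_UNIX path at all.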
[ { "msg_contents": "Hello,\n\njust small question.\nI just realized that it seems that Oracle stores indexed values in the index \nitself. This mean that it is not necessary to access table when you need to \nget only indexed values.\n\niso table has an index for vin field. Here is an output for different queries.\n\nSQL> explain plan for select * from iso where vin='dfgdfgdhf';\n \nExplained.\n \nSQL> @?/rdbms/admin/utlxpls\n \nPlan Table\n--------------------------------------------------------------------------------\n| Operation | Name | Rows | Bytes| Cost | Pstart| \nPstop |\n--------------------------------------------------------------------------------\n| SELECT STATEMENT | | 6 | 402 | 8 | | \n |\n| TABLE ACCESS BY INDEX ROW|ISO | 6 | 402 | 8 | | \n |\n| INDEX RANGE SCAN |IX_ISO_VI | 6 | | 3 | | \n |\n--------------------------------------------------------------------------------\n \n6 rows selected.\n \nSQL> explain plan for select vin from iso where vin='dfgdfgdhf';\n \nExplained.\n \nSQL> @?/rdbms/admin/utlxpls\n \nPlan Table\n--------------------------------------------------------------------------------\n| Operation | Name | Rows | Bytes| Cost | Pstart| \nPstop |\n--------------------------------------------------------------------------------\n| SELECT STATEMENT | | 6 | 42 | 3 | | \n |\n| INDEX RANGE SCAN |IX_ISO_VI | 6 | 42 | 3 | | \n |\n--------------------------------------------------------------------------------\n\n\nI think this question already was raised here, but... Why PostgreSQL does not \ndo this? What are the pros, and contros?\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Tue, 23 Jan 2001 22:20:54 +0600", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "Does Oracle store values in indices?" }, { "msg_contents": "Denis Perchine <[email protected]> writes:\n> I think this question already was raised here, but... Why PostgreSQL\n> does not do this? What are the pros, and contros?\n\nThe reason you have to visit the main table is that tuple validity\nstatus is only stored in the main table, not in each index. See prior\ndiscussions in the archives.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Jan 2001 12:56:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does Oracle store values in indices? " }, { "msg_contents": "> Denis Perchine <[email protected]> writes:\n> > I think this question already was raised here, but... Why PostgreSQL\n> > does not do this? What are the pros, and contros?\n>\n> The reason you have to visit the main table is that tuple validity\n> status is only stored in the main table, not in each index. See prior\n> discussions in the archives.\n\nBut how Oracle handles this?\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Tue, 23 Jan 2001 23:56:25 +0600", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Does Oracle store values in indices?" } ]
[ { "msg_contents": "Here is documentation for WAL, as text for immediate review and as SGML\nsource, generated from Vadim's original text with my editing.\n\nPlease review for correctness.\n\n=========================== WAL chapter ==========================\n\nWrite-Ahead Logging (WAL) in Postgres\n\n Author: Written by Vadim Mikheev and Oliver Elphick.\n\n\n\nGeneral description\n\nWrite Ahead Logging (WAL) is a standard approach to transaction logging.\nIts detailed description may be found in most (if not all) books about\ntransaction processing. Briefly, WAL's central concept is that changes to\ndata files (where tables and indices reside) must be written only after\nthose changes have been logged - that is, when log records have been\nflushed to permanent storage. When we follow this procedure, we do not\nneed to flush data pages to disk on every transaction commit, because we\nknow that in the event of a crash we will be able to recover the database\nusing the log: any changes that have not been applied to the data pages\nwill first be redone from the log records (this is roll-forward recovery,\nalso known as REDO) and then changes made by uncommitted transactions\nwill be removed from the data pages (roll-backward recovery - UNDO).\n\n\n\nImmediate benefits of WAL\n\nThe first obvious benefit of using WAL is a significantly reduced number\nof disk writes, since only the log file needs to be flushed to disk at\nthe time of transaction commit; in multi-user environments, commits of\nmany transactions may be accomplished with a single fsync() of the log\nfile. Furthermore, the log file is written sequentially, and so the cost\nof syncing the log is much less than the cost of syncing the data pages.\n\nThe next benefit is consistency of the data pages. The truth is that,\nbefore WAL, PostgreSQL was never able to guarantee consistency in the\ncase of a crash. Before WAL, any crash during writing could result in:\n\n1. index tuples pointing to non-existent table rows;\n2. index tuples lost in split operations;\n3. totally corrupted table or index page content, because of\n partially written data pages.\n\n(Actually, the first two cases could even be caused by use of the \"pg_ctl\n-m {fast | immediate} stop\" command.) Problems with indices (problems\n1 and 2) might have been capable of being fixed by additional fsync()\ncalls, but it is not obvious how to handle the last case without WAL;\nWAL saves the entire data page content in the log if that is required\nto ensure page consistency for after-crash recovery.\n\n\n\nFuture benefits\n\nIn this first release of WAL, UNDO operation is not implemented, because\nof lack of time. This means that changes made by aborted transactions\nwill still occupy disk space and that we still need a permanent pg_log\nfile to hold the status of transactions, since we are not able to re-use\ntransaction identifiers. Once UNDO is implemented, pg_log will no longer\nbe required to be permanent; it will be possible to remove pg_log at\nshutdown, split it into segments and remove old segments.\n\nWith UNDO, it will also be possible to implement SAVEPOINTs to allow\npartial rollback of invalid transaction operations (parser errors caused\nby mistyping commands, insertion of duplicate primary/unique keys and\nso on) with the ability to continue or commit valid operations made by\nthe transaction before the error. 
At present, any error will invalidate\nthe whole transaction and require a transaction abort.\n\nWAL offers the opportunity for a new method for database on-line backup\nand restore (BAR). To use this method, one would have to make periodic\nsaves of data files to another disk, a tape or another host and also\narchive the WAL log files. The database file copy and the archived\nlog files could be used to restore just as if one were restoring after a\ncrash. Each time a new database file copy was made the old log files could\nbe removed. Implementing this facility will require the logging of data\nfile and index creation and deletion; it will also require development of\na method for copying the data files (O/S copy commands are not suitable).\n\n\n\nImplementation\n\nWAL is automatically enabled from release 7.1 onwards.\tNo action is\nrequired from the administrator with the exception of ensuring that the\nadditional disk-space requirements of the WAL logs are met, and that\nany necessary tuning is done (see below).\n\nWAL logs are stored in $PGDATA/pg_xlog, as a set of segment files, each\n16Mb in size. Each segment is divided into 8Kb pages.\tThe log record\nheaders are described in access/xlog.h; record content is dependent on the\ntype of event that is being logged. Segment files are given sequential\nnumbers as names, starting at 0000000000000000. The numbers do not wrap,\nat present, but it should take a very long time to exhaust the available\nstock of numbers.\n\nThe WAL buffers and control structure are in shared memory, and are\nhandled by the backends; they are protected by spinlocks. The demand\non shared memory is dependent on the number of buffers; the default size\nof the WAL buffers is 64Kb.\n\nIt is desirable for the log to be located on another disk than the main\ndatabase files. This may be achieved by moving the directory, pg_xlog,\nto another filesystem (while the postmaster is shut down, of course)\nand creating a symbolic link from $PGDATA to the new location.\n\nThe aim of WAL, to ensure that the log is written before database\nrecords are altered, may be subverted by disk drives that falsely report\na successful write to the kernel, when, in fact, they have only cached\nthe data and not yet stored it on the disk. A power failure in such a\nsituation may still lead to irrecoverable data corruption; administrators\nshould try to ensure that disks holding PostgreSQL's data and log files\ndo not make such false reports.\n\n\n\nWAL parameters\n\nThere are several WAL-related parameters that affect database\nperformance. This section explains their use.\n\nThere are two commonly used WAL functions - LogInsert and LogFlush.\nLogInsert is used to place a new record into the WAL buffers in shared\nmemory. If there is no space for the new record, LogInsert will have to\nwrite (move to OS cache) a few filled WAL buffers. This is undesirable\nbecause LogInsert is used on every database low level modification\n(for example, tuple insertion) at a time when an exclusive lock is held\non affected data pages and the operation is supposed to be as fast as\npossible; what is worse, writing WAL buffers may also cause the creation\nof a new log segment, which takes even more time. Normally, WAL buffers\nshould be written and flushed by a LogFlush request, which is made,\nfor the most part, at transaction commit time to ensure that transaction\nrecords are flushed to permanent storage. 
On systems with high log output,\nLogFlush requests may not occur often enough to prevent WAL buffers'\nbeing written by LogInsert. On such systems one should increase the\nnumber of WAL buffers by modifying the \"WAL_BUFFERS\" parameter. The\ndefault number of WAL buffers is 8. Increasing this value will have an\nimpact on shared memory usage.\n\nCheckpoints are points in the sequence of transactions at which it is\nguaranteed that the data files have been updated with all information\nlogged before the checkpoint. At checkpoint time, all dirty data pages\nare flushed to disk and a special checkpoint record is written to the\nlog file. As a result, in the event of a crash, the recoverer knows from\nwhat record in the log (known as the redo record) it should start the\nREDO operation, since any changes made to data files before that record\nare already on disk. After a checkpoint has been made, any log segments\nwritten before the redo record may be removed/archived, so checkpoints\nare used to free disk space in the WAL directory. The checkpoint maker\nis also able to create a few log segments for future use, so as to avoid\nthe need for LogInsert or LogFlush to spend time in creating them.\n\nThe WAL log is held on the disk as a set of 16Mb files called segments.\nBy default a new segment is created only if more than 75% of the current\nsegment is used. One can instruct the server to create up to 64 log\nsegments at checkpoint time by modifying the \"WAL_FILES\" parameter.\n\nFor faster after-crash recovery, it would be better to create checkpoints\nmore often. However, one should balance this against the cost of flushing\ndirty data pages; in addition, to ensure data page consistency, the first\nmodification of a data page after each checkpoint results in logging\nthe entire page content, thus increasing output to the log and the log's size.\n\nBy default, the postmaster spawns a special backend process to create the\nnext checkpoint 300 seconds after the previous checkpoint's creation.\nOne can change this interval by modifying the \"CHECKPOINT_TIMEOUT\"\nparameter. It is also possible to force a checkpoint by using the SQL\ncommand, CHECKPOINT.\n\nSetting the \"WAL_DEBUG\" parameter to any non-zero value will result in\neach LogInsert and LogFlush WAL call's being logged to standard error.\nAt present, it makes no difference what the non-zero value is.\n\nThe \"COMMIT_DELAY\" parameter defines for how long the backend will be\nforced to sleep after writing a commit record to the log with a LogInsert\ncall but before performing a LogFlush. This delay allows other backends\nto add their commit records to the log so as to have all of them flushed\nwith a single log sync. Unfortunately, this mechanism is not fully\nimplemented at release 7.1, so there is at present no point in changing\nthis parameter from its default value of 5 microseconds.\n\n===================== CHECKPOINT manual page ======================\n\nCHECKPOINT -- Forces a checkpoint in the transaction log\n\nSynopsis\n\nCHECKPOINT\n\nInputs\n\nNone\n\nOutputs\n\nCHECKPOINT\n\nThis signifies that a checkpoint has been placed into the transaction log.\n\nDescription\n\nWrite-Ahead Logging (WAL) puts a checkpoint in the log every 300 seconds\nby default. (This may be changed by the parameter CHECKPOINT_TIMEOUT\nin postgresql.conf.)\n\nThe CHECKPOINT command forces a checkpoint at the point at which the\ncommand is issued. 
The next automatic checkpoint will happen the default\ntime after the forced checkpoint.\n\nRestrictions\n\nUse of the CHECKPOINT command is restricted to users with administrative\naccess.\n\nUsage\n\nTo force a checkpoint in the transaction log:\n\nCHECKPOINT;\n\nCompatibility\n\nSQL92\n\nCHECKPOINT is a Postgres language extension. There is no CHECKPOINT\ncommand in SQL92.\n\nNote: The handling of database storage and logging is a matter that the\nstandard leaves to the implementation.\n\n\n\n\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Look not every man on his own interests, but every man\n also on the things of others. Let this mind be in you,\n which was also in Christ Jesus\" \n Philippians 2:4,5", "msg_date": "Tue, 23 Jan 2001 16:40:10 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "WAL documentation" }, { "msg_contents": "Not knowing much about WAL, but understanding a good deal about Oracle's\nlogs, I read the WAL documentation below. While it is good, after\nreading it I am still left with a couple of questions and therefore\nbelieve the doc could be improved a bit.\n\nThe two questions I am left with after reading the WAL doc are:\n\n1) In the 'WAL Parameters' section, paragraph 3 there is the following\nsentence:\n\"After a checkpoint has been made, any log segments written before the\nredo record may be removed/archived...\" What does the 'may'\nmean here? Does the database administrator need to go into the directory and\nremove the no longer necessary log files? What does archiving have to\ndo with this? If I archived all log files, could I roll forward a\nbackup made previously? That is the only reason I can think of that you\nwould archive log files (at least that is why you archive log files in\nOracle).\n\n2) The doc doesn't seem to explain how on database recovery the database\nknows which log file to start with. I think walking through an example\nof how, after a database crash, the log file is used for recovery would\nbe useful. At least it would make me as a user of postgres feel better\nif I understood how crashes are recovered from.\n\nthanks,\n--Barry\n\n\n\n\nOliver Elphick wrote:\n> \n> Here is documentation for WAL, as text for immediate review and as SGML\n> source, generated from Vadim's original text with my editing.\n> \n> Please review for correctness.\n> \n> =========================== WAL chapter ==========================\n> \n> Write-Ahead Logging (WAL) in Postgres\n> \n> Author: Written by Vadim Mikheev and Oliver Elphick.\n> \n> General description\n> \n> Write Ahead Logging (WAL) is a standard approach to transaction logging.\n> Its detailed description may be found in most (if not all) books about\n> transaction processing. Briefly, WAL's central concept is that changes to\n> data files (where tables and indices reside) must be written only after\n> those changes have been logged - that is, when log records have been\n> flushed to permanent storage. 
When we follow this procedure, we do not\n> need to flush data pages to disk on every transaction commit, because we\n> know that in the event of a crash we will be able to recover the database\n> using the log: any changes that have not been applied to the data pages\n> will first be redone from the log records (this is roll-forward recovery,\n> also known as REDO) and then changes made by uncommitted transactions\n> will be removed from the data pages (roll-backward recovery - UNDO).\n> \n> Immediate benefits of WAL\n> \n> The first obvious benefit of using WAL is a significantly reduced number\n> of disk writes, since only the log file needs to be flushed to disk at\n> the time of transaction commit; in multi-user environments, commits of\n> many transactions may be accomplished with a single fsync() of the log\n> file. Furthermore, the log file is written sequentially, and so the cost\n> of syncing the log is much less than the cost of syncing the data pages.\n> \n> The next benefit is consistency of the data pages. The truth is that,\n> before WAL, PostgreSQL was never able to guarantee consistency in the\n> case of a crash. Before WAL, any crash during writing could result in:\n> \n> 1. index tuples pointing to non-existent table rows;\n> 2. index tuples lost in split operations;\n> 3. totally corrupted table or index page content, because of\n> partially written data pages.\n> \n> (Actually, the first two cases could even be caused by use of the \"pg_ctl\n> -m {fast | immediate} stop\" command.) Problems with indices (problems\n> 1 and 2) might have been capable of being fixed by additional fsync()\n> calls, but it is not obvious how to handle the last case without WAL;\n> WAL saves the entire data page content in the log if that is required\n> to ensure page consistency for after-crash recovery.\n> \n> Future benefits\n> \n> In this first release of WAL, UNDO operation is not implemented, because\n> of lack of time. This means that changes made by aborted transactions\n> will still occupy disk space and that we still need a permanent pg_log\n> file to hold the status of transactions, since we are not able to re-use\n> transaction identifiers. Once UNDO is implemented, pg_log will no longer\n> be required to be permanent; it will be possible to remove pg_log at\n> shutdown, split it into segments and remove old segments.\n> \n> With UNDO, it will also be possible to implement SAVEPOINTs to allow\n> partial rollback of invalid transaction operations (parser errors caused\n> by mistyping commands, insertion of duplicate primary/unique keys and\n> so on) with the ability to continue or commit valid operations made by\n> the transaction before the error. At present, any error will invalidate\n> the whole transaction and require a transaction abort.\n> \n> WAL offers the opportunity for a new method for database on-line backup\n> and restore (BAR). To use this method, one would have to make periodic\n> saves of data files to another disk, a tape or another host and also\n> archive the WAL log files. The database file copy and the archived\n> log files could be used to restore just as if one were restoring after a\n> crash. Each time a new database file copy was made the old log files could\n> be removed. 
Implementing this facility will require the logging of data\n> file and index creation and deletion; it will also require development of\n> a method for copying the data files (O/S copy commands are not suitable).\n> \n> Implementation\n> \n> WAL is automatically enabled from release 7.1 onwards. No action is\n> required from the administrator with the exception of ensuring that the\n> additional disk-space requirements of the WAL logs are met, and that\n> any necessary tuning is done (see below).\n> \n> WAL logs are stored in $PGDATA/pg_xlog, as a set of segment files, each\n> 16Mb in size. Each segment is divided into 8Kb pages. The log record\n> headers are described in access/xlog.h; record content is dependent on the\n> type of event that is being logged. Segment files are given sequential\n> numbers as names, starting at 0000000000000000. The numbers do not wrap,\n> at present, but it should take a very long time to exhaust the available\n> stock of numbers.\n> \n> The WAL buffers and control structure are in shared memory, and are\n> handled by the backends; they are protected by spinlocks. The demand\n> on shared memory is dependent on the number of buffers; the default size\n> of the WAL buffers is 64Kb.\n> \n> It is desirable for the log to be located on another disk than the main\n> database files. This may be achieved by moving the directory, pg_xlog,\n> to another filesystem (while the postmaster is shut down, of course)\n> and creating a symbolic link from $PGDATA to the new location.\n> \n> The aim of WAL, to ensure that the log is written before database\n> records are altered, may be subverted by disk drives that falsely report\n> a successful write to the kernel, when, in fact, they have only cached\n> the data and not yet stored it on the disk. A power failure in such a\n> situation may still lead to irrecoverable data corruption; administrators\n> should try to ensure that disks holding PostgreSQL's data and log files\n> do not make such false reports.\n> \n> WAL parameters\n> \n> There are several WAL-related parameters that affect database\n> performance. This section explains their use.\n> \n> There are two commonly used WAL functions - LogInsert and LogFlush.\n> LogInsert is used to place a new record into the WAL buffers in shared\n> memory. If there is no space for the new record, LogInsert will have to\n> write (move to OS cache) a few filled WAL buffers. This is undesirable\n> because LogInsert is used on every database low level modification\n> (for example, tuple insertion) at a time when an exclusive lock is held\n> on affected data pages and the operation is supposed to be as fast as\n> possible; what is worse, writing WAL buffers may also cause the creation\n> of a new log segment, which takes even more time. Normally, WAL buffers\n> should be written and flushed by a LogFlush request, which is made,\n> for the most part, at transaction commit time to ensure that transaction\n> records are flushed to permanent storage. On systems with high log output,\n> LogFlush requests may not occur often enough to prevent WAL buffers'\n> being written by LogInsert. On such systems one should increase the\n> number of WAL buffers by modifying the \"WAL_BUFFERS\" parameter. The\n> default number of WAL buffers is 8. Increasing this value will have an\n> impact on shared memory usage.\n> \n> Checkpoints are points in the sequence of transactions at which it is\n> guaranteed that the data files have been updated with all information\n> logged before the checkpoint. 
At checkpoint time, all dirty data pages\n> are flushed to disk and a special checkpoint record is written to the\n> log file. As result, in the event of a crash, the recoverer knows from\n> what record in the log (known as the redo record) it should start the\n> REDO operation, since any changes made to data files before that record\n> are already on disk. After a checkpoint has been made, any log segments\n> written before the redo record may be removed/archived, so checkpoints\n> are used to free disk space in the WAL directory. The checkpoint maker\n> is also able to create a few log segments for future use, so as to avoid\n> the need for LogInsert or LogFlush to spend time in creating them.\n> \n> The WAL log is held on the disk as a set of 16Mb files called segments.\n> By default a new segment is created only if more than 75% of the current\n> segment is used. One can instruct the server to create up to 64 log\n> segments at checkpoint time by modifying the \"WAL_FILES\" parameter.\n> \n> For faster after-crash recovery, it would be better to create checkpoints\n> more often. However, one should balance this against the cost of flushing\n> dirty data pages; in addition, to ensure data page consistency,the first\n> modification of a data page after each checkpoint results in logging\n> the entire page content, thus increasing output to log and the log's size.\n> \n> By default, the postmaster spawns a special backend process to create the\n> next checkpoint 300 seconds after the previous checkpoint's creation.\n> One can change this interval by modifying the \"CHECKPOINT_TIMEOUT\"\n> parameter. It is also possible to force a checkpoint by using the SQL\n> command, CHECKPOINT.\n> \n> Setting the \"WAL_DEBUG\" parameter to any non-zero value will result in\n> each LogInsert and LogFlush WAL call's being logged to standard error.\n> At present, it makes no difference what the non-zero value is.\n> \n> The \"COMMIT_DELAY\" parameter defines for how long the backend will be\n> forced to sleep after writing a commit record to the log with LogInsert\n> call but before performing a LogFlush. This delay allows other backends\n> to add their commit records to the log so as to have all of them flushed\n> with a single log sync. Unfortunately, this mechanism is not fully\n> implemented at release 7.1, so there is at present no point in changing\n> this parameter from its default value of 5 microseconds.\n> \n> ===================== CHECKPOINT manual page ======================\n> \n> CHECKPOINT -- Forces a checkpoint in the transaction log\n> \n> Synopsis\n> \n> CHECKPOINT\n> \n> Inputs\n> \n> None\n> \n> Outputs\n> \n> CHECKPOINT\n> \n> This signifies that a checkpoint has been placed into the transaction log.\n> \n> Description\n> \n> Write-Ahead Logging (WAL) puts a checkpoint in the log every 300 seconds\n> by default. (This may be changed by the parameter CHECKPOINT_TIMEOUT\n> in postgresql.conf.)\n> \n> The CHECKPOINT command forces a checkpoint at the point at which the\n> command is issued. The next automatic checkpoint will happen the default\n> time after the forced checkpoint.\n> \n> Restrictions\n> \n> Use of the CHECKPOINT command is restricted to users with administrative\n> access.\n> \n> Usage\n> \n> To force a checkpoint in the transaction log:\n> \n> CHECKPOINT;\n> \n> Compatibility\n> \n> SQL92\n> \n> CHECKPOINT is a Postgres language extension. 
There is no CHECKPOINT\n> command in SQL92.\n> \n> Note: The handling of database storage and logging is a matter that the\n> standard leaves to the implementation.\n> \n> ------------------------------------------------------------------------\n> Name: checkpoint.sgml\n> checkpoint.sgml Type: text/x-sgml\n> Description: checkpoint.sgml\n> \n> Name: wal.sgml\n> wal.sgml Type: text/x-sgml\n> Description: wal.sgml\n> \n> Part 1.4Type: Plain Text (text/plain)\n", "msg_date": "Tue, 23 Jan 2001 20:32:02 -0800", "msg_from": "Barry Lind <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL documentation" }, { "msg_contents": "Also, what happens with the size of the WAL logs? Do they just grow forever eventually filling up your hard drive, or should they reach a stable point where they tend not to grow any further?\n\nie. Will we sysadmins have to put cron jobs in to tar/gz old WAL logs or what???\n\nChris\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Barry Lind\n> Sent: Wednesday, January 24, 2001 12:32 PM\n> To: [email protected]\n> Cc: [email protected]\n> Subject: Re: [HACKERS] WAL documentation\n> \n> \n> Not knowing much about WAL, but understanding a good deal about Oracle's\n> logs, I read the WAL documentation below. While it is good, after\n> reading it I am still left with a couple of questions and therefore\n> believe the doc could be improved a bit.\n> \n> The two questions I am left with after reading the WAL doc are:\n> \n> 1) In the 'WAL Parameters' section, paragraph 3 there is the following\n> sentence:\n> \"After a checkpoint has been made, any log segments written before the\n> redo record may be removed/archived...\" What does the 'may' refer\n> mean? Does the database administrator need to go into the directory and\n> remove the no longer necessary log files? What does archiving have to\n> do with this? If I archived all log files, could I roll forward a\n> backup made previously? That is the only reason I can think of that you\n> would archive log files (at least that is why you archive log files in\n> Oracle).\n> \n> 2) The doc doesn't seem to explain how on database recovery the database\n> knows which log file to start with. I think walking through an example\n> of how after a database crash, the log file is used for recovery, would\n> be useful. At least it would make me as a user of postgres feel better\n> if I understood how crashes are recovered from.\n> \n> thanks,\n> --Barry\n> \n> \n> \n> \n> Oliver Elphick wrote:\n> > \n> > Here is documentation for WAL, as text for immediate review and as SGML\n> > source, generated from Vadim's original text with my editing.\n> > \n> > Please review for correctness.\n> > \n> > =========================== WAL chapter ==========================\n> > \n> > Write-Ahead Logging (WAL) in Postgres\n> > \n> > Author: Written by Vadim Mikheev and Oliver Elphick.\n> > \n> > General description\n> > \n> > Write Ahead Logging (WAL) is a standard approach to transaction logging.\n> > Its detailed description may be found in most (if not all) books about\n> > transaction processing. Briefly, WAL's central concept is that \n> changes to\n> > data files (where tables and indices reside) must be written only after\n> > those changes have been logged - that is, when log records have been\n> > flushed to permanent storage. 
When we follow this procedure, we do not\n> > need to flush data pages to disk on every transaction commit, because we\n> > know that in the event of a crash we will be able to recover \n> the database\n> > using the log: any changes that have not been applied to the data pages\n> > will first be redone from the log records (this is roll-forward \n> recovery,\n> > also known as REDO) and then changes made by uncommitted transactions\n> > will be removed from the data pages (roll-backward recovery - UNDO).\n> > \n> > Immediate benefits of WAL\n> > \n> > The first obvious benefit of using WAL is a significantly reduced number\n> > of disk writes, since only the log file needs to be flushed to disk at\n> > the time of transaction commit; in multi-user environments, commits of\n> > many transactions may be accomplished with a single fsync() of the log\n> > file. Furthermore, the log file is written sequentially, and so the cost\n> > of syncing the log is much less than the cost of syncing the data pages.\n> > \n> > The next benefit is consistency of the data pages. The truth is that,\n> > before WAL, PostgreSQL was never able to guarantee consistency in the\n> > case of a crash. Before WAL, any crash during writing could result in:\n> > \n> > 1. index tuples pointing to non-existent table rows;\n> > 2. index tuples lost in split operations;\n> > 3. totally corrupted table or index page content, because of\n> > partially written data pages.\n> > \n> > (Actually, the first two cases could even be caused by use of \n> the \"pg_ctl\n> > -m {fast | immediate} stop\" command.) Problems with indices (problems\n> > 1 and 2) might have been capable of being fixed by additional fsync()\n> > calls, but it is not obvious how to handle the last case without WAL;\n> > WAL saves the entire data page content in the log if that is required\n> > to ensure page consistency for after-crash recovery.\n> > \n> > Future benefits\n> > \n> > In this first release of WAL, UNDO operation is not implemented, because\n> > of lack of time. This means that changes made by aborted transactions\n> > will still occupy disk space and that we still need a permanent pg_log\n> > file to hold the status of transactions, since we are not able to re-use\n> > transaction identifiers. Once UNDO is implemented, pg_log will \n> no longer\n> > be required to be permanent; it will be possible to remove pg_log at\n> > shutdown, split it into segments and remove old segments.\n> > \n> > With UNDO, it will also be possible to implement SAVEPOINTs to allow\n> > partial rollback of invalid transaction operations (parser errors caused\n> > by mistyping commands, insertion of duplicate primary/unique keys and\n> > so on) with the ability to continue or commit valid operations made by\n> > the transaction before the error. At present, any error will invalidate\n> > the whole transaction and require a transaction abort.\n> > \n> > WAL offers the opportunity for a new method for database on-line backup\n> > and restore (BAR). To use this method, one would have to make periodic\n> > saves of data files to another disk, a tape or another host and also\n> > archive the WAL log files. The database file copy and the archived\n> > log files could be used to restore just as if one were restoring after a\n> > crash. Each time a new database file copy was made the old log \n> files could\n> > be removed. 
Implementing this facility will require the logging of data\n> > file and index creation and deletion; it will also require \n> development of\n> > a method for copying the data files (O/S copy commands are not \n> suitable).\n> > \n> > Implementation\n> > \n> > WAL is automatically enabled from release 7.1 onwards. No action is\n> > required from the administrator with the exception of ensuring that the\n> > additional disk-space requirements of the WAL logs are met, and that\n> > any necessary tuning is done (see below).\n> > \n> > WAL logs are stored in $PGDATA/pg_xlog, as a set of segment files, each\n> > 16Mb in size. Each segment is divided into 8Kb pages. The log record\n> > headers are described in access/xlog.h; record content is \n> dependent on the\n> > type of event that is being logged. Segment files are given sequential\n> > numbers as names, starting at 0000000000000000. The numbers do \n> not wrap,\n> > at present, but it should take a very long time to exhaust the available\n> > stock of numbers.\n> > \n> > The WAL buffers and control structure are in shared memory, and are\n> > handled by the backends; they are protected by spinlocks. The demand\n> > on shared memory is dependent on the number of buffers; the default size\n> > of the WAL buffers is 64Kb.\n> > \n> > It is desirable for the log to be located on another disk than the main\n> > database files. This may be achieved by moving the directory, pg_xlog,\n> > to another filesystem (while the postmaster is shut down, of course)\n> > and creating a symbolic link from $PGDATA to the new location.\n> > \n> > The aim of WAL, to ensure that the log is written before database\n> > records are altered, may be subverted by disk drives that falsely report\n> > a successful write to the kernel, when, in fact, they have only cached\n> > the data and not yet stored it on the disk. A power failure in such a\n> > situation may still lead to irrecoverable data corruption; \n> administrators\n> > should try to ensure that disks holding PostgreSQL's data and log files\n> > do not make such false reports.\n> > \n> > WAL parameters\n> > \n> > There are several WAL-related parameters that affect database\n> > performance. This section explains their use.\n> > \n> > There are two commonly used WAL functions - LogInsert and LogFlush.\n> > LogInsert is used to place a new record into the WAL buffers in shared\n> > memory. If there is no space for the new record, LogInsert will have to\n> > write (move to OS cache) a few filled WAL buffers. This is undesirable\n> > because LogInsert is used on every database low level modification\n> > (for example, tuple insertion) at a time when an exclusive lock is held\n> > on affected data pages and the operation is supposed to be as fast as\n> > possible; what is worse, writing WAL buffers may also cause the creation\n> > of a new log segment, which takes even more time. Normally, WAL buffers\n> > should be written and flushed by a LogFlush request, which is made,\n> > for the most part, at transaction commit time to ensure that transaction\n> > records are flushed to permanent storage. On systems with high \n> log output,\n> > LogFlush requests may not occur often enough to prevent WAL buffers'\n> > being written by LogInsert. On such systems one should increase the\n> > number of WAL buffers by modifying the \"WAL_BUFFERS\" parameter. The\n> > default number of WAL buffers is 8. 
Increasing this value will have an\n> > impact on shared memory usage.\n> > \n> > Checkpoints are points in the sequence of transactions at which it is\n> > guaranteed that the data files have been updated with all information\n> > logged before the checkpoint. At checkpoint time, all dirty data pages\n> > are flushed to disk and a special checkpoint record is written to the\n> > log file. As result, in the event of a crash, the recoverer knows from\n> > what record in the log (known as the redo record) it should start the\n> > REDO operation, since any changes made to data files before that record\n> > are already on disk. After a checkpoint has been made, any log segments\n> > written before the redo record may be removed/archived, so checkpoints\n> > are used to free disk space in the WAL directory. The checkpoint maker\n> > is also able to create a few log segments for future use, so as to avoid\n> > the need for LogInsert or LogFlush to spend time in creating them.\n> > \n> > The WAL log is held on the disk as a set of 16Mb files called segments.\n> > By default a new segment is created only if more than 75% of the current\n> > segment is used. One can instruct the server to create up to 64 log\n> > segments at checkpoint time by modifying the \"WAL_FILES\" parameter.\n> > \n> > For faster after-crash recovery, it would be better to create \n> checkpoints\n> > more often. However, one should balance this against the cost \n> of flushing\n> > dirty data pages; in addition, to ensure data page consistency,the first\n> > modification of a data page after each checkpoint results in logging\n> > the entire page content, thus increasing output to log and the \n> log's size.\n> > \n> > By default, the postmaster spawns a special backend process to \n> create the\n> > next checkpoint 300 seconds after the previous checkpoint's creation.\n> > One can change this interval by modifying the \"CHECKPOINT_TIMEOUT\"\n> > parameter. It is also possible to force a checkpoint by using the SQL\n> > command, CHECKPOINT.\n> > \n> > Setting the \"WAL_DEBUG\" parameter to any non-zero value will result in\n> > each LogInsert and LogFlush WAL call's being logged to standard error.\n> > At present, it makes no difference what the non-zero value is.\n> > \n> > The \"COMMIT_DELAY\" parameter defines for how long the backend will be\n> > forced to sleep after writing a commit record to the log with LogInsert\n> > call but before performing a LogFlush. This delay allows other backends\n> > to add their commit records to the log so as to have all of them flushed\n> > with a single log sync. Unfortunately, this mechanism is not fully\n> > implemented at release 7.1, so there is at present no point in changing\n> > this parameter from its default value of 5 microseconds.\n> > \n> > ===================== CHECKPOINT manual page ======================\n> > \n> > CHECKPOINT -- Forces a checkpoint in the transaction log\n> > \n> > Synopsis\n> > \n> > CHECKPOINT\n> > \n> > Inputs\n> > \n> > None\n> > \n> > Outputs\n> > \n> > CHECKPOINT\n> > \n> > This signifies that a checkpoint has been placed into the \n> transaction log.\n> > \n> > Description\n> > \n> > Write-Ahead Logging (WAL) puts a checkpoint in the log every 300 seconds\n> > by default. (This may be changed by the parameter CHECKPOINT_TIMEOUT\n> > in postgresql.conf.)\n> > \n> > The CHECKPOINT command forces a checkpoint at the point at which the\n> > command is issued. 
The next automatic checkpoint will happen the default\n> > time after the forced checkpoint.\n> > \n> > Restrictions\n> > \n> > Use of the CHECKPOINT command is restricted to users with administrative\n> > access.\n> > \n> > Usage\n> > \n> > To force a checkpoint in the transaction log:\n> > \n> > CHECKPOINT;\n> > \n> > Compatibility\n> > \n> > SQL92\n> > \n> > CHECKPOINT is a Postgres language extension. There is no CHECKPOINT\n> > command in SQL92.\n> > \n> > Note: The handling of database storage and logging is a matter that the\n> > standard leaves to the implementation.\n> > \n> > \n> ------------------------------------------------------------------------\n> > Name: checkpoint.sgml\n> > checkpoint.sgml Type: text/x-sgml\n> > Description: checkpoint.sgml\n> > \n> > Name: wal.sgml\n> > wal.sgml Type: text/x-sgml\n> > Description: wal.sgml\n> > \n> > Part 1.4Type: Plain Text (text/plain)\n> \n\n", "msg_date": "Wed, 24 Jan 2001 13:52:53 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: WAL documentation" }, { "msg_contents": "The WAL logs auto-delete I think.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 07:09:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL documentation" } ]
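For concreteness, a minimal sketch of the tuning knobs discussed in this thread, with illustrative values only; the parameter names follow the chapter text above, but exact spellings and defaults should be checked against the postgresql.conf shipped with 7.1:

  # postgresql.conf - WAL-related settings (illustrative values only)
  wal_buffers = 16          # default 8; raise on systems with high log output
  wal_files = 8             # extra log segments pre-created at checkpoint time (max 64)
  checkpoint_timeout = 300  # seconds between automatic checkpoints
  wal_debug = 0             # any non-zero value logs each LogInsert/LogFlush call
  commit_delay = 5          # microseconds; see the caveat in the chapter

The pg_xlog relocation described in the Implementation section amounts to the following; the postmaster must be stopped first, and the target path is only an example:

  mv $PGDATA/pg_xlog /mnt/logdisk/pg_xlog
  ln -s /mnt/logdisk/pg_xlog $PGDATA/pg_xlog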
[ { "msg_contents": "> > The reason you have to visit the main table is that tuple validity\n> > status is only stored in the main table, not in each index. \n> > See prior discussions in the archives.\n> \n> But how Oracle handles this?\n\nOracle doesn't have non-overwriting storage manager but uses\nrollback segments to maintain MVCC. Rollback segments are used\nto restore valid version of entire index/table page.\n\nVadim\n", "msg_date": "Tue, 23 Jan 2001 10:23:22 -0800", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Does Oracle store values in indices?" }, { "msg_contents": "> > > The reason you have to visit the main table is that tuple validity\n> > > status is only stored in the main table, not in each index.\n> > > See prior discussions in the archives.\n> >\n> > But how Oracle handles this?\n>\n> Oracle doesn't have non-overwriting storage manager but uses\n> rollback segments to maintain MVCC. Rollback segments are used\n> to restore valid version of entire index/table page.\n\nAre there any plans to have something like this? I mean overwriting storage \nmanager.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Wed, 24 Jan 2001 00:25:53 +0600", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does Oracle store values in indices?" }, { "msg_contents": "[ Charset KOI8-R unsupported, converting... ]\n> > > > The reason you have to visit the main table is that tuple validity\n> > > > status is only stored in the main table, not in each index.\n> > > > See prior discussions in the archives.\n> > >\n> > > But how Oracle handles this?\n> >\n> > Oracle doesn't have non-overwriting storage manager but uses\n> > rollback segments to maintain MVCC. Rollback segments are used\n> > to restore valid version of entire index/table page.\n> \n> Are there any plans to have something like this? I mean overwriting storage \n> manager.\n\nWe hope to have it some day, hopefully soon.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Jan 2001 14:02:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does Oracle store values in indices?" }, { "msg_contents": "* Bruce Momjian <[email protected]> [010123 11:17] wrote:\n> [ Charset KOI8-R unsupported, converting... ]\n> > > > > The reason you have to visit the main table is that tuple validity\n> > > > > status is only stored in the main table, not in each index.\n> > > > > See prior discussions in the archives.\n> > > >\n> > > > But how Oracle handles this?\n> > >\n> > > Oracle doesn't have non-overwriting storage manager but uses\n> > > rollback segments to maintain MVCC. Rollback segments are used\n> > > to restore valid version of entire index/table page.\n> > \n> > Are there any plans to have something like this? 
I mean overwriting storage \n> > manager.\n> \n> We hope to have it some day, hopefully soon.\n\nVadim says that he hopes it to be done by 7.2, so if things go\nwell it shouldn't be that far off...\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Tue, 23 Jan 2001 11:38:29 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does Oracle store values in indices?" } ]
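The heap-visit behavior behind this thread is easy to see from a session; a small sketch with made-up table and index names (the plan text is approximate and costs are elided). Even though only the indexed column is selected, the index scan must still fetch each heap tuple to check its validity:

  CREATE TABLE t (id int4, val text);
  CREATE INDEX t_id_idx ON t (id);
  EXPLAIN SELECT id FROM t WHERE id = 42;
  -- NOTICE:  QUERY PLAN:
  -- Index Scan using t_id_idx on t  (cost=...)

Answering such a query from the index alone would require validity information in the index itself, which, as noted at the top of the thread, is stored only in the main table.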
[ { "msg_contents": "Alfred,\n\n Is there a tarbar with the updated files for the vacuum patch? Or,\nis there some way to use the 'v.diff' file without the need to modify\nthe files by hand? I started changing the files by hand, but realized\nthat there is so much information that I'm bound to make a mistake in\nthe manual update.\n\nThanks.\n-Tony Reina\n\n\n\n\n> There's three patchsets and they are available at:\n>\n> http://people.freebsd.org/~alfred/vacfix/\n>\n> complete diff:\n> http://people.freebsd.org/~alfred/vacfix/v.diff\n>\n> only lazy vacuum option to speed up index vacuums:\n> http://people.freebsd.org/~alfred/vacfix/vlazy.tgz\n>\n> only lazy vacuum option to only scan from start of modified\n> data:\n> http://people.freebsd.org/~alfred/vacfix/mnmb.tgz\n>\n\n", "msg_date": "Tue, 23 Jan 2001 11:46:55 -0800", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patches with vacuum fixes available for 7.0.x" } ]
[ { "msg_contents": "While I'm at it and before I forget the 76 places one needs to edit to\nadd/remove a system catalog column, what are people's feelings about the\nusecatupd column?\n\nThe use of this field is that, if false, it disallows any direct\nmodification of system catalogs, even for superusers. In the past there\nwere several opinions that this field was useless/confusing/stupid/not\nworthwhile.\n\nI'm also going to remove the usetrace column, since that's not used.\n\n(post-7.1 material, btw.)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 23 Jan 2001 21:23:30 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "pg_shadow.usecatupd attribute" }, { "msg_contents": "\nYes, I vote for removal.\n\n> While I'm at it and before I forget the 76 places one needs to edit to\n> add/remove a system catalog column, what are people's feelings about the\n> usecatupd column?\n> \n> The use of this field is that, if false, it disallows any direct\n> modification of system catalogs, even for superusers. In the past there\n> were several opinions that this field was useless/confusing/stupid/not\n> worthwhile.\n> \n> I'm also going to remove the usetrace column, since that's not used.\n> \n> (post-7.1 material, btw.)\n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Jan 2001 15:28:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_shadow.usecatupd attribute" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> While I'm at it and before I forget the 76 places one needs to edit to\n> add/remove a system catalog column, what are people's feelings about the\n> usecatupd column?\n\nUnless someone pops up and says that they're actually using it,\nI'd agree with removing it. It seems like the sort of thing that\nmight be a good idea but isn't actually getting used.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Jan 2001 17:08:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_shadow.usecatupd attribute " }, { "msg_contents": "Just to clarify for stupid me: you want to remove it and forbid catalog\nupdates or remove it and allow catalog updates? (I hope its latter :)\n\nOn Tue, 23 Jan 2001, Tom Lane wrote:\n\n> Peter Eisentraut <[email protected]> writes:\n> > While I'm at it and before I forget the 76 places one needs to edit to\n> > add/remove a system catalog column, what are people's feelings about the\n> > usecatupd column?\n> \n> Unless someone pops up and says that they're actually using it,\n> I'd agree with removing it. It seems like the sort of thing that\n> might be a good idea but isn't actually getting used.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n", "msg_date": "Tue, 23 Jan 2001 21:15:54 -0500 (EST)", "msg_from": "Alex Pilosov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_shadow.usecatupd attribute " }, { "msg_contents": "Alex Pilosov <[email protected]> writes:\n> Just to clarify for stupid me: you want to remove it and forbid catalog\n> updates or remove it and allow catalog updates? (I hope its latter :)\n\nRight, the latter. 
If anyone is actually using usecatupd to prevent\nthemselves from shooting themselves in the foot, speak now or forever\nhold your peace ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Jan 2001 23:32:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_shadow.usecatupd attribute " } ]
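For anyone unsure what the flag actually gates, a sketch; the catalog and column names are real, the UPDATE is a deliberately unsafe example that is expected to fail, and the error text is approximate and version-dependent:

  -- see who has the flag:
  SELECT usename, usesuper, usecatupd FROM pg_shadow;

  -- with usecatupd false, even a superuser's direct catalog write is refused:
  UPDATE pg_class SET relname = 'foo' WHERE relname = 'bar';
  -- ERROR:  pg_class: Permission denied.    (approximate message)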
[ { "msg_contents": "> I had thought that the pre-commit information could be stored in an\n> auxiliary table by the middleware program ; we would then have\n> to re-implement some sort of higher-level WAL (I thought of the list\n> of the commands performed in the current transaction, with a sequence\n> number for each of them that would guarantee correct ordering between\n> concurrent transactions in case of a REDO). But I fear I am missing\n\nThis wouldn't work for READ COMMITTED isolation level.\nBut why do you want to log commands into WAL where each modification\nis already logged in, hm, correct order?\nWell, it has sense if you're looking for async replication but\nyou need not in two-phase commit for this and should aware about\nproblems with READ COMMITTED isolevel.\n\nBack to two-phase commit - it's easiest part of work required for\ndistributed transaction processing.\nCurrently we place single commit record to log and transaction is\ncommitted when this record (and so all other transaction records)\nis on disk.\nTwo-phase commit:\n\n1. For 1st phase we'll place into log \"prepared-to-commit\" record\n and this phase will be accomplished after record is flushed on disk.\n At this point transaction may be committed at any time because of\n all its modifications are logged. But it still may be rolled back\n if this phase failed on other sites of distributed system.\n\n2. When all sites are prepared to commit we'll place \"committed\"\n record into log. No need to flush it because of in the event of\n crash for all \"prepared\" transactions recoverer will have to\n communicate other sites to know their statuses anyway.\n\nThat's all! It is really hard to implement distributed lock- and\ncommunication- managers but there is no problem with logging two\nrecords instead of one. Period.\n\nVadim\n", "msg_date": "Tue, 23 Jan 2001 13:10:34 -0800", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Re: AW: Re: MySQL and BerkleyDB (fwd)" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> > I had thought that the pre-commit information could be stored in an\n> > auxiliary table by the middleware program ; we would then have\n> > to re-implement some sort of higher-level WAL (I thought of the list\n> > of the commands performed in the current transaction, with a sequence\n> > number for each of them that would guarantee correct ordering between\n> > concurrent transactions in case of a REDO). But I fear I am missing\n> \n> This wouldn't work for READ COMMITTED isolation level.\n> But why do you want to log commands into WAL where each modification\n> is already logged in, hm, correct order?\n> Well, it has sense if you're looking for async replication but\n> you need not in two-phase commit for this and should aware about\n> problems with READ COMMITTED isolevel.\n> \n\nI believe the issue here is that while SERIALIZABLE ISOLATION means all\nqueries can be run serially, our default is READ COMMITTED, meaning that\nopen transactions see committed transactions, even if the transaction\ncommitted after our transaction started. 
(FYI, see my chapter on\ntransactions for help, http://www.postgresql.org/docs/awbook.html.)\n\nTo do higher-level WAL, you would have to record not only the queries,\nbut the other queries that were committed at the start of each command\nin your transaction.\n\nIdeally, you could number every commit by its XID in your log, and then\nwhen processing the query, pass the \"committed\" transaction ids that\nwere visible at the time each command began.\n\nIn other words, you can replay the queries in transaction commit order,\nexcept that you have to have some transactions committed at specific\npoints while other transactions are open, i.e.:\n\nXID\tOpen XIDS\tQuery\n500\t\t\tUPDATE t SET col = 3;\n501\t500\t\tBEGIN;\n501\t500\t\tUPDATE t SET col = 4;\n501\t\t\tUPDATE t SET col = 5;\n501\t\t\tCOMMIT;\n\nThis is a silly example, but it shows that 500 must commit after the\nfirst command in transaction 501, but before the second command in the\ntransaction. This is because UPDATE t SET col = 5 actually sees the\nchanges made by transaction 500 in READ COMMITTED isolation level.\n\nI am not advocating this. I think WAL is a better choice. I just\nwanted to outline how replaying the queries in commit order is \ninsufficient.\n\n> Back to two-phase commit - it's easiest part of work required for\n> distributed transaction processing.\n> Currently we place single commit record to log and transaction is\n> committed when this record (and so all other transaction records)\n> is on disk.\n> Two-phase commit:\n> \n> 1. For 1st phase we'll place into log \"prepared-to-commit\" record\n> and this phase will be accomplished after record is flushed on disk.\n> At this point transaction may be committed at any time because of\n> all its modifications are logged. But it still may be rolled back\n> if this phase failed on other sites of distributed system.\n> \n> 2. When all sites are prepared to commit we'll place \"committed\"\n> record into log. No need to flush it because of in the event of\n> crash for all \"prepared\" transactions recoverer will have to\n> communicate other sites to know their statuses anyway.\n> \n> That's all! It is really hard to implement distributed lock- and\n> communication- managers but there is no problem with logging two\n> records instead of one. Period.\n\nGreat.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Jan 2001 16:53:53 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: AW: Re: MySQL and BerkleyDB (fwd)" }, { "msg_contents": "\nAdded to TODO.detail/replication.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> > I had thought that the pre-commit information could be stored in an\n> > auxiliary table by the middleware program ; we would then have\n> > to re-implement some sort of higher-level WAL (I thought of the list\n> > of the commands performed in the current transaction, with a sequence\n> > number for each of them that would guarantee correct ordering between\n> > concurrent transactions in case of a REDO). 
But I fear I am missing\n> \n> This wouldn't work for READ COMMITTED isolation level.\n> But why do you want to log commands into WAL where each modification\n> is already logged in, hm, correct order?\n> Well, it has sense if you're looking for async replication but\n> you need not in two-phase commit for this and should aware about\n> problems with READ COMMITTED isolevel.\n> \n> Back to two-phase commit - it's easiest part of work required for\n> distributed transaction processing.\n> Currently we place single commit record to log and transaction is\n> committed when this record (and so all other transaction records)\n> is on disk.\n> Two-phase commit:\n> \n> 1. For 1st phase we'll place into log \"prepared-to-commit\" record\n> and this phase will be accomplished after record is flushed on disk.\n> At this point transaction may be committed at any time because of\n> all its modifications are logged. But it still may be rolled back\n> if this phase failed on other sites of distributed system.\n> \n> 2. When all sites are prepared to commit we'll place \"committed\"\n> record into log. No need to flush it because of in the event of\n> crash for all \"prepared\" transactions recoverer will have to\n> communicate other sites to know their statuses anyway.\n> \n> That's all! It is really hard to implement distributed lock- and\n> communication- managers but there is no problem with logging two\n> records instead of one. Period.\n> \n> Vadim\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 22:36:39 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: AW: Re: MySQL and BerkleyDB (fwd)" }, { "msg_contents": "\nAdded to TODO.detail/replication.\n\n> [ Charset ISO-8859-1 unsupported, converting... ]\n> > > I had thought that the pre-commit information could be stored in an\n> > > auxiliary table by the middleware program ; we would then have\n> > > to re-implement some sort of higher-level WAL (I thought of the list\n> > > of the commands performed in the current transaction, with a sequence\n> > > number for each of them that would guarantee correct ordering between\n> > > concurrent transactions in case of a REDO). But I fear I am missing\n> > \n> > This wouldn't work for READ COMMITTED isolation level.\n> > But why do you want to log commands into WAL where each modification\n> > is already logged in, hm, correct order?\n> > Well, it has sense if you're looking for async replication but\n> > you need not in two-phase commit for this and should aware about\n> > problems with READ COMMITTED isolevel.\n> > \n> \n> I believe the issue here is that while SERIALIZABLE ISOLATION means all\n> queries can be run serially, our default is READ COMMITTED, meaning that\n> open transactions see committed transactions, even if the transaction\n> committed after our transaction started. 
(FYI, see my chapter on\n> transactions for help, http://www.postgresql.org/docs/awbook.html.)\n> \n> To do higher-level WAL, you would have to record not only the queries,\n> but the other queries that were committed at the start of each command\n> in your transaction.\n> \n> Ideally, you could number every commit by its XID your log, and then\n> when processing the query, pass the \"committed\" transaction ids that\n> were visible at the time each command began.\n> \n> In other words, you can replay the queries in transaction commit order,\n> except that you have to have some transactions committed at specific\n> points while other transactions are open, i.e.:\n> \n> XID\tOpen XIDS\tQuery\n> 500\t\t\tUPDATE t SET col = 3;\n> 501\t500\t\tBEGIN;\n> 501\t500\t\tUPDATE t SET col = 4;\n> 501\t\t\tUPDATE t SET col = 5;\n> 501\t\t\tCOMMIT;\n> \n> This is a silly example, but it shows that 500 must commit after the\n> first command in transaction 501, but before the second command in the\n> transaction. This is because UPDATE t SET col = 5 actually sees the\n> changes made by transaction 500 in READ COMMITTED isolation level.\n> \n> I am not advocating this. I think WAL is a better choice. I just\n> wanted to outline how replaying the queries in commit order is \n> insufficient.\n> \n> > Back to two-phase commit - it's easiest part of work required for\n> > distributed transaction processing.\n> > Currently we place single commit record to log and transaction is\n> > committed when this record (and so all other transaction records)\n> > is on disk.\n> > Two-phase commit:\n> > \n> > 1. For 1st phase we'll place into log \"prepared-to-commit\" record\n> > and this phase will be accomplished after record is flushed on disk.\n> > At this point transaction may be committed at any time because of\n> > all its modifications are logged. But it still may be rolled back\n> > if this phase failed on other sites of distributed system.\n> > \n> > 2. When all sites are prepared to commit we'll place \"committed\"\n> > record into log. No need to flush it because of in the event of\n> > crash for all \"prepared\" transactions recoverer will have to\n> > communicate other sites to know their statuses anyway.\n> > \n> > That's all! It is really hard to implement distributed lock- and\n> > communication- managers but there is no problem with logging two\n> > records instead of one. Period.\n> \n> Great.\n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 22:36:58 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: AW: Re: MySQL and BerkleyDB (fwd)" } ]
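Bruce's READ COMMITTED point can be reproduced with two concurrent psql sessions; a sketch assuming a table t holding a single row with col = 3, time flowing downward:

  -- session 1                        -- session 2
  BEGIN;
  SELECT col FROM t;   -- returns 3
                                      UPDATE t SET col = 5;  -- autocommits
  SELECT col FROM t;   -- returns 5 under READ COMMITTED
  COMMIT;

A replay that serializes whole transactions in commit order cannot reproduce what session 1 actually saw between its two SELECTs, which is why the per-command visibility bookkeeping described above would be needed.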
[ { "msg_contents": "A patch to gram.y in src/backend/parser\n\nProvides for the SQL99 expected behavior of\n select * from foo where fo_num between 1 and 5\nyields the same result as\n select * from foo where fo_num between 5 and 1\n\nGranted this is brute force and not very elegant, however it does provide \nthe correct behavior. Optimally it would be nice to do a comparison on the \nvalues after between and then sort the two limiters and do a single \nrewrite leaving only one pass or scan.\n\nIn other words in pseudo SQL:\n\nselect * from foo where fo_num between a and b\n\nbecomes\n\nselect * from foo where ((fo_num >= min_value(a, b)) and (fo_num <= \nmax_value(a,b))\n\nThis would yield only two comparisons or resolutions and then a single \nsequential or index scan to find the correct tuples.\n\nThis was done against beta1...\n\n-- \n- Thomas Swan\n- Network Administrator\n- Graduate Student - Computer Science\n-\n- The Institute for Continuing Studies\n- The University of Mississippi\n-\n- http://www.ics.olemiss.edu\n- http://www.olemiss.edu", "msg_date": "Tue, 23 Jan 2001 15:17:49 -0600", "msg_from": "Thomas Swan <[email protected]>", "msg_from_op": true, "msg_subject": "BETWEEN patch" }, { "msg_contents": "Thomas Swan <[email protected]> writes:\n> A patch to gram.y in src/backend/parser\n> Provides for the SQL99 expected behavior of\n> select * from foo where fo_num between 1 and 5\n> yields the same result as\n> select * from foo where fo_num between 5 and 1\n\nThis is NOT correct under either SQL92 or SQL99. Read the spec again.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Jan 2001 11:19:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BETWEEN patch " }, { "msg_contents": "At 1/24/2001 10:19 AM, Tom Lane wrote:\n>Thomas Swan <[email protected]> writes:\n> > A patch to gram.y in src/backend/parser\n> > Provides for the SQL99 expected behavior of\n> > select * from foo where fo_num between 1 and 5\n> > yields the same result as\n> > select * from foo where fo_num between 5 and 1\n>\n>This is NOT correct under either SQL92 or SQL99. Read the spec again.\n>\n> regards, tom lane\n\nAfter sending it... I realized that it was not correct either. So, I'm \nback to figuring how to do it... so, um, ignore the previous patch...\n\nThanks..\n\n", "msg_date": "Wed, 24 Jan 2001 13:23:44 -0600", "msg_from": "Thomas Swan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BETWEEN patch " }, { "msg_contents": "Thomas Swan <[email protected]> writes:\n> select * from foo where fo_num between 1 and 5\n> yields the same result as\n> select * from foo where fo_num between 5 and 1\n>> \n>> This is NOT correct under either SQL92 or SQL99. Read the spec again.\n\n> After sending it... I realized that it was not correct either. So, I'm \n> back to figuring how to do it... so, um, ignore the previous patch...\n\nSomeone (Easter, IIRC) already submitted a patch to support the SQL99\nSYMMETRIC BETWEEN option. I think we're sitting on it till 7.2, on the\ngrounds of \"no new features in beta\". Check the list archives for the\nlast few weeks if you need it now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Jan 2001 23:15:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BETWEEN patch " } ]
[ { "msg_contents": "This snippet in pg_dumpall\n\n$PSQL -d template1 -At -F ' ' \\\n -c \"SELECT datname, usename, pg_encoding_to_char(d.encoding),\ndatistemplate, datpath FROM pg_database d LEFT JOIN pg_shadow u ON (datdba\n= usesysid) WHERE datallowconn;\" | \\\nwhile read DATABASE DBOWNER ENCODING ISTEMPLATE DBPATH; do\n\n(line breaks messed up)\n\nwon't actually work if there indeed happens to be a database without a\nvalid owner, because the 'read' command will take ENCODING as the dba\nname.\n\nI guess the real question is, what should be done in this case? I think\nit might be better to error out and let the user fix his database before\nbacking it up.\n\n(At a glance, I think pg_dump also has some problems with these sort of\nconstellations.)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 23 Jan 2001 22:22:22 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "LEFT JOIN in pg_dumpall is a bug" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> This snippet in pg_dumpall\n> $PSQL -d template1 -At -F ' ' \\\n> -c \"SELECT datname, usename, pg_encoding_to_char(d.encoding),\n> datistemplate, datpath FROM pg_database d LEFT JOIN pg_shadow u ON (datdba\n> = usesysid) WHERE datallowconn;\" | \\\n> while read DATABASE DBOWNER ENCODING ISTEMPLATE DBPATH; do\n\n> won't actually work if there indeed happens to be a database without a\n> valid owner, because the 'read' command will take ENCODING as the dba\n> name.\n\nOops, you're right, the read won't keep the columns straight. Come to\nthink of it, it would do the wrong thing for empty-string datname or\nusename, too, and it's only because datpath is the last column that\nwe haven't noticed it doing the wrong thing on empty datpath.\n\nIs there a more robust way of reading the data into the script?\n\n> I guess the real question is, what should be done in this case? I think\n> it might be better to error out and let the user fix his database before\n> backing it up.\n\nPossibly. The prior state of the code (before I put in the LEFT JOIN)\nwould silently ignore any database with no matching user, which was\ndefinitely NOT a good idea.\n\nI think I'd rather see a warning, though, and let the script try to dump\nthe DB anyway.\n\n> (At a glance, I think pg_dump also has some problems with these sort of\n> constellations.)\n\nYes, there are a number of places where pg_dump should be doing outer\njoins and isn't. I think Tatsuo is at work on that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Jan 2001 16:59:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LEFT JOIN in pg_dumpall is a bug " }, { "msg_contents": "Tom Lane writes:\n\n> > $PSQL -d template1 -At -F ' ' \\\n> > -c \"SELECT datname, usename, pg_encoding_to_char(d.encoding),\n> > datistemplate, datpath FROM pg_database d LEFT JOIN pg_shadow u ON (datdba\n> > = usesysid) WHERE datallowconn;\" | \\\n> > while read DATABASE DBOWNER ENCODING ISTEMPLATE DBPATH; do\n\n> Oops, you're right, the read won't keep the columns straight. 
Come to\n> think of it, it would do the wrong thing for empty-string datname or\n> usename, too,\n\nIt won't actually work to restore such a setup, because zero-length\nidentifiers are no longer allowed.\n\n> Is there a more robust way of reading the data into the script?\n\nProvided that 'cut' is portable, then this works for me:\n\nTAB=' ' # tab here\n\n$PSQL -d template1 -At -F \"$TAB\" \\\n -c \"SELECT datname, usename, pg_encoding_to_char(d.encoding),\ndatistemplate, datpath FROM pg_database d LEFT JOIN pg_shadow u ON (datdba\n= usesysid) WHERE datallowconn;\" | \\\nwhile read THINGS; do\n DATABASE=`echo \"$THINGS\" | cut -f 1`\n DBOWNER=`echo \"$THINGS\" | cut -f 2`\n ENCODING=`echo \"$THINGS\" | cut -f 3`\n ISTEMPLATE=`echo \"$THINGS\" | cut -f 4`\n DBPATH=`echo \"$THINGS\" | cut -f 5`\n\nIf 'cut' is not portable, then I don't believe you can do it with\nIFS-based word splitting, because two adjacent separator characters don't\nseem to indicate an empty field but are instead taken as one separator.\n\n> I think I'd rather see a warning, though, and let the script try to dump\n> the DB anyway.\n\nMaybe for databases without an owner, but not for empty database or user\nnames.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 24 Jan 2001 19:03:32 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LEFT JOIN in pg_dumpall is a bug " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> Is there a more robust way of reading the data into the script?\n\n> Provided that 'cut' is portable, then this works for me:\n\nMy old copy of Horton's _Portable C Software_ says that cut(1) is a\nSysV-ism adopted by POSIX. At that time (1990) it wasn't portable,\nand he recommended using awk or sed instead.\n\nIf you think depending on POSIX utilities is OK, then use cut.\nI'd recommend sed, though. The GNU coding standards for Makefiles\nsuggest not depending on programs outside this set:\n\n cat cmp cp diff echo egrep expr false grep install-info\n ln ls mkdir mv pwd rm rmdir sed sleep sort tar test touch true\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Jan 2001 13:08:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LEFT JOIN in pg_dumpall is a bug " }, { "msg_contents": "Tom Lane writes:\n\n> If you think depending on POSIX utilities is OK, then use cut.\n> I'd recommend sed, though.\n\nThis has gotten pretty silly:\n\nTAB=' ' # tab here\n\n$PSQL -d template1 -At -F \"$TAB\" \\\n -c \"SELECT datname, usename, pg_encoding_to_char(d.encoding),\ndatistemplate, datpath, 'x' FROM pg_database d LEFT JOIN pg_shadow u ON\n(datdba = usesysid) WHERE datallowconn;\" | \\\nwhile read RECORDS; do\n DATABASE=`echo \"x$RECORDS\" | sed \"s/^x\\([^$TAB]*\\).*/\\1/\"`\n DBOWNER=`echo \"x$RECORDS\" | sed \"s/^x[^$TAB]*$TAB\\([^$TAB]*\\).*/\\1/\"`\n ENCODING=`echo \"x$RECORDS\" | sed \"s/^x[^$TAB]*$TAB[^$TAB]*$TAB\\([^$TAB]*\\).*/\\1/\"`\n ISTEMPLATE=`echo \"x$RECORDS\" | sed \"s/^x[^$TAB]*$TAB[^$TAB]*$TAB[^$TAB]*$TAB\\([^$TAB]*\\).*/\\1/\"`\n DBPATH=`echo \"x$RECORDS\" | sed \"s/^x[^$TAB]*$TAB[^$TAB]*$TAB[^$TAB]*$TAB[^$TAB]*$TAB\\([^$TAB]*\\).*/\\1/\"`\n\nI'm not sure whether this is actually an overall improvement. 
I'm tempted\nto just coalesce(usename, {some default user}) instead.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 24 Jan 2001 22:36:37 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LEFT JOIN in pg_dumpall is a bug " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I'm not sure whether this is actually an overall improvement. I'm tempted\n> to just coalesce(usename, {some default user}) instead.\n\nI thought about that to begin with, but figured you wouldn't like it ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Jan 2001 18:03:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LEFT JOIN in pg_dumpall is a bug " } ]
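
A further alternative, not tried in the thread (a sketch, assuming a POSIX shell and that no database or user name contains '|'): POSIX word splitting only collapses adjacent whitespace IFS characters, so a non-whitespace separator such as '|' does delimit empty fields, and a plain 'read' can then keep the columns straight without cut or sed:

    SEP='|'

    $PSQL -d template1 -At -F "$SEP" \
      -c "SELECT datname, usename, pg_encoding_to_char(d.encoding), datistemplate, datpath FROM pg_database d LEFT JOIN pg_shadow u ON (datdba = usesysid) WHERE datallowconn;" | \
    while IFS="$SEP" read DATABASE DBOWNER ENCODING ISTEMPLATE DBPATH; do
        : # empty fields now arrive as empty variables instead of shifting columns
    done
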
[ { "msg_contents": "> > > But how Oracle handles this?\n> >\n> > Oracle doesn't have non-overwriting storage manager but uses\n> > rollback segments to maintain MVCC. Rollback segments are used\n> > to restore valid version of entire index/table page.\n> \n> Are there any plans to have something like this? I mean \n> overwriting storage manager.\n\nWell, I have plans to reimplement storage manager to allow space\nre-use without vacuum but without switching to overwriting, at least\nin near future - achievements/drawbacks are still questionable.\n\nWe could add transaction data to index tuples but this would increase\ntheir size by ~ 16bytes. To estimate how this would affect performance\nfor mostly statical tables one can run tests with schema below:\n\ncreate table i1 (i int, k int, l char(16));\ncreate index i_i1 on i1 (i);\ncreate table i2 (i int, k int, l char(16));\ncreate index i_i2 on i2 (i, k, l);\n\nNow fill tables with same data and run queries using only \"I\" in where\nclause.\n\nVadim\n", "msg_date": "Tue, 23 Jan 2001 14:20:34 -0800", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Does Oracle store values in indices?" }, { "msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> We could add transaction data to index tuples but this would increase\n> their size by ~ 16bytes.\n\nThe increased space is the least of the drawbacks. Consider also the\ntime needed to maintain N copies of a tuple's commit status instead of\none. Even finding the N copies would cost a lot more than the single\ndisk transfer involved now ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Jan 2001 18:01:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does Oracle store values in indices? " } ]
[ { "msg_contents": "\nWe'd like to employ a Postgres hacker to speed development of enterprise\nfeatures in Postgres. All code would be immediately contributed back to\nthe community. We are not interested in proprietary extensions.\n\nWe're a fun company located in Mountain View, CA, with all the usual\nbenefits such as medical and stock options plus some less common benefits\nsuch as free massages and shower facilities on site.\n\nPlease reply with an ASCII resume.\n\nM Carling\nCIO\nAxis, Inc.\n\n", "msg_date": "Tue, 23 Jan 2001 17:45:28 -0800 (PST)", "msg_from": "M Carling <[email protected]>", "msg_from_op": true, "msg_subject": "[JOB] pqsql hacker needed" } ]
[ { "msg_contents": "> > What I will probably do is make a wrapper around it so it I can do:\n> >\n> > \tls | oidmapper\n> >\n> > and see the files as table names.\n> \n> Hmmm.... I think I can add that to the code..\n> \n> will try..\n> \n\nIt has to be pretty smart. Consider this:\n\n\t$ pwd\n\t/u/pg/data/base/18720\n\t$ ls -l\n\nIt has to read the directories above, looking for a directory name that\nis all numbers. It needs to then use that to find the database name. \nOf course, if you are not in the directory, you may have a problem with\nthe database and require them to specify it on the command line.\n\nIt then has to process the the contents of ls -l and find the oids in\nthere and map them:\n\n\ttotal 2083\n\t-rw------- 1 postgres postgres 8192 Jan 15 23:43 1215\n\t-rw------- 1 postgres postgres 8192 Jan 15 23:43 1216\n\t-rw------- 1 postgres postgres 8192 Jan 15 23:43 1219\n\t-rw------- 1 postgres postgres 24576 Jan 15 23:43 1247\n\t-rw------- 1 postgres postgres 114688 Jan 19 21:43 1249\n\t-rw------- 1 postgres postgres 229376 Jan 15 23:43 1255\n\t-rw------- 1 postgres postgres 24576 Jan 15 23:59 1259\n\t-rw------- 1 postgres postgres 8192 Jan 15 23:43 16567\n\t-rw------- 1 postgres postgres 16384 Jan 16 00:04 16579\n\nThe numbers <16k are system tables so you probably need code to lookup\nstuff <16k, and if it doesn't begin with pg_, it is not an oid.\n\nIt also should handle 'du':\n\n\t$ du\n\t1517 ./1\n\t1517 ./18719\n\t2085 ./18720\n\t1517 ./27592\n\t20561 ./27593\n\t27198 .\n\t\nAs you can see, this could be tricky.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Jan 2001 23:10:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: $PGDATA/base/???" } ]
[ { "msg_contents": "Now that MySQL has transaction support through Berkeley DB lib, and it's\nalways had way more data types, what are the main advantages postgresql has\nover it? I don't think mysql has subselects and such, but they did add a\nmaster-slave replication feature as well as online reorganization (perhaps\nlocks tables like vacuum?).\n\nAnybody used both of the current releases who can comment?\n\nThanks,\nDavid\n\n\n", "msg_date": "Tue, 23 Jan 2001 20:30:10 -0800", "msg_from": "\"David Wall\" <[email protected]>", "msg_from_op": true, "msg_subject": "MySQL has transactions" }, { "msg_contents": "Postgresql's SQL implementation is way ahead of MySQL's relatively\nstunted vocabulary. But on the other hand, MySQL implements most\nof the popular functionality. The other thing is that MySQL is\nblindingly fast and has a very uncomplicated API.\n\nIf you need real SQL and can't afford Oracle/Sybase/DB2 then the\nobvious choice is Postgresql. If you need speed and simplicity\nand maximum ease of administration and maintenance, that would \nbe MySQL.\n\n -joseph\n\nDavid Wall wrote:\n> \n> Now that MySQL has transaction support through Berkeley DB lib, and it's\n> always had way more data types, what are the main advantages postgresql has\n> over it? I don't think mysql has subselects and such, but they did add a\n> master-slave replication feature as well as online reorganization (perhaps\n> locks tables like vacuum?).\n> \n> Anybody used both of the current releases who can comment?\n", "msg_date": "Tue, 23 Jan 2001 22:19:12 -0700", "msg_from": "\"Joseph N. Hall\" <\" <heard_it_on_the_internet> \"@5sigma.com>", "msg_from_op": false, "msg_subject": "Re: MySQL has transactions" }, { "msg_contents": "At 8:30 PM -0800 1/23/01, David Wall wrote:\n>Now that MySQL has transaction support through Berkeley DB lib, and it's\n>always had way more data types, what are the main advantages postgresql has\n>over it? I don't think mysql has subselects and such, but they did add a\n>master-slave replication feature as well as online reorganization (perhaps\n>locks tables like vacuum?).\n>\n>Anybody used both of the current releases who can comment?\n\n\nI haven't seen the new mysql. My feeling is that all things being \nequal, gluing transactions on top of a database implementation can \nnot possibly be as stable and correct as building them in from the \nbeginning. The design heuristic that applies is \"Make it run first, \nTHEN make it run fast.\" Mysql was built to run fast from the \nbeginning, and now they're jamming in functionality. So if I needed \ntransactions I'd go with postgres until mysql has a track record.\n\nI happen to be on a project at this very moment in which we're \nconverting a mysql database to postgres specifically to get \ntransactions, and I prefer making the conversion rather than taking a \nchance on mysql transactions.\n\nI'd be interested to hear any arguments or real-life experiences pro or con.\n\nSteve Leibel\n\n", "msg_date": "Wed, 24 Jan 2001 01:09:06 -0500", "msg_from": "Steve Leibel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL has transactions" }, { "msg_contents": "> I haven't seen the new mysql. My feeling is that all things being\n> equal, gluing transactions on top of a database implementation can\n> not possibly be as stable and correct as building them in from the\n> beginning. 
The design heuristic that applies is \"Make it run first,\n> THEN make it run fast.\" Mysql was built to run fast from the\n> beginning, and now they're jamming in functionality. So if I needed\n> transactions I'd go with postgres until mysql has a track record.\n\nYou may be right, though they did this with berkeley db, which I guess is\npretty stable with transaction support.\n\nThe problems I'm having with postgresql are mainly in the area of blobs. I\nneed to store several binary objects, generally in the 800-2400 byte range,\nand I also need to store text messages sent by people, and I have to deal\nwith truncation and such to stay within the 8k row-size limit. I've heard I\ncan update the blocksize to 32k, but then I've read this has other negative\nimpacts and that 7.1 solves it anyway -- but when will that be stable and\nready?\n\nAnyway, I'm giving them both a quick test, primarily with regard to\ntransactions and blobs. I can report back what I learn, but it will only be\nat a testing level, and I'd prefer to hear from production users.\n\nDavid\n\n", "msg_date": "Tue, 23 Jan 2001 22:52:00 -0800", "msg_from": "\"David Wall\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: MySQL has transactions" }, { "msg_contents": "On Wed, 24 Jan 2001 01:09:06 -0500\nSteve Leibel <[email protected]> wrote:\n\nHi all\nI have had the unpleasant experience of developing for MySQL at work, while at home I can enjoy using PostGres for my part-time work.\n\n> At 8:30 PM -0800 1/23/01, David Wall wrote:\n> >Now that MySQL has transaction support through Berkeley DB lib, and it's\n> >always had way more data types, what are the main advantages postgresql has\n> >over it? I don't think mysql has subselects and such, but they did add a\n> >master-slave replication feature as well as online reorganization (perhaps\n> >locks tables like vacuum?).\n> >\n> >Anybody used both of the current releases who can comment?\n\nI must admit, I *haven't* used the version of MySQL with transaction support enabled, but they have numerous other issues too....\n \n> \n> I haven't seen the new mysql. My feeling is that all things being \n> equal, gluing transactions on top of a database implementation can \n> not possibly be as stable and correct as building them in from the \n> beginning. The design heuristic that applies is \"Make it run first, \n> THEN make it run fast.\" Mysql was built to run fast from the \n> beginning, and now they're jamming in functionality. So if I needed \n> transactions I'd go with postgres until mysql has a track record.\n> \n> I happen to be on a project at this very moment in which we're \n> converting a mysql database to postgres specifically to get \n> transactions, and I prefer making the conversion rather than taking a \n> chance on mysql transactions.\n> \n> I'd be interested to hear any arguments or real-life experiences pro or con.\n\nFirstly, I agree whole-heartedly with this. Transactions are unlikely to work well if they haven't been designed in from the outset. They're also sure to put quite substantial overhead on the processing of writes, so we'll see how well it performs now. 
But since I've not used the \ntransaction-enabled MySQL at all, I think that's all I'm fit to say at \nthis point...\n\nOther irritations I've found with MySQL are (briefly):\n- no subselects (makes for ugly hacks in code)\n- no views\n- no foreign keys\n- no constraint support\n- completely lacking date integrity checking (eg will accept '2001-15-45' \nas a valid date).\n- no rules\n- no triggers\n- no intersects or unions\n- table-level locking only\n- inability to go beyond FS limits of filesize for databases\nAll in all, about the only thing MySQL has going for it is the replication.\n\nThe only issues I've had with PostGres are:\n- this doesn't work: select a from b where a.fld in (select c from d where \ne = f intersect select c from d where e=g)\n\tbut I believe that will be working in 7.1\n- 8k row limit \n\tpretty severe, but can be fixed at compile-time to 32k. Completely removed for 7.1\n\n\nThanks guys!\n\nCiao\n", "msg_date": "Wed, 24 Jan 2001 13:06:30 +0200", "msg_from": "Zak McGregor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: MySQL has transactions" }, { "msg_contents": "On Tue, 23 Jan 2001, David Wall wrote:\n\n> Now that MySQL has transaction support through Berkeley DB lib, and it's\n> always had way more data types, what are the main advantages postgresql has\n> over it? I don't think mysql has subselects and such, but they did add a\n> master-slave replication feature as well as online reorganization (perhaps\n> locks tables like vacuum?).\n\nI've been using PostgreSQL since about 1997, and only used MySQL for the\nfirst time last year, and MySQL just seems too sparse for a lot of things.\nThe lack of foreign key constraints and views is a problem for me.\nPostgreSQL still has more features, like triggers, rules, referential\nintegrity, views, sub-selects, row-level locking, to name a few things.\n\nI think MySQL is a very good way to introduce beginners to SQL and\ndatabase concepts, but you can only go so far with it. It's very good for\ndoing archiving of static data and fast retrieval for websites, but I\nwouldn't build an e-commerce site with it.\n\n-- Brett\n http://www.chapelperilous.net/~bmccoy/\n---------------------------------------------------------------------------\nDear Lord:\n\tI just want *one* one-armed manager so I never have to hear \"On\nthe other hand\", again.\n\n", "msg_date": "Wed, 24 Jan 2001 07:18:02 -0500 (EST)", "msg_from": "\"Brett W. McCoy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL has transactions" }, { "msg_contents": "I completely agree! I've been \"shopping\" for affordable databases over the \nlast months and have come to the conclusion that postgresql has the most \npowerful features. I ruled out mySql immediately because of the same things \nyou pointed out. 
I found Interbase to be the biggest contender to postgresql.\n\nOn my wishlist for postgresql the top three would be:\n\n* 24x7 support (load-balancing, failover, online-backup, multiple parallel \nservers, ...)\n* Fast case insensitive text search via indexes (function based indexes)\n* Java in the server (for triggers and functions)\n\n\nI know I'm quite modest :-)\n\nAlexander Jerusalem\[email protected]\nvknn\n\n\nAt 12:06 24.01.01, Zak McGregor wrote:\n>On Wed, 24 Jan 2001 01:09:06 -0500\n>Steve Leibel <[email protected]> wrote:\n>\n>Hi all\n>I have had the unpleasant experience of developing for MySQL at work, \n>while at home I can enjoy using PostGres for my part-time work.\n>\n> > At 8:30 PM -0800 1/23/01, David Wall wrote:\n> > >Now that MySQL has transaction support through Berkeley DB lib, and it's\n> > >always had way more data types, what are the main advantages \n> postgresql has\n> > >over it? I don't think mysql has subselects and such, but they did add a\n> > >master-slave replication feature as well as online reorganization (perhaps\n> > >locks tables like vacuum?).\n> > >\n> > >Anybody used both of the current releases who can comment?\n>\n>I must admit, I *haven't* used the version of MySQL with transaction \n>support enabled, but they have numerous other issues too....\n>\n> >\n> > I haven't seen the new mysql. My feeling is that all things being\n> > equal, gluing transactions on top of a database implementation can\n> > not possibly be as stable and correct as building them in from the\n> > beginning. The design heuristic that applies is \"Make it run first,\n> > THEN make it run fast.\" Mysql was built to run fast from the\n> > beginning, and now they're jamming in functionality. So if I needed\n> > transactions I'd go with postgres until mysql has a track record.\n> >\n> > I happen to be on a project at this very moment in which we're\n> > converting a mysql database to postgres specifically to get\n> > transactions, and I prefer making the conversion rather than taking a\n> > chance on mysql transactions.\n> >\n> > I'd be interested to hear any arguments or real-life experiences pro or \n> con.\n>\n>Firstly, I agree whole-heartedly with this. Transactions are unlikely to \n>work well if they haven't been designed in from the outset. They're also \n>sure to put quite substantial overhead on the processing of writes, so \n>we'll see how well it performs now. But since I've not used the \n>transaction-enabled MySQL at all, I think that's all I'm fit to say at \n>this point...\n>\n>Other irritations I've found with MySQL are (briefly):\n>- no subselects (makes for ugly hacks in code)\n>- no views\n>- no foreign keys\n>- no constraint support\n>- completely lacking date integrity checking (eg will accept '2001-15-45' \n>as a valid date).\n>- no rules\n>- no triggers\n>- no intersects or unions\n>- table-level locking only\n>- inability to go beyond FS limits of filesize for databases\n>All in all, about the only thing MySQL has going for it is the replication.\n>\n>The only issues I've had with PostGres are:\n>- this doesn't work: select a from b where a.fld in (select c from d where \n>e = f intersect select c from d where e=g)\n> but I believe that will be working in 7.1\n>- 8k row limit\n> pretty severe, but can be fixed at compile-time to 32k. 
\n> Completely removed for 7.1\n>\n>\n>Thanks guys!\n>\n>Ciao", "msg_date": "Wed, 24 Jan 2001 13:48:25 +0100", "msg_from": "Alexander Jerusalem <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: MySQL has transactions" }, { "msg_contents": "> * Fast case insensitive text search via indexes (function based indexes)\n\nTry to:\n\ncreate table test (s text);\ncreate index ix_test_s on test(lower(s));\n\nAnd try select * from test where lower(s) = 'test';\n\nIf you made vacuum, and have enough data in the table you will get index \nscan. Also you will get index scan for this:\nselect * from test where lower(s) like 'test%';\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------", "msg_date": "Wed, 24 Jan 2001 19:02:16 +0600", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Re: MySQL has transactions" }, { "msg_contents": "On Wed 24 Jan 2001 08:06, Zak McGregor wrote:\n>\n> Other irritations I've found with MySQL are (briefly):\n> - no subselects (makes for ugly hacks in code)\n> - no views\n> - no foreign keys\n\nDidn't know they didn't have foreign keys. :-(\n\n> - no constraint support\n> - completely lacking date integrity checking (eg will accept '2001-15-45'\n> as a valid date). \n\nThat is pretty ugly.\n\n> - no rules\n> - no triggers\n> - no intersects or unions\n> - table-level locking only\n> - inability to go beyond FS limits of filesize for databases\n> All in all, about the only thing MySQL has going for it is the replication.\n>\n> The only issues I've had with PostGres are:\n> - this doesn't work: select a from b where a.fld in (select c from d where\n> e = f intersect select c from d where e=g) but I believe that will be\n> working in 7.1\n> - 8k row limit\n> \tpretty severe, but can be fixed at compile-time to 32k. Completely\n> removed for 7.1\n\nBoth (AFAIK) are added in 7.1\n\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told me I had to do it.\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nSystems administrator at math.unl.edu.ar\n-----------------------------------------------------------------", "msg_date": "Wed, 24 Jan 2001 10:18:56 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: MySQL has transactions" }, { "msg_contents": "Wow! postgresql is a miracle! :-)\n\nI'm starting to wonder why anybody would want to use Oracle...\n\nAlexander Jerusalem\[email protected]\nvknn\n\nAt 14:02 24.01.01, Denis Perchine wrote:\n> > * Fast case insensitive text search via indexes (function based indexes)\n>\n>Try to:\n>\n>create table test (s text);\n>create index ix_test_s on test(lower(s));\n>\n>And try select * from test where lower(s) = 'test';\n>\n>If you made vacuum, and have enough data in the table you will get index\n>scan. 
Also you will get index scan for this:\n>select * from test where lower(s) like 'test%';\n>\n>--\n>Sincerely Yours,\n>Denis Perchine\n>\n>----------------------------------\n>E-Mail: [email protected]\n>HomePage: http://www.perchine.com/dyp/\n>FidoNet: 2:5000/120.5\n>----------------------------------\n\n", "msg_date": "Wed, 24 Jan 2001 14:38:25 +0100", "msg_from": "Alexander Jerusalem <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Re: MySQL has transactions" }, { "msg_contents": "On Wed, 24 Jan 2001 10:18:56 -0300\n\"Martin A. Marques\" <[email protected]> wrote:\n\n> El Mi� 24 Ene 2001 08:06, Zak McGregor escribi�:\n> >\n> > Other irritations I've found with MySQL are (briefly):\n> > - no subselects (makes for ugly hacks in code)\n> > - no views\n> > - no foreign keys\n> \n> Didn't know they didn't have foreign keys. :-(\n\nNot only that - this is what the MySQL site used to say about foreign\nkeys:\n\nThe FOREIGN KEY syntax in MySQL exists only for compatibility with other\nSQL vendors CREATE TABLE commands: It doesn't do anything. \n...\nForeign keys is something that makes life very complicated, because the\nforeign key definition must be stored in some database and\nthen the hole [sic] 'nice approach' by using only files that can be moved,\ncopied and removed will be destroyed. In the near future we will extend\nFOREIGN KEYS so that the at least the information will be saved and may be\nretrieved by mysqldump and ODBC. \n\nCiao\n\n", "msg_date": "Wed, 24 Jan 2001 16:34:36 +0200", "msg_from": "Zak McGregor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL has transactions" }, { "msg_contents": "David Wall writes:\n\n> Now that MySQL has transaction support through Berkeley DB lib, and it's\n> always had way more data types,\n\nI count 25 documented and distinct data types for MySQL, and 30 for\nPostgreSQL.\n\nhttp://www.postgresql.org/devel-corner/docs/postgres/datatype.htm\nhttp://www.mysql.com/documentation/mysql/bychapter/manual_Reference.html#Column_types\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 24 Jan 2001 17:10:19 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL has transactions" }, { "msg_contents": "There have been several recent benchmarks by non-mysql and postgres people\nand the speed argument does not seem to be valid.\n\nEven though MySQL still beats postgres in speed if they are compared with\none user on the DB, postgres seems to destroy MySQL in speed as you tend to\nadd users.\n\nAdam Lang\nSystems Engineer\nRutgers Casualty Insurance Company\nhttp://www.rutgersinsurance.com\n----- Original Message -----\nFrom: \"Joseph N. Hall @5sigma.com>\" <\" <heard_it_on_the_internet>\nTo: <[email protected]>\nSent: Wednesday, January 24, 2001 12:19 AM\nSubject: [GENERAL] Re: MySQL has transactions\n\n\n> Postgresql's SQL implementation is way ahead of MySQL's relatively\n> stunted vocabulary. But on the other hand, MySQL implements most\n> of the popular functionality. The other thing is that MySQL is\n> blindingly fast and has a very uncomplicated API.\n>\n> If you need real SQL and can't afford Oracle/Sybase/DB2 then the\n> obvious choice is Postgresql. 
If you need speed and simplicity\n> and maximum ease of administration and maintenance, that would\n> be MySQL.\n>\n> -joseph\n>\n> David Wall wrote:\n> >\n> > Now that MySQL has transaction support through Berkeley DB lib, and it's\n> > always had way more data types, what are the main advantages postgresql\nhas\n> > over it? I don't think mysql has subselects and such, but they did add\na\n> > master-slave replication feature as well as online reorganization\n(perhaps\n> > locks tables like vacuum?).\n> >\n> > Anybody used both of the current releases who can comment?\n\n", "msg_date": "Wed, 24 Jan 2001 11:18:18 -0500", "msg_from": "\"Adam Lang\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: MySQL has transactions" }, { "msg_contents": "\n\nDavid Wall wrote:\n> \n> Now that MySQL has transaction support through Berkeley DB lib, and it's\n> always had way more data types, what are the main advantages postgresql has\n> over it? I don't think mysql has subselects and such, but they did add a\n> master-slave replication feature as well as online reorganization (perhaps\n> locks tables like vacuum?).\n> \n> Anybody used both of the current releases who can comment?\n> \n> Thanks,\n> David\n\n Another question: now that SAP-DB is open and the source will be\nreleased \nthis year, what is the advantage of MySQL over SAP-DB?\n\nMarten", "msg_date": "Wed, 24 Jan 2001 16:47:03 +0000", "msg_from": "[email protected] (Marten Feldtmann)", "msg_from_op": false, "msg_subject": "Re: MySQL has transactions" }, { "msg_contents": "> I count 25 documented and distinct data types for MySQL, and 30 for\n> PostgreSQL.\n\nIn my case, I'd just settle for a workable BLOB/LONGBLOB. I think counting\ntypes is less interesting than meeting one's needs. They \"redefine\" types\nlike BLOB as LONGVARBINARY and TEXT as LONGVARCHAR, but does that add two\ntypes? Anyway, blobs are pretty standard for SQL, and that's what I'm\nlooking to have work for me.\n\nDavid", "msg_date": "Wed, 24 Jan 2001 08:51:51 -0800", "msg_from": "\"David Wall\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MySQL has transactions" }, { "msg_contents": "On Wed, 24 Jan 2001, Peter Eisentraut wrote:\n\n> David Wall writes:\n>\n> > Now that MySQL has transaction support through Berkeley DB lib, and it's\n> > always had way more data types,\n>\n> I count 25 documented and distinct data types for MySQL, and 30 for\n> PostgreSQL.\n\nNot to mention that Postgres has an extensible type system whereas MySQL\ndoes not.\n\n-- Brett\n http://www.chapelperilous.net/~bmccoy/\n---------------------------------------------------------------------------\nWhen a man knows he is to be hanged in a fortnight, it concentrates his\nmind wonderfully.\n\t\t-- Samuel Johnson\n\n", "msg_date": "Wed, 24 Jan 2001 13:57:19 -0500 (EST)", "msg_from": "\"Brett W. 
McCoy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL has transactions" }, { "msg_contents": "You should probably ask the MySQL people that.\n\nAdam Lang\nSystems Engineer\nRutgers Casualty Insurance Company\nhttp://www.rutgersinsurance.com\n----- Original Message -----\nFrom: \"Marten Feldtmann\" <[email protected]>\nTo: \"David Wall\" <[email protected]>\nCc: <[email protected]>\nSent: Wednesday, January 24, 2001 11:47 AM\nSubject: Re: [GENERAL] MySQL has transactions\n\n\n>\n>\n> David Wall schrieb:\n> >\n> > Now that MySQL has transaction support through Berkeley DB lib, and it's\n> > always had way more data types, what are the main advantages postgresql\nhas\n> > over it? I don't think mysql has subselects and such, but they did add\na\n> > master-slave replication feature as well as online reorganization\n(perhaps\n> > locks tables like vacuum?).\n> >\n> > Anybody used both of the current releases who can comment?\n> >\n> > Thanks,\n> > David\n>\n> Another question: now the sap-db is open and the source will be\n> released\n> this years, what is the advantage of MySQL over SPA-DB ?\n>\n> Marten\n\n", "msg_date": "Wed, 24 Jan 2001 13:59:51 -0500", "msg_from": "\"Adam Lang\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL has transactions" }, { "msg_contents": "I currently use both and here's my quick sound-bite summary of the things\nI have used and like from each one. I won't go into the stuff I don't like\nso as to avoid the start of a flame war. If you want, email me and we can\ndiscuss it off the list.\n\nMySQL 3.32.x\n----------------\n* \"heap\" table type (create temporary table in RAM)\n* good disaster-recovery tools\n* excellent documentation (online documentation forum), GNU info file\n* some useful extensions to SQL (REPLACE, DROP <table> IF EXISITS, SHOW\nTABLES)\n* very flexible config files\n* easy to upgrade between versions\n\nPostgreSQL 7.1 beta\n----------------\n* mature transaction support\n* stored procedures in SQL, PL/PgSQL, Perl, and TCL\n* triggers, foreign keys\n* more complete SQL (UNION, EXISTS, CREATE VIEW)\n* excellent shell (psql)\n* very friendly/well organized development team and mailing list :-)\n* JDBC type 4 driver\n* user-defined data types\n\nBasically I have used MySQL for some web-based projects written in PHP\nwhere most of the logic is application logic and the database needs are\nreasonably simple, but need to be fast and stable. I generally have found\nit to work well in that regard.\n\nFor some Java applications I am working on that require the use of an\napplication server, and have more extensive database logic, I use Pg.\nDepending on your needs you may be able to use either one, although my\nbias is usually to go with Pg since it's generally more featureful and\nrecent benchmarks have shown it to be faster under heavy loads. \n\nHope this helps.\n\nNorm\n\n--------------------------------------\nNorman Clarke\nCombimatrix Corp Software Development\nHarbour Pointe Tech Center\n6500 Harbour Heights Pkwy, Suite 301\nMukilteo, WA 98275\n \ntel: 425.493.2240\nfax: 425.493.2010\n--------------------------------------\n\nOn Tue, 23 Jan 2001, David Wall wrote:\n\n> Now that MySQL has transaction support through Berkeley DB lib, and it's\n> always had way more data types, what are the main advantages postgresql has\n> over it? 
I don't think mysql has subselects and such, but they did add a\n> master-slave replication feature as well as online reorganization (perhaps\n> locks tables like vacuum?).\n> \n> Anybody used both of the current releases who can comment?\n> \n> Thanks,\n> David\n> \n> \n> \n\n", "msg_date": "Wed, 24 Jan 2001 14:21:00 -0800 (PST)", "msg_from": "\"Norman J. Clarke\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL has transactions" }, { "msg_contents": "Great comparison. I've just compiled MySQL 3.23.32 with the Berkeley DB\nsupport for transactions (the binary distribution, sad to say, does not\ninclude it!). I know that MySQL also has a type 4 JDBC driver by Mark\nMatthews and it's worked well for me in the past using the pre-BDB\ntransaction files.\n\nI do love the features of Postgresql 7.0.3, but the large object support has\nbeen really bad, causing an 800 byte binary item to require 24K of disk\nspace across two files, neither of which are part of the backup of the\ndatabase, and neither of which are deleted when the row pointing to them is\ndeleted. (There's a vacuumlo that solves that one in the background.) And\nthe JDBC library doesn't seem to want me to use the BYTEA type for small\nbyte arrays. What I really want is good old BLOBs with minimal overhead\nthat are truly part of the database and its transaction/backup world.\n\nDavid\n\n\n", "msg_date": "Wed, 24 Jan 2001 15:18:35 -0800", "msg_from": "\"David Wall\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MySQL has transactions" }, { "msg_contents": "Hi David,\n\nThanks for the pointer on the type 4 driver for MySQL.\n\nI too found the tuple size limitations in 7.0.3 constraining - I'm helping\ndevelop an app that stores genetic sequences which are routinely much\nlarger than the 8 to 32k limit in 7.0.3.\n\nSince my project timeline has a June release date I've been developing for\nPg 7.1 and have been quite pleased with the results and stability so far.\nI believe it's pretty close to release now, so if your timeline allows for\nit you may wish to give it a try.\n\nLike anything it's not perfect but I think Pg is by and large a better\nlong-term solution for my project than MySQL. Our first alpha version runs\nwith a MySQL backend (we needed blobs), and the lack of the \"standard\" SQL\nfeatures (triggers, foreign keys, stored procedures) led to many\nuncomfortable workarounds. Much of the core database logic needed to go\ninto Java, which led to the need for extensive collaboration with other\nprogrammers on the team.\n\nI was afraid to use the then-alpha transactions in MySQL because \"CHECK\nTABLE\" and isamchk did not work for BDB tables. Hopefully this is resolved\nnow.\n\nBy using Postgres I have been able to provide the other programmers a\nclean API to access the database: complex queries are reduced to \"SELECT *\nFROM <view> WHERE ...\" and error checking can occur inside constraints,\nstored procedures and triggers. This has made all of us a great deal\nmore productive.\n\nI have heard there is some interest among the MySQL developers to get\nstored procedures in MySQL using the same Zend scripting engine used by\nPHP. 
If they were to do that, implement foreign keys, implement row-level\nlocking, and get the performance of BDB tables up to par with MyISAM (or\nget transactions in MyISAM tables), then I think it will be quite usable\nfor complex schemas.\n\nNorm\n\n--------------------------------------\nNorman Clarke\nCombimatrix Corp Software Development\nHarbour Pointe Tech Center\n6500 Harbour Heights Pkwy, Suite 301\nMukilteo, WA 98275\n \ntel: 425.493.2240\nfax: 425.493.2010\n--------------------------------------\n\nOn Wed, 24 Jan 2001, David Wall wrote:\n\n> Great comparison. I've just compiled MySQL 3.23.32 with the Berkeley DB\n> support for transactions (the binary distribution, sad to say, does not\n> include it!). I know that MySQL also has a type 4 JDBC driver by Mark\n> Matthews and it's worked well for me in the past using the pre-BDB\n> transaction files.\n> \n> I do love the features of Postgresql 7.0.3, but the large object support has\n> been really bad, causing an 800 byte binary item to require 24K of disk\n> space across two files, neither of which are part of the backup of the\n> database, and neither of which are deleted when the row pointing to them is\n> deleted. (There's a vacuumlo that solves that one in the background.) And\n> the JDBC library doesn't seem to want me to use the BYTEA type for small\n> byte arrays. What I really want is good old BLOBs with minimal overhead\n> that are truly part of the database and its transaction/backup world.\n> \n> David\n\n", "msg_date": "Wed, 24 Jan 2001 17:51:25 -0800 (PST)", "msg_from": "\"Norman J. Clarke\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL has transactions" }, { "msg_contents": "> Since my project timeline has a June release date I've been developing for\n> Pg 7.1 and have been quite pleased with the results and stability so far.\n> I believe it's pretty close to release now, so if your timeline allows for\n> it you may wish to give it a try.\n\nThanks. In fact, I've done just that after reading the release notes. I've\nput in 7.1beta3, and it appears to be supporting large objects much better,\nbut it's harder to tell now because everything uses the OID for the name\n(tables, database). It appears that the two files are no longer being\ncreated. I hope that this means the large objects are also included in\nthe backup with pg_dump and they are automatically removed when the row\ncontaining them is removed.\n\n> I was afraid to use the then-alpha transactions in MySQL because \"CHECK\n> TABLE\" and isamchk did not work for BDB tables. Hopefully this is resolved\nnow.\n\nIt does appear to be better now, but I've not spent too much time because I,\ntoo, believe that Postgresql will be the better route for me. I just need\nto figure out if the JDBC driver will let me store small binary objects\nusing types like BYTEA.\n\nDavid\n\n", "msg_date": "Wed, 24 Jan 2001 21:08:47 -0800", "msg_from": "\"David Wall\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MySQL has transactions" }, { "msg_contents": "At 11:18 AM 1/24/01 -0500, Adam Lang wrote:\n>There have been several recent benchmarks by non-mysql and postgres people\n>and the speed argument does not seem to be valid.\n>\n>Even though MySQL still beats postgres in speed if they are compared with\n>one user on the DB, postgres seems to destroy MySQL in speed as you tend to\n>add users.\n\nThings change, and they've changed quite quickly. 
Postgres 95 was abysmal.\nAnd Postgresql 6.4 was subpar.\n\nLots of people used MySQL because there wasn't a decent alternative at that\ntime, and it was good at what it did.\n\nWhen I first started running DBs on Linux, it was either MySQL or\nPostgres95. And believe me MySQL won hands down. I had problems indexing a\n400,000 row table on Pg95 - it took longer than I could wait, especially\nsince MySQL did it a lot faster :). Sure Pg had transactions etc but it was\nway too slow to be practical.\n\nWhen Postgresql 6.5 came out, it was VERY MUCH better ( many many thanks\nto the developers and all involved). And I'm waiting for a solid 7.1 to fix\nthat <8KB issue.\n\nSo give it a few years and maybe things will be different, maybe not. But\nit's been a good journey so far :), whether you're on the MySQL or\nPostgresql wagon (just duck the stuff being thrown about from time to time\n;) ).\n\nCheerio,\nLink.\n\n", "msg_date": "Thu, 25 Jan 2001 13:25:18 +0800", "msg_from": "Lincoln Yeoh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: MySQL has transactions" }, { "msg_contents": "On Wed, 24 Jan 2001, Norman J. Clarke wrote:\n\n> * excellent documentation (online documentation forum), GNU info file\n\nPostgreSQL has an excellent book, and good manpages and general\ndocumentation.\n\n\n", "msg_date": "Thu, 25 Jan 2001 09:13:59 +0100 (MET)", "msg_from": "Marc SCHAEFER <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL has transactions" }, { "msg_contents": "> When Postgresql 6.5 came out, it was VERY MUCH better ( many many\nthanks\n> to the developers and all involved). And I'm waiting for a solid 7.1 to\nfix\n> that <8KB issue.\n\nTechnically..\n\n<= BLCKSZ (can be up to 32k)\n\nI've been using PostgreSQL with a 32k BLCKSZ since 7.0 (on a production\nserver) and haven't had a problem one..\n\n-Mitch\n\n\n", "msg_date": "Thu, 25 Jan 2001 10:02:29 -0500", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: MySQL has transactions" }, { "msg_contents": "At 10:02 AM 1/25/01 -0500, you wrote:\n>> When Postgresql 6.5 came out, it was VERY MUCH better ( many many\n>thanks\n>> to the developers and all involved). And I'm waiting for a solid 7.1 to\n>fix\n>> that <8KB issue.\n>\n>Technically..\n>\n><= BLCKSZ (can be up to 32k)\n>\n>I've been using PostgreSQL with a 32k BLCKSZ since 7.0 (on a production\n>server) and haven't had a problem one..\n\nYep but doesn't quite help my webmail app :). \n\nI'm wondering if TOAST is going to be efficient enough for me to plonk\nmultimegabyte email attachments into the database. \n\nHowever I've also a suspicion that there might be problems doing\n\nINSERT INTO mytable (a) values ( 'aa.......');\n\nWhere aa... is a few megabytes long :). There's probably a query size limit\nsomewhere between my app and TOAST. \n\nCheerio,\nLink.\n\n", "msg_date": "Sat, 27 Jan 2001 14:25:42 +0800", "msg_from": "Lincoln Yeoh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: MySQL has transactions" }, { "msg_contents": "Lincoln Yeoh <[email protected]> writes:\n> I'm wondering if TOAST is going to be efficient enough for me to plonk\n> multimegabyte email attachments into the database. \n\nShould work. The main limitation on TOAST is that it wants to treat\neach datum as a unit, ie you must fetch or store the whole value in one\ngo. When your datums get big enough that that's inconvenient, you won't\nlike TOAST so much. 
I don't foresee it being a big issue for emailable\nitems though ...\n\n> However I've also a suspicion that there might be problems doing\n\n> INSERT INTO mytable (a) values ( 'aa.......');\n\n> Where aa... is a few megabytes long :). There's probably a query size limit\n> somewhere between my app and TOAST. \n\nI've tested this, it works fine since 7.0 or so.\n\nAmusing anecdote: since 7.0, MySQL's \"crashme\" test crashes when run\nagainst Postgres. Postgres is fine, it's the perl job running the\ncrashme script that goes belly-up. It seems that crashme's loop that\ntries to discover the maximum query length is more memory-hungry than\nPostgres itself, and so the perl job hits the kernel-imposed maximum\nprocess size before the backend does. Moral: before assuming Postgres\ncan't do something, make sure your own code can hold up its end...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 27 Jan 2001 01:47:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Re: MySQL has transactions " } ]
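
On the open question earlier in the thread about storing small binary values: at the SQL level a bytea column already accepts them, written as backslash-escaped octets (a sketch; the table and the three sample bytes are invented, and each backslash must be doubled inside the string literal). Whether a given client library, such as the JDBC driver mentioned above, maps its binary setters onto this encoding is a separate question:

    create table attachment (id int4, data bytea);
    -- three literal bytes, 0x00 0x01 0xff, written as octal escapes:
    insert into attachment values (1, '\\000\\001\\377');
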
[ { "msg_contents": "\nThere has been alot of fixing/patches going into the tree ... woudl like\nto wrap up a beta4 before the weekend, unless there are any objections?\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Wed, 24 Jan 2001 00:34:56 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "beta4 ... almost time to wrap one ..." }, { "msg_contents": "\nFYI, I still have >50 open items. I will post a list.\n\n> \n> There has been alot of fixing/patches going into the tree ... woudl like\n> to wrap up a beta4 before the weekend, unless there are any objections?\n> \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 00:06:28 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta4 ... almost time to wrap one ..." }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> There has been alot of fixing/patches going into the tree ... woudl like\n> to wrap up a beta4 before the weekend, unless there are any objections?\n\nAgreed, we should push out beta4 before most of core leaves for\nLinuxWorld.\n\nI'd like to see if I can get that HandleDeadLock rewrite done\nbeforehand. Anyone else have any \"must fix\" items?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Jan 2001 00:12:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta4 ... almost time to wrap one ... " }, { "msg_contents": "> The Hermit Hacker <[email protected]> writes:\n> > There has been alot of fixing/patches going into the tree ... woudl like\n> > to wrap up a beta4 before the weekend, unless there are any objections?\n> \n> Agreed, we should push out beta4 before most of core leaves for\n> LinuxWorld.\n> \n> I'd like to see if I can get that HandleDeadLock rewrite done\n> beforehand. Anyone else have any \"must fix\" items?\n\nTons of them before final. I am about to put out an email.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 00:23:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta4 ... almost time to wrap one ..." }, { "msg_contents": "On Wed, 24 Jan 2001, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > There has been alot of fixing/patches going into the tree ... woudl like\n> > to wrap up a beta4 before the weekend, unless there are any objections?\n>\n> Agreed, we should push out beta4 before most of core leaves for\n> LinuxWorld.\n\nOkay, am going to aim for Friday for Beta4 ... if we can get as many fixes\nin as possible before then, great ...\n\n\n", "msg_date": "Wed, 24 Jan 2001 01:31:51 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: beta4 ... almost time to wrap one ... " } ]
[ { "msg_contents": "I have about 20 open 7.1 items that I need to get resolved before I can\nstart getting the doc TODO list started. The issues relate to JDBC,\nODBC, and lots of other stuff that need to be settled before we can\nfinalize 7.1.\n\nThey can not be easily summarized in one line. You really have to see\nthe whole email to understand the issues.\n\nHow do people want to do this? I can post them to hackers, or put them\non my web site. I posted them to hackers during the past few days, but\nmany went unanswered. These are all relatively new from the past few\nmonths.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 00:28:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Open 7.1 items" }, { "msg_contents": "\nhaven't seen it posted to hackers, or, if I did, I didn't clue into it ...\nand just checked to see if maybe it was waiting for approval due to size,\nand nothing in the queue ...\n\nposting it here is easier to respond to ...\n\n\nOn Wed, 24 Jan 2001, Bruce Momjian wrote:\n\n> I have about 20 open 7.1 items that I need to get resolved before I can\n> start getting the doc TODO list started. The issues relate to JDBC,\n> ODBC, and lots of other stuff that need to be settled before we can\n> finalize 7.1.\n>\n> They can not be easily summarized in one line. You really have to see\n> the whole email to understand the issues.\n>\n> How do people want to do this? I can post them to hackers, or put them\n> on my web site. I posted them to hackers during the past few days, but\n> many went unanswered. These are all relatively new from the past few\n> months.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Wed, 24 Jan 2001 09:11:43 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open 7.1 items" }, { "msg_contents": "I have trickled the emails as I reviewed them, asking for comments. It\nwas not one big email.\n\n> \n> haven't seen it posted to hackers, or, if I did, I didn't clue into it ...\n> and just checked to see if maybe it was waiting for approval due to size,\n> and nothing in the queue ...\n> \n> posting it here is easier to respond to ...\n> \n> \n> On Wed, 24 Jan 2001, Bruce Momjian wrote:\n> \n> > I have about 20 open 7.1 items that I need to get resolved before I can\n> > start getting the doc TODO list started. The issues relate to JDBC,\n> > ODBC, and lots of other stuff that need to be settled before we can\n> > finalize 7.1.\n> >\n> > They can not be easily summarized in one line. You really have to see\n> > the whole email to understand the issues.\n> >\n> > How do people want to do this? I can post them to hackers, or put them\n> > on my web site. I posted them to hackers during the past few days, but\n> > many went unanswered. 
These are all relatively new from the past few\n> > months.\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 08:20:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Open 7.1 items" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I have trickled the emails as I reviewed them, asking for comments. It\n> was not one big email.\n\n...\n\n> > > They can not be easily summarized in one line. You really have to see\n> > > the whole email to understand the issues.\n> > >\n> > > How do people want to do this? I can post them to hackers, or put them\n> > > on my web site. I posted them to hackers during the past few days, but\n> > > many went unanswered. These are all relatively new from the past few\n> > > months.\n\nI guess that having _one_ document with all of them would get much more\nattention.\n\nIt could be just copy and paste from the e-mails, numbered to be easily\nreferred to.\n\nNot everybody here reads _every_ mail on the list, even if it is from\nyou ;) \n\n\n---------------\nHannu\n", "msg_date": "Wed, 24 Jan 2001 17:33:37 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open 7.1 items" }, { "msg_contents": "> > > > They can not be easily summarized in one line. You really have to see\n> > > > the whole email to understand the issues.\n> > > >\n> > > > How do people want to do this? I can post them to hackers, or put them\n> > > > on my web site. I posted them to hackers during the past few days, but\n> > > > many went unanswered. These are all relatively new from the past few\n> > > > months.\n> \n> I guess that having _one_ document with all of them would get much more\n> attention.\n> \n> It could be just copy and paste from the e-mails, numbered to be easily\n> referred to.\n> \n> Not everybody here reads _every_ mail on the list, even if it is from\n> you ;) \n\nIt has to be done separately because you need to see the full content\nand reply to each individually. Also, they go to different lists\nsometimes. Pretty confusing.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 10:39:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Open 7.1 items" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > > > > They can not be easily summarized in one line. You really have to see\n> > > > > the whole email to understand the issues.\n> > > > >\n> > > > > How do people want to do this? I can post them to hackers, or put them\n> > > > > on my web site. I posted them to hackers during the past few days, but\n> > > > > many went unanswered. 
These are all relatively new from the past few\n> > > > > months.\n> >\n> > I guess that having _one_ document with all of them would get much more\n> > attention.\n> >\n> > It could be just copy and paste from the e-mails, numbered to be easily\n> > referred to.\n> >\n> > Not everybody here reads _every_ mail on the list, even if it is from\n> > you ;)\n> \n> It has to be done separately because you need to see the full content\n> and reply to each individually. Also, they go to different lists\n> sometimes. Pretty confusing.\n\nCould you post a list of open issues where each has just a number,\nheading \n(optional) and link to an email in some mailing-list archive ?\n\n----------------\nHannu\n", "msg_date": "Wed, 24 Jan 2001 18:02:14 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open 7.1 items" }, { "msg_contents": "> > It has to be done separately because you need to see the full content\n> > and reply to each individually. Also, they go to different lists\n> > sometimes. Pretty confusing.\n> \n> Could you post a list of open issues where each has just a number,\n> heading \n> (optional) and link to an email in some mailing-list archive ?\n\nNot really. I don't have time to make a web site out of this thing. \n:-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 11:02:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Open 7.1 items" }, { "msg_contents": "On Wed, 24 Jan 2001, Bruce Momjian wrote:\n\n> > > It has to be done separately because you need to see the full content\n> > > and reply to each individually. Also, they go to different lists\n> > > sometimes. Pretty confusing.\n> >\n> > Could you post a list of open issues where each has just a number,\n> > heading\n> > (optional) and link to an email in some mailing-list archive ?\n>\n> Not really. I don't have time to make a web site out of this thing.\n> :-)\n\nIf they were previously sent to the lists, there should be a link in the\narchives to point ppl to, no? :0\n\n\n\n", "msg_date": "Wed, 24 Jan 2001 12:37:30 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open 7.1 items" }, { "msg_contents": "> On Wed, 24 Jan 2001, Bruce Momjian wrote:\n> \n> > > > It has to be done separately because you need to see the full content\n> > > > and reply to each individually. Also, they go to different lists\n> > > > sometimes. Pretty confusing.\n> > >\n> > > Could you post a list of open issues where each has just a number,\n> > > heading\n> > > (optional) and link to an email in some mailing-list archive ?\n> >\n> > Not really. I don't have time to make a web site out of this thing.\n> > :-)\n> \n> If they were previously sent to the lists, there should be a link in the\n> archives to point ppl to, no? :0\n\nSure, but this is hard enough. Finding them is even harder. May as\nwell read the email rather than link to a web page.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 12:01:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Open 7.1 items" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > On Wed, 24 Jan 2001, Bruce Momjian wrote:\n> >\n> > > > > It has to be done separately because you need to see the full content\n> > > > > and reply to each individually. Also, they go to different lists\n> > > > > sometimes. Pretty confusing.\n> > > >\n> > > > Could you post a list of open issues where each has just a number,\n> > > > heading\n> > > > (optional) and link to an email in some mailing-list archive ?\n> > >\n> > > Not really. I don't have time to make a web site out of this thing.\n> > > :-)\n> >\n> > If they were previously sent to the lists, there should be a link in the\n> > archives to point ppl to, no? :0\n> \n> Sure, but this is hard enough. Finding them is even harder. \n\nIIRC you complained that there are 20 open issues that needed to be\nresolved\nbefore releasing 7.1, no?\n\nJust to start discussing them, we would need a more or less closed list\nof issues.\n\nOr was the original posting (the one that started this thread) just a\ncomment \nabout how hard life is ;) ;)\n\nYou can't reasonably expect people to read all e-mail from the last few\nweeks\n(probably a few thousand) and spot the same items as you.\n\n> May as well read the email rather than link to a web page.\n\nStill it would be good to have a \"TODO before 7.1 release\"\n\n-------------\nHannu\n", "msg_date": "Wed, 24 Jan 2001 22:01:19 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open 7.1 items" }, { "msg_contents": "> You can't reasonably expect people to read all e-mail from the last few\n> weeks\n> (probably a few thousand) and spot the same items as you.\n> \n> > May as well read the email rather than link to a web page.\n> \n> Still it would be good to have a \"TODO before 7.1 release\"\n\nI send each article to the appropriate list with a comment at the top\nasking for assistance. I can't boil down many of these items into\nshort descriptions. 
People need to see the detail of the emails.\n\nBut maybe we could agree on something in Subject:, maybe [TODO - 7.1] \nso that it is immediately clear it's urgent ?\n\n-------------\nHannu\n", "msg_date": "Wed, 24 Jan 2001 23:07:59 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open 7.1 items" }, { "msg_contents": "On Wed, 24 Jan 2001, Hannu Krosing wrote:\n\n> Bruce Momjian wrote:\n> >\n> > > You can't reasonably expect people to read all e-mail from the last few\n> > > weeks\n> > > (probably a few thousand) and spot the same items as you.\n> > >\n> > > > May as well read the email rather than link to a web page.\n> > >\n> > > Still it would be good to have a \"TODO before 7.1 release\"\n> >\n> > I send each article to the appropriate list with a comment at the top\n> > asking for assistance. I can't boil down many of these items into\n> > short descriptions. People need to see the detail of the emails.\n>\n> But maybe we could agree on something in Subject:, maybe [TODO - 7.1]\n> so that it is immediately clear it's urgent ?\n\nSounds reasonable to me ...\n\n\n", "msg_date": "Wed, 24 Jan 2001 18:20:26 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open 7.1 items" }, { "msg_contents": "> On Wed, 24 Jan 2001, Hannu Krosing wrote:\n> \n> > Bruce Momjian wrote:\n> > >\n> > > > You can't reasonably expect people to read all e-mail from the last few\n> > > > weeks\n> > > > (probably a few thousand) and spot the same items as you.\n> > > >\n> > > > > May as well read the email rather than link to a web page.\n> > > >\n> > > > Still it would be good to have a \"TODO before 7.1 release\"\n> > >\n> > > I send each article to the appropriate list with a comment at the top\n> > > asking for assistance. I can't boil down many of these items into\n> > > short descriptions. People need to see the detail of the emails.\n> >\n> > But maybe we could agree on something in Subject:, maybe [TODO - 7.1]\n> > so that it is immediately clear it's urgent ?\n> \n> Sounds reasonable to me ...\n\nGreat idea.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 17:42:04 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Open 7.1 items" }, { "msg_contents": "Quoting Bruce Momjian <[email protected]>:\n\n> I have trickled the emails as I reviewed them, asking for comments. It\n> was not one big email.\n\nI haven't seen them either, although my Inbox is big again, and I'm filtering \nout mails by their subject line, so it's possible I've missed them.\n\nI'm slowly working my way through JDBC, but it's all hinging on getting my \nLinux box back online. It's powered up, but not talking to the network :-(\n\nShould be sorted by Saturday morning...\n\nPeter\n\n> > haven't seen it posted to hackers, or, if I did, I didn't clue into it\n> ...\n> > and just checked to see if maybe it was waiting for approval due to\n> size,\n> > and nothing in the queue ...\n> > \n> > posting it here is easier to respond to ...\n> > \n> > \n> > On Wed, 24 Jan 2001, Bruce Momjian wrote:\n> > \n> > > I have about 20 open 7.1 items that I need to get resolved before I\n> can\n> > > start getting the doc TODO list started. 
The issues relate to\n> JDBC,\n> > > ODBC, and lots of other stuff that need to be settled before we can\n> > > finalize 7.1.\n> > >\n> > > They can not be easily summarized in one line. You really have to\n> see\n> > > the whole email to understand the issues.\n> > >\n> > > How do people want to do this? I can post them to hackers, or put\n> them\n> > > on my web site. I posted them to hackers during the past few days,\n> but\n> > > many went unanswered. These are all relatively new from the past\n> few\n> > > months.\n> > >\n> > > --\n> > > Bruce Momjian | http://candle.pha.pa.us\n> > > [email protected] | (610) 853-3000\n> > > + If your life is a hard drive, | 830 Blythe Avenue\n> > > + Christ can be your backup. | Drexel Hill, Pennsylvania\n> 19026\n> > >\n> > \n> > Marc G. Fournier ICQ#7615664 IRC Nick:\n> Scrappy\n> > Systems Administrator @ hub.org\n> > primary: [email protected] secondary:\n> scrappy@{freebsd|postgresql}.org\n> > \n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania\n> 19026\n> \n\n\n\n-- \nPeter Mount [email protected]\nPostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/\nRetepPDF PDF library for Java: http://www.retep.org.uk/pdf/\n", "msg_date": "Thu, 25 Jan 2001 05:06:16 -0500 (EST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open 7.1 items" } ]
[ { "msg_contents": "\n> 1. For 1st phase we'll place into log \"prepared-to-commit\" record\n> and this phase will be accomplished after record is \n> flushed on disk.\n> At this point transaction may be committed at any time because of\n> all its modifications are logged. But it still may be rolled back\n> if this phase failed on other sites of distributed system.\n\n1st phase will also need to do all the delayed constraint checks,\nand all other work a commit currently does, that could possibly lead \nto a transaction abort. The 2nd phase of 2phase commit is not \nallowed to return an error, unless of course in case of catastrophe.\n\n> 2. When all sites are prepared to commit we'll place \"committed\"\n> record into log. No need to flush it because of in the event of\n> crash for all \"prepared\" transactions recoverer will have to\n> communicate other sites to know their statuses anyway.\n> \n> That's all! It is really hard to implement distributed lock- and\n> communication- managers but there is no problem with logging two\n> records instead of one. Period.\n\nyup :-) Maybe this could even be raised to the SQL level, \nso applications could use this ? I have not seen this elsewhere,\nbut why actually not ?\n\nAndreas\n", "msg_date": "Wed, 24 Jan 2001 10:19:36 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Re: AW: Re: MySQL and BerkleyDB (fwd)" }, { "msg_contents": "> yup :-) Maybe this could even be raised to the SQL level, \n> so applications could use this ? I have not seen this elsewhere,\n> but why actually not ?\n\n Yes please :-) if someone is to code this quicker than me (I suppose\nso, since I have other projects to deal with concurrently).\n\n-- \n<< Tout n'y est pas parfait, mais on y honore certainement les jardiniers >>\n\n\t\t\tDominique Quatravaux <[email protected]>\n", "msg_date": "Thu, 25 Jan 2001 15:49:03 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Re: AW: Re: MySQL and BerkleyDB (fwd)" } ]
[ { "msg_contents": "\nFolks, I need help on this. It would be nice to support unixODBC, but I\ndon't understand the ramifications of these changes.\n\n> [email protected] wrote:\n> \n> > Nick, sorry this was never resolved. Do have any recollection of the\n> > issues involved?\n> \n> Hi Bruce,\n> \n> Yes I can tell you what I was changing, I would love to get the code in the unixODBC distrib to match the one you have, or even to remove it\n> and point people to you.\n> \n> There are a few simple changes.\n> \n> 1. Add options to use unixODBC in the configure.in file, the mainly consists of finding the root of the unixODBC install prefix, and adding\n> -I /unixODBC/path/include and -L /unixODBC/path/lib to the driver build\n> \n> 2. Change the way the driver gets config info, to be the same as when built under windows. link with -lodbcinst and it provides\n> SQLGetPrivateProfileString. the code that calls this works as long as the correct define is set.\n> \n> 3. Stop calling ODBC functions in the driver, this is simple but messy, the problem being the call (say) in SQLAllocStmt that calls\n> SQLAllocHandle in the driver, ends up calling the SQLAllocHandle in the driver manager.\n> \n> There are a couple of other changes I have made, that you may want to add, I added the code to allow encrypted passwords (taken from the pg\n> lib), as crypt is avaiable on unix. Add the option to detect a server name of localhost, and open the unix domain socket, in fact try two\n> places, to handle the debian build where the location is different. Again both of these would have no place on Windows but in Unix.\n> \n> Its chaos here at the moment, having lost a machine (dead disk) in the move to nice new (old building) offices in the country side, but if\n> you want any help, just shout.\n> \n> \n> --\n> Nick Gorham\n> When I die, I want to go like my grandfather did, gently while sleeping,\n> and not like his passangers, screaming in a panic, looking for the\n> inflatable raft. -- Seen on ./\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 08:50:20 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: unixODBC again :-(" }, { "msg_contents": "Bruce Momjian writes:\n\n> > 1. Add options to use unixODBC in the configure.in file, the mainly consists of finding the root of the unixODBC install prefix, and adding\n> > -I /unixODBC/path/include and -L /unixODBC/path/lib to the driver build\n\n--with-includes, --with-libraries\n\n> > 2. Change the way the driver gets config info, to be the same as when built under windows. link with -lodbcinst and it provides\n> > SQLGetPrivateProfileString. the code that calls this works as long as the correct define is set.\n\nI don't understand this. The driver gets the config info just fine; why\nadd another way?\n\n> > 3. Stop calling ODBC functions in the driver, this is simple but messy, the problem being the call (say) in SQLAllocStmt that calls\n> > SQLAllocHandle in the driver, ends up calling the SQLAllocHandle in the driver manager.\n\nThis is fixed using magic linker options on ELF platforms. 
I don't recall\nhow the patch tried to address this, but a better solution is probably\nnecessary.\n\n> > There are a couple of other changes I have made, that you may want\n> to add, I added the code to allow encrypted passwords (taken from the\n> pg > lib), as crypt is available on Unix.\n\nWhy not.\n\n> Add the option to detect a\n> server name of localhost, and open the unix domain socket,\n\nI don't think so. localhost is a valid host name.\n\n> in fact try\n> two > places, to handle the Debian build where the location is\n> different.\n\nWe have a general approach to non-standard socket names now.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 24 Jan 2001 19:15:12 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: unixODBC again :-(" }, { "msg_contents": "[email protected] wrote:\n\n> Bruce Momjian writes:\n>\n> > > 1. Add options to use unixODBC in the configure.in file, this mainly consists of finding the root of the unixODBC install prefix, and adding\n> > > -I /unixODBC/path/include and -L /unixODBC/path/lib to the driver build\n>\n> --with-includes, --with-libraries\n\nIf it works then fine, other drivers use a --with-unixODBC to enable any other changes that are needed.\n\n> > > 2. Change the way the driver gets config info, to be the same as when built under Windows: link with -lodbcinst and it provides\n> > > SQLGetPrivateProfileString. The code that calls this works as long as the correct define is set.\n>\n> I don't understand this. The driver gets the config info just fine; why\n> add another way?\n\nWell, because the driver does not know where to get the config info from, libodbcinst.so in unixODBC provides SQLGetPrivateProfileString, the\nlocation of user and system ini files are defined by this lib, if it doesn't do this you may have the situation where the driver manager gets\ninformation from one ini file and the driver from a different one.\n\n> > > 3. Stop calling ODBC functions in the driver, this is simple but messy, the problem being that the call (say) in SQLAllocStmt that calls\n> > > SQLAllocHandle in the driver ends up calling the SQLAllocHandle in the driver manager.\n>\n> This is fixed using magic linker options on ELF platforms. I don't recall\n> how the patch tried to address this, but a better solution is probably\n> necessary.\n\nIf there is a better way, please let me know, I would love to have a better solution.\n\n> > > There are a couple of other changes I have made, that you may want\n> > to add, I added the code to allow encrypted passwords (taken from the\n> > pg > lib), as crypt is available on Unix.\n>\n> Why not.\n>\n> > Add the option to detect a\n> > server name of localhost, and open the unix domain socket,\n>\n> I don't think so. localhost is a valid host name.\n\nOk, but don't you think it is worth having some way to get it to use UNIX domain sockets instead of TCP ones, for instance if postmaster isn't\nstarted with a -i ?\n\n> We have a general approach to non-standard socket names now.\n\nGreat, that's a non-problem then, what do you do ?\n\n--\nNick Gorham\nWhen I die, I want to go like my grandfather did, gently while sleeping,\nand not like his passengers, screaming in a panic, looking for the\ninflatable raft. 
-- Seen on ./\n", "msg_date": "Wed, 24 Jan 2001 18:53:42 +0000", "msg_from": "Nick Gorham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: unixODBC again :-(" }, { "msg_contents": "Nick Gorham writes:\n\n> Well, because the driver does not know where to get the config info\n> from,\n\nThen the driver should be fixed to do that, with or without unixODBC.\n\n> libodbcinst.so in unixODBC provides SQLGetPrivateProfileString,\n> the location of user and system ini files are defined by this lib, if\n> it doesn't do this you may have the situation where the driver manager\n> gets information from one ini file and the driver from a different\n> one.\n\n--with-odbcinst=DIRECTORY\n\n> > > Add the option to detect a\n> > > server name of localhost, and open the unix domain socket,\n> >\n> > I don't think so. localhost is a valid host name.\n>\n> Ok, but don't you think it is worth having some way to get it to use\n> UNIX domain sockets instead of TCP ones, for instance if postmaster\n> isn't started with a -i ?\n\nYes, that would be okay, but it's not okay to eliminate a feature to add\nanother one.\n\n> > We have a general approach to non-standard socket names now.\n>\n> Great, that's a non-problem then, what do you do ?\n\nPick up DEFAULT_PGSOCKET_DIR from config.h.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 24 Jan 2001 21:20:51 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: unixODBC again :-(" }, { "msg_contents": "[email protected] wrote:\n\n> Nick Gorham writes:\n>\n> > Well, because the driver does not know where to get the config info\n> > from,\n>\n> Then the driver should be fixed to do that, with or without unixODBC.\n\nWell yes, but again, using the Windows situation as a model (not that\nI would normally suggest Windows as a role model for anything), it's not the\ndriver's job to know or care where the info comes from, that's the job of the\n(a) driver manager.\n\n> > libodbcinst.so in unixODBC provides SQLGetPrivateProfileString,\n> > the location of user and system ini files are defined by this lib, if\n> > it doesn't do this you may have the situation where the driver manager\n> > gets information from one ini file and the driver from a different\n> > one.\n>\n> --with-odbcinst=DIRECTORY\n\nYes, but there are two places, the user ~/.odbc.ini file, and the\nsystem /sysconfdir/odbc.ini.\n\nUsing the odbcinst lib means all drivers can use the same info store, and\nyou can just install a binary driver without having to set any\nconfiguration.\n\n> > > > Add the option to detect a\n> > > > server name of localhost, and open the unix domain socket,\n> > >\n> > > I don't think so. localhost is a valid host name.\n> >\n> > Ok, but don't you think it is worth having some way to get it to use\n> > UNIX domain sockets instead of TCP ones, for instance if postmaster\n> > isn't started with a -i ?\n>\n> Yes, that would be okay, but it's not okay to eliminate a feature to add\n> another one.\n\nI would agree with that, I just did it the way I did as it fitted what some\nusers needed. 
Not sure how many people would have a network setup with\nlocalhost set in DNS to point to another machine, though I agree there is\nno reason why you couldn't do it.\n\n> > > We have a general approach to non-standard socket names now.\n> >\n> > Great, that's a non-problem then, what do you do ?\n>\n> Pick up DEFAULT_PGSOCKET_DIR from config.h.\n\nThat's ok, but if I was to keep a driver in the unixODBC distrib, I would have\nto have a --postgres-socket= option in the config, same problem with\nodbcinst but in reverse. Maybe no simple answer to that one.\n\nAll I do at the moment is have the driver try the two places it knows\nabout, maybe it should be in the ini file, perhaps if the socket_location\nis set it would connect via that. It would fix the problem with using\nlocalhost to switch the connection method.\n\n--\nNick Gorham\nWhen I die, I want to go like my grandfather did, gently while sleeping,\nand not like his passengers, screaming in a panic, looking for the\ninflatable raft. -- Seen on ./\n\n\n\n", "msg_date": "Wed, 24 Jan 2001 22:37:52 +0000", "msg_from": "Nick Gorham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: unixODBC again :-(" }, { "msg_contents": "Glad these items are being resolved. This has sat around too long. \nPlease keep discussing and come up with a good patch. We will help\nhowever we can.\n\n\n> [email protected] wrote:\n> \n> > Nick Gorham writes:\n> >\n> > > Well, because the driver does not know where to get the config info\n> > > from,\n> >\n> > Then the driver should be fixed to do that, with or without unixODBC.\n> \n> Well yes, but again, using the Windows situation as a model (not that\n> I would normally suggest Windows as a role model for anything), it's not the\n> driver's job to know or care where the info comes from, that's the job of the\n> (a) driver manager.\n> \n> > > libodbcinst.so in unixODBC provides SQLGetPrivateProfileString,\n> > > the location of user and system ini files are defined by this lib, if\n> > > it doesn't do this you may have the situation where the driver manager\n> > > gets information from one ini file and the driver from a different\n> > > one.\n> >\n> > --with-odbcinst=DIRECTORY\n> \n> Yes, but there are two places, the user ~/.odbc.ini file, and the\n> system /sysconfdir/odbc.ini.\n> \n> Using the odbcinst lib means all drivers can use the same info store, and\n> you can just install a binary driver without having to set any\n> configuration.\n> \n> > > > > Add the option to detect a\n> > > > > server name of localhost, and open the unix domain socket,\n> > > >\n> > > > I don't think so. localhost is a valid host name.\n> > >\n> > > Ok, but don't you think it is worth having some way to get it to use\n> > > UNIX domain sockets instead of TCP ones, for instance if postmaster\n> > > isn't started with a -i ?\n> >\n> > Yes, that would be okay, but it's not okay to eliminate a feature to add\n> > another one.\n> \n> I would agree with that, I just did it the way I did as it fitted what some\n> users needed. 
Not sure how many people would have a network setup with\n> localhost set in DNS to point to another machine, though I agree there is\n> no reason why you couldn't do it.\n> \n> > > > We have a general approach to non-standard socket names now.\n> > >\n> > > Great, that's a non-problem then, what do you do ?\n> >\n> > Pick up DEFAULT_PGSOCKET_DIR from config.h.\n> \n> That's ok, but if I was to keep a driver in the unixODBC distrib, I would have\n> to have a --postgres-socket= option in the config, same problem with\n> odbcinst but in reverse. Maybe no simple answer to that one.\n> \n> All I do at the moment is have the driver try the two places it knows\n> about, maybe it should be in the ini file, perhaps if the socket_location\n> is set it would connect via that. It would fix the problem with using\n> localhost to switch the connection method.\n> \n> --\n> Nick Gorham\n> When I die, I want to go like my grandfather did, gently while sleeping,\n> and not like his passengers, screaming in a panic, looking for the\n> inflatable raft. -- Seen on ./\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 18:47:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: unixODBC again :-(" }, { "msg_contents": "I am working with Delphi through ODBC to get to Postgres, but I have a\nproblem:\n\nEvery time I try to make an insertion, I get a message from the ODBC driver\nsaying a primary key value cannot be null (which is on purpose since I want\nPostgres to use its serial value properties). Can anyone tell me if there's\nsomething special I have to do on the ODBC configuration, or how do I make an\ninsertion through ODBC?\n\nThanx.\n\n", "msg_date": "Thu, 25 Jan 2001 09:46:14 -0600", "msg_from": "Alfonso Peniche <[email protected]>", "msg_from_op": false, "msg_subject": "serial values and odbc" }, { "msg_contents": "\n> I am working with Delphi through ODBC to get to Postgres, but I have a\n> problem:\n> \n> Every time I try to make an insertion, I get a message from the ODBC driver\n> saying a primary key value cannot be null (which is on purpose since I want\n> Postgres to use its serial value properties). Can anyone tell me if there's\n> something special I have to do on the ODBC configuration, or how do I make an\n> insertion through ODBC?\n\nI think it isn't ODBC related. But we use the following method:\n\nWith the DataSet's OnBeforePost event, fetch the serial's NextValue and fill in\nthe Field in the DataSet like this:\n\nvoid __fastcall QueryBeforePost(TDataSet* DataSet)\n{\n  if (DataSet->State == dsInsert && \n      DataSet->FieldByName(\"<MySerialField>\")->Value == Null)\n  {\n    TQuery* q = new TQuery(this);\n    q->DatabaseName = \"<MyBDEDataBaseAliasName>\";\n    q->SQL->Clear();\n    q->SQL->Add(\"SELECT nextval('<MyTable>_<MySerialField>_seq')\");\n \n    // put these two statements into a try block...\n    q->Active = true;\n    DataSet->FieldByName(\"<MySerialField>\")->Value =\n      q->FieldByName(\"nextval\")->Value;\n\n    q->Active = false;\n    delete q;\n  }\n}\n\nSorry, I'm a C++ programmer, so I can't write ObjectPascal code but I think it\nwill help.\n\nThis is the recommended method with PostgreSQL and with visual database\nactions. 
You may put this code without the dsInsert condition into the\nBeforeInsert event to show the user the value, but we prefer the one shown.\n\nIf the post fails - or the user cancels - the next nextval will send (the\nnext) safe value (see docs and archive). The unused values don't matter. It works\neven in a transaction block.\n\n-- \nTibor Laszlo\[email protected]\n", "msg_date": "Fri, 26 Jan 2001 18:30:38 +0100", "msg_from": "Tibor Laszlo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: serial values and odbc" } ]
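For anyone hitting the problem Alfonso describes, the pattern behind Tibor's code is plain SQL underneath. A minimal sketch, using a made-up item table (only the serial/nextval/currval behavior is the point):

    -- A serial column is backed by an implicit sequence named <table>_<column>_seq.
    CREATE TABLE item (
        id   serial PRIMARY KEY,   -- creates the sequence item_id_seq
        name text NOT NULL
    );

    -- What the client code does before posting the insert: fetch the next
    -- key explicitly, so the primary key field is never NULL.
    SELECT nextval('item_id_seq');              -- returns, say, 7
    INSERT INTO item (id, name) VALUES (7, 'widget');

    -- The alternative: leave the column out entirely and let the default fire.
    INSERT INTO item (name) VALUES ('gadget');
    SELECT currval('item_id_seq');              -- the id just assigned, per session

As Tibor notes, an aborted insert simply leaves a hole in the sequence; nextval never hands out the same value twice, even inside a transaction block.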
[ { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> ----- Original Message ----- \n> From: \"Bruce Momjian\" <[email protected]>\n> To: \"Karel Zak\" <[email protected]>\n> Cc: \"pgsql-hackers\" <[email protected]>; <[email protected]>\n> Sent: Wednesday, January 24, 2001 3:41 PM\n> Subject: Re: [HACKERS] PgAccess - small bug?\n> \n> \n> > \n> > Has this been dealt with?\n> \n> Nope ... :-(\n> \n> I'll fix it this weekend!\n\nOK, let me know if you release a new pgaccess and I will add it to CVS.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 08:52:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PgAccess - small bug?" } ]
[ { "msg_contents": "Hi,\n\nI've the following function:\n\n CREATE FUNCTION book_info(pricing)\n RETURNS catalog_general AS '\n select *\n from catalog_general\n where star_isbn = $1.vista_isbn\n ' LANGUAGE 'sql';\n\ncalling it as:\n \n SELECT p.*, p.book_info.title FROM pricing p WHERE vista_ans='POD';\n\nbackground and observation:\n \n the pricing table is fairly large, but only a small number meet\n \"WHERE vista_ans='POD'\". I can select all where vista_ans='POD'\n very quickly (.2 sec), but adding in the get_book(pricing) call\n slows this down to about 20sec. I can, with an external sql query,\n select all of the desired records in about 1 sec, so it appears\n to me that the function is being called regardless of whether\n or not the WHERE clause is being satisfied.\n\nquestion:\n \n is there any way the function call could be _not_ called if:\n 1) the WHERE clause does not reference any of its return values, and\n 2) the WHERE clause has already been satisified.\n\n ???\n\nIf this behavior is reasonable, could someone point me _toward_ the\ncode where I'd need to make this optimization. I think this would be\nnice to have for 7.2 :)\n\nbrent\n\n", "msg_date": "Wed, 24 Jan 2001 08:54:38 -0500", "msg_from": "Brent Verner <[email protected]>", "msg_from_op": true, "msg_subject": "function optimization ???" }, { "msg_contents": "Brent Verner <[email protected]> writes:\n> calling it as:\n> SELECT p.*, p.book_info.title FROM pricing p WHERE vista_ans='POD';\n> background and observation:\n> the pricing table is fairly large, but only a small number meet\n> \"WHERE vista_ans='POD'\". I can select all where vista_ans='POD'\n> very quickly (.2 sec), but adding in the get_book(pricing) call\n> slows this down to about 20sec. I can, with an external sql query,\n> select all of the desired records in about 1 sec, so it appears\n> to me that the function is being called regardless of whether\n> or not the WHERE clause is being satisfied.\n\nThis conclusion is absolutely false: the SELECT target list is NOT\nevaluated except at rows where the WHERE condition is satisfied.\n\nI suspect the real problem is that the select inside the function\nis not being done as efficiently as you'd like. How big is\ncatalog_general, and would a sequential scan over it inside the\nfunction account for the performance discrepancy?\n\nIIRC, 7.0.* is not very bright about using indexscans in situations\nwhere the righthand side of the WHERE clause is anything more complex\nthan a literal constant or simple parameter reference ($n). The\nfieldselect you have here would be enough to defeat the indexscan\nrecognizer. This is fixed in 7.1, however. For now, you could\ndeclare book_info as taking a simple datum and invoke it as\n\tp.vista_isbn.book_info.title\n\nBTW, star_isbn and vista_isbn are the same datatype, I trust, else\nthat might cause failure to use an indexscan too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Jan 2001 12:14:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: function optimization ??? " }, { "msg_contents": "On 24 Jan 2001 at 12:14 (-0500), Tom Lane wrote:\n| Brent Verner <[email protected]> writes:\n| > calling it as:\n| > SELECT p.*, p.book_info.title FROM pricing p WHERE vista_ans='POD';\n| > background and observation:\n| > the pricing table is fairly large, but only a small number meet\n| > \"WHERE vista_ans='POD'\". 
I can select all where vista_ans='POD'\n| > very quickly (.2 sec), but adding in the book_info(pricing) call\n| > slows this down to about 20sec. I can, with an external SQL query,\n| > select all of the desired records in about 1 sec, so it appears\n| > to me that the function is being called regardless of whether\n| > or not the WHERE clause is being satisfied.\n| \n| This conclusion is absolutely false: the SELECT target list is NOT\n| evaluated except at rows where the WHERE condition is satisfied.\n| \n| I suspect the real problem is that the select inside the function\n| is not being done as efficiently as you'd like.\n\nYes, this is indeed the case. Sorry for the noise, my 'with an external\nquery' case was a broken product of sleep-dep :\\.\n\nthanks.\n brent\n\n", "msg_date": "Wed, 24 Jan 2001 17:45:07 -0500", "msg_from": "Brent Verner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: function optimization ???" } ]
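Tom's suggested workaround, declaring the function over the scalar column instead of the whole row, would look roughly like this. The text parameter type is an assumption; the thread never says how the ISBN columns are actually declared:

    -- Same body as book_info(pricing), but the parameter is the ISBN itself,
    -- so star_isbn = $1 is a simple parameter reference that the 7.0
    -- indexscan recognizer can handle.
    CREATE FUNCTION book_info(text)
        RETURNS catalog_general AS '
        select *
        from catalog_general
        where star_isbn = $1
    ' LANGUAGE 'sql';

    -- Invoked through the column rather than the whole row, per Tom's note:
    SELECT p.*, p.vista_isbn.book_info.title FROM pricing p WHERE vista_ans='POD';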
[ { "msg_contents": "\n> Thanks. Applied.\n> \n> [ Charset ISO-8859-1 unsupported, converting... ]\n> > Hello!\n> > \n> > Here is a patch to make the current snapshot compile on \n> Win32 (native, libpq\n> > and psql) again. Changes are:\n\nI thought the consensus was to do something other than that patch.\nAs it looks, if nothing else is changed on win32 it only produces a memory \nleak instead of a crash.\n\nAndreas\n", "msg_date": "Wed, 24 Jan 2001 15:18:56 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Postgresql on win32" }, { "msg_contents": "\nOK, suggestions? \n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> \n> > Thanks. Applied.\n> > \n> > [ Charset ISO-8859-1 unsupported, converting... ]\n> > > Hello!\n> > > \n> > > Here is a patch to make the current snapshot compile on \n> > Win32 (native, libpq\n> > > and psql) again. Changes are:\n> \n> I thought the consensus was to do something other than that patch.\n> As it looks, if nothing else is changed on win32 it only produces a memory \n> leak instead of a crash.\n> \n> Andreas\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 09:21:24 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Postgresql on win32" } ]
[ { "msg_contents": "Needs fixing - no. The current version *works*.\nThe fix would remove one unnecessary step from it, but it still *works* in\nit's current state.\n\nSorry about this - I've missed looking at it.\n\n//Magnus\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: den 24 januari 2001 15:47\n> To: Magnus Hagander\n> Cc: PostgreSQL-development; PostgreSQL-documentation\n> Subject: Re: [PATCHES] RE: SSL Connections [doc PATCH]\n> \n> \n> \n> Again, is this something that needs fixing? Just a YES or NO is all I\n> need.\n> \n> \n> \n> > It looks Ok, but it has one unnecessary step. There is no \n> need to do the \"mv\n> > privkey.pem cert.pem.pw\" if you just use \"privkey.pem\" in \n> the following\n> > openssl command (e.g. openssl rsa -in privkey.pem -out cert.pem\".\n> > But there is nothing wrong with it as it is now, as far as \n> I can see.\n> > \n> > \n> > //Magnus\n> > \n> > \n> > > -----Original Message-----\n> > > From: Bruce Momjian [mailto:[email protected]]\n> > > Sent: den 21 december 2000 20:15\n> > > To: Magnus Hagander\n> > > Cc: 'Matthew Kirkwood'; '[email protected]'\n> > > Subject: Re: [PATCHES] RE: SSL Connections [doc PATCH]\n> > > \n> > > \n> > > I have applied an earlier patch to this file for SSL. \n> Could you check\n> > > the current tree and see how you like it?\n> > > \n> > > \n> > > > Thanks for that one!\n> > > > \n> > > > Here is a patch to update the documentation based on this - \n> > > this should make\n> > > > it less dependant on the version of OpenSSL used.\n> > > > \n> > > > //Magnus\n> > > > \n> > > > \n> > > > \n> > > > > -----Original Message-----\n> > > > > From: Matthew Kirkwood [mailto:[email protected]]\n> > > > > Sent: den 21 december 2000 16:49\n> > > > > To: Oliver Elphick\n> > > > > Cc: [email protected]\n> > > > > Subject: Re: [HACKERS] SSL Connections\n> > > > > \n> > > > > \n> > > > > On Wed, 20 Dec 2000, Oliver Elphick wrote:\n> > > > > \n> > > > > > To create a quick self-signed certificate, use the \n> CA.pl script\n> > > > > > included in OpenSSL:\n> > > > > > \n> > > > > > CA.pl -newcert\n> > > > > \n> > > > > Or you can do it manually:\n> > > > > \n> > > > > openssl req -new -text -out cert.req (you will have to enter \n> > > > > a password)\n> > > > > mv privkey.pem cert.pem.pw\n> > > > > openssl rsa -in cert.pem.pw -out cert.pem (this removes \n> > > the password)\n> > > > > openssl req -x509 -in cert.req -text -key cert.pem \n> -out cert.cert\n> > > > > \n> > > > > Matthew.\n> > > > > \n> > > > \n> > > \n> > > [ Attachment, skipping... ]\n> > > \n> > > \n> > > -- \n> > > Bruce Momjian | http://candle.pha.pa.us\n> > > [email protected] | (610) 853-3000\n> > > + If your life is a hard drive, | 830 Blythe Avenue\n> > > + Christ can be your backup. | Drexel Hill, \n> > > Pennsylvania 19026\n> > > \n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, \n> Pennsylvania 19026\n> \n", "msg_date": "Wed, 24 Jan 2001 15:58:22 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [PATCHES] RE: SSL Connections [doc PATCH]" }, { "msg_contents": "\nBut shouldn't we remove it to make it clearer?\n\n> Needs fixing - no. 
The current version *works*.\n> The fix would remove one unnecessary step from it, but it still *works* in\n> it's current state.\n> \n> Sorry about this - I've missed looking at it.\n> \n> //Magnus\n> \n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:[email protected]]\n> > Sent: den 24 januari 2001 15:47\n> > To: Magnus Hagander\n> > Cc: PostgreSQL-development; PostgreSQL-documentation\n> > Subject: Re: [PATCHES] RE: SSL Connections [doc PATCH]\n> > \n> > \n> > \n> > Again, is this something that needs fixing? Just a YES or NO is all I\n> > need.\n> > \n> > \n> > \n> > > It looks Ok, but it has one unnecessary step. There is no \n> > need to do the \"mv\n> > > privkey.pem cert.pem.pw\" if you just use \"privkey.pem\" in \n> > the following\n> > > openssl command (e.g. openssl rsa -in privkey.pem -out cert.pem\".\n> > > But there is nothing wrong with it as it is now, as far as \n> > I can see.\n> > > \n> > > \n> > > //Magnus\n> > > \n> > > \n> > > > -----Original Message-----\n> > > > From: Bruce Momjian [mailto:[email protected]]\n> > > > Sent: den 21 december 2000 20:15\n> > > > To: Magnus Hagander\n> > > > Cc: 'Matthew Kirkwood'; '[email protected]'\n> > > > Subject: Re: [PATCHES] RE: SSL Connections [doc PATCH]\n> > > > \n> > > > \n> > > > I have applied an earlier patch to this file for SSL. \n> > Could you check\n> > > > the current tree and see how you like it?\n> > > > \n> > > > \n> > > > > Thanks for that one!\n> > > > > \n> > > > > Here is a patch to update the documentation based on this - \n> > > > this should make\n> > > > > it less dependant on the version of OpenSSL used.\n> > > > > \n> > > > > //Magnus\n> > > > > \n> > > > > \n> > > > > \n> > > > > > -----Original Message-----\n> > > > > > From: Matthew Kirkwood [mailto:[email protected]]\n> > > > > > Sent: den 21 december 2000 16:49\n> > > > > > To: Oliver Elphick\n> > > > > > Cc: [email protected]\n> > > > > > Subject: Re: [HACKERS] SSL Connections\n> > > > > > \n> > > > > > \n> > > > > > On Wed, 20 Dec 2000, Oliver Elphick wrote:\n> > > > > > \n> > > > > > > To create a quick self-signed certificate, use the \n> > CA.pl script\n> > > > > > > included in OpenSSL:\n> > > > > > > \n> > > > > > > CA.pl -newcert\n> > > > > > \n> > > > > > Or you can do it manually:\n> > > > > > \n> > > > > > openssl req -new -text -out cert.req (you will have to enter \n> > > > > > a password)\n> > > > > > mv privkey.pem cert.pem.pw\n> > > > > > openssl rsa -in cert.pem.pw -out cert.pem (this removes \n> > > > the password)\n> > > > > > openssl req -x509 -in cert.req -text -key cert.pem \n> > -out cert.cert\n> > > > > > \n> > > > > > Matthew.\n> > > > > > \n> > > > > \n> > > > \n> > > > [ Attachment, skipping... ]\n> > > > \n> > > > \n> > > > -- \n> > > > Bruce Momjian | http://candle.pha.pa.us\n> > > > [email protected] | (610) 853-3000\n> > > > + If your life is a hard drive, | 830 Blythe Avenue\n> > > > + Christ can be your backup. | Drexel Hill, \n> > > > Pennsylvania 19026\n> > > > \n> > > \n> > \n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, \n> > Pennsylvania 19026\n> > \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 10:02:46 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] RE: SSL Connections [doc PATCH]" } ]
[ { "msg_contents": "That would probably be good, yes :-)\n\nYou shuold then change:\nmv privkey.pem cert.pem.pw\nopenssl rsa -in cert.pem.pw -out cert.pem\n\nto\nopenssl rsa -in privkey.pem -out cert.pem\n\n(Sorry, don't have access to the SGML source now, so I can't give you a\npatch)\n\n//Magnus\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: den 24 januari 2001 16:03\n> To: Magnus Hagander\n> Cc: PostgreSQL-development; PostgreSQL-documentation\n> Subject: Re: [PATCHES] RE: SSL Connections [doc PATCH]\n> \n> \n> \n> But shouldn't we remove it to make it clearer?\n> \n> > Needs fixing - no. The current version *works*.\n> > The fix would remove one unnecessary step from it, but it \n> still *works* in\n> > it's current state.\n> > \n> > Sorry about this - I've missed looking at it.\n> > \n> > //Magnus\n> > \n> > > -----Original Message-----\n> > > From: Bruce Momjian [mailto:[email protected]]\n> > > Sent: den 24 januari 2001 15:47\n> > > To: Magnus Hagander\n> > > Cc: PostgreSQL-development; PostgreSQL-documentation\n> > > Subject: Re: [PATCHES] RE: SSL Connections [doc PATCH]\n> > > \n> > > \n> > > \n> > > Again, is this something that needs fixing? Just a YES \n> or NO is all I\n> > > need.\n> > > \n> > > \n> > > \n> > > > It looks Ok, but it has one unnecessary step. There is no \n> > > need to do the \"mv\n> > > > privkey.pem cert.pem.pw\" if you just use \"privkey.pem\" in \n> > > the following\n> > > > openssl command (e.g. openssl rsa -in privkey.pem -out \n> cert.pem\".\n> > > > But there is nothing wrong with it as it is now, as far as \n> > > I can see.\n> > > > \n> > > > \n> > > > //Magnus\n> > > > \n> > > > \n> > > > > -----Original Message-----\n> > > > > From: Bruce Momjian [mailto:[email protected]]\n> > > > > Sent: den 21 december 2000 20:15\n> > > > > To: Magnus Hagander\n> > > > > Cc: 'Matthew Kirkwood'; '[email protected]'\n> > > > > Subject: Re: [PATCHES] RE: SSL Connections [doc PATCH]\n> > > > > \n> > > > > \n> > > > > I have applied an earlier patch to this file for SSL. 
\n> > > Could you check\n> > > > > the current tree and see how you like it?\n> > > > > \n> > > > > \n> > > > > > Thanks for that one!\n> > > > > > \n> > > > > > Here is a patch to update the documentation based on this - \n> > > > > this should make\n> > > > > > it less dependant on the version of OpenSSL used.\n> > > > > > \n> > > > > > //Magnus\n> > > > > > \n> > > > > > \n> > > > > > \n> > > > > > > -----Original Message-----\n> > > > > > > From: Matthew Kirkwood [mailto:[email protected]]\n> > > > > > > Sent: den 21 december 2000 16:49\n> > > > > > > To: Oliver Elphick\n> > > > > > > Cc: [email protected]\n> > > > > > > Subject: Re: [HACKERS] SSL Connections\n> > > > > > > \n> > > > > > > \n> > > > > > > On Wed, 20 Dec 2000, Oliver Elphick wrote:\n> > > > > > > \n> > > > > > > > To create a quick self-signed certificate, use the \n> > > CA.pl script\n> > > > > > > > included in OpenSSL:\n> > > > > > > > \n> > > > > > > > CA.pl -newcert\n> > > > > > > \n> > > > > > > Or you can do it manually:\n> > > > > > > \n> > > > > > > openssl req -new -text -out cert.req (you will \n> have to enter \n> > > > > > > a password)\n> > > > > > > mv privkey.pem cert.pem.pw\n> > > > > > > openssl rsa -in cert.pem.pw -out cert.pem (this removes \n> > > > > the password)\n> > > > > > > openssl req -x509 -in cert.req -text -key cert.pem \n> > > -out cert.cert\n> > > > > > > \n> > > > > > > Matthew.\n> > > > > > > \n> > > > > > \n> > > > > \n> > > > > [ Attachment, skipping... ]\n> > > > > \n> > > > > \n> > > > > -- \n> > > > > Bruce Momjian | \nhttp://candle.pha.pa.us\n> > > > [email protected] | (610) 853-3000\n> > > > + If your life is a hard drive, | 830 Blythe Avenue\n> > > > + Christ can be your backup. | Drexel Hill, \n> > > > Pennsylvania 19026\n> > > > \n> > > \n> > \n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, \n> > Pennsylvania 19026\n> > \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 16:08:39 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [PATCHES] RE: SSL Connections [doc PATCH]" }, { "msg_contents": "> That would probably be good, yes :-)\n> \n> You shuold then change:\n> mv privkey.pem cert.pem.pw\n> openssl rsa -in cert.pem.pw -out cert.pem\n> \n> to\n> openssl rsa -in privkey.pem -out cert.pem\n> \n> (Sorry, don't have access to the SGML source now, so I can't give you a\n> patch)\n\nOK, the SGML diff is:\n\n---------------------------------------------------------------------------\n\nIndex: doc/src/sgml/runtime.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/runtime.sgml,v\nretrieving revision 1.46\ndiff -c -r1.46 runtime.sgml\n*** doc/src/sgml/runtime.sgml\t2001/01/08 21:01:54\t1.46\n--- doc/src/sgml/runtime.sgml\t2001/01/24 15:17:09\n***************\n*** 1911,1918 ****\n To remove the passphrase (as you must if you want automatic start-up of\n the postmaster), run the commands\n <programlisting>\n! mv privkey.pem cert.pem.pw\n! openssl rsa -in cert.pem.pw -out cert.pem \n </programlisting>\n Enter the old passphrase to unlock the existing key. 
Now do\n <programlisting>\n--- 1911,1917 ----\n To remove the passphrase (as you must if you want automatic start-up of\n the postmaster), run the commands\n <programlisting>\n! openssl rsa -in privkey.pem -out cert.pem\n </programlisting>\n Enter the old passphrase to unlock the existing key. Now do\n <programlisting>\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 10:18:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] RE: SSL Connections [doc PATCH]" } ]
[ { "msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > We're losing this battle anyway. Look into \n> src/interfaces/libpq/libpq.rc.\n> \n> Ugh. Magnus, is there any reasonable way to generate that \n> thing on the fly on Win32?\nIt's the same thing as with version.h - e.g. not really :-( It can be done,\nbut I doubt it can be done cleanly.\n\n\n> One could imagine fixing this in configure --- have configure generate\n> libpq.rc from libpq.rc.in, and then treat libpq.rc as part of the\n> distribution the same as we do for gram.c and so forth. The version\n> info could get substituted into config.h.win32 the same way, \n> I suppose.\n> \n> This is pretty ugly, but you could look at it as being no different\n> from providing gram.c for those without bison: ship those dependent\n> files that can't be remade without tools that may not exist on the\n> target platform.\n> \n> You'll probably say \"that's more trouble than it's worth\", but version\n> info in a file that's only used by a marginally-supported platform is\n> just the kind of thing that humans will forget to update.\n\nIf it is possible to do that, then I think it would be the best. (And\nputting it in both a .h and the .rc file). It wuold definitly make things\ncleaner-looking for the end user :-)\n\nI have no idea how to do this, though, so I can't submit a patch. But if\nsomeone were to do it and tell me where/how it goes into a header, I can\nupdate the win32 patch to work with it...\n\nRegards,\n Magnus\n", "msg_date": "Wed, 24 Jan 2001 16:48:32 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "RE: FW: Postgresql on win32 " }, { "msg_contents": "Magnus Hagander writes:\n\n> > Peter Eisentraut <[email protected]> writes:\n> > > We're losing this battle anyway. Look into\n> > src/interfaces/libpq/libpq.rc.\n> >\n> > Ugh. Magnus, is there any reasonable way to generate that\n> > thing on the fly on Win32?\n> It's the same thing as with version.h - e.g. not really :-( It can be done,\n> but I doubt it can be done cleanly.\n\n> I have no idea how to do this, though, so I can't submit a patch. But if\n> someone were to do it and tell me where/how it goes into a header, I can\n> update the win32 patch to work with it...\n\nSince all files are now up to date for 7.1 I don't feel a lot of urge to\nwork on this right now. Maybe when we change this version again.\n\nBut realistically we're going to have to hand-maintain config.h.win32\nanyway to account for new configure tests.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 24 Jan 2001 21:36:15 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "RE: FW: Postgresql on win32 " }, { "msg_contents": "I have added ./include/config.h.win32 to the RELEASE_CHANGES update\nlist.\n\n> Magnus Hagander writes:\n> \n> > > Peter Eisentraut <[email protected]> writes:\n> > > > We're losing this battle anyway. Look into\n> > > src/interfaces/libpq/libpq.rc.\n> > >\n> > > Ugh. Magnus, is there any reasonable way to generate that\n> > > thing on the fly on Win32?\n> > It's the same thing as with version.h - e.g. not really :-( It can be done,\n> > but I doubt it can be done cleanly.\n> \n> > I have no idea how to do this, though, so I can't submit a patch. 
But if\n> > someone were to do it and tell me where/how it goes into a header, I can\n> > update the win32 patch to work with it...\n> \n> Since all files are now up to date for 7.1 I don't feel a lot of urge to\n> work on this right now. Maybe when we change this version again.\n> \n> But realistically we're going to have to hand-maintain config.h.win32\n> anyway to account for new configure tests.\n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 19:01:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Postgresql on win32" } ]
[ { "msg_contents": "I know it is a pain to have to deal with all these items, but we have to\ndo this for every release. It helps to make our releases more complete\nbecause all open issues are resolved.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 10:59:18 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Cleanup time" } ]
[ { "msg_contents": "> > 1. For 1st phase we'll place into log \"prepared-to-commit\" record\n> > and this phase will be accomplished after record is \n> > flushed on disk.\n> > At this point transaction may be committed at any time because of\n> > all its modifications are logged. But it still may be rolled back\n> > if this phase failed on other sites of distributed system.\n> \n> 1st phase will also need to do all the delayed constraint checks,\n> and all other work a commit currently does, that could possibly lead \n> to a transaction abort. The 2nd phase of 2phase commit is not \n\nIt was assumed.\n\nVadim\n", "msg_date": "Wed, 24 Jan 2001 11:19:27 -0800", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Re: AW: Re: MySQL and BerkleyDB (fwd)" } ]