[ { "msg_contents": "RPMs for PostgreSQL 7.0.2 are now available at\nftp://ftp.postgresql.org/pub/binary/v7.0.2/redhat-RPM/RPMS/redhat-6.x\n\nThere is only one major change from the 7.0-3 RPMset (other than the uprev to\n7.0.2, which involves the removal of the pre-built PostScript documentation),\nand that is the removal of the pl/perl subpackage. If you want pl/perl, the\nlines to build it are still in the spec file, just commented out. There have been reports\nof difficulties building and/or running pl/perl on platforms other than x86. I\nstill want to thank Karl DeBisschop for getting it to build on x86.\n\nThere are a number of minor changes; see the spec file's changelog for details.\n \nPlease read the README.rpm file, distributed in the main RPM in\n/usr/doc/postgresql-7.0.2/README.rpm, as well as on the ftp site as\nftp://ftp.postgresql.org/pub/binary/v7.0.2/redhat-RPM/README.\n\nSoon we will also have LinuxPPC binary RPM's available, courtesy of Murray Todd\nWilliams. If you have a platform that we don't have binary RPM's for, and you\nwant to contribute binary RPMs, please let me know. The only condition is that\nthe absolute minimum spec file changes are to be made -- if the changes are more\nthan a little, I'll need your spec file and any patches you may have added.\n\nIf you are able to get the RPMset to build on an RPM-based OS other than RedHat\nLinux, send me the patches necessary to do so, along with complete platform\ninformation (OS, OS version, CPU Type, CPU model, GCC or other CC version, and\nany patches made to the main PostgreSQL tarball necessary to build). My goal\nis to get a single _source_ RPM for all RPM-based OS's (more than just Linux\ncan run RPM....). Thanks in advance for your help.\n\nAnd, of course, if you find packaging problems, let me know, either through the\[email protected] mailing list or by e-mailing me directly at\[email protected].\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 5 Jun 2000 22:19:51 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 7.0.2-1 RPMset available." } ]
[ { "msg_contents": "Argh, I've still not got the hang of applying to a tree yet. It's in the\nmain development tree.\n\nIn both the org/postgresql/jdbc1/ResultSet.java and\norg/postgresql/jdbc2/ResultSet.java files, the method getTimestamp() has a\nline setting a SimpleDateFormat. It should read:\n\nSimpleDateFormat df = new SimpleDateFormat(\"yyyy-MM-dd HH:mm:ss\");\n\nPeter\n\n> -----Original Message-----\n> From:\tBruce Momjian [SMTP:[email protected]]\n> Sent:\tThursday, June 01, 2000 5:29 PM\n> To:\tPeter Mount\n> Cc:\tPostgreSQL-development\n> Subject:\tRe: [HACKERS] 7.0.1 is ready\n> \n> Peter, I do not see these patches applied in the REL7_0_PATCHES tree. \n> If you went to send me a patch, I can apply it into the tree.\n> \n> [ Charset ISO-8859-1 unsupported, converting... ]\n> > Bruce, some more additions:\n> > \n> > JDBC ResultSet.getTimestamp() fix (Gregory Krasnow & Floyd Marinescu)\n> > \n> > Peter\n> > \n> > -- \n> > Peter Mount\n> > Enterprise Support\n> > Maidstone Borough Council\n> > Any views stated are my own, and not those of Maidstone Borough Council.\n> \n> > \n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Jun 2000 07:55:58 +0100 ", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: 7.0.1 is ready" } ]
[ { "msg_contents": "Ah, ok. At first I thought I was loosing it ;-)\n\n--\nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council\n\n> -----Original Message-----\n> From:\tBruce Momjian [SMTP:[email protected]]\n> Sent:\tFriday, June 02, 2000 3:55 PM\n> To:\tPeter Mount\n> Cc:\tPostgreSQL-development\n> Subject:\tRe: [HACKERS] 7.0.1 is ready\n> \n> Got it. I will put it under 7.0.1 and it will appear in 7.0.2. Sorry I\n> was confused and did not put it in the first time.\n> \n> \n> [ Charset ISO-8859-1 unsupported, converting... ]\n> > Bruce, some more additions:\n> > \n> > JDBC ResultSet.getTimestamp() fix (Gregory Krasnow & Floyd Marinescu)\n> > \n> > Peter\n> > \n> > -- \n> > Peter Mount\n> > Enterprise Support\n> > Maidstone Borough Council\n> > Any views stated are my own, and not those of Maidstone Borough Council.\n> \n> > \n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Jun 2000 08:34:17 +0100 ", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: 7.0.1 is ready" } ]
[ { "msg_contents": "\nCan anyone help me remove the shift/reduce conflicts from the syntax? It\nworks perfectly, but since you guys won't accept shift/reduce\nconflicts....\n\n-------- Original Message --------\nSubject: [HACKERS] gram.y help, ONLY syntax\nDate: Sat, 27 May 2000 14:13:00 +1000\nFrom: Chris Bitmead <[email protected]>\nTo: Postgres Hackers List <[email protected]>\n\n\nI've made changes to gram.y to reduce the conflicts down to a couple of\nharmless shift/reduce conflicts, but I don't seem to have the right\nblack-magic incantations to remove these last ones. They seem to be\nrelated to ONLY syntax. Can anybody help?\n\n\nftp://ftp.tech.com.au/pub/gram.y.gz\nftp://ftp.tech.com.au/pub/y.output.gz\nftp://ftp.tech.com.au/pub/patch.only.gz\n", "msg_date": "Tue, 06 Jun 2000 22:54:09 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "GRAM.Y help.............." }, { "msg_contents": "Hi,\n\ndoes your patch still work if you make ONLY a ColLabel and not a \nTokenId, this would lead to ONLY not being a ColId and by that \nremoving all unresolvable conflicts in the grammar? This patch \nagainst your gram.y does this. I have tested it with bison 1.27,\nand verified that the conflicts are gone, but I have not tested \nit for correct behavior in postgres. I did not have the time to \nunderstand the whole grammar so I might have missed something \nobvious.\n\nRegards,\nFredrik Estreen\n\nChris Bitmead wrote:\n> \n> Can anyone help me remove the shift/reduce conflicts from the syntax? It\n> works perfectly, but since you guys won't accept shift/reduce\n> conflicts....\n> \n> -------- Original Message --------\n> Subject: [HACKERS] gram.y help, ONLY syntax\n> Date: Sat, 27 May 2000 14:13:00 +1000\n> From: Chris Bitmead <[email protected]>\n> To: Postgres Hackers List <[email protected]>\n> \n> I've made changes to gram.y to reduce the conflicts down to a couple of\n> harmless shift/reduce conflicts, but I don't seem to have the right\n> black-magic incantations to remove these last ones. They seem to be\n> related to ONLY syntax. Can anybody help?\n> \n> ftp://ftp.tech.com.au/pub/gram.y.gz\n> ftp://ftp.tech.com.au/pub/y.output.gz\n> ftp://ftp.tech.com.au/pub/patch.only.gz", "msg_date": "Tue, 06 Jun 2000 21:19:26 +0200", "msg_from": "Fredrik Estreen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GRAM.Y help.............." }, { "msg_contents": "Fredrik Estreen wrote:\n\n> does your patch still work if you make ONLY a ColLabel and not a\n> TokenId, \n\nYou're a genius!\n\nI'm re-submitting this OO patch... Did the 7.1 tree start yet?\n\nftp://ftp.tech.com.au/pub/patch.only.gz\n\n-- \nChris Bitmead\nmailto:[email protected]\n", "msg_date": "Wed, 07 Jun 2000 21:36:17 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] GRAM.Y help.............." }, { "msg_contents": "> Fredrik Estreen wrote:\n> \n> > does your patch still work if you make ONLY a ColLabel and not a\n> > TokenId, \n> \n> You're a genius!\n> \n> I'm re-submitting this OO patch... Did the 7.1 tree start yet?\n\nYes.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jun 2000 15:49:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] GRAM.Y help.............." } ]
[ { "msg_contents": "\n> > having a separate tuple for each individual kind of access \n> right will\n> > consume an unreasonable amount of space --- both on disk and in the\n> > syscache, if a cache is used for this table.\n> \n> That's a valid concern, but unfortunately things aren't that easy. For\n> each access right you also have to store what user granted \n> that privilege\n\neach new grantor will create a new row. \n(thus the primary key needs to be extended: object, grantee, grantor) \nIn the system cache you will probably want a key of object+grantee\nwith a merged result over the grantors.\nRemember that you can be granted a certain priviledge by more than \none grantor, thus your key was not correct to begin with.\n(sorry I didn't realize that earlier)\n\n> and whether it's grantable, and for SELECT also whether it \n> includes the\n> \"hierarchy option\" (has to do with table inheritance somehow).\n\nseparate priviledge \"h\"\n\n> \n> Say you store all privileges in an array, then you'd either \n> need to encode\n> all 3 1/2 pieces of information into one single data type and make an\n> array thereof (like `array of record privtype; privgrantor;\n> privgrantable'), which doesn't really make things easier, or you have\n> three arrays per tuple, which makes things worse. Also \n> querying arrays is\n> painful.\n\nI actually didn't mean real arrays, but e.g. a char(n) where each position \nmarks a certain priv. e.g. \"SU-ID---\" = with grant option, \"su-id---\"\nwithout grant options.\nThis has the advantage of still beeing human readable and does not waste too\n\nmuch space.\n\n> So the break-even point for this new scheme is when \n> users have on\n> average at least 1.4 privileges (78/54) granted to them on one object.\n> Considering that such objects as types and functions will in \n> any case have\n> at most one privilege (USAGE or EXECUTE, resp.), that there \n> are groups (or\n> roles), that column level privileges will probably tend to have sparse\n> tuples of this kind, and that object owners are short-circuited in any\n> case, then it is not at all clear whether that figure will be reached.\n\nWell I think it is not for us to decide, how the security system is used \nby the users. It has to be designed to allow heavy use, and still be\nefficient.\n\nAndreas\n", "msg_date": "Tue, 6 Jun 2000 15:31:41 +0200 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: AW: Proposal for enhancements of privilege syst\n\tem" }, { "msg_contents": "Zeugswetter Andreas SB writes:\n\n> Remember that you can be granted a certain priviledge by more than \n> one grantor, thus your key was not correct to begin with.\n\nOh, good catch. That will put things in perspective.\n\n> I actually didn't mean real arrays, but e.g. a char(n) where each position \n> marks a certain priv. e.g. \"SU-ID---\" = with grant option, \"su-id---\"\n> without grant options.\n\nHmm, yes, that seems like a good idea. (Of course we'll run out of letters\nbefore we have taken care of UPDATE, UNDER, USAGE. :)\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 8 Jun 2000 01:21:15 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: Proposal for enhancements of privilege\n syst em" } ]
[ { "msg_contents": "Is PostgreSQL going to start using odd release numbers for development\nversions? This is how the Linux kernel is, and some other projects.\n\nVersion.release.revision/build\n\nLike 7.0.x would be the current stable branch. 7.1.x, the current development\nbranch. The next stable branch would be 7.2.x. Within the current even\nrelease stable branch, maybe only do bug fixes. In the odd dev releases, focus\non new/experiemental. Both branches could have very frequent *.x\nrevisions/builds. I guess you'd have to apply patches to both branches\nsometimes, like when a bug is found in 7.0.x, it is patched and the same patch\napplied to the 7.1.x if needed.\n\nThis way there would be a clear distinction for users what is likely to be good\nfor production use and what is still unproven.\n\n-- \nRobert B. Easter\[email protected]\n", "msg_date": "Tue, 6 Jun 2000 11:34:36 -0400", "msg_from": "\"Robert B. Easter\" <[email protected]>", "msg_from_op": true, "msg_subject": "Odd release numbers for development versions?" }, { "msg_contents": "> Is PostgreSQL going to start using odd release numbers for development\n> versions? This is how the Linux kernel is, and some other projects.\n> \n> Version.release.revision/build\n> \n> Like 7.0.x would be the current stable branch. 7.1.x, the current development\n> branch. The next stable branch would be 7.2.x. Within the current even\n> release stable branch, maybe only do bug fixes. In the odd dev releases, focus\n> on new/experiemental. Both branches could have very frequent *.x\n> revisions/builds. I guess you'd have to apply patches to both branches\n> sometimes, like when a bug is found in 7.0.x, it is patched and the same patch\n> applied to the 7.1.x if needed.\n> \n> This way there would be a clear distinction for users what is likely to be good\n> for production use and what is still unproven.\n\nNo even/odd mess. Every release is stable.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Jun 2000 12:07:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd release numbers for development versions?" }, { "msg_contents": "\"Robert B. Easter\" <[email protected]> writes:\n> Like 7.0.x would be the current stable branch. 7.1.x, the current\n> development branch. The next stable branch would be 7.2.x. Within\n> the current even release stable branch, maybe only do bug fixes. In\n> the odd dev releases, focus on new/experiemental. Both branches could\n> have very frequent *.x revisions/builds.\n\nThis has been proposed before, and rejected before. The key developers\nmostly don't believe that the Linux style \"release early, release often\"\napproach is appropriate for the Postgres project. Few people are\ninterested in running beta-quality databases, so there's no point in\ngoing to the effort of maintaining two development tracks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Jun 2000 12:19:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd release numbers for development versions? " }, { "msg_contents": "\nnot in this life time ...\n\n\nOn Tue, 6 Jun 2000, Robert B. Easter wrote:\n\n> Is PostgreSQL going to start using odd release numbers for development\n> versions? This is how the Linux kernel is, and some other projects.\n> \n> Version.release.revision/build\n> \n> Like 7.0.x would be the current stable branch. 7.1.x, the current development\n> branch. The next stable branch would be 7.2.x. Within the current even\n> release stable branch, maybe only do bug fixes. In the odd dev releases, focus\n> on new/experiemental. Both branches could have very frequent *.x\n> revisions/builds. I guess you'd have to apply patches to both branches\n> sometimes, like when a bug is found in 7.0.x, it is patched and the same patch\n> applied to the 7.1.x if needed.\n> \n> This way there would be a clear distinction for users what is likely to be good\n> for production use and what is still unproven.\n> \n> -- \n> Robert B. Easter\n> [email protected]\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jun 2000 14:16:50 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd release numbers for development versions?" }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Robert B. Easter\" <[email protected]> writes:\n> > Like 7.0.x would be the current stable branch. 7.1.x, the current\n> > development branch. The next stable branch would be 7.2.x. Within\n> > the current even release stable branch, maybe only do bug fixes. In\n> > the odd dev releases, focus on new/experiemental. Both branches could\n> > have very frequent *.x revisions/builds.\n \n> This has been proposed before, and rejected before. The key developers\n> mostly don't believe that the Linux style \"release early, release often\"\n> approach is appropriate for the Postgres project. Few people are\n> interested in running beta-quality databases, so there's no point in\n> going to the effort of maintaining two development tracks.\n\nIf the Linux kernel used CVS like a reasonable Free Software effort,\nthen the current odd/even split wouldn't even be necessary. We use CVS\n-- if you want development trees to play with, you fetch the tree by\nanon CVS and update as often as you need to. There is absolutely no\nneed for a Linux-style release system with CVS.\n\nDon't get me wrong; I like and use Linux. I just like the PostgreSQL\ndevelopment model better.\n\n\"Release Stable; release when necessary\" is all that is needed when the\ndevelopers use CVS properly. You want to be a developer? Grab the CVS\ntree and start hacking. Patches are readily accepted if they are\nacceptable.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 06 Jun 2000 14:49:32 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd release numbers for development versions?" }, { "msg_contents": "On Tue, 06 Jun 2000, Lamar Owen wrote:\n> Tom Lane wrote:\n> > \n> > \"Robert B. Easter\" <[email protected]> writes:\n> > > Like 7.0.x would be the current stable branch. 7.1.x, the current\n> > > development branch. The next stable branch would be 7.2.x. Within\n> > > the current even release stable branch, maybe only do bug fixes. In\n> > > the odd dev releases, focus on new/experiemental. Both branches could\n> > > have very frequent *.x revisions/builds.\n> \n> > This has been proposed before, and rejected before. The key developers\n> > mostly don't believe that the Linux style \"release early, release often\"\n> > approach is appropriate for the Postgres project. Few people are\n> > interested in running beta-quality databases, so there's no point in\n> > going to the effort of maintaining two development tracks.\n\nOk. If the key developers feel its not a good idea for this project, I can\naccept that. Just suggested it since now would have been an opportune time to\nstart that numbering scheme.\n\n> \"Release Stable; release when necessary\" is all that is needed when the\n> developers use CVS properly. You want to be a developer? Grab the CVS\n> tree and start hacking. Patches are readily accepted if they are\n> acceptable.\n\nThanks for the invitation. I'd like contributing something back to this\nproject someday.\n\n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n-- \nRobert B. Easter\[email protected]\n", "msg_date": "Tue, 6 Jun 2000 15:13:33 -0400", "msg_from": "\"Robert B. Easter\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Odd release numbers for development versions?" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> If the Linux kernel used CVS like a reasonable Free Software effort,\n> then the current odd/even split wouldn't even be necessary. We use CVS\n> -- if you want development trees to play with, you fetch the tree by\n> anon CVS and update as often as you need to. There is absolutely no\n> need for a Linux-style release system with CVS.\n\nActually, another way to look at it is that we do have two release\ntracks. There is the bleeding edge (CVS sources, or the nightly\nsnapshot if you don't want to be bothered with setting up CVS), and\nthere is the prior stable release (RELm_n CVS branch, which we update\nwith critical patches and re-release as needed). Seems like the main\npractical difference from the Linux release model is that we don't\nbother to make formal labeled/numbered tarballs of the development track\nuntil we are in beta-test cycle. You want development track at other\ntimes, you just grab the latest sources.\n\nSo far, the alternating development and betatest/bugfix cycle has worked\nreally well for the needs of the Postgres project, so I don't think\nanyone is eager to change that approach.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Jun 2000 17:40:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd release numbers for development versions? " }, { "msg_contents": "On Tue, 06 Jun 2000, Robert B. Easter wrote:\n> On Tue, 06 Jun 2000, Lamar Owen wrote:\n\n> > \"Release Stable; release when necessary\" is all that is needed when the\n> > developers use CVS properly. You want to be a developer? Grab the CVS\n> > tree and start hacking. Patches are readily accepted if they are\n> > acceptable.\n \n> Thanks for the invitation. I'd like contributing something back to this\n> project someday.\n\nI'm sure that you could find something in our TODO list to meet your fancy...\n:-)\n\nMy response wasn't intended as a flame; although, in hindsight, I see I was\nharsher than I meant to be. Sorry 'bout that.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 6 Jun 2000 22:11:33 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd release numbers for development versions?" }, { "msg_contents": "On Tue, 06 Jun 2000, Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> > anon CVS and update as often as you need to. There is absolutely no\n> > need for a Linux-style release system with CVS.\n \n> Seems like the main\n> practical difference from the Linux release model is that we don't\n> bother to make formal labeled/numbered tarballs of the development track\n> until we are in beta-test cycle. You want development track at other\n> times, you just grab the latest sources.\n \n> So far, the alternating development and betatest/bugfix cycle has worked\n> really well for the needs of the Postgres project, so I don't think\n> anyone is eager to change that approach.\n\nWith our use of CVS, there is no need for the split release, IOW. We have a\nsimilar, yet more open system here -- one I rather like. Of course, if you\njust read the website, you don't get that feeling -- it takes a year or so on\nthe list to really understand what this group is all about. (No criticism\nintended of the website, Vince -- it does its job nicely).\n\nFor the first two years of my use of PostgreSQL, the only information I had was\non the website -- boy, was I in for a pleasant surprise when I subscribed to\nthis list!\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 6 Jun 2000 22:13:31 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd release numbers for development versions?" } ]
[ { "msg_contents": "Every Joe User can currently run\n\n env PGOPTIONS='-d99 -tpa -tpl -te' psql\n\nand stuff the server log with relative garbage that he will never be able\nto see anyway.\n\nAs I don't believe it feasible to do superuser checking before the options\nparsing it seems to me that these option in particular (and -s as well)\nneed to be \"secure\". Those desiring to diagnose transient problems can use\nSET debug_level, etc. which does have a superuser check in place. For\npermanent debug level changes there's of course this shiny new\nconfiguration file and the HUP signal.\n\nComments?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 6 Jun 2000 18:07:44 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Protection of debugging options" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Every Joe User can currently run\n> env PGOPTIONS='-d99 -tpa -tpl -te' psql\n> and stuff the server log with relative garbage that he will never be able\n> to see anyway.\n\n> As I don't believe it feasible to do superuser checking before the options\n> parsing it seems to me that these option in particular (and -s as well)\n> need to be \"secure\". Those desiring to diagnose transient problems can use\n> SET debug_level, etc. which does have a superuser check in place.\n\nI object loudly --- this would be a major pain in the rear end.\n\nCurrently it's possible to trace the queries issued by an application by\nthe simple expedient of setting PGOPTIONS=\"-d something\" before starting\nthe app; no cooperation from the app is necessary. To get the same\nfunctionality via SET you'd need to teach the app about the SET command,\nset up some sort of command line switch or environment variable for it\nto look at, etc etc.\n\nFurthermore, I do not think that \"unprivileged users stuffing the log\"\nis an adequate reason for taking away this functionality. A person who\nwants to cause trouble by bloating the log will certainly be able to do\nso anyway.\n\nFinally, where did you get the idea that the equivalent SET vars should\nbe superuser restricted? I object to that, too. By doing that you've\nessentially removed *any* way to trace an app on demand, unless one is\nwilling to run the app as superuser. This is taking concern for\nsecurity too far --- if anything, you are making the system *less*\nsecure by forcing people to run things as superuser just to find out\nwhat they're doing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Jun 2000 13:28:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Protection of debugging options " }, { "msg_contents": "Tom Lane writes:\n\n> I object loudly --- this would be a major pain in the rear end.\n\nIt sure would. But it's a trade off between that and log stuffing, I\nsuppose.\n\n> Finally, where did you get the idea that the equivalent SET vars should\n> be superuser restricted?\n\nNowhere. The ones that were adopted from pg_options had a general\nsuperuser restriction. Any that might be new just played along. The idea\nwas to leave it as is at first and then over time look at each option in\ndetail.\n\nOkay, so all logging/debugging options available to the public.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 7 Jun 2000 18:30:42 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Protection of debugging options " } ]
[ { "msg_contents": "As previously indicated, the configure script has now moved up a\ndirectory. release_prep is up to date, but your personally crafted build\nscripts might not be.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 7 Jun 2000 00:07:54 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Advisory: configure script has moved" } ]
[ { "msg_contents": "I am looking at how the system indexes are used.\n\nIn the past, I went through and changed all system table lookups that\nreturn a single value into system cache lookups.\n\nI now see several cases where we are doing heap scans of system tables,\nrather than using indexes. There are cases that can return several\nrows, so we can't use the cache. However, we could use index scans\nrather than heap scans.\n\nAn interesting case is the pg_listener index in commands/async.c. Our\nprevious index was by relname/pid. By changing this index to\npid/relname, I can add index scans based in pid to prevent the many heap\nscans in the file. I am sure there are other places that can be\nimproved.\n\nI can start fixing them, but as I remember, someone was thinking of\nmaking heap/index scans use the same interface. Can I get a status on\nthat?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jun 2000 00:11:37 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Use of system indexes" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Bruce Momjian\n>\n> I am looking at how the system indexes are used.\n>\n> In the past, I went through and changed all system table lookups that\n> return a single value into system cache lookups.\n>\n> I now see several cases where we are doing heap scans of system tables,\n> rather than using indexes. There are cases that can return several\n> rows, so we can't use the cache. However, we could use index scans\n> rather than heap scans.\n>\n\n[snip]\n\n>\n> I can start fixing them, but as I remember, someone was thinking of\n> making heap/index scans use the same interface. Can I get a status on\n> that?\n>\n\nIn my trial implementation of ALTER TABLE DROP COLUMN in command.c,\nthere is a trial using\nsystable_beginscan(),systable_endscan(),systable_getnext()\nfunctions.\nHowever there would better unification of system table scan.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Wed, 7 Jun 2000 15:19:45 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Use of system indexes" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> An interesting case is the pg_listener index in commands/async.c. Our\n> previous index was by relname/pid. By changing this index to\n> pid/relname, I can add index scans based in pid to prevent the many heap\n> scans in the file. I am sure there are other places that can be\n> improved.\n\nMy opinion about that is that the pg_listener operations are better off\nwith no indexes at all. pg_listener is small and *very* frequently\nupdated...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Jun 2000 03:35:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use of system indexes " } ]
[ { "msg_contents": "Just do a search for heap_beginscan() and look at all those system table\nheap scans. Clearly, for large installations, we should be doing index\nscans.\n\nSeems like we should consider identifying all of the needed system\nindexes. I can add all indexes at one time, and people can go around\nand modify heap scans to index scans with heap_fetches of the tid if\nrequired.\n\nComments?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jun 2000 00:23:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Look at heap_beginscan()" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Just do a search for heap_beginscan() and look at all those system table\n> heap scans. Clearly, for large installations, we should be doing index\n> scans.\n\nThere are a bunch of heap_beginscan() calls, but my first impression\nfrom a quick scan is that most of them are in very non-performance-\ncritical paths --- not to mention paths that are deliberately ignoring\nindexes because they're bootstrap or reindex code. Furthermore, some\nof the remainder are scans of pretty darn small tables. (Do we need\nto convert sequential scans of pg_am to indexed scans? Nyet.)\n\nI'd be real hesitant to do a wholesale conversion, and even more\nhesitant to add new system indexes to support indexscans that we\nhave not *proven* to be performance bottlenecks.\n\nIt's certainly something worth looking at, since we've identified\na couple of places like this that are indeed hotspots. But we need\nto convince ourselves that other places are also hotspots before\nwe add overhead in hopes of making those places faster.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Jun 2000 03:33:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Look at heap_beginscan() " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Just do a search for heap_beginscan() and look at all those system table\n> > heap scans. Clearly, for large installations, we should be doing index\n> > scans.\n> \n> There are a bunch of heap_beginscan() calls, but my first impression\n> from a quick scan is that most of them are in very non-performance-\n> critical paths --- not to mention paths that are deliberately ignoring\n> indexes because they're bootstrap or reindex code. Furthermore, some\n> of the remainder are scans of pretty darn small tables. (Do we need\n> to convert sequential scans of pg_am to indexed scans? Nyet.)\n> \n> I'd be real hesitant to do a wholesale conversion, and even more\n> hesitant to add new system indexes to support indexscans that we\n> have not *proven* to be performance bottlenecks.\n\nWell, how do we know where the critical paths are? Seems adding an\nindex is a cheap way to know we have all paths covered. If the table is\nnot updated, the index is really no overhead except 16k of disk space.\n\n> It's certainly something worth looking at, since we've identified\n> a couple of places like this that are indeed hotspots. But we need\n> to convince ourselves that other places are also hotspots before\n> we add overhead in hopes of making those places faster.\n\nAre you suggesting that heap scan is faster than index in most of these\ncases? How many rows does it take for a heap scan to be faster than an\nindex scan?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jun 2000 12:45:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Look at heap_beginscan()" } ]
[ { "msg_contents": "I was asked for a good SQL book for beginners. Does anyone have a\nrecommendation. It's so long since I learned SQL that I simply do not know\nanymore how I got started.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Wed, 7 Jun 2000 08:31:24 +0200", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "OFFTOPIC: SQL book" }, { "msg_contents": "\nThe Practical SQL Handbook ... \n\nOn Wed, 7 Jun 2000, Michael Meskes wrote:\n\n> I was asked for a good SQL book for beginners. Does anyone have a\n> recommendation. It's so long since I learned SQL that I simply do not know\n> anymore how I got started.\n> \n> Michael\n> -- \n> Michael Meskes\n> [email protected]\n> Go SF 49ers! Go Rhein Fire!\n> Use Debian GNU/Linux! Use PostgreSQL!\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 7 Jun 2000 03:57:44 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OFFTOPIC: SQL book" }, { "msg_contents": "\nMichael Meskes <[email protected]> wrote:\n\n> I was asked for a good SQL book for beginners. Does anyone have a\n> recommendation. It's so long since I learned SQL that I simply do not know\n> anymore how I got started.\n\nWell, as a relative beginner, allow me to offer a suggestion\nto stay away from the book I've been reading: \"Understanging\nThe New SQL: A Complete Guide\" by Melton and Simon. This is\nnot the worst technical book I've read, but something seems\nto be subtly *off* about it... it's a little too\nrepetitious, it jumps back and fourth just a little too\nmuch. I'm not sure what the problem is exactly, but I think\nthere's a hint in the way they insist pedantically that\n\"SQL\" is not to be pronounced \"sequel\".\n\nThere's a bunch of material available on-line, of course: \n\nThis claims to be the only comprehensive on-line\nintroduction to SQL:\n\n http://w3.one.net/~jhoffman/sqltut.htm\n\nAn SQL on-line course, that evidentally allows you to play\nwith a database interactively to test out examples:\n\n http://sqlcourse.com/\n\nAnd Philip Greenspun has one of the more entertaining\nintroductions:\n\n http://www.arsdigita.com/books/panda/databases-choosing\n\n", "msg_date": "Wed, 07 Jun 2000 00:33:10 -0700", "msg_from": "Joe Brenner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OFFTOPIC: SQL book " }, { "msg_contents": "On Wed, Jun 07, 2000 at 12:33:10AM -0700, Joe Brenner wrote:\n> There's a bunch of material available on-line, of course: \n> ... \n\nThanks.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Wed, 7 Jun 2000 16:19:00 +0200", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OFFTOPIC: SQL book" } ]
[ { "msg_contents": "Hello all,\n\nI tried to use vacuumlo on quite big database with large objects (approx 20000-30000).\nAnd I found out that it will take 5-6 hours to cleanup blobs even when there's no unlinked ones.\nI've looked inside the code and added some improvements.\nNow it took 20-30 seconds to complete.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------", "msg_date": "Wed, 7 Jun 2000 14:42:06 +0700", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "Slightly faster version of vacuumlo" } ]
[ { "msg_contents": "============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\nYour name\t\t\t:\tVaclav Moucha\nYour email address\t:\[email protected]\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) \t\t:\tIntel\nPentium III/350\n\n Operating System (example: Linux 2.0.26 ELF) \t:\tLinux 2.2.14\n\n PostgreSQL version (example: PostgreSQL-6.5.1):\tpostgresql-7.0.1\n\n Compiler used (example: gcc 2.8.0)\t\t:\tgcc version\negcs-2.91.66 \n\t\n19990314 (egcs-1.1.2 release)\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\nI will find out some bug with like operator if I use locales.\n\nSteps to involve a bug result:\n\n1. Compilation\n ./configure --enable-locale\t# not needed for RPMS precompiled binaries \n\n2. Starting postmaster\n export LC_CTYPE=cs_CZ\n export LC_COLLATE=cs_CZ\t\t# this setting is important for the\nbug result\n postmaster -S -D /home/pgsql/data -o '-Fe'\t\n\n3. SQL steps\n create table test (name text);\n insert into test values ('ďż˝');\t# the first char is E1 from LATIN 2\ncoding\n insert into test values ('ďż˝b');\n create index test_index on test (name);\n set cpu_tuple_cost=1;\t\t# force backend to use index\nscanning\n select * from test where name like 'ďż˝%';\n\nBUG: Only 1 line is selected with 'ďż˝' only instead of both lines.\n\n\nPlease describe a way to repeat the problem. Please try to provide a\nconcise reproducible example, if at all possible: \n----------------------------------------------------------------------\n\n\n\nIf you know how this problem might be fixed, list the solution below:\n---------------------------------------------------------------------\n", "msg_date": "Wed, 7 Jun 2000 10:16:15 +0200 ", "msg_from": "=?ISO-8859-2?Q?Moucha_V=E1clav?= <[email protected]>", "msg_from_op": true, "msg_subject": "LIKE bug" }, { "msg_contents": "=?ISO-8859-2?Q?Moucha_V=E1clav?= <[email protected]> writes:\n> 1. Compilation\n> ./configure --enable-locale\t# not needed for RPMS precompiled binaries \n\n> 2. Starting postmaster\n> export LC_CTYPE=cs_CZ\n> export LC_COLLATE=cs_CZ\t\t# this setting is important for the\n> bug result\n> postmaster -S -D /home/pgsql/data -o '-Fe'\t\n\n> 3. SQL steps\n> create table test (name text);\n> insert into test values ('�');\t# the first char is E1 from LATIN 2\n> coding\n> insert into test values ('�b');\n> create index test_index on test (name);\n> set cpu_tuple_cost=1;\t\t# force backend to use index\n> scanning\n> select * from test where name like '�%';\n\n> BUG: Only 1 line is selected with '�' only instead of both lines.\n\nThe problem here is that given the search pattern '\\341%', the planner\ngenerates index limit conditions\n\tname >= '\\341' AND name < '\\342';\n\nApparently, in CZ locale it is true that '\\341' is less than '\\342',\nbut it does not follow from that that all strings starting with '\\341'\nare less than '\\342'. In fact '\\341b' is considered greater than '\\342'.\n\nSince '\\341' and '\\342' are two different accented forms of 'a'\n(if I'm looking at the right character set), this is perhaps not so\nimprobable as all that. Evidently the collation rule is that different\naccent forms sort the same unless the strings would otherwise be\nconsidered equal, in which case an ordering is assigned to them.\n\nSo, the rule we thought we had for generating index bounds falls flat,\nand we're back to the same old question: given a proposed prefix string,\nhow can we generate bounds that are certain to be considered <= and >=\nall strings starting with that prefix?\n\nI am now thinking that maybe we should search for a string that compares\ngreater than \"fooz\" when the prefix is \"foo\" --- that is, append a 'z'\nto the prefix string. But I wouldn't be surprised if that fails too\nin some locales.\n\nI'm also wondering if the left-hand inequality ('foo' <= any string\nbeginning with 'foo') might fail in some locales ... we haven't seen\nit reported but who knows ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Jun 2000 22:22:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Sigh, LIKE indexing is *still* broken in foreign locales" }, { "msg_contents": "\nOn Wed, 07 Jun 2000 22:22:06 -0400 Tom Lane wrote:\n\n> Since '\\341' and '\\342' are two different accented forms of 'a'\n> (if I'm looking at the right character set), this is perhaps not so\n> improbable as all that. Evidently the collation rule is that different\n> accent forms sort the same unless the strings would otherwise be\n> considered equal, in which case an ordering is assigned to them.\n\nI thought that was common, but while I've worked on\ninternationalisation issues sometimes I'm no linguist.\n\n> So, the rule we thought we had for generating index bounds falls flat,\n> and we're back to the same old question: given a proposed prefix string,\n> how can we generate bounds that are certain to be considered <= and >=\n> all strings starting with that prefix?\n\nTo confess ignorance, why does PostgreSQL need to generate such\nbounds? Complete string comparisons with a locale aware function such\nas strcoll() are safe. Using less than a full string is tricky\nindeed, and I'm not sure is possible in general although it might be.\n\nOther problematic cases are likely to include one-to-two collations (�\nin German, for example) and two-to-one collations (the reverse, but\nI've forgotten my example. Anyone?)\n\nThen there are wide characters, including some encodings that are\nstateful.\n\nRegards,\n\nGiles\n\n\n", "msg_date": "Thu, 08 Jun 2000 16:41:59 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sigh,\n\tLIKE indexing is *still* broken in foreign locales" }, { "msg_contents": "Hi,\n\nGiles Lean:\n> > So, the rule we thought we had for generating index bounds falls flat,\n> > and we're back to the same old question: given a proposed prefix string,\n> > how can we generate bounds that are certain to be considered <= and >=\n> > all strings starting with that prefix?\n> \n> To confess ignorance, why does PostgreSQL need to generate such\n> bounds?\n\nTo find the position in the index where it should start scanning.\n\n> Then there are wide characters, including some encodings that are\n> stateful.\n\nPersonally, I am in the \"store everything on the server in Unicode\"\ncamp. Let the parser convert everything to Unicode on the way in, \nand vice versa.\n\nThere's no sense, IMHO, in burdening the SQL core with multiple\ncharacter encoding schemes.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nI loathe that low vice curiosity.\n -- Lord Byron\n", "msg_date": "Thu, 8 Jun 2000 08:53:25 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sigh,\n LIKE indexing is *still* broken in foreign locales" }, { "msg_contents": "\nOn Thu, 8 Jun 2000 08:53:25 +0200 \"Matthias Urlichs\" wrote:\n\n> To find the position in the index where it should start scanning.\n\nHmm. That I guess is faster than locating the prefix given to LIKE in\nthe index and scanning back as well as forward.\n\n> Personally, I am in the \"store everything on the server in Unicode\"\n> camp. Let the parser convert everything to Unicode on the way in, \n> and vice versa.\n\nThat would help the charater set problem, although there are people\nthat argue that Unicode is not acceptable for all languages. (I only\nnote this; I don't have an opinion.)\n\nI don't see that using Unicode helps very much with collation though\n-- surely collation must still be locale specific, and the problems\nwith two-pass algorithms, and two-to-one and one-to-two mappings are\nunchanged?\n\nRegards,\n\nGiles\n\n", "msg_date": "Thu, 08 Jun 2000 17:57:06 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Re: Sigh,\n\tLIKE indexing is *still* broken in foreign locales" }, { "msg_contents": "Giles Lean <[email protected]> writes:\n> On Thu, 8 Jun 2000 08:53:25 +0200 \"Matthias Urlichs\" wrote:\n\n>> To find the position in the index where it should start scanning.\n\n> Hmm. That I guess is faster than locating the prefix given to LIKE in\n> the index and scanning back as well as forward.\n\nWouldn't help. The reason why we need both an upper and lower bound\nis to know where to stop scanning as well as where to start. \"Scan\noutward from the middle\" doesn't tell you when you can stop.\n\nThe bounds do not have to be perfectly tight, in the sense of being\nthe least string >= or largest string <= the desired strings. It's\nOK if we scan a few extra tuples in some cases. But we have to have\nreasonably close bounds or we can't implement LIKE with an index.\n\n>> Personally, I am in the \"store everything on the server in Unicode\"\n>> camp. Let the parser convert everything to Unicode on the way in, \n>> and vice versa.\n\nAFAIK, none of our server-side charset encodings are stateful --- and\nI for one will argue that we must never accept any, for precisely the\nsort of problem being discussed here. (If a client wants to use such\na brain-dead encoding, that's not our problem...)\n\nHowever, the problem at hand has little to do with encodings. I think\nit's more a matter of understanding the possible variations of\ncontext-sensitive collation orders.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Jun 2000 10:04:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sigh, LIKE indexing is *still* broken in foreign locales" }, { "msg_contents": "\nOn Thu, 08 Jun 2000 10:04:11 -0400 Tom Lane wrote:\n\n> The bounds do not have to be perfectly tight, in the sense of being\n> the least string >= or largest string <= the desired strings. It's\n> OK if we scan a few extra tuples in some cases. But we have to have\n> reasonably close bounds or we can't implement LIKE with an index.\n\nDetermining the bounding (sub-)strings looks like a very hard problem.\n\nI think there is enough information in a POSIX locale to determine\nwhat the rules for constructing such bounds would be ... but there is\nno programatic interface to determine the rules a locale uses for\ncollation. (I have no idea what non-POSIX systems provide.)\n\n(The localedef program can build a locale. strcoll() and strxfrm()\ncan use the collation information. That's all I see.)\n\nIn the absence of a way to do this \"right\" we need someone to see a\n\"good enough\" hack that happens to work everywhere, or else give up\nusing indexes for LIKE which I doubt would please anyone. I suppose\nthe mismatch comes about because LIKE is about pattern matching and\nnot collation. :(\n\nRegards,\n\nGiles\n", "msg_date": "Fri, 09 Jun 2000 06:45:21 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sigh, LIKE indexing is *still* broken in foreign locales " }, { "msg_contents": "On Wed, 7 Jun 2000, Tom Lane wrote:\n\n> =?ISO-8859-2?Q?Moucha_V=E1clav?= <[email protected]> writes:\n> > 1. Compilation\n> > ./configure --enable-locale\t# not needed for RPMS precompiled binaries \n> \n> > 2. Starting postmaster\n> > export LC_CTYPE=cs_CZ\n> > export LC_COLLATE=cs_CZ\t\t# this setting is important for the\n> > bug result\n> > postmaster -S -D /home/pgsql/data -o '-Fe'\t\n> \n> > 3. SQL steps\n> > create table test (name text);\n> > insert into test values ('�');\t# the first char is E1 from LATIN 2\n> > coding\n> > insert into test values ('�b');\n> > create index test_index on test (name);\n> > set cpu_tuple_cost=1;\t\t# force backend to use index\n> > scanning\n> > select * from test where name like '�%';\n> \n> > BUG: Only 1 line is selected with '�' only instead of both lines.\n> \n> The problem here is that given the search pattern '\\341%', the planner\n> generates index limit conditions\n> \tname >= '\\341' AND name < '\\342';\n> \n> Apparently, in CZ locale it is true that '\\341' is less than '\\342',\n> but it does not follow from that that all strings starting with '\\341'\n> are less than '\\342'. In fact '\\341b' is considered greater than '\\342'.\n> \n\nHm. The character that follows 0xe1 in iso-8859-2 order is\n\"a + circumflex\" (Oxe2) which is - as far as I know - not\npart of the Czech alphapet. The successors of 0xe1 in\nCzech collation order (code points from iso-8859-2)\nare 0x042 (capital B) and 0x062 (small B).\n\n=> name >= '0xe1' AND (name < '0x062' OR name < '0x042')\n \nprovided comparision is done by strcoll().\n\nAnother interresting feature of Czech collation is:\n\nH < \"CH\" < I\n\nand:\n\nB < C < C + CARON < D .. < H < \"CH\" < I\n\nSo what happens with \"WHERE name like 'Czec%`\" ?\n\nRegards\nErich\n\n\n", "msg_date": "Fri, 9 Jun 2000 02:25:52 +0200 (CEST)", "msg_from": "Erich Stamberger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sigh, LIKE indexing is *still* broken in foreign\n locales" }, { "msg_contents": "Tom Lane writes:\n\n> Evidently the collation rule is that different accent forms sort the\n> same unless the strings would otherwise be considered equal, in which\n> case an ordering is assigned to them.\n\nYes, that's fairly common.\n\n> I am now thinking that maybe we should search for a string that compares\n> greater than \"fooz\" when the prefix is \"foo\" --- that is, append a 'z'\n> to the prefix string. But I wouldn't be surprised if that fails too\n> in some locales.\n\nIt most definitely will. sv_SE, no_NO, and hr_HR are the early candidates.\nAnd there's also nothing that says that you can only use LIKE on letters,\nLatin letters at that.\n\nThe only thing you can really do in this direction is to append the very\nlast character in the complete collation sequence, if there's a way to\nfind that out. If there isn't, it might be worth hard-coding a few popular\nones.\n\n> I'm also wondering if the left-hand inequality ('foo' <= any string\n> beginning with 'foo') might fail in some locales ... we haven't seen\n> it reported but who knows ...\n\nI think that's pretty safe. Shorter strings are always \"less than\" longer\nones.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 9 Jun 2000 02:57:56 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sigh,\n LIKE indexing is *still* broken in foreign locales" }, { "msg_contents": "> -----Original Message-----\n> From: Giles Lean\n> \n> On Thu, 08 Jun 2000 10:04:11 -0400 Tom Lane wrote:\n> \n> > The bounds do not have to be perfectly tight, in the sense of being\n> > the least string >= or largest string <= the desired strings. It's\n> > OK if we scan a few extra tuples in some cases. But we have to have\n> > reasonably close bounds or we can't implement LIKE with an index.\n> \n> Determining the bounding (sub-)strings looks like a very hard problem.\n>\n\n[snip] \n\n> \n> In the absence of a way to do this \"right\" we need someone to see a\n> \"good enough\" hack that happens to work everywhere, or else give up\n> using indexes for LIKE which I doubt would please anyone. I suppose\n> the mismatch comes about because LIKE is about pattern matching and\n> not collation. :(\n>\n\nCurrently optimizer doesn't choose index scan unless the bounds\nare sufficiently restrictive.\nHowever we may be able to choose index scan more often if we\ncould have various type(e.g. LIKE) of qualificactions in index qual.\nThough we have to scan an index file entirely,we may be able to\navoid looking up for the heap relation for sufficiently many index\ntuples.\n\nThis may also give us another possibity.\nIndex scan may be available for the queries like\n\tselect * from .. where .. LIKE '%abc%';\n\nComments ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Fri, 9 Jun 2000 11:03:11 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Sigh, LIKE indexing is *still* broken in foreign locales " }, { "msg_contents": "Erich Stamberger <[email protected]> writes:\n> Another interresting feature of Czech collation is:\n> H < \"CH\" < I\n\nOh my, that *is* interesting (using the word in the spirit of the\nancient Chinese curse, \"May you live in interesting times\"...)\n\n> So what happens with \"WHERE name like 'Czec%`\" ?\n\nThe wrong thing, without doubt.\n\nWould it help any to strip off one character of the given pattern?\nThat is, if the pattern is LIKE 'foo%', forget about the last 'o'\nand generate bounds like 'fo' <= x <= 'fp' ?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Jun 2000 03:23:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sigh, LIKE indexing is *still* broken in foreign locales " }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\n>>>>> \"Tom\" == Tom Lane <[email protected]> writes:\n\n Tom> Erich Stamberger <[email protected]> writes:\n >> Another interresting feature of Czech collation is: H < \"CH\" < I\n\n >> So what happens with \"WHERE name like 'Czec%`\" ?\n\n Tom> The wrong thing, without doubt.\n\nI think this is a conceptual problem: some languages use more than one\ncharacter to designate a single \"letter\" in their orthography. Even\nmore \"familiar\" languagues such as Spanish do this: ch, ll, rr. Maybe\nthey've changed since I was taught, but my Spanish dictionaries are\nput \"ciudad\" before \"chico\" (in fact, the latter is in a separate\nsection after \"C\" and before \"D\").\n\nSorry if this doesn't help find a solution, though....\n\nroland\n- -- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. Roberts, PhD Unix Software Solutions\[email protected] 76-15 113th Street, Apt 3B\[email protected] Forest Hills, NY 11375\n\n-----BEGIN PGP SIGNATURE-----\nVersion: 2.6.3a\nCharset: noconv\nComment: Processed by Mailcrypt 3.5.4, an Emacs/PGP interface\n\niQCVAwUBOUEQFOoW38lmvDvNAQED5wQAnwW1BpbCeghJWXh/gTMezRfDfLq2eSPu\n4+H0X6Xjm2Gbegrv1SiWlSCjD1yR/FYIgTQpbMCubXlAtadu5tc4auLGNsOdSNIC\n3lT2Bc+En5BxT06dsX33QApU1B4GP73KFxSJu2+ngRMKExTFEojM7qzLlKfGzDeG\nonaP+RPbtJc=\n=Dlnw\n-----END PGP SIGNATURE-----\n", "msg_date": "09 Jun 2000 11:44:00 -0400", "msg_from": "Roland Roberts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sigh, LIKE indexing is *still* broken in foreign locales" }, { "msg_contents": "\nOn Fri, 9 Jun 2000 02:57:56 +0200 (CEST) Peter Eisentraut wrote:\n\n> I think that's pretty safe. Shorter strings are always \"less than\" longer\n> ones.\n\nNope: many-to-one collation elements break this too.\n\nRegards,\n\nGiles\n", "msg_date": "Sat, 10 Jun 2000 06:07:28 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sigh,\n\tLIKE indexing is *still* broken in foreign locales" } ]
[ { "msg_contents": "\n> Maybe this question is more aproppriate for \n> [email protected]\n> but for some strange reason my subscription confirmation was \n> rejected to\n> all new pgsql-hackers-xxx lists with a message like this\n\nDo those lists really exist ? They do not show up on the Mailing Lists\nsection of www.postgresql.org and thus those lists are somewhat \nclosed. Is this intentional ?\n\nI think the separation was a bad idea.\n\nAndreas\n", "msg_date": "Wed, 7 Jun 2000 12:33:23 +0200 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Problems with char array set to NULL" } ]
[ { "msg_contents": "Hi all,\n\nI'm faced to a big problem!!\n\nI have to do this for a customer:\n\ncreate a database on my server; this database will be queried and updated\nthrough the web --- this is easy\n\nH!ave the same databse on my customer server. This databse will be queried\nand updated by the customer . -esay too\n\nThe 2 databases have to be synchronized both ways ! Huh How can I do\nthat???\n\nCan you give me some pointers... I'd love to do it with postgresql\ninstead of going to Oracle just because they can replicate bases...\n\nTIA\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Wed, 7 Jun 2000 16:38:44 +0200 (MET DST)", "msg_from": "Olivier PRENANT <[email protected]>", "msg_from_op": true, "msg_subject": "Big projet, please help" }, { "msg_contents": "Olivier PRENANT wrote:\n> \n> Hi all,\n> \n> I'm faced to a big problem!!\n> \n> I have to do this for a customer:\n> \n> create a database on my server; this database will be queried and updated\n> through the web --- this is easy\n> \n> H!ave the same databse on my customer server. This databse will be queried\n> and updated by the customer . -esay too\n> \n> The 2 databases have to be synchronized both ways ! Huh How can I do\n> that???\n\nJust a thought:\n\nhave two sets of tables client_xxx and web_xxx and allow updates on\n'local' \ntables only.\n\nfor queries create views like \n(select * from client_xxx union select * from web_xxx)\n\nif client wants to modyify web tables have her do it over web.\n\nto synchronize just copy over the tables\n\n> Can you give me some pointers... I'd love to do it with postgresql\n> instead of going to Oracle just because they can replicate bases...\n\nA general info on _file_system_ replication can be found at\n\nhttp://www.coda.cs.cmu.edu/\n\nit probably won't help you much with db replication\n\n\n\nA distributed db based on early postgreSQL versions is at \n\nhttp://s2k-ftp.cs.berkeley.edu:8000/mariposa/\n\n\n\n---------\nHannu\n", "msg_date": "Thu, 08 Jun 2000 10:09:58 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big projet, please help" } ]
[ { "msg_contents": "Can someone send me the emacs settings for editing java code for\npostgresql. I have the suggested c-code settings set up, but the java\ncode seems to be different as well. I'm currently using JDE in xemacs.\n\n----------------------------------------------------------------\nTravis Bauer | CS Grad Student | IU |www.cs.indiana.edu/~trbauer\n----------------------------------------------------------------\n\n", "msg_date": "Wed, 7 Jun 2000 11:09:18 -0500 (EST)", "msg_from": "Travis Bauer <[email protected]>", "msg_from_op": true, "msg_subject": "java settings in emacs for postgres" }, { "msg_contents": "> Can someone send me the emacs settings for editing java code for\n> postgresql. I have the suggested c-code settings set up, but the java\n> code seems to be different as well. I'm currently using JDE in xemacs.\n> \n\nYes, it appears to be quite strange to me too.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jun 2000 16:01:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: java settings in emacs for postgres" } ]
[ { "msg_contents": "hi folks,\n\nmore and more software packages are relying on odbc 3.0\nwas wondering if when the unix odbc with postgresql was\nexpected to be up to the odbc 3.0 specs.\n\nthanks in advance.\n\njeff\n\n\n", "msg_date": "Wed, 7 Jun 2000 15:57:22 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": true, "msg_subject": "odbc" }, { "msg_contents": "Jeff MacDonald wrote:\n> \n> hi folks,\n> \n> more and more software packages are relying on odbc 3.0\n> was wondering if when the unix odbc with postgresql was\n> expected to be up to the odbc 3.0 specs.\n\nThe unixODBC package includes an ODBC-3 driver for PostgreSQL. It is\nGPL'd, and thus can't be directly included in our tree. But, it's out\nthere.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 07 Jun 2000 15:10:32 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] odbc" } ]
[ { "msg_contents": "Hi\n\nI have noticed a deadlock happening on 7.0.1 on updates.\n\nThe backends just lock, and take up as much CPU as they can. I kill\nthe postmaster, and the backends stay alive, using CPU at the highest\nrate possible. The operations arent that expensive, just a single line\nof update.\n\nAnyone else seen this? Anyone dealing with this?\nIf not, I will start to try and get some debug information.\n\nAlso, I tried to make an index and had the following problem\n\nsearch=# select count(*) from search_word_te;\n count \n-------\n 71864\n(1 row)\n\nsearch=# create index search_word_te_index on search_word_te (word,wordnum);\nERROR: btree: index item size 3040 exceeds maximum 2717\n\nWhat is this all about? It worked fine on 6.5.2\n\n\t\t\t\t\t\t~Michael\n", "msg_date": "Wed, 7 Jun 2000 23:07:37 +0100 (BST)", "msg_from": "Grim <[email protected]>", "msg_from_op": true, "msg_subject": "Apparent deadlock 7.0.1" }, { "msg_contents": "Grim <[email protected]> writes:\n> I have noticed a deadlock happening on 7.0.1 on updates.\n> The backends just lock, and take up as much CPU as they can. I kill\n> the postmaster, and the backends stay alive, using CPU at the highest\n> rate possible. The operations arent that expensive, just a single line\n> of update.\n> Anyone else seen this? Anyone dealing with this?\n\nNews to me. What sort of hardware are you running on? It sort of\nsounds like the spinlock code not working as it should --- and since\nspinlocks are done with platform-dependent assembler, it matters...\n\n> search=# create index search_word_te_index on search_word_te (word,wordnum);\n> ERROR: btree: index item size 3040 exceeds maximum 2717\n> What is this all about? It worked fine on 6.5.2\n\nIf you had the same data in 6.5.2 then you were living on borrowed time.\nThe btree code assumes it can fit at least three keys per page, and if\nyou have some keys > 1/3 page then sooner or later three of them will\nneed to be stored on the same page. 6.5.2 didn't complain in advance,\nit just crashed hard when that situation came up. 7.0 prevents the\nproblem by not letting you store an oversized key to begin with.\n\n(Hopefully all these tuple-size-related problems will go away in 7.1.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Jun 2000 19:16:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Apparent deadlock 7.0.1 " }, { "msg_contents": "Tom Lane wrote:\n> \n> Grim <[email protected]> writes:\n> > I have noticed a deadlock happening on 7.0.1 on updates.\n> > The backends just lock, and take up as much CPU as they can. I kill\n> > the postmaster, and the backends stay alive, using CPU at the highest\n> > rate possible. The operations arent that expensive, just a single line\n> > of update.\n> > Anyone else seen this? Anyone dealing with this?\n> \n> News to me. What sort of hardware are you running on? It sort of\n> sounds like the spinlock code not working as it should --- and since\n> spinlocks are done with platform-dependent assembler, it matters...\n\nThe hardware/software is:\n\nLinux kernel 2.2.15 (SMP kernel)\nGlibc 2.1.1\nDual Intel PIII/500\n\nThere are usually about 30 connections to the database at any one time.\n\n> The btree code assumes it can fit at least three keys per page, and if\n> you have some keys > 1/3 page then sooner or later three of them will\n> need to be stored on the same page. 6.5.2 didn't complain in advance,\n> it just crashed hard when that situation came up. 
7.0 prevents the\n> problem by not letting you store an oversized key to begin with.\n\nAhhh, it was the tuple size, I thought it meant the number of records in\nthe index or something, seeing as coincidentally that was the biggest\ntable.\n\nDeleted one row of 3K, and all works fine now, thanks!\n\n\t\t\t\t\t~Michael\n", "msg_date": "Thu, 08 Jun 2000 02:16:03 +0100", "msg_from": "Michael Simms <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Apparent deadlock 7.0.1" }, { "msg_contents": "Michael Simms <[email protected]> writes:\n>>>> I have noticed a deadlock happening on 7.0.1 on updates.\n>>>> The backends just lock, and take up as much CPU as they can. I kill\n>>>> the postmaster, and the backends stay alive, using CPU at the highest\n>>>> rate possible. The operations arent that expensive, just a single line\n>>>> of update.\n>>>> Anyone else seen this? Anyone dealing with this?\n>> \n>> News to me. What sort of hardware are you running on? It sort of\n>> sounds like the spinlock code not working as it should --- and since\n>> spinlocks are done with platform-dependent assembler, it matters...\n\n> The hardware/software is:\n\n> Linux kernel 2.2.15 (SMP kernel)\n> Glibc 2.1.1\n> Dual Intel PIII/500\n\nDual CPUs huh? I have heard of motherboards that have (misdesigned)\nmemory caching such that the two CPUs don't reliably see each others'\nupdates to a shared memory location. Naturally that plays hell with the\nspinlock code :-(. It might be necessary to insert some kind of cache-\nflushing instruction into the spinlock wait loop to ensure that the\nCPUs see each others' changes to the lock.\n\nThis is all theory at this point, and a hole in the theory is that the\nbackends ought to give up with a \"stuck spinlock\" error after a minute\nor two of not being able to grab the lock. I assume you have left them\ngo at it for longer than that without seeing such an error?\n\nAnyway, the next step is to \"kill -ABORT\" some of the stuck processes\nand get backtraces from their coredumps to see where they are stuck.\nIf you find they are inside s_lock() then it's definitely some kind of\nspinlock problem. If not...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Jun 2000 22:43:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Apparent deadlock 7.0.1 " } ]
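The backend's spinlocks are per-platform assembler, so the following is not the s_lock code under discussion; it is a minimal sketch, using C11 atomics, of the behaviour described above: spin on a shared test-and-set flag, back off, and fail loudly ("stuck spinlock") after roughly a minute instead of looping forever. The constants are arbitrary.

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Illustrative only: the real backend uses per-platform TAS() assembler. */
#define SPINS_BEFORE_SLEEP  100
#define MAX_SLEEPS          600      /* ~1 minute at 100 ms per sleep */

static atomic_flag lock = ATOMIC_FLAG_INIT;

static void
acquire_with_timeout(void)
{
    int sleeps = 0;

    for (;;)
    {
        /* Try a burst of test-and-set attempts without sleeping. */
        for (int i = 0; i < SPINS_BEFORE_SLEEP; i++)
        {
            if (!atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
                return;          /* got the lock */
        }
        if (++sleeps > MAX_SLEEPS)
        {
            fprintf(stderr, "stuck spinlock detected, aborting\n");
            abort();
        }
        usleep(100 * 1000);      /* back off before spinning again */
    }
}

static void
release(void)
{
    atomic_flag_clear_explicit(&lock, memory_order_release);
}

int
main(void)
{
    acquire_with_timeout();
    /* ... critical section ... */
    release();
    puts("lock acquired and released");
    return 0;
}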
[ { "msg_contents": "Is anyone successfully using the NOTIFY/LISTEN mechanism in pgsql 7.0?\n\nRegards,\nEd Loehr\n", "msg_date": "Wed, 07 Jun 2000 21:37:14 -0500", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": true, "msg_subject": "[GENERAL] NOTIFY/LISTEN in pgsql 7.0" }, { "msg_contents": "Ed Loehr <[email protected]> writes:\n> Is anyone successfully using the NOTIFY/LISTEN mechanism in pgsql 7.0?\n\nIt still works AFAICT. Why, are you seeing a problem?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Jun 2000 10:14:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] NOTIFY/LISTEN in pgsql 7.0 " }, { "msg_contents": "Tom Lane wrote:\n> \n> Ed Loehr <[email protected]> writes:\n> > Is anyone successfully using the NOTIFY/LISTEN mechanism in pgsql 7.0?\n> \n> It still works AFAICT. Why, are you seeing a problem?\n\nNo. I just hadn't recalled any discussion of it in 6 months and wondered\nif it was \"bit-rotting\". The context was a discussion on pgsql-sql re\nhow to enable table-based caching of results in a modperl/DBI app. More\nspecifically, use of NOTIFY/LISTEN to alert the apps when a table had\nbeen changed in order to invalidate cached results dependent on that\ntable. After a closer look, I'm wondering how it would fit (if at all)\nwith the mod_perl/DBI API/model. [BTW, I still haven't heard anything to\ndissuade me from thinking that server-side shared-mem result-set caching\nwould be a huge performance win for many, many apps, including mine. I\nwish I better understood the lack of enthusiasm for that idea...]\n\nRegards,\nEd Loehr\n", "msg_date": "Thu, 08 Jun 2000 09:56:01 -0500", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] NOTIFY/LISTEN in pgsql 7.0" } ]
[ { "msg_contents": "Can someone comment on where we are with DROP COLUMN?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jun 2000 23:09:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "DROP COLUMN status" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Bruce Momjian\n> \n> Can someone comment on where we are with DROP COLUMN?\n>\n\nI've already committed my trial implementation 3 months ago.\nThey are $ifdef'd by _DROP_COLUMN_HACK__.\nPlease enable the feature and evaluate it.\nYou could enable the feature without initdb.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n \n", "msg_date": "Thu, 8 Jun 2000 13:07:44 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: DROP COLUMN status" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> > -----Original Message-----\n> > From: [email protected] [mailto:[email protected]]On\n> > Behalf Of Bruce Momjian\n> > \n> > Can someone comment on where we are with DROP COLUMN?\n> >\n> \n> I've already committed my trial implementation 3 months ago.\n> They are $ifdef'd by _DROP_COLUMN_HACK__.\n> Please enable the feature and evaluate it.\n> You could enable the feature without initdb.\n\nOK, can you explain how it works, and add any needed documentation so we\ncan enable it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Jun 2000 00:57:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP COLUMN status" }, { "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Thursday, June 08, 2000 1:58 PM\n> \n> [ Charset ISO-8859-1 unsupported, converting... ]\n> > > -----Original Message-----\n> > > From: [email protected] \n> [mailto:[email protected]]On\n> > > Behalf Of Bruce Momjian\n> > > \n> > > Can someone comment on where we are with DROP COLUMN?\n> > >\n> > \n> > I've already committed my trial implementation 3 months ago.\n> > They are $ifdef'd by _DROP_COLUMN_HACK__.\n> > Please enable the feature and evaluate it.\n> > You could enable the feature without initdb.\n> \n> OK, can you explain how it works, and add any needed documentation so we\n> can enable it.\n>\n\nFirst it's only a trial so I don't implement it completely.\nEspecially I don't completely drop related objects\n(FK_constraint,triggers,views etc). I don't know whether\nwe could drop them properly or not.\n\nThe implementation makes the dropped column invisible by\nchanging its attnum to -attnum - offset(currently 20) and\nattnam to (\"*already Dropped%d\",attnum). It doesn't touch\nthe table at all. 
After dropping a column insert/update\noperation regards the column as NULL and other related\nstuff simply ignores the column.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Thu, 8 Jun 2000 15:05:24 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: DROP COLUMN status" }, { "msg_contents": "Hiroshi Inoue wrote:\n> \n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:[email protected]]\n> > Sent: Thursday, June 08, 2000 1:58 PM\n> >\n> > [ Charset ISO-8859-1 unsupported, converting... ]\n> > > > -----Original Message-----\n> > > > From: [email protected]\n> > [mailto:[email protected]]On\n> > > > Behalf Of Bruce Momjian\n> > > >\n> > > > Can someone comment on where we are with DROP COLUMN?\n> > > >\n> > >\n> > > I've already committed my trial implementation 3 months ago.\n> > > They are $ifdef'd by _DROP_COLUMN_HACK__.\n> > > Please enable the feature and evaluate it.\n> > > You could enable the feature without initdb.\n> >\n> > OK, can you explain how it works, and add any needed documentation so we\n> > can enable it.\n> >\n> \n> First it's only a trial so I don't implement it completely.\n> Especially I don't completely drop related objects\n> (FK_constraint,triggers,views etc). I don't know whether\n> we could drop them properly or not.\n> \n> The implementation makes the dropped column invisible by\n> changing its attnum to -attnum - offset(currently 20) and\n> attnam to (\"*already Dropped%d\",attnum). It doesn't touch\n> the table at all. After dropping a column insert/update\n> operation regards the column as NULL and other related\n> stuff simply ignores the column.\n> \n\nIf one would do a dump/restore of the db after dropping a column, is the\ncolumn definitely gone then?\n\nRegards\nWim Ceulemans\n", "msg_date": "Thu, 08 Jun 2000 09:16:08 +0200", "msg_from": "Wim Ceulemans <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN status" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]\n> \n> Hiroshi Inoue wrote:\n> > \n> > > -----Original Message-----\n> > > From: Bruce Momjian [mailto:[email protected]]\n> > > Sent: Thursday, June 08, 2000 1:58 PM\n> > >\n> > > [ Charset ISO-8859-1 unsupported, converting... ]\n> > > > > -----Original Message-----\n> > > > > From: [email protected]\n> > > [mailto:[email protected]]On\n> > > > > Behalf Of Bruce Momjian\n> > > > >\n> > > > > Can someone comment on where we are with DROP COLUMN?\n> > > > >\n> > > >\n> > > > I've already committed my trial implementation 3 months ago.\n> > > > They are $ifdef'd by _DROP_COLUMN_HACK__.\n> > > > Please enable the feature and evaluate it.\n> > > > You could enable the feature without initdb.\n> > >\n> > > OK, can you explain how it works, and add any needed \n> documentation so we\n> > > can enable it.\n> > >\n> > \n> > First it's only a trial so I don't implement it completely.\n> > Especially I don't completely drop related objects\n> > (FK_constraint,triggers,views etc). I don't know whether\n> > we could drop them properly or not.\n> > \n> > The implementation makes the dropped column invisible by\n> > changing its attnum to -attnum - offset(currently 20) and\n> > attnam to (\"*already Dropped%d\",attnum). It doesn't touch\n> > the table at all. 
After dropping a column insert/update\n> > operation regards the column as NULL and other related\n> > stuff simply ignores the column.\n> > \n> \n> If one would do a dump/restore of the db after dropping a column, is the\n> column definitely gone then?\n>\n\nYes,if dump/restore means restore from pg_dump. pg_dump wouldn't\nsee the column definition in my implementation.\n \nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Thu, 8 Jun 2000 16:28:56 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: DROP COLUMN status" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> The implementation makes the dropped column invisible by\n> changing its attnum to -attnum - offset(currently 20) and\n> attnam to (\"*already Dropped%d\",attnum).\n\nUgh. No wonder you had to hack so many places in such an ugly fashion.\nWhy not leave the attnum as-is, and just add a bool saying \"column is\ndropped\" to pg_attribute? As long as the parser ignores columns marked\nthat way for field lookup and expansion of *, it seems the rest of the\nsystem wouldn't need to treat dropped columns specially in any way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Jun 2000 10:20:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN status " }, { "msg_contents": "> \"Hiroshi Inoue\" <[email protected]> writes:\n> > The implementation makes the dropped column invisible by\n> > changing its attnum to -attnum - offset(currently 20) and\n> > attnam to (\"*already Dropped%d\",attnum).\n> \n> Ugh. No wonder you had to hack so many places in such an ugly fashion.\n> Why not leave the attnum as-is, and just add a bool saying \"column is\n> dropped\" to pg_attribute? As long as the parser ignores columns marked\n> that way for field lookup and expansion of *, it seems the rest of the\n> system wouldn't need to treat dropped columns specially in any way.\n\nIf we leave it as positive, don't we have to change user applications\nthat query pg_attribute so they also know to skip it?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Jun 2000 11:41:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP COLUMN status" }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > The implementation makes the dropped column invisible by\n> > changing its attnum to -attnum - offset(currently 20) and\n> > attnam to (\"*already Dropped%d\",attnum).\n> \n> Ugh. No wonder you had to hack so many places in such an ugly fashion.\n> Why not leave the attnum as-is, and just add a bool saying \"column is\n> dropped\" to pg_attribute?\n\nFirst,it's only a trial and I haven't gotten any final consensus.\nIt has had the following advantages as a trial.\n\n1) It doesn't require initdb.\n2) It makes debugging easier. If I've forgotten to change some \n places it would cause aborts/asserts in most cases.\n\nNow I love my trial implementation more than that of you\nsuggests(it was my original idea) because it's more robust\nthan dropped(invisible) flag implementation. 
I could hardly\nexpect that no one would ignore the invisible(dropped) flag\nforever.\n\nAnyway I had hidden details behind MACROs mostly so it\nwouldn't be so difficult to change the implementation as\nyou suggests.\n\nRegards. \n\nHiroshi Inoue\[email protected]\n", "msg_date": "Fri, 9 Jun 2000 03:01:43 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: DROP COLUMN status " }, { "msg_contents": ">>>> The implementation makes the dropped column invisible by\n>>>> changing its attnum to -attnum - offset(currently 20) and\n>>>> attnam to (\"*already Dropped%d\",attnum).\n>> \n>> Ugh. No wonder you had to hack so many places in such an ugly fashion.\n>> Why not leave the attnum as-is, and just add a bool saying \"column is\n>> dropped\" to pg_attribute? As long as the parser ignores columns marked\n>> that way for field lookup and expansion of *, it seems the rest of the\n>> system wouldn't need to treat dropped columns specially in any way.\n\n> If we leave it as positive, don't we have to change user applications\n> that query pg_attribute so they also know to skip it?\n\nGood point, but I think user applications that query pg_attribute\nare likely to have trouble anyway: if they're expecting a consecutive\nseries of attnums then they're going to lose no matter what.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Jun 2000 15:52:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN status " }, { "msg_contents": "> -----Original Message-----\n> From: Hiroshi Inoue\n> Sent: Friday, June 09, 2000 3:02 AM\n>\n> > -----Original Message-----\n> > From: Tom Lane [mailto:[email protected]]\n> >\n> > \"Hiroshi Inoue\" <[email protected]> writes:\n> > > The implementation makes the dropped column invisible by\n> > > changing its attnum to -attnum - offset(currently 20) and\n> > > attnam to (\"*already Dropped%d\",attnum).\n> >\n> > Ugh. No wonder you had to hack so many places in such an ugly fashion.\n> > Why not leave the attnum as-is, and just add a bool saying \"column is\n> > dropped\" to pg_attribute?\n>\n> Anyway I had hidden details behind MACROs mostly so it\n> wouldn't be so difficult to change the implementation as\n> you suggests.\n>\n\nI'm using the following macros(in pg_attribute.h) in my implementation.\nDROP_COLUMN_INDEX() is used only once except pg_attribute.h.\nIf there are COLUMN_IS_DROPPED() macros,doesn't it mean that\nthey should be changed at any rate ?\n\n#ifdef _DROP_COLUMN_HACK__\n/*\n * CONSTANT and MACROS for DROP COLUMN implementation\n */\n#define DROP_COLUMN_OFFSET -20\n#define COLUMN_IS_DROPPED(attribute) ((attribute)->attnum <=\nDROP_COLUMN_OFFS\nET)\n#define DROPPED_COLUMN_INDEX(attidx) (DROP_COLUMN_OFFSET - attidx)\n#define ATTRIBUTE_DROP_COLUMN(attribute) \\\n Assert((attribute)->attnum > 0); \\\n (attribute)->attnum = DROPPED_COLUMN_INDEX((attribute)->attnum); \\\n (attribute)->atttypid = (Oid) -1; \\\n (attribute)->attnotnull = false; \\\n (attribute)->atthasdef = false;\n#endif /* _DROP_COLUMN_HACK__ */\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Fri, 9 Jun 2000 09:10:57 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: DROP COLUMN status " } ]
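To make the two designs compared in this thread concrete, here is a small self-contained sketch: a loop that skips dropped columns using the negative-attnum test (the constant and comparison are taken from the macros quoted above), with the one-line flag-based alternative shown in a comment. The FakeAttribute struct and the sample data are invented for illustration and are not the pg_attribute layout.

#include <stdbool.h>
#include <stdio.h>

/* Constant and test lifted from the _DROP_COLUMN_HACK__ macros above. */
#define DROP_COLUMN_OFFSET  (-20)

typedef struct
{
    int   attnum;       /* negative and <= DROP_COLUMN_OFFSET when dropped */
    bool  attisdropped; /* the alternative: an explicit flag, attnum untouched */
    const char *attname;
} FakeAttribute;

static bool
column_is_dropped(const FakeAttribute *att)
{
    /* Encode "dropped" in the attnum itself, as the trial implementation does. */
    return att->attnum <= DROP_COLUMN_OFFSET;
    /* The flag-based alternative would simply be: return att->attisdropped; */
}

int
main(void)
{
    FakeAttribute atts[] = {
        {1, false, "id"},
        {DROP_COLUMN_OFFSET - 2, false, "*already Dropped2"},
        {3, false, "name"},
    };

    for (int i = 0; i < 3; i++)
    {
        if (column_is_dropped(&atts[i]))
            continue;           /* parser/executor must ignore the column */
        printf("visible column: %s (attnum %d)\n",
               atts[i].attname, atts[i].attnum);
    }
    return 0;
}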
[ { "msg_contents": "At 16:38 7/06/00 +0200, Olivier PRENANT wrote:\n>\n>The 2 databases have to be synchronized both ways ! Huh How can I do\n>that???\n>\n>Can you give me some pointers... I'd love to do it with postgresql\n>instead of going to Oracle just because they can replicate bases...\n>\n\nTwo way replication has some serious issues. AFAIK, it is not possible to\nreplicate both ways without some serious limitations on who updates what,\nand how they do it (ie. very careful, and quite limiting, design choices).\nThis may suit your application - eg. if the updates are only inserts on\nnon-uniquely indexed tables, and any record updates are only ever done at\nthe site that originated them (or just at one of the sites). You also will\nhave referential integrity issues to deal with.\n\nThe only commercial replication system that I am familiar with will go both\nways, but not for the same table. ie.\n\nDB1 DB2\n=== ===\nTable1 ---> Table1\nTable2 <--- Table2\n\nIf I were you, I'd be looking at updating only the clients database, and\nletting the changes replicate to the read-only web database; possibly with\nthe option of an error being reported to the submitter of the update.\n\nAs to replication in PostgreSQL, I don't think it will be there until after\nthe WAL appears, and if it's WAL-based, my guess is that it will be one-way. \n\nBut you could implement a kind of replication by using triggers on the\ntables to be replicated: write out the record key, and the operation\nperformed (add, change,delete) to another table. Then have an (hourly?)\nreplication process that sends the changes to the replicated database(s).\nPretty low-tech, but probably quite reliable.\n\nHope this helps.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 08 Jun 2000 13:09:40 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Big projet, please help" } ]
[ { "msg_contents": "What version of emacs are you using?\n\nI've been using emacs for the majority of my Java development over the last\nfew years, and atleast the versions from the last year or so have had\njava-mode installed as standard.\n\nThe version I'm currently using is:\n\nGNU Emacs 20.4.1 (i386-suse-linux, X toolkit) of Mon Nov 8 1999 on Bareis\n\nIt's SuSE 6.3\n\nPeter\n\n--\nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council\n\n> -----Original Message-----\n> From:\tTravis Bauer [SMTP:[email protected]]\n> Sent:\tWednesday, June 07, 2000 5:09 PM\n> To:\[email protected]\n> Subject:\t[HACKERS]java settings in emacs for postgres\n> \n> Can someone send me the emacs settings for editing java code for\n> postgresql. I have the suggested c-code settings set up, but the java\n> code seems to be different as well. I'm currently using JDE in xemacs.\n> \n> ----------------------------------------------------------------\n> Travis Bauer | CS Grad Student | IU |www.cs.indiana.edu/~trbauer\n> ----------------------------------------------------------------\n", "msg_date": "Thu, 8 Jun 2000 07:34:22 +0100 ", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: java settings in emacs for postgres" }, { "msg_contents": "I've been using XEmacs 21.1 (May 1999). I don't think I\"m using the\nstandard java-mode, as I have installed JDE (Java Development\nEnvironmtne). It's an open source plug-in for emacs/xemacs that provides\nsome nifty java extra. Maybe JDE has its own java-mode which is\ncausing the difference.\n\n----------------------------------------------------------------\nTravis Bauer | CS Grad Student | IU |www.cs.indiana.edu/~trbauer\n----------------------------------------------------------------\n\nOn Thu, 8 Jun 2000, Peter Mount wrote:\n\n> What version of emacs are you using?\n> \n> I've been using emacs for the majority of my Java development over the last\n> few years, and atleast the versions from the last year or so have had\n> java-mode installed as standard.\n> \n> \n\n", "msg_date": "Thu, 8 Jun 2000 09:47:05 -0500 (EST)", "msg_from": "Travis Bauer <[email protected]>", "msg_from_op": false, "msg_subject": "RE: java settings in emacs for postgres" } ]
[ { "msg_contents": "> Karel Zak writes:\n> \n> > The Oracle always directly set first week on Jan-01, but \n> day-of-week count\n> > correct... It is pretty dirty, but it is a probably set in \n> libc's mktime().\n> \n> The first week of the year is most certainly not (always) the \n> week with\n> Jan-01 in it. My understanding is that it's the first week where the\n> Thursday is in the new year, but I might be mistaken. Here in \n> Sweden much\n> of the calendaring is done based on the week of the year \n> concept, so I'm\n> pretty sure that there's some sort of standard on this. And \n> sure enough,\n> this year started on a Saturday, but according to the \n> calendars that hang\n> around here the first week of the year started on the 3rd of January.\n\nIn Sweden (and several other places), \"Week 1\" is defined as \"the first week\nthat has at least four days in the new year\".\n\nWhile it's not an authority, my MS Outlook Calendar allows me to chose from:\n\"Starts on Jan 1\", \"First 4-day week\" and \"First full week\".\nSo it would seem there are at least these three possibilities.\n\n\n//Magnus\n", "msg_date": "Thu, 8 Jun 2000 09:47:02 +0200 ", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "RE: day of week" } ]
[ { "msg_contents": "\n> > I'd be real hesitant to do a wholesale conversion, and even more\n> > hesitant to add new system indexes to support indexscans that we\n> > have not *proven* to be performance bottlenecks.\n> \n> Well, how do we know where the critical paths are? Seems adding an\n> index is a cheap way to know we have all paths covered. If \n> the table is\n> not updated, the index is really no overhead except 16k of disk space.\n\nI think the overhead is measurable, and I agree, that adding indexes\nonly to proven performance critical paths is the way to go.\n\n> \n> > It's certainly something worth looking at, since we've identified\n> > a couple of places like this that are indeed hotspots. But we need\n> > to convince ourselves that other places are also hotspots before\n> > we add overhead in hopes of making those places faster.\n> \n> Are you suggesting that heap scan is faster than index in \n> most of these\n> cases?\n\nYes, that is what I would guess.\n\n> How many rows does it take for a heap scan to be \n> faster than an\n> index scan?\n\nI would say we can seq read at least 256k before the index starts \nto perform better.\n\nThis brings me to another idea. Why do our indexes need at least \none level ? Why can't we have an index that starts with one leaf page,\nand only if that fills up introduce the first level ?\n\nAndreas\n", "msg_date": "Thu, 8 Jun 2000 10:48:34 +0200 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Look at heap_beginscan()" }, { "msg_contents": "> > > It's certainly something worth looking at, since we've identified\n> > > a couple of places like this that are indeed hotspots. But we need\n> > > to convince ourselves that other places are also hotspots before\n> > > we add overhead in hopes of making those places faster.\n> > \n> > Are you suggesting that heap scan is faster than index in \n> > most of these\n> > cases?\n> \n> Yes, that is what I would guess.\n> \n> > How many rows does it take for a heap scan to be \n> > faster than an\n> > index scan?\n> \n> I would say we can seq read at least 256k before the index starts \n> to perform better.\n> \n> This brings me to another idea. Why do our indexes need at least \n> one level ? Why can't we have an index that starts with one leaf page,\n> and only if that fills up introduce the first level ?\n\nOK, let's look at pg_type. We have many sequential scans of that table\nin a number of places. A row is added to it for every table created by\nthe user. My question is which tables do we _know_ are a fixed size,\nand which vary based on the number of tables/indexes/views installed by\nthe user. Seems in those cases, we have to use index scans because we\ndon't know what the size of the table will be. Same with sequential\nscans of pg_index, but we already know that is a problem.\n\nAnother issue is the use of the cache. If I add cache lookups to\nreplace some of the sequential scans, I would like to have indexes to\nuse for cache loads, though I realize some are saying the sequential\nscans for cache loads are faster in some cases too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Jun 2000 11:17:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Look at heap_beginscan()" } ]
[ { "msg_contents": "\n> Hi all,\n> \n> I'm faced to a big problem!!\n> \n> I have to do this for a customer:\n> \n> create a database on my server; this database will be queried \n> and updated\n> through the web --- this is easy\n> \n> H!ave the same databse on my customer server. This databse \n> will be queried\n> and updated by the customer . -esay too\n> \n> The 2 databases have to be synchronized both ways ! Huh How can I do\n> that???\n> \n> Can you give me some pointers... I'd love to do it with postgresql\n> instead of going to Oracle just because they can replicate bases...\n\nIn an environment with moderate to low update activity at least on one side\nthe simplest and most reliable replication mechanism is usually done with\ntriggers\nthat work on the primary key. \nThey provide a synchronous replication on the basis of all or nothing,\nand thus solve the problem of concurrent update to the same row\nfrom both sides.\nThe tricky part is usually how to break the trigger chain. In Informix you\ncan \nset global stored procedure variables to \"local\" and \"remote\" and only\ntrigger if that \nvariable is set to \"local\". (create trigger .... when myconn()=\"local\" ...)\n\nAndreas\n", "msg_date": "Thu, 8 Jun 2000 11:02:21 +0200 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big projet, please help" } ]
[ { "msg_contents": "\n> > 3. SQL steps\n> > create table test (name text);\n> > insert into test values ('�');\t# the first char is E1 \n> from LATIN 2\n> > coding\n> > insert into test values ('�b');\n> > create index test_index on test (name);\n> > set cpu_tuple_cost=1;\t\t# force backend to use index\n> > scanning\n> > select * from test where name like '�%';\n> \n> > BUG: Only 1 line is selected with '�' only instead of both lines.\n> \n> The problem here is that given the search pattern '\\341%', the planner\n> generates index limit conditions\n> \tname >= '\\341' AND name < '\\342';\n\nI see that you are addressing a real problem (in german 'o' sorts same as\n'�',\nupper case sorts same as lower case) \nbut ist that related in this case ?\n\nSeems this example has exactly the same first character in both rows,\nso the bug seems to be of another class, no ?\n\nAndreas\n", "msg_date": "Thu, 8 Jun 2000 11:23:43 +0200 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Sigh, LIKE indexing is *still* broken in foreign lo\n\tcales" } ]
[ { "msg_contents": "\n> The only commercial replication system that I am familiar \n> with will go both\n> ways, but not for the same table. ie.\n> \n> DB1 DB2\n> === ===\n> Table1 ---> Table1\n> Table2 <--- Table2\n\nNo. Informix has update everywhere replication in the standard IDS server.\nInformix replication is configurable from sync to async repl (Laptops) with \nseveral options of behavior in the case of conflict (network outage ...) .\n\n> But you could implement a kind of replication by using triggers on the\n> tables to be replicated: write out the record key, and the operation\n> performed (add, change,delete) to another table. Then have an \n> (hourly?)\n> replication process that sends the changes to the replicated \n> database(s).\n> Pretty low-tech, but probably quite reliable.\n\nIf you can, I would do the replication online in the trigger stored\nprocedure.\nThis of course implys an update everywhere or not at all. If connection\nbetween\nthe two servers is lost no update is possible.\n\nAndreas\n", "msg_date": "Thu, 8 Jun 2000 11:58:12 +0200 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big projet, please help" } ]
[ { "msg_contents": "\n> >> What it looks like to me is that we have a bug in the \n> expansion of '*'.\n> >> It should be generating columns for both the explicit and \n> the implicit\n> >> FROM clause, but it's evidently deciding that it should \n> only produce\n> >> output columns for the first one.\n> \n> Looks like the behavior is still the same (except now it says\n> NOTICE: Adding missing FROM-clause entry for table pg_language\n> as well). I'm inclined to say we should change it, and am willing\n> to do the work if no one objects...\n\nThe idea of the NOTICE was to keep the behavior the same as before\nbut tell the user of the problem. Before you change the behavior\nimho the better thing to do would be a hard error.\n\nAndreas\n", "msg_date": "Thu, 8 Jun 2000 12:13:39 +0200 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: column aliases " } ]
[ { "msg_contents": "At 11:58 8/06/00 +0200, Zeugswetter Andreas SB wrote:\n>\n>> The only commercial replication system that I am familiar \n>> with will go both\n>> ways, but not for the same table. ie.\n>> \n>\n>No. Informix has update everywhere replication in the standard IDS server.\n>Informix replication is configurable from sync to async repl (Laptops) with \n>several options of behavior in the case of conflict (network outage ...) .\n>\n\nThat's interesting. Out of curiosity, what choices does it provide for the\nfollowing:\n\na) Two inserts on a (unique) primary key\n\nb) Two inserts on a unique index (slightly different to (a))\n\nc) Two updates to different fields in the same record\n\nd) Two updates to the same field in the same record\n\ne) Deletion of a record and update of the same record\n\n>If connection between\n>the two servers is lost no update is possible.\n\nUnfortunately this restriction removes *one* of the motivations behind\nreplication. It might be better to implement a queue of pending updates in\nanother table, where the master database applies or rejects the updates\naccording to rules of the application. Needless to say, the mechanics could\nget pretty ugly. Hence my curiosity about the answers to the above questions.\n\n\n\n \n\n \n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 08 Jun 2000 21:15:28 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: Big projet, please help" } ]
[ { "msg_contents": "I'm getting a lot of \n\npq_recvbuf: unexpected EOF on client connection\n\n... in my postgres.log file (PostgreSQL 7.0.0), is this to be expected? \n\n\n- Mitch\n\n\"The only real failure is quitting.\"\n\n\n\n", "msg_date": "Thu, 8 Jun 2000 08:09:23 -0400", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": true, "msg_subject": "Strange message in logs.. " }, { "msg_contents": "> pq_recvbuf: unexpected EOF on client connection\nMy Windows application (via ODBC) does the same result on shutting the\nconnection. I find this as a normal behaviour.\n\nZoltan\n\n", "msg_date": "Thu, 8 Jun 2000 14:33:01 +0200 (CEST)", "msg_from": "Kovacs Zoltan Sandor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange message in logs.. " }, { "msg_contents": "\"Mitch Vincent\" <[email protected]> writes:\n> I'm getting a lot of \n> pq_recvbuf: unexpected EOF on client connection\n> ... in my postgres.log file (PostgreSQL 7.0.0), is this to be expected? \n\nIt is if you are using client apps that don't bother to close the\nconnection cleanly (ie, first sending an 'X' message) when they quit.\n\nIt appears that our ODBC driver doesn't send 'X' during connection\nclose, which I would call a (minor) bug in the ODBC driver. Of course\nthe problem would appear anyway if the client app is in the habit of\nexiting without telling the interface library to shut down first.\n\nThe backend doesn't particularly care; it'll close up shop the same\nway with or without 'X'.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Jun 2000 10:32:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange message in logs.. " } ]
[ { "msg_contents": "It seems the truncate command deletes all rows from a table even it is\nreferenced by another tables. TRUNCATE is not in the standard any way,\nso I would not claim this is a bug. However, sometimes it would be\nhelpful for a user to let him notice that the table about to be\ntruncated is referenced by some tables. So I would propose to add\n\"RESTRICT\" option to the command. I mean if RESTRICT is specified,\nTRUNCATE will fail if the table is referenced.\n\nBTW, the keyword \"RESTRICT\" is inspired by the fact that DROP TABLE\nhas the same option according to the standard. If a table is\nreferenced by some tables and the drop table command has the RESTRICT\noption, it would fail. This seems to be a nice feature too.\n\nComments?\n--\nTatsuo Ishii\n", "msg_date": "Thu, 08 Jun 2000 21:24:52 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal: TRUNCATE TABLE table RESTRICT" }, { "msg_contents": "At 09:24 PM 6/8/00 +0900, Tatsuo Ishii wrote:\n>It seems the truncate command deletes all rows from a table even it is\n>referenced by another tables. TRUNCATE is not in the standard any way,\n>so I would not claim this is a bug. However, sometimes it would be\n>helpful for a user to let him notice that the table about to be\n>truncated is referenced by some tables. So I would propose to add\n>\"RESTRICT\" option to the command. I mean if RESTRICT is specified,\n>TRUNCATE will fail if the table is referenced.\n\nShouldn't it always fail if an explicit foreign key reference\nexists to the table, in much the way that delete of a referenced\nrow does? If it doesn't now, I think it's a bug.\n\nIf the references are implicit (no REFERENCE or FOREIGN KEY given\nto inform the db of the relationship) then a RESTRICT option to\ntruncate does seem useful.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 08 Jun 2000 06:49:55 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: TRUNCATE TABLE table RESTRICT" }, { "msg_contents": "Don Baccus <[email protected]> writes:\n> If the references are implicit (no REFERENCE or FOREIGN KEY given\n> to inform the db of the relationship) then a RESTRICT option to\n> truncate does seem useful.\n\nUh, if the references are implicit, how would RESTRICT know they exist?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Jun 2000 10:41:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: TRUNCATE TABLE table RESTRICT " }, { "msg_contents": "Don Baccus wrote:\n> \n> At 09:24 PM 6/8/00 +0900, Tatsuo Ishii wrote:\n> >It seems the truncate command deletes all rows from a table even it is\n> >referenced by another tables. TRUNCATE is not in the standard any way,\n> >so I would not claim this is a bug. However, sometimes it would be\n> >helpful for a user to let him notice that the table about to be\n> >truncated is referenced by some tables. So I would propose to add\n> >\"RESTRICT\" option to the command. I mean if RESTRICT is specified,\n> >TRUNCATE will fail if the table is referenced.\n> \n> Shouldn't it always fail if an explicit foreign key reference\n> exists to the table, in much the way that delete of a referenced\n> row does? 
If it doesn't now, I think it's a bug.\n> \n> If the references are implicit (no REFERENCE or FOREIGN KEY given\n> to inform the db of the relationship) then a RESTRICT option to\n> truncate does seem useful.\n> \n\nJust curious, Don. But could you check to see what Oracle's\nbehavior is on this? That's the feature I was trying to mirror.\nAt the time, RI wasn't integrated so I wasn't thinking about this\nissue. And the Oracle docs state that DML triggers aren't fired\nwhen a TRUNCATE is issued, so I didn't think there would be\nissues there. Could you check?\n\nThanks, \n\nMike Mascari\n\n\n> - Don Baccus, Portland OR <[email protected]>\n> Nature photos, on-line guides, Pacific Northwest\n> Rare Bird Alert Service and other goodies at\n> http://donb.photo.net.\n", "msg_date": "Thu, 08 Jun 2000 10:43:00 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: TRUNCATE TABLE table RESTRICT" }, { "msg_contents": "At 10:41 AM 6/8/00 -0400, Tom Lane wrote:\n>Don Baccus <[email protected]> writes:\n>> If the references are implicit (no REFERENCE or FOREIGN KEY given\n>> to inform the db of the relationship) then a RESTRICT option to\n>> truncate does seem useful.\n>\n>Uh, if the references are implicit, how would RESTRICT know they exist?\n\nDuh, sorry, haven't had my coffee yet. I should know better than\nthink about computers before coffee...got any?\n\nOK ... then I'd suggest that allowing truncate in the face of explicit\nforeign keys is a bug. Truncate should either refuse to do so in\nall cases, or follow RI rules (do ON DELETE CASCADE/SET NULL/SET DEFAULT\nor refuse to do it depending on the foreign key def). It would\npresumably do so by calling the RI trigger for each row just as delete\ndoes.\n\nTRUNCATE's documented as being a quick alternative to delete,\nso refusal is perhaps the best course. Or the documentation\ncan say \"it's a lot faster if there are no foreign keys referencing\nit, otherwise it's the same as DELETE FROM\".\n\nBut breaking RI by leaving \"dangling references\" is a bug, pure\nand simple.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 08 Jun 2000 07:50:34 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: TRUNCATE TABLE table RESTRICT " }, { "msg_contents": "At 10:43 AM 6/8/00 -0400, Mike Mascari wrote:\n\n>Just curious, Don. But could you check to see what Oracle's\n>behavior is on this? That's the feature I was trying to mirror.\n>At the time, RI wasn't integrated so I wasn't thinking about this\n>issue.\n\nSure, I understand.\n\n> And the Oracle docs state that DML triggers aren't fired\n>when a TRUNCATE is issued, so I didn't think there would be\n>issues there. 
Could you check?\n\nIt refuses to do the TRUNCATE, whether or not there's a\n\"ON DELETE CASCADE\" modifier to the references.\n\nThat seems reasonable - it allows one to still say \"truncate's\nreally fast because it doesn't scan the rows in the table\",\nand refuses to break RI constraints.\n\nAll that needs doing is to check for the existence of \nat least one RI trigger on the table that's being truncated,\nand saying \"no way, jose\" if we want to mimic Oracle in\nthis regard.\n\nTODO item?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 08 Jun 2000 07:55:51 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: TRUNCATE TABLE table RESTRICT" }, { "msg_contents": "> All that needs doing is to check for the existence of \n> at least one RI trigger on the table that's being truncated,\n> and saying \"no way, jose\" if we want to mimic Oracle in\n> this regard.\n> \n> TODO item?\n\nOK, added:\n\n\t* Prevent truncate on table acting as foreign key\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Jun 2000 11:47:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: TRUNCATE TABLE table RESTRICT" }, { "msg_contents": "Don Baccus wrote:\n> \n> At 10:43 AM 6/8/00 -0400, Mike Mascari wrote:\n...\n> > And the Oracle docs state that DML triggers aren't fired\n> >when a TRUNCATE is issued, so I didn't think there would be\n> >issues there. Could you check?\n> \n> It refuses to do the TRUNCATE, whether or not there's a\n> \"ON DELETE CASCADE\" modifier to the references.\n> \n> That seems reasonable - it allows one to still say \"truncate's\n> really fast because it doesn't scan the rows in the table\",\n> and refuses to break RI constraints.\n> \n> All that needs doing is to check for the existence of\n> at least one RI trigger on the table that's being truncated,\n> and saying \"no way, jose\" if we want to mimic Oracle in\n> this regard.\n> \n> TODO item?\n\nSounds like it to me. 
Rats...\n\nMike Mascari\n\n> \n> - Don Baccus, Portland OR <[email protected]>\n> Nature photos, on-line guides, Pacific Northwest\n> Rare Bird Alert Service and other goodies at\n> http://donb.photo.net.\n", "msg_date": "Thu, 08 Jun 2000 11:53:22 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: TRUNCATE TABLE table RESTRICT" }, { "msg_contents": "At 11:47 AM 6/8/00 -0400, Bruce Momjian wrote:\n\n>OK, added:\n>\n>\t* Prevent truncate on table acting as foreign key\n\nHow about this: Prevent truncate on table referenced by foreign key\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 08 Jun 2000 10:27:45 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: TRUNCATE TABLE table RESTRICT" }, { "msg_contents": "At 05:20 PM 6/8/00 -0400, Bruce Momjian wrote:\n\n>\t* Prevent truncate on table with a referential integrity trigger\n>\n>Is that good?\n\nIt's beautiful!\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 08 Jun 2000 14:19:09 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: TRUNCATE TABLE table RESTRICT" }, { "msg_contents": "> At 11:47 AM 6/8/00 -0400, Bruce Momjian wrote:\n> \n> >OK, added:\n> >\n> >\t* Prevent truncate on table acting as foreign key\n> \n> How about this: Prevent truncate on table referenced by foreign key\n> \n\nActually, I made it:\n\n\t* Prevent truncate on table with a referential integrity trigger\n\nIs that good?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Jun 2000 17:20:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: TRUNCATE TABLE table RESTRICT" }, { "msg_contents": "> >It seems the truncate command deletes all rows from a table even it is\n> >referenced by another tables. TRUNCATE is not in the standard any way,\n> >so I would not claim this is a bug. However, sometimes it would be\n> >helpful for a user to let him notice that the table about to be\n> >truncated is referenced by some tables. So I would propose to add\n> >\"RESTRICT\" option to the command. I mean if RESTRICT is specified,\n> >TRUNCATE will fail if the table is referenced.\n> \n> Shouldn't it always fail if an explicit foreign key reference\n> exists to the table, in much the way that delete of a referenced\n> row does? If it doesn't now, I think it's a bug.\n\nThat would be better. I am just wondering how the checkings hurt the\nspeed of TRUNCATE (if TRUNCATE is that slow, why we need it:-).\n\n> If the references are implicit (no REFERENCE or FOREIGN KEY given\n> to inform the db of the relationship) then a RESTRICT option to\n> truncate does seem useful.\n\nCan you tell me what are the implicit references?\n\nBTW, what do you think about DROP TABLE RESTRICT? 
I think this is a\nnice feature and should be added to the TODO list...\n--\nTatsuo IShii\n\n", "msg_date": "Fri, 09 Jun 2000 10:07:16 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: TRUNCATE TABLE table RESTRICT" }, { "msg_contents": "Tatsuo Ishii writes:\n\n> That would be better. I am just wondering how the checkings hurt the\n> speed of TRUNCATE (if TRUNCATE is that slow, why we need it:-).\n\nYou can make any code arbitrarily fast if it doesn't have to behave\ncorrectly. :-)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Sat, 10 Jun 2000 20:08:44 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: TRUNCATE TABLE table RESTRICT" }, { "msg_contents": "At 08:08 PM 6/10/00 +0200, Peter Eisentraut wrote:\n>Tatsuo Ishii writes:\n>\n>> That would be better. I am just wondering how the checkings hurt the\n>> speed of TRUNCATE (if TRUNCATE is that slow, why we need it:-).\n>\n>You can make any code arbitrarily fast if it doesn't have to behave\n>correctly. :-)\n\nChecking for existence or absence of triggers will be fast.\n\nJan suggested aborting TRUNCATE if any (user or system) triggers\nare on the table. If I understood his message correctly, that is.\n\nOracle only aborts for foreign keys, executing TRUNCATE and ignoring\nuser triggers if they exist.\n\nAny thoughts?\n\nRather than abort TRUNCATE due to the mere existence of a referential\nintegrity trigger on the table, we could be a bit more sophisicated\nand only abort if an RI trigger exists where the referring table is\nnon-empty. If the referring table's empty, no foreign keys will be\nstored in it and you can safely TRUNCATE.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 12 Jun 2000 07:15:17 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: TRUNCATE TABLE table RESTRICT" }, { "msg_contents": "Don Baccus wrote:\n> \n> At 08:08 PM 6/10/00 +0200, Peter Eisentraut wrote:\n> >Tatsuo Ishii writes:\n> >\n> >> That would be better. I am just wondering how the checkings hurt the\n> >> speed of TRUNCATE (if TRUNCATE is that slow, why we need it:-).\n\nThe major performance difference between TRUNCATE and DELETE is\nrealized at VACUUM time.\n\n> >\n> >You can make any code arbitrarily fast if it doesn't have to behave\n> >correctly. :-)\n> \n> Checking for existence or absence of triggers will be fast.\n> \n> Jan suggested aborting TRUNCATE if any (user or system) triggers\n> are on the table. If I understood his message correctly, that is.\n> \n> Oracle only aborts for foreign keys, executing TRUNCATE and ignoring\n> user triggers if they exist.\n> \n> Any thoughts?\n\nI agree with this.\n\n> \n> Rather than abort TRUNCATE due to the mere existence of a referential\n> integrity trigger on the table, we could be a bit more sophisicated\n> and only abort if an RI trigger exists where the referring table is\n> non-empty. If the referring table's empty, no foreign keys will be\n> stored in it and you can safely TRUNCATE.\n\nSorry to ask for another favor, but what does Oracle do here? 
If\na referring table has 1,000,000 rows in it which have been\ndeleted but not vacuumed, what would the performance implications\nbe?\n\nJust curious, \n\nMike Mascari\n", "msg_date": "Mon, 12 Jun 2000 10:33:21 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: TRUNCATE TABLE table RESTRICT" }, { "msg_contents": "Mike Mascari wrote:\n> Sorry to ask for another favor, but what does Oracle do here? If\n> a referring table has 1,000,000 rows in it which have been\n> deleted but not vacuumed, what would the performance implications\n> be?\n\n Referential integrity has no performance impact on VACUUM. If\n that's what you aren't sure about.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Mon, 12 Jun 2000 23:41:40 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Proposal: TRUNCATE TABLE table RESTRICT" }, { "msg_contents": "Jan Wieck wrote:\n> \n> Mike Mascari wrote:\n> > Sorry to ask for another favor, but what does Oracle do here? If\n> > a referring table has 1,000,000 rows in it which have been\n> > deleted but not vacuumed, what would the performance implications\n> > be?\n> \n> Referential integrity has no performance impact on VACUUM. If\n> that's what you aren't sure about.\n> \n> Jan\n\nActually, I was worried that if TRUNCATE were to vist all\nreferring tables to determine whether or not it was empty, rather\nthen just issuing an elog() at the first RI trigger encountered,\nthat it might wind up scanning a 1,000,000 tuple relation (the\nreferring relation) where all the rows have been marked as\ndeleted before determining that its okay to perform the TRUNCATE.\nI was hoping that Oracle simply disallowed TRUNCATE on tables\nwith referring relations, regardless of whether or not there was\nactually any data in them, so that PostgreSQL could do the same.\n:-)\n\nMike Mascari\n", "msg_date": "Mon, 12 Jun 2000 19:48:59 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: TRUNCATE TABLE table RESTRICT" }, { "msg_contents": "At 07:48 PM 6/12/00 -0400, Mike Mascari wrote:\n\n>Actually, I was worried that if TRUNCATE were to vist all\n>referring tables to determine whether or not it was empty, rather\n>then just issuing an elog() at the first RI trigger encountered,\n>that it might wind up scanning a 1,000,000 tuple relation (the\n>referring relation) where all the rows have been marked as\n>deleted before determining that its okay to perform the TRUNCATE.\n>I was hoping that Oracle simply disallowed TRUNCATE on tables\n>with referring relations, regardless of whether or not there was\n>actually any data in them, so that PostgreSQL could do the same.\n>:-)\n\nWell, I think we probably could do so regardless of what Oracle\ndoes. 
Proper use of \"on delete cascade\" and \"on delete set null\"\netc would seem to make it more convenient to delete rows in a\nset of related tables via delete rather than running around\ntrying to truncate them in the right order so that you\nend up with empty tables before you delete the one with the\nRI triggers on it.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 12 Jun 2000 16:51:20 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: TRUNCATE TABLE table RESTRICT" }, { "msg_contents": "Can someone comment on this?\n\n> It seems the truncate command deletes all rows from a table even it is\n> referenced by another tables. TRUNCATE is not in the standard any way,\n> so I would not claim this is a bug. However, sometimes it would be\n> helpful for a user to let him notice that the table about to be\n> truncated is referenced by some tables. So I would propose to add\n> \"RESTRICT\" option to the command. I mean if RESTRICT is specified,\n> TRUNCATE will fail if the table is referenced.\n> \n> BTW, the keyword \"RESTRICT\" is inspired by the fact that DROP TABLE\n> has the same option according to the standard. If a table is\n> referenced by some tables and the drop table command has the RESTRICT\n> option, it would fail. This seems to be a nice feature too.\n> \n> Comments?\n> --\n> Tatsuo Ishii\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 2 Oct 2000 23:21:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: TRUNCATE TABLE table RESTRICT" }, { "msg_contents": "> At 09:24 PM 6/8/00 +0900, Tatsuo Ishii wrote:\n> >It seems the truncate command deletes all rows from a table even it is\n> >referenced by another tables. TRUNCATE is not in the standard any way,\n> >so I would not claim this is a bug. However, sometimes it would be\n> >helpful for a user to let him notice that the table about to be\n> >truncated is referenced by some tables. So I would propose to add\n> >\"RESTRICT\" option to the command. I mean if RESTRICT is specified,\n> >TRUNCATE will fail if the table is referenced.\n> \n> Shouldn't it always fail if an explicit foreign key reference\n> exists to the table, in much the way that delete of a referenced\n> row does? If it doesn't now, I think it's a bug.\n> \n> If the references are implicit (no REFERENCE or FOREIGN KEY given\n> to inform the db of the relationship) then a RESTRICT option to\n> truncate does seem useful.\n\nTODO updated:\n\n* Prevent truncate on table with a referential integrity trigger (RESTRICT)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 9 Oct 2000 15:41:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: TRUNCATE TABLE table RESTRICT" } ]
[ { "msg_contents": "> It seems the truncate command deletes all rows from a table even it is\n> referenced by another tables. TRUNCATE is not in the standard any way,\n> so I would not claim this is a bug. However, sometimes it would be\n> helpful for a user to let him notice that the table about to be\n> truncated is referenced by some tables. So I would propose to add\n> \"RESTRICT\" option to the command. I mean if RESTRICT is specified,\n> TRUNCATE will fail if the table is referenced.\n> \n> BTW, the keyword \"RESTRICT\" is inspired by the fact that DROP TABLE\n> has the same option according to the standard. If a table is\n> referenced by some tables and the drop table command has the RESTRICT\n> option, it would fail. This seems to be a nice feature too.\n\nTruncate should probably check if all referencing tables are empty\nand fail if not. Truncate should imho not lead to a violated constraint\nsituation.\nStrictly speaking the current situation is more or less a bug.\n\nAndreas\n", "msg_date": "Thu, 8 Jun 2000 16:29:24 +0200 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Proposal: TRUNCATE TABLE table RESTRICT" }, { "msg_contents": "Zeugswetter Andreas SB wrote:\n>\n> Truncate should probably check if all referencing tables are empty\n> and fail if not. Truncate should imho not lead to a violated constraint\n> situation.\n> Strictly speaking the current situation is more or less a bug.\n>\n\n Not anything is possible with RI, so the DB schema might use\n regular triggers and/or rules as well.\n\n Why not reject TRUNCATE at all if the relation has\n rules/triggers? IMHO the only safe way.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Fri, 9 Jun 2000 13:04:27 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: AW: Proposal: TRUNCATE TABLE table RESTRICT" } ]
[ { "msg_contents": "I'm wondering if anyone is working on getArray(int) for jdbc2. If so, can\nI help get it finished? If not, I'd like to implement it. It doesn't\nlook too hard, but will require custom classes that implement\nthe ResultSet, ResultSetMetaData, and Array interfaces. Is anyone else\ninterested? \n\n----------------------------------------------------------------\nTravis Bauer | CS Grad Student | IU |www.cs.indiana.edu/~trbauer\n----------------------------------------------------------------\n\n", "msg_date": "Thu, 8 Jun 2000 09:51:02 -0500 (EST)", "msg_from": "Travis Bauer <[email protected]>", "msg_from_op": true, "msg_subject": "implements jdbc2's getArray(int)" } ]
[ { "msg_contents": "For the unique system indexes, does it make sense to do a hash index\ninstead of the B-tree?\n\nAlso, does it make sense to implement a bitmap index (like Oracle 8i)?\nThat would work for even non-unique catalogs, although doing it\n\"right\" would be a lot of work.\n\nIs there interest in me looking at doing bitmap indexes (assuming\nsomeone isn't already working on it)?\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================", "msg_date": "Thu, 8 Jun 2000 11:06:39 -0400 (EDT)", "msg_from": "Brian E Gallew <[email protected]>", "msg_from_op": true, "msg_subject": "index idea for system catalogs" }, { "msg_contents": "On Thu, 8 Jun 2000, Brian E Gallew wrote:\n\n> Date: Thu, 8 Jun 2000 11:06:39 -0400 (EDT)\n> From: Brian E Gallew <[email protected]>\n> Reply-To: [email protected]\n> To: [email protected]\n> Subject: [HACKERS] index idea for system catalogs\n> \n> For the unique system indexes, does it make sense to do a hash index\n> instead of the B-tree?\n> \n> Also, does it make sense to implement a bitmap index (like Oracle 8i)?\n> That would work for even non-unique catalogs, although doing it\n> \"right\" would be a lot of work.\n> \n> Is there interest in me looking at doing bitmap indexes (assuming\n> someone isn't already working on it)?\n\nIt would be great to have bitmap indices implemented as in Oracle\nespecially for data with low number of distinct values which cause\na problem in current index system. \n\n\tOleg\n\n> \n> -- \n> =====================================================================\n> | JAVA must have been developed in the wilds of West Virginia. |\n> | After all, why else would it support only single inheritance?? |\n> =====================================================================\n> | Finger [email protected] for my public key. |\n> =====================================================================\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 12 Jun 2000 20:40:36 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index idea for system catalogs" } ]
[ { "msg_contents": "\n the today's CVS: \n \nmake[3]: Entering directory /home/PG_DEVEL/pgsql/src/interfaces/odbc'\ngcc -I../../include -O2 -Wall -Wmissing-prototypes -Wmissing-declarations\n-I.\n-DHAVE_CONFIG_H -fpic -c -o misc.o misc.c\nmisc.c: In function 'mylog':\nmisc.c:71: 'PG_BINARY_W' undeclared (first use in this function)\nmisc.c:71: (Each undeclared identifier is reported only once\nmisc.c:71: for each function it appears in.)\nmisc.c: In function 'log':\nmisc.c:99: 'PG_BINARY_W' undeclared (first use in this function)\nmake[3]: *** [misc.o] Error 1\nmake[3]: Leaving directory /home/PG_DEVEL/pgsql/src/interfaces/odbc'\n\nconf:\n\n./configure --prefix=/usr/lib/postgresql \\\n --with-template=linux_i386 \\\n --with-tcl \\\n --enable-multibyte \\\n --with-odbc \\\n --enable-locale \\\n --with-maxbackends=64 \\\n --with-pgport=5432\n\n\t\t\t\t\t\tKarel\n\n", "msg_date": "Thu, 8 Jun 2000 18:24:14 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "crashed odbc in CVS " }, { "msg_contents": "Oops. I thought that code included postgres.h. I have added a define\nto psqlodbc.h. I added:\n\n\t#ifdef WIN32\n\t#define PG_BINARY\tO_BINARY\n\t#define\tPG_BINARY_R\t\"rb\"\n\t#define\tPG_BINARY_W\t\"wb\"\n\t#else\n\t#define\tPG_BINARY\t0\n\t#define\tPG_BINARY_R\t\"r\"\n\t#define\tPG_BINARY_W\t\"w\"\n\t#endif\n\nCan you give it a try?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Jun 2000 12:41:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: crashed odbc in CVS" }, { "msg_contents": "On Thu, 8 Jun 2000, Bruce Momjian wrote:\n\n> Oops. I thought that code included postgres.h. I have added a define\n> to psqlodbc.h. I added:\n> \n> \t#ifdef WIN32\n> \t#define PG_BINARY\tO_BINARY\n> \t#define\tPG_BINARY_R\t\"rb\"\n> \t#define\tPG_BINARY_W\t\"wb\"\n> \t#else\n> \t#define\tPG_BINARY\t0\n> \t#define\tPG_BINARY_R\t\"r\"\n> \t#define\tPG_BINARY_W\t\"w\"\n> \t#endif\n> \n> Can you give it a try?\n\n Yes, you are right --- bug is in psqlodbc.h, but it is to in 'ggps.c'.\nIn this file is not included \"psqlodbc.h\".\n\n The patch is attached.\n\n Right?\n\n\t\t\t\t\t\tKarel", "msg_date": "Thu, 8 Jun 2000 18:55:37 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: crashed odbc in CVS" }, { "msg_contents": "Yes, that is it. I move the PG_BINARY includes to misc.h. Seem like a\nbetter place for them.\n\n\n> \n> On Thu, 8 Jun 2000, Bruce Momjian wrote:\n> \n> > Oops. I thought that code included postgres.h. I have added a define\n> > to psqlodbc.h. 
I added:\n> > \n> > \t#ifdef WIN32\n> > \t#define PG_BINARY\tO_BINARY\n> > \t#define\tPG_BINARY_R\t\"rb\"\n> > \t#define\tPG_BINARY_W\t\"wb\"\n> > \t#else\n> > \t#define\tPG_BINARY\t0\n> > \t#define\tPG_BINARY_R\t\"r\"\n> > \t#define\tPG_BINARY_W\t\"w\"\n> > \t#endif\n> > \n> > Can you give it a try?\n> \n> Yes, you are right --- bug is in psqlodbc.h, but it is to in 'ggps.c'.\n> In this file is not included \"psqlodbc.h\".\n> \n> The patch is attached.\n> \n> Right?\n> \n> \t\t\t\t\t\tKarel\n> \nContent-Description: \n\n> diff -r -B -C2 odbc.org/gpps.c odbc/gpps.c\n> *** odbc.org/gpps.c\tThu Jun 8 12:05:35 2000\n> --- odbc/gpps.c\tThu Jun 8 18:50:33 2000\n> ***************\n> *** 29,32 ****\n> --- 29,33 ----\n> #include \"gpps.h\"\n> #include \"misc.h\"\n> + #include \"psqlodbc.h\"\n> \n> #ifndef TRUE\n> diff -r -B -C2 odbc.org/psqlodbc.h odbc/psqlodbc.h\n> *** odbc.org/psqlodbc.h\tThu Jun 8 12:05:36 2000\n> --- odbc/psqlodbc.h\tThu Jun 8 18:48:22 2000\n> ***************\n> *** 167,170 ****\n> --- 167,181 ----\n> #define PG_NUMERIC_MAX_SCALE\t\t1000\n> \n> + #ifdef WIN32\n> + #define PG_BINARY O_BINARY\n> + #define PG_BINARY_R \"rb\"\n> + #define PG_BINARY_W \"wb\"\n> + #else\n> + #define PG_BINARY 0\n> + #define PG_BINARY_R \"r\"\n> + #define PG_BINARY_W \"w\"\n> + #endif\n> + \n> + \n> #include \"misc.h\"\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Jun 2000 13:09:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: crashed odbc in CVS" } ]
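The fix above boils down to choosing the fopen() mode string per platform. A small standalone sketch of how the PG_BINARY_W macro from the quoted header ends up being used when the driver opens its log file; the file name and log text are placeholders, not the driver's real ones:

#include <stdio.h>

#ifdef WIN32
#define PG_BINARY_W "wb"
#else
#define PG_BINARY_W "w"
#endif

int
main(void)
{
    /* "wb" suppresses newline translation on Windows; plain "w"
     * is correct on Unix, where O_BINARY/"b" has no meaning. */
    FILE *log = fopen("/tmp/odbc_example.log", PG_BINARY_W);

    if (log == NULL)
        return 1;
    fprintf(log, "example log line\n");
    fclose(log);
    return 0;
}
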
[ { "msg_contents": "> I would think this should echo *two* \"INSERT <oid> 1\" messages, instead\n> of just 1.\n> \n> % createdb viewsdb \n> CREATE DATABASE\n> % psql -d viewsdb -c \"create table foo(i integer); create table bar(j\n> integer);\"\n> CREATE\n> % psql -d viewsdb -c \"insert into foo(i) values (1); insert into foo(i)\n> values (2);\"\n> INSERT 9968065 1\n> % psql -d viewsdb -c \"select * from foo;\"\n> i \n> ---\n> 1\n> 2\n> (2 rows)\n> \n> % psql -d viewsdb -c \"select oid from foo;\"\n> oid \n> ---------\n> 9968064\n> 9968065\n> (2 rows)\n> \n\nThe query is sent as one string, and only one return is sent back. It\nhas always been that way.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Jun 2000 12:42:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: interactive oddity for psql -c \"cmd; cmd;\"" }, { "msg_contents": "I would think this should echo *two* \"INSERT <oid> 1\" messages, instead\nof just 1.\n\n% createdb viewsdb \nCREATE DATABASE\n% psql -d viewsdb -c \"create table foo(i integer); create table bar(j\ninteger);\"\nCREATE\n% psql -d viewsdb -c \"insert into foo(i) values (1); insert into foo(i)\nvalues (2);\"\nINSERT 9968065 1\n% psql -d viewsdb -c \"select * from foo;\"\n i \n---\n 1\n 2\n(2 rows)\n\n% psql -d viewsdb -c \"select oid from foo;\"\n oid \n---------\n 9968064\n 9968065\n(2 rows)\n", "msg_date": "Thu, 08 Jun 2000 11:42:28 -0500", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": false, "msg_subject": "interactive oddity for psql -c \"cmd; cmd;\"" }, { "msg_contents": "Bruce Momjian writes:\n\n> > % psql -d viewsdb -c \"insert into foo(i) values (1); insert into foo(i)\n> > values (2);\"\n> > INSERT 9968065 1\n\n> The query is sent as one string, and only one return is sent back. It\n> has always been that way.\n\nWhich can be construed as a semi-feature because that's the only way to\nget unencumbered strings from psql to the backend.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 8 Jun 2000 19:09:16 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: interactive oddity for psql -c \"cmd; cmd;\"" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > I would think this should echo *two* \"INSERT <oid> 1\" messages, instead\n> > of just 1.\n> >\n> > % psql -d viewsdb -c \"insert into foo(i) values (1); insert into foo(i)\n> > values (2);\"\n> > INSERT 9968065 1\n> \n> The query is sent as one string, and only one return is sent back. It\n> has always been that way.\n\nThat's unintuitive and inconsistent behavior, albeit largely\ninconsequential, when compared with the same line in a script...\n\n% psql -d vtdb \nvtdb=# INSERT INTO foo(i) values (13); INSERT INTO foo(i) values (13);\nINSERT 9971328 1\nINSERT 9971329 1\n\nRegards,\nEd Loehr\n", "msg_date": "Thu, 08 Jun 2000 12:25:09 -0500", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: interactive oddity for psql -c \"cmd; cmd;\"" } ]
[ { "msg_contents": "\n\n> > How about this: Prevent truncate on table referenced by foreign key\n> > \n> \n> Actually, I made it:\n> \n> \t* Prevent truncate on table with a referential integrity trigger\n> \n> Is that good?\n\nNo, I think that is only one point. I think you also need to\ncheck if tables that are referencing this table are empty.\n\nAndreas\n", "msg_date": "Fri, 9 Jun 2000 10:09:11 +0200 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Proposal: TRUNCATE TABLE table RESTRICT" } ]
[ { "msg_contents": "I'll sort out a new photo - I've not had a beard for 6 months now ;-)\n\n--\nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council\n\n> -----Original Message-----\n> From:\tBruce Momjian [SMTP:[email protected]]\n> Sent:\tFriday, June 09, 2000 1:02 PM\n> To:\tVince Vielhaber\n> Cc:\[email protected]\n> Subject:\tRe: [HACKERS] New Globe\n> \n> Yes, let me remind folks Vince still needs more pictures.\n> \n> > \n> > I have the new developer's globe online. Please check your BIOs and\n> > let me know if there's anything that needs correcting. For those \n> > without pictures, don't be so shy. Submit a picture - if you need to\n> > have one scanned it can be arranged.\n> > \n> > And before I forget.. Good job on the globe, Jan!\n> > \n> > Vince.\n> > -- \n> >\n> ==========================================================================\n> > Vince Vielhaber -- KA8CSH email: [email protected]\n> http://www.pop4.net\n> > 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n> > Online Campground Directory http://www.camping-usa.com\n> > Online Giftshop Superstore http://www.cloudninegifts.com\n> >\n> ==========================================================================\n> > \n> > \n> > \n> > \n> > ************\n> > \n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jun 2000 13:30:55 +0100 ", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: New Globe" } ]
[ { "msg_contents": "Can someone comment on this?\n\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:[email protected]]\n> > \n> > > I'm now inclined to introduce a new system relation to store\n> > > the physical path name. It could also have table(data)space\n> > > information in the (near ?) future. \n> > > It seems better to separate it from pg_class because table(data?)\n> > > space may change the concept of table allocation.\n> > \n> > Why not just put it in pg_class?\n> >\n> \n> Not sure,it's only my feeling.\n> Comments please,everyone.\n> \n> We have taken a practical way which doesn't break file per table\n> assumption in this thread and it wouldn't so difficult to implement.\n> In fact Ross has already tried it.\n> \n> However there was a discussion about data(table)space for\n> months ago and currently a new discussion is there.\n> Judging from the previous discussion,I can't expect so much\n> that it could get a practical consensus(How many opinions there\n> were). We can make a practical step toward future by encapsulating\n> the information of table allocation. Separating table alloc info from\n> pg_class seems one of the way. \n> There may be more essential things for encapsulation. \n> \n> Comments ?\n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected]\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jun 2000 13:29:26 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix for RENAME" }, { "msg_contents": "Seems Vadim's new storage manager for 7.2 would be the way to go.\n\nI think he is going to have everything in one file.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:[email protected]]\n> > \n> > Can someone comment on this?\n> >\n> \n> It seems to me that we have to reach some consensus in\n> order to get a standard transactional control mechanism\n> to change the allocation of table files. Probably we would\n> have to separate things into 2 parts.\n> \n> 1) Where to allocate tables -- we would need some encapsulation\n> like tablespace. It would be better to be handled differently by\n> each storage manager. Note that tablespace is only an encap-\n> sulation and doesn't necessarily mean that of Oracle.\n> 2) Where tables are allocated -- only specific strorage manager\n> knows the meaing and everything would be treated internally.\n> \n> Under current (file per table) storage manager,#1 isn't necessarily\n> needed for the implementaion of #2 and Ross has already tried it.\n> If we could get some consensus on the future direction of 1)2),\n> we would be able to apply his implementation.\n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected]\n> \n> > \n> > [ Charset ISO-8859-1 unsupported, converting... ]\n> > > > -----Original Message-----\n> > > > From: Bruce Momjian [mailto:[email protected]]\n> > > > \n> > > > > I'm now inclined to introduce a new system relation to store\n> > > > > the physical path name. It could also have table(data)space\n> > > > > information in the (near ?) future. 
\n> > > > > It seems better to separate it from pg_class because table(data?)\n> > > > > space may change the concept of table allocation.\n> > > > \n> > > > Why not just put it in pg_class?\n> > > >\n> > > \n> > > Not sure,it's only my feeling.\n> > > Comments please,everyone.\n> > > \n> > > We have taken a practical way which doesn't break file per table\n> > > assumption in this thread and it wouldn't so difficult to implement.\n> > > In fact Ross has already tried it.\n> > > \n> > > However there was a discussion about data(table)space for\n> > > months ago and currently a new discussion is there.\n> > > Judging from the previous discussion,I can't expect so much\n> > > that it could get a practical consensus(How many opinions there\n> > > were). We can make a practical step toward future by encapsulating\n> > > the information of table allocation. Separating table alloc info from\n> > > pg_class seems one of the way. \n> > > There may be more essential things for encapsulation. \n> > > \n> > > Comments ?\n> > > \n> > > Regards.\n> > > \n> > > Hiroshi Inoue\n> > > [email protected]\n> > > \n> > > \n> > \n> > \n> > -- \n> > Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> > \n> > \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 11 Jun 2000 23:14:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix for RENAME" }, { "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> \n> Can someone comment on this?\n>\n\nIt seems to me that we have to reach some consensus in\norder to get a standard transactional control mechanism\nto change the allocation of table files. Probably we would\nhave to separate things into 2 parts.\n \n1) Where to allocate tables -- we would need some encapsulation\n like tablespace. It would be better to be handled differently by\n each storage manager. Note that tablespace is only an encap-\n sulation and doesn't necessarily mean that of Oracle.\n2) Where tables are allocated -- only specific strorage manager\n knows the meaing and everything would be treated internally.\n\nUnder current (file per table) storage manager,#1 isn't necessarily\nneeded for the implementaion of #2 and Ross has already tried it.\nIf we could get some consensus on the future direction of 1)2),\nwe would be able to apply his implementation.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n \n> \n> [ Charset ISO-8859-1 unsupported, converting... ]\n> > > -----Original Message-----\n> > > From: Bruce Momjian [mailto:[email protected]]\n> > > \n> > > > I'm now inclined to introduce a new system relation to store\n> > > > the physical path name. It could also have table(data)space\n> > > > information in the (near ?) future. 
\n> > > > It seems better to separate it from pg_class because table(data?)\n> > > > space may change the concept of table allocation.\n> > > \n> > > Why not just put it in pg_class?\n> > >\n> > \n> > Not sure,it's only my feeling.\n> > Comments please,everyone.\n> > \n> > We have taken a practical way which doesn't break file per table\n> > assumption in this thread and it wouldn't so difficult to implement.\n> > In fact Ross has already tried it.\n> > \n> > However there was a discussion about data(table)space for\n> > months ago and currently a new discussion is there.\n> > Judging from the previous discussion,I can't expect so much\n> > that it could get a practical consensus(How many opinions there\n> > were). We can make a practical step toward future by encapsulating\n> > the information of table allocation. Separating table alloc info from\n> > pg_class seems one of the way. \n> > There may be more essential things for encapsulation. \n> > \n> > Comments ?\n> > \n> > Regards.\n> > \n> > Hiroshi Inoue\n> > [email protected]\n> > \n> > \n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> \n", "msg_date": "Mon, 12 Jun 2000 12:14:08 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Fix for RENAME" }, { "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> \n> Seems Vadim's new storage manager for 7.2 would be the way to go.\n> \n> I think he is going to have everything in one file.\n>\n\nProbably Vadim would have to change something around storage\nmanager before 7.1 in his WAL implementation. So what should/\ncould we do for 7.1 ? If Vadim would do/has done everything,it\nshould be clearly mentioned. Otherwise other people would waste\ntheir time.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n", "msg_date": "Thu, 15 Jun 2000 08:56:02 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Fix for RENAME" } ]
[ { "msg_contents": "Can someone make a suggestion here?\n\n> Hi.\n> \n> Here is a patch I hacked together. I didn't understand the TAS thing, and\n> it appeared to be the cause of the problem. I hacked the code up by\n> commenting out the assembly TAS provided and inserting the semaphore\n> macros for the non-gcc alpha version. Perhaps someone with expertise in\n> this area could assist in fixing the provided TAS.\n> \n> Here is the patch:\n> \n> -Michael\n> \n> *** include/storage/s_lock.h Tue Mar 14 23:01:08 2000\n> --- include/storage/s_lock.h.bak Sat Mar 11 19:21:30 2000\n> ***************\n> *** 79,85 ****\n> */\n> \n> #if defined(__alpha__)\n> ! /*#define TAS(lock) tas(lock)\n> #define S_UNLOCK(lock) { __asm__(\"mb\"); *(lock) = 0; }\n> \n> static __inline__ int\n> --- 79,85 ----\n> */\n> \n> #if defined(__alpha__)\n> ! #define TAS(lock) tas(lock)\n> #define S_UNLOCK(lock) { __asm__(\"mb\"); *(lock) = 0; }\n> \n> static __inline__ int\n> ***************\n> *** 102,118 ****\n> 4: nop \": \"=m\"(*lock), \"=r\"(_res): :\"0\");\n> \n> return (int) _res;\n> ! }*/\n> ! /*\n> ! * OSF/1 (Alpha AXP)\n> ! *\n> ! * Note that slock_t on the Alpha AXP is msemaphore instead of char\n> ! * (see storage/ipc.h).\n> ! */\n> ! #define TAS(lock) (msem_lock((lock), MSEM_IF_NOWAIT) < 0)\n> ! #define S_UNLOCK(lock) msem_unlock((lock), 0)\n> ! #define S_INIT_LOCK(lock) msem_init((lock), MSEM_UNLOCKED)\n> ! #define S_LOCK_FREE(lock) (!(lock)->msem_state)\n> \n> #endif /* __alpha__ */\n> \n> --- 102,108 ----\n> 4: nop \": \"=m\"(*lock), \"=r\"(_res): :\"0\");\n> \n> return (int) _res;\n> ! }\n> \n> #endif /* __alpha__ */\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jun 2000 13:33:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hack to make postgres compile on Dec 4.0f with GCC" }, { "msg_contents": "Bruce Momjian wrote:\n\n> Can someone make a suggestion here?\n>\n> > Hi.\n> >\n> > Here is a patch I hacked together. I didn't understand the TAS thing, and\n> > it appeared to be the cause of the problem. I hacked the code up by\n> > commenting out the assembly TAS provided and inserting the semaphore\n> > macros for the non-gcc alpha version. Perhaps someone with expertise in\n> > this area could assist in fixing the provided TAS.\n\nYep, Arrigo looked at the TAS code for Alpha-Linux (it is commented out for\nDEC Unix) and came to the conclusion that it probably only worked on specific\nconfigurations and definitely not on multi-processor machines. He has been\ntrying to write the code himself, but it turns out to be extraordinarily hard\nto get this working properly on Alpha (espec. multi-processor machines). I\nbelieve he is currently trying to interest somebody in Digital in this\nproblem. Until a proper solution has been found, it is probably safer to just\nstick to the semaphores.\n\nAdriaan\n\n", "msg_date": "Mon, 12 Jun 2000 08:15:18 +0300", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Hack to make postgres compile on Dec 4.0f with GCC" } ]
[ { "msg_contents": "Can someone comment on this?\n\n\n> PQsetenvPoll has a very bad bug in it. It assumes that the passed-in\n> PGconn object has a valid setenv_handle if it is non-NULL. This is\n> true only when it is called from PQconnectdb and friends.\n> \n> The bad code in PQsetenvPoll is this:\n> \n> PGsetenvHandle handle = conn->setenv_handle;\n> ...\n> if (!handle || handle->state == SETENV_STATE_FAILED)\n> return PGRES_POLLING_FAILED;\n> \n> After a connection is successfully established, setenv_handle points\n> to a free(3)'ed handle. Neither PQsetenv, nor PQsetenvStart correctly\n> update this field with a new setenvHandle. Here is a short test case\n> demonstrating the memory corruption.\n> \n> #include <libpq-fe.h>\n> #include <stdio.h>\n> \n> main()\n> {\n> \tfoo(0);\n> }\n> \n> foo(i)\n> int i;\n> {\n> \tPGconn *P;\n> \n> \tP = PQconnectdb(\"\");\n> \tif (!P || PQstatus(P) != CONNECTION_OK) {\n> \t\tfprintf(stderr, \"connectdb failed\\n\");\n> \t\treturn;\n> \t}\n> \n> \tPQsetenv(P);\n> \tPQfinish(P);\n> \n> \tif (i < 1000) {\n> \t\tfoo(i+1);\n> \t}\n> }\n> \n> (gdb) where\n> #0 0x4007e683 in chunk_free (ar_ptr=0x4010ba80, p=0x80516b0) at malloc.c:3057\n> #1 0x4007e408 in __libc_free (mem=0x80516c8) at malloc.c:2959\n> #2 0x4001fce9 in freePGconn () from /usr/local/pgsql/lib/libpq.so.2.1\n> #3 0x4001fe4d in PQfinish () from /usr/local/pgsql/lib/libpq.so.2.1\n> #4 0x8048693 in foo ()\n> #5 0x80486ac in foo ()\n> #6 0x8048620 in main ()\n> #7 0x400454be in __libc_start_main (main=0x8048610 <main>, argc=1, \n> argv=0xbffff8c4, init=0x804846c <_init>, fini=0x80486f4 <_fini>, \n> rtld_fini=0x4000a130 <_dl_fini>, stack_end=0xbffff8bc)\n> at ../sysdeps/generic/libc-start.c:90\n> \n> \n> \n> One fix is to add a `conn->setenv = handle' to PQsetenvStart before\n> returning, but that won't protect in the case of PQsetenvPoll being\n> called without a corresponding PQsetenvStart first. Perhaps the\n> interface should be revisited. Do you really need to store the\n> setenvHandle in a PGconn? There is no existing way to safely free\n> setenvHandles.\n> \n> This bug was also in 7.0beta1.\n> \n> \n> \n> In the latest patches, an encoding field has been added to the\n> PGresult object. May I respectfully request an accessor function be\n> added to retrieve it?\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jun 2000 13:33:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: libpq problems in CVS" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Can someone comment on this?\n\n>> PQsetenvPoll has a very bad bug in it. It assumes that the passed-in\n>> PGconn object has a valid setenv_handle if it is non-NULL. This is\n>> true only when it is called from PQconnectdb and friends.\n\nProblem is gone: we don't export PQsetenvPoll anymore.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Jun 2000 13:53:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq problems in CVS " } ]
[ { "msg_contents": "Two way replication on a single \"table\" is availabe in Lotus Notes. In\nNotes, every record has a time-stamp, which contains the time of the\nlast update. (It also has a creation timestamp.) During replication,\ntimestamps are compared at the row/record level, and compared with the\ntimestamp of the last replication. If, for corresponding rows in two\nreplicas, the timestamp of one row is newer than the last replication,\nthe contents of this newer row is copied to the other replica. But if\nboth of the corresponding rows have newer timestamps, there is a\nproblem. The Lotus Notes solution is to:\n 1. send a replication conflict message to the Notes Administrator,\nwhich message contains full copies of both rows.\n 2. copy the newest row over the less new row in the replicas.\n 3. there is a mechanism for the Administrator to reverse the default\ndecision in 2, if the semantics of the message history, or off-line\ninvestigation indicates that the wrong decision was made.\n\nIn practice, the Administrator is not overwhelmed with replication\nconflict messages because updates usually only originate at the site\nthat originally created the row. Or updates fill only fields that were\noriginally 'TBD'. The full logic is perhaps more complicated than I have\ndescribed here, but it is already complicated enough to give you an idea\nof what you're really being asked to do. I am not aware of a supplier of\nrelational database who really supports two way replication at the level\nthat Notes supports it, but Notes isn't a relational database.\n\nThe difficulty of the position that you appear to be in is that\nmanagement might believe that the full problem is solved in brand X\nRDBMS, and you will have trouble convincing management that this is not\nreally true.\n\n", "msg_date": "Fri, 09 Jun 2000 11:52:57 -0700", "msg_from": "Paul Condon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Big project, please help" }, { "msg_contents": "Paul Condon wrote:\n> \n> Two way replication on a single \"table\" is availabe in Lotus Notes. In\n> Notes, every record has a time-stamp, which contains the time of the\n> last update. (It also has a creation timestamp.) During replication,\n> timestamps are compared at the row/record level, and compared with the\n> timestamp of the last replication. If, for corresponding rows in two\n\nI've implemented a similar two-way replication scheme for an application\nsome years ago and it works well. This was written using Progress 4GL,\nwhich has specific triggers for handling replication functionality\n(REPLICATION-CREATE, REPLICATION-WRITE and REPLICATION-DELETE) which we\ncoded to write replication data into a table (Progress also allows you\nto grab the whole record as a \"RAW\" field which we stuffed into a field\nin the replication table).\n\nThen we wrote processes to periodically dump the replication tables at\neach site, swap them, and apply the updates.\n\nThe reason/advantage of having separate replication triggers was that\nthey would be disabled on replicate-in, but the _other_ triggers could\nbe left firing if desired. In fact I think we found it wasn't desired\nin most cases (because the changes effected by the triggers were _also_\nbeing replicated), but we did use it in some cases (like where a summary\ntable was not being replicated, and was being maintained entirely by\ntriggers at both sites).\n\nOf course we were writing our own replication information with this\none. 
We kept before-image records for replication changes, rather than\nhaving modification timestamps for every record, and considered that a\ndifferent before-image was a replication conflict. Since we were\nwriting the replication ourselves this before-image / after-image\napproach worked better than having to add timestamps to every table on\nthe database.\n\nThe _really_ necessary function for achieving this sort of replication\nwould be a way of getting a raw record before and after the changes -\nI'm no PostgreSQL guru, but I think that should be possible in a 3GL\ntrigger. The replication itself could be implemented with normal logic\nand some flags to indicate whether a process is running normally, or is\nreplicating data in. That detection could be handled (e.g.) by having\nthe replication-in process operate as a special 'replication' user that\nwould be detected within normal triggers enabling/disabling\nfunctionality as appropriate.\n\nA fairly small 'C'-language routine to operate as a generic replication\ntrigger should be achievable quite readily within these constraints, I\nthink. I imagine it would be best to have such a routine write output\ndirectly to log files, rather than to PostgreSQL tables, given the 8k\nrecord size limitation and current problems holding binary data directly\nin PostgreSQL columns. This is unfortunate, as it would also introduce\nconcurrency complications for a busy database. If you can guarantee\ntable record sizes under 2k you can probably get away with using\nPostgreSQL tables for the replication data if you did some sort of\nencoding of the raw record images.\n\nThis scheme worked really well (continues to work well) for about 4\nyears now. Conflicts are rare because although there are around 40\npeople using the application at various locations, they are all all\naccessing fairly narrow record sets, especially for update.\n\nIn my replication we have left the actual transfer mechanism as far out\nof the equation as possible. In fact we used e-mail for the replication\nmessages. When the sites were on dial-up modems we just did it once a\nday, but now they are on DSL internet connections we do it much more\nregularly. That much reduces the numbers of collisions too, of course.\n\nFeel free to pick my brains more on this - I should even be able to dig\nout all of our design documentation on it from somewhere!\n\nRegards,\n\t\t\t\t\tAndrew McMillan\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: [email protected]\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n", "msg_date": "Sat, 10 Jun 2000 16:54:12 +1200", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Big replication project, please help" } ]
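A rough libpq sketch of the apply side of the scheme described above: a logged change is replicated in only when the live row still matches the logged before-image, otherwise a conflict row is written for an administrator, as in the Notes-style resolution. Every table and column name here is invented, quoting/escaping is omitted for brevity, and the original implementation described was in Progress 4GL rather than C.

#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

static void
apply_change(PGconn *conn, const char *key,
             const char *before_img, const char *after_img)
{
    char      query[1024];
    PGresult *res;

    snprintf(query, sizeof(query),
             "SELECT img FROM item_current WHERE key = '%s'", key);
    res = PQexec(conn, query);

    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1 &&
        strcmp(PQgetvalue(res, 0, 0), before_img) == 0)
    {
        /* no conflict: replicate the change in */
        PQclear(res);
        snprintf(query, sizeof(query),
                 "UPDATE item_current SET img = '%s' WHERE key = '%s'",
                 after_img, key);
        res = PQexec(conn, query);
    }
    else
    {
        /* before-images differ: log a conflict instead of overwriting */
        PQclear(res);
        snprintf(query, sizeof(query),
                 "INSERT INTO replication_conflict (key, expected_img, incoming_img)"
                 " VALUES ('%s', '%s', '%s')", key, before_img, after_img);
        res = PQexec(conn, query);
    }
    PQclear(res);
}
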
[ { "msg_contents": "Can someone comment on this? Add to TODO?\n\n\n> On Fri, 17 Mar 2000, Thomas Lockhart wrote:\n> >Actually, I'd suggest we *remove* the \"//\" comment delimiters\n> >altogether. We always had the \"--\" SQL92 delimiter, I added the \"/*\n> >... */\" so we could get a block delimiter of some sort (it is the same\n> >aas in Ingres). I don't know what other DBMSes do, and we could define\n> >something else instead if SQL3 or some other convention offers a\n> >strong reason.\n> \n> I think the standard specifies the curly brackets as comment block characters.\n> I checked, and we do not have them :-( \n> \n> Example:\n> select { this is a comment } * from pg_class;\n> \n> Most (all that I know) other DB's have them.\n> \n> Andreas\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jun 2000 15:58:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FORMAL VOTE ON =- and similar" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Can someone comment on this? Add to TODO?\n\nLast I checked, SQL3 specifies -- and /*...*/ and nothing else.\nI think we are in good shape on the comment front.\n\n\t\t\tregards, tom lane\n\n\n>> On Fri, 17 Mar 2000, Thomas Lockhart wrote:\n>>>> Actually, I'd suggest we *remove* the \"//\" comment delimiters\n>>>> altogether. We always had the \"--\" SQL92 delimiter, I added the \"/*\n>>>> ... */\" so we could get a block delimiter of some sort (it is the same\n>>>> aas in Ingres). I don't know what other DBMSes do, and we could define\n>>>> something else instead if SQL3 or some other convention offers a\n>>>> strong reason.\n>> \n>> I think the standard specifies the curly brackets as comment block characters.\n>> I checked, and we do not have them :-( \n>> \n>> Example:\n>> select { this is a comment } * from pg_class;\n>> \n>> Most (all that I know) other DB's have them.\n>> \n>> Andreas\n>> \n\n\n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 09 Jun 2000 17:55:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FORMAL VOTE ON =- and similar " }, { "msg_contents": "\n\n> Can someone comment on this? Add to TODO?\n\nNo.\n\n> > On Fri, 17 Mar 2000, Thomas Lockhart wrote:\n> > >Actually, I'd suggest we *remove* the \"//\" comment delimiters\n> > >altogether. We always had the \"--\" SQL92 delimiter, I added the \"/*\n> > >... */\" so we could get a block delimiter of some sort (it is the same\n> > >aas in Ingres). I don't know what other DBMSes do, and we could define\n> > >something else instead if SQL3 or some other convention offers a\n> > >strong reason.\n> > \n> > I think the standard specifies the curly brackets as comment block characters.\n\nwrong, I'm sorry for the misinformation.\n\n> > I checked, and we do not have them :-( \n> > \n> > Example:\n> > select { this is a comment } * from pg_class;\n> > \n> > Most (all that I know) other DB's have them\n\nIt turns out I was always beleiving it to be so, but it isn't. 
\nThe {} are an Informix'ism and thus not worth implementing.\n\nAndreas\n\n", "msg_date": "Mon, 12 Jun 2000 16:46:51 +0200", "msg_from": "\"Zeugswetter Andreas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FORMAL VOTE ON =- and similar" }, { "msg_contents": "> > > Example:\n> > > select { this is a comment } * from pg_class;\n> > > \n> > > Most (all that I know) other DB's have them\n> \n> It turns out I was always beleiving it to be so, but it isn't. \n> The {} are an Informix'ism and thus not worth implementing.\n\nYes, I have seen it on Informix.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jun 2000 12:14:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FORMAL VOTE ON =- and similar" } ]
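For reference, the two comment forms the parser does accept per the follow-ups above, shown in an example query string; the query itself is only a placeholder.

/* SQL92/SQL3 comment styles; the Informix-style { ... } braces
 * discussed above are not accepted. */
static const char *example_query =
    "SELECT relname              -- line comment to end of line\n"
    "  FROM pg_class             /* block comment, may span lines */\n"
    " WHERE relkind = 'r';";
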
[ { "msg_contents": "Was this resolved?\n\n\n> Bruce Momjian <[email protected]> writes:\n> > \ttest=> SELECT date('1/1/1992') + timespan('1 year');\n> > \tERROR: No such function 'timespan' with the specified attributes\n> \n> This works:\n> \n> SELECT date('1/1/1992') + '1 year'::timespan;\n> \n> The function parsing code has a rather half-baked attempt to interpret\n> function calls that match type names as casts. IIRC, it only works\n> when the cast is between binary-compatible types. We should probably\n> either rip that out or make it fully equivalent to a typecast.\n> If the latter, it would have to be tried *after* failing to find a\n> matching ordinary function --- I think it's tried first at the moment,\n> which is pretty bogus.\n> \n> A more restricted possibility that would cover this particular example\n> is to treat a function call as a typecast if (a) the function name\n> matches a type name *and* (b) the argument is of type UNKNOWN (ie,\n> it is a string literal of as-yet-undetermined type).\n> \n> I'm starting to get uncomfortable with the amount of syntax and\n> semantics rejiggering we're doing in beta phase... so I'd not recommend\n> trying to implement the first option now. If people like the more\n> restricted fix, maybe that would be reasonable to do now.\n> \n> \n> I notice that although 6.5 doesn't take the query either, it gives\n> a different and perhaps more appropriate error message:\n> \n> play=> SELECT date('1/1/1992') + timespan('1 year');\n> ERROR: Function 'timespan(unknown)' does not exist\n> Unable to identify a function which satisfies the given argument types\n> You will have to retype your query using explicit typecasts\n> \n> I thought I'd got rid of the nonspecific error messages for function/\n> operator lookup failures, but this case seems to have got worse instead\n> of better. Drat. Will look into that.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jun 2000 16:30:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding time to DATE type" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Was this resolved?\n\nYeah, I think it's working fairly well now. The current code first\ntries to look up an actual function matching the name + arguments,\nand only if that fails does it try to interpret the construct as a\nbinary-compatible type coercion.\n\n\t\t\tregards, tom lane\n\n>> Bruce Momjian <[email protected]> writes:\n>>>> test=> SELECT date('1/1/1992') + timespan('1 year');\n>>>> ERROR: No such function 'timespan' with the specified attributes\n>> \n>> This works:\n>> \n>> SELECT date('1/1/1992') + '1 year'::timespan;\n>> \n>> The function parsing code has a rather half-baked attempt to interpret\n>> function calls that match type names as casts. IIRC, it only works\n>> when the cast is between binary-compatible types. 
We should probably\n>> either rip that out or make it fully equivalent to a typecast.\n>> If the latter, it would have to be tried *after* failing to find a\n>> matching ordinary function --- I think it's tried first at the moment,\n>> which is pretty bogus.\n>> \n>> A more restricted possibility that would cover this particular example\n>> is to treat a function call as a typecast if (a) the function name\n>> matches a type name *and* (b) the argument is of type UNKNOWN (ie,\n>> it is a string literal of as-yet-undetermined type).\n>> \n>> I'm starting to get uncomfortable with the amount of syntax and\n>> semantics rejiggering we're doing in beta phase... so I'd not recommend\n>> trying to implement the first option now. If people like the more\n>> restricted fix, maybe that would be reasonable to do now.\n>> \n>> \n>> I notice that although 6.5 doesn't take the query either, it gives\n>> a different and perhaps more appropriate error message:\n>> \n>> play=> SELECT date('1/1/1992') + timespan('1 year');\n>> ERROR: Function 'timespan(unknown)' does not exist\n>> Unable to identify a function which satisfies the given argument types\n>> You will have to retype your query using explicit typecasts\n>> \n>> I thought I'd got rid of the nonspecific error messages for function/\n>> operator lookup failures, but this case seems to have got worse instead\n>> of better. Drat. Will look into that.\n>> \n>> regards, tom lane\n>> \n\n\n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 09 Jun 2000 18:06:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding time to DATE type " } ]
[ { "msg_contents": "Can someone give me a TODO summary for this issue?\n\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> On Fri, 17 Mar 2000, Bruce Momjian wrote:\n> \n> > \ttest=> SELECT date('1/1/1992') + timespan('1 year');\n> \n> If I may point something out here, the correct syntax for this in SQL92 is\n> \n> SELECT DATE '1/1/1992' + INTERVAL '1 year'\n> \n> (Ignoring the fact that neither the date nor the interval strings have the\n> correct format.)\n> \n> This converts to a cast in PostgreSQL, which is fine, but the standard\n> makes a semantic distinction:\n> \n> \tCAST('2000-02-29' AS DATE)\n> \n> converts a character literal to date\n> \n> \tDATE '2000-02-29'\n> \n> *is* a date literal. Furthermore, just\n> \n> \t'2000-02-29'\n> \n> is not a date literal.\n> \n> I've been doing some lobbying to get rid of the \"unknown\" type because SQL\n> is perfectly clear about what \"quote-stuff-quote\" means (character type)\n> and in absence of any evidence to the contrary (such as a function only\n> taking date arguments, inserting it into a date field) it should be\n> treated as such. That will get rid of such embarrassments as\n> \n> \tSELECT 'a' LIKE 'a' -- try it\n> \n> Tom believes that this will create a pain for the odd data type crowd but\n> I don't think that this is so (or at least has to be so) whereas the\n> current behavior creates a pain for the normal data type crowd.\n> \n> Just my ideas.\n> \n> \n> -- \n> Peter Eisentraut Sernanders v?g 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jun 2000 16:30:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding time to DATE type" }, { "msg_contents": "Bruce Momjian writes:\n\n> Can someone give me a TODO summary for this issue?\n\n* make 'text' constants default to text type (not unknown)\n\n(I think not everyone's completely convinced on this issue, but I don't\nrecall anyone being firmly opposed to it.)\n\n* add SQL interval syntax\n\n\n> > On Fri, 17 Mar 2000, Bruce Momjian wrote:\n> > \n> > > \ttest=> SELECT date('1/1/1992') + timespan('1 year');\n> > \n> > If I may point something out here, the correct syntax for this in SQL92 is\n> > \n> > SELECT DATE '1/1/1992' + INTERVAL '1 year'\n> > \n> > (Ignoring the fact that neither the date nor the interval strings have the\n> > correct format.)\n> > \n> > This converts to a cast in PostgreSQL, which is fine, but the standard\n> > makes a semantic distinction:\n> > \n> > \tCAST('2000-02-29' AS DATE)\n> > \n> > converts a character literal to date\n> > \n> > \tDATE '2000-02-29'\n> > \n> > *is* a date literal. Furthermore, just\n> > \n> > \t'2000-02-29'\n> > \n> > is not a date literal.\n> > \n> > I've been doing some lobbying to get rid of the \"unknown\" type because SQL\n> > is perfectly clear about what \"quote-stuff-quote\" means (character type)\n> > and in absence of any evidence to the contrary (such as a function only\n> > taking date arguments, inserting it into a date field) it should be\n> > treated as such. 
That will get rid of such embarrassments as\n> > \n> > \tSELECT 'a' LIKE 'a' -- try it\n> > \n> > Tom believes that this will create a pain for the odd data type crowd but\n> > I don't think that this is so (or at least has to be so) whereas the\n> > current behavior creates a pain for the normal data type crowd.\n> > \n> > Just my ideas.\n> > \n> > \n> > -- \n> > Peter Eisentraut Sernanders v?g 10:115\n> > [email protected] 75262 Uppsala\n> > http://yi.org/peter-e/ Sweden\n> > \n> > \n> \n> \n> \n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Sun, 11 Jun 2000 13:41:24 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding time to DATE type" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Bruce Momjian writes:\n> \n> > Can someone give me a TODO summary for this issue?\n> \n> * make 'text' constants default to text type (not unknown)\n> \n> (I think not everyone's completely convinced on this issue, but I don't\n> recall anyone being firmly opposed to it.)\n\nI don't know but I know it came up in the last month. Something about\ncharacter strings not being considered TEXT, and they had to be cast to\nTEXT to be used.\n\n> \n> * add SQL interval syntax\n\nThese must be your own items. I don't see them on the main TODO list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 11 Jun 2000 21:05:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding time to DATE type" }, { "msg_contents": "On Sun, 11 Jun 2000, Bruce Momjian wrote:\n\n> > > Can someone give me a TODO summary for this issue?\n> > \n> > * make 'text' constants default to text type (not unknown)\n> > \n> > (I think not everyone's completely convinced on this issue, but I don't\n> > recall anyone being firmly opposed to it.)\n> \n> I don't know but I know it came up in the last month. Something about\n> character strings not being considered TEXT, and they had to be cast to\n> TEXT to be used.\n> \n> > \n> > * add SQL interval syntax\n> \n> These must be your own items. I don't see them on the main TODO list.\n\n?? You asked for a TODO summary, and these are the things that would need\nTO be DOne in order to address the issue originally at hand.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 12 Jun 2000 13:20:05 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding time to DATE type" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Bruce Momjian writes:\n>> Can someone give me a TODO summary for this issue?\n\n> * make 'text' constants default to text type (not unknown)\n\n> (I think not everyone's completely convinced on this issue, but I don't\n> recall anyone being firmly opposed to it.)\n\nIt would be a mistake to eliminate the distinction between unknown and\ntext. See for example my just-posted response to John Cochran on\npgsql-general about why 'BOULEVARD'::text behaves differently from\n'BOULEVARD'::char. 
If string literals are immediately assigned type\ntext then we will have serious problems with char(n) fields.\n\nI think it's fine to assign string literals a type of 'unknown'\ninitially. What we need to do is add a phase of type resolution that\nconsiders treating them as text, but only after the existing logic fails\nto deduce a type.\n\n(BTW it might be better to treat string literals as defaulting to char(n)\ninstead of text, allowing the normal promotion rules to replace char(n)\nwith text if necessary. Not sure if that would make things more or less\nconfusing for operations that intermix fixed- and variable-width char\ntypes.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jun 2000 13:10:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding time to DATE type " }, { "msg_contents": "On Mon, 12 Jun 2000, Tom Lane wrote:\n\n> > * make 'text' constants default to text type (not unknown)\n\n> I think it's fine to assign string literals a type of 'unknown'\n> initially. What we need to do is add a phase of type resolution that\n> considers treating them as text, but only after the existing logic fails\n> to deduce a type.\n\nHence \"default to\"\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 13 Jun 2000 15:25:29 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding time to DATE type " }, { "msg_contents": "Oh, OK.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> On Sun, 11 Jun 2000, Bruce Momjian wrote:\n> \n> > > > Can someone give me a TODO summary for this issue?\n> > > \n> > > * make 'text' constants default to text type (not unknown)\n> > > \n> > > (I think not everyone's completely convinced on this issue, but I don't\n> > > recall anyone being firmly opposed to it.)\n> > \n> > I don't know but I know it came up in the last month. Something about\n> > character strings not being considered TEXT, and they had to be cast to\n> > TEXT to be used.\n> > \n> > > \n> > > * add SQL interval syntax\n> > \n> > These must be your own items. I don't see them on the main TODO list.\n> \n> ?? You asked for a TODO summary, and these are the things that would need\n> TO be DOne in order to address the issue originally at hand.\n> \n> -- \n> Peter Eisentraut Sernanders v?g 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Jun 2000 00:47:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding time to DATE type" }, { "msg_contents": "Is this something worth addressing?\n\n> Peter Eisentraut <[email protected]> writes:\n> > Bruce Momjian writes:\n> >> Can someone give me a TODO summary for this issue?\n> \n> > * make 'text' constants default to text type (not unknown)\n> \n> > (I think not everyone's completely convinced on this issue, but I don't\n> > recall anyone being firmly opposed to it.)\n> \n> It would be a mistake to eliminate the distinction between unknown and\n> text. See for example my just-posted response to John Cochran on\n> pgsql-general about why 'BOULEVARD'::text behaves differently from\n> 'BOULEVARD'::char. 
If string literals are immediately assigned type\n> text then we will have serious problems with char(n) fields.\n> \n> I think it's fine to assign string literals a type of 'unknown'\n> initially. What we need to do is add a phase of type resolution that\n> considers treating them as text, but only after the existing logic fails\n> to deduce a type.\n> \n> (BTW it might be better to treat string literals as defaulting to char(n)\n> instead of text, allowing the normal promotion rules to replace char(n)\n> with text if necessary. Not sure if that would make things more or less\n> confusing for operations that intermix fixed- and variable-width char\n> types.)\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 2 Oct 2000 23:43:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding time to DATE type" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Is this something worth addressing?\n\nYes, but not when we're already overdue for beta. We've been around\non the question of type promotion rules several times, and no one has\nyet put forward a solution that everyone else liked. I don't expect\nto see a usable solution both proposed and implemented in the next\nthree weeks...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Oct 2000 00:16:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding time to DATE type " } ]
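A few concrete statements illustrate the distinction argued in the thread above. This is only a sketch written against present-day PostgreSQL syntax; how an untyped quoted literal gets resolved is precisely the behaviour under discussion, so the details varied between releases:

    -- SQL92 literal syntax: DATE and INTERVAL make these typed literals
    SELECT DATE '1992-01-01' + INTERVAL '1 year';

    -- An explicit conversion of a character string, i.e. a cast
    SELECT CAST('2000-02-29' AS DATE);

    -- A bare quoted string carries no type of its own; forcing one with a
    -- cast sidesteps the 'unknown'-type ambiguity mentioned above
    SELECT 'a'::text LIKE 'a';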
[ { "msg_contents": "Hi,\n\nI'm getting an error when trying to install a build from the\ncurrent CVS tree.\n\nThe error is :-\n\ngmake[4]: Leaving directory `/export/home/pgsql/src/backend/utils/time'\ngmake[3]: Leaving directory `/export/home/pgsql/src/backend/utils'\n../config/install-sh -c -m 555 postgres \n/export/home/pgsql/src/test/regress/tmp_check/bin/postgres\ngmake[2]: ../config/install-sh: Command not found\ngmake[2]: *** [install-bin] Error 127\ngmake[2]: Leaving directory `/export/home/pgsql/src/backend'\ngmake[1]: *** [install] Error 2\ngmake[1]: Leaving directory `/export/home/pgsql/src'\n\nMy guess is that the make variable $(INSTALL) is getting set\nto ../config/install-sh by configure. This path is then not\nappropriate when we move out of the \"src\" subdirectory.\n\nI believe Peter Eisentraut is working in this area at the moment,\nso maybe I've been caught by work in progress.\n\nThanks,\nKeith.\n\n", "msg_date": "Fri, 9 Jun 2000 22:25:49 +0100 (BST)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Install error current CVS" }, { "msg_contents": "Keith Parks <[email protected]> writes:\n> My guess is that the make variable $(INSTALL) is getting set\n> to ../config/install-sh by configure. This path is then not\n> appropriate when we move out of the \"src\" subdirectory.\n\nKeith, are you still seeing that? AFAICT, $(INSTALL) should\nalways be assigned an absolute path. I can't duplicate the\nproblem here, for sure.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 11 Jun 2000 13:02:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Install error current CVS " }, { "msg_contents": "Tom Lane writes:\n\n> Keith Parks <[email protected]> writes:\n> > My guess is that the make variable $(INSTALL) is getting set\n> > to ../config/install-sh by configure. This path is then not\n> > appropriate when we move out of the \"src\" subdirectory.\n> \n> Keith, are you still seeing that? AFAICT, $(INSTALL) should\n> always be assigned an absolute path. I can't duplicate the\n> problem here, for sure.\n\nJust so you know what the issue here is: When configure fails to find a\nworking install program it uses the install-sh that is shipped. It creates\nan output variable like INSTALL=./config/install-sh. When it substitutes\nthis into the makefiles (@INSTALL@) it adjusts the paths so they have the\nright number of `../'. In the current Postgres build this gets only\nsubstituted into Makefile.global, therefore the relative paths are wrong\nwhen it's included from another directory.\n\nIt's probably futile to argue whether the relative paths have a merit, the\nonly way to work around this is to substitute INSTALL into every makefile\nwhere it's used, which is what I'm in the process of doing.\n\nThe current workaround is to write AC_CONFIG_AUX_DIR(`pwd`/config) which\nwill generate the absolute path. But this will create problems with\nlibtool, which statically scans configure.in for a place to put its\nltconfig.sh and related stuff. 
It doesn't know that `pwd` is not a\nrelative file name.\n\n(There is also a related reason that you can't use\nAC_OUTPUT($some_variable), because Automake needs to statically determine\nall .in files that are touched.)\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 12 Jun 2000 00:34:49 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Install error current CVS " } ]
[ { "msg_contents": "Hi,\n\nI know it doesn't show in a patch, but.. there are some\nrogue spaces at the end of the line containing \"usecreatetable\".\n\nKeith.\n\n *** pgsql/src/bin/initdb/initdb.sh Fri Jun 9 18:09:51 2000\n--- /usr/local/pgsql/src/bin/initdb/initdb.sh Sat Jun 10 00:07:07 2000\n***************\n*** 523,529 ****\n usename, \\\n usesysid, \\\n usecreatedb, \\\n! \\ \n uselocktable, \\\n usetrace, \\\n usesuper, \\\n--- 523,529 ----\n usename, \\\n usesysid, \\\n usecreatedb, \\\n! usecreatetable, \\\n uselocktable, \\\n usetrace, \\\n usesuper, \\\n\n", "msg_date": "Sat, 10 Jun 2000 00:12:26 +0100 (BST)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Current initdb broken." }, { "msg_contents": "THanks. Fixed.\n\n\n> Hi,\n> \n> I know it doesn't show in a patch, but.. there are some\n> rogue spaces at the end of the line containing \"usecreatetable\".\n> \n> Keith.\n> \n> *** pgsql/src/bin/initdb/initdb.sh Fri Jun 9 18:09:51 2000\n> --- /usr/local/pgsql/src/bin/initdb/initdb.sh Sat Jun 10 00:07:07 2000\n> ***************\n> *** 523,529 ****\n> usename, \\\n> usesysid, \\\n> usecreatedb, \\\n> ! \\ \n> uselocktable, \\\n> usetrace, \\\n> usesuper, \\\n> --- 523,529 ----\n> usename, \\\n> usesysid, \\\n> usecreatedb, \\\n> ! usecreatetable, \\\n> uselocktable, \\\n> usetrace, \\\n> usesuper, \\\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jun 2000 19:49:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Current initdb broken." }, { "msg_contents": "Um, excuse me. Any changes to the format of pg_shadow will break\npg_dumpall, therefore this needs to be hold back until next release.\n\n\nBruce Momjian writes:\n\n> THanks. Fixed.\n> \n> \n> > Hi,\n> > \n> > I know it doesn't show in a patch, but.. there are some\n> > rogue spaces at the end of the line containing \"usecreatetable\".\n> > \n> > Keith.\n> > \n> > *** pgsql/src/bin/initdb/initdb.sh Fri Jun 9 18:09:51 2000\n> > --- /usr/local/pgsql/src/bin/initdb/initdb.sh Sat Jun 10 00:07:07 2000\n> > ***************\n> > *** 523,529 ****\n> > usename, \\\n> > usesysid, \\\n> > usecreatedb, \\\n> > ! \\ \n> > uselocktable, \\\n> > usetrace, \\\n> > usesuper, \\\n> > --- 523,529 ----\n> > usename, \\\n> > usesysid, \\\n> > usecreatedb, \\\n> > ! usecreatetable, \\\n> > uselocktable, \\\n> > usetrace, \\\n> > usesuper, \\\n> > \n> > \n> \n> \n> \n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 10 Jun 2000 17:39:41 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Current initdb broken." }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Um, excuse me. Any changes to the format of pg_shadow will break\n> pg_dumpall, therefore this needs to be hold back until next release.\n\n\nI see your point, and the comment in pg_dumpall:\n\n\t# load all the non-postgres users\n\t# XXX this breaks badly if the layout of pg_shadow ever changes.\n\t# It'd be better to convert the data into CREATE USER commands.\n\n\nWell, seems there would be no good way to add columns using pg_dumpall,\neven if we waited until 7.1. 
The problem is that pg_dumpall assumes\nthat the databases being dumped and loaded have a pg_shadow with exactly\nthe same columns, as you noted in your comment. \n\nWell one solution is to have 7.1 use CREATE USER commands, then add the\npg_shadow columns in 7.2.\n\nIf we re-order the new columns to be at the end, those columns will load\nin with NULL's.\n\nWe could then modify the user code to make default for NULL values in\nthe new columns.\n\nAnother answer is to add some code to COPY to supply default values for\nmissing fields when loading pg_shadow. If we move the new columns to\nthe end of pg_shadow, that should be only a few lines of C code which we\ncan remove later.\n\n\n\n> \n> \n> Bruce Momjian writes:\n> \n> > THanks. Fixed.\n> > \n> > \n> > > Hi,\n> > > \n> > > I know it doesn't show in a patch, but.. there are some\n> > > rogue spaces at the end of the line containing \"usecreatetable\".\n> > > \n> > > Keith.\n> > > \n> > > *** pgsql/src/bin/initdb/initdb.sh Fri Jun 9 18:09:51 2000\n> > > --- /usr/local/pgsql/src/bin/initdb/initdb.sh Sat Jun 10 00:07:07 2000\n> > > ***************\n> > > *** 523,529 ****\n> > > usename, \\\n> > > usesysid, \\\n> > > usecreatedb, \\\n> > > ! \\ \n> > > uselocktable, \\\n> > > usetrace, \\\n> > > usesuper, \\\n> > > --- 523,529 ----\n> > > usename, \\\n> > > usesysid, \\\n> > > usecreatedb, \\\n> > > ! usecreatetable, \\\n> > > uselocktable, \\\n> > > usetrace, \\\n> > > usesuper, \\\n> > > \n> > > \n> > \n> > \n> > \n> \n> -- \n> Peter Eisentraut Sernanders v?g 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 10 Jun 2000 13:37:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Current initdb broken." }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Um, excuse me. Any changes to the format of pg_shadow will break\n> pg_dumpall, therefore this needs to be hold back until next release.\n> \n\nWoo, hoo. We have triggers and constraints on COPY. We could do the\ndefault values that way. Seems DEFAULT is not activated in COPY. I\nknew there was some limitation in COPY.\n\n\ttest=> create table test(x int, y int default 5);\n\tCREATE\n\ttest=> copy test from '/tmp/x';\n\tCOPY\n\ttest=> select * from test;\n\t x | y \n\t---+---\n\t 1 | \n\t(1 row)\n\nCan someone suggest a clean solution for COPY? Seems we have triggers\nand constraints.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 10 Jun 2000 14:03:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Current initdb broken." }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Um, excuse me. 
Any changes to the format of pg_shadow will break\n> pg_dumpall, therefore this needs to be hold back until next release.\n\nWe are going to have to improve pg_dumpall sooner or later, aren't we?\nLoading pg_shadow and pg_group by direct COPY just won't do.\n\nBut you're right, this patch will have to be reversed out until a\nmore robust pg_dumpall has been out there for at least one release.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 Jun 2000 23:41:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Current initdb broken. " }, { "msg_contents": "Bruce Momjian writes:\n\n> Well one solution is to have 7.1 use CREATE USER commands, then add the\n> pg_shadow columns in 7.2.\n\nThat's the plan.\n\n> If we re-order the new columns to be at the end, those columns will load\n> in with NULL's.\n> \n> We could then modify the user code to make default for NULL values in\n> the new columns.\n\nI'm not sure, that would break the basic line of error checking in COPY.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Sun, 11 Jun 2000 13:41:09 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Current initdb broken." }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Bruce Momjian writes:\n> \n> > Well one solution is to have 7.1 use CREATE USER commands, then add the\n> > pg_shadow columns in 7.2.\n> \n> That's the plan.\n> \n> > If we re-order the new columns to be at the end, those columns will load\n> > in with NULL's.\n> > \n> > We could then modify the user code to make default for NULL values in\n> > the new columns.\n> \n> I'm not sure, that would break the basic line of error checking in COPY.\n\nIt would be active only for pg_shadow and no other table. However, I\nsee that having not enough column in COPY fills the rest with NULL and\ndoes not report an error.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 11 Jun 2000 09:28:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Current initdb broken." }, { "msg_contents": "Bruce Momjian writes:\n\n> > I'm not sure, that would break the basic line of error checking in COPY.\n> \n> It would be active only for pg_shadow and no other table. However, I\n> see that having not enough column in COPY fills the rest with NULL and\n> does not report an error.\n\nThe problem remains that this change is conceptually questionable in the\nfirst place.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Mon, 12 Jun 2000 00:32:11 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Current initdb broken." }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Bruce Momjian writes:\n> \n> > > I'm not sure, that would break the basic line of error checking in COPY.\n> > \n> > It would be active only for pg_shadow and no other table. 
However, I\n> > see that having not enough column in COPY fills the rest with NULL and\n> > does not report an error.\n> \n> The problem remains that this change is conceptually questionable in the\n> first place.\n> \n\nOK, I have backed out the changes. We can always use them later if we\nwish.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 11 Jun 2000 23:40:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Current initdb broken." }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I'm not sure, that would break the basic line of error checking in COPY.\n\n> It would be active only for pg_shadow and no other table. However, I\n> see that having not enough column in COPY fills the rest with NULL and\n> does not report an error.\n\nI believe that's a bug in COPY, actually: it should complain if you\ndon't supply the right number of columns.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jun 2000 12:07:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Current initdb broken. " } ]
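The COPY behaviour shown in the psql session above can be condensed into a short example. This is a hedged sketch: the data file path is only illustrative, the NULL-filling of short rows is the 7.0 behaviour described above, and the column-list form is how later releases let omitted columns pick up their defaults:

    CREATE TABLE test (x int, y int DEFAULT 5);

    -- INSERT consults the column default: stores (1, 5)
    INSERT INTO test (x) VALUES (1);

    -- COPY loads the file literally; defaults are not applied to the
    -- data it reads
    COPY test FROM '/tmp/x';

    -- With an explicit column list (later releases), columns omitted
    -- from the list are filled from their defaults
    COPY test (x) FROM '/tmp/x';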
[ { "msg_contents": "Can someone comment on this?\n\n\n> Tom Lane <[email protected]> writes in [email protected]:\n> \n> > SL Baur pointed out a few days ago that PQsetenv* are too fragile to\n> > risk exporting in their current state. I plan to make them non-exported\n> > and remove 'em from the docs, unless someone comes up with a working\n> > redesign PDQ.\n> \n> I made a test patch last weekend if you want it. It's been stress\n> tested in an XEmacs Lisp-calling-libpq environment.\n> \n> This patch adds a PQsetenvClear function that is analogous to the\n> clear function for PQresult's and changes the PQsetenvPoll call to\n> accept a PQsetenvHandle. The PQsetenvHandle is freed automatically\n> when called during a database connect. When the asynchronous setenv\n> calls are called by application code, it is now the responsibility of\n> the application code to free the setenvHandle with PQsetenvClear.\n> \n> Index: src/interfaces/libpq/fe-connect.c\n> ===================================================================\n> RCS file: /usr/local/cvsroot/pgsql/src/interfaces/libpq/fe-connect.c,v\n> retrieving revision 1.123\n> diff -u -r1.123 fe-connect.c\n> --- src/interfaces/libpq/fe-connect.c\t2000/03/11 03:08:36\t1.123\n> +++ src/interfaces/libpq/fe-connect.c\t2000/03/24 03:38:25\n> @@ -1314,6 +1314,8 @@\n> \t\t\t * variables to server.\n> \t\t\t */\n> \n> +\t\t\tif (conn->setenv_handle)\n> +\t\t\t\tPQsetenvClear(conn->setenv_handle);\n> \t\t\tif ((conn->setenv_handle = PQsetenvStart(conn)) == NULL)\n> \t\t\t\tgoto error_return;\n> \n> @@ -1327,10 +1329,12 @@\n> \t\t\t these queries. */\n> \t\t\tconn->status = CONNECTION_OK;\n> \n> -\t\t\tswitch (PQsetenvPoll(conn))\n> +\t\t\tswitch (PQsetenvPoll(conn->setenv_handle))\n> \t\t\t{\n> \t\t\t\tcase PGRES_POLLING_OK: /* Success */\n> \t\t\t\t\tconn->status = CONNECTION_OK;\n> +\t\t\t\t\tfree(conn->setenv_handle);\n> +\t\t\t\t\tconn->setenv_handle = (PGsetenvHandle)NULL;\n> \t\t\t\t\treturn PGRES_POLLING_OK;\n> \n> \t\t\t\tcase PGRES_POLLING_READING: /* Still going */\n> @@ -1343,6 +1347,8 @@\n> \n> \t\t\t\tdefault:\n> \t\t\t\t\tconn->status = CONNECTION_SETENV;\n> +\t\t\t\t\tfree(conn->setenv_handle);\n> +\t\t\t\t\tconn->setenv_handle = (PGsetenvHandle)NULL;\n> \t\t\t\t\tgoto error_return;\n> \t\t\t}\n> \t\t\t/* Unreachable */\n> @@ -1385,6 +1391,11 @@\n> \tif (conn == NULL || conn->status == CONNECTION_BAD)\n> \t\treturn NULL;\n> \n> +\tif (conn->setenv_handle) {\n> +\t\tPQsetenvClear(conn->setenv_handle);\n> +\t\tconn->setenv_handle = (PGsetenvHandle)NULL;\n> +\t}\n> +\n> \tif ((handle = malloc(sizeof(struct pg_setenv_state))) == NULL)\n> \t{\n> \t\tprintfPQExpBuffer(&conn->errorMessage,\n> @@ -1394,6 +1405,7 @@\n> \t}\n> \n> \thandle->conn = conn;\n> +\tconn->setenv_handle = handle;\n> \thandle->res = NULL;\n> \thandle->eo = EnvironmentOptions;\n> \n> @@ -1416,9 +1428,10 @@\n> * ----------------\n> */\n> PostgresPollingStatusType\n> -PQsetenvPoll(PGconn *conn)\n> +PQsetenvPoll(PGsetenvHandle handle)\n> {\n> -\tPGsetenvHandle handle = conn->setenv_handle;\n> +/*\tPGsetenvHandle handle = conn->setenv_handle; */\n> +\n> #ifdef MULTIBYTE\n> \tstatic const char envname[] = \"PGCLIENTENCODING\";\n> #endif\n> @@ -1503,10 +1516,10 @@\n> \n> \t\t\t\tencoding = PQgetvalue(handle->res, 0, 0);\n> \t\t\t\tif (!encoding)\t\t\t/* this should not happen */\n> -\t\t\t\t\tconn->client_encoding = SQL_ASCII;\n> +\t\t\t\t\thandle->conn->client_encoding = SQL_ASCII;\n> \t\t\t\telse\n> \t\t\t\t\t/* set client encoding to pg_conn struct */\n> 
-\t\t\t\t\tconn->client_encoding = pg_char_to_encoding(encoding);\n> +\t\t\t\t\thandle->conn->client_encoding = pg_char_to_encoding(encoding);\n> \t\t\t\tPQclear(handle->res);\n> \t\t\t\t/* We have to keep going in order to clear up the query */\n> \t\t\t\tgoto keep_going;\n> @@ -1590,7 +1603,9 @@\n> \n> \t\tcase SETENV_STATE_OK:\n> \t\t\t/* Tidy up */\n> -\t\t\tfree(handle);\n> +\t\t\t/* This is error prone and requires error conditions to be */\n> +\t\t\t/* treated specially */\n> +\t\t\t/* free(handle); */\n> \t\t\treturn PGRES_POLLING_OK;\n> \n> \t\tdefault:\n> @@ -1606,7 +1621,7 @@\n> \thandle->state = SETENV_STATE_FAILED; /* This may protect us even if we\n> \t\t\t\t\t\t\t\t\t\t * are called after the handle\n> \t\t\t\t\t\t\t\t\t\t * has been freed. */\n> -\tfree(handle);\n> +\t/* free(handle); */\n> \treturn PGRES_POLLING_FAILED;\n> }\n> \n> @@ -1627,10 +1642,24 @@\n> \tif (handle->state != SETENV_STATE_FAILED)\n> \t{\n> \t\thandle->state = SETENV_STATE_FAILED;\n> -\t\tfree(handle);\n> +\t\t/* free(handle); */\n> \t}\n> }\n> \n> +/* ----------------\n> + *\t\tPQsetenvClear\n> + *\n> + * Explicitly release a PGsetenvHandle\n> + *\n> + * ----------------\n> + */\n> +void\n> +PQsetenvClear(PGsetenvHandle handle)\n> +{\n> +\tif (!handle) return;\n> +\thandle->conn->setenv_handle = (PGsetenvHandle)NULL;\n> +\tfree(handle);\n> +}\n> \n> /* ----------------\n> *\t\tPQsetenv\n> @@ -1655,6 +1684,10 @@\n> \tif ((handle = PQsetenvStart(conn)) == NULL)\n> \t\treturn 0;\n> \n> +\tif (conn->setenv_handle)\n> +\t\tfree(conn->setenv_handle);\n> +\tconn->setenv_handle = handle;\n> +\n> \tfor (;;) {\n> \t\t/*\n> \t\t * Wait, if necessary. Note that the initial state (just after\n> @@ -1692,7 +1725,7 @@\n> \t\t/*\n> \t\t * Now try to advance the state machine.\n> \t\t */\n> -\t\tflag = PQsetenvPoll(conn);\n> +\t\tflag = PQsetenvPoll(handle);\n> \t}\n> }\n> \n> @@ -1716,6 +1749,7 @@\n> \tconn->asyncStatus = PGASYNC_IDLE;\n> \tconn->notifyList = DLNewList();\n> \tconn->sock = -1;\n> +\tconn->setenv_handle = (PGsetenvHandle)NULL;\n> #ifdef USE_SSL\n> \tconn->allow_ssl_try = TRUE;\n> #endif\n> @@ -1868,6 +1902,9 @@\n> {\n> \tif (conn)\n> \t{\n> +\t\t/* It is safe to do this now */\n> +\t\tif (conn->setenv_handle)\n> +\t\t\tPQsetenvClear(conn->setenv_handle);\n> \t\tclosePGconn(conn);\n> \t\tfreePGconn(conn);\n> \t}\n> Index: src/interfaces/libpq/libpq-fe.h\n> ===================================================================\n> RCS file: /usr/local/cvsroot/pgsql/src/interfaces/libpq/libpq-fe.h,v\n> retrieving revision 1.61\n> diff -u -r1.61 libpq-fe.h\n> --- src/interfaces/libpq/libpq-fe.h\t2000/03/11 03:08:37\t1.61\n> +++ src/interfaces/libpq/libpq-fe.h\t2000/03/24 03:38:26\n> @@ -234,8 +234,9 @@\n> \t/* Passing of environment variables */\n> \t/* Asynchronous (non-blocking) */\n> \textern PGsetenvHandle PQsetenvStart(PGconn *conn);\n> -\textern PostgresPollingStatusType PQsetenvPoll(PGconn *conn);\n> +\textern PostgresPollingStatusType PQsetenvPoll(PGsetenvHandle handle);\n> \textern void PQsetenvAbort(PGsetenvHandle handle);\n> +\textern void PQsetenvClear(PGsetenvHandle handle);\n> \n> \t/* Synchronous (blocking) */\n> \textern int PQsetenv(PGconn *conn);\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jun 2000 22:14:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Update doc changes needed" } ]
[ { "msg_contents": "OK, seems the issue is closed.\n\n> SL Baur <[email protected]> writes:\n> > I made a test patch last weekend if you want it. It's been stress\n> > tested in an XEmacs Lisp-calling-libpq environment.\n> \n> > This patch adds a PQsetenvClear function that is analogous to the\n> > clear function for PQresult's and changes the PQsetenvPoll call to\n> > accept a PQsetenvHandle. The PQsetenvHandle is freed automatically\n> > when called during a database connect. When the asynchronous setenv\n> > calls are called by application code, it is now the responsibility of\n> > the application code to free the setenvHandle with PQsetenvClear.\n> \n> I think this is just throwing good work after bad. The entire exercise\n> can be eliminated by merging the two useful fields of the 'handle'\n> object into PGconn (at a net cost of 4 bytes); there's no reason to do\n> all the bookkeeping needed to keep track of a separate handle object.\n> \n> I already did that, and de-exported the PQsetenv routines, earlier\n> today. Since I still haven't heard anyone make an argument why any\n> app would want to call PQsetenv, I don't see a reason to do more work\n> on these routines.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jun 2000 22:15:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Update doc changes needed" } ]
[ { "msg_contents": "Just so you know what this aclocal.m4 thing is for: When you run autoconf\nit looks into this file for additional macro definitions. Several ways\nexist to provide additional macros: write them yourself, get them from an\narchive site, or use the ones provided by some program like libtool.\n\nTo create aclocal.m4 you run the program aclocal. This scans configure.in\nand looks for all the macros that are undefined. It then looks around for\ndefinitions of these macros and copies them all into aclocal.m4. \"Looks\naround\" by default means places like /usr/share/aclocal, where things like\nlibtool and automake leave their macros. When you write macros yourself\nyou put them into a .m4 file and stick them (in our case) into config/ and\nthen run `aclocal -I config'. As an analogy, you could think of all the\n*.m4 files as .c files and aclocal.m4 as a library, where aclocal is the\nlinker.\n\naclocal comes with automake as does the AM_PROG_MISSING macro that\nconfigure uses now. Note that this does not mean that anyone working on\nconfigure.in needs to have automake installed, only those that are adding\nexternal macro definitions. No, this wasn't my idea, this is the standard\nautoconf setup.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 10 Jun 2000 20:09:36 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "On aclocal.m4" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> aclocal comes with automake as does the AM_PROG_MISSING macro that\n> configure uses now. Note that this does not mean that anyone working on\n> configure.in needs to have automake installed, only those that are adding\n> external macro definitions.\n\n... or editing existing ones to fix bugs ... in practice, as you push\nmore of configure's functionality into macros (which I agree is nice\nfrom a readability standpoint) it will become almost impossible to work\non configure without modifying config/*.m4.\n\nAs things stood over the weekend, even just pulling from CVS required\nautomake, since aclocal.m4 may or may not get a newer timestamp than\nthe config/*.m4 files. I temporarily diked out the toplevel make\ndependencies that tried to update aclocal.m4, but the issue needs\ndiscussion.\n\nI'd like to be convinced that automake is actually going to be a win\nfor Postgres before we start requiring developers to have it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jun 2000 11:14:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On aclocal.m4 " }, { "msg_contents": "On Mon, 12 Jun 2000, Tom Lane wrote:\n\n> As things stood over the weekend, even just pulling from CVS required\n> automake, since aclocal.m4 may or may not get a newer timestamp than\n> the config/*.m4 files. I temporarily diked out the toplevel make\n> dependencies that tried to update aclocal.m4, but the issue needs\n> discussion.\n\nAs I mentioned to you off-list, if it actually invoked aclocal when there\nwas none, then that's a bug. Configure should put out a line `checking for\nworking aclocal|autoconf... missing|found'.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 13 Jun 2000 14:41:33 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On aclocal.m4 " } ]
[ { "msg_contents": "Hello all,\n\nI did small patch for subj. I am sure that it is not perfect, but it works for me.\nI will continue its testing. This is my first patch to pgsql. If you will find some\nobvious mistakes, do not flame, just show the right way.\n\nIf you have further suggestion/ideas, do not hesistate to contact me.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------", "msg_date": "Sun, 11 Jun 2000 05:59:41 +0700", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "Patch for 'Not to stuff everything as files in a single directory,\n\thash dirs'" }, { "msg_contents": "Seems the whole large object per file is going away in 7.1. Can someone\nconfirm this?\n\n> Hello all,\n> \n> I did small patch for subj. I am sure that it is not perfect, but it works for me.\n> I will continue its testing. This is my first patch to pgsql. If you will find some\n> obvious mistakes, do not flame, just show the right way.\n> \n> If you have further suggestion/ideas, do not hesistate to contact me.\n> \n> -- \n> Sincerely Yours,\n> Denis Perchine\n> \n> ----------------------------------\n> E-Mail: [email protected]\n> HomePage: http://www.perchine.com/dyp/\n> FidoNet: 2:5000/120.5\n> ----------------------------------\n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jun 2000 11:29:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch for 'Not to stuff everything as files in a single\n\tdirectory, hash dirs'" }, { "msg_contents": "> I did small patch for subj. I am sure that it is not perfect, but it works for me.\n> I will continue its testing. This is my first patch to pgsql. If you will find some\n> obvious mistakes, do not flame, just show the right way.\n> \n> If you have further suggestion/ideas, do not hesistate to contact me.\n\nForget about this. Better patch posted to pgsql-patches.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Tue, 13 Jun 2000 00:47:21 +0700", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch for 'Not to stuff everything as files in a single\n directory,\n\thash dirs'" }, { "msg_contents": "Bruce Momjian wrote:\n> Seems the whole large object per file is going away in 7.1. Can someone\n> confirm this?\n\n Not the whole one in 7.1.\n\n The TOAST stuff will lower the need for large objects alot,\n but we already discovered the fact that it isn't a real\n answer to LARGE objects.\n\n First of all, the entire datum must be properly quoted to fit\n into a querystring. Therefore the client needs to have the\n original datum, the qouted copy, the querystring it built.\n Then the querystring is sent to the backend, parsed (where a\n CONST node is built from it), copied into a tuple to be split\n up into TOAST items.\n\n So on a central system, where client and DB are both running,\n we have 6 copies of the object in memory! Not that optimal.\n\n For 7.2 I'll work on real CLOB and BLOB data types. 
Requires\n some more thinking though.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Tue, 13 Jun 2000 01:06:53 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Patch for 'Not to stuff everything as files in a single\n\tdirectory, hash dirs''" }, { "msg_contents": "> Bruce Momjian wrote:\n> > Seems the whole large object per file is going away in 7.1. Can someone\n> > confirm this?\n> \n> Not the whole one in 7.1.\n> \n> The TOAST stuff will lower the need for large objects alot,\n> but we already discovered the fact that it isn't a real\n> answer to LARGE objects.\n> \n> First of all, the entire datum must be properly quoted to fit\n> into a querystring. Therefore the client needs to have the\n> original datum, the qouted copy, the querystring it built.\n> Then the querystring is sent to the backend, parsed (where a\n> CONST node is built from it), copied into a tuple to be split\n> up into TOAST items.\n> \n> So on a central system, where client and DB are both running,\n> we have 6 copies of the object in memory! Not that optimal.\n> \n> For 7.2 I'll work on real CLOB and BLOB data types. Requires\n> some more thinking though.\n\nI thought we would keep the existing large object interface, but allow\nstorage of large object data directly in fields using TOAST.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jun 2000 03:11:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch for 'Not to stuff everything as files in a single\n\tdirectory, hash dirs''" }, { "msg_contents": "My idea was to implement the large object API on top of TOAST.\n\n\n> Bruce Momjian wrote:\n> > Seems the whole large object per file is going away in 7.1. Can someone\n> > confirm this?\n> \n> Not the whole one in 7.1.\n> \n> The TOAST stuff will lower the need for large objects alot,\n> but we already discovered the fact that it isn't a real\n> answer to LARGE objects.\n> \n> First of all, the entire datum must be properly quoted to fit\n> into a querystring. Therefore the client needs to have the\n> original datum, the qouted copy, the querystring it built.\n> Then the querystring is sent to the backend, parsed (where a\n> CONST node is built from it), copied into a tuple to be split\n> up into TOAST items.\n> \n> So on a central system, where client and DB are both running,\n> we have 6 copies of the object in memory! Not that optimal.\n> \n> For 7.2 I'll work on real CLOB and BLOB data types. Requires\n> some more thinking though.\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #================================================== [email protected] #\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 2 Oct 2000 23:44:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch for 'Not to stuff everything as files in a single\n\tdirectory, hash dirs''" } ]
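As a point of reference for the storage discussion above, the two styles look like this at the SQL level. Only a sketch: the lo_* calls are the existing large-object interface (the server-side lo_import/lo_export used here read and write files on the server and need appropriate privileges), while the plain text column relies on TOAST to move oversized values out of the main row:

    -- Large objects: the table stores only an OID handle
    CREATE TABLE images (name text, image oid);
    INSERT INTO images VALUES ('logo', lo_import('/tmp/logo.png'));
    SELECT lo_export(image, '/tmp/logo-copy.png') FROM images WHERE name = 'logo';
    SELECT lo_unlink(image) FROM images WHERE name = 'logo';

    -- TOASTed column: the value travels with the row and is spilled to a
    -- toast table automatically when it grows large
    CREATE TABLE documents (name text, body text);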
[ { "msg_contents": "It has been pondered several times before but no one ever got to it: I\ndisabled the C++ build by default. It just creates too many problems for\npeople who don't care.\n\nSo now you only need C by default and other language interfaces can be\nenabled by --with-perl, --with-CXX, --with-python, --with-tcl. Call it\nconsistency.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 11 Jun 2000 13:46:57 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "C++ disabled by default" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> It has been pondered several times before but no one ever got to it: I\n> disabled the C++ build by default. It just creates too many problems for\n> people who don't care.\n\nYes. I agree.\n\n-- \nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n", "msg_date": "Tue, 13 Jun 2000 11:55:18 +0400", "msg_from": "Dmitry samersoff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: C++ disabled by default" } ]
[ { "msg_contents": "Thanks Tom,\n\nNo, it looks fine now.\n\nI did another \"cvs update\" yesterday and it builds and regression\ntests without a problem.\n\nThanks for your interest.\n\nBTW: I'm not getting enything from any mailing lists at present,\ndue to a problem at my ISP (Demon Internet)\n\nStrange, some stuff from individuals is getting through but nothing\nfrom any of the postgres/gimp/gimp-print mailing lists makes it?\n\nKeith.\n \nTom Lane <[email protected]>\n> \n> Keith Parks <[email protected]> writes:\n> > My guess is that the make variable $(INSTALL) is getting set\n> > to ../config/install-sh by configure. This path is then not\n> > appropriate when we move out of the \"src\" subdirectory.\n> \n> Keith, are you still seeing that? AFAICT, $(INSTALL) should\n> always be assigned an absolute path. I can't duplicate the\n> problem here, for sure.\n> \n\n", "msg_date": "Sun, 11 Jun 2000 18:41:18 +0100 (BST)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Install error current CVS " } ]
[ { "msg_contents": "Sorry to bother...\n\nJust a test to see if relay still works after system upgrade.\n--\nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Sun, 11 Jun 2000 22:01:22 +0200 (MET DST)", "msg_from": "Olivier PRENANT <[email protected]>", "msg_from_op": true, "msg_subject": "test/ignore" } ]
[ { "msg_contents": "You seem to be relying on automake's aclocal program to combine the\nconfig/*.m4 files into aclocal.m4. I can see no value in introducing\nthis additional package dependency, since as far as I can tell aclocal\nisn't doing anything for us that 'cat' couldn't do.\n\nI recommend removing config/*.m4 in favor of a single aclocal.m4 file.\nThis also gets rid of the need for at least one of the broken\ndependencies in the toplevel makefile.\n\nFor that matter, I don't see any really good reason for having\naclocal.m4 in the first place. If we were maintaining a bunch of\npackages that had some reason to share configure macros, aclocal.m4\nwould make sense. We are not, so there's no really good reason not to\njust keep all the configure code in configure.in. Introducing more\nfiles just creates more ways to screw up.\n\nI'd like this to get resolved PDQ. Regression tests are currently\nfailing for me because the int8-related configure tests are busted.\nI can't fix it unless I go and install automake, which I don't\nreally care to do unless there is a consensus that automake should\nbecome a required tool for Postgres developers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 11 Jun 2000 16:02:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Gripe: working on configure now requires Automake installed locally" }, { "msg_contents": "Tom Lane writes:\n\n> You seem to be relying on automake's aclocal program to combine the\n> config/*.m4 files into aclocal.m4. I can see no value in introducing\n> this additional package dependency, since as far as I can tell aclocal\n> isn't doing anything for us that 'cat' couldn't do.\n\nWe need it if we ever want to use libtool unless you want to include half\nof libtool into CVS.\n\n> there's no really good reason not to just keep all the configure code\n> in configure.in.\n\nIt's really the same as keeping all of the C source code in one file, you\ndon't do that either. IMHO, it's a matter of logical organization,\nmodularity etc. I find setups organized like this much more readable. Many\ntests in configure don't interface to config.cache properly;\nunfortunately, fixing that makes the tests longer. I don't think a 5000\nline file of shell and macro processing is really what you want.\n\n\n> Regression tests are currently failing for me because the int8-related\n> configure tests are busted.\n\nI already fixed something there.\n\n\nWell, I'm not proprietory on any of this, I'm just following the book. Let\nme know...\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 12 Jun 2000 00:44:44 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Gripe: working on configure now requires Automake installed\n\tlocally" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> You seem to be relying on automake's aclocal program to combine the\n>> config/*.m4 files into aclocal.m4. I can see no value in introducing\n>> this additional package dependency, since as far as I can tell aclocal\n>> isn't doing anything for us that 'cat' couldn't do.\n\n> We need it if we ever want to use libtool unless you want to include half\n> of libtool into CVS.\n\nHuh? 
libtool doesn't require automake; I've been using it quite\nsuccessfully for libjpeg without any automake.\n\nMy impression so far of automake is that it imposes a ton of mechanism\nand unpleasant restrictions (like, say, not being able to make INSTALL\nan absolute path, as any half-sane person would do) in return for darn\nlittle usefulness. I'm going to want to be convinced that it's a good\nidea for Postgres...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jun 2000 10:50:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Gripe: working on configure now requires Automake installed\n\tlocally" }, { "msg_contents": "On Mon, 12 Jun 2000, Tom Lane wrote:\n\n> Huh? libtool doesn't require automake; I've been using it quite\n> successfully for libjpeg without any automake.\n\nOkay, then I'll have to investigate more. I'll tell you what, I'll change\n`aclocal' to `cat' for the time being. But I'd sure hate to have all the\nmacro definitions in one file. As someone else once said, you separate\nwhat you test for from how you do the testing -- implementation hiding.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 13 Jun 2000 14:38:39 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Gripe: working on configure now requires Automake installed\n\tlocally" }, { "msg_contents": "On Tue, 13 Jun 2000, Peter Eisentraut wrote:\n\n> On Mon, 12 Jun 2000, Tom Lane wrote:\n> \n> > Huh? libtool doesn't require automake; I've been using it quite\n> > successfully for libjpeg without any automake.\n> \n> Okay, then I'll have to investigate more. I'll tell you what, I'll\n> change `aclocal' to `cat' for the time being. But I'd sure hate to\n> have all the macro definitions in one file. As someone else once said,\n> you separate what you test for from how you do the testing --\n> implementation hiding.\n\nOkay, just to put my two cents in here ... when I played with libtool a\nlittle while back, it was my impression that *without* using automake,\nusing libtool would be pretty hellish ...\n\n... but, shouldn't automake be a requirement on a developers machine if,\nand only if, they modify either the Makefile.am files *or*\nconfigure.in? Like, it should only be a requirement for a select few of\nthe developers?\n\n\n", "msg_date": "Thu, 15 Jun 2000 09:00:27 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Gripe: working on configure now requires Automake\n\tinstalled locally" }, { "msg_contents": "The Hermit Hacker writes:\n\n> Okay, just to put my two cents in here ... when I played with libtool\n> a little while back, it was my impression that *without* using\n> automake, using libtool would be pretty hellish ...\n\nIt will be interesting, no doutb.\n\n> ... but, shouldn't automake be a requirement on a developers machine if,\n> and only if, they modify either the Makefile.am files *or*\n> configure.in? Like, it should only be a requirement for a select few of\n> the developers?\n\nThe problem was that I was using the aclocal program, which comes with\nautomake and Tom didn't want to install that. 
In all truth, it doesn't do\n*much* more than cat, so we use that for now.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 15 Jun 2000 18:22:15 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Gripe: working on configure now requires Automake\n\tinstalled locally" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> ... but, shouldn't automake be a requirement on a developers machine if,\n>> and only if, they modify either the Makefile.am files *or*\n>> configure.in? Like, it should only be a requirement for a select few of\n>> the developers?\n\n> The problem was that I was using the aclocal program, which comes with\n> automake and Tom didn't want to install that.\n\nJust to be perfectly clear: I don't have a problem with installing\nautomake if we come to an agreement that we need to require it as\na build tool. I do have a problem with requirements creep happening\nwithout discussion/consensus. For now, I'd like to see if we can\navoid needing automake...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Jun 2000 19:40:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Gripe: working on configure now requires Automake installed\n\tlocally" } ]
[ { "msg_contents": "I asked for this too, but it was voted down. Not sure why.\n\n> Minor psql parser suggestion: consider semicolons on the end of a\n> back-slashed psql interactive command to be line terminators, consistent\n> with sql commands...example below.\n> \n> Regards,\n> Ed Loehr\n> \n> emsdb=# \\dt\n> List of relations\n> Name | Type | Owner \n> ---------------------------------+-------+----------\n> activity | table | postgres\n> ...\n> \n> emsdb=# \\d activity;\n> Did not find any relation named \"activity;\".\n> \n> emsdb=# \\d activity \n> Table \"activity\"\n> Attribute | Type | \n> Modifier \n> ---------------------------+-----------+---------------------------------------------------\n> id | integer | not null default\n> nextval('activity_id_seq'::text)\n> ref_number | integer | not null\n> contract_id | integer | not null\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 11 Jun 2000 22:58:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0 psql parser suggestion (; == terminator)" } ]
[ { "msg_contents": "This was applied.\n\n> On Sat, Mar 25, 2000 at 02:57:05AM +0000, Thomas Lockhart wrote:\n> > > Just a quick announcement that we've put beta3 up for download today ...\n> > \n> > Can people start testing this beta3 and reporting on regression test\n> > results for *all* platforms mentioned in the \"supported\" list at\n> \n> All tests but float8 (patch attached) pass on NetBSD-1.4U/i386. I think\n> NetBSD just says if x<DBL_MIN, x=0, so there is no underflow warning. This\n> is with this morning (Sunday)'s source - I assume that's the same as beta3.\n> While looking at float8 tests, I think\n> \n> float8-exp-three-digits.out\n> float8-fp-exception.out\n> \n> might be out of date - simple test: comment in float8.sql is \"over- and\n> underflow\", these files have the comment without the hyphen - maybe\n> something else changed too?\n> \n> Cheers,\n> \n> Patrick\n> \n> \n> Index: resultmap\n> ===================================================================\n> RCS file: /usr/local/cvsroot/pgsql/src/test/regress/resultmap,v\n> retrieving revision 1.14\n> diff -c -r1.14 resultmap\n> *** resultmap\t2000/03/26 02:35:01\t1.14\n> --- resultmap\t2000/03/26 12:33:49\n> ***************\n> *** 20,25 ****\n> --- 20,26 ----\n> float8/alpha-dec-osf=float8-fp-exception\n> float4/.*-qnx4=float4-exp-three-digits\n> float8/.*-qnx4=float8-exp-three-digits\n> + float8/.*-netbsd=float8-small-is-zero\n> geometry/hppa=geometry-positive-zeros\n> geometry/.*-netbsd=geometry-positive-zeros\n> geometry/.*-freebsd=geometry-positive-zeros\n> \n> \n> and you want to create a file expected/float8-small-is-zero.out created by\n> applying the following to float8.out (ie., cp float8.out someplace first!)\n> \n> \n> *** float8.out\tSun Mar 26 13:35:22 2000\n> --- float8-small-is-zero.out\tSun Mar 26 13:35:22 2000\n> ***************\n> *** 241,249 ****\n> INSERT INTO FLOAT8_TBL(f1) VALUES ('-10e400');\n> ERROR: Input '-10e400' is out of range for float8\n> INSERT INTO FLOAT8_TBL(f1) VALUES ('10e-400');\n> - ERROR: Input '10e-400' is out of range for float8\n> INSERT INTO FLOAT8_TBL(f1) VALUES ('-10e-400');\n> - ERROR: Input '-10e-400' is out of range for float8\n> -- maintain external table consistency across platforms\n> -- delete all values and reinsert well-behaved ones\n> DELETE FROM FLOAT8_TBL;\n> --- 241,247 ----\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 11 Jun 2000 23:00:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] float8 regression / platform report" } ]
[ { "msg_contents": "Hello all,\n\nlseek(4, 26394624, SEEK_SET) = 26394624\nread(4, \"@\\1\\210\\1\\0 \\0 \\234\\237\\304\\0008\\237\\304\\0\\324\\236\\304\"..., 8192) = 8192\nlseek(4, 26402816, SEEK_SET) = 26402816\nread(4, \"@\\1\\210\\1\\0 \\0 \\234\\237\\304\\0008\\237\\304\\0\\324\\236\\304\"..., 8192) = 8192\nlseek(4, 26411008, SEEK_SET) = 26411008\nread(4, \"@\\1\\210\\1\\0 \\0 \\234\\237\\304\\0008\\237\\304\\0\\324\\236\\304\"..., 8192) = 8192\nlseek(4, 26419200, SEEK_SET) = 26419200\nread(4, \"@\\1\\210\\1\\0 \\0 \\234\\237\\304\\0008\\237\\304\\0\\324\\236\\304\"..., 8192) = 8192\nlseek(4, 26427392, SEEK_SET) = 26427392\nread(4, \"@\\1\\210\\1\\0 \\0 \\234\\237\\304\\0008\\237\\304\\0\\324\\236\\304\"..., 8192) = 8192\nlseek(4, 26435584, SEEK_SET) = 26435584\nread(4, \"@\\1\\210\\1\\0 \\0 \\234\\237\\304\\0008\\237\\304\\0\\324\\236\\304\"..., 8192) = 8192\n\nI ran strace -c -p to look what postgres is doing during removing large objects...\nAnd I found out that most of the syscalls called are read & lseek.\nI will not speculate on the idea that if you would like to read lots of data it will be faster\nto do this in one read call. But calling lseek when you do not need it... It's a little bit too\nmuch.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Mon, 12 Jun 2000 12:05:27 +0700", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "Serial reads & lseek calls." } ]
[ { "msg_contents": "I've just seen this on FreshMeat:\nhttp://freshmeat.net/news/2000/06/12/960784940.html\n\nIt's a patch for sendmail to use postgresql as the backend for Sendmail's\nvirtual users tables.\n\nHas anyone tried this, or had any thoughts about it?\n\nI've got to reconfigure our sendmail host here and was looking at using the\nLDAP interface, but as I use postgresql a lot here, it may be an\nalternative.\n\nComments?\n\n--\nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council\n\n", "msg_date": "Mon, 12 Jun 2000 11:55:37 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Integrating Sendmail with PostgreSQL" } ]
[ { "msg_contents": "How about using ldap, and getting ldap to use PG.\n\nMikeA\n\n\n\n>> -----Original Message-----\n>> From: Peter Mount [mailto:[email protected]]\n>> Sent: 12 June 2000 11:56\n>> To: PostgreSQL Interfaces (E-mail)\n>> Subject: [HACKERS] Integrating Sendmail with PostgreSQL\n>> \n>> \n>> I've just seen this on FreshMeat:\n>> http://freshmeat.net/news/2000/06/12/960784940.html\n>> \n>> It's a patch for sendmail to use postgresql as the backend \n>> for Sendmail's\n>> virtual users tables.\n>> \n>> Has anyone tried this, or had any thoughts about it?\n>> \n>> I've got to reconfigure our sendmail host here and was \n>> looking at using the\n>> LDAP interface, but as I use postgresql a lot here, it may be an\n>> alternative.\n>> \n>> Comments?\n>> \n>> --\n>> Peter Mount\n>> Enterprise Support\n>> Maidstone Borough Council\n>> Any views stated are my own, and not those of Maidstone \n>> Borough Council\n>> \n\n\n\n\n\nRE: [HACKERS] Integrating Sendmail with PostgreSQL\n\n\nHow about using ldap, and getting ldap to use PG.\n\nMikeA\n\n\n\n>>   -----Original Message-----\n>>   From: Peter Mount [mailto:[email protected]]\n>>   Sent: 12 June 2000 11:56\n>>   To: PostgreSQL Interfaces (E-mail)\n>>   Subject: [HACKERS] Integrating Sendmail with PostgreSQL\n>>   \n>>   \n>>   I've just seen this on FreshMeat:\n>>   http://freshmeat.net/news/2000/06/12/960784940.html\n>>   \n>>   It's a patch for sendmail to use postgresql as the backend \n>>   for Sendmail's\n>>   virtual users tables.\n>>   \n>>   Has anyone tried this, or had any thoughts about it?\n>>   \n>>   I've got to reconfigure our sendmail host here and was \n>>   looking at using the\n>>   LDAP interface, but as I use postgresql a lot here, it may be an\n>>   alternative.\n>>   \n>>   Comments?\n>>   \n>>   --\n>>   Peter Mount\n>>   Enterprise Support\n>>   Maidstone Borough Council\n>>   Any views stated are my own, and not those of Maidstone \n>>   Borough Council\n>>", "msg_date": "Mon, 12 Jun 2000 14:19:31 +0100", "msg_from": "Michael Ansley <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Integrating Sendmail with PostgreSQL" } ]
[ { "msg_contents": "\n Hi,\n\n after ./configure running in odbs GNUmakefile is still @SET_MAKE@.\n./configure --with-odbc not exchange it.\n\n In the master ./configure is not AC_PROG_MAKE_SET ...\n with it all works correct.\n \n\n\t\t\t\t\t\tKarel \n\n\n/* ----------------\n * Karel Zak * [email protected] * http://home.zf.jcu.cz/~zakkr/\n * C, PostgreSQL, PHP, WWW, http://docs.linux.cz\n * ----------------\n */\n\n", "msg_date": "Mon, 12 Jun 2000 16:09:15 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "CVS: bug in odbc Makefiles" } ]
[ { "msg_contents": "\n\n Bruce add my pg_lodump to the contrib and now I see what happen in the \ncurrent contrib --- here it is a little dirty (mazy Makefiles..etc). \n\nWell I start fix some bugs in this tree and I have a question:\n\n How idea is for 'make install' in the contrib tree? For example\nthe 'findoidjoins' has directly set INSTALLDIR = /usr/local/pgsql.\n\t \nComments?\n\n\t\t\t\t\tKarel\n\n", "msg_date": "Mon, 12 Jun 2000 17:05:26 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "the contrib" }, { "msg_contents": "Karel Zak <[email protected]> writes:\n> Bruce add my pg_lodump to the contrib and now I see what happen in the \n> current contrib --- here it is a little dirty (mazy Makefiles..etc). \n> Well I start fix some bugs in this tree and I have a question:\n> How idea is for 'make install' in the contrib tree? For example\n> the 'findoidjoins' has directly set INSTALLDIR = /usr/local/pgsql.\n\nYes, contrib is pretty messy. It'd be great to clean it up to have\nuniform build/install conventions.\n\nAnother thing wrong with contrib is that there are a couple of\ndirectories that are obsolete because the features got installed\nin the main build (int8 for example). But they were never cleaned\nout of contrib. There isn't anyone really maintaining contrib...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jun 2000 11:58:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the contrib " }, { "msg_contents": "\nOn Mon, 12 Jun 2000, Tom Lane wrote:\n\n> Karel Zak <[email protected]> writes:\n> > Bruce add my pg_lodump to the contrib and now I see what happen in the \n> > current contrib --- here it is a little dirty (mazy Makefiles..etc). \n> > Well I start fix some bugs in this tree and I have a question:\n> > How idea is for 'make install' in the contrib tree? For example\n> > the 'findoidjoins' has directly set INSTALLDIR = /usr/local/pgsql.\n> \n> Yes, contrib is pretty messy. It'd be great to clean it up to have\n> uniform build/install conventions.\n\n if I will a little time, I try clean it. Now I fix pg_lodump \nand findoidjoins.\n\n\t\t\t\tKarel\n\n", "msg_date": "Mon, 12 Jun 2000 18:18:08 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the contrib " } ]
[ { "msg_contents": " Date: Monday, June 12, 2000 @ 12:05:25\nAuthor: momjian\n\nUpdate of /home/projects/pgsql/cvsroot/pgsql\n from hub.org:/home/projects/pgsql/tmp/cvs-serv37892/pgsql\n\nRemoved Files:\n\tMakefile \n\n----------------------------- Log Message -----------------------------\n\nRemove Makefile. Now generated by configure.\n\n", "msg_date": "Mon, 12 Jun 2000 12:05:26 -0400 (EDT)", "msg_from": "Bruce Momjian - CVS <momjian>", "msg_from_op": true, "msg_subject": "pgsql (Makefile)" }, { "msg_contents": "Bruce Momjian - CVS <[email protected]> writes:\n> Remove Makefile. Now generated by configure.\n\nBruce, what in the world are you doing? There was a perfectly good\nhand-generated Makefile in that directory.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jun 2000 12:18:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql (Makefile) " }, { "msg_contents": "> Bruce Momjian - CVS <[email protected]> writes:\n> > Remove Makefile. Now generated by configure.\n> \n> Bruce, what in the world are you doing? There was a perfectly good\n> hand-generated Makefile in that directory.\n\nSeems Peter now creates Makefile as part of configure. However, he does\nnot create on in the top level directory, which we all think is needed\ntoo, not just in /src.\n\nOK, now I am really confused about what Peter wants. I don't see\nMakefile being created by configure.\n\nI am re-adding pgsql/Makefile, and keeping src/Makefile. Peter, can you\ncomment?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jun 2000 12:31:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql (Makefile)" }, { "msg_contents": "I have put back the old pgsql/Makefile, and added the same one to /src.\n\n> Bruce Momjian - CVS <[email protected]> writes:\n> > Remove Makefile. Now generated by configure.\n> \n> Bruce, what in the world are you doing? There was a perfectly good\n> hand-generated Makefile in that directory.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jun 2000 12:38:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql (Makefile)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have put back the old pgsql/Makefile, and added the same one to /src.\n\nOK. I think your commit messages arrived here out-of-order ...\nI was confused about what was happening.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jun 2000 12:40:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql (Makefile) " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I have put back the old pgsql/Makefile, and added the same one to /src.\n> \n> OK. I think your commit messages arrived here out-of-order ...\n> I was confused about what was happening.\n\nSeems the problem is that src/Makefile disappeared, and I thought Peter\nwas saying he auto-generated them from configure. 
Actually, they are\njust beefed up.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jun 2000 12:41:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql (Makefile)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Bruce Momjian <[email protected]> writes:\n>>>> I have put back the old pgsql/Makefile, and added the same one to /src.\n>> \n>> OK. I think your commit messages arrived here out-of-order ...\n>> I was confused about what was happening.\n\n> Seems the problem is that src/Makefile disappeared, and I thought Peter\n> was saying he auto-generated them from configure. Actually, they are\n> just beefed up.\n\nYes, the top-level Makefile still has the same old purpose of catching\npeople who try to use non-GNU make. It's not auto-generated.\n\nI think Peter was in error to remove src/Makefile --- it will still have\nthe same purpose, but for people who remember the old build procedure of\nstarting in pgsql/src.\n\nLooks like things are good now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jun 2000 13:13:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql (Makefile) " }, { "msg_contents": "On Mon, 12 Jun 2000, Bruce Momjian wrote:\n\n> I am re-adding pgsql/Makefile, and keeping src/Makefile. Peter, can you\n> comment?\n\nThere should definitely be one in pgsql/Makefile and if you like for\n\"backward compatibility\" one in src/Makefile. The problem was that Bruce\napparently overwrote the new \"fancy\" version with the old one at one\npoint.\n\nThe one that's in there now is good.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Tue, 13 Jun 2000 15:10:38 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: pgsql (Makefile)" } ]
[ { "msg_contents": "Hello all,\n\nWhile digging the code I found out quite interesting comment in\nsrc/backend/access/heap/hio.c before function RelationPutHeapTupleAtEnd\n\n * Eventually, we should cache the number of blocks in a relation somewhere.\n * Until that time, this code will have to do an lseek to determine the number\n * of blocks in a relation.\n\nAs far as I can see there's field rd_nblocks in Relation.\n\nQuestion: is this field properly updated? Could it be used instead of RelationGetNumberOfBlocks\nwhich calls lseek.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Mon, 12 Jun 2000 23:28:23 +0700", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "Caching number of blocks in relation to avoi lseek." }, { "msg_contents": "> Hello all,\n> \n> While digging the code I found out quite interesting comment in\n> src/backend/access/heap/hio.c before function RelationPutHeapTupleAtEnd\n> \n> * Eventually, we should cache the number of blocks in a relation somewhere.\n> * Until that time, this code will have to do an lseek to determine the number\n> * of blocks in a relation.\n> \n> As far as I can see there's field rd_nblocks in Relation.\n> \n> Question: is this field properly updated? Could it be used instead of RelationGetNumberOfBlocks\n> which calls lseek.\n\nProbably should be kept up-to-date. Currently only vacuum sets it. It\nwould have to be kept up-to-date for every INSERT/UPDATE that adds a block.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jun 2000 14:05:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching number of blocks in relation to avoi lseek." }, { "msg_contents": "> > Question: is this field properly updated? Could it be used instead of RelationGetNumberOfBlocks\n> > which calls lseek.\n> \n> Probably should be kept up-to-date. Currently only vacuum sets it. It\n> would have to be kept up-to-date for every INSERT/UPDATE that adds a block.\n\nIf we can acomplish this it will be quite big speed improvement. I saw lots of such calls\nand strace show lots of lseek(,,END) calls. I would do this, but I do not know the code so\nwell to be sure I will walk through all places.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Tue, 13 Jun 2000 01:11:39 +0700", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Caching number of blocks in relation to avoi lseek." }, { "msg_contents": ">> * Eventually, we should cache the number of blocks in a relation somewhere.\n>> * Until that time, this code will have to do an lseek to determine the number\n>> * of blocks in a relation.\n>> \n>> As far as I can see there's field rd_nblocks in Relation.\n>> \n>> Question: is this field properly updated?\n\nNo. 
If it were that easy, this problem would have been fixed long ago.\nThe problem with the relcache field is that it's backend-local, so it's\nnot going to get updated when some other backend extends the relation.\n\nWe do use rd_nblocks to cache the last table length determined from\nlseek, but that can't be trusted in critical cases, like when we are\ntrying to add a block to the relation.\n\nThere has been some talk of keeping a small cache of current relation\nlengths in shared memory, but I'm dubious that that'd actually be a win.\nIn order to save an lseek (which ought to be pretty quick as kernel\ncalls go) we'd be talking about grabbing a spinlock, searching a shared\nhashtable, and releasing a spinlock. Spinlock contention on this\nheavily used datastructure might well be a problem. Now add the costs\nof updating that hashtable every time we extend a relation (or even just\nfail to find a relation in it). Not immediately obvious that this is\na net win overall, is it? If it were easier to do, or more obviously\na large performance gain, someone would likely have tried it by now ...\nbut there is lots of lower-hanging fruit to keep us busy.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jun 2000 16:56:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching number of blocks in relation to avoi lseek. " }, { "msg_contents": "> No. If it were that easy, this problem would have been fixed long ago.\n> The problem with the relcache field is that it's backend-local, so it's\n> not going to get updated when some other backend extends the relation.\n\nSorry. I missed this.\n \n> We do use rd_nblocks to cache the last table length determined from\n> lseek, but that can't be trusted in critical cases, like when we are\n> trying to add a block to the relation.\n> \n> There has been some talk of keeping a small cache of current relation\n> lengths in shared memory, but I'm dubious that that'd actually be a win.\n> In order to save an lseek (which ought to be pretty quick as kernel\n> calls go) we'd be talking about grabbing a spinlock, searching a shared\n> hashtable, and releasing a spinlock. Spinlock contention on this\n> heavily used datastructure might well be a problem. Now add the costs\n> of updating that hashtable every time we extend a relation (or even just\n> fail to find a relation in it). Not immediately obvious that this is\n> a net win overall, is it? If it were easier to do, or more obviously\n> a large performance gain, someone would likely have tried it by now ...\n> but there is lots of lower-hanging fruit to keep us busy.\n\nHard to say what is faster... Lots of testing is needed. You are right.\nAnd what about skipping lseek when continually read relation?\nIs it possible?\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Tue, 13 Jun 2000 04:09:33 +0700", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Caching number of blocks in relation to avoi lseek." }, { "msg_contents": "Denis Perchine <[email protected]> writes:\n> And what about skipping lseek when continually read relation?\n> Is it possible?\n\nIn a pure read scenario the way it's supposed to work is that an\nlseek(END) is done once at the start of each sequential scan, and we\nsave that value in rd_nblocks. 
Then we read rd_nblocks pages and stop.\nBy the time we finish the scan there might be more pages in the relation\n(added by other backends, or even by ourselves if it's an update query).\nBut those pages cannot contain any tuples that could be visible to the\ncurrent scan, so it's OK if we don't read them. However, we do need a\nnew lseek() --- or some other way to verify the right table length\n--- at least once per transaction start or CommandCounterIncrement.\nEither of those events could make new tuples visible to us.\n\nI think there may be some code paths that cause us to do more than just\none lseek(END) during scan startup, and if so it'd be worthwhile to try\nto get rid of the extra lseeks(). But you'd have to be careful that\nyou didn't remove any lseeks that are essential in some other paths.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jun 2000 17:34:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching number of blocks in relation to avoi lseek. " }, { "msg_contents": "On Tue, 13 Jun 2000, Tom Lane wrote:\n> Denis Perchine <[email protected]> writes:\n> > And what about skipping lseek when continually read relation?\n> > Is it possible?\n> \n> In a pure read scenario the way it's supposed to work is that an\n> lseek(END) is done once at the start of each sequential scan, and we\n> save that value in rd_nblocks. Then we read rd_nblocks pages and stop.\n> By the time we finish the scan there might be more pages in the relation\n> (added by other backends, or even by ourselves if it's an update query).\n> But those pages cannot contain any tuples that could be visible to the\n> current scan, so it's OK if we don't read them. However, we do need a\n> new lseek() --- or some other way to verify the right table length\n> --- at least once per transaction start or CommandCounterIncrement.\n> Either of those events could make new tuples visible to us.\n> \n> I think there may be some code paths that cause us to do more than just\n> one lseek(END) during scan startup, and if so it'd be worthwhile to try\n> to get rid of the extra lseeks(). But you'd have to be careful that\n> you didn't remove any lseeks that are essential in some other paths.\n\nNo... You did not get me. I am talking about completly different thing:\nI strace'ed postgres binary when doing queries and found out that it\ndo lseek after each read, and the difference in the position is 8096.\nIt means that we are in correct position anyway and do not need additional lseek.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Tue, 13 Jun 2000 07:52:45 +0700", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Caching number of blocks in relation to avoi lseek." }, { "msg_contents": "Denis Perchine <[email protected]> writes:\n> No... You did not get me. I am talking about completly different thing:\n> I strace'ed postgres binary when doing queries and found out that it\n> do lseek after each read, and the difference in the position is 8096.\n> It means that we are in correct position anyway and do not need additional lseek.\n\nOh. Hmm. 
Not sure if it's really worth the trouble, but you could try\nhaving fd.c keep track of the current seek position of VFDs when they\nare open as well as when they are closed, and optimize away the lseek\ncall in FileSeek if the position is already right. You'd have to think\ncarefully about what to do if a read or write fails, however --- where\nhas the kernel left its seek position in that case? Possibly this could\nbe dealt with by setting the internal position to \"unknown\" anytime\nwe're not perfectly sure where the kernel is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jun 2000 22:34:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching number of blocks in relation to avoi lseek. " }, { "msg_contents": "* Tom Lane <[email protected]> [000612 19:37] wrote:\n> Denis Perchine <[email protected]> writes:\n> > No... You did not get me. I am talking about completly different thing:\n> > I strace'ed postgres binary when doing queries and found out that it\n> > do lseek after each read, and the difference in the position is 8096.\n> > It means that we are in correct position anyway and do not need additional lseek.\n> \n> Oh. Hmm. Not sure if it's really worth the trouble, but you could try\n> having fd.c keep track of the current seek position of VFDs when they\n> are open as well as when they are closed, and optimize away the lseek\n> call in FileSeek if the position is already right. You'd have to think\n> carefully about what to do if a read or write fails, however --- where\n> has the kernel left its seek position in that case? Possibly this could\n> be dealt with by setting the internal position to \"unknown\" anytime\n> we're not perfectly sure where the kernel is.\n\nHave you thought of using pread/pwrite which are available on many\nmodern platforms:\n\n ssize_t\n pread(int d, void *buf, size_t nbytes, off_t offset)\n\n Pread() performs the same function, but reads from the specified\n position in the file without modifying the file pointer.\n\nI'm unsure how the postgresql system uses it's fds, however if they\nare really shared via dup() or across fork/exec they will share\nthe same offset across multiple instances of the fd. The only way\naround this behavior is to reopen the file in each process to get\na private offset.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Mon, 12 Jun 2000 20:01:48 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching number of blocks in relation to avoi lseek." }, { "msg_contents": "> Oh. Hmm. Not sure if it's really worth the trouble, but you could try\n> having fd.c keep track of the current seek position of VFDs when they\n> are open as well as when they are closed, and optimize away the lseek\n> call in FileSeek if the position is already right. You'd have to think\n> carefully about what to do if a read or write fails, however --- where\n> has the kernel left its seek position in that case? Possibly this could\n> be dealt with by setting the internal position to \"unknown\" anytime\n> we're not perfectly sure where the kernel is.\n\nIf read or write fails. Position will left the same. This situation is already tracked\nin File routines, but a little bit incorrectly.\n\nHere is the full patch for this. This patch reduce amount of lseek call ten times\nfor update statement and twenty times for select statement. 
I tested joined update\nand count(*) select for table with rows > 170000 and 10 indices.\nI think this is worth trying. Before lseek calls account for more than 5% of time.\nNow they are 0.89 and 0.15 respectively.\n\nDue to only one file modification patch should be applied in src/backend/storage/file/ dir.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------", "msg_date": "Tue, 13 Jun 2000 15:19:52 +0700", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "Patch for Re: [HACKERS] Caching number of blocks in relation to avoi\n\tlseek." }, { "msg_contents": "> If read or write fails. Position will left the same. This situation is already tracked\n> in File routines, but a little bit incorrectly.\n\nAfter a small survey of the Linux kernel code, I am not sure about it.\nThe new patch sets pos to unknown in case read/write fails, and does\nlseek again.\n \n> Here is the full patch for this. This patch reduce amount of lseek call ten times\n> for update statement and twenty times for select statement. I tested joined update\n> and count(*) select for table with rows > 170000 and 10 indices.\n> I think this is worth trying. Before lseek calls account for more than 5% of time.\n> Now they are 0.89 and 0.15 respectively.\n> \n> Due to only one file modification patch should be applied in src/backend/storage/file/ dir.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------", "msg_date": "Tue, 13 Jun 2000 15:37:58 +0700", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "Patch 0.2 for Re: [HACKERS] Caching number of blocks in relation to\n\tavoi lseek." }, { "msg_contents": "Yoo, hoo. No one complained about this patch, so in it goes. I bet\nthis will be a real speedup on some platforms. I know some OS's don't\ndo fseek() as fast as we would like.\n\nThanks.\n\n> > If read or write fails. Position will left the same. This situation is already tracked\n> > in File routines, but a little bit incorrectly.\n> \n> After a small survey of the Linux kernel code, I am not sure about it.\n> The new patch sets pos to unknown in case read/write fails, and does\n> lseek again.\n> \n> > Here is the full patch for this. This patch reduce amount of lseek call ten times\n> > for update statement and twenty times for select statement. I tested joined update\n> > and count(*) select for table with rows > 170000 and 10 indices.\n> > I think this is worth trying. Before lseek calls account for more than 5% of time.\n> > Now they are 0.89 and 0.15 respectively.\n> > \n> > Due to only one file modification patch should be applied in src/backend/storage/file/ dir.\n> \n> -- \n> Sincerely Yours,\n> Denis Perchine\n> \n> ----------------------------------\n> E-Mail: [email protected]\n> HomePage: http://www.perchine.com/dyp/\n> FidoNet: 2:5000/120.5\n> ----------------------------------\n\n[ Attachment, skipping... 
]\n\nIndex: fd.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/storage/file/fd.c,v\nretrieving revision 1.59\ndiff -c -b -w -r1.59 fd.c\n*** fd.c\t2000/06/02 15:57:24\t1.59\n--- fd.c\t2000/06/13 08:34:55\n***************\n*** 95,100 ****\n--- 95,102 ----\n \n #define FileIsNotOpen(file) (VfdCache[file].fd == VFD_CLOSED)\n \n+ #define FileUnknownPos (-1)\n+ \n typedef struct vfd\n {\n \tsigned short fd;\t\t\t/* current FD, or VFD_CLOSED if none */\n***************\n*** 790,795 ****\n--- 792,799 ----\n \treturnCode = read(VfdCache[file].fd, buffer, amount);\n \tif (returnCode > 0)\n \t\tVfdCache[file].seekPos += returnCode;\n+ \telse\n+ \t\tVfdCache[file].seekPos = FileUnknownPos;\n \n \treturn returnCode;\n }\n***************\n*** 806,816 ****\n \n \tFileAccess(file);\n \treturnCode = write(VfdCache[file].fd, buffer, amount);\n! \tif (returnCode > 0)\n \t\tVfdCache[file].seekPos += returnCode;\n- \n \t/* mark the file as needing fsync */\n \tVfdCache[file].fdstate |= FD_DIRTY;\n \n \treturn returnCode;\n }\n--- 810,821 ----\n \n \tFileAccess(file);\n \treturnCode = write(VfdCache[file].fd, buffer, amount);\n! \tif (returnCode > 0) {\n \t\tVfdCache[file].seekPos += returnCode;\n \t\t/* mark the file as needing fsync */\n \t\tVfdCache[file].fdstate |= FD_DIRTY;\n+ \t} else\n+ \t\tVfdCache[file].seekPos = FileUnknownPos;\n \n \treturn returnCode;\n }\n***************\n*** 840,849 ****\n \t\t\tdefault:\n \t\t\t\telog(ERROR, \"FileSeek: invalid whence: %d\", whence);\n \t\t\t\tbreak;\n- \t\t}\n \t}\n! \telse\n \t\tVfdCache[file].seekPos = lseek(VfdCache[file].fd, offset, whence);\n \treturn VfdCache[file].seekPos;\n }\n \n--- 845,870 ----\n \t\t\tdefault:\n \t\t\t\telog(ERROR, \"FileSeek: invalid whence: %d\", whence);\n \t\t\t\tbreak;\n \t\t}\n! \t} else\n! \t\tswitch (whence) {\n! \t\t\tcase SEEK_SET:\n! \t\t\t\tif (offset < 0)\n! \t\t\t\t\telog(ERROR, \"FileSeek: invalid offset: %ld\", offset);\n! \t\t\t\tif (VfdCache[file].seekPos != offset)\n! \t\t\t\t\tVfdCache[file].seekPos = lseek(VfdCache[file].fd, offset, whence);\n! \t\t\t\tbreak;\n! \t\t\tcase SEEK_CUR:\n! \t\t\t\tif ((offset != 0) || (VfdCache[file].seekPos == FileUnknownPos));\n \t\t\t\t\tVfdCache[file].seekPos = lseek(VfdCache[file].fd, offset, whence);\n+ \t\t\t\tbreak;\n+ \t\t\tcase SEEK_END:\n+ \t\t\t\tVfdCache[file].seekPos = lseek(VfdCache[file].fd, offset, whence);\n+ \t\t\t\tbreak;\n+ \t\t\tdefault:\n+ \t\t\t\telog(ERROR, \"FileSeek: invalid whence: %d\", whence);\n+ \t\t\t\tbreak;\n+ \t\t}\n \treturn VfdCache[file].seekPos;\n }\n \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jun 2000 23:19:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch 0.2 for Re: [HACKERS] Caching number of blocks in\n\trelation to avoi lseek." } ]
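The diff above carries the whole change; the underlying idea fits in a few lines. Below is a simplified, self-contained restatement, not the real fd.c/Vfd code --- the TrackedFile type and the tf_* names are invented for illustration. As in the patch, a failed read leaves the cached position unknown, so the next seek goes back to the kernel instead of trusting a stale guess.

#include <unistd.h>
#include <sys/types.h>

#define POS_UNKNOWN	((off_t) -1)

typedef struct TrackedFile
{
	int		fd;
	off_t	pos;				/* last offset we believe the kernel is at */
} TrackedFile;

static off_t
tf_seek(TrackedFile *f, off_t offset)
{
	if (f->pos == offset)
		return offset;			/* already positioned: no syscall needed */
	f->pos = lseek(f->fd, offset, SEEK_SET);
	return f->pos;
}

static ssize_t
tf_read(TrackedFile *f, void *buf, size_t len)
{
	ssize_t	n = read(f->fd, buf, len);

	if (n > 0)
		f->pos += n;			/* the kernel offset advanced by n bytes */
	else
		f->pos = POS_UNKNOWN;	/* error or EOF: stop trusting our guess */
	return n;
}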
[ { "msg_contents": "is there the possibility to do a full text search on a series of different \ntables on different (and more than one per table) fields.\n\nI've taken a look to contrib/fullsearch but don't seems to scale ...\n\nwhat (trick) can i'use\n\n\n\nvalter mazzola, italy\n________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com\n\n", "msg_date": "Mon, 12 Jun 2000 18:31:03 GMT", "msg_from": "\"valter m\" <[email protected]>", "msg_from_op": true, "msg_subject": "full text indexing & search" }, { "msg_contents": "Have you read the README. Cluster is the trick, though different fields\nmay be slower.\n\n\n> is there the possibility to do a full text search on a series of different \n> tables on different (and more than one per table) fields.\n> \n> I've taken a look to contrib/fullsearch but don't seems to scale ...\n> \n> what (trick) can i'use\n> \n> \n> \n> valter mazzola, italy\n> ________________________________________________________________________\n> Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jun 2000 15:48:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: full text indexing & search" } ]
[ { "msg_contents": "> Ed Loehr <[email protected]> writes:\n> >> 20000407.15:56:50.404 [8388] query: INSERT INTO time_report ( person_id,\n> >> ... ) SELECT old.person_id, ... newAct.activity_id, ... FROM time_report\n> >> old, activity oldAct, activity newAct WHERE oldAct.contract_id = $1 AND\n> >> newAct.contract_id = $2 AND newAct.ref_number = oldAct.ref_number\n> >> 20000407.15:56:50.404 [8388] ERROR: CURRENT used in non-rule query\n> \n> > I see my error: \"old\" is reserved for triggered functions...the error\n> > message about \"CURRENT\" is a bit misleading, even so.\n> \n> Oh ... that's a hoot. The code thinks that the keyword is CURRENT.\n> Someone apparently changed their minds at some point about the spelling\n> of the keyword, and implemented the change by modifying the entry in\n> keywords.c and nowhere else!\n> \n> \t{\"of\", OF},\n> \t{\"offset\", OFFSET},\n> \t{\"oids\", OIDS},\n> \t{\"old\", CURRENT}, <====================== blech\n> \t{\"on\", ON},\n> \t{\"only\", ONLY},\n> \n> This leads to such interesting misbehaviors as\n> \n> regression=# select 'old' as old, 'older' as older;\n> current | older\n> ---------+-------\n> old | older\n> (1 row)\n> \n> (what was that column label again?)\n> \n> This isn't a showstopper kind of bug, but it probably oughta be fixed.\n\nOK, I have made the required internal changes. However, to enable older\nrules to be loaded, I had to map CURRENT to OLD, so we still have this\nweird behavior, it is just now on CURRENT instead of OLD. We can remove\nthat hack in a few releases. Comment has been added to keyword.c just\nabove the entry.\n\nThe error message will also print correctly now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jun 2000 16:04:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0b3 pl/pgsql, ERROR: CURRENT used in non-rule query" } ]
[ { "msg_contents": "> platforms where it is not in libc, the code does different things on\n> different platforms - on some it just copies new title over argv array.\n> Some systems has pstat() system call - again, sendmail's code uses it.\n\nWhat OS's has pstat()? Can someone supply a patch for those?\n\n\n> The overhead is platform-dependent, and for many platform is actually low.\n> The biggest overhead is for SCO, as far as I understand the code; may be I\n> missed something, but I hope I don't.\n> \n> > > Bruce, can you give me some hints where in the code you change process\n> > > title? I hope there are no more than two or three places to change.\n> > \n> > Massimo generalized the process status code in\n> > include/utils/ps_status.h. See that. It is the central place to change\n> > it.\n> \n> Thanks, I'll look into it.\n> \n> > > What is our opinion on licensing? At first glance the license looks\n> > > pretty compatible:\n> > \n> > Yes, except that the Eric P. Allman line basically says nothing about\n> > his copyright. Is his the same as BSD? Without stating that, it is a\n> > very limited copyright.\n> \n> I have never understand copyright/licensing issues (count I live in\n> post-totalitarian country where there was One copyright owner - The\n> Government, and One license...)\n> \n> Oleg.\n> ---- \n> Oleg Broytmann http://members.xoom.com/phd2.1/ [email protected]\n> Programmers don't die, they just GOSUB without RETURN.\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jun 2000 16:04:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: setproctitle" }, { "msg_contents": "Hi!\n\nOn Mon, 12 Jun 2000, Bruce Momjian wrote:\n> What OS's has pstat()?\n\n My understanding of sendmail sources answers that pstat() is used only\non HP-UX...\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2.1/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Tue, 13 Jun 2000 08:29:41 +0000 (GMT)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setproctitle" }, { "msg_contents": "> Hi!\n> \n> On Mon, 12 Jun 2000, Bruce Momjian wrote:\n> > What OS's has pstat()?\n> \n> My understanding of sendmail sources answers that pstat() is used only\n> on HP-UX...\n> \n\nMaybe Tom Lane can comment on wither it also has setproctitle().\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jun 2000 04:53:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: setproctitle" }, { "msg_contents": "On Tue, 13 Jun 2000, Bruce Momjian wrote:\n\n> > > What OS's has pstat()?\n> > \n> > My understanding of sendmail sources answers that pstat() is used only\n> > on HP-UX...\n> > \n> \n> Maybe Tom Lane can comment on wither it also has setproctitle().\n\nThe current ps display code works fine on HP-UX, Tom verified it. 
(And it\ndoes use pstat().)\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 13 Jun 2000 14:49:10 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: setproctitle" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> My understanding of sendmail sources answers that pstat() is used only\n>> on HP-UX...\n\n> Maybe Tom Lane can comment on wither it also has setproctitle().\n\nNo, it doesn't (at least not in 10.20).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jun 2000 10:36:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: setproctitle " }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> On Tue, 13 Jun 2000, Bruce Momjian wrote:\n> \n> > > > What OS's has pstat()?\n> > > \n> > > My understanding of sendmail sources answers that pstat() is used only\n> > > on HP-UX...\n> > > \n> > \n> > Maybe Tom Lane can comment on wither it also has setproctitle().\n> \n> The current ps display code works fine on HP-UX, Tom verified it. (And it\n> does use pstat().)\n\nOh. Great.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jun 2000 22:26:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: setproctitle" }, { "msg_contents": "> Hi!\n> \n> On Mon, 12 Jun 2000, Bruce Momjian wrote:\n> > What OS's has pstat()?\n> \n> My understanding of sendmail sources answers that pstat() is used only\n> on HP-UX...\n> \n\nTom Lane, does the new 7.1 ps status code work under HP-UX?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jun 2000 23:26:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: setproctitle" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>>>> What OS's has pstat()?\n>> \n>> My understanding of sendmail sources answers that pstat() is used only\n>> on HP-UX...\n\n> Tom Lane, does the new 7.1 ps status code work under HP-UX?\n\nYup. I tested Peter's code for him before he checked it in --- it\nworks, at least on 10.20.\n\nThe pstat man page says pstat was developed by HP, so I guess it\nprobably is HPUX only :-(. Wish HP were a little more committed to\nfollowing existing standards when there's no reason for a new one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jun 2000 23:36:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: setproctitle " } ]
[ { "msg_contents": "\nI thought Bruce fixed this.....\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-------- Original Message --------\nSubject: PostgreSQL RPMS...\nDate: Mon, 12 Jun 2000 14:27 -0600 (MDT)\nFrom: Ronald Patterson <[email protected]>\nOrganization: MCI WorlsCom\nTo: [email protected]\n\nHi Lamar,\n\nI assume that you may still be doing the RPMS's for PostgreSQL.\nJust a note on the latest 7.0.2-1 RPMS release. The binaries from\nthe RPMS are reporting that they are version 7.0.1, instead of\n7.0.2 for the release. Not major and does not appear to effect\nthe running but it is just a nit.\n\nCurrently running these on a Linux-Mandrake 7.0.\n\nThanks for the support,\nRon\n===============================================================\nRon Patterson | MCI WorldCom (warehouseMCI)\[email protected] | 5775 Mark Dabling Blvd.\nAOL/IM: RonPDude | Dept. 1350/786\n719-535-5727 | Colorado Springs, CO 80919\nFax: 719-535-6164 |\n", "msg_date": "Mon, 12 Jun 2000 16:47:01 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: PostgreSQL RPMS...]" }, { "msg_contents": "No idea. I just brand the files.\n\n> \n> I thought Bruce fixed this.....\n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n> -------- Original Message --------\n> Subject: PostgreSQL RPMS...\n> Date: Mon, 12 Jun 2000 14:27 -0600 (MDT)\n> From: Ronald Patterson <[email protected]>\n> Organization: MCI WorlsCom\n> To: [email protected]\n> \n> Hi Lamar,\n> \n> I assume that you may still be doing the RPMS's for PostgreSQL.\n> Just a note on the latest 7.0.2-1 RPMS release. The binaries from\n> the RPMS are reporting that they are version 7.0.1, instead of\n> 7.0.2 for the release. Not major and does not appear to effect\n> the running but it is just a nit.\n> \n> Currently running these on a Linux-Mandrake 7.0.\n> \n> Thanks for the support,\n> Ron\n> ===============================================================\n> Ron Patterson | MCI WorldCom (warehouseMCI)\n> [email protected] | 5775 Mark Dabling Blvd.\n> AOL/IM: RonPDude | Dept. 1350/786\n> 719-535-5727 | Colorado Springs, CO 80919\n> Fax: 719-535-6164 |\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jun 2000 17:04:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Fwd: PostgreSQL RPMS...]" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> No idea. I just brand the files.\n----\n> > I assume that you may still be doing the RPMS's for PostgreSQL.\n> > Just a note on the latest 7.0.2-1 RPMS release. The binaries from\n> > the RPMS are reporting that they are version 7.0.1, instead of\n\nIf I run psql -V on 7.0.2, it reports that it is 7.0.1.\n\nOh well.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 12 Jun 2000 17:09:45 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Fwd: PostgreSQL RPMS...]" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > No idea. I just brand the files.\n> ----\n> > > I assume that you may still be doing the RPMS's for PostgreSQL.\n> > > Just a note on the latest 7.0.2-1 RPMS release. The binaries from\n> > > the RPMS are reporting that they are version 7.0.1, instead of\n> \n> If I run psql -V on 7.0.2, it reports that it is 7.0.1.\n> \n> Oh well.\n> \n\nStrange. 
If you look in include/version.h.in, it constructs\nPG_VERSION_STR and I see 7.0.2 in there. My guess is that somehow the\ntarball being grabbed has 7.0.1 in include/version.h.in, even though CVS\nhas 7.0.2.\n\nCan you check on that?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jun 2000 17:24:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Fwd: PostgreSQL RPMS...]" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian wrote:\n> > >\n> > > No idea. I just brand the files.\n> > ----\n> > > > I assume that you may still be doing the RPMS's for PostgreSQL.\n> > > > Just a note on the latest 7.0.2-1 RPMS release. The binaries from\n> > > > the RPMS are reporting that they are version 7.0.1, instead of\n> >\n> > If I run psql -V on 7.0.2, it reports that it is 7.0.1.\n> >\n> > Oh well.\n> >\n> \n> Strange. If you look in include/version.h.in, it constructs\n> PG_VERSION_STR and I see 7.0.2 in there. My guess is that somehow the\n> tarball being grabbed has 7.0.1 in include/version.h.in, even though CVS\n> has 7.0.2.\n> \n> Can you check on that?\n\nThe tarball I had (which was the one Marc was _going_ to release) had\n7.0.1; the current 7.0.2 tarball has 7.0.2. I'll fix the RPM's with the\nright tarball -- although, I built them _after_ the announcement. Oh\nwell. Time for a 7.0.2-2, I guess. \n\nSo, the currently available tarball has the right string; the one I have\nbeen distributing in the RPM's thinks its 7.0.1, except for the\npackaging (which was the only real change in 7.0.2....:-))...\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 12 Jun 2000 17:36:19 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Fwd: PostgreSQL RPMS...]" } ]
[ { "msg_contents": "Hi everybody,\n\nI'm the one seeking PostgreSQL not to fall in *ABORT STATE* after an erroneous\ncommand within a transaction. Peter Eisentrout, suggested a couple of months\nago, to hack the backend in a very simple way (deleting the line which aborted\nafter an error). This worked well for me in 6.5.3 as far as I could test. But in\n7.0, this doesn't perform well, as some other folk already reported. I'm willing\nto do as much work as I can towards getting this behaviour change as soon as\npossible. I would like you all to help me starting a discussion, where I could\nfind the central issues which will allow to get this done.\n\nSpecifically, the problem which generates deleting the above mentioned line, is\ncorrupted shared memory, and backend chrash and restart.\n\nAny suggestions?\n\nRegards,\nHaroldo Stenger.\n", "msg_date": "Mon, 12 Jun 2000 18:12:43 -0300", "msg_from": "Haroldo Stenger <[email protected]>", "msg_from_op": true, "msg_subject": "Revisited: Does error within transaction imply restarting it?" } ]
[ { "msg_contents": "This has been fixed in 7.0.\n\n\n> \"Oliver Elphick\" <[email protected]> writes:\n> > The following commands invariably crash the backend:\n> > morejunk=# create table test_t(id int4);\n> > CREATE\n> > morejunk=# create table test_d() inherits(test_t);\n> > CREATE\n> > morejunk=# create view test_v as select * from test_t*;\n> > CREATE 135069 1\n> > morejunk=# select * from test_v;\n> > pqReadData() -- backend closed the channel unexpectedly.\n> > This probably means the backend terminated abnormally\n> > before or while processing the request.\n> > The connection to the server was lost. Attempting reset: Failed.\n> \n> The problem appears to be in this routine in optimizer/prep/prepunion.c:\n> \n> static RangeTblEntry *\n> new_rangetable_entry(Oid new_relid, RangeTblEntry *old_entry)\n> {\n> RangeTblEntry *new_entry = copyObject(old_entry);\n> \n> /* ??? someone tell me what the following is doing! - ay 11/94 */\n> if (!strcmp(new_entry->eref->relname, \"*CURRENT*\") ||\n> !strcmp(new_entry->eref->relname, \"*NEW*\"))\n> new_entry->ref->relname = get_rel_name(new_relid);\n> else\n> new_entry->relname = get_rel_name(new_relid);\n> \n> new_entry->relid = new_relid;\n> return new_entry;\n> }\n> \n> It bombs on a RangeTblEntry that's been inserted by rule expansion,\n> because the entry's eref is NULL. The reason it's NULL is that we\n> deliberately decided not to save eref data in stored rule strings.\n> So RTEs coming from rule expansion have NULL eref fields.\n> \n> This routine is the only one outside the parser that makes use of\n> eref as opposed to ref in RTEs. I am thinking we can fix the problem\n> by making it look at ref->relname instead of eref->relname. But I'd\n> like to get Thomas to look at the issue, since according to the CVS\n> logs he was the one who changed it from ref->relname to eref->relname\n> in the first place. Thomas, is there a semantic difference between\n> ref->relname and eref->relname?\n> \n> Wait a minute ... now that I look at it, it's arguable that the entire\n> chunk of code is misguided in the first place. What we're doing is\n> making an RTE point at some inheritance child of a table that was named\n> (with *) in the original query. It seems to me that we should leave the\n> ref name alone and substitute for the true relation name regardless of\n> whether the ref name is *CURRENT* or not. That is, if you write\n> \t... FROM foo* ref\n> then the reference name ref applies to all of foo's children as well as\n> foo. I don't see why it would be different inside a view. In other\n> words the code probably should just be\n> \n> new_entry->relname = get_rel_name(new_relid);\n> new_entry->relid = new_relid;\n> return new_entry;\n> \n> Jan, Thomas, any comments?\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jun 2000 18:40:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Rel 7.0beta5: view on table* crashes backend" } ]
[ { "msg_contents": "Found a problem in the 7.0.2-1 RPM's. Well, two problems. One was that\nthe spec file as written couldn't be %cleaned as a non-root user. That\nwas fixed by changing the chmod 555 to chmod 755.\n\nThe other problem was a slightly old tarball that reported itself to be\n7.0.1 instead of 7.0.2. A new tarball and a rebuild now brings you\n7.0.2-2.\n\nUploaded to incoming.redhat.com, should be on contrib in a day or two.\n\nCanonical distribution at\nftp://ftp.postgresql.org/pub/binary/v7.0.2/redhat-RPM\n\nWe are adding some other binaries as they become available, PPC binaries\nof 7.0.2-1 are available NOW in the PPC subdir of RPMS in the above\ndirectory. I expect Murray will build new ones soon.... :-)\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 12 Jun 2000 19:36:29 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 7.0.2-2 RPMset available." } ]
[ { "msg_contents": "Can anyone comment on this work? Karel?\n\n> \n> I start detail study of PG's memory management (because, I want remove\n> prepared query-cache to shmem (more is in my old discussion with Jan)).\n> \n> I see current code in the aset.c and I found small non-effective memory \n> usage.\n> \n> \n> Description:\n> \n> The postgresql use blocks for allocation. These blocks are inward\n> organized/split via chunks. \n> \n> If a palloc() wants memory:\n> 1) try use some chunk in a freelist of chunks\n> 2) try use free space in an actual block\n> 3) if wanted space is large - allocate specific one-block-for-one-chunk\n> 4) if previous options are not possible allocate new block\n> \n> A problem:\n> \n> - if use option 4) and already exist (old) block (but space in this block \n> is less than wanted space) current algorithm _skip_ and not _use_ this small\n> space in old block. For a detail see the 'else' on line 327 in aset.c.\n> \n> I test it and average is 8-10b per a block (but max is 1000b) - large is \n> this space if a palloc() wants bigger space. \n> \n> A solution:\n> \n> Create from this non-used residual space chunk and remove it into free \n> chunk list. \n> \n> \n> Comments?\n> \n> \t\t\t\t\t\tKarel\n> \n> \n> -----> A tested patch (hmm, we are freeze, possible for 7.0.?):\n> \n> *** aset.orig.c\tThu Apr 13 18:33:45 2000\n> --- aset.c\tThu Apr 20 18:45:50 2000\n> ***************\n> *** 323,326 ****\n> --- 323,346 ----\n> \t\telse\n> \t\t{\n> + \t\t\tint \toldfree = set->blocks->endptr - set->blocks->freeptr;\n> + \t\t\n> + \t\t\t/*\n> + \t\t\t * Try create from residual space in block free chunk\n> + \t\t\t */\n> + \t\t\tif (oldfree > MAXALIGN(1) + ALLOC_CHUNKHDRSZ) {\n> + \t\t\t\t\n> + \t\t\t\tint x_fidx = AllocSetFreeIndex(oldfree - ALLOC_CHUNKHDRSZ );\n> + \t\t\t\t\n> + \t\t\t\tchunk = (AllocChunk) (set->blocks->freeptr);\n> + \t\t\t\tchunk->size = oldfree - ALLOC_CHUNKHDRSZ;\n> + \t\t\t\t\n> + \t\t\t\t/* put chunk into freelist */\n> + \t\t\t\tchunk->aset = (void *) set->freelist[x_fidx];\n> + \t\t\t\tset->freelist[x_fidx] = chunk;\n> + \t\t\t\t\n> + \t\t\t\t/* unset free space in block */ \n> + \t\t\t\tset->blocks->freeptr = set->blocks->endptr;\n> + \t\t\t}\n> + \t\t\n> \t\t\t/* Get size of prior block */\n> \t\t\tblksize = set->blocks->endptr - ((char *) set->blocks);\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jun 2000 20:07:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: memory management suggestion" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Can anyone comment on this work? Karel?\n\nI thought it was a good idea, except for the one possible bug about\nwhether small wasted chunks go into the right freelist or not.\n\nI think Karel submitted an updated version of the patch later, no?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jun 2000 22:41:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memory management suggestion " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Can anyone comment on this work? 
Karel?\n> \n> I thought it was a good idea, except for the one possible bug about\n> whether small wasted chunks go into the right freelist or not.\n> \n> I think Karel submitted an updated version of the patch later, no?\n\nI see a patch, and your comment on it, but nothing after that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jun 2000 03:18:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: memory management suggestion" }, { "msg_contents": "\n> Can anyone comment on this work? Karel?\n\n\n Yes. Tom & Jan already comment it. I prepare it for 7.1 with\nTom's advices.\n\n\t\t\t\t\tKarel\n\n", "msg_date": "Tue, 13 Jun 2000 10:09:20 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memory management suggestion" }, { "msg_contents": "> \n> > Can anyone comment on this work? Karel?\n> \n> \n> Yes. Tom & Jan already comment it. I prepare it for 7.1 with\n> Tom's advices.\n\nIs it already applied to 7.1?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jun 2000 04:09:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: memory management suggestion" }, { "msg_contents": "\nOn Tue, 13 Jun 2000, Bruce Momjian wrote:\n\n> > \n> > > Can anyone comment on this work? Karel?\n> > \n> > \n> > Yes. Tom & Jan already comment it. I prepare it for 7.1 with\n> > Tom's advices.\n> \n> Is it already applied to 7.1?\n\n No. I said it bad, I want say \"I *will* prepare...\" :-)\n\n Well, I write it next week. Now I work on contrib/pg_lodump and the\nothers things...\n\n Bruce, (it is probably already discussed, but refresh me..) for what time\nis planned 7.1?\n\n\t\t\t\t\t\t\tKarel\n\n\n", "msg_date": "Tue, 13 Jun 2000 10:49:34 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memory management suggestion" }, { "msg_contents": "> \n> On Tue, 13 Jun 2000, Bruce Momjian wrote:\n> \n> > > \n> > > > Can anyone comment on this work? Karel?\n> > > \n> > > \n> > > Yes. Tom & Jan already comment it. I prepare it for 7.1 with\n> > > Tom's advices.\n> > \n> > Is it already applied to 7.1?\n> \n> No. I said it bad, I want say \"I *will* prepare...\" :-)\n> \n> Well, I write it next week. Now I work on contrib/pg_lodump and the\n> others things...\n\nOh, OK. Just checking.\n\n> \n> Bruce, (it is probably already discussed, but refresh me..) for what time\n> is planned 7.1?\n\nAugust.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jun 2000 05:01:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: memory management suggestion" }, { "msg_contents": "Where are we on this?\n\n\n> \n> I start detail study of PG's memory management (because, I want remove\n> prepared query-cache to shmem (more is in my old discussion with Jan)).\n> \n> I see current code in the aset.c and I found small non-effective memory \n> usage.\n> \n> \n> Description:\n> \n> The postgresql use blocks for allocation. These blocks are inward\n> organized/split via chunks. \n> \n> If a palloc() wants memory:\n> 1) try use some chunk in a freelist of chunks\n> 2) try use free space in an actual block\n> 3) if wanted space is large - allocate specific one-block-for-one-chunk\n> 4) if previous options are not possible allocate new block\n> \n> A problem:\n> \n> - if use option 4) and already exist (old) block (but space in this block \n> is less than wanted space) current algorithm _skip_ and not _use_ this small\n> space in old block. For a detail see the 'else' on line 327 in aset.c.\n> \n> I test it and average is 8-10b per a block (but max is 1000b) - large is \n> this space if a palloc() wants bigger space. \n> \n> A solution:\n> \n> Create from this non-used residual space chunk and remove it into free \n> chunk list. \n> \n> \n> Comments?\n> \n> \t\t\t\t\t\tKarel\n> \n> \n> -----> A tested patch (hmm, we are freeze, possible for 7.0.?):\n> \n> *** aset.orig.c\tThu Apr 13 18:33:45 2000\n> --- aset.c\tThu Apr 20 18:45:50 2000\n> ***************\n> *** 323,326 ****\n> --- 323,346 ----\n> \t\telse\n> \t\t{\n> + \t\t\tint \toldfree = set->blocks->endptr - set->blocks->freeptr;\n> + \t\t\n> + \t\t\t/*\n> + \t\t\t * Try create from residual space in block free chunk\n> + \t\t\t */\n> + \t\t\tif (oldfree > MAXALIGN(1) + ALLOC_CHUNKHDRSZ) {\n> + \t\t\t\t\n> + \t\t\t\tint x_fidx = AllocSetFreeIndex(oldfree - ALLOC_CHUNKHDRSZ );\n> + \t\t\t\t\n> + \t\t\t\tchunk = (AllocChunk) (set->blocks->freeptr);\n> + \t\t\t\tchunk->size = oldfree - ALLOC_CHUNKHDRSZ;\n> + \t\t\t\t\n> + \t\t\t\t/* put chunk into freelist */\n> + \t\t\t\tchunk->aset = (void *) set->freelist[x_fidx];\n> + \t\t\t\tset->freelist[x_fidx] = chunk;\n> + \t\t\t\t\n> + \t\t\t\t/* unset free space in block */ \n> + \t\t\t\tset->blocks->freeptr = set->blocks->endptr;\n> + \t\t\t}\n> + \t\t\n> \t\t\t/* Get size of prior block */\n> \t\t\tblksize = set->blocks->endptr - ((char *) set->blocks);\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 29 Sep 2000 22:47:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: memory management suggestion" }, { "msg_contents": "\nOn Fri, 29 Sep 2000, Bruce Momjian wrote:\n\n> Where are we on this?\n> \n> > Create from this non-used residual space chunk and remove it into free \n> > chunk list. \n\n\n This is really old story. I hope that Tom think of this and has it's \nin his care. A problem is that standard day of this planet has \n24 hours for all people (incl. 
Tom :-), right?\n\n\t\t\t\t\tKarel\n\n", "msg_date": "Sun, 1 Oct 2000 10:13:12 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memory management suggestion" }, { "msg_contents": "Karel Zak <[email protected]> writes:\n> This is really old story. I hope that Tom think of this and has it's \n> in his care. A problem is that standard day of this planet has \n> 24 hours for all people (incl. Tom :-), right?\n\nYup. I've got one more area to wrap up in the new-features business\n(I really want to make UNION/INTERSECT/EXCEPT work with subqueries in\nFROM) and then it's back to mopup on memory management etc.\n\nFortunately, since Vadim requested a postponement of the beta schedule,\nthere's still time ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 01 Oct 2000 11:19:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memory management suggestion " }, { "msg_contents": "Tom, guess I can delete this email now? :-)\n\n> \n> I start detail study of PG's memory management (because, I want remove\n> prepared query-cache to shmem (more is in my old discussion with Jan)).\n> \n> I see current code in the aset.c and I found small non-effective memory \n> usage.\n> \n> \n> Description:\n> \n> The postgresql use blocks for allocation. These blocks are inward\n> organized/split via chunks. \n> \n> If a palloc() wants memory:\n> 1) try use some chunk in a freelist of chunks\n> 2) try use free space in an actual block\n> 3) if wanted space is large - allocate specific one-block-for-one-chunk\n> 4) if previous options are not possible allocate new block\n> \n> A problem:\n> \n> - if use option 4) and already exist (old) block (but space in this block \n> is less than wanted space) current algorithm _skip_ and not _use_ this small\n> space in old block. For a detail see the 'else' on line 327 in aset.c.\n> \n> I test it and average is 8-10b per a block (but max is 1000b) - large is \n> this space if a palloc() wants bigger space. \n> \n> A solution:\n> \n> Create from this non-used residual space chunk and remove it into free \n> chunk list. \n> \n> \n> Comments?\n> \n> \t\t\t\t\t\tKarel\n> \n> \n> -----> A tested patch (hmm, we are freeze, possible for 7.0.?):\n> \n> *** aset.orig.c\tThu Apr 13 18:33:45 2000\n> --- aset.c\tThu Apr 20 18:45:50 2000\n> ***************\n> *** 323,326 ****\n> --- 323,346 ----\n> \t\telse\n> \t\t{\n> + \t\t\tint \toldfree = set->blocks->endptr - set->blocks->freeptr;\n> + \t\t\n> + \t\t\t/*\n> + \t\t\t * Try create from residual space in block free chunk\n> + \t\t\t */\n> + \t\t\tif (oldfree > MAXALIGN(1) + ALLOC_CHUNKHDRSZ) {\n> + \t\t\t\t\n> + \t\t\t\tint x_fidx = AllocSetFreeIndex(oldfree - ALLOC_CHUNKHDRSZ );\n> + \t\t\t\t\n> + \t\t\t\tchunk = (AllocChunk) (set->blocks->freeptr);\n> + \t\t\t\tchunk->size = oldfree - ALLOC_CHUNKHDRSZ;\n> + \t\t\t\t\n> + \t\t\t\t/* put chunk into freelist */\n> + \t\t\t\tchunk->aset = (void *) set->freelist[x_fidx];\n> + \t\t\t\tset->freelist[x_fidx] = chunk;\n> + \t\t\t\t\n> + \t\t\t\t/* unset free space in block */ \n> + \t\t\t\tset->blocks->freeptr = set->blocks->endptr;\n> + \t\t\t}\n> + \t\t\n> \t\t\t/* Get size of prior block */\n> \t\t\tblksize = set->blocks->endptr - ((char *) set->blocks);\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 10 Dec 2000 15:34:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: memory management suggestion" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, guess I can delete this email now? :-)\n\nYes, it's done.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 10 Dec 2000 17:26:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memory management suggestion " } ]
[ { "msg_contents": "Here is a followup to it.\n\n> Karel Zak <[email protected]> writes:\n> > Create from this non-used residual space chunk and remove it into free \n> > chunk list. \n> \n> There's at least one bug in that code: the minimum acceptable chunk size\n> is not MAXALIGN(1), but 1 << ALLOC_MINBITS, and after that it goes up by\n> powers of 2. You are putting the chunk into a freelist based on which\n> freelist would be used to allocate a request for X amount of space...\n> but it had better go into a freelist based on being large enough for the\n> largest request that would be directed to that freelist, instead. As is,\n> the chunk could be handed out to someone who would scribble past the\n> allocated end of the block.\n> \n> -----> A tested patch (hmm, we are freeze, possible for 7.0.?):\n> \n> It's getting pretty late in the cycle for this sort of thing.\n> We should consider it for 7.1, after you get it right...\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jun 2000 20:08:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: memory management suggestion" } ]
[ { "msg_contents": "Tom, we fixed this, right?\n\n> [email protected] writes:\n> > PostgreSQL 6.5.1 on i686-pc-linux-gnu, compiled by gcc 2.7.2.3\n> \n> > Apr 28 03:06:05 ziutek logger: NOTICE: Index \n> > tb_klienci_id_klienci_key: NUMBER OF INDEX' TUPLES (10652) \n> > IS NOT THE SAME AS HEAP' (10634)\n> \n> This can happen if there are other transactions open while the VACUUM\n> runs. It's not real critical --- the cross-check between index and\n> table tuple count is just not bright enough to consider the possibility\n> of \"zombie\" tuples (killed, but not dead yet because there are other\n> transactions that can still see them). I'd like to improve the cross-\n> check so it doesn't emit bogus notices, but haven't figured out how yet.\n> \n> If you see it even when the VACUUM is the only transaction running,\n> then you might be well advised to drop and re-create the index.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jun 2000 20:13:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Notice in logg file" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, we fixed this, right?\n\nIt's still on the TODO list, AFAIK...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jun 2000 22:29:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Notice in logg file " }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Tom Lane\n> \n> Bruce Momjian <[email protected]> writes:\n> > Tom, we fixed this, right?\n> \n> It's still on the TODO list, AFAIK...\n>\n\nAs for the cases such that NUMBER OF INDEX TUPLES < HEAP,\nwe know at least one of the cause. I have had a fix for it and would\ncommit it in a week or so(Currently my local branch is broken due\nto configure tree change). But as for the cases such that NUMBER\nOF INDEX TUPLES > HEAP,the cause seems to be not clear yet.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Tue, 13 Jun 2000 16:02:59 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Notice in logg file " } ]
[ { "msg_contents": "i'm developing a framework (mod_perl+apache) that reads the db-schema and \nautomatically explode html forms.\n\nnow i read the schema and cache it into perl-hashes to speedup things.\n\nmy problem is to recognize when a table is altered so that the framework can \nupdate the related forms connected to the db tables.\ni don't want to read the schema every time (now i must restart apache, and i \ndon't want to restart ...)\n\nIt's possible? How can i implement this ?\n\nThank you,\n\nvalter\n________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com\n\n", "msg_date": "Tue, 13 Jun 2000 00:32:06 GMT", "msg_from": "\"valter m\" <[email protected]>", "msg_from_op": true, "msg_subject": "how an app can know when a table is altered ?" } ]
[ { "msg_contents": "Hi all,\n\nI am considering doing some work on the TODO item:-\n o Allow DELETE and UPDATE to use inheritance using tablename*\n\nI would appreciate it if someone could give me some idea of the\ncomplexity of doing this, and possibly some pointers on where to start.\n\nThanks,\nGrant\n\n--\n> Poorly planned software requires a genius to write it\n> and a hero to use it.\n\nGrant Finnemore BSc(Eng) (mailto:[email protected])\nSoftware Engineer Universal Computer Services\nTel (+27)(11)712-1366 PO Box 31266 Braamfontein 2017, South Africa\nCell (+27)(82)604-5536 20th Floor, 209 Smit St., Braamfontein\nFax (+27)(11)339-3421 Johannesburg, South Africa\n\n\n\n", "msg_date": "Tue, 13 Jun 2000 07:59:05 +0200", "msg_from": "Grant Finnemore <[email protected]>", "msg_from_op": true, "msg_subject": "Allow DELETE and UPDATE to use inheritance using tablename* " }, { "msg_contents": "Grant Finnemore wrote:\n> \n> Hi all,\n> \n> I am considering doing some work on the TODO item:-\n> o Allow DELETE and UPDATE to use inheritance using tablename*\n> \n> I would appreciate it if someone could give me some idea of the\n> complexity of doing this, and possibly some pointers on where to start.\n\nI just completed this task. It was committed to CVS a couple of days\nago.\n", "msg_date": "Tue, 13 Jun 2000 16:18:42 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow DELETE and UPDATE to use inheritance using\n tablename*" } ]
[ { "msg_contents": "This was applied in 7.1.\n\n\n> On Thu, 11 May 2000, Tom Lane wrote:\n> \n> >Bruce Momjian <[email protected]> writes:\n> >> Can someone comment on this patch? It looks good to me.\n> >\n> >Looks like a rather major rewrite of the Kerberos support --- do we feel\n> >comfortable cramming this into 7.0.1? Seems like putting it into the\n> >7.1 cycle might be the prudent course, so that it will get some beta\n> >testing before being loosed upon the world. (Alternatively, if a couple\n> >other Kerberos users want to try it and report back, I'd feel better\n> >about dropping it into 7.0.* ...)\n> >\n> \n> I'm quite happy for it to sit in the contrib section for people who want\n> to try it out until its been tested on more than one site. Now that\n> RedHat are shipping kerberos with 6.2, I figured it wasn't really fair\n> to keep this implementation all to ourselves. David Wragg, the guy who\n> actually did the work, has a few things he wants to tidy up, but I don't\n> see that happening for a couple of months at the earliest. The current\n> code is reliable and stable, and can coexist with non-kerberized\n> postgres libs and binaries.\n> \n> As you point out, it is quite a reworking of the existing code. We\n> couldn't get postgres to compile against MIT kerberos V5 includes and\n> libraries, let alone run. Hence this patch. If anyone is using the\n> current V5 support, I would love to know how they're doing it.\n> \n> Cheers,\n> Mike\n> -- \n> Mike Wyer <[email protected]> || \"Woof?\"\n> http://www.doc.ic.ac.uk/~mw || Gaspode the Wonder Dog \n> Work: 020 7594 8440 || from \"Moving Pictures\"\n> Mobile: 07879 697119 || by Terry Pratchett\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jun 2000 03:37:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: kerberos 5 patch against 7.0RC5" } ]
[ { "msg_contents": "Any comments on this?\n\n\n> SAKAIDA Masaaki <[email protected]> writes:\n> > postgres=# copy binary test to stdout; <====== error???\n> > [ psql gets confused ]\n> \n> Yes, I see it too. The COPY data protocol is fundamentally textual,\n> so there's no way of making this work without rewriting all our frontend\n> interface libraries. Not worth it. I suggest that the backend should\n> reject COPY BINARY commands that are either FROM STDIN or TO STDOUT.\n> Anybody see a better way?\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jun 2000 03:38:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: COPY BINARY to STDOUT" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Any comments on this?\n\nThere is a test in there now to reject COPY BINARY TO STDOUT/FROM STDIN.\nIf anyone figures out how to support it, the test can be removed...\n\n\t\t\tregards, tom lane\n\n>> SAKAIDA Masaaki <[email protected]> writes:\n>>>> postgres=# copy binary test to stdout; <====== error???\n>>>> [ psql gets confused ]\n>> \n>> Yes, I see it too. The COPY data protocol is fundamentally textual,\n>> so there's no way of making this work without rewriting all our frontend\n>> interface libraries. Not worth it. I suggest that the backend should\n>> reject COPY BINARY commands that are either FROM STDIN or TO STDOUT.\n>> Anybody see a better way?\n", "msg_date": "Tue, 13 Jun 2000 03:56:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY BINARY to STDOUT " } ]
[ { "msg_contents": "Dear sir,\nThis is a small sample 'C' program.\nCompile successfully and linking error is coming in PQsetdb function.\nThis program will run on Linux RH6. How to remove this linking error.\nPlease waiting your valuable guidance.\nThanx.\n\n\n******* Compile Result ************\n[rootanuj@Lux2 rootanuj]$ gcc -S conn.c\n[rootanuj@Lux2 rootanuj]$ gcc conn.c\n/tmp/ccroTWA2.o: In function `main':\n/tmp/ccroTWA2.o(.text+0x48): undefined reference to `PQsetdbLogin'\ncollect2: ld returned 1 exit status\n[rootanuj@Lux2 rootanuj]$\n*******************************\n\n/* conn.c */\n#include <stdio.h>\n#include \"/usr/include/pgsql/libpq-fe.h\"\nmain()\n {\n char *pghost,\n *pgport,\n *pgoptions,\n *pgtty;\n char *dbName;\n\n PGconn *conn;\n\n pghost = NULL; /* host name of the backend server */\n pgport = \"5432\"; /* port of the backend server */\n pgoptions = NULL; /* special options to start up the backend\n * server */\n pgtty = NULL; /* debugging tty for the backend server */\n dbName = \"template1\";\n\n /* make a connection to the database */\n conn = PQsetdbLogin(pghost, pgport, pgoptions,\npgtty,dbName,\"username\",\"password\");\n/* Compile successfully and linking error is coming in PQsetdb function. */\n/* Compile cc -S conn.c */\n }\n\n\n", "msg_date": "Tue, 13 Jun 2000 16:26:16 +0530", "msg_from": "\"anuj\" <[email protected]>", "msg_from_op": true, "msg_subject": "linking error" }, { "msg_contents": "On Tue, 13 Jun 2000, anuj wrote:\n\n> Dear sir,\n> This is a small sample 'C' program.\n> Compile successfully and linking error is coming in PQsetdb function.\n> This program will run on Linux RH6. How to remove this linking error.\n> Please waiting your valuable guidance.\n> Thanx.\n\ngcc conn.c -L/usr/local/pgsql/lib -lpg -o conn\n\nVince.\n\n> \n> \n> ******* Compile Result ************\n> [rootanuj@Lux2 rootanuj]$ gcc -S conn.c\n> [rootanuj@Lux2 rootanuj]$ gcc conn.c\n> /tmp/ccroTWA2.o: In function `main':\n> /tmp/ccroTWA2.o(.text+0x48): undefined reference to `PQsetdbLogin'\n> collect2: ld returned 1 exit status\n> [rootanuj@Lux2 rootanuj]$\n> *******************************\n> \n> /* conn.c */\n> #include <stdio.h>\n> #include \"/usr/include/pgsql/libpq-fe.h\"\n> main()\n> {\n> char *pghost,\n> *pgport,\n> *pgoptions,\n> *pgtty;\n> char *dbName;\n> \n> PGconn *conn;\n> \n> pghost = NULL; /* host name of the backend server */\n> pgport = \"5432\"; /* port of the backend server */\n> pgoptions = NULL; /* special options to start up the backend\n> * server */\n> pgtty = NULL; /* debugging tty for the backend server */\n> dbName = \"template1\";\n> \n> /* make a connection to the database */\n> conn = PQsetdbLogin(pghost, pgport, pgoptions,\n> pgtty,dbName,\"username\",\"password\");\n> /* Compile successfully and linking error is coming in PQsetdb function. 
*/\n> /* Compile cc -S conn.c */\n> }\n> \n> \n> \n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 14 Jun 2000 06:34:46 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linking error" }, { "msg_contents": "There is a small 'C' program.\n Compile successfully and linking error is coming in PQsetdbLogin function.\n This program will run on Linux RH6. How to remove this linking error.\nOr\nAny other way to connect to backend PostgrsSQL via 'C' program. \nPlease waiting your valuable guidance.\nThanx. \nAnuj\n \n \n ******* Compile Result ************\n [rootanuj@Lux2 rootanuj]$ gcc -S conn.c\n [rootanuj@Lux2 rootanuj]$ gcc conn.c\n /tmp/ccroTWA2.o: In function `main':\n /tmp/ccroTWA2.o(.text+0x48): undefined reference to `PQsetdbLogin'\n collect2: ld returned 1 exit status\n [rootanuj@Lux2 rootanuj]$\n *******************************\n \n /* conn.c */\n #include <stdio.h>\n #include \"/usr/include/pgsql/libpq-fe.h\"\n main()\n {\n char *pghost, *pgport, *pgoptions, *pgtty;\n char *dbName;\n\n PGconn *conn;\n \n pghost = NULL; /* host name of the backend server */\n pgport = \"5432\"; /* port of the backend server */\n pgoptions = NULL; /* special options to start up the backend server */\n pgtty = NULL; /* debugging tty for the backend server */\n dbName = \"Mydb\";\n \n /* make a connection to the database */\n conn = PQsetdbLogin(pghost, pgport, pgoptions, pgtty,dbName,\"username\",\"password\");\n /* Compile successfully and linking error is coming in PQsetdb function. */\n /* Compile cc -S conn.c */\n }\n \n \n \n", "msg_date": "Tue, 4 Jul 2000 11:13:59 +0530", "msg_from": "\"anuj\" <[email protected]>", "msg_from_op": true, "msg_subject": "linking error" } ]
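The undefined reference above simply means the client library was never linked in; the library is libpq, so the link flag is -lpq. A build line that should work (install paths are assumptions, pick the ones matching your RPM or source install):

    gcc -I/usr/include/pgsql conn.c -L/usr/lib -lpq -o conn
    # or, for a source install under /usr/local/pgsql:
    gcc -I/usr/local/pgsql/include conn.c -L/usr/local/pgsql/lib -lpq -o conn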
[ { "msg_contents": "> You probably right. I belive that Thomas say more about it...\n\nto_char() is compatible with Oracle. date_part() is compatible with\nIngres (or should be). I've got the Ingres docs somewhere, but\npresumably I looked at them when implementing this in the first place.\nMaybe not, but what I have is compatible with Unix date formatting.\n\n> For PG date_part/trunc will SET (or anything like this) good.\n\nLet's decide what these functions are for; in this case they are each\ncribbed from an existing database product, and should be compatible with\nthose products imho.\n\nbtw, the \"week of year\" issue is quite a bit more complex; it is defined\nin ISO-8601 and it does not correspond directly to a \"Jan 1\" point in\nthe calendar. In fact, there can be 53 weeks in a year, and some days\nearly in the calendar year will fall into the preceeding year for\npurposes of this week calculation.\n\n - Thomas\n", "msg_date": "Tue, 13 Jun 2000 13:14:49 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: day of week" } ]
[ { "msg_contents": "> OK, if you don't tell me that someone is already planning to do it,\n> and if I can glean the info I need from the source, then I'll attempt\n> to write some docs on what aggregates are supported and how to use \n> them, and mail them to you.\n\nGreat! Please cc: the docs or hackers mailing lists to make sure the\npatches are not lost; I've been coping with a new Linux distro which\nuses different mail tools that I'm used to :(\n\nLook in doc/src/sgml/ for the doc source code. If the sgml markup is\nconfusing, just put in the text or send the text to be included and I'll\nmark it up for you.\n\n - Thomas\n", "msg_date": "Tue, 13 Jun 2000 13:20:22 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL aggregate function documentation" }, { "msg_contents": "> > OK, if you don't tell me that someone is already planning to do it,\n> > and if I can glean the info I need from the source, then I'll attempt\n> > to write some docs on what aggregates are supported and how to use \n> > them, and mail them to you.\n> \n> Great! Please cc: the docs or hackers mailing lists to make sure the\n> patches are not lost...\n\n[snip]\n\nThe patches are attached. Be great if you could check them over to make sure\nall relevant content (and markup) is there...\n\nThere are three patches - one for sql.sgml (tutorial), one for syntax.sgml\n(user) and the main one for func.sgml (user).\n\nIsaac.\n\n:)", "msg_date": "Fri, 16 Jun 2000 01:10:02 +0100 (BST)", "msg_from": "Isaac Wilcox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL aggregate function documentation" }, { "msg_contents": "> There are three patches - one for sql.sgml (tutorial), one for \n> syntax.sgml (user) and the main one for func.sgml (user).\n\nThanks Isaac!\n\n - Thomas\n", "msg_date": "Fri, 16 Jun 2000 07:13:12 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL aggregate function documentation" }, { "msg_contents": "Applied. Thanks.\n\n> > > OK, if you don't tell me that someone is already planning to do it,\n> > > and if I can glean the info I need from the source, then I'll attempt\n> > > to write some docs on what aggregates are supported and how to use \n> > > them, and mail them to you.\n> > \n> > Great! Please cc: the docs or hackers mailing lists to make sure the\n> > patches are not lost...\n> \n> [snip]\n> \n> The patches are attached. Be great if you could check them over to make sure\n> all relevant content (and markup) is there...\n> \n> There are three patches - one for sql.sgml (tutorial), one for syntax.sgml\n> (user) and the main one for func.sgml (user).\n> \n> Isaac.\n> \n> :)\nContent-Description: Patch to docs/src/sgml/sql.sgml\n\n[ Attachment, skipping... ]\nContent-Description: Patch to docs/src/sgml/syntax.sgml\n\n[ Attachment, skipping... ]\nContent-Description: Patch to docs/src/sgml/func.sgml\n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 20 Jun 2000 14:03:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: PostgreSQL aggregate function documentation" } ]
[ { "msg_contents": "I've hardly touched LDAP, which was why I got interested in a direct\ninterface for PostgreSQL. I'm playing with LDAP at the moment, because I'm\ngoing to need to make it easier to administer our external mail shortly, so\nit looks like one of the two methods are possible.\n \nPeter\n \n\n-- \nPeter Mount \nEnterprise Support \nMaidstone Borough Council \nAny views stated are my own, and not those of Maidstone Borough Council \n\n-----Original Message-----\nFrom: Michael Ansley [mailto:[email protected]]\nSent: Monday, June 12, 2000 2:20 PM\nTo: 'Peter Mount'; PostgreSQL Interfaces (E-mail)\nSubject: RE: [HACKERS] Integrating Sendmail with PostgreSQL\n\n\n\nHow about using ldap, and getting ldap to use PG. \n\nMikeA \n\n\n\n>> -----Original Message----- \n>> From: Peter Mount [ mailto:[email protected]\n<mailto:[email protected]> ] \n>> Sent: 12 June 2000 11:56 \n>> To: PostgreSQL Interfaces (E-mail) \n>> Subject: [HACKERS] Integrating Sendmail with PostgreSQL \n>> \n>> \n>> I've just seen this on FreshMeat: \n>> http://freshmeat.net/news/2000/06/12/960784940.html\n<http://freshmeat.net/news/2000/06/12/960784940.html> \n>> \n>> It's a patch for sendmail to use postgresql as the backend \n>> for Sendmail's \n>> virtual users tables. \n>> \n>> Has anyone tried this, or had any thoughts about it? \n>> \n>> I've got to reconfigure our sendmail host here and was \n>> looking at using the \n>> LDAP interface, but as I use postgresql a lot here, it may be an \n>> alternative. \n>> \n>> Comments? \n>> \n>> -- \n>> Peter Mount \n>> Enterprise Support \n>> Maidstone Borough Council \n>> Any views stated are my own, and not those of Maidstone \n>> Borough Council \n>> \n\n\n\n\nRE: [HACKERS] Integrating Sendmail with PostgreSQL\n\n\nI've \nhardly touched LDAP, which was why I got interested in a direct interface for \nPostgreSQL. I'm playing with LDAP at the moment, because I'm going to need to \nmake it easier to administer our external mail shortly, so it looks like one of \nthe two methods are possible.\n \nPeter\n \n-- Peter \nMount Enterprise Support Maidstone Borough Council Any views stated are my own, and not those of Maidstone Borough \nCouncil \n\n-----Original Message-----From: Michael Ansley \n [mailto:[email protected]]Sent: Monday, June 12, \n 2000 2:20 PMTo: 'Peter Mount'; PostgreSQL Interfaces \n (E-mail)Subject: RE: [HACKERS] Integrating Sendmail with \n PostgreSQL\nHow about using ldap, and getting ldap to use PG. \nMikeA \n>>   -----Original Message-----\n>>   From: Peter Mount [mailto:[email protected]]\n>>   Sent: 12 June 2000 11:56\n>>   To: PostgreSQL Interfaces \n (E-mail) >>   Subject: [HACKERS] \n Integrating Sendmail with PostgreSQL >>   >>   \n >>   I've just seen this on \n FreshMeat: >>   http://freshmeat.net/news/2000/06/12/960784940.html\n>>   >>   It's a patch for sendmail to use postgresql as the \n backend >>   for Sendmail's\n>>   virtual users tables. >>   >>   Has \n anyone tried this, or had any thoughts about it? >>   >>   I've \n got to reconfigure our sendmail host here and was >>   looking at using the >>   LDAP interface, but as I use postgresql a lot \n here, it may be an >>   \n alternative. >>   >>   Comments? 
>>   >>   \n -- >>   Peter Mount >>   Enterprise Support >>   Maidstone Borough Council >>   Any views stated are my own, and not those of \n Maidstone >>   Borough Council\n>>", "msg_date": "Tue, 13 Jun 2000 14:23:54 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Integrating Sendmail with PostgreSQL" } ]
[ { "msg_contents": "> > Can you tell me trim() spec, please ? (This problem has been\n> > discussed in pgsql-jp ML. )\n> > In trim(trailing 'abc' from '123cbabc') function, 'abc' means\n> > ~'[abc]'.\n> > If trim(trailing 'abc' from '123cbabc') returns \"123cb\", current\n> > trim() spec is broken. However, the spec that 'abc' means ~'[abc]'\n> > is ugly. It seems that this ugly spec isn't used for any kind of\n> > functions argument and SQL expression except for trim().\n> > How do you think about the trim() spec ?\n> \n> afaict, the SQL92 spec for trim() requires a single character as the\n> first argument; allowing a character string is a Postgres extension. \n> On the surface, istm that this extension is in the spirit of the SQL92\n> spec, in that it allows trimming several possible characters.\n> \n> I'm not sure if SQL3/SQL99 has anything extra to say on this.\n> \n> position() and substring() seem to be able to do what you want;\n> \n> select substring('123ab' for position('ab' in '123ab')-1);\n> \n> gives '123', while\n> \n> select substring('123ab' for position('d' in '123ab')-1);\n> \n> gives '123ab', which seems to be the behavior you might be suggesting\n> for trim().\n> \n> - Tom\n", "msg_date": "Tue, 13 Jun 2000 13:32:21 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: trim() spec" } ]
[ { "msg_contents": "\n I start clean up in the contrib tree. Now I have fixed some Makefiles.\nMy idea is add to all contrib modules Makefile (and allow install \ncontrib matter) and to top-contrib-directory Makefile.global that define\nsome definition relevant for contrib only. This Makefile include standard \n../src/Makefile.global too. IMHO is not good idea add some setting \nrelevant (only) for contrib-tree to standard (src) Makefile.global. \n \n IMHO all in contrib need a little standardize.\n\n A question, how install these files from contrib? (I inspire with debian\nPG packages)\n\n\t*.sql \t\t- $(POSTGRESDIR)/sql\n\t*.so\t\t- $(POSTGRESDIR)/modules\n\texec-binary\t- $(POSTGRESDIR)/bin\n\t*.doc - $(POSTGRESDIR)/doc\n \n In the current contrib is come dead code. What remove this code from \ncontrib tree to ftp.postgresql.org to some dead-code directory. It will\nstill accessible on ftp if anyone want it, and the contrib will correct.\n \nComments?\n\t\t\t\t\t\tKarel\n\n The current state in contrib:\n\napache_logging\t- Use it anyone? IMHO good candidate for *delete* or remove\n to some web page like \"Tips how use PostgreSQL\".... \n\nbit\t\t- impossible compile, last change '2000/04/12 17:14:21'\n\t\t It is already in main tree. Delete?\n \nearthdistance\t- I fix it, it is probably ready now.\n\nlikeplanning\t- haven't Makefile, I fix it\n\nlinux\t\t- haven't Makefile, I fix it\n\nmSQL\t\t- Use it anyone?, I haven't idea where install it. It is a\n\t\t \"file.c\". Delete?\n\nnoupdate\t- I fix (add Makefile) it, it is probably ready now, but\n\t\t needs this anyone? \n\nunixdate\t- haven't Makefile\n\nProbably ready:\n\t\n\tarray\n\tdatetime\n\tfindoidjoins\n\tfulltextindex\n\tisbn_issn\n\tlo\n\tmiscutil\n\todbc\n\tpg_dumplo\n\tpgbench\n\tspi\t\t\n\tstring\t\t\n\tsoundex\t\n\ttools\n\tuser_locks\n\tvacuumlo\n\nos2client\t- I fix Makefile, but it's still breaked:\n\ngcc -I. -I../../src/include -DFRONTEND -DTCPIPV4 -DHAVE_CRYPT_H -c\n../../src/backend/libpq/pqcomm.c\nIn file included from ../../src/include/storage/ipc.h:28,\n from ../../src/include/storage/bufmgr.h:18,\n from ../../src/include/storage/bufpage.h:18,\n from ../../src/include/access/htup.h:17,\n from ../../src/include/tcop/dest.h:55,\n from ../../src/include/libpq/libpq.h:23,\n from ../../src/backend/libpq/pqcomm.c:77:\nconfig.h:16: warning: DEF_PGPORT' redefined\n../../src/include/config.h:202: warning: this is the location of the\nprevious definition\nconfig.h:18: warning: HAVE_TERMIOS_H' redefined\n../../src/include/config.h:270: warning: this is the location of the\nprevious definition\nconfig.h:19: warning: HAVE_ENDIAN_H' redefined\n../../src/include/config.h:231: warning: this is the location of the\nprevious definition\n\n----------> Commets?\n\n\n\n", "msg_date": "Tue, 13 Jun 2000 18:40:57 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "the contrib tree clean up" }, { "msg_contents": "On Tue, 13 Jun 2000, Karel Zak wrote:\n\n> \n> I start clean up in the contrib tree. Now I have fixed some Makefiles.\n> My idea is add to all contrib modules Makefile (and allow install \n> contrib matter) and to top-contrib-directory Makefile.global that define\n> some definition relevant for contrib only. This Makefile include standard \n> ../src/Makefile.global too. IMHO is not good idea add some setting \n> relevant (only) for contrib-tree to standard (src) Makefile.global. 
\n> \n> IMHO all in contrib need a little standardize.\n> \n> A question, how install these files from contrib? (I inspire with debian\n> PG packages)\n> \n> \t*.sql \t\t- $(POSTGRESDIR)/sql\n> \t*.so\t\t- $(POSTGRESDIR)/modules\n\nipmeter, when it installs their ports.so file, puts it in\n$(POSTGRESDIR)/lib/modules ... sounds like an appropriate place to me ...\n\n", "msg_date": "Tue, 13 Jun 2000 14:42:44 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the contrib tree clean up" } ]
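Spelled out as the commands a per-module install rule might run, the layout proposed above looks roughly like this (POSTGRESDIR and the file names are placeholders, not a decision):

    install -m 644 module.sql     $(POSTGRESDIR)/sql
    install -m 755 module.so      $(POSTGRESDIR)/modules
    install -m 755 module-binary  $(POSTGRESDIR)/bin
    install -m 644 README.module  $(POSTGRESDIR)/doc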
[ { "msg_contents": "hi folks, i have 2 machines 1 is a pIII 500 with 758 megs of ram,\nand one is a dual pIII 450 with 512 megs of ram. both are running\nfreebsd 4.0.\n\nthe 500 is running pg 6.5.3, the dual450 is running pg 7.0 \n\nhere's the load averages -\n\npIII 500 - 12:39PM up 24 days, 14:33, 25 users, load averages: 2.16, 2.19, 2.18\nd540 - 12:44PM up 6 days, 17:26, 4 users, load averages: 2.61, 2.13, 1.89\n\nhere is the query\n\nSELECT gid FROM members\nWHERE active = 't'\nAND (gender = 0\n AND (wantrstypemale LIKE '%Short Term%'\n OR wantrstypemale LIKE '%Marriage%'\n OR wantrstypemale LIKE '%Long Term%'\n OR wantrstypemale LIKE '%Penpal%'\n OR wantrstypemale LIKE '%Activity Partner%')\n) ORDER BY created DESC;\n\nand here are the explains\n\npIII 500\n\tNOTICE: QUERY PLAN:\n\n\tSort (cost=2.05 rows=1 width=12)\n\t -> Index Scan using mgenders on members (cost=2.05 rows=1 width=12)\n\n\tEXPLAIN\n\ndual pIII 450\n\n\tNOTICE: QUERY PLAN:\n\n\tSort (cost=305.01..305.01 rows=3 width=12)\n\t -> Index Scan using mgenders on members (cost=0.00..304.98 rows=3 width=12)\n\n\tEXPLAIN\n\nnow is it just me or is the cost on the dualpIII450 a little out of wack ?\n\njeff\n\n\n", "msg_date": "Tue, 13 Jun 2000 13:50:12 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": true, "msg_subject": "speed 6.5 vs 7.0" }, { "msg_contents": "Jeff MacDonald <[email protected]> writes:\n> now is it just me or is the cost on the dualpIII450 a little out of wack ?\n\nYou seem to be confusing EXPLAIN's cost numbers with reality ;-)\n\nThe EXPLAIN numbers are on an arbitrary scale that is only used to\nmeasure the relative costs of different plans, so there's no attempt\nto adjust it for the actual speed of different machines. Given\nidentical Postgres versions, databases, and queries, you should get\nidentical EXPLAIN results no matter what hardware you're using.\n\nThe difference that you see here is entirely due to the differences\nbetween the 6.5 and 7.0 cost estimators, and basically the answer is\nthat the 6.5 estimator is broken. It's crediting the indexscan with\nthe selectivity of the whole WHERE clause, when in reality the only part\nthat the index can exploit is \"gender = 0\". So although there may be\nfew tuples returned by the query, the indexscan will have to scan a lot\nof tuples and hence should have a pretty high cost.\n\nI'm actually pretty surprised that 7.0 will use an indexscan in\nthis situation at all. Making the (perhaps incorrect?) assumption\nthat \"gender = 0\" selects about half the tuples, a plain sequential\nscan ought to be faster. Have you done a VACUUM ANALYZE on this table?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jun 2000 18:31:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: speed 6.5 vs 7.0 " } ]
[ { "msg_contents": "Hi all,\n\nI just want to thank all the people who kindly answered my request for\nhelp on synchronizing databases.\n\nEven if I haven't decided yet where to go from here, you all gave me a lot\nof pointers.\n\nWhen it's done, I'll keep the list informed.\n\nRegards to all\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n\n\n", "msg_date": "Tue, 13 Jun 2000 20:26:05 +0200 (MET DST)", "msg_from": "Olivier PRENANT <[email protected]>", "msg_from_op": true, "msg_subject": "Many thanks" } ]
[ { "msg_contents": "Hi;\n\nI am using libpq++ and Standard Template Library to write some programs for\npostgres6.5. I am trying to use map template to store tuples retrieved from\npostresql database. I defined map<string, vector<string> > in my program:\n typedef map<string, vector<string> > SQL_Map;\n\n If I did not include <libpq++.H> and did not use link option from the make\nfile used in libpq++, also I did not use any class from libpq++.H.\nEverything is fine, I can compile the class fine. I simply use \"g++ -c\nmyclass.cpp\" and generate myclass.o file. However if I include <libpq++.H>\nand use the makefile come with libpq++ compiling option, I got an error as:\n\"/usr/ccs/bin/as: \"/var/tmp/cc00POXR.s\", line 3512: error: can't compute\nvalue of an expression involving an external symbol\"\n\nI believe it has something to do with using string as the key for the map\nand using libpq++.H at the same time, because \nmap<int, vector<string> > is fine, but I don't know what is the problem.\nCould anyone help me out? Your help will be greatly appreciated.\n\nSincerely\n\nWenjin Zheng, Ph.D.\nBioinformatic Analyst\nLarge Scale Biology, Corp.\nVacaville, CA 95688\[email protected]\n\n\n", "msg_date": "Tue, 13 Jun 2000 14:50:24 -0700", "msg_from": "Wenjin Zheng <[email protected]>", "msg_from_op": true, "msg_subject": "Compiler error with libpq++" }, { "msg_contents": "Wenjin Zheng <[email protected]> writes:\n> I am using libpq++ and Standard Template Library to write some programs for\n> postgres6.5.\n\nThe symptoms you describe aren't familiar to me, but just on general\nprinciples I'd recommend updating to postgres 7.0. We did clean up the\nlibpq++ include files since 6.5, and that might help.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jun 2000 18:16:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compiler error with libpq++ " } ]
[ { "msg_contents": "I just upgraded to 7.0.2. Everything looks fine, except for one\n(annoying) change.\n\nI have an attribute in a table named\n\"significance_of_anova_for_spike_rates\". (I know, I know. I get laughed\nat all the time for using prepositions in my variable names). In 6.5.1\nand before, Postgres took the whole name as the identifier. With 7.0.2\nit seems to truncate to \"significance_of_anova_for_spike\" (31\ncharacters).\n\nIs there any reason that this has changed? Anyway to get a larger length\n(say 40 or 50 characters)?\n\nThanks.\n-Tony Reina\n\n\n", "msg_date": "Tue, 13 Jun 2000 18:41:42 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "7.0.2 cuts off attribute name" }, { "msg_contents": "\"G. Anthony Reina\" <[email protected]> writes:\n> \"significance_of_anova_for_spike_rates\". (I know, I know. I get laughed\n> at all the time for using prepositions in my variable names). In 6.5.1\n> and before, Postgres took the whole name as the identifier. With 7.0.2\n> it seems to truncate to \"significance_of_anova_for_spike\" (31\n> characters).\n\nWhat? The default length limit has been 31 characters for a long time,\ncertainly long before 6.5.*.\n\n7.0 has a new behavior of *telling* you that it's truncating overlength\nidentifiers, but the system has always truncated 'em.\n\nIf you're simply complaining about the fact that it emits a notice,\nI agree with you 100%: that notice is one of the most nonstandard,\nuseless, annoying bits of pointless pedantry I've seen in many years.\nI argued against it to start with but was outvoted. Maybe we can have\na revote now that people have had some practical experience with it:\nwho still thinks it's a good idea?\n\n> Is there any reason that this has changed? Anyway to get a larger length\n> (say 40 or 50 characters)?\n\nYou could recompile with a larger NAMEDATALEN, but unless you did so in\nyour 6.5.* installation, that's not what's bugging you. Look for the\nelog(NOTICE,...) call in src/backend/parser/scan.l and dike that out,\ninstead.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jun 2000 22:59:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.2 cuts off attribute name " }, { "msg_contents": "\n> 7.0 has a new behavior of *telling* you that it's truncating overlength\n> identifiers, but the system has always truncated 'em.\n>\n> If you're simply complaining about the fact that it emits a notice,\n> I agree with you 100%: that notice is one of the most nonstandard,\n> useless, annoying bits of pointless pedantry I've seen in many years.\n> I argued against it to start with but was outvoted. Maybe we can have\n> a revote now that people have had some practical experience with it:\n> who still thinks it's a good idea?\n\nYou think it should fail with an error message instead? ;)\n\nI think that PostgreSQL must tell the user when it is doing\nsomething as significant as truncating a name.\n\nNiall\n\n\n***********************************************************************\nPrivileged/confidential information may be contained in this message.\nIf you are not the addressee indicated in this message (or responsible\nfor delivery of the message to such person), you may not copy or \ndeliver this message to anyone. 
In such case, you should destroy this\nmessage and notify the sender and [email protected] \nimmediately.\n\nIf you or your employer do not consent to Internet E-mail messages of\nthis kind, please advise us immediately.\n\nOpinions, conclusions and other information expressed in this message\n (including any attachments) are not given or endorsed by ebeon ltd\n (or ebeon inc., as applicable) unless otherwise confirmed in writing\nby an authorised representative independent of this message. Any \nliability arising from reliance placed on this message (including its\nattachments) without such independent confirmation is hereby excluded.\n\nThis message (including attachments) is protected by copyright laws \nbut has no other legal or contractual standing. The presence of this \nfootnote indicates that this message (including its attachments) has\nbeen processed by an automated anti-virus system; however it is the \nresponsiblity of the recipient to ensure that the message (and\nattachments) are safe and authorised for use in their environment.\n***********************************************************************\n", "msg_date": "Wed, 14 Jun 2000 10:39:39 +0100", "msg_from": "Niall Smart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.2 cuts off attribute name" }, { "msg_contents": "Tom Lane wrote:\n> \n> 7.0 has a new behavior of *telling* you that it's truncating overlength\n> identifiers, but the system has always truncated 'em.\n> \n> If you're simply complaining about the fact that it emits a notice,\n> I agree with you 100%: that notice is one of the most nonstandard,\n> useless, annoying bits of pointless pedantry I've seen in many years.\n> I argued against it to start with but was outvoted. Maybe we can have\n> a revote now that people have had some practical experience with it:\n> who still thinks it's a good idea?\n\nAnd while we're at it, let's get rid of the 'implementation' NOTICEs\nabout foreign keys and UNIQUE.\n\nI don't mind the NOTICE about SERIAL since we don't clean up the\nsequence on DROP TABLE.\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n", "msg_date": "Wed, 14 Jun 2000 08:12:00 -0400", "msg_from": "\"Mark Hollomon\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.2 cuts off attribute name" }, { "msg_contents": "Niall Smart <[email protected]> writes:\n>> 7.0 has a new behavior of *telling* you that it's truncating overlength\n>> identifiers, but the system has always truncated 'em.\n\n> You think it should fail with an error message instead? ;)\n\n> I think that PostgreSQL must tell the user when it is doing\n> something as significant as truncating a name.\n\nBut the point is that it is *not* significant, at least not in 99.99%\nof cases. Truncating identifiers has been a standard compiler practice\nfor decades, and nobody emits warnings when they do it.\n\nThe reason it's not significant is that there is no problem unless you\nactually have a conflict caused by truncation, and in that scenario you\nwill get an appropriate error message. 
For example:\n\ncreate table foo (a_very_very_really_long_identifier_foo int,\na_very_very_really_long_identifier_bar float);\nNOTICE: identifier \"a_very_very_really_long_identifier_foo\" will be truncated to \"a_very_very_really_long_identif\"\nNOTICE: identifier \"a_very_very_really_long_identifier_bar\" will be truncated to \"a_very_very_really_long_identif\"\nERROR: CREATE TABLE: attribute \"a_very_very_really_long_identif\" duplicated\n\nNow when you get an error like that, it doesn't take a rocket scientist\nto figure out that the problem is the system's only paying attention to\nthe first N characters; do you really need the \"help\" of the notices\nfor that?\n\nThe rest of the time, when there isn't a naming conflict, the notices\nare just useless noise.\n\nI have never heard of another programming language implementation that\nemits notices when truncating overlength identifiers to fit in its\nsymbol table. The reason why Postgres is alone in doing this is *not*\nthat we're smarter than everybody else.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jun 2000 11:23:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.2 cuts off attribute name " }, { "msg_contents": "\"Mark Hollomon\" <[email protected]> writes:\n> And while we're at it, let's get rid of the 'implementation' NOTICEs\n> about foreign keys and UNIQUE.\n> I don't mind the NOTICE about SERIAL since we don't clean up the\n> sequence on DROP TABLE.\n\nHmm, I could vote for that.\n\nPerhaps the right solution would be to downgrade messages like this\nto a new elog level (of severity between NOTICE and DEBUG), and then\nadd a \"verbosity\" SET variable that allows the user to choose whether\nto see 'em or not. It'd be nice to be able to choose to get DEBUG\nmessages sent to the frontend, too, instead of only going to the\npostmaster log.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jun 2000 11:28:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.2 cuts off attribute name " }, { "msg_contents": "Tom Lane wrote:\n\n> 7.0 has a new behavior of *telling* you that it's truncating overlength\n> identifiers, but the system has always truncated 'em.\n>\n\nYes, I see now in my backups that 6.5.1 was truncating at 31 characters but\njust wasn't telling me. I wasn't aware if the limit. So it is nice to have\nthe notice, but perhaps should only show up once (e.g. when the table is\ninitially created). That way, once you've been advised of the truncation, it\nwon't annoy you any longer.\n\n>\n\n> > Is there any reason that this has changed? Anyway to get a larger length\n> > (say 40 or 50 characters)?\n>\n> You could recompile with a larger NAMEDATALEN, but unless you did so in\n> your 6.5.* installation, that's not what's bugging you. Look for the\n> elog(NOTICE,...) call in src/backend/parser/scan.l and dike that out,\n> instead.\n>\n\nI tried changing NAMEDATALEN in the postgres_ext.h to 52. Everything compiled\nand installed fine, but the initdb failed. Maybe there is more to it than\njust changing that one constant. Anyway, I've gone back to the original\nNAMEDATALEN = 32 and will just rename my field to something smaller than 31\nchars.\n\n\n-Tony\n\n\n", "msg_date": "Wed, 14 Jun 2000 10:41:58 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0.2 cuts off attribute name" }, { "msg_contents": "\"G. 
Anthony Reina\" <[email protected]> writes:\n> I tried changing NAMEDATALEN in the postgres_ext.h to 52. Everything compiled\n> and installed fine, but the initdb failed. Maybe there is more to it than\n> just changing that one constant.\n\nHmm, AFAIK that's supposed to work. I'll give it a try sometime ---\nmaybe some dependency has snuck in somewhere.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jun 2000 15:06:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.2 cuts off attribute name " }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Mark Hollomon\" <[email protected]> writes:\n> > And while we're at it, let's get rid of the 'implementation' NOTICEs\n> > about foreign keys and UNIQUE.\n> > I don't mind the NOTICE about SERIAL since we don't clean up the\n> > sequence on DROP TABLE.\n> \n> Hmm, I could vote for that.\n> \n> Perhaps the right solution would be to downgrade messages like this\n> to a new elog level (of severity between NOTICE and DEBUG), and then\n> add a \"verbosity\" SET variable that allows the user to choose whether\n> to see 'em or not. It'd be nice to be able to choose to get DEBUG\n> messages sent to the frontend, too, instead of only going to the\n> postmaster log.\n\nseverity 'PEDANTIC'\n\n:-)\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: [email protected]\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n", "msg_date": "Thu, 15 Jun 2000 10:38:22 +1200", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.2 cuts off attribute name" }, { "msg_contents": "> \"G. Anthony Reina\" <[email protected]> writes:\n>> I tried changing NAMEDATALEN in the postgres_ext.h to 52. Everything compiled\n>> and installed fine, but the initdb failed. Maybe there is more to it than\n>> just changing that one constant.\n\n> Hmm, AFAIK that's supposed to work. I'll give it a try sometime ---\n> maybe some dependency has snuck in somewhere.\n\nI built current sources with NAMEDATALEN = 52 and didn't see any\nproblem. Regression tests all passed except for a couple of differences\nin the 'name' test --- not too surprising since it was checking for\ntruncation of names at 31 chars...\n\nYou may not have done the build properly. Usually on a reconfiguration\nthe only safe way is \"make clean\" and \"make all\". We don't have\nadequate dependency info in the Makefiles to ensure a full rebuild\nwithout \"make clean\". (I think Peter E. is hoping to fix that soon,\nbut for now that's how you gotta do it.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Jun 2000 22:05:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.2 cuts off attribute name " } ]
[ { "msg_contents": "In the following archived email:\n\nhttp://www.postgresql.org/mhonarc/pgsql-admin/2000-05/msg00025.html\n\nthis was posed as a solutions to modifying the NOT NULL constraint:\n\n>update pg_attributes set attnotnull = 'f' where oid = oidofnotnullcolumn;\n>vacuum analyze;\n\nI didn't find any further comment on this so I decided to go right to the\nsource...\n\nIs this recommended or not?\nAre there any side effects of which I should be aware before attempting to\nuse this?\n\nIf this is not a valid way to accomplish the modification of the NOT NULL\nconstraint, then are there plans for an implementation of it (I enjoy the\nnew ALTER COLUMN DEFAULT)?\n\nThanks,\n-Dan Wilson\nphpPgAdmin Author\nhttp://www.phpwizard.net/phpPgAdmin\n\nPlease reply to me directly as I'm not subscribed to the list.\n\n", "msg_date": "Tue, 13 Jun 2000 21:25:18 -0600", "msg_from": "\"Dan Wilson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Modifying NOT NULL Constraint" }, { "msg_contents": "\"Dan Wilson\" <[email protected]> writes:\n> this was posed as a solutions to modifying the NOT NULL constraint:\n>> update pg_attributes set attnotnull = 'f' where oid = oidofnotnullcolumn;\n>> vacuum analyze;\n\nattnotnull is where the gold is hidden, all right. The 'vacuum analyze'\nstep is mere mumbo-jumbo --- there's no need for that.\n\n> Are there any side effects of which I should be aware before attempting to\n> use this?\n\nChanging in that direction should be safe enough. Turning attnotnull\n*on* is a little more dubious, since it won't magically make any\nexisting null entries in the column go away. attnotnull just governs\nthe check that prevents you from storing new nulls.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jun 2000 00:33:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Modifying NOT NULL Constraint " }, { "msg_contents": "So if I'm understanding this correctly, this would be able to remove the NOT\nNULL constraint, but would not be able to set the NOT NULL constraint. Is\nthat correct?\n\nIf that is correct, are their plans to implement a post-create setting of\nthe NOT NULL constraint?\n\n-Dan\n\n----- Original Message -----\nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Dan Wilson\" <[email protected]>\nCc: <[email protected]>\nSent: Tuesday, June 13, 2000 10:33 PM\nSubject: Re: [HACKERS] Modifying NOT NULL Constraint\n\n\n> \"Dan Wilson\" <[email protected]> writes:\n> > this was posed as a solutions to modifying the NOT NULL constraint:\n> >> update pg_attributes set attnotnull = 'f' where oid =\noidofnotnullcolumn;\n> >> vacuum analyze;\n>\n> attnotnull is where the gold is hidden, all right. The 'vacuum analyze'\n> step is mere mumbo-jumbo --- there's no need for that.\n>\n> > Are there any side effects of which I should be aware before attempting\nto\n> > use this?\n>\n> Changing in that direction should be safe enough. Turning attnotnull\n> *on* is a little more dubious, since it won't magically make any\n> existing null entries in the column go away. 
attnotnull just governs\n> the check that prevents you from storing new nulls.\n>\n> regards, tom lane\n\n", "msg_date": "Wed, 14 Jun 2000 00:17:32 -0600", "msg_from": "\"Dan Wilson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Modifying NOT NULL Constraint " }, { "msg_contents": "\"Dan Wilson\" <[email protected]> writes:\n> So if I'm understanding this correctly, this would be able to remove the NOT\n> NULL constraint, but would not be able to set the NOT NULL constraint. Is\n> that correct?\n\nOh, you can set attnotnull if you feel like it. My point is just that\nnothing much will happen to any existing null values in the column.\nIt's up to you to check for them first, if you care.\n\n> If that is correct, are their plans to implement a post-create setting of\n> the NOT NULL constraint?\n\nWhat do you think should happen if there are null values? Refuse the\ncommand? Delete the non-compliant rows? Allow the rows to remain\neven though the column is now nominally NOT NULL?\n\nYou can implement any of these behaviors for yourself with a couple of\nSQL commands inside a transaction, so I'm not sure that I see the need\nto have a neatly-wrapped-up ALTER TABLE command that will only do one\nof the things you might want it to do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jun 2000 02:31:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Modifying NOT NULL Constraint " }, { "msg_contents": "Ok... point taken! I guess the masters always have reasons for why things\naren't implemented.\n\n-Dan\n\n\n----- Original Message -----\nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Dan Wilson\" <[email protected]>\nCc: <[email protected]>\nSent: Wednesday, June 14, 2000 12:31 AM\nSubject: Re: [HACKERS] Modifying NOT NULL Constraint\n\n\n> \"Dan Wilson\" <[email protected]> writes:\n> > So if I'm understanding this correctly, this would be able to remove the\nNOT\n> > NULL constraint, but would not be able to set the NOT NULL constraint.\nIs\n> > that correct?\n>\n> Oh, you can set attnotnull if you feel like it. My point is just that\n> nothing much will happen to any existing null values in the column.\n> It's up to you to check for them first, if you care.\n>\n> > If that is correct, are their plans to implement a post-create setting\nof\n> > the NOT NULL constraint?\n>\n> What do you think should happen if there are null values? Refuse the\n> command? Delete the non-compliant rows? Allow the rows to remain\n> even though the column is now nominally NOT NULL?\n>\n> You can implement any of these behaviors for yourself with a couple of\n> SQL commands inside a transaction, so I'm not sure that I see the need\n> to have a neatly-wrapped-up ALTER TABLE command that will only do one\n> of the things you might want it to do.\n>\n> regards, tom lane\n\n", "msg_date": "Wed, 14 Jun 2000 00:33:29 -0600", "msg_from": "\"Dan Wilson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Modifying NOT NULL Constraint " }, { "msg_contents": "Tom Lane wrote:\n\n> What do you think should happen if there are null values? Refuse the\n> command? Delete the non-compliant rows? Allow the rows to remain\n> even though the column is now nominally NOT NULL?\n\nI would vote for refuse the command. 
It enforces the integrity of the\ndata.\nYou can always do an appropriate update command first if you think there\nare \nnulls in there.\n", "msg_date": "Wed, 14 Jun 2000 16:35:45 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Modifying NOT NULL Constraint" }, { "msg_contents": "> What do you think should happen if there are null values? Refuse the\n> command? Delete the non-compliant rows? Allow the rows to remain\n> even though the column is now nominally NOT NULL?\n\nWith ALTER TABLE ADD CONSTRAINT on a non-deferrable NOT\nNULL it should fail. At the end of statement the constraint is not\nsatified,\nan exception is raised and the statement is effectively ignored. It's alot\nmore complicated for deferrable constraints, and I didn't even actually\ntake that into account when I did the foreign key one (because I just\nthought\nof it now).\n\n> You can implement any of these behaviors for yourself with a couple of\n> SQL commands inside a transaction, so I'm not sure that I see the need\n> to have a neatly-wrapped-up ALTER TABLE command that will only do one\n> of the things you might want it to do.\nTrue, but it would be nice to be able to add a check constraint later, and\nas\nlong as you're doing it, it seems silly to ignore NOT NULL.\n\n\n", "msg_date": "Wed, 14 Jun 2000 12:25:41 -0700", "msg_from": "\"Stephan Szabo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Modifying NOT NULL Constraint " }, { "msg_contents": "Stephan Szabo wrote:\n> > What do you think should happen if there are null values? Refuse the\n> > command? Delete the non-compliant rows? Allow the rows to remain\n> > even though the column is now nominally NOT NULL?\n>\n> With ALTER TABLE ADD CONSTRAINT on a non-deferrable NOT\n> NULL it should fail. At the end of statement the constraint is not\n> satified,\n> an exception is raised and the statement is effectively ignored. It's alot\n> more complicated for deferrable constraints, and I didn't even actually\n> take that into account when I did the foreign key one (because I just\n> thought\n> of it now).\n\n Forget it!\n\n Doing\n\n BEGIN;\n ALTER TABLE tab ADD CONSTRAINT ... INITIALLY DEFERRED;\n UPDATE tab SET ... WHERE ... ISNULL;\n COMMIT;\n\n is totally pathetic. Do it the other way round and the ALTER\n TABLE is happy. As Tom usually says \"if it hurts, don't do\n it\". We have more important problems to spend our time for.\n\n\nJan\n\n\nBTW: Still have your other FK related mail to process. Will do so soon.\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Wed, 14 Jun 2000 22:59:48 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Modifying NOT NULL Constraint" }, { "msg_contents": "Well, I wasn't planning on doing it any time soon... I just wanted to\nmention it for\ncompleteness-sake since it was my code that does it \"wrong\" and I'd rather\nmention\nit than have someone come back to me asking me why my code does what it\ndoes.\nThe basic point is that ALTER TABLE isn't too much of a difference from\nnormal\nconstraint checking... 
If the constraint fails when the ALTER TABLE is done\nthe\nstatement should abort just like any other statement that causes a\nconstraint failure.\n\n> Stephan Szabo wrote:\n> > > What do you think should happen if there are null values? Refuse the\n> > > command? Delete the non-compliant rows? Allow the rows to remain\n> > > even though the column is now nominally NOT NULL?\n> >\n> > With ALTER TABLE ADD CONSTRAINT on a non-deferrable NOT\n> > NULL it should fail. At the end of statement the constraint is not\n> > satified,\n> > an exception is raised and the statement is effectively ignored. It's\nalot\n> > more complicated for deferrable constraints, and I didn't even actually\n> > take that into account when I did the foreign key one (because I just\n> > thought\n> > of it now).\n>\n> Forget it!\n>\n> Doing\n>\n> BEGIN;\n> ALTER TABLE tab ADD CONSTRAINT ... INITIALLY DEFERRED;\n> UPDATE tab SET ... WHERE ... ISNULL;\n> COMMIT;\n>\n> is totally pathetic. Do it the other way round and the ALTER\n> TABLE is happy. As Tom usually says \"if it hurts, don't do\n> it\". We have more important problems to spend our time for.\n\n\n", "msg_date": "Wed, 14 Jun 2000 16:05:34 -0700", "msg_from": "\"Stephan Szabo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Modifying NOT NULL Constraint" } ]
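The catalog tweak discussed in the thread above comes down to a few ordinary SQL statements. The sketch below assumes a hypothetical table mytab with column mycol; note that the catalog is named pg_attribute (singular) and its rows are looked up by attrelid and attname rather than by a row oid, that updating system catalogs directly requires superuser rights, and that the trailing vacuum analyze in the quoted recipe is, as Tom says, unnecessary:

    -- drop the NOT NULL constraint on mytab.mycol
    UPDATE pg_attribute SET attnotnull = 'f'
     WHERE attname = 'mycol'
       AND attrelid = (SELECT oid FROM pg_class WHERE relname = 'mytab');

    -- add a NOT NULL constraint: fix up existing NULLs first, then set the
    -- flag, all inside one transaction (Jan's ordering)
    BEGIN;
    UPDATE mytab SET mycol = 0 WHERE mycol IS NULL;  -- 0 is just a stand-in default
    UPDATE pg_attribute SET attnotnull = 't'
     WHERE attname = 'mycol'
       AND attrelid = (SELECT oid FROM pg_class WHERE relname = 'mytab');
    COMMIT;

The second block is the "couple of SQL commands inside a transaction" Tom describes; whether the net behaviour is refuse, fix up, or quietly tolerate the old NULLs depends only on what is done between BEGIN and the catalog update.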
[ { "msg_contents": "While people are working on that, they might want to add some sanity checking\nto the multibyte character decoders. Currently they fail to check for\n\"illegal\" character sequences (i.e. sequences with no valid multibyte mapping),\nand fail to do something reasonable (like return an error, silently drop the\noffending characters, or anything else besides just returning random garbage\nand crashing the backend).\n\nThe last time this failed to get on the TODO list because Bruce wanted\nmore than one person to verify that it was an issue. If several people \nare going to work on the NATIONAL CHARACTER stuff, maybe they could look\ninto this issue, too.\n\n\t-Michael Robinson\n\n>Added to TODO.\n>\n>> Since there are several people interested in contributing, we should\n>> list:\n>> \n>> Support multiple simultaneous character sets, per SQL92\n\n", "msg_date": "Wed, 14 Jun 2000 16:24:05 +0800 (+0800)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Big 7.1 open items" }, { "msg_contents": "> While people are working on that, they might want to add some sanity checking\n> to the multibyte character decoders. Currently they fail to check for\n> \"illegal\" character sequences (i.e. sequences with no valid multibyte mapping),\n> and fail to do something reasonable (like return an error, silently drop the\n> offending characters, or anything else besides just returning random garbage\n> and crashing the backend).\n> \n> The last time this failed to get on the TODO list because Bruce wanted\n> more than one person to verify that it was an issue. If several people \n> are going to work on the NATIONAL CHARACTER stuff, maybe they could look\n> into this issue, too.\n\nThe issue is that some people felt we shouldn't be performing such\nchecks, and some did.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Jun 2000 10:37:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Big 7.1 open items" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>The issue is that some people felt we shouldn't be performing such\n>checks, and some did.\n\nWell, more precisely, the issue was stalemated at \"one person felt we \nshould perform such checks\" and \"one person (who, incidentally, wrote the\ncode) felt we shouldn't\".\n\nObviously, as it stands, the issue is going nowhere.\n\nI was just hoping to encourage more people to examine the problem, so that\nwe might get a consensus one way or the other.\n\n\t-Michael Robinson\n\n", "msg_date": "Thu, 15 Jun 2000 12:52:20 +0800 (+0800)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Big 7.1 open items" }, { "msg_contents": "> >The issue is that some people felt we shouldn't be performing such\n> >checks, and some did.\n> Well, more precisely, the issue was stalemated at \"one person felt we\n> should perform such checks\" and \"one person (who, incidentally, wrote \n> the code) felt we shouldn't\".\n> I was just hoping to encourage more people to examine the problem, so \n> that we might get a consensus one way or the other.\n\nI hope that the issue is clearer once we have a trial implementation to\nplay with.\n\n - Thomas\n", "msg_date": "Thu, 15 Jun 2000 07:06:12 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Big 7.1 open items" }, { "msg_contents": "> While people are working on that, they might want to add some sanity checking\n> to the multibyte character decoders. Currently they fail to check for\n> \"illegal\" character sequences (i.e. sequences with no valid multibyte mapping),\n> and fail to do something reasonable (like return an error, silently drop the\n> offending characters, or anything else besides just returning random garbage\n> and crashing the backend).\n\nHum.. I thought Michael Robinson is the one who is against the idea of\nrejecting \"illegal\" character sequences before they are put in the DB.\nI like the idea but I haven't time to do that (However I'm not sure I\nwould like to do it for EUC-CN, since he dislikes the codes I write).\n\nBruce, I would like to see followings in the TODO. I also would like\nto hear from Thomas and Peter or whoever being interested in\nimplementing NATIONAL CHARACTER stuffs if they are reasonable.\n\no Don't accept character sequences those are not valid as their charset\n (signaling ERROR seems appropriate IMHO)\n\no Make PostgreSQL more multibyte aware (for example, TRIM function and\n NAME data type)\n\no Regard n of CHAR(n)/VARCHAR(n) as the number of letters, rather than\n the number of bytes\n--\nTatsuo Ishii\n", "msg_date": "Thu, 15 Jun 2000 18:50:19 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Big 7.1 open items" }, { "msg_contents": "> Hum.. I thought Michael Robinson is the one who is against the idea of\n> rejecting \"illegal\" character sequences before they are put in the DB.\n> I like the idea but I haven't time to do that (However I'm not sure I\n> would like to do it for EUC-CN, since he dislikes the codes I write).\n> \n> Bruce, I would like to see followings in the TODO. 
I also would like\n> to hear from Thomas and Peter or whoever being interested in\n> implementing NATIONAL CHARACTER stuffs if they are reasonable.\n> \n> o Don't accept character sequences those are not valid as their charset\n> (signaling ERROR seems appropriate IMHO)\n> \n> o Make PostgreSQL more multibyte aware (for example, TRIM function and\n> NAME data type)\n> \n> o Regard n of CHAR(n)/VARCHAR(n) as the number of letters, rather than\n> the number of bytes\n\nAdded to TODO:\n\n* Reject character sequences those are not valid in their charset \n * Make functions more multi-byte aware, i.e. trim() \n* Make n of CHAR(n)/VARCHAR(n) the number of letters, not bytes \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Jun 2000 09:53:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Big 7.1 open items" }, { "msg_contents": "> o Don't accept character sequences those are not valid as their \n> charset (signaling ERROR seems appropriate IMHO)\n> o Make PostgreSQL more multibyte aware (for example, TRIM function and\n> NAME data type)\n> o Regard n of CHAR(n)/VARCHAR(n) as the number of letters, rather than\n> the number of bytes\n\nAll good, and important features when we are done.\n\nOne issue: I can see (or imagine ;) how we can use the Postgres type\nsystem to manage multiple character sets. But allowing arbitrary\ncharacter sets in, say, table names forces us to cope with allowing a\nmix of character sets in a single column of a system table. afaik this\ngeneral capability is not mandated by SQL9x (the SQL_TEXT character set\nis used for all system resources??). Would it be acceptable to have a\n\"default database character set\" which is allowed to creep into the\npg_xxx tables? Even that seems to be a difficult thing to accomplish at\nthe moment (we'd need to get some of the text manipulation functions\nfrom the catalogs, not from hardcoded references as we do now).\n\nWe should itemize all of these issues so we can keep track of what is\nnecessary, possible, and/or \"easy\".\n\n - Thomas\n", "msg_date": "Thu, 15 Jun 2000 14:14:47 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Big 7.1 open items" }, { "msg_contents": "> > o Don't accept character sequences those are not valid as their \n> > charset (signaling ERROR seems appropriate IMHO)\n> > o Make PostgreSQL more multibyte aware (for example, TRIM function and\n> > NAME data type)\n> > o Regard n of CHAR(n)/VARCHAR(n) as the number of letters, rather than\n> > the number of bytes\n> \n> All good, and important features when we are done.\n\nGlad to hear that.\n\n> One issue: I can see (or imagine ;) how we can use the Postgres type\n> system to manage multiple character sets. But allowing arbitrary\n> character sets in, say, table names forces us to cope with allowing a\n> mix of character sets in a single column of a system table. afaik this\n> general capability is not mandated by SQL9x (the SQL_TEXT character set\n> is used for all system resources??). Would it be acceptable to have a\n> \"default database character set\" which is allowed to creep into the\n> pg_xxx tables? 
Even that seems to be a difficult thing to accomplish at\n> the moment (we'd need to get some of the text manipulation functions\n> from the catalogs, not from hardcoded references as we do now).\n\n\"default database character set\" idea does not seem to be the solution\nfor cross-db relations such as pg_database. The only solution I can\nimagine so far is using SQL_TEXT.\n\nBTW, I've been thinking about SQL_TEXT for a while and it seems\nmule_internal_code or Unicode(UTF-8) would be the candidates for\nit. Mule_internal_code looks more acceptable for Asian multi-byte\nusers like me than Unicode. It's clean, simple and does not require\nhuge conversion tables between Unicode and other encodings. However,\nUnicode has a stronger political power in the real world and for most\nsingle-byte users probably it would be enough. My idea is let users\nchoose one of them. I mean making it a compile time option.\n\n> We should itemize all of these issues so we can keep track of what is\n> necessary, possible, and/or \"easy\".\n\nYou are right, probably there would be tons of issues in implementing\nmultiple charsets support.\n--\nTatsuo Ishii\n", "msg_date": "Fri, 16 Jun 2000 23:03:57 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Big 7.1 open items" }, { "msg_contents": "> \"default database character set\" idea does not seem to be the solution\n> for cross-db relations such as pg_database. The only solution I can\n> imagine so far is using SQL_TEXT.\n> BTW, I've been thinking about SQL_TEXT for a while and it seems\n> mule_internal_code or Unicode(UTF-8) would be the candidates for\n> it. Mule_internal_code looks more acceptable for Asian multi-byte\n> users like me than Unicode. It's clean, simple and does not require\n> huge conversion tables between Unicode and other encodings. However,\n> Unicode has a stronger political power in the real world and for most\n> single-byte users probably it would be enough. My idea is let users\n> choose one of them. I mean making it a compile time option.\n\nOh. I was recalling SQL_TEXT as being a \"subset\" character set which\ncontains only the characters (more or less) that are required for\nimplementing the SQL92 query language and standard features.\n\nAre you seeing it as being a \"superset\" character set which can\nrepresent all other character sets??\n\nAnd, how would you suggest we start tracking this discussion in a design\ndocument? I could put something into the developer's guide, or we could\nhave a plain-text FAQ, or ??\n\nI'd propose that we start accumulating a feature list, perhaps ordering\nit into categories like\n\no required/suggested by SQL9x\no required/suggested by experience in the real world\no sure would be nice to have\no really bad idea ;)\n\n - Thomas\n", "msg_date": "Fri, 16 Jun 2000 14:37:35 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Big 7.1 open items" }, { "msg_contents": "Thomas,\n\nA few (hopefully relevant) comments regarding character sets, code pages, \nI18N, and all that:\n\n1) I've seen databases (DB2 if memory serves) that allowed the client \nside to declare itself to the database back-end engine as being in a \nparticular code page. For instance, one could have a CP850 Latin-1 client \nand an ISO 8859-1 database. The database engine did appropriate \ntranslations in both directions.\n \n2) Mixing code pages in a single column and then having the database \nengine support it is not trivial. 
Either each CHAR/VARCHAR would have to \nhave some code page settable per row (eg either as a separate column or \nas something like mycolumnname.encoding). \n Even if you could handle all that you'd still be faced with the issue \nis collating sequence. Each individual code page will have a collating \nsequence. But how do you collate across code pages? There'd be letters \nthat were only in a single code page. Plus, it gets messy because with, \nfor instance, a simple umlauted a that occurs in CP850, CP1252, and ISO \n8859-1 (and likely in other code pages as well). That letter is really \nthe same letter in all those code pages and should treated as such when \nsorting. \n\n3) I think it is more important for a database to support lots of \nlanguages in the stored data than in the field names and table names. If \na programmer has to deal with A-Za-z for naming identifiers and that \nperseon is Korean or Japanese then that is certain is an imposition on \nthem. But its a far far bigger imposition if that programmer can't build \na database that will store the letters of his national language and sort \nand index and search them in convenient ways. \n\n4) The real solution to the multiple code page dilemma is Unicode. \n Yes, its more space. But the can of worms of dealing with multiple \ncode pages in a column is really no fun and the result is not great. \nBTDTHTTS.\n\n5) The problem with enforcing \n I've built a database in DB2 where particular columns in it contained \ndata from many different code pages (each row had a code page field as \nwell as a text field). For some applications that is okay if that field \nis not going to be part of an index. \n However, if a database is going to be defined as being in a particular \ncode page, and if the database engine is going to reject characters that \nare not recognized as part of that code page then you can't play the sort \nof game I just described _unless_ there is a different datatype that is \nsimilar to CHAR/VARCHAR but for which the RDBMS does not enforce code \npage legality on each character. Otherwise you choose some code page for \na column, you go merrily stuffing in all sorts of rows in all sorts of \ncode pages, and then along come some character that is of a value that is \nnot a value for some other character in the code page that the RDBMS \nthinks it is. \n\nAnyway, I've done lots of I18N database stuff and hopefully a few of my \ncomments will be useful to the assembled brethren <g>.\n\nIn news:<[email protected]>, \[email protected] says...\n> One issue: I can see (or imagine ;) how we can use the Postgres type\n> system to manage multiple character sets. But allowing arbitrary\n> character sets in, say, table names forces us to cope with allowing a\n> mix of character sets in a single column of a system table. afaik this\n> general capability is not mandated by SQL9x (the SQL_TEXT character set\n> is used for all system resources??). Would it be acceptable to have a\n> \"default database character set\" which is allowed to creep into the\n> pg_xxx tables? Even that seems to be a difficult thing to accomplish at\n> the moment (we'd need to get some of the text manipulation functions\n> from the catalogs, not from hardcoded references as we do now).\n> \n", "msg_date": "Fri, 16 Jun 2000 16:18:23 -0700", "msg_from": "Randall Parker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Big 7.1 open items" }, { "msg_contents": "> Oh. 
I was recalling SQL_TEXT as being a \"subset\" character set which\n> contains only the characters (more or less) that are required for\n> implementing the SQL92 query language and standard features.\n> \n> Are you seeing it as being a \"superset\" character set which can\n> represent all other character sets??\n\nYes, it's my understanding from the section 19.3.1 of Date's book\n(fourth edition). Please correct me if I am wrong.\n\n> And, how would you suggest we start tracking this discussion in a design\n> document? I could put something into the developer's guide, or we could\n> have a plain-text FAQ, or ??\n> \n> I'd propose that we start accumulating a feature list, perhaps ordering\n> it into categories like\n> \n> o required/suggested by SQL9x\n> o required/suggested by experience in the real world\n> o sure would be nice to have\n> o really bad idea ;)\n\nSounds good. Could I put \"CREATE CHARACTER SET\" as the first item of\nthe list and start a discussion for that?\n\nI have a feeling that you have an idea to treat user defined charset\nas a PostgreSQL new data type. So probably \"CREATE CHARACTER SET\"\ncould be traslated to our \"CREATE TYPE\" by the parer, right?\n--\nTatsuo Ishii\n", "msg_date": "Sun, 18 Jun 2000 18:18:09 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Big 7.1 open items" }, { "msg_contents": "> > Oh. I was recalling SQL_TEXT as being a \"subset\" character set which\n> > contains only the characters (more or less) that are required for\n> > implementing the SQL92 query language and standard features.\n> > Are you seeing it as being a \"superset\" character set which can\n> > represent all other character sets??\n> Yes, it's my understanding from the section 19.3.1 of Date's book\n> (fourth edition). Please correct me if I am wrong.\n\nYuck. That is what is says, all right :(\n\nDate says that SQL_TEXT is required to have two things:\n1) all characters used in the SQL language itself (which is what I\nrecalled)\n\n2) Every other character from every character set in the installation.\n\nafaict (2) pretty much kills extensibility if we interpret that\nliterally. I'd like to research it a bit more before we accept it as a\nrequirement.\n\n> > I'd propose that we start accumulating a feature list, perhaps ordering\n> > it into categories like\n> >\n> > o required/suggested by SQL9x\n> > o required/suggested by experience in the real world\n> > o sure would be nice to have\n> > o really bad idea ;)\n> \n> Sounds good. Could I put \"CREATE CHARACTER SET\" as the first item of\n> the list and start a discussion for that?\n> \n> I have a feeling that you have an idea to treat user defined charset\n> as a PostgreSQL new data type. So probably \"CREATE CHARACTER SET\"\n> could be traslated to our \"CREATE TYPE\" by the parer, right?\n\nYes. Though the SQL_TEXT issue may completely kill this. And lead to a\nrequirement that we have a full-unicode backend :((\n\nI'm hoping that there is a less-intrusive way to do this. What do other\ndatabase systems have for this? I assume most do not have much...\n\n - Thomas\n", "msg_date": "Tue, 20 Jun 2000 15:47:26 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Big 7.1 open items" }, { "msg_contents": "Thomas Lockhart writes:\n\n> One issue: I can see (or imagine ;) how we can use the Postgres type\n> system to manage multiple character sets.\n\nBut how are you going to tell a genuine \"type\" from a character set? 
And\nyou might have to have three types for each charset. There'd be a lot of\nredundancy and confusion regarding the input and output functions and\nother pg_type attributes. No doubt there's something to be learned from\nthe type system, but character sets have different properties -- like\ncharacters(!), collation rules, encoding \"translations\" and what not.\nThere is no doubt also need for different error handling. So I think that\njust dumping every character set into pg_type is not a good idea. That's\nalmost equivalent to having separate types for char(6), char(7), etc.\n\nInstead, I'd suggest that character sets become separate objects. A\ncharacter entity would carry around its character set in its header\nsomehow. Consider a string concatenation function, being invoked with two\narguments of the same exotic character set. Using the type system only\nyou'd have to either provide a function signature for all combinations of\ncharacters sets or you'd have to cast them up to SQL_TEXT, concatenate\nthem and cast them back to the original charset. A smarter concatentation\nfunction instead might notice that both arguments are of the same\ncharacter set and simply paste them together right there.\n\n\n> But allowing arbitrary character sets in, say, table names forces us\n> to cope with allowing a mix of character sets in a single column of a\n> system table.\n\nThe priority is probably the data people store, not the way they get to\nname their tables.\n\n> Would it be acceptable to have a \"default database character set\"\n> which is allowed to creep into the pg_xxx tables?\n\nI think we could go with making all system table char columns Unicode, but\nof course they are really of the \"name\" type, which is another issue\ncompletely.\n\n\n> We should itemize all of these issues so we can keep track of what is\n> necessary, possible, and/or \"easy\".\n\nHere are a couple of \"items\" I keep wondering about:\n\n* To what extend would we be able to use the operating systems locale\nfacilities? Besides the fact that some systems are deficient or broken one\nway or another, POSIX really doesn't provide much besides \"given two\nstrings, which one is greater\", and then only on a per-process basis.\nWe'd really need more that, see also LIKE indexing issues, and indexing in\ngeneral.\n\n* Client support: A lot of language environments provide pretty smooth\nUnicode support these days, e.g., Java, Perl 5.6, and I think that C99 has\nalso made some strides. So while \"we can store stuff in any character set\nyou want\" is great, it's really no good if it doesn't work transparently\nwith the client interfaces. At least something to keep in mind.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 20 Jun 2000 18:43:44 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Character sets (Re: Re: Big 7.1 open items)" }, { "msg_contents": "> But how are you going to tell a genuine \"type\" from a character set? And\n> you might have to have three types for each charset. There'd be a lot of\n> redundancy and confusion regarding the input and output functions and\n> other pg_type attributes. No doubt there's something to be learned from\n> the type system, but character sets have different properties -- like\n> characters(!), collation rules, encoding \"translations\" and what not.\n> There is no doubt also need for different error handling. 
So I think that\n> just dumping every character set into pg_type is not a good idea. That's\n> almost equivalent to having separate types for char(6), char(7), etc.\n> \n> Instead, I'd suggest that character sets become separate objects. A\n> character entity would carry around its character set in its header\n> somehow. Consider a string concatenation function, being invoked with two\n> arguments of the same exotic character set. Using the type system only\n> you'd have to either provide a function signature for all combinations of\n> characters sets or you'd have to cast them up to SQL_TEXT, concatenate\n> them and cast them back to the original charset. A smarter concatentation\n> function instead might notice that both arguments are of the same\n> character set and simply paste them together right there.\n\nIntersting idea. But what about collations? SQL allows to assign a\ncollation different from the default one to a character set on the\nfly. Should we make collations as separate obejcts as well?\n\n> Here are a couple of \"items\" I keep wondering about:\n> \n> * To what extend would we be able to use the operating systems locale\n> facilities? Besides the fact that some systems are deficient or broken one\n> way or another, POSIX really doesn't provide much besides \"given two\n> strings, which one is greater\", and then only on a per-process basis.\n> We'd really need more that, see also LIKE indexing issues, and indexing in\n> general.\n\nCorrect. I'd suggest completely getting ride of OS's locale.\n\n> * Client support: A lot of language environments provide pretty smooth\n> Unicode support these days, e.g., Java, Perl 5.6, and I think that C99 has\n> also made some strides. So while \"we can store stuff in any character set\n> you want\" is great, it's really no good if it doesn't work transparently\n> with the client interfaces. At least something to keep in mind.\n\nDo you suggest that we should convert everyting into Unicode and store\nthem into DB?\n--\nTatsuo Ishii\n", "msg_date": "Wed, 21 Jun 2000 15:19:17 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Character sets (Re: Re: Big 7.1 open items)" }, { "msg_contents": "> Yuck. That is what is says, all right :(\n> \n> Date says that SQL_TEXT is required to have two things:\n> 1) all characters used in the SQL language itself (which is what I\n> recalled)\n> \n> 2) Every other character from every character set in the installation.\n\nDoesn't it say \"charcter repertory\", rather than character set? I\nthink it would be possible to let our SQL_TEXT support every character\nrepertories in the world, if we use Unicode or Mule internal code for\nthat.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 21 Jun 2000 15:20:00 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "SQL_TEXT (Re: Re: Big 7.1 open items)" }, { "msg_contents": "> > Date says that SQL_TEXT is required to have two things:\n> > 1) all characters used in the SQL language itself (which is what I\n> > recalled)\n> > 2) Every other character from every character set in the \n> > installation.\n> Doesn't it say \"charcter repertory\", rather than character set? I\n> think it would be possible to let our SQL_TEXT support every character\n> repertories in the world, if we use Unicode or Mule internal code for\n> that.\n\nI think that \"character set\" and \"character repertoire\" are synonymous\n(at least I am interpreting them that way). 
SQL99 makes a slight\ndistinction, in that \"repertoire\" is a \"set\" in a specific context of\napplication.\n\nI'm starting to look at the SQL99 doc. I am going to try to read the doc\nas if SQL_TEXT is a placeholder for \"any allowed character set\", not\n\"all character sets simultaneously\" and see if that works. \n\nSince there are a wide range of encodings to choose from, and since most\ncharacter sets can not be translated to another random character set,\nhaving SQL_TEXT usefully require all sets present simultaneously seems a\nbit of a stretch.\n\nI'm also not going to try to understand the complete doc before having a\ntrial solution; we can extend/modify/redefine/throw away the trial\nsolution as we understand the spec better.\n\nWhile I'm thinking about it: afaict, if we have the ability to load\nmultiple character sets simultaneously, we will want to have *one* of\nthose mapped in as the \"default character set\" for an installation or\ndatabase. So we might want to statically link that one in, while the\nothers get loaded dynamically.\n\n - Thomas\n", "msg_date": "Sun, 25 Jun 2000 04:55:08 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL_TEXT (Re: Re: Big 7.1 open items)" }, { "msg_contents": "> I think that \"character set\" and \"character repertoire\" are synonymous\n> (at least I am interpreting them that way). \n\nIs it? I think that a \"character set\" consists of a \"character\nrepertoire\" and a \"form of use\".\n\n> SQL99 makes a slight\n> distinction, in that \"repertoire\" is a \"set\" in a specific context of\n> application.\n\nI don't understand this probably due to my English ability. Can you\ntell me where I can get SQL99 on line doc so that I could study it\nmore?\n\n> While I'm thinking about it: afaict, if we have the ability to load\n> multiple character sets simultaneously, we will want to have *one* of\n> those mapped in as the \"default character set\" for an installation or\n> database. So we might want to statically link that one in, while the\n> others get loaded dynamically.\n\nRight. While I am not sure we could statically link it, there could be\n\"default character set\" in a installation or database. Also we could\nhave another \"default character set\" for NATIONAL CHARACTER. It seems\nthat those \"default character set\" are actually same acording to the\nstandard?\n--\nTatsuo Ishii\n", "msg_date": "Sun, 25 Jun 2000 21:21:27 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL_TEXT (Re: Re: Big 7.1 open items)" }, { "msg_contents": "> I think that a \"character set\" consists of a \"character\n> repertoire\" and a \"form of use\".\n> > SQL99 makes a slight distinction, in that \"repertoire\" is a \"set\" in \n> > a specific context of application.\n> I don't understand this probably due to my English ability.\n\nI'm pretty sure that it is due to convoluted standards ;)\n\n> Can you tell me where I can get SQL99 on line doc so that I could \n> study it more?\n\n From Peter E:\n\nftp://jerry.ece.umassd.edu/isowg3/x3h2/Standards/ansi-iso-9075-[12345]-1999.txt\n\n(a set of 5 files) which are also available in PDF at the same site.\n\n - Thomas\n", "msg_date": "Sun, 25 Jun 2000 16:40:27 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL_TEXT (Re: Re: Big 7.1 open items)" } ]
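For concreteness while the feature list is being drawn up: most of the items above correspond to surface syntax that SQL92/SQL99 already spell out. The sketch below is only a rough rendering of that standard syntax, not anything PostgreSQL 7.0 accepts, and the character set, collation, table and column names in it are all invented:

    CREATE CHARACTER SET my_charset GET LATIN1;

    CREATE TABLE titles (
        id     INTEGER,
        title  CHARACTER VARYING(80) CHARACTER SET my_charset,
        alt    NATIONAL CHARACTER VARYING(80)       -- i.e. NCHAR VARYING
    );

    INSERT INTO titles VALUES (1, _my_charset'some text', N'some text');

    SELECT title FROM titles ORDER BY title COLLATE my_collation;

Whichever way the implementation goes -- one pg_type entry per character set, or character sets as separate catalog objects carrying repertoire, form-of-use and collation information -- this DDL, the _charset and N'' literal introducers, and the COLLATE clause are what the parser will eventually have to map onto the internal representation.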
[ { "msg_contents": "> These (docbook) tools produced a lot of warnings, is this default?\n\nafaik, no. I've got a Mandrake 7.0.x machine at home, and a 7.0 laptop\nwhich is apparently not quite the same thing since one was upgraded from\n6.1 and the other was a clean install. But my laptop produces *no*\nwarnings or errors when building the Postgres 7.0.0 docs. Not sure about\nthe current state (e.g. pg-7.0.2) but it may be slightly broken. Will\ncheck -- actually, I've just checked and you are right, the current tree\nproduces fatal errors from three files, though they will be easy to fix\nup.\n\n> > > I think I could test Mandrake RPMS from time to time.\n> > Do you work with the cooker folks? If so, then perhaps you could \n> > test and post Mandrake-specific RPMs which I've built? That would be \n> > a big help...\n> I'm not actually running the cooker distro, but I often commit\n> RPMS to the cooker. I will upgrade to 7.1 next week, so that I'm\n> not that far from the cooker. I had some contact with Lenny Cartier\n> (Mandrakesoft) and think he would accept me as a maintainer for\n> postgresql.\n> I would post your RPMS if you want.\n\nGreat! I'm cc'ing Lamar Owens, who is the primary Redhat RPM maintainer.\nMy last try at building pg-7.0 RPMs for Mandrake from Lamar's RH\n.src.rpm was a complete success with no additional patches required,\nthough I have not had a chance to try the pg-7.0.2 RPM build and have\nnot posted the results.\n\nFor the Mandrake stuff, perhaps we can do it as a team; the first\nimportant step is for someone to start \"babysitting\" it, just building\nand posting the RPMs from Lamar's sources, then posting the results at\nMandrake's web site.\n\nbtw, Lamar, can we put the .ps.gz doc files into the RPM distro, if you\nhaven't already done so? Or should we break them out into separate RPMs,\nsay for hardcopy and hardcopy-A4 or something like that?\n\n - Thomas\n", "msg_date": "Wed, 14 Jun 2000 12:35:24 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Building PostgreSQL 7.0.1 documentation" }, { "msg_contents": "Thomas Lockhart wrote:\n> > > > I think I could test Mandrake RPMS from time to time.\n> > > Do you work with the cooker folks? If so, then perhaps you could\n> > > test and post Mandrake-specific RPMs which I've built? That would be\n> > > a big help...\n> > I'm not actually running the cooker distro, but I often commit\n> > RPMS to the cooker. I will upgrade to 7.1 next week, so that I'm\n> > not that far from the cooker. I had some contact with Lenny Cartier\n> > (Mandrakesoft) and think he would accept me as a maintainer for\n> > postgresql.\n> > I would post your RPMS if you want.\n \n> Great! I'm cc'ing Lamar Owens, who is the primary Redhat RPM maintainer.\n> My last try at building pg-7.0 RPMs for Mandrake from Lamar's RH\n> .src.rpm was a complete success with no additional patches required,\n> though I have not had a chance to try the pg-7.0.2 RPM build and have\n> not posted the results.\n\nInteresting to see any differences, as the spec file itself has changed\nsome since the 7.0 series. I am trying to make this spec file buildable\non more than just RedHat -- so, testers are more than welcome. I\nalready am getting contributions for at least one non-RedHat\ndistribution for PowerPC from Murray Todd Williams (which RPM's I am\nputting on ftp.postgresql.org). The more the merrier! 
The goal, of\ncourse, is a single source RPM that everybody simply rebuilds, which\nspec file simply Does The Right Thing (TM) for each distribution. I am\nvery grateful to the fine folks at RedHat for their help in making the\nportability goal a little closer.\n \n> For the Mandrake stuff, perhaps we can do it as a team; the first\n> important step is for someone to start \"babysitting\" it, just building\n> and posting the RPMs from Lamar's sources, then posting the results at\n> Mandrake's web site.\n\nYes! Try a simple --rebuild of my SRPM first -- then, send me a diff of\nwhat it takes to get it to build right (including policy-type things,\nsuch as bzip instead of gzip, or pgcc instead of egcs, or different\ninitscripts directory structure, etc.). Be sure to include any other\npatches that may need to be included, and I'll try to get things\nincorporated right away. As I only have RedHat 6.2+ boxen at the moment\n(soon to get access to a SPARC, hopefully), I can only build and test on\nthem. \n\nI was at one point going to get and install SuSE, Mandrake, Caldera, AND\nRedHat all on one box, but that plan proved unweildy at best. Better to\nget a SuSE person, a Mandrake (since Mandrake is diverging more and more\nfrom its RedHat roots) person, and a Caldera person to be that\ndistribution's 'expert', who then helps me take the RedHat SRPM and\nmassage it to rebuild smoothly on each distribution. Plus, we can then\nget people with Alphas, SPARCs, PPC's, or even IA-64's to do builds and\nsuggest changes. As well as getting reports from other RPM-based\ndistributions....\n\nAlso, when submitting patches to the spec file, include a copy of the\noutput of 'rpm --showrc', so I can see what kind of RPM environment you\nhave.\n \n> btw, Lamar, can we put the .ps.gz doc files into the RPM distro, if you\n> haven't already done so? Or should we break them out into separate RPMs,\n> say for hardcopy and hardcopy-A4 or something like that?\n\nI can, if that's what's wanted. I don't have a preference as to whether\nit's still in the main package, or in a separate docs package, as, prior\nto 7.0.1 the postscript stuff was in the main package. Just let me know\nwhere the source .ps.gz files need to come from, so I don't get the\nwrong ones. :-)\n\nIt would be preferable to have a versioned tarball of them available, as\nthere are already a larger number of source files in the RPM than I\nwould like.\n\nLet me know....\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 14 Jun 2000 12:53:48 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building PostgreSQL 7.0.1 documentation" }, { "msg_contents": "Lamar Owen wrote:\n \n> > For the Mandrake stuff, perhaps we can do it as a team; the first\n> > important step is for someone to start \"babysitting\" it, just building\n> > and posting the RPMs from Lamar's sources, then posting the results at\n> > Mandrake's web site.\n> \n> Yes! Try a simple --rebuild of my SRPM first -- then, send me a diff of\n> what it takes to get it to build right (including policy-type things,\n> such as bzip instead of gzip, or pgcc instead of egcs, or different\n> initscripts directory structure, etc.). Be sure to include any other\n> patches that may need to be included, and I'll try to get things\n> incorporated right away. As I only have RedHat 6.2+ boxen at the moment\n> (soon to get access to a SPARC, hopefully), I can only build and test on\n> them.\n\nI did so, I build packages for Mandrake. 
Your SPEC file is great and\nI had only to do some little changes for full Mandrake compliance.\n\n- pack all source files with bzip2, even those you left uncompressed\n- extract the .jar files and the init/logrotate scripts at their destination\n- bzip2 all man pages\n\nThat's all, I will upload the SRPM to the Mandrake incoming directory\nin the later evening and will notify Lenny Cartier (MandrakeSoft).\n\nI've attached my spec file.\n\nJan", "msg_date": "Thu, 15 Jun 2000 16:55:20 +0200", "msg_from": "Jan Dittberner <[email protected]>", "msg_from_op": false, "msg_subject": "Mandrake RPMS, was \"Building PostgreSQL 7.0.1 documentation\"" }, { "msg_contents": "> I did so, I build packages for Mandrake. Your SPEC file is great and\n> I had only to do some little changes for full Mandrake compliance.\n\nGreat!\n\n> - pack all source files with bzip2, even those you left uncompressed\n\nOoh. That would seem to be at odds with the \"pristine source\" philosophy\nof RPM, but I'll guess that this topic has been covered extensively\nelsewhere? Are we sure that this is not just a suggestion from Mandrake\nthat the sources *should* be available with bzip2 compression?\n\n> - extract the .jar files and the init/logrotate scripts at their \n> destination\n\nIs this related to the previous point? I haven't looked at the details,\nbut I'm not sure why Mandrake and RH would need to be different here.\n\n> - bzip2 all man pages\n\nHmm. This is for the man pages after installation from the RPM, right?\n\nAll in all these are pretty small changes for two independent distros,\nthough it is a shame that even these diffs need to exist. The \".bz2 on\ninput files\" is particularly annoying, since those compression formats\nare *not* available in the original tarball distros.\n\n> That's all, I will upload the SRPM to the Mandrake incoming directory\n> in the later evening and will notify Lenny Cartier (MandrakeSoft).\n\nLamar, would it be possible to carry (some of) these differences in the\nsame spec file? *So much* is in common that it would be nice to have the\ninfo together. But in either case, this is great progress. Thanks Jan!\n\n - Thomas\n", "msg_date": "Thu, 15 Jun 2000 15:24:56 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Mandrake RPMS, was \"Building PostgreSQL 7.0.1 documentation\"" }, { "msg_contents": "Jan Dittberner wrote:\n> Lamar Owen wrote:\n> > > For the Mandrake stuff, perhaps we can do it as a team; the first\n> > > important step is for someone to start \"babysitting\" it, just building\n> > > and posting the RPMs from Lamar's sources, then posting the results at\n> > > Mandrake's web site.\n\n> > Yes! Try a simple --rebuild of my SRPM first -- then, send me a diff of\n \n> I did so, I build packages for Mandrake. Your SPEC file is great and\n> I had only to do some little changes for full Mandrake compliance.\n \n> - pack all source files with bzip2, even those you left uncompressed\n> - extract the .jar files and the init/logrotate scripts at their destination\n> - bzip2 all man pages\n \n> That's all, I will upload the SRPM to the Mandrake incoming directory\n> in the later evening and will notify Lenny Cartier (MandrakeSoft).\n\nCan you send me the result of 'rpm --showrc'? I'm curious if Mandrake\nis using the macro package to determine the use of gzip versus bzip2....\nOtherwise, if the _vendor macro is set to 'mandrake', I can use that in\na conditional. 
\n\nThanks for the rebuild and the spec!\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 15 Jun 2000 11:46:52 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mandrake RPMS, was \"Building PostgreSQL 7.0.1 documentation\"" }, { "msg_contents": "Lamar Owen wrote:\n> \n> Can you send me the result of 'rpm --showrc'? I'm curious if Mandrake\n> is using the macro package to determine the use of gzip versus bzip2....\n> Otherwise, if the _vendor macro is set to 'mandrake', I can use that in\n> a conditional.\n> \n\nHere it is:\n\n--8<---------------------------------------------------------------------\n\nARCHITECTURE AND OS:\nbuild arch : i586\ncompatible build archs: i586 k6 i486 i386 noarch\nbuild os : Linux\ncompatible build os's : Linux\ninstall arch : i586\ninstall os : Linux\ncompatible archs : i586 k6 i486 i386 noarch\ncompatible os's : Linux\n\nRPMRC VALUES:\nmacrofiles : /usr/lib/rpm/macros:/usr/lib/rpm/i586-linux/macros:/etc/rpm/macros:/etc/rpm/i586-linux/macros:~/.rpmmacros\noptflags : -O3 -fomit-frame-pointer -fno-exceptions -fno-rtti -pipe -s -mpentium -mcpu=pentium -march=pentium -ffast-math -fexpensive-optimizations -malign-loops=2 -malign-jumps=2 -malign-functions=2 -mpreferred-stack-boundary=2\n========================\n-14: GNUconfigure(MC:)\t\n %{__libtoolize} --copy --force \n %{__aclocal} \n %{__autoheader} \n %{__automake} \n %{__autoconf} \n %{-C:_mydir=\"`pwd`\"; %{-M:%{__mkdir} -p %{-C*};} cd %{-C*};} \n CFLAGS=\"%{optflags}\" %{-C:${_mydir}}%{!-C:.}/configure %{_target_platform} --prefix=%{_prefix} %* %{-C:cd ${_mydir}; unset _mydir}\n-14: __aclocal\taclocal\n-14: __autoconf\tautoconf\n-14: __autoheader\tautoheader\n-14: __automake\tautomake\n-14: __bzip2\t%{_bzip2bin}\n-14: __cat\t/bin/cat\n-14: __chgrp\t/bin/chgrp\n-14: __chmod\t/bin/chmod\n-14: __chown\t/bin/chown\n-14: __cp\t/bin/cp\n-14: __cpio\t/bin/cpio\n-14: __find_provides\t/usr/lib/rpm/find-provides\n-14: __find_requires\t/usr/lib/rpm/find-requires\n-14: __gzip\t%{_gzipbin}\n-14: __id\t/usr/bin/id\n-14: __install\t%(which install)\n-14: __libtoolize\tlibtoolize\n-14: __make\t/usr/bin/make\n-14: __mkdir\t/bin/mkdir\n-14: __mv\t/bin/mv\n-14: __patch\t/usr/bin/patch\n-14: __ranlib\t%(which ranlib)\n-14: __rm\t/bin/rm\n-14: __strip\t%(which strip)\n-14: __tar\t/bin/tar\n-14: _arch\ti386\n-14: _bindir\t%{_exec_prefix}/bin\n-14: _build\t%{_host}\n-14: _build_alias\t%{_host_alias}\n-14: _build_cpu\t%{_host_cpu}\n-14: _build_os\t%{_host_os}\n-14: _build_vendor\t%{_host_vendor}\n-14: _builddir\t%{_topdir}/BUILD\n-14: _buildshell\t/bin/sh\n-14: _bzip2bin\t/usr/bin/bzip2\n-14: _datadir\t%{_prefix}/share\n-14: _dbpath\t%{_var}/lib/rpm\n-14: _defaultdocdir\t%{_usr}/doc\n-14: _exec_prefix\t%{_prefix}\n-14: _fixgroup\t[ `%{__id} -u` = '0' ] && %{__chgrp} -Rhf root\n-14: _fixowner\t[ `%{__id} -u` = '0' ] && %{__chown} -Rhf root\n-14: _fixperms\t%{__chmod} -Rf a+rX,g-w,o-w\n-14: _gzipbin\t/bin/gzip\n-14: _host\ti686-pc-linux-gnu\n-14: _host_alias\ti686-pc-linux-gnu\n-14: _host_cpu\ti686\n-14: _host_os\tlinux-gnu\n-14: _host_vendor\tpc\n-14: _includedir\t%{_prefix}/include\n-14: _infodir\t%{_prefix}/info\n-14: _instchangelog\t5\n-14: _libdir\t%{_exec_prefix}/lib\n-14: _libexecdir\t%{_exec_prefix}/libexec\n-14: _localstatedir\t%{_prefix}/var\n-14: _mandir\t%{_prefix}/man\n-14: _oldincludedir\t/usr/include\n-14: _os\tlinux\n-14: _packager\tJan Dittberner <[email protected]>\n-14: _pgpbin\t/usr/bin/pgp\n-14: 
_preScriptEnvironment\t\n\tRPM_SOURCE_DIR=\"%{_sourcedir}\"\n\tRPM_BUILD_DIR=\"%{_builddir}\"\n\tRPM_OPT_FLAGS=\"%{optflags}\"\n\tRPM_ARCH=\"%{_arch}\"\n\tRPM_OS=\"%{_os}\"\n\texport RPM_SOURCE_DIR RPM_BUILD_DIR RPM_OPT_FLAGS RPM_ARCH RPM_OS\n\tRPM_DOC_DIR=\"%{_docdir}\"\n\texport RPM_DOC_DIR\n\tRPM_PACKAGE_NAME=\"%{name}\"\n\tRPM_PACKAGE_VERSION=\"%{version}\"\n\tRPM_PACKAGE_RELEASE=\"%{release}\"\n\texport RPM_PACKAGE_NAME RPM_PACKAGE_VERSION RPM_PACKAGE_RELEASE\n\t%{?buildroot:RPM_BUILD_ROOT=\"%{buildroot}\"\n\texport RPM_BUILD_ROOT\n\t}\n-14: _prefix\t/usr\n-14: _rpmdir\t%{_topdir}/RPMS\n-14: _rpmfilename\t%%{ARCH}/%%{NAME}-%%{VERSION}-%%{RELEASE}.%%{ARCH}.rpm\n-14: _sbindir\t%{_exec_prefix}/sbin\n-14: _sharedstatedir\t%{_prefix}/com\n-14: _signature\tnone\n-14: _sourcedir\t%{_topdir}/SOURCES\n-14: _specdir\t%{_topdir}/SPECS\n-14: _srcrpmdir\t%{_topdir}/SRPMS\n-14: _sysconfdir\t%{_prefix}/etc\n-11: _target\ti586-linux\n-14: _target_alias\t%{_host_alias}\n-11= _target_cpu\ti586\n-11= _target_os\tlinux\n-14: _target_platform\t%{_target_cpu}-%{_vendor}-%{_target_os}\n-14: _target_vendor\t%{_host_vendor}\n-14: _tmppath\t%{_var}/tmp\n-14: _topdir\t/home/jd9/RPM\n-14: _usr\t/usr\n-14: _usrsrc\t%{_usr}/src\n-14: _var\t/var\n-14: _vendor\tmandrake\n-14: configure\t\n %{?__libtoolize:[ -f configure.in ] && %{__libtoolize} --copy --force} \n CFLAGS=\"%{optflags}\" ./configure %{_target_platform} --prefix=%{_prefix}\n-14: nil\t%{!?nil}\n-11: optflags\t-O3 -fomit-frame-pointer -fno-exceptions -fno-rtti -pipe -s -mpentium -mcpu=pentium -march=pentium -ffast-math -fexpensive-optimizations -malign-loops=2 -malign-jumps=2 -malign-functions=2 -mpreferred-stack-boundary=2\n-14: perl_archlib\t%(eval \"`perl -V:installarchlib`\"; echo $installarchlib)\n-14: perl_sitearch\t%(eval \"`perl -V:installsitearch`\"; echo $installsitearch)\n-14: requires_eq\t%(LC_ALL=\"C\" rpm -q --queryformat 'Requires:%%{NAME} = %%{VERSION}' %1| grep -v \"is not\")\n-15: sigtype\tnone\n======================== active 90 empty 0\n\n--8<---------------------------------------------------------------------\n", "msg_date": "Thu, 15 Jun 2000 17:56:45 +0200", "msg_from": "Jan Dittberner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mandrake RPMS, was \"Building PostgreSQL 7.0.1 documentation\"" }, { "msg_contents": "Jan Dittberner wrote:\n> Lamar Owen wrote:\n> > Can you send me the result of 'rpm --showrc'? I'm curious if Mandrake\n> > is using the macro package to determine the use of gzip versus bzip2....\n> > Otherwise, if the _vendor macro is set to 'mandrake', I can use that in\n> > a conditional.\n> >\n \n> Here it is:\n> -14: __bzip2 %{_bzip2bin}\n> -14: __gzip %{_gzipbin}\n> -14: _bzip2bin /usr/bin/bzip2\n> -14: _gzipbin /bin/gzip\n\nCan't use them... :-(\n\n> -14: _vendor mandrake\n\nBut I _can_ use this. RedHat sets this, predictably, to redhat. Most\nother things are easy -- there are macros for nearly everything you need\nthat hide all details (such as the /usr/src/redhat versus /usr/src/RPM\nand others deal).\n\nThus, Thomas, to answer your question -- YES, we can have a single spec\nfile do both systems. Just some conditionals, and judicious use of the\ndefined macros and envvars.\n\nAnd this is why I need the rpm --showrc results with spec file diffs (or\nexamples)... 
:-).\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 15 Jun 2000 12:10:47 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mandrake RPMS, was \"Building PostgreSQL 7.0.1 documentation\"" }, { "msg_contents": "I can't find the original message, so I'll have to do some indirect answering.\n\n\nLamar Owen <[email protected]> writes:\n\n> Jan Dittberner wrote:\n> > Lamar Owen wrote:\n> > > > For the Mandrake stuff, perhaps we can do it as a team; the first\n> > > > important step is for someone to start \"babysitting\" it, just building\n> > > > and posting the RPMs from Lamar's sources, then posting the results at\n> > > > Mandrake's web site.\n> \n> > > Yes! Try a simple --rebuild of my SRPM first -- then, send me a diff of\n> \n> > I did so, I build packages for Mandrake. Your SPEC file is great and\n> > I had only to do some little changes for full Mandrake compliance.\n> \n> > - pack all source files with bzip2, even those you left\n> > uncompressed\n\nIf the postgresql team releases bz2 - good. If there is no file\n\nftp://foo.bar.com/xyzzy.tar.bz2\n\nbut just\n\nftp://foo.bar.com/xyzzy.tar.gz\n\nthen using that URL is just lying. Don't do that. Surely Mandrake\ndoesn't do that?\n\n> > - extract the .jar files and the init/logrotate scripts at their destination\n> > - bzip2 all man pages\n\nDon't. This should be handled automatically with the \"--buildpolicy\"\nflag[1]. like we do. Besides, bzipping man pages is in the \"use bzip2,\neven though there is no point in it\" category.\n\n[1] Like this: \"rpm -ba --buildpolicy redhat postgresql.spec\"\n\nNo need to strip or compress explicitly.\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "15 Jun 2000 12:14:32 -0400", "msg_from": "[email protected] (Trond Eivind=?iso-8859-1?q?_Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Re: Mandrake RPMS, was \"Building PostgreSQL 7.0.1 documentation\"" }, { "msg_contents": "Trond Eivind Glomsr�d wrote:\n> \n> I can't find the original message, so I'll have to do some indirect answering.\n\n[If you wish, I'll pipe you the original]\n> Lamar Owen <[email protected]> writes:\n> > Jan Dittberner wrote:\n> > > I did so, I build packages for Mandrake. Your SPEC file is great and\n> > > I had only to do some little changes for full Mandrake compliance.\n> >\n> > > - pack all source files with bzip2, even those you left\n> > > uncompressed\n \n> If the postgresql team releases bz2 - good. If there is no file\n \n> ftp://foo.bar.com/xyzzy.tar.bz2\n \n> but just\n \n> ftp://foo.bar.com/xyzzy.tar.gz\n \n> then using that URL is just lying. Don't do that. Surely Mandrake\n> doesn't do that?\n\nThis source re-compress is not needed, really. After all, the whole\nsource RPM is compressed.... And, yes, as Thomas already commented,\nthis violates the pristine source issue. No, ftp.postgresql.org doesn't\ndo bzip2, yet (Marc??)\n \n> > > - bzip2 all man pages\n \n> Don't. This should be handled automatically with the \"--buildpolicy\"\n> flag[1]. like we do. Besides, bzipping man pages is in the \"use bzip2,\n> even though there is no point in it\" category.\n \n> [1] Like this: \"rpm -ba --buildpolicy redhat postgresql.spec\"\n \n> No need to strip or compress explicitly.\n\nWhich is why you had pulled the stripping stuff out of the spec... also\nexplains a portion of why the insistence on %{_mandir}, as that may be a\nclue to rpm to apply 'policy' ?? 
\n\nOk, now just _where_ is this '--buildpolicy' deal documented -- it sure\nisn't in the man page on my 6.2 box. What sort of config file is used,\netc. I certainly like the sound of it, but have never heard of it\nbefore.\n\nIf it is a new version of RPM thingy, well.... ;-)\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 15 Jun 2000 12:25:25 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Mandrake RPMS, was \"Building PostgreSQL 7.0.1 documentation\"" }, { "msg_contents": "Jan Dittberner wrote:\n> - pack all source files with bzip2, even those you left uncompressed\n> - extract the .jar files and the init/logrotate scripts at their destination\n> - bzip2 all man pages\n\nYou know, I wonder why Mandrake is insisting on bzip2'ing everything.\nCan you point me to Mandrake's packaging policy statement, please?\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 16 Jun 2000 20:41:30 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mandrake RPMS, was \"Building PostgreSQL 7.0.1 documentation\"" }, { "msg_contents": "Thomas Lockhart wrote:\n> Lamar, would it be possible to carry (some of) these differences in the\n> same spec file? *So much* is in common that it would be nice to have the\n> info together. But in either case, this is great progress. Thanks Jan!\n\nThomas, could you or Jan (Dittberner) send me a tarball of the mandrake\n/usr/lib/rpm directory, in particular, any files that start with 'brp-'?\n\nThis will help things.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 23 Jun 2000 16:12:07 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mandrake RPMS, was \"Building PostgreSQL 7.0.1 documentation\"" } ]
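The single-spec-file idea Lamar describes comes down to a few conditionals keyed on %{_vendor}, which the --showrc output above shows as 'mandrake' on Jan's box and which is 'redhat' on a stock Red Hat system. The fragment below is only an illustration of the kind of branch involved, not a piece of the maintained spec file: it assumes an rpm whose %if handles string comparison, the manext macro is made up for the example, and psql.1 merely stands in for any of the package's man pages:

    %if "%{_vendor}" == "mandrake"
    # Mandrake policy: bzip2-compressed man pages
    %define manext bz2
    %else
    %define manext gz
    %endif

    %files
    %{_mandir}/man1/psql.1.%{manext}

Where the distribution's own build policy (the --buildpolicy / brp- scripts Trond mentions) already handles the compression, a glob such as %{_mandir}/man1/psql.1* in %files sidesteps the difference entirely, leaving the %{_vendor} conditional for genuinely divergent choices such as initscript locations.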
[ { "msg_contents": "Hello,\n\nThis is a report on my attempts to install unixODBC (1.8.9) and postgres\nodbc driver on IRIX 6.5.7 64bit using the native compiler. There are a\nnumber of changes that were necessary to be made on both the\nconfiguration files and source to get it to compile. patches are\nattached. some of these were derived from 1.8.8, but they all apply\nsuccessfully on 1.8.9. \n\nUnixODBC\n--------\n\n1- you can ignore the changes to acinclude.m4. they are a hack to\naugment the qt libraries for my platform. a more thoughtful allowance\nfor this possibility though might be needed.\n\n2- I had to propagate USER_LDFLAGS into DataManager/Makefile.in,\nODBCConfig/Makefile.in and odbctest/Makefile.am adding it to LDADD\nflags. There is no reason not to and they were needed in my case to find\nall qt related libraries.\n\n3- The IRIX native compiler does not like having new lines in strings. I\nhad to delete spurious new lines from a few strings in\nDataManager/classLogin.cpp and DataManager/classISQL.cpp\n\n4- default values for function arguments were defined twice in a number\nof files. in the headers as well as in cpp files. IRIX compiler does not\nlike that besides it is a maintenance burden. I kept the default\narguments in the header files only (see patch).\n\n5- Needed to insert explicit type casts in some places as well as\nleading function prototypes (see patch).\n\nPostgres ODBC driver\n--------------------\n\n6- One bug that was hard to track was related to the postgres driver.\nThe driver defines #define Int4 long int; in psqlodbc.h. unfortunately,\nwhen compiling 64bit a long int is 8 bytes. for my setup I hacked it by\nchanging that to #define Int4 int; which I think is probably appropriate\non most platforms. But this type should really be determined at\nconfigure time.\n\nRegards\n\n\n-- \nMurad Nayal M.D. Ph.D.\nDepartment of Biochemistry and Molecular Biophysics\nCollege of Physicians and Surgeons of Columbia University\n630 West 168th Street. New York, NY 10032\nTel: 212-305-6884\tFax: 212-305-6926", "msg_date": "Wed, 14 Jun 2000 16:23:32 +0200", "msg_from": "Murad Nayal <[email protected]>", "msg_from_op": true, "msg_subject": "info on unixODBC/Postgres driver port to IRIX 6.5.7 64bit" }, { "msg_contents": "Murad Nayal wrote:\n\n> Hello,\n>\n> This is a report on my attempts to install unixODBC (1.8.9) and postgres\n> odbc driver on IRIX 6.5.7 64bit using the native compiler. There are a\n> number of changes that were necessary to be made on both the\n> configuration files and source to get it to compile. patches are\n> attached. some of these were derived from 1.8.8, but they all apply\n> successfully on 1.8.9.\n>\n\nThanks for that, I will apply the changes, I had checked unixODBC on a 64 bit alpha platform, but its not a suprise that some have escaped.\n\n--\nTo me vi is Zen. To use vi is to practice zen. Every command is\na koan. Profound to the user, unintelligible to the uninitiated.\nYou discover truth everytime you use it.\n -- [email protected]\n\n\n\n", "msg_date": "Thu, 15 Jun 2000 00:13:57 +0100", "msg_from": "Nick Gorham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [unixODBC-DEV] - info on unixODBC/Postgres driver port to IRIX\n\t6.5.7 64bit" }, { "msg_contents": "Is this patch based on the current snapshot? 
I have applied several\nunixODBC patches in the past several days that may already fix these\nproblems.\n\n\n> \n> \n> Hello,\n> \n> This is a report on my attempts to install unixODBC (1.8.9) and postgres\n> odbc driver on IRIX 6.5.7 64bit using the native compiler. There are a\n> number of changes that were necessary to be made on both the\n> configuration files and source to get it to compile. patches are\n> attached. some of these were derived from 1.8.8, but they all apply\n> successfully on 1.8.9. \n> \n> UnixODBC\n> --------\n> \n> 1- you can ignore the changes to acinclude.m4. they are a hack to\n> augment the qt libraries for my platform. a more thoughtful allowance\n> for this possibility though might be needed.\n> \n> 2- I had to propagate USER_LDFLAGS into DataManager/Makefile.in,\n> ODBCConfig/Makefile.in and odbctest/Makefile.am adding it to LDADD\n> flags. There is no reason not to and they were needed in my case to find\n> all qt related libraries.\n> \n> 3- The IRIX native compiler does not like having new lines in strings. I\n> had to delete spurious new lines from a few strings in\n> DataManager/classLogin.cpp and DataManager/classISQL.cpp\n> \n> 4- default values for function arguments were defined twice in a number\n> of files. in the headers as well as in cpp files. IRIX compiler does not\n> like that besides it is a maintenance burden. I kept the default\n> arguments in the header files only (see patch).\n> \n> 5- Needed to insert explicit type casts in some places as well as\n> leading function prototypes (see patch).\n> \n> Postgres ODBC driver\n> --------------------\n> \n> 6- One bug that was hard to track was related to the postgres driver.\n> The driver defines #define Int4 long int; in psqlodbc.h. unfortunately,\n> when compiling 64bit a long int is 8 bytes. for my setup I hacked it by\n> changing that to #define Int4 int; which I think is probably appropriate\n> on most platforms. But this type should really be determined at\n> configure time.\n> \n> Regards\n> \n> \n> -- \n> Murad Nayal M.D. Ph.D.\n> Department of Biochemistry and Molecular Biophysics\n> College of Physicians and Surgeons of Columbia University\n> 630 West 168th Street. New York, NY 10032\n> Tel: 212-305-6884\tFax: 212-305-6926\n\n> *** ./acinclude.m4.bk1\tWed Jun 14 13:30:52 2000\n> --- ./acinclude.m4\tWed Jun 14 00:47:35 2000\n> ***************\n> *** 562,568 ****\n> fi\n> AC_MSG_CHECKING([for Qt])\n> \n> ! LIBQT=\"$LIBQT -lXext -lX11 $LIBSOCKET\"\n> ac_qt_includes=NO ac_qt_libraries=NO ac_qt_bindir=NO\n> qt_libraries=\"\"\n> qt_includes=\"\"\n> --- 562,568 ----\n> fi\n> AC_MSG_CHECKING([for Qt])\n> \n> ! LIBQT=\"$LIBQT -ljpeg -lSM -lICE -lXext -lX11 $LIBSOCKET\"\n> ac_qt_includes=NO ac_qt_libraries=NO ac_qt_bindir=NO\n> qt_libraries=\"\"\n> qt_includes=\"\"\n> ***************\n> *** 693,699 ****\n> AC_SUBST(QT_LDFLAGS)\n> AC_PATH_QT_MOC\n> \n> ! LIB_QT='-lqt $(LIBPNG) -lXext $(LIB_X11)'\n> AC_SUBST(LIB_QT)\n> \n> ])\n> --- 693,699 ----\n> AC_SUBST(QT_LDFLAGS)\n> AC_PATH_QT_MOC\n> \n> ! LIB_QT='-lqt $(LIBPNG) -ljpeg -lSM -lICE -lXext $(LIB_X11)'\n> AC_SUBST(LIB_QT)\n> \n> ])\n> *** ./DataManager/Makefile.in.bk1\tWed Jun 14 13:28:47 2000\n> --- ./DataManager/Makefile.in\tTue Jun 13 13:25:30 2000\n> ***************\n> *** 176,182 ****\n> \n> @QT_TRUE@INCLUDES = -I../include @QT_INCLUDES@\n> \n> ! 
@QT_TRUE@DataManager_LDADD = @X_LDFLAGS@ \t@QT_LDFLAGS@ \t@LIB_QT@ \t../odbcinst/libodbcinst.la \t../DriverManager/libodbc.la \n> \n> @QT_TRUE@DataManager_DEPENDANCIES = ../odbcinst/libodbcinst.la \t../DriverManager/libodbc.la \n> \n> --- 176,182 ----\n> \n> @QT_TRUE@INCLUDES = -I../include @QT_INCLUDES@\n> \n> ! @QT_TRUE@DataManager_LDADD = @X_LDFLAGS@ \t@QT_LDFLAGS@ @USER_LDFLAGS@ \t@LIB_QT@ \t../odbcinst/libodbcinst.la \t../DriverManager/libodbc.la \n> \n> @QT_TRUE@DataManager_DEPENDANCIES = ../odbcinst/libodbcinst.la \t../DriverManager/libodbc.la \n> \n> *** ./ODBCConfig/Makefile.in.bk1\tTue Jun 13 13:18:55 2000\n> --- ./ODBCConfig/Makefile.in\tTue Jun 13 13:25:00 2000\n> ***************\n> *** 176,182 ****\n> \n> @QT_TRUE@INCLUDES = -I../include @QT_INCLUDES@ -DSYSTEM_FILE_PATH=\\\"@sysconfdir@\\\" -DDEFLIB_PATH=\\\"@libdir@\\\" $(INCLTDL)\n> \n> ! @QT_TRUE@ODBCConfig_LDADD = @X_LDFLAGS@ \t@QT_LDFLAGS@ \t@LIB_QT@ \t../odbcinst/libodbcinst.la \t../extras/libodbcextraslc.la\n> \n> @QT_TRUE@ODBCConfig_DEPENDANCIES = ../odbcinst/libodbcinst.la ../extras/libodbcextraslc.la\n> \n> --- 176,182 ----\n> \n> @QT_TRUE@INCLUDES = -I../include @QT_INCLUDES@ -DSYSTEM_FILE_PATH=\\\"@sysconfdir@\\\" -DDEFLIB_PATH=\\\"@libdir@\\\" $(INCLTDL)\n> \n> ! @QT_TRUE@ODBCConfig_LDADD = @X_LDFLAGS@ \t@QT_LDFLAGS@ @USER_LDFLAGS@\t@LIB_QT@ \t../odbcinst/libodbcinst.la \t../extras/libodbcextraslc.la\n> \n> @QT_TRUE@ODBCConfig_DEPENDANCIES = ../odbcinst/libodbcinst.la ../extras/libodbcextraslc.la\n> \n> *** odbctest/Makefile.am.bk1\tWed Jun 14 15:16:22 2000\n> --- odbctest/Makefile.am\tWed Jun 14 15:16:37 2000\n> ***************\n> *** 6,11 ****\n> --- 6,12 ----\n> \n> odbctest_LDADD = @X_LDFLAGS@ \\\n> \t@QT_LDFLAGS@ \\\n> + \t@USER_LDFLAGS@ \\\n> \t@LIB_QT@ \\\n> \t../odbcinst/libodbcinst.la \\\n> \t../DriverManager/libodbc.la \n> *** ./DataManager/classLogin.cpp.bk1\tTue Jun 13 14:20:07 2000\n> --- ./DataManager/classLogin.cpp\tTue Jun 13 14:20:57 2000\n> ***************\n> *** 85,93 ****\n> \t\t\tQMessageBox::warning( this, \"Data Manager\", szBuf);\n> \t\telse\n> /* END TIM */\t\t\n> ! \t\t\tQMessageBox::warning( this, \"Data Manager\", \"Login failed\\n\\nThis may\n> ! be for one of these reasons;\\n1. invalid ID and Password\\n2. invalid Data\n> ! Source config\\n3. improper installation\" );\n> \t\treturn;\t\n> \t}\n> \n> --- 85,91 ----\n> \t\t\tQMessageBox::warning( this, \"Data Manager\", szBuf);\n> \t\telse\n> /* END TIM */\t\t\n> ! \t\t\tQMessageBox::warning( this, \"Data Manager\", \"Login failed\\n\\nThis may be for one of these reasons;\\n1. invalid ID and Password\\n2. invalid Data Source config\\n3. improper installation\" );\n> \t\treturn;\t\n> \t}\n> \n> *** ./DataManager/classISQL.cpp.bk1\tTue Jun 13 13:49:38 2000\n> --- ./DataManager/classISQL.cpp\tTue Jun 13 13:50:56 2000\n> ***************\n> *** 140,147 ****\n> \t// CREATE A STATEMENT\n> iRC = SQLAllocStmt( hDbc, &hStmt );\n> if( SQL_SUCCESS != iRC )\n> ! \t\tQMessageBox::critical( (QWidget *)this, \"Data Manager\", \"Failed:\n> ! SQLAllocStmt \" );\n> \n> if( SQL_SUCCESS != (iRC=SQLPrepare(hStmt,\n> (SQLCHAR*)(txtSQL->text().data()), SQL_NTS)) )\n> --- 140,146 ----\n> \t// CREATE A STATEMENT\n> iRC = SQLAllocStmt( hDbc, &hStmt );\n> if( SQL_SUCCESS != iRC )\n> ! 
\t\tQMessageBox::critical( (QWidget *)this, \"Data Manager\", \"Failed: SQLAllocStmt \" );\n> \n> if( SQL_SUCCESS != (iRC=SQLPrepare(hStmt,\n> (SQLCHAR*)(txtSQL->text().data()), SQL_NTS)) )\n> ***************\n> *** 151,158 ****\n> \tif (retcode == SQL_SUCCESS)\n> \t\tQMessageBox::critical( (QWidget *)this, \"Data Manager\", szBuf);\n> \telse\n> ! \t\tQMessageBox::critical( (QWidget *)this, \"Data Manager\", \"Failed:\n> ! SQLPrepare \" );\n> }\n> \n> // EXECUTE\n> --- 150,156 ----\n> \tif (retcode == SQL_SUCCESS)\n> \t\tQMessageBox::critical( (QWidget *)this, \"Data Manager\", szBuf);\n> \telse\n> ! \t\tQMessageBox::critical( (QWidget *)this, \"Data Manager\", \"Failed: SQLPrepare \" );\n> }\n> \n> // EXECUTE\n> ***************\n> *** 163,170 ****\n> \tif (retcode == SQL_SUCCESS)\n> \t\tQMessageBox::critical( (QWidget *)this, \"Data Manager\", szBuf);\n> \telse\n> ! \t\tQMessageBox::critical( (QWidget *)this, \"Data Manager\", \"Failed:\n> ! SQLExecute \" );\n> }\n> \n> // GET NUMBER OF ROWS AFFECTED\n> --- 161,167 ----\n> \tif (retcode == SQL_SUCCESS)\n> \t\tQMessageBox::critical( (QWidget *)this, \"Data Manager\", szBuf);\n> \telse\n> ! \t\tQMessageBox::critical( (QWidget *)this, \"Data Manager\", \"Failed: SQLExecute \" );\n> }\n> \n> // GET NUMBER OF ROWS AFFECTED\n> ***************\n> *** 186,193 ****\n> // FREE STATEMENT\n> iRC = SQLFreeStmt( hStmt, SQL_DROP );\n> if( SQL_SUCCESS != iRC )\n> ! \t\tQMessageBox::critical( (QWidget *)this, \"Data Manager\", \"Failed:\n> ! SQLFreeStmt \" );\n> \n> pTabBar->setCurrentTab( 1 );\n> txtResults->show();\n> --- 183,189 ----\n> // FREE STATEMENT\n> iRC = SQLFreeStmt( hStmt, SQL_DROP );\n> if( SQL_SUCCESS != iRC )\n> ! \t\tQMessageBox::critical( (QWidget *)this, \"Data Manager\", \"Failed: SQLFreeStmt \" );\n> \n> pTabBar->setCurrentTab( 1 );\n> txtResults->show();\n> *** ./odbctest/odbctest.cpp.bk1\tTue Jun 13 14:33:54 2000\n> --- ./odbctest/odbctest.cpp\tTue Jun 13 14:34:23 2000\n> ***************\n> *** 687,693 ****\n> return a.exec();\n> }\n> \n> ! Handle::Handle( int t, SQLHANDLE h, QString desc = NULL, SQLHANDLE stmt = SQL_NULL_HANDLE ) \n> { \n> \ttype = t; \n> \thandle = h; \n> --- 687,693 ----\n> return a.exec();\n> }\n> \n> ! Handle::Handle( int t, SQLHANDLE h, QString desc, SQLHANDLE stmt) \n> { \n> \ttype = t; \n> \thandle = h; \n> *** ./libltdl/ltdl.c.bk1\tSun Jun 11 23:35:11 2000\n> --- ./libltdl/ltdl.c\tSun Jun 11 23:35:34 2000\n> ***************\n> *** 210,218 ****\n> \n> /* dynamic linking with dlopen/dlsym */\n> \n> - #if HAVE_DLFCN_H\n> # include <dlfcn.h>\n> - #endif\n> \n> /*\n> * GLOBAL is not a good thing for us, it breaks perl amonst others\n> --- 210,216 ----\n> *** ./Drivers/txt/SQLStatistics.c.bk1\tFri May 26 16:55:21 2000\n> --- ./Drivers/txt/SQLStatistics.c\tFri May 26 16:55:43 2000\n> ***************\n> *** 58,64 ****\n> \t\treturn SQL_ERROR;\n> \t}\n> \n> ! \thStmt->hStmtExtras->hBoundCols\t= _CreateBoundCols( hStmt );\n> \n> logPushMsg( hStmt->hLog, __FILE__, __FILE__, __LINE__, LOG_INFO, LOG_INFO, \"SQL_SUCCESS\" );\n> \treturn SQL_SUCCESS;\n> --- 58,64 ----\n> \t\treturn SQL_ERROR;\n> \t}\n> \n> ! 
\thStmt->hStmtExtras->hBoundCols\t= _CreateBoundCols((SQLHSTMT) hStmt );\n> \n> logPushMsg( hStmt->hLog, __FILE__, __FILE__, __LINE__, LOG_INFO, LOG_INFO, \"SQL_SUCCESS\" );\n> \treturn SQL_SUCCESS;\n> *** ./Drivers/txt/SQLColumns.c.bk1\tFri May 26 16:52:03 2000\n> --- ./Drivers/txt/SQLColumns.c\tFri May 26 16:51:41 2000\n> ***************\n> *** 61,67 ****\n> \t\tlogPushMsg( hStmt->hLog, __FILE__, __FILE__, __LINE__, LOG_WARNING, LOG_WARNING, hStmt->szSqlMsg );\n> \t\treturn SQL_ERROR;\n> \t}\n> ! \thStmt->hStmtExtras->hBoundCols\t= _CreateBoundCols( hStmt );\n> \n> logPushMsg( hStmt->hLog, __FILE__, __FILE__, __LINE__, LOG_INFO, LOG_INFO, \"SQL_SUCCESS\" );\n> return SQL_SUCCESS;\n> --- 61,67 ----\n> \t\tlogPushMsg( hStmt->hLog, __FILE__, __FILE__, __LINE__, LOG_WARNING, LOG_WARNING, hStmt->szSqlMsg );\n> \t\treturn SQL_ERROR;\n> \t}\n> ! \thStmt->hStmtExtras->hBoundCols\t= _CreateBoundCols((SQLHSTMT) hStmt );\n> \n> logPushMsg( hStmt->hLog, __FILE__, __FILE__, __LINE__, LOG_INFO, LOG_INFO, \"SQL_SUCCESS\" );\n> return SQL_SUCCESS;\n> *** ./Drivers/txt/SQLSpecialColumns.c.bk1\tFri May 26 16:54:45 2000\n> --- ./Drivers/txt/SQLSpecialColumns.c\tFri May 26 16:55:03 2000\n> ***************\n> *** 76,82 ****\n> \t\treturn SQL_ERROR;\n> \t}\n> \n> ! \thStmt->hStmtExtras->hBoundCols\t= _CreateBoundCols( hStmt );\n> \n> logPushMsg( hStmt->hLog, __FILE__, __FILE__, __LINE__, LOG_INFO, LOG_INFO, \"SQL_SUCCESS\" );\n> \treturn SQL_SUCCESS;\n> --- 76,82 ----\n> \t\treturn SQL_ERROR;\n> \t}\n> \n> ! \thStmt->hStmtExtras->hBoundCols\t= _CreateBoundCols((SQLHSTMT) hStmt );\n> \n> logPushMsg( hStmt->hLog, __FILE__, __FILE__, __LINE__, LOG_INFO, LOG_INFO, \"SQL_SUCCESS\" );\n> \treturn SQL_SUCCESS;\n> *** ./Drivers/txt/SQLTables.c.bk1\tFri May 26 16:56:02 2000\n> --- ./Drivers/txt/SQLTables.c\tFri May 26 16:56:42 2000\n> ***************\n> *** 55,61 ****\n> \t\treturn SQL_ERROR;\n> \t}\n> \n> ! \thStmt->hStmtExtras->hBoundCols\t= _CreateBoundCols( hStmt );\n> \n> logPushMsg( hStmt->hLog, __FILE__, __FILE__, __LINE__, LOG_INFO, LOG_INFO, \"SQL_SUCCESS\" );\n> return SQL_SUCCESS;\n> --- 55,61 ----\n> \t\treturn SQL_ERROR;\n> \t}\n> \n> ! \thStmt->hStmtExtras->hBoundCols\t= _CreateBoundCols((SQLHSTMT) hStmt );\n> \n> logPushMsg( hStmt->hLog, __FILE__, __FILE__, __LINE__, LOG_INFO, LOG_INFO, \"SQL_SUCCESS\" );\n> return SQL_SUCCESS;\n> *** ./Drivers/txt/SQLPrimaryKeys.c.bk1\tFri May 26 16:53:27 2000\n> --- ./Drivers/txt/SQLPrimaryKeys.c\tFri May 26 16:53:41 2000\n> ***************\n> *** 55,61 ****\n> \t\treturn SQL_ERROR;\n> \t}\n> \n> ! \thStmt->hStmtExtras->hBoundCols\t= _CreateBoundCols( hStmt );\n> \n> logPushMsg( hStmt->hLog, __FILE__, __FILE__, __LINE__, LOG_INFO, LOG_INFO, \"SQL_SUCCESS\" );\n> \treturn SQL_SUCCESS;\n> --- 55,61 ----\n> \t\treturn SQL_ERROR;\n> \t}\n> \n> ! \thStmt->hStmtExtras->hBoundCols\t= _CreateBoundCols((SQLHSTMT) hStmt );\n> \n> logPushMsg( hStmt->hLog, __FILE__, __FILE__, __LINE__, LOG_INFO, LOG_INFO, \"SQL_SUCCESS\" );\n> \treturn SQL_SUCCESS;\n> *** ./Drivers/template/SQLAllocStmt.c.bk1\tFri May 26 17:16:55 2000\n> --- ./Drivers/template/SQLAllocStmt.c\tFri May 26 17:18:17 2000\n> ***************\n> *** 62,68 ****\n> (*phStmt)->pNext\t\t= NULL;\n> (*phStmt)->pPrev\t\t= NULL;\n> (*phStmt)->pszQuery\t\t= NULL;\n> ! sprintf( (*phStmt)->szCursorName, \"CUR_%08lX\", *phStmt );\n> \n> \t/* ADD TO DBCs STATEMENT LIST */\n> \t\n> --- 65,71 ----\n> (*phStmt)->pNext\t\t= NULL;\n> (*phStmt)->pPrev\t\t= NULL;\n> (*phStmt)->pszQuery\t\t= NULL;\n> ! 
sprintf((char*) (*phStmt)->szCursorName, \"CUR_%08lX\", *phStmt );\n> \n> \t/* ADD TO DBCs STATEMENT LIST */\n> \t\n> *** ./Drivers/template/SQLFreeEnv.c.bk1\tFri May 26 17:19:48 2000\n> --- ./Drivers/template/SQLFreeEnv.c\tFri May 26 17:20:08 2000\n> ***************\n> *** 14,19 ****\n> --- 14,22 ----\n> **********************************************************************/\n> \n> #include \"driver.h\"\n> + \n> + SQLRETURN _FreeEnv(SQLHENV);\n> + \n> SQLRETURN SQLFreeEnv( SQLHENV hDrvEnv )\n> {\n> return _FreeEnv( hDrvEnv );\n> *** ./Drivers/template/SQLFreeHandle.c.bk1\tFri May 26 17:21:17 2000\n> --- ./Drivers/template/SQLFreeHandle.c\tFri May 26 17:21:27 2000\n> ***************\n> *** 25,31 ****\n> return _FreeConnect( (SQLHDBC)nHandle );\n> \n> case SQL_HANDLE_STMT:\n> ! // return _FreeStmt( (SQLHSTMT)nHandle, 0 );\n> return _FreeStmt( (SQLHSTMT)nHandle );\n> \n> case SQL_HANDLE_DESC:\n> --- 25,31 ----\n> return _FreeConnect( (SQLHDBC)nHandle );\n> \n> case SQL_HANDLE_STMT:\n> ! /* return _FreeStmt( (SQLHSTMT)nHandle, 0 ); */\n> return _FreeStmt( (SQLHSTMT)nHandle );\n> \n> case SQL_HANDLE_DESC:\n> *** ./Drivers/PostgreSQL/environ.c.bk1\tFri May 26 15:14:24 2000\n> --- ./Drivers/PostgreSQL/environ.c\tFri May 26 15:35:46 2000\n> ***************\n> *** 425,431 ****\n> \n> \t/* Free any connections belonging to this environment */\n> \tfor (lf = 0; lf < MAX_CONNECTIONS; lf++) {\n> ! \t\tif (conns[lf] && conns[lf]->henv == self)\n> \t\t\trv = rv && CC_Destructor(conns[lf]);\n> \t}\n> \n> --- 425,431 ----\n> \n> \t/* Free any connections belonging to this environment */\n> \tfor (lf = 0; lf < MAX_CONNECTIONS; lf++) {\n> ! \t\tif (conns[lf] && conns[lf]->henv == (HENV) self)\n> \t\t\trv = rv && CC_Destructor(conns[lf]);\n> \t}\n> \n> ***************\n> *** 459,465 ****\n> \n> \tfor (i = 0; i < MAX_CONNECTIONS; i++) {\n> \t\tif ( ! conns[i]) {\n> ! \t\t\tconn->henv = self;\n> \t\t\tconns[i] = conn;\n> \n> \t\t\tmylog(\" added at i =%d, conn->henv = %u, conns[i]->henv = %u\\n\", i, conn->henv, conns[i]->henv);\n> --- 459,465 ----\n> \n> \tfor (i = 0; i < MAX_CONNECTIONS; i++) {\n> \t\tif ( ! conns[i]) {\n> ! \t\t\tconn->henv = (HENV) self;\n> \t\t\tconns[i] = conn;\n> \n> \t\t\tmylog(\" added at i =%d, conn->henv = %u, conns[i]->henv = %u\\n\", i, conn->henv, conns[i]->henv);\n> *** ./Drivers/PostgreSQL/execute.c.bk1\tFri May 26 15:36:14 2000\n> --- ./Drivers/PostgreSQL/execute.c\tSun Jun 11 21:42:40 2000\n> ***************\n> *** 445,451 ****\n> \n> \t\tmylog(\"SQLCancel: SQLFreeStmt returned %d\\n\", result);\n> \n> ! \t\tSC_clear_error(hstmt);\n> \t\treturn SQL_SUCCESS;\n> \t}\n> \n> --- 445,451 ----\n> \n> \t\tmylog(\"SQLCancel: SQLFreeStmt returned %d\\n\", result);\n> \n> ! \t\tSC_clear_error( (StatementClass *) hstmt);\n> \t\treturn SQL_SUCCESS;\n> \t}\n> \n> ***************\n> *** 728,734 ****\n> \t\t\t\t}\n> \t\t\t}\n> \t\t\telse {\n> ! \t\t\t\tcurrent_param->EXEC_buffer = malloc(cbValue + 1);\n> \t\t\t\tif ( ! current_param->EXEC_buffer) {\n> \t\t\t\t\tstmt->errornumber = STMT_NO_MEMORY_ERROR;\n> \t\t\t\t\tstmt->errormsg = \"Out of memory in SQLPutData (2)\";\n> --- 728,734 ----\n> \t\t\t\t}\n> \t\t\t}\n> \t\t\telse {\n> ! \t\t\t\tcurrent_param->EXEC_buffer = (char*) malloc(cbValue + 1);\n> \t\t\t\tif ( ! current_param->EXEC_buffer) {\n> \t\t\t\t\tstmt->errornumber = STMT_NO_MEMORY_ERROR;\n> \t\t\t\t\tstmt->errormsg = \"Out of memory in SQLPutData (2)\";\n> ***************\n> *** 758,764 ****\n> \t\t\tbuffer = current_param->EXEC_buffer;\n> \n> \t\t\tif (cbValue == SQL_NTS) {\n> ! 
\t\t\t\tbuffer = realloc(buffer, strlen(buffer) + strlen(rgbValue) + 1);\n> \t\t\t\tif ( ! buffer) {\n> \t\t\t\t\tstmt->errornumber = STMT_NO_MEMORY_ERROR;\n> \t\t\t\t\tstmt->errormsg = \"Out of memory in SQLPutData (3)\";\n> --- 758,764 ----\n> \t\t\tbuffer = current_param->EXEC_buffer;\n> \n> \t\t\tif (cbValue == SQL_NTS) {\n> ! \t\t\t\tbuffer = (char*) realloc(buffer, strlen(buffer) + strlen(rgbValue) + 1);\n> \t\t\t\tif ( ! buffer) {\n> \t\t\t\t\tstmt->errornumber = STMT_NO_MEMORY_ERROR;\n> \t\t\t\t\tstmt->errormsg = \"Out of memory in SQLPutData (3)\";\n> ***************\n> *** 784,790 ****\n> \t\t\t\tmylog(\" cbValue = %d, old_pos = %d, *used = %d\\n\", cbValue, old_pos, *current_param->EXEC_used);\n> \n> \t\t\t\t/* dont lose the old pointer in case out of memory */\n> ! \t\t\t\tbuffer = realloc(current_param->EXEC_buffer, *current_param->EXEC_used + 1);\n> \t\t\t\tif ( ! buffer) {\n> \t\t\t\t\tstmt->errornumber = STMT_NO_MEMORY_ERROR;\n> \t\t\t\t\tstmt->errormsg = \"Out of memory in SQLPutData (3)\";\n> --- 784,790 ----\n> \t\t\t\tmylog(\" cbValue = %d, old_pos = %d, *used = %d\\n\", cbValue, old_pos, *current_param->EXEC_used);\n> \n> \t\t\t\t/* dont lose the old pointer in case out of memory */\n> ! \t\t\t\tbuffer = (char*) realloc(current_param->EXEC_buffer, *current_param->EXEC_used + 1);\n> \t\t\t\tif ( ! buffer) {\n> \t\t\t\t\tstmt->errornumber = STMT_NO_MEMORY_ERROR;\n> \t\t\t\t\tstmt->errormsg = \"Out of memory in SQLPutData (3)\";\n> *** ./Drivers/PostgreSQL/info.c.bk1\tFri May 26 15:39:45 2000\n> --- ./Drivers/PostgreSQL/info.c\tFri May 26 15:44:56 2000\n> ***************\n> *** 1028,1034 ****\n> \n> \tresult = PG__SQLExecDirect(htbl_stmt, tables_query, strlen(tables_query));\n> \tif((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO)) {\n> ! \t\tstmt->errormsg = SC_create_errormsg(htbl_stmt);\n> \t\tstmt->errornumber = tbl_stmt->errornumber;\n> \t\tSC_log_error(func, \"\", stmt);\n> \t\tPG__SQLFreeStmt(htbl_stmt, SQL_DROP);\n> --- 1028,1034 ----\n> \n> \tresult = PG__SQLExecDirect(htbl_stmt, tables_query, strlen(tables_query));\n> \tif((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO)) {\n> ! \t\tstmt->errormsg = SC_create_errormsg((StatementClass *) htbl_stmt);\n> \t\tstmt->errornumber = tbl_stmt->errornumber;\n> \t\tSC_log_error(func, \"\", stmt);\n> \t\tPG__SQLFreeStmt(htbl_stmt, SQL_DROP);\n> ***************\n> *** 1149,1155 ****\n> \t\tresult = PG__SQLFetch(htbl_stmt);\n> }\n> \tif(result != SQL_NO_DATA_FOUND) {\n> ! \t\tstmt->errormsg = SC_create_errormsg(htbl_stmt);\n> \t\tstmt->errornumber = tbl_stmt->errornumber;\n> \t\tSC_log_error(func, \"\", stmt);\n> \t\tPG__SQLFreeStmt(htbl_stmt, SQL_DROP);\n> --- 1149,1155 ----\n> \t\tresult = PG__SQLFetch(htbl_stmt);\n> }\n> \tif(result != SQL_NO_DATA_FOUND) {\n> ! \t\tstmt->errormsg = SC_create_errormsg((StatementClass *) htbl_stmt);\n> \t\tstmt->errornumber = tbl_stmt->errornumber;\n> \t\tSC_log_error(func, \"\", stmt);\n> \t\tPG__SQLFreeStmt(htbl_stmt, SQL_DROP);\n> ***************\n> *** 1240,1246 ****\n> result = PG__SQLExecDirect(hcol_stmt, columns_query,\n> strlen(columns_query));\n> if((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO)) {\n> ! 
\t\tstmt->errormsg = SC_create_errormsg(hcol_stmt);\n> \t\tstmt->errornumber = col_stmt->errornumber;\n> \t\tSC_log_error(func, \"\", stmt);\n> \t\tPG__SQLFreeStmt(hcol_stmt, SQL_DROP);\n> --- 1240,1246 ----\n> result = PG__SQLExecDirect(hcol_stmt, columns_query,\n> strlen(columns_query));\n> if((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO)) {\n> ! \t\tstmt->errormsg = SC_create_errormsg((StatementClass *) hcol_stmt);\n> \t\tstmt->errornumber = col_stmt->errornumber;\n> \t\tSC_log_error(func, \"\", stmt);\n> \t\tPG__SQLFreeStmt(hcol_stmt, SQL_DROP);\n> ***************\n> *** 1470,1476 ****\n> \n> }\n> if(result != SQL_NO_DATA_FOUND) {\n> ! \t\tstmt->errormsg = SC_create_errormsg(hcol_stmt);\n> \t\tstmt->errornumber = col_stmt->errornumber;\n> \t\tSC_log_error(func, \"\", stmt);\n> \t\tPG__SQLFreeStmt(hcol_stmt, SQL_DROP);\n> --- 1470,1476 ----\n> \n> }\n> if(result != SQL_NO_DATA_FOUND) {\n> ! \t\tstmt->errormsg = SC_create_errormsg((StatementClass *) hcol_stmt);\n> \t\tstmt->errornumber = col_stmt->errornumber;\n> \t\tSC_log_error(func, \"\", stmt);\n> \t\tPG__SQLFreeStmt(hcol_stmt, SQL_DROP);\n> ***************\n> *** 1760,1766 ****\n> \t\tresult = PG__SQLFetch(hcol_stmt);\n> \t}\n> \tif(result != SQL_NO_DATA_FOUND || total_columns == 0) {\n> ! \t\t\tstmt->errormsg = SC_create_errormsg(hcol_stmt); /*// \"Couldn't get column names in SQLStatistics.\"; */\n> \t\t\tstmt->errornumber = col_stmt->errornumber;\n> \t\t\tPG__SQLFreeStmt(hcol_stmt, SQL_DROP);\n> \t\t\tgoto SEEYA;\n> --- 1760,1766 ----\n> \t\tresult = PG__SQLFetch(hcol_stmt);\n> \t}\n> \tif(result != SQL_NO_DATA_FOUND || total_columns == 0) {\n> ! \t\t\tstmt->errormsg = SC_create_errormsg((StatementClass *) hcol_stmt); /*// \"Couldn't get column names in SQLStatistics.\"; */\n> \t\t\tstmt->errornumber = col_stmt->errornumber;\n> \t\t\tPG__SQLFreeStmt(hcol_stmt, SQL_DROP);\n> \t\t\tgoto SEEYA;\n> ***************\n> *** 1784,1790 ****\n> \n> result = PG__SQLExecDirect(hindx_stmt, index_query, strlen(index_query));\n> if((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO)) {\n> ! \t\tstmt->errormsg = SC_create_errormsg(hindx_stmt); /*// \"Couldn't execute index query (w/SQLExecDirect) in SQLStatistics.\"; */\n> \t\tstmt->errornumber = indx_stmt->errornumber;\n> \t\tPG__SQLFreeStmt(hindx_stmt, SQL_DROP);\n> \t\tgoto SEEYA;\n> --- 1784,1790 ----\n> \n> result = PG__SQLExecDirect(hindx_stmt, index_query, strlen(index_query));\n> if((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO)) {\n> ! \t\tstmt->errormsg = SC_create_errormsg((StatementClass *) hindx_stmt); /*// \"Couldn't execute index query (w/SQLExecDirect) in SQLStatistics.\"; */\n> \t\tstmt->errornumber = indx_stmt->errornumber;\n> \t\tPG__SQLFreeStmt(hindx_stmt, SQL_DROP);\n> \t\tgoto SEEYA;\n> ***************\n> *** 1924,1930 ****\n> result = PG__SQLFetch(hindx_stmt);\n> }\n> if(result != SQL_NO_DATA_FOUND) {\n> ! \t\tstmt->errormsg = SC_create_errormsg(hindx_stmt); /*// \"SQLFetch failed in SQLStatistics.\"; */\n> \t\tstmt->errornumber = indx_stmt->errornumber;\n> \t\tPG__SQLFreeStmt(hindx_stmt, SQL_DROP);\n> \t\tgoto SEEYA;\n> --- 1924,1930 ----\n> result = PG__SQLFetch(hindx_stmt);\n> }\n> if(result != SQL_NO_DATA_FOUND) {\n> ! 
\t\tstmt->errormsg = SC_create_errormsg((StatementClass *) hindx_stmt); /*// \"SQLFetch failed in SQLStatistics.\"; */\n> \t\tstmt->errornumber = indx_stmt->errornumber;\n> \t\tPG__SQLFreeStmt(hindx_stmt, SQL_DROP);\n> \t\tgoto SEEYA;\n> ***************\n> *** 2076,2082 ****\n> \n> result = PG__SQLExecDirect(htbl_stmt, tables_query, strlen(tables_query));\n> if((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO)) {\n> ! \t\tstmt->errormsg = SC_create_errormsg(htbl_stmt);\n> \t\tstmt->errornumber = tbl_stmt->errornumber;\n> \t\tSC_log_error(func, \"\", stmt);\n> \t\tPG__SQLFreeStmt(htbl_stmt, SQL_DROP);\n> --- 2076,2082 ----\n> \n> result = PG__SQLExecDirect(htbl_stmt, tables_query, strlen(tables_query));\n> if((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO)) {\n> ! \t\tstmt->errormsg = SC_create_errormsg((StatementClass *) htbl_stmt);\n> \t\tstmt->errornumber = tbl_stmt->errornumber;\n> \t\tSC_log_error(func, \"\", stmt);\n> \t\tPG__SQLFreeStmt(htbl_stmt, SQL_DROP);\n> ***************\n> *** 2120,2126 ****\n> }\n> \n> if(result != SQL_NO_DATA_FOUND) {\n> ! \t\tstmt->errormsg = SC_create_errormsg(htbl_stmt);\n> \t\tstmt->errornumber = tbl_stmt->errornumber;\n> \t\tSC_log_error(func, \"\", stmt);\n> \t\tPG__SQLFreeStmt(htbl_stmt, SQL_DROP);\n> --- 2120,2126 ----\n> }\n> \n> if(result != SQL_NO_DATA_FOUND) {\n> ! \t\tstmt->errormsg = SC_create_errormsg((StatementClass *) htbl_stmt);\n> \t\tstmt->errornumber = tbl_stmt->errornumber;\n> \t\tSC_log_error(func, \"\", stmt);\n> \t\tPG__SQLFreeStmt(htbl_stmt, SQL_DROP);\n> ***************\n> *** 2272,2278 ****\n> \n> \t\tresult = PG__SQLExecDirect(htbl_stmt, tables_query, strlen(tables_query));\n> \t\tif((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO)) {\n> ! \t\t\tstmt->errormsg = SC_create_errormsg(htbl_stmt);\n> \t\t\tstmt->errornumber = tbl_stmt->errornumber;\n> \t\t\tSC_log_error(func, \"\", stmt);\n> \t\tPG__SQLFreeStmt(htbl_stmt, SQL_DROP);\n> --- 2272,2278 ----\n> \n> \t\tresult = PG__SQLExecDirect(htbl_stmt, tables_query, strlen(tables_query));\n> \t\tif((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO)) {\n> ! \t\t\tstmt->errormsg = SC_create_errormsg((StatementClass *) htbl_stmt);\n> \t\t\tstmt->errornumber = tbl_stmt->errornumber;\n> \t\t\tSC_log_error(func, \"\", stmt);\n> \t\tPG__SQLFreeStmt(htbl_stmt, SQL_DROP);\n> ***************\n> *** 2314,2320 ****\n> \t\t\treturn SQL_SUCCESS;\n> \n> \t\tif(result != SQL_SUCCESS) {\n> ! \t\t\tstmt->errormsg = SC_create_errormsg(htbl_stmt);\n> \t\t\tstmt->errornumber = tbl_stmt->errornumber;\n> \t\t\tSC_log_error(func, \"\", stmt);\n> \t\t\tPG__SQLFreeStmt(htbl_stmt, SQL_DROP);\n> --- 2314,2320 ----\n> \t\t\treturn SQL_SUCCESS;\n> \n> \t\tif(result != SQL_SUCCESS) {\n> ! \t\t\tstmt->errormsg = SC_create_errormsg((StatementClass *) htbl_stmt);\n> \t\t\tstmt->errornumber = tbl_stmt->errornumber;\n> \t\t\tSC_log_error(func, \"\", stmt);\n> \t\t\tPG__SQLFreeStmt(htbl_stmt, SQL_DROP);\n> ***************\n> *** 2447,2453 ****\n> \n> \t\tresult = PG__SQLExecDirect(htbl_stmt, tables_query, strlen(tables_query));\n> \t\tif((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO)) {\n> ! 
\t\t\tstmt->errormsg = SC_create_errormsg(htbl_stmt);\n> \t\t\tstmt->errornumber = tbl_stmt->errornumber;\n> \t\t\tSC_log_error(func, \"\", stmt);\n> \t\tPG__SQLFreeStmt(htbl_stmt, SQL_DROP);\n> --- 2447,2453 ----\n> \n> \t\tresult = PG__SQLExecDirect(htbl_stmt, tables_query, strlen(tables_query));\n> \t\tif((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO)) {\n> ! \t\t\tstmt->errormsg = SC_create_errormsg((StatementClass *) htbl_stmt);\n> \t\t\tstmt->errornumber = tbl_stmt->errornumber;\n> \t\t\tSC_log_error(func, \"\", stmt);\n> \t\tPG__SQLFreeStmt(htbl_stmt, SQL_DROP);\n> ***************\n> *** 2499,2505 ****\n> \t\t\treturn SQL_SUCCESS;\n> \n> \t\tif(result != SQL_SUCCESS) {\n> ! \t\t\tstmt->errormsg = SC_create_errormsg(htbl_stmt);\n> \t\t\tstmt->errornumber = tbl_stmt->errornumber;\n> \t\t\tSC_log_error(func, \"\", stmt);\n> \t\t\tPG__SQLFreeStmt(htbl_stmt, SQL_DROP);\n> --- 2499,2505 ----\n> \t\t\treturn SQL_SUCCESS;\n> \n> \t\tif(result != SQL_SUCCESS) {\n> ! \t\t\tstmt->errormsg = SC_create_errormsg((StatementClass *) htbl_stmt);\n> \t\t\tstmt->errornumber = tbl_stmt->errornumber;\n> \t\t\tSC_log_error(func, \"\", stmt);\n> \t\t\tPG__SQLFreeStmt(htbl_stmt, SQL_DROP);\n> *** ./Drivers/PostgreSQL/psqlodbc.h.bk1\tMon Jun 12 01:45:27 2000\n> --- ./Drivers/PostgreSQL/psqlodbc.h\tMon Jun 12 01:45:39 2000\n> ***************\n> *** 18,24 ****\n> #include <stdio.h>\t/* for FILE* pointers: see GLOBAL_VALUES */\n> \n> #ifndef WIN32\n> ! #define Int4 long int\n> #define UInt4 unsigned int\n> #define Int2 short\n> #define UInt2 unsigned short\n> --- 18,24 ----\n> #include <stdio.h>\t/* for FILE* pointers: see GLOBAL_VALUES */\n> \n> #ifndef WIN32\n> ! #define Int4 int\n> #define UInt4 unsigned int\n> #define Int2 short\n> #define UInt2 unsigned short\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Jun 2000 10:16:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] info on unixODBC/Postgres driver port to IRIX 6.5.7\n\t64bit" }, { "msg_contents": "> Is this patch based on the current snapshot? I have applied several\n> unixODBC patches in the past several days that may already fix these\n> problems.\n> > This is a report on my attempts to install unixODBC (1.8.9)\n\nafaik, unixODBC != psqlODBC, though they have common roots. Is that\nactually the case? Is unixODBC sufficiently mature to consider merging\nthe efforts?\n\n - Thomas\n", "msg_date": "Thu, 15 Jun 2000 14:29:36 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] info on unixODBC/Postgres driver port to IRIX 6.5.7\n\t64bit" }, { "msg_contents": "The PostgreSQL driver, in unixODBC, is a direct descendent of Byrons\ndriver. I took the driver, made it compile on UNIX again, then added a few\nfixs to make it work in unixODBC and with StarOffice 5.0. I think other\nenhancements were made to it since then (about a year ago).\n\nThe intention was to merge the changes back into Byrons stuff but that has not\nhappened (that I am aware of).\n\nIt would be great if this driver could be common and even better if any common\ndriver was to be based upon unixODBC (i.e. 
its odbcinst etc).\n\nPeter\n\nNOTE: There is no reason, that I am aware of, why a common driver would not be\nable to work on unix with unixODBC and on MS'isms without change to the code\n(providing some care was taken).\n\nOn Thu, 15 Jun 2000, Thomas Lockhart wrote:\n> > Is this patch based on the current snapshot? I have applied several\n> > unixODBC patches in the past several days that may already fix these\n> > problems.\n> > > This is a report on my attempts to install unixODBC (1.8.9)\n> \n> afaik, unixODBC != psqlODBC, though they have common roots. Is that\n> actually the case? Is unixODBC sufficiently mature to consider merging\n> the efforts?\n> \n> - Thomas\n", "msg_date": "Thu, 15 Jun 2000 08:23:03 -0700", "msg_from": "Peter Harvey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [unixODBC-DEV] - Re: [HACKERS] info on unixODBC/Postgres driver\n\tport to IRIX 6.5.7 64bit" }, { "msg_contents": "Well the driver is derived from the MS Windows driver (Byrons) so it has to\nadhere to whatever licensing that is there but unixODBC is, in general, LGPL.\n\nPeter\n\n\nOn Thu, 15 Jun 2000, Lamar Owen wrote:\n> Peter Harvey wrote:\n> > \n> > The PostgreSQL driver, in unixODBC, is a direct descendent of Byrons\n> > driver. I took the driver, made it compile on UNIX again, then added a few\n> > fixs to make it work in unixODBC and with StarOffice 5.0. I think other\n> > enhancements were made to it since then (about a year ago).\n> > \n> > The intention was to merge the changes back into Byrons stuff but that has not\n> > happened (that I am aware of).\n> \n> Is the unixODBC driver GPL? If so, that's why it hasn't been pulled\n> back into the BSD-licensed PostgreSQL distribution.\n> \n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n", "msg_date": "Thu, 15 Jun 2000 09:17:10 -0700", "msg_from": "Peter Harvey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [unixODBC-DEV] - Re: [HACKERS] info on unixODBC/Postgres driver\n\tport to IRIX 6.5.7 64bit" }, { "msg_contents": "Peter Harvey wrote:\n> \n> The PostgreSQL driver, in unixODBC, is a direct descendent of Byrons\n> driver. I took the driver, made it compile on UNIX again, then added a few\n> fixs to make it work in unixODBC and with StarOffice 5.0. I think other\n> enhancements were made to it since then (about a year ago).\n> \n> The intention was to merge the changes back into Byrons stuff but that has not\n> happened (that I am aware of).\n\nIs the unixODBC driver GPL? If so, that's why it hasn't been pulled\nback into the BSD-licensed PostgreSQL distribution.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 15 Jun 2000 12:27:30 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [unixODBC-DEV] - Re: [HACKERS] info on unixODBC/Postgres driver\n\tport to IRIX 6.5.7 64bit" }, { "msg_contents": "> Well the driver is derived from the MS Windows driver (Byrons) so it has to\n> adhere to whatever licensing that is there but unixODBC is, in general, LGPL.\n> \n\nOh, that is fine. Yes, our ODBC is LGPL, which is fine for us.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Jun 2000 15:12:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [unixODBC-DEV] - Re: [HACKERS] info on unixODBC/Postgres driver\n\tport to IRIX 6.5.7 64bit" } ]
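On the Int4 point (item 6 of the report above), one way the width could be determined at configure time rather than hard-coded is to have configure run autoconf's AC_CHECK_SIZEOF(int) and AC_CHECK_SIZEOF(long) and then pick the type from the resulting SIZEOF_* defines. The sketch below is illustrative only; it is not the actual psqlODBC or unixODBC build machinery.

/* psqlodbc.h -- sketch only; assumes configure ran AC_CHECK_SIZEOF(int)
 * and AC_CHECK_SIZEOF(long), which put SIZEOF_INT / SIZEOF_LONG in config.h. */
#ifndef WIN32
#if SIZEOF_INT == 4
#define Int4 int
#elif SIZEOF_LONG == 4
#define Int4 long int
#else
#error "no 4-byte signed integer type found for Int4"
#endif
#define UInt4 unsigned int   /* would need the same treatment */
#endif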
[ { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> On Tue, 13 Jun 2000, Karel Zak wrote:\n> \n> > \t+ new ACL? (please :-)\n> \n> Not if we're shipping in August. :(\n> \n\nI hear you.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Jun 2000 10:42:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Big 7.1 open items" }, { "msg_contents": "On Wed, 14 Jun 2000, Peter Eisentraut wrote:\n\n> On Tue, 13 Jun 2000, Karel Zak wrote:\n> \n> > \t+ new ACL? (please :-)\n> \n> Not if we're shipping in August. :(\n\n I understand you. I said it as dream :-)\n\n\n BTW. --- Are you sure, how idea for ACL will good? \n\n 1/ your original idea with one-line-for-one-privilage in pg_privilage\n (IMHO it's good idea).\n\n 2/ more priv. in one-line.\n\n\t\t\t\t\t\tKarel\n\n", "msg_date": "Thu, 15 Jun 2000 09:35:47 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big 7.1 open items" } ]
[ { "msg_contents": "Hi,\n\nI've managed to speak to someone knowledgeable at Digital in the UK\nwho pointed me in the direction of a very interesting include file for\nDigital C/C++, namely /usr/include/alpha/builtins.h.\n\nIt contains a series of function prototypes which are then converted\ninto fast assembler sequences by the compiler. In particular a number\nof these seem highly suited for the task of rewriting the alpha\nspinlock code avoiding IPC semaphores.\n\nAmongst the many functions I believe the most relevant are, for the\nTAS() macro in s_lock.h:\n\n/*\n** Interlocked \"test for bit set and then set\". Returns non-zero\n** if bit was already set.\n*/\nint __INTERLOCKED_TESTBITSS_QUAD(volatile void *__address, int __bit_position);\nint __INTERLOCKED_TESTBITSS_QUAD_RETRY(volatile void *__address,\n\t\t\t\t int __bit_position,\n\t\t\t\t int __retry,\n\t\t\t\t int *__status);\n\nNote that this call does _not_ generate a memory barrier. For the\nothers, i.e. S_LOCK and S_UNLOCK perhaps the following might help:\n\n/*\n** Acquire/release binary spinlock based on low-order bit of a longword.\n** NOTE: Memory barrier generated after lock, before unlock.\n** _RETRY variant returns non-zero on success within retry attempts.\n*/\nvoid __LOCK_LONG(volatile void *__address);\nint __LOCK_LONG_RETRY(volatile void *__address, int __retry);\nvoid __UNLOCK_LONG(volatile void *__address);\n\nThere are also counting semaphores if need be (all in the same file).\nIf we change s_lock from msemaphore to long then the following patch\ncompiles and is being tested by Adriaan Joubert as we speak. It\nprobably crashes & burns but at least we can see if we get anywhere.\nMy personal opinion is that it might be the way to go, I haven't\nlooked carefully at S_LOCK etc. but will do so once Adriaan has\ncrashed the copy of Postgres currently being compiled.\n\n===File ~/src/hacks/s_lock.diff=====================\n--- s_lock.h.orig\tWed Jun 14 15:33:28 2000\n+++ s_lock.h\tWed Jun 14 16:11:29 2000\n@@ -252,10 +252,18 @@\n * Note that slock_t on the Alpha AXP is msemaphore instead of char\n * (see storage/ipc.h).\n */\n-#define TAS(lock)\t (msem_lock((lock), MSEM_IF_NOWAIT) < 0)\n-#define S_UNLOCK(lock) msem_unlock((lock), 0)\n-#define S_INIT_LOCK(lock)\t\tmsem_init((lock), MSEM_UNLOCKED)\n-#define S_LOCK_FREE(lock)\t (!(lock)->msem_state)\n+#if 0\n+/* Original hack */\n+# define TAS(lock)\t (msem_lock((lock), MSEM_IF_NOWAIT) < 0)\n+# define S_UNLOCK(lock) msem_unlock((lock), 0)\n+# define S_INIT_LOCK(lock)\t\tmsem_init((lock), MSEM_UNLOCKED)\n+# define S_LOCK_FREE(lock)\t (!(lock)->msem_state)\n+#else\n+/* Arrigo's hack */\n+# include <alpha/builtins.h>\n+# define TAS(lock) (__INTERLOCKED_TESTBITSS_QUAD(lock,0))\n+#endif\n+\n \n #else /* i.e. not __osf__ */\n \n============================================================\n\nCiao,\n\nArrigo\n\nP.S. Yes, I don't really know what I am doing but trying my best to\n learn ;-)\n", "msg_date": "Wed, 14 Jun 2000 16:22:04 +0100 (BST)", "msg_from": "Arrigo Triulzi <[email protected]>", "msg_from_op": true, "msg_subject": "OSF/1/Digital UNIX/Tru64 UNIX spinlock code" }, { "msg_contents": "Can I ask where this was left?\n\n> Hi,\n> \n> I've managed to speak to someone knowledgeable at Digital in the UK\n> who pointed me in the direction of a very interesting include file for\n> Digital C/C++, namely /usr/include/alpha/builtins.h.\n> \n> It contains a series of function prototypes which are then converted\n> into fast assembler sequences by the compiler. 
In particular a number\n> of these seem highly suited for the task of rewriting the alpha\n> spinlock code avoiding IPC semaphores.\n> \n> Amongst the many functions I believe the most relevant are, for the\n> TAS() macro in s_lock.h:\n> \n> /*\n> ** Interlocked \"test for bit set and then set\". Returns non-zero\n> ** if bit was already set.\n> */\n> int __INTERLOCKED_TESTBITSS_QUAD(volatile void *__address, int __bit_position);\n> int __INTERLOCKED_TESTBITSS_QUAD_RETRY(volatile void *__address,\n> \t\t\t\t int __bit_position,\n> \t\t\t\t int __retry,\n> \t\t\t\t int *__status);\n> \n> Note that this call does _not_ generate a memory barrier. For the\n> others, i.e. S_LOCK and S_UNLOCK perhaps the following might help:\n> \n> /*\n> ** Acquire/release binary spinlock based on low-order bit of a longword.\n> ** NOTE: Memory barrier generated after lock, before unlock.\n> ** _RETRY variant returns non-zero on success within retry attempts.\n> */\n> void __LOCK_LONG(volatile void *__address);\n> int __LOCK_LONG_RETRY(volatile void *__address, int __retry);\n> void __UNLOCK_LONG(volatile void *__address);\n> \n> There are also counting semaphores if need be (all in the same file).\n> If we change s_lock from msemaphore to long then the following patch\n> compiles and is being tested by Adriaan Joubert as we speak. It\n> probably crashes & burns but at least we can see if we get anywhere.\n> My personal opinion is that it might be the way to go, I haven't\n> looked carefully at S_LOCK etc. but will do so once Adriaan has\n> crashed the copy of Postgres currently being compiled.\n> \n> ===File ~/src/hacks/s_lock.diff=====================\n> --- s_lock.h.orig\tWed Jun 14 15:33:28 2000\n> +++ s_lock.h\tWed Jun 14 16:11:29 2000\n> @@ -252,10 +252,18 @@\n> * Note that slock_t on the Alpha AXP is msemaphore instead of char\n> * (see storage/ipc.h).\n> */\n> -#define TAS(lock)\t (msem_lock((lock), MSEM_IF_NOWAIT) < 0)\n> -#define S_UNLOCK(lock) msem_unlock((lock), 0)\n> -#define S_INIT_LOCK(lock)\t\tmsem_init((lock), MSEM_UNLOCKED)\n> -#define S_LOCK_FREE(lock)\t (!(lock)->msem_state)\n> +#if 0\n> +/* Original hack */\n> +# define TAS(lock)\t (msem_lock((lock), MSEM_IF_NOWAIT) < 0)\n> +# define S_UNLOCK(lock) msem_unlock((lock), 0)\n> +# define S_INIT_LOCK(lock)\t\tmsem_init((lock), MSEM_UNLOCKED)\n> +# define S_LOCK_FREE(lock)\t (!(lock)->msem_state)\n> +#else\n> +/* Arrigo's hack */\n> +# include <alpha/builtins.h>\n> +# define TAS(lock) (__INTERLOCKED_TESTBITSS_QUAD(lock,0))\n> +#endif\n> +\n> \n> #else /* i.e. not __osf__ */\n> \n> ============================================================\n> \n> Ciao,\n> \n> Arrigo\n> \n> P.S. Yes, I don't really know what I am doing but trying my best to\n> learn ;-)\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 9 Oct 2000 16:46:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OSF/1/Digital UNIX/Tru64 UNIX spinlock code" } ]
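The patch above only swaps in TAS(); the prototypes quoted from /usr/include/alpha/builtins.h also cover locking, unlocking and retries, so the remaining macros could plausibly map onto the same header. The sketch below is untested, assumes slock_t has been changed to a long as discussed, and whether a retry count of 1 in __LOCK_LONG_RETRY gives the intended "try once, don't spin" behaviour is exactly the kind of detail that would need checking on real hardware.

/* s_lock.h sketch for __osf__ -- untested; assumes slock_t is a long */
#include <alpha/builtins.h>

#define TAS(lock)          (!__LOCK_LONG_RETRY((lock), 1))  /* nonzero if NOT acquired */
#define S_UNLOCK(lock)     (__UNLOCK_LONG(lock))            /* barrier before release */
#define S_INIT_LOCK(lock)  (*(lock) = 0)
#define S_LOCK_FREE(lock)  ((*(lock) & 1) == 0)             /* low-order bit holds the lock */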
[ { "msg_contents": "I've fixed up the automatic doc generation on postgresql.org to do the\nfollowing:\n\n Automatically cvs update and build all html docs\n Put tarballs on the ftp site in /pub/dev/docs\n Put tarballs on the web site in devel-corner/docs/\n Unpack the tarballs in devel-corner/docs/{postgres,admin,...}\n Post the log of the process to devel-corner/docbuild.log\n\nVince, could we update the developer's web page to point to this, moving\nthe info for the development docs from the normal user's docs page?\nAlso, can we move the \"developer's bio\" page to be reachable from this\nnew default developer's web page? Other info, presumably, would include\npointers to TOAST, etc etc.\n\nAlso, I've committed changes to get the docs to actually build; they had\naccumulated a bit of markup trouble.\n\nThings seem to run OK; I'd doing an end-to-end test right now.\n\n - Thomas\n", "msg_date": "Wed, 14 Jun 2000 15:31:39 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Developer's snapshot docs" } ]
[ { "msg_contents": "Here is an updated list of the items we have recently discussed:\n\nmake separate analyze command\nmove page/tuple computation into analyze, heap_beginscan updates pages?\nrun fixinclude and fixnoinclude\nmake use of new index on pg_index's indrelid\nchange heapscans to index scans on system tables\nremove pg_listener index and don't use cache\nremove pg_attrdef? (Tom)\nallow pg_dumpall to emit CREATE USER commands\nbit type\ninheritance\ndrop column\nvacuum index speed\ncached query plans\nmemory context cleanup\nTOAST\nWAL\nfmgr redesign\nencrypt pg_shadow passwords using MD5\nredesign pg_hba.conf password option to not store passwords\nmake pg_hba.conf an SQL table and dump as flat file\nnew location for config files\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Jun 2000 12:21:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Updated 7.1 items list" } ]
[ { "msg_contents": "Hi,\n\njust wanted to report that 7.0.2 with my patch and a small change to\nos.h to tell s_lock to be a long instead of an msemaphore.\n\nCould someone tell us how to get more useful numbers out of the\nbenchmarks? On a DS20 the benchmark is basically meaningless...\n\nThanks,\n\nArrigo\n", "msg_date": "Wed, 14 Jun 2000 17:34:24 +0100 (BST)", "msg_from": "Arrigo Triulzi <[email protected]>", "msg_from_op": true, "msg_subject": "New alpha spinlock code passes regression test" }, { "msg_contents": "Cool --- did you try the parallel regress tests, or just sequential?\n\nMy experience is that it takes quite a few iterations of the parallel\ntests before you should have much confidence that there aren't locking\nbugs lurking. But if it comes through that, send in the patch and\nwe'll gratefully accept it!\n\n\t\t\tregards, tom lane\n\nPS: also please note my just-posted call for testers of the fmgr\nchanges...\n", "msg_date": "Wed, 14 Jun 2000 13:03:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New alpha spinlock code passes regression test " }, { "msg_contents": "Tom Lane wrote:\n\n> Cool --- did you try the parallel regress tests, or just sequential?\n>\n> My experience is that it takes quite a few iterations of the parallel\n> tests before you should have much confidence that there aren't locking\n> bugs lurking. But if it comes through that, send in the patch and\n> we'll gratefully accept it!\n\nI've run the regression tests several times (also the parallel ones and\nbigtest). It failed geometry, but it always does due to different\nrounding. Passed everything else, so it looks ok.\n\nNow I'm trying to figure out how I can do a sensible timing test on the\ntwo versions to see whether it is any faster. The postgres benchmark\nfinishes in such a ridiculously short time, that that isn't telling me\nanything.\n\nAdriaan\n\n", "msg_date": "Thu, 15 Jun 2000 10:02:29 +0300", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New alpha spinlock code passes regression test" } ]
[ { "msg_contents": "I have finished updating about half of Postgres' builtin functions to\nthe new style of fmgr interface. At this point, everything that accepts\nany pass-by-value datatypes is converted; the remaining work is for\nfunctions that use only pass-by-reference datatypes, and therefore\nreceive only pointer arguments.\n\nThis should already take care of function-call-related portability\nproblems on many platforms. In particular these changes should\neliminate the need for Ryan Kirkpatrick's Linux/Alpha patches, and\nshould also solve the known problems on PPC builds with optimization\nhigher than -O0. We might be able to increase the optimization level\non other platforms that have had trouble with function-call\noptimizations, too.\n\nI'd like to get the current code tested by some people with Alphas\n(or other 64-bit platforms), PPCs, and anything else that has had\noptimization-related problems. You can get the 7.1 development code\nfrom our CVS server, or use the current daily-snapshot tarball (see\nftp://ftp.postgresql.org/pub/dev/).\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jun 2000 12:55:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Call for port testing on fmgr changes" }, { "msg_contents": "> eliminate the need for Ryan Kirkpatrick's Linux/Alpha patches, and\n> should also solve the known problems on PPC builds with optimization\n> higher than -O0. We might be able to increase the optimization level\n\nI have change the -O0 to -O2.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Jun 2000 13:01:26 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Call for port testing on fmgr changes" }, { "msg_contents": "On Wed, 14 Jun 2000, Tom Lane wrote:\n\n> I have finished updating about half of Postgres' builtin functions to\n> the new style of fmgr interface. At this point, everything that accepts\n> any pass-by-value datatypes is converted; the remaining work is for\n> functions that use only pass-by-reference datatypes, and therefore\n> receive only pointer arguments.\n\n\tCool! I need to study up a bit on fmgr, but anything that help\nLinux/Alpha is a good thing. :)\n\n> This should already take care of function-call-related portability\n> problems on many platforms. In particular these changes should\n> eliminate the need for Ryan Kirkpatrick's Linux/Alpha patches, and\n\n\tNot all of my Linux/Alpha patches are related to fmgr, there are a\nfew related to s_lock and such (which can probably be put into the source\ntree with #ifdefs). But this should take care of the majority of the\nLinux/Alpha patch.\n\tI will download the snapshot today at work (where I have \"real\"\nbandwidth :) and test things out this weekend. I should have a report by\nMonday. Maybe I will even have a patch that can be safely applied to\nthe development source tree. 
:) TTYL.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n", "msg_date": "Fri, 16 Jun 2000 07:14:22 -0600 (MDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for port testing on fmgr changes" }, { "msg_contents": "Ryan Kirkpatrick <[email protected]> writes:\n> \tNot all of my Linux/Alpha patches are related to fmgr, there are a\n> few related to s_lock and such (which can probably be put into the source\n> tree with #ifdefs).\n\nRight-o. Up to now we haven't worried about that, but now that the\nfundamental problem is fixed (I hope), we can start cleaning up the\nsmaller details so that Linux/Alpha will be supported out-of-the-box.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Jun 2000 10:46:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Call for port testing on fmgr changes " }, { "msg_contents": "On Fri, 16 Jun 2000, Ryan Kirkpatrick wrote:\n\n> \tI will download the snapshot today at work (where I have \"real\"\n> bandwidth :) and test things out this weekend. I should have a report by\n> Monday. Maybe I will even have a patch that can be safely applied to\n> the development source tree. :) TTYL.\n\n\tOk, I have tested the new snapshot. I have good news and I have\nbad news...\n\tFirst of all, to build pgsql correctly from the snapshot tarball\n(dated Fri, Jun 16th), I had to run a 'make distclean' first. There were\nsome config.status files laying around that were confusing the configure\nrun. Don't know if that is just par for pgsql devel snapshots or if there\nwas mistake somewhere in the building of the snapshot.\n\tOnce I got past that, I realized that the only non-fmgr related\nLinux/Alpha patch was a adding a single line to the linux_alpha template\nfile. If I had realized that earlier, I would have submitted that patch\nlong ago. But it is attached now, probably not even work using 'patch' on\n(i.e. easier to hand code it in), and really should not break any other\nplatforms. :)\n\tNow, for the fmgr rewrite testing... I compiled pgsql w/o any\npatches or source code modifications save for the two I mentioned above\n(make distclean and added line to linux_alpha template). 'initdb' ran\nwithout problems. Regression tests failed on geometry as expected (off by\none in nth decimal place), but also failed on timestamp, tinterval,\nhorology, and abstime. These latter failures were characteristic of\nrunning release versions of pgsql w/o the Linux/Alpha patches. Years that\nare several centuries off. :(\n\tI tracked the problem down to the following suspect functions. I\nfound these as they were functions listed in the Linux/Alpha patches (as\nneeding Datum datatypes for function params), but did not appear to have\nbeen rewritten from the new fmgr (i.e. no PG_FUNCTION_ARGS in\ndeclaration). These functions are all in src/backend/utils/adt/nabstime.c.\n\tabstime2tm\n\ttm2abstime\n\tAbsoluteTimeIsBefore\n\tAbsoluteTimeIsAfter\n\treltime2tm\nIf these are converted to the new fmgr format, then I think the regression\ntests mentioned above should pass on Linux/Alpha. 
I think I will leave the\nconversion to some one who is more familiar with the new fmgr than me.\n\tOnce that is done, and the attached patch is applied to the source\ntree, there is a very good chance that pgsql will FINALLY work out of the\nbox on Linux/Alpha. :)\n\tLet me know when the new fmgr is ready to test again, if you\nneed any more information on the above, or if there is anything else I can\ndo help this process. TTYL.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------", "msg_date": "Sun, 18 Jun 2000 16:43:03 -0600 (MDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for port testing on fmgr changes -- Results!" }, { "msg_contents": "Applied.\n\n\n> On Fri, 16 Jun 2000, Ryan Kirkpatrick wrote:\n> \n> > \tI will download the snapshot today at work (where I have \"real\"\n> > bandwidth :) and test things out this weekend. I should have a report by\n> > Monday. Maybe I will even have a patch that can be safely applied to\n> > the development source tree. :) TTYL.\n> \n> \tOk, I have tested the new snapshot. I have good news and I have\n> bad news...\n> \tFirst of all, to build pgsql correctly from the snapshot tarball\n> (dated Fri, Jun 16th), I had to run a 'make distclean' first. There were\n> some config.status files laying around that were confusing the configure\n> run. Don't know if that is just par for pgsql devel snapshots or if there\n> was mistake somewhere in the building of the snapshot.\n> \tOnce I got past that, I realized that the only non-fmgr related\n> Linux/Alpha patch was a adding a single line to the linux_alpha template\n> file. If I had realized that earlier, I would have submitted that patch\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 18 Jun 2000 20:51:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for port testing on fmgr changes -- Results!" }, { "msg_contents": "Ryan Kirkpatrick <[email protected]> writes:\n> \tFirst of all, to build pgsql correctly from the snapshot tarball\n> (dated Fri, Jun 16th), I had to run a 'make distclean' first. There were\n> some config.status files laying around that were confusing the configure\n> run.\n\nThat's odd, the snapshot is supposed to have been made from a\ndistcleaned directory tree. Maybe something busted in the snapshot\ngeneration script? Marc?\n\n> \tI tracked the problem down to the following suspect functions. I\n> found these as they were functions listed in the Linux/Alpha patches (as\n> needing Datum datatypes for function params), but did not appear to have\n> been rewritten from the new fmgr (i.e. no PG_FUNCTION_ARGS in\n> declaration). These functions are all in src/backend/utils/adt/nabstime.c.\n> \tabstime2tm\n> \ttm2abstime\n> \tAbsoluteTimeIsBefore\n> \tAbsoluteTimeIsAfter\n> \treltime2tm\n> If these are converted to the new fmgr format, then I think the regression\n> tests mentioned above should pass on Linux/Alpha.\n\nHmm. 
I did not touch these because they aren't fmgr-callable (and in\nfact I feel fairly safe in saying that AbsoluteTimeIsAfter isn't called\nperiod, since it's ifdef'd out...). As best I can tell, the other four\nare only called from places that see valid prototypes for them, so if\nyour compiler is failing to call them correctly then your compiler is\nbroken.\n\nI suspect that the real problem is not call sequences, but something\nelse that happens to be affecting these routines (and maybe related code\nthat doesn't get exercised by the regression tests). Maybe something\nlike macro-constants that are declared with not quite the right type\n(unsigned or not, long or not) and need to be casted to match whatever\nthey're being compared to. We've seen that before.\n\nCould you dig a little more and try to identify exactly what's\ngoing wrong?\n\nAnyway, it sure sounds like we've broken the back of the problems.\nCouple more bug fixes and we'll be there. That's good news indeed!\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Jun 2000 01:17:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Call for port testing on fmgr changes -- Results! " }, { "msg_contents": "On Mon, 19 Jun 2000, Tom Lane wrote:\n\n> > declaration). These functions are all in src/backend/utils/adt/nabstime.c.\n> > \tabstime2tm\n> > \ttm2abstime\n> > \tAbsoluteTimeIsBefore\n> > \tAbsoluteTimeIsAfter\n> > \treltime2tm\n> \n> Hmm. I did not touch these because they aren't fmgr-callable (and in\n...\n> I suspect that the real problem is not call sequences, but something\n> else that happens to be affecting these routines (and maybe related code\n...\n> they're being compared to. We've seen that before.\n> Could you dig a little more and try to identify exactly what's\n> going wrong?\n\n\tWill do. I ran out of time last weekend to actually test if these\nfunctions were the cause of the problem or not. They just looked suspcious\ngiven the patches I had. I will did deeper and see what I can find, but it\nwill probably not happen until next weekend. Will post when I have found\nsomething.\n\n> Anyway, it sure sounds like we've broken the back of the problems.\n> Couple more bug fixes and we'll be there. That's good news indeed!\n\n\tYes, very good news indeed! :)\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n", "msg_date": "Tue, 20 Jun 2000 17:15:25 -0600 (MDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Call for port testing on fmgr changes -- Results!" }, { "msg_contents": "> On Mon, 19 Jun 2000, Tom Lane wrote:\n> \n> > > declaration). These functions are all in src/backend/utils/adt/nabstime.c.\n> > > \tabstime2tm\n> > > \ttm2abstime\n> > > \tAbsoluteTimeIsBefore\n> > > \tAbsoluteTimeIsAfter\n> > > \treltime2tm\n> > \n> > Hmm. I did not touch these because they aren't fmgr-callable (and in\n> ...\n> > I suspect that the real problem is not call sequences, but something\n> > else that happens to be affecting these routines (and maybe related code\n> ...\n> > they're being compared to. 
We've seen that before.\n> > Could you dig a little more and try to identify exactly what's\n> > going wrong?\n> \n> \tWill do. I ran out of time last weekend to actually test if these\n> functions were the cause of the problem or not. They just looked suspcious\n> given the patches I had. I will did deeper and see what I can find, but it\n> will probably not happen until next weekend. Will post when I have found\n> something.\n\nTamotsu Nakagawa has posted a fix for this to a local mail list in\nJapan. Can someone comment on this? According to him, with the patch\nnow only the geometry test fails.\n\n void\n-abstime2tm(AbsoluteTime time, int *tzp, struct tm * tm, char *tzn)\n+abstime2tm(AbsoluteTime _time, int *tzp, struct tm * tm, char *tzn)\n {\n+ time_t time = (time_t) _time;\n #ifdef USE_POSIX_TIME\n struct tm *tx;\n\n", "msg_date": "Sun, 25 Jun 2000 11:46:20 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Call for port testing on fmgr changes -- Results!" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> Tamotsu Nakagawa has posted a fix for this to a local mail list in\n> Japan. Can someone comment on this? According to him, with the patch\n> now only the geometry test fails.\n\n> void\n> -abstime2tm(AbsoluteTime time, int *tzp, struct tm * tm, char *tzn)\n> +abstime2tm(AbsoluteTime _time, int *tzp, struct tm * tm, char *tzn)\n> {\n> + time_t time = (time_t) _time;\n> #ifdef USE_POSIX_TIME\n> struct tm *tx;\n\nHmm, that makes all kinds of sense if time_t is not the same size as\nAbsoluteTime --- which wouldn't surprise me at all on a 64-bit system.\ntime_t *ought* to be 64-bits on such a machine. The casts in that\nroutine,\n\t\ttx = localtime((time_t *) &time);\nare obviously bogus if so. Can anyone with an Alpha comment?\n\nWhat surprises me more is the implication that this is the only place\nthat makes such a bogus assumption about the size of time_t. I'd have\nguessed there are more places...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Jun 2000 00:15:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Call for port testing on fmgr changes -- Results! " }, { "msg_contents": "> Hmm, that makes all kinds of sense if time_t is not the same size as\n> AbsoluteTime --- which wouldn't surprise me at all on a 64-bit system.\n> time_t *ought* to be 64-bits on such a machine. The casts in that\n> routine,\n> tx = localtime((time_t *) &time);\n> are obviously bogus if so. Can anyone with an Alpha comment?\n\nI haven't had an Alpha for a couple of years, but I *strongly* recall\nthat time_t is 64 bits on that machine.\n\n - Thomas\n", "msg_date": "Sun, 25 Jun 2000 04:36:13 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Call for port testing on fmgr changes -- Results!" }, { "msg_contents": "Thomas Lockhart wrote:\n\n> > Hmm, that makes all kinds of sense if time_t is not the same size as\n> > AbsoluteTime --- which wouldn't surprise me at all on a 64-bit system.\n> > time_t *ought* to be 64-bits on such a machine. The casts in that\n> > routine,\n> > tx = localtime((time_t *) &time);\n> > are obviously bogus if so. Can anyone with an Alpha comment?\n>\n> I haven't had an Alpha for a couple of years, but I *strongly* recall\n> that time_t is 64 bits on that machine.\n>\n\nIn <sys/types.h> time_t is defined as an int4, i.e. 4 bytes. 
To\ndouble-check I wrote a program to print sizeof:\n\nsizeof(time_t)=4 (DU 4.0F, cc)\n\nSo I guess it is 32 bits. On the whole they have stuck to traditional sizes\nfor traditional types -- it would just have broken too many programmes\notherwise. Of course they are going to have to make time_t 64 bits within\nthe next 30 years ....\n\nAdriaan\n\n", "msg_date": "Sun, 25 Jun 2000 10:17:52 +0300", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Re: Call for port testing on fmgr changes --\n\tResults!" }, { "msg_contents": "On Sun, 25 Jun 2000, Tom Lane wrote:\n\n> Tatsuo Ishii <[email protected]> writes:\n> > Tamotsu Nakagawa has posted a fix for this to a local mail list in\n> > Japan. Can someone comment on this? According to him, with the patch\n> > now only the geometry test fails.\n> \n> > void\n> > -abstime2tm(AbsoluteTime time, int *tzp, struct tm * tm, char *tzn)\n> > +abstime2tm(AbsoluteTime _time, int *tzp, struct tm * tm, char *tzn)\n> > {\n> > + time_t time = (time_t) _time;\n> > #ifdef USE_POSIX_TIME\n> > struct tm *tx;\n> \n> Hmm, that makes all kinds of sense if time_t is not the same size as\n> AbsoluteTime --- which wouldn't surprise me at all on a 64-bit system.\n> time_t *ought* to be 64-bits on such a machine. The casts in that\n> routine,\n> \t\ttx = localtime((time_t *) &time);\n> are obviously bogus if so. Can anyone with an Alpha comment?\n\n\tOn Linux/Alpha, time_t is indeed 64 bit (i.e.\nprintf(\"%d\\n\",sizeof(time_t)) results in an '8'). If AbsoluteTime is only\n32 bit, then that would definitely cause a problem.\n\n> What surprises me more is the implication that this is the only place\n> that makes such a bogus assumption about the size of time_t. I'd have\n> guessed there are more places...\n\n\tI will apply the above patch, check for other instances of time_t\nsize assumptions, and see what the result is this afternoon. TTYL.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n\n", "msg_date": "Sun, 25 Jun 2000 09:31:53 -0600 (MDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Call for port testing on fmgr changes -- Results!" }, { "msg_contents": "> > > void\n> > > -abstime2tm(AbsoluteTime time, int *tzp, struct tm * tm, char *tzn)\n> > > +abstime2tm(AbsoluteTime _time, int *tzp, struct tm * tm, char *tzn)\n> > > {\n> > > + time_t time = (time_t) _time;\n> > > #ifdef USE_POSIX_TIME\n> > > struct tm *tx;\n> > \n\n\tOk, the above patch does indeed solve the problem. And this\nappears to be the only place AbsoluteTime needs to be copied to a time_t\nvariable. I can't find any other casts of AbsoluteTime to time_t, and\nwith this patch applied all regression tests pass just fine (save for\ngeometry of course with its standard off by one in nth decimal place\ndifference).\n\tAdditionally, I do not see how this patch could break other\nplatforms. At worst, it is a minor slow down that might even be optimized\nout by some compiliers when they see that sizeof(AbsoluteTime) ==\nsizeof(time_t). I will defer to the core developers on how you want to\napply this patch to the source tree (i.e. 
with #ifdef alpha && linux or as\nabove). Though probably best to add a bit of a comment beside it so\nsomeone does not remove it later thinking they are \"optimizing\" the code.\n:)\n\tAt this point, Linux/Alpha should actually run out of the box! Let\nme know when this patch is applied (in what ever form it ends up as) and I\nwill download a new snapshot and test it. \n\tTTYL.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n", "msg_date": "Sun, 25 Jun 2000 16:08:55 -0600 (MDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Call for port testing on fmgr changes -- Results!" }, { "msg_contents": "Tom Lane writes:\n\n> What surprises me more is the implication that this is the only place\n> that makes such a bogus assumption about the size of time_t. I'd have\n> guessed there are more places...\n\nA lot of those were fixed for 7.0.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 26 Jun 2000 03:41:50 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Call for port testing on fmgr changes -- Results!" }, { "msg_contents": "Ryan Kirkpatrick <[email protected]> writes:\n> \tOk, the above patch does indeed solve the problem. And this\n> appears to be the only place AbsoluteTime needs to be copied to a time_t\n> variable. I can't find any other casts of AbsoluteTime to time_t,\n\nGreat!\n\n> and\n> with this patch applied all regression tests pass just fine (save for\n> geometry of course with its standard off by one in nth decimal place\n> difference).\n\nProbably we should write that off as a platform issue and create an\nAlpha-specific expected-output file for geometry. See the documentation\nabout platform-specific files, and please send along a patch to add one.\n\n> \tAdditionally, I do not see how this patch could break other\n> platforms. At worst, it is a minor slow down that might even be optimized\n> out by some compiliers when they see that sizeof(AbsoluteTime) ==\n> sizeof(time_t). I will defer to the core developers on how you want to\n> apply this patch to the source tree (i.e. with #ifdef alpha && linux or as\n> above).\n\nNo, we should just apply it as is, no #ifdef. There are going to be\nmore and more platforms with 64-bit time_t.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Jun 2000 23:31:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Call for port testing on fmgr changes -- Results! " }, { "msg_contents": "On Sun, 25 Jun 2000, Tom Lane wrote:\n\n> Probably we should write that off as a platform issue and create an\n> Alpha-specific expected-output file for geometry.\n\nI noticed the other day that the geometry output differs with the\ncompiler optimization level. 
That can't be good.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 26 Jun 2000 14:29:43 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Call for port testing on fmgr changes -- Results! " }, { "msg_contents": "> I noticed the other day that the geometry output differs with the\n> compiler optimization level. That can't be good.\n\nIt depends where the differences are. If they are just in the last few\ndecimal places, then it should be OK (though annoying for regression\ntest support).\n\nWith higher optimizations, they may be doing more inlining and using\ndifferent code sequences for, for example, the transcendental functions.\n\n - Thomas\n", "msg_date": "Mon, 26 Jun 2000 13:17:37 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Call for port testing on fmgr changes --Results!" }, { "msg_contents": "Thomas Lockhart wrote:\n\n> > I noticed the other day that the geometry output differs with the\n> > compiler optimization level. That can't be good.\n>\n> It depends where the differences are. If they are just in the last few\n> decimal places, then it should be OK (though annoying for regression\n> test support).\n>\n> With higher optimizations, they may be doing more inlining and using\n> different code sequences for, for example, the transcendental functions.\n\nDunno which flags you used, but with the right flags you get faster\nversions of the built-in library routines -- I know that for sine/cosine\netc this makes a significant difference in runtime for a very small loss in\naccuracy. And aggressive reordering of instructions can mean that different\nparts of the calculation are executed in different order (if I recall\ncorrectly EV6/7s have 8 fp/int pipes) so that small differences in\nend-results are to be expected. But then, the last few digits are garbage\nin most numerical calculations anyway, so no great harm done there.\n\nThe right way to solve this would be to be able to control the number of\ndigits that are printed, so that it is easier to check that the significant\ndigits are the same. That would probably get rid of quite a few\narchitecture specific files regression test files as well.\n\nAdriaan\n\n", "msg_date": "Mon, 26 Jun 2000 16:29:40 +0300", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] Re: Re: Call for port testing on fmgr changes --Results!" }, { "msg_contents": "On Sun, 25 Jun 2000, Tom Lane wrote:\n\n> > with this patch applied all regression tests pass just fine (save for\n> > geometry of course with its standard off by one in nth decimal place\n> > difference).\n> \n> Probably we should write that off as a platform issue and create an\n> Alpha-specific expected-output file for geometry. See the documentation\n> about platform-specific files, and please send along a patch to add one.\n\n\tI remember finding a geometry.out file in the expected directory\nthat did match. Seems to me it was a Solaris/Sun one.... Anyway, I will\nlook into it and generate a patch to rid ourselves of that annoyance.\nProbably will be next Sunday before it happens.\n\tAs for the comments about different geometry outputs with\ndifferent optimization levels, the linux_alpha template is set to use -O2,\nwhich seems pretty standard for safe, yet useful optimization. 
Though I\nhave had reports that newer versions of gcc (from Mandrake 7.1) break\npgsql (spinlocks get stuck) with that optimization level. My alpha is\nstill running Debian 2.1 and so my gcc is a little old. Probably will\nupgrade before the end of the summer and have to deal with that issue\nthen. Yuck. :( Of course, if some one wants to beat me to it, feel free :)\n\n> > \tAdditionally, I do not see how this patch could break other\n> > platforms. At worst, it is a minor slow down that might even be optimized\n> > out by some compiliers when they see that sizeof(AbsoluteTime) ==\n> > sizeof(time_t). I will defer to the core developers on how you want to\n> > apply this patch to the source tree (i.e. with #ifdef alpha && linux or as\n> > above).\n> \n> No, we should just apply it as is, no #ifdef. There are going to be\n> more and more platforms with 64-bit time_t.\n\n\tNo problem here with that, go ahead and apply it.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n", "msg_date": "Mon, 26 Jun 2000 18:36:02 -0600 (MDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Call for port testing on fmgr changes -- Results!" }, { "msg_contents": "Applied as you suggested.\n\n\n> > On Mon, 19 Jun 2000, Tom Lane wrote:\n> > \n> > > > declaration). These functions are all in src/backend/utils/adt/nabstime.c.\n> > > > \tabstime2tm\n> > > > \ttm2abstime\n> > > > \tAbsoluteTimeIsBefore\n> > > > \tAbsoluteTimeIsAfter\n> > > > \treltime2tm\n> > > \n> > > Hmm. I did not touch these because they aren't fmgr-callable (and in\n> > ...\n> > > I suspect that the real problem is not call sequences, but something\n> > > else that happens to be affecting these routines (and maybe related code\n> > ...\n> > > they're being compared to. We've seen that before.\n> > > Could you dig a little more and try to identify exactly what's\n> > > going wrong?\n> > \n> > \tWill do. I ran out of time last weekend to actually test if these\n> > functions were the cause of the problem or not. They just looked suspcious\n> > given the patches I had. I will did deeper and see what I can find, but it\n> > will probably not happen until next weekend. Will post when I have found\n> > something.\n> \n> Tamotsu Nakagawa has posted a fix for this to a local mail list in\n> Japan. Can someone comment on this? According to him, with the patch\n> now only the geometry test fails.\n> \n> void\n> -abstime2tm(AbsoluteTime time, int *tzp, struct tm * tm, char *tzn)\n> +abstime2tm(AbsoluteTime _time, int *tzp, struct tm * tm, char *tzn)\n> {\n> + time_t time = (time_t) _time;\n> #ifdef USE_POSIX_TIME\n> struct tm *tx;\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Jun 2000 14:08:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Call for port testing on fmgr changes -- Results!" 
}, { "msg_contents": "On Sun, 25 Jun 2000, Tom Lane wrote:\n\n> > and\n> > with this patch applied all regression tests pass just fine (save for\n> > geometry of course with its standard off by one in nth decimal place\n> > difference).\n> \n> Probably we should write that off as a platform issue and create an\n> Alpha-specific expected-output file for geometry. See the documentation\n> about platform-specific files, and please send along a patch to add one.\n\n\tThe patch is attached, just adds a line to the resultmap file as\nthe geometry-solaris-precision.out file matched the Linux/Alpha output.\nAlso, the geometry-cygwin-precision.out is an exact match to the\ngeometry-solaris-precision.out file if anyone is interested in reducing\nthe number of geometry files. :)\n\tNow, all regression tests are passing (snapshot from 2000/7/1),\nand this is on a build of pgsql straight out of the box. Just untarred,\nconfigure (w/linux_alpha template), and a make. So, it looks like we have\nfinally reached out-of-the-box compatiblity for Linux/Alpha. :) Though I\nhave heard that some of the optional interfaces, ODBC in particular, are\nnot compiling on the Linux/Alpha. I guess one's work is never done...\nTTYL.\n\n\tPS. The 'make runcheck' for the regression tests is quite a nice\nway of testing a pgsql build without affecting a current pgsql install.\nOne gripe, it calls 'gmake' to do the temporary install, which does not\nexists, at least under Debian 2.1 Linux, as GNU make is the only make\ninstalled. Maybe, test for gmake, if it not found, just use make, or use\nmake outright on linux machines (detected from arch configurations). Just\na suggestion.\n\n\tPPS. The snapshots still have a dirty config.cache and\nconfig.status file included with them, requiring a make distclean\nimmediately after untar.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------", "msg_date": "Tue, 4 Jul 2000 19:42:06 -0600 (MDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": false, "msg_subject": "Linux/Alpha Regression Test Patch (Was Re: Call for port testing on\n\tfmgr changes -- Results! )" }, { "msg_contents": "Applied.\n\n> On Sun, 25 Jun 2000, Tom Lane wrote:\n> \n> > > and\n> > > with this patch applied all regression tests pass just fine (save for\n> > > geometry of course with its standard off by one in nth decimal place\n> > > difference).\n> > \n> > Probably we should write that off as a platform issue and create an\n> > Alpha-specific expected-output file for geometry. See the documentation\n> > about platform-specific files, and please send along a patch to add one.\n> \n> \tThe patch is attached, just adds a line to the resultmap file as\n> the geometry-solaris-precision.out file matched the Linux/Alpha output.\n> Also, the geometry-cygwin-precision.out is an exact match to the\n> geometry-solaris-precision.out file if anyone is interested in reducing\n> the number of geometry files. :)\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Jul 2000 00:27:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux/Alpha Regression Test Patch (Was Re: Call for port\n\ttesting on fmgr changes -- Results! )" }, { "msg_contents": "On Wed, 5 Jul 2000, Bruce Momjian wrote:\n\n> Applied.\n> \n> > \tThe patch is attached, just adds a line to the resultmap file as\n> > the geometry-solaris-precision.out file matched the Linux/Alpha output.\n> > Also, the geometry-cygwin-precision.out is an exact match to the\n> > geometry-solaris-precision.out file if anyone is interested in reducing\n> > the number of geometry files. :)\n\n\tVerified as valid. I downloaded today's snapshot, built it, and\nthen ran the regression checks (make runcheck). I made no modifications to\nthe source tree (i.e. no patches), and all regression tests passed. I\nthink at this point we can declare that pgsql works out of the box on\nLinux/Alpha. :)\n\tI will of course periodically check the snapshots to make sure\nnothing new is broken on the Linux/Alpha platform. TTYL.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n", "msg_date": "Tue, 15 Aug 2000 20:49:55 -0600 (MDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Linux/Alpha Regression Test Patch" } ]
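The fix that settled this thread comes down to a width mismatch: AbsoluteTime is a 4-byte int32 in the backend, while time_t is 8 bytes on Linux/Alpha, so the old cast of &time from AbsoluteTime * to time_t * let localtime() read past the end of the value and produce the centuries-off years seen in the timestamp, horology, and abstime tests. The sketch below is not PostgreSQL source (the standalone typedef and the sample timestamp are illustrative only), but it reproduces the sizeof check Adriaan and Ryan ran and shows the safe pattern from Tamotsu Nakagawa's patch: copy the value into a genuine time_t before taking its address.

    #include <stdio.h>
    #include <time.h>

    typedef int AbsoluteTime;           /* int32 in the backend headers; 4 bytes here too */

    static void
    show_year(AbsoluteTime abstime)
    {
        time_t t = (time_t) abstime;    /* widen first, as the patch does ... */
        struct tm *tx = localtime(&t);  /* ... so this really is a pointer to a time_t */

        if (tx != NULL)
            printf("year = %d\n", tx->tm_year + 1900);
    }

    int
    main(void)
    {
        /* the check Adriaan ran on DU 4.0F (time_t is 4 bytes there) and
         * Ryan ran on Linux/Alpha (8 bytes) */
        printf("sizeof(time_t) = %lu, sizeof(AbsoluteTime) = %lu\n",
               (unsigned long) sizeof(time_t),
               (unsigned long) sizeof(AbsoluteTime));

        show_year(961286400);           /* arbitrary abstime value, roughly mid-June 2000 */
        return 0;
    }

Passing an AbsoluteTime pointer through the old (time_t *) cast compiles silently, which is why the bug survived until a platform with a 64-bit time_t exercised it.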
[ { "msg_contents": "Bah, while editing myself out of the To: list, I botched the mailing list's\naddress. Here it is, for everyone else.\n\nRoss\n----- Forwarded message from \"Ross J. Reedstrom\" <[email protected]> -----\n\nOn Sun, Jun 11, 2000 at 11:14:06PM -0400, Bruce Momjian wrote:\n> Seems Vadim's new storage manager for 7.2 would be the way to go.\n> \n> I think he is going to have everything in one file.\n> \n\nMight I remind everyone that the current code is actually buggy on\nNT: since the NTFS is case-insensitive/case-preserving, two tables\n\"fred\" and \"Fred\" attempt to create the same file to store into.\n\nHiroshi had a version of my patch without the silly messing with the\nrelcache, that passed all the regression tests. Perhaps we should put\nthat in as an NT bug fix, while we wait for bigger and better things?\nI have noticed a lot more activity on the mailing lists mentioning\nlooking at the NT port.\n\nThe patch would need to be tweaked a little: ALTER TABLE RENAME becomes\nless intrusive (a mere update of relname in pg_class) and completely\nunder MVCC and transaction control. VACUUM may need touching to find\nrenamed tables and fix their physical table name then, since it's already\ngot the locks needed, if we want to maintain consistency of the mostly\nhuman readable filename <-> tablename relation. It could also be fixed\nup by a dump/restore, since the physical filename is generated at table\ncreation time.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n\n----- End forwarded message -----\n", "msg_date": "Wed, 14 Jun 2000 12:36:54 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix for RENAME" } ]
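Ross's NTFS point is easy to reproduce outside the backend. The toy program below is only an illustration, not backend code; the file names "fred" and "Fred" are taken from his example. On a case-insensitive, case-preserving filesystem both names resolve to the same file, which is exactly why two distinct relations collide on disk until the physical file name is decoupled from relname.

    #include <stdio.h>

    int
    main(void)
    {
        char line[32] = "";
        FILE *f;

        /* create "fred", then "Fred", with different contents */
        if ((f = fopen("fred", "w")) != NULL) { fputs("lower\n", f); fclose(f); }
        if ((f = fopen("Fred", "w")) != NULL) { fputs("upper\n", f); fclose(f); }

        /* read "fred" back: "lower" on a case-sensitive filesystem,
           but "upper" on NTFS, because both creates hit the same file */
        if ((f = fopen("fred", "r")) != NULL)
        {
            if (fgets(line, sizeof(line), f) != NULL)
                printf("contents of \"fred\": %s", line);
            fclose(f);
        }
        return 0;
    }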
[ { "msg_contents": "HI all,\n\nI just want to confirm if input/output functions have \nbeen changed.\n\nIn my environment(current cvs),I get the following.\nUnfortunately it isn't the latest one because I wasn't able\nto update my cvs tree yesterday\n\nselect int4out(1);\n int4out\n-----------\n 136475032\n(1 row)\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Thu, 15 Jun 2000 11:59:20 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "input/output functions have been changed ?" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> I just want to confirm if input/output functions have \n> been changed.\n\n> select int4out(1);\n> int4out\n> -----------\n> 136475032\n> (1 row)\n\nSure, you expected something nicer? You're seeing the numeric\nrepresentation of a C string pointer.\n\nActually, checking 6.5, I see that int4out --- alone of all the *out\nfunctions --- used to be declared in pg_proc to return type 'name'.\nAll the other ones are declared to return type 'int4', which means if\nyou invoke them explicitly as in the above example, you'll see the\nnumeric equivalent of whatever pointer value they return.\n\n'name' is not a correct declaration either since it implies a string\nof < NAMEDATALEN chars, but it would have happened to produce the right\nsort of results.\n\nI changed int4out to be the same as the other *out functions recently,\njust for consistency's sake. Really the right fix is to assign a real\ntype OID to represent \"C string\" so that the argument and return types\nof input/output functions can be declared honestly. We've talked about\nthat before, and I've been thinking about making a concrete proposal\nfor it, but haven't done anything yet...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jun 2000 23:42:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: input/output functions have been changed ? " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > I just want to confirm if input/output functions have \n> > been changed.\n> \n> > select int4out(1);\n> > int4out\n> > -----------\n> > 136475032\n> > (1 row)\n> \n> Sure, you expected something nicer? You're seeing the numeric\n> representation of a C string pointer.\n> \n> Actually, checking 6.5, I see that int4out --- alone of all the *out\n> functions --- used to be declared in pg_proc to return type 'name'.\n\nOops,int4out() alone. \nI don't complain about this change.\nIt seems not good for clients to call input/output functions directly.\n\nThere remains a pair of int4out() calls in interfaces/odbc/info.c.\nI don't know why those calls have been needed but this change\nwould return wrong result.\nProbably we should remove the calls from info.c completely.\n\nRegards. \n\nHiroshi Inoue\[email protected]\n", "msg_date": "Thu, 15 Jun 2000 13:33:55 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: input/output functions have been changed ? 
" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> It seems not good for clients to call input/output functions directly.\n\nI agree, but if we don't prevent it maybe we should make it work\nreasonably ...\n\n> There remains a pair of int4out() calls in interfaces/odbc/info.c.\n> I don't know why those calls have been needed but this change\n> would return wrong result.\n\nOh, I missed those --- didn't notice references outside the backend.\nHmm:\n\n strcat(tables_query, \" and relname !~ '^xinv[0-9]+'\");\n strcat(tables_query, \" and int4out(usesysid) = int4out(relowner)\");\n strcat(tables_query, \"order by relname\");\n\nThat seems absolutely wacko ... why not just \"usesysid = relowner\"?\n\nNow that I look, there are several other pretty silly-looking\ninvocations of int4out in\n\nsrc/interfaces/python/tutorial/syscat.py\nsrc/test/regress/sql/view_perms.sql\nsrc/tutorial/syscat.source\n\nThe one in view_perms.sql is not only especially bizarre, but it's still\npassing regress test! Wow...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Jun 2000 02:43:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: input/output functions have been changed ? " } ]