[ { "msg_contents": "Philip Warner writes:\n\n> Just looking at the TODO list for other pg_dump related items, and saw:\n> \n> add pg_dump option to dump type names as standard ANSI types \n> \n> Two questions:\n> \n> Do we already have a function to represent PG types as ANSI standard types?\n\nformat_type(pg_type.oid, pg_attribute.atttypmod)\n\n> How should types that are not in the standard be represented?\n\nWhatever format_type gives you.\n\nI wrote format_type for exactly this purpose a few weeks ago, but I\nfigured I let you get the glory for making pg_dump use it. :-) psql uses\nit as well.\n\nNote: IMO it should not be an *option* to use format_type, it should\nalways be used. It's nice for encapsulation on the programming side and\nthere's no reason for using any other type names.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 4 Aug 2000 23:59:26 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump and ANSI types (TODO item)" } ]
[ { "msg_contents": "Philip Warner writes:\n\n> Is there any reason that a security model does not exist for psql that\n> allows Unix user 'fred' to log in as PG user 'fred' with no password etc,\n> but any user trying to log on as someone other than themselves has to\n> provide a password?\n\nShort of someone sitting down and making it happen I don't see any. You'd\nonly need to implement some sort of fall-through in `pg_hba.conf', which\nin my estimate can't be exceedingly hard.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 4 Aug 2000 23:59:34 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Security choices..." }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Philip Warner writes:\n> \n> > Is there any reason that a security model does not exist for psql that\n> > allows Unix user 'fred' to log in as PG user 'fred' with no password etc,\n> > but any user trying to log on as someone other than themselves has to\n> > provide a password?\n> \n> Short of someone sitting down and making it happen I don't see any. You'd\n> only need to implement some sort of fall-through in `pg_hba.conf', which\n> in my estimate can't be exceedingly hard.\n\nHow do you know Fred is Fred without a password?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 Aug 2000 18:34:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security choices..." }, { "msg_contents": "At 18:34 4/08/00 -0400, Bruce Momjian wrote:\n>[ Charset ISO-8859-1 unsupported, converting... ]\n>> Philip Warner writes:\n>> \n>> > Is there any reason that a security model does not exist for psql that\n>> > allows Unix user 'fred' to log in as PG user 'fred' with no password etc,\n>> > but any user trying to log on as someone other than themselves has to\n>> > provide a password?\n>> \n>> Short of someone sitting down and making it happen I don't see any. You'd\n>> only need to implement some sort of fall-through in `pg_hba.conf', which\n>> in my estimate can't be exceedingly hard.\n>\n>How do you know Fred is Fred without a password?\n>\n\nThe idea was to apply only on the machine on which the postmaster runs;\nthen ideally you get the username of the client process. It's kind of like\nIDENT, except it works only for local connections, and asks for passwords\nfor non-local connections.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 05 Aug 2000 10:13:54 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security choices..." }, { "msg_contents": "> At 18:34 4/08/00 -0400, Bruce Momjian wrote:\n> >[ Charset ISO-8859-1 unsupported, converting... ]\n> >> Philip Warner writes:\n> >> \n> >> > Is there any reason that a security model does not exist for psql that\n> >> > allows Unix user 'fred' to log in as PG user 'fred' with no password etc,\n> >> > but any user trying to log on as someone other than themselves has to\n> >> > provide a password?\n> >> \n> >> Short of someone sitting down and making it happen I don't see any. You'd\n> >> only need to implement some sort of fall-through in `pg_hba.conf', which\n> >> in my estimate can't be exceedingly hard.\n> >\n> >How do you know Fred is Fred without a password?\n> >\n> \n> The idea was to apply only on the machine on which the postmaster runs;\n> then ideally you get the username of the client process. It's kind of like\n> IDENT, except it works only for local connections, and asks for passwords\n> for non-local connections.\n\nI am not aware of any way to determine the PID at the other end of a\nunix domain socket.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 Aug 2000 23:13:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security choices..." }, { "msg_contents": "On Fri, 4 Aug 2000, Bruce Momjian wrote:\n\n> > At 18:34 4/08/00 -0400, Bruce Momjian wrote:\n> > >[ Charset ISO-8859-1 unsupported, converting... ]\n> > >> Philip Warner writes:\n> > >> \n> > >> > Is there any reason that a security model does not exist for psql that\n> > >> > allows Unix user 'fred' to log in as PG user 'fred' with no password etc,\n> > >> > but any user trying to log on as someone other than themselves has to\n> > >> > provide a password?\n> > >> \n> > >> Short of someone sitting down and making it happen I don't see any. You'd\n> > >> only need to implement some sort of fall-through in `pg_hba.conf', which\n> > >> in my estimate can't be exceedingly hard.\n> > >\n> > >How do you know Fred is Fred without a password?\n> > >\n> > \n> > The idea was to apply only on the machine on which the postmaster runs;\n> > then ideally you get the username of the client process. It's kind of like\n> > IDENT, except it works only for local connections, and asks for passwords\n> > for non-local connections.\n> \n> I am not aware of any way to determine the PID at the other end of a\n> unix domain socket.\n\nYou actually don't need the PID on the other end, what you are interested\nare the credentials of a process on the other end. \n\nUnfortunately, every OS implemented it in very different way. Linux has\nSO_PEERCREDS option, solaris has doors, xBSD have SCM_CREDS or LOCAL_CREDS\n\nsee:\n\nhttp://metalab.unc.edu/pub/Linux/docs/HOWTO/Secure-Programs-HOWTO\nhttp://www.whitefang.com/sup/work.html\nhttp://cr.yp.to/docs/secureipc.html\n\n", "msg_date": "Fri, 4 Aug 2000 23:50:20 -0400 (EDT)", "msg_from": "Alex Pilosov <[email protected]>", "msg_from_op": false, "msg_subject": "Peer credentials (was Security choices...)" }, { "msg_contents": "this kinda has a hole in it also.. our database server only has about 5\nusers on it , all are employee accounts, not clients. \n\njeff\n\nOn Sat, 5 Aug 2000, Philip Warner wrote:\n\n> At 18:34 4/08/00 -0400, Bruce Momjian wrote:\n> >[ Charset ISO-8859-1 unsupported, converting... ]\n> >> Philip Warner writes:\n> >> \n> >> > Is there any reason that a security model does not exist for psql that\n> >> > allows Unix user 'fred' to log in as PG user 'fred' with no password etc,\n> >> > but any user trying to log on as someone other than themselves has to\n> >> > provide a password?\n> >> \n> >> Short of someone sitting down and making it happen I don't see any. You'd\n> >> only need to implement some sort of fall-through in `pg_hba.conf', which\n> >> in my estimate can't be exceedingly hard.\n> >\n> >How do you know Fred is Fred without a password?\n> >\n> \n> The idea was to apply only on the machine on which the postmaster runs;\n> then ideally you get the username of the client process. It's kind of like\n> IDENT, except it works only for local connections, and asks for passwords\n> for non-local connections.\n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.C.N. 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n\nJeff MacDonald,\n\n-----------------------------------------------------\nPostgreSQL Inc\t\t| Hub.Org Networking Services\[email protected]\t\t| [email protected]\nwww.pgsql.com\t\t| www.hub.org\n1-902-542-0713\t\t| 1-902-542-3657\n-----------------------------------------------------\nFascimile : 1 902 542 5386\nIRC Nick : bignose\n\n", "msg_date": "Tue, 15 Aug 2000 19:55:55 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security choices..." }, { "msg_contents": "\nwhere is the hole? don't you trust your employees? *raised eyebrows*\n\nOn Tue, 15 Aug 2000, Jeff MacDonald wrote:\n\n> this kinda has a hole in it also.. our database server only has about 5\n> users on it , all are employee accounts, not clients. \n> \n> jeff\n> \n> On Sat, 5 Aug 2000, Philip Warner wrote:\n> \n> > At 18:34 4/08/00 -0400, Bruce Momjian wrote:\n> > >[ Charset ISO-8859-1 unsupported, converting... ]\n> > >> Philip Warner writes:\n> > >> \n> > >> > Is there any reason that a security model does not exist for psql that\n> > >> > allows Unix user 'fred' to log in as PG user 'fred' with no password etc,\n> > >> > but any user trying to log on as someone other than themselves has to\n> > >> > provide a password?\n> > >> \n> > >> Short of someone sitting down and making it happen I don't see any. You'd\n> > >> only need to implement some sort of fall-through in `pg_hba.conf', which\n> > >> in my estimate can't be exceedingly hard.\n> > >\n> > >How do you know Fred is Fred without a password?\n> > >\n> > \n> > The idea was to apply only on the machine on which the postmaster runs;\n> > then ideally you get the username of the client process. It's kind of like\n> > IDENT, except it works only for local connections, and asks for passwords\n> > for non-local connections.\n> > \n> > \n> > ----------------------------------------------------------------\n> > Philip Warner | __---_____\n> > Albatross Consulting Pty. Ltd. |----/ - \\\n> > (A.C.N. 008 659 498) | /(@) ______---_\n> > Tel: (+61) 0500 83 82 81 | _________ \\\n> > Fax: (+61) 0500 83 82 82 | ___________ |\n> > Http://www.rhyme.com.au | / \\|\n> > | --________--\n> > PGP key available upon request, | /\n> > and from pgp5.ai.mit.edu:11371 |/\n> > \n> \n> Jeff MacDonald,\n> \n> -----------------------------------------------------\n> PostgreSQL Inc\t\t| Hub.Org Networking Services\n> [email protected]\t\t| [email protected]\n> www.pgsql.com\t\t| www.hub.org\n> 1-902-542-0713\t\t| 1-902-542-3657\n> -----------------------------------------------------\n> Fascimile : 1 902 542 5386\n> IRC Nick : bignose\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 15 Aug 2000 20:22:16 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security choices..." }, { "msg_contents": "only those that \n\t1 : are named after cartoon dogs\n\t2 : they are named after software developers who tend to stay alone..\n\nrofl..\n\n\ncourse , i didn't say \"my employees\" i said\nemployees.. :)\n\njeff\n\nOn Tue, 15 Aug 2000, The Hermit Hacker wrote:\n\n> \n> where is the hole? don't you trust your employees? *raised eyebrows*\n> \n> On Tue, 15 Aug 2000, Jeff MacDonald wrote:\n> \n> > this kinda has a hole in it also.. our database server only has about 5\n> > users on it , all are employee accounts, not clients. \n> > \n> > jeff\n> > \n> > On Sat, 5 Aug 2000, Philip Warner wrote:\n> > \n> > > At 18:34 4/08/00 -0400, Bruce Momjian wrote:\n> > > >[ Charset ISO-8859-1 unsupported, converting... ]\n> > > >> Philip Warner writes:\n> > > >> \n> > > >> > Is there any reason that a security model does not exist for psql that\n> > > >> > allows Unix user 'fred' to log in as PG user 'fred' with no password etc,\n> > > >> > but any user trying to log on as someone other than themselves has to\n> > > >> > provide a password?\n> > > >> \n> > > >> Short of someone sitting down and making it happen I don't see any. You'd\n> > > >> only need to implement some sort of fall-through in `pg_hba.conf', which\n> > > >> in my estimate can't be exceedingly hard.\n> > > >\n> > > >How do you know Fred is Fred without a password?\n> > > >\n> > > \n> > > The idea was to apply only on the machine on which the postmaster runs;\n> > > then ideally you get the username of the client process. It's kind of like\n> > > IDENT, except it works only for local connections, and asks for passwords\n> > > for non-local connections.\n> > > \n> > > \n> > > ----------------------------------------------------------------\n> > > Philip Warner | __---_____\n> > > Albatross Consulting Pty. Ltd. |----/ - \\\n> > > (A.C.N. 008 659 498) | /(@) ______---_\n> > > Tel: (+61) 0500 83 82 81 | _________ \\\n> > > Fax: (+61) 0500 83 82 82 | ___________ |\n> > > Http://www.rhyme.com.au | / \\|\n> > > | --________--\n> > > PGP key available upon request, | /\n> > > and from pgp5.ai.mit.edu:11371 |/\n> > > \n> > \n> > Jeff MacDonald,\n> > \n> > -----------------------------------------------------\n> > PostgreSQL Inc\t\t| Hub.Org Networking Services\n> > [email protected]\t\t| [email protected]\n> > www.pgsql.com\t\t| www.hub.org\n> > 1-902-542-0713\t\t| 1-902-542-3657\n> > -----------------------------------------------------\n> > Fascimile : 1 902 542 5386\n> > IRC Nick : bignose\n> > \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n\nJeff MacDonald,\n\n-----------------------------------------------------\nPostgreSQL Inc\t\t| Hub.Org Networking Services\[email protected]\t\t| [email protected]\nwww.pgsql.com\t\t| www.hub.org\n1-902-542-0713\t\t| 1-902-542-3657\n-----------------------------------------------------\nFascimile : 1 902 542 5386\nIRC Nick : bignose\n\n", "msg_date": "Tue, 15 Aug 2000 23:46:20 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security choices..." }, { "msg_contents": "At 23:46 15/08/00 -0300, Jeff MacDonald wrote:\n>\n>course , i didn't say \"my employees\" i said\n>employees.. :)\n>\n\nAs distinct from the unemployed?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 16 Aug 2000 14:35:04 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security choices..." }, { "msg_contents": "that's right, if you don't have a job..\nyou don't get an account... we're elitists :)\n\nJOKING..\n\nok, this thread is dead.. i replied to a 10 day\nold message anyway.\n\nciao.\n\n\nOn Wed, 16 Aug 2000, Philip Warner wrote:\n\n> At 23:46 15/08/00 -0300, Jeff MacDonald wrote:\n> >\n> >course , i didn't say \"my employees\" i said\n> >employees.. :)\n> >\n> \n> As distinct from the unemployed?\n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n\nJeff MacDonald,\n\n-----------------------------------------------------\nPostgreSQL Inc\t\t| Hub.Org Networking Services\[email protected]\t\t| [email protected]\nwww.pgsql.com\t\t| www.hub.org\n1-902-542-0713\t\t| 1-902-542-3657\n-----------------------------------------------------\nFascimile : 1 902 542 5386\nIRC Nick : bignose\n\n", "msg_date": "Wed, 16 Aug 2000 09:37:21 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security choices..." } ]
[ { "msg_contents": "It had occurred to me that it would be nice (if not necessary) that one\ncould use `configure --prefix=/usr/local', and good things would happen.\n(replace /usr/local with any other shared prefix)\n\nCurrently, bad things will happen, in particular in the include dir, but\nalso under share, with severe cluttering. It is common in these cases to\ncreate package-specific subdirectories (/usr/local/include/pgsql, etc.),\nas indeed the binary packages do.\n\nNow it might be awkward to unconditionally append \"pgsql\" to various\ndirectory names; think `/usr/local/pgsql/include/pgsql'. Therefore I\npropose the following scheme, stolen in its entirety from Apache:\n\nThe string \"pgsql/\" will automatically be appended to datadir (not the\nsame as PGDATA), sysconfdir, includedir, and docdir, unless one of the\nfollowing is true:\n\n1) The user specified the particular directory manually (--sysconfdir,\netc.), or\n\n2) The expanded directory name already contains the string \"pgsql\" or\n\"postgres\" somewhere.\n\nI'd say that most users currently fall under exception 2), so they would\nnot be affected. Those brave enough to try an install into /usr/local\nwould finally get reasonable behaviour.\n\nOne fine day we might also want to consider changing the default directory\nnames from \"pgsql\" to \"postgresql\". It's not nice to use two different\nnames, and the tarball is already named \"postgresql\". Is that a reasonable\npossibility?\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 4 Aug 2000 23:59:44 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Installation layout idea" }, { "msg_contents": "On Fri, 4 Aug 2000, Peter Eisentraut wrote:\n\n> It had occurred to me that it would be nice (if not necessary) that one\n> could use `configure --prefix=/usr/local', and good things would happen.\n> (replace /usr/local with any other shared prefix)\n> \n> Currently, bad things will happen, in particular in the include dir, but\n> also under share, with severe cluttering. It is common in these cases to\n> create package-specific subdirectories (/usr/local/include/pgsql, etc.),\n> as indeed the binary packages do.\n> \n> Now it might be awkward to unconditionally append \"pgsql\" to various\n> directory names; think `/usr/local/pgsql/include/pgsql'. Therefore I\n> propose the following scheme, stolen in its entirety from Apache:\n> \n> The string \"pgsql/\" will automatically be appended to datadir (not the\n> same as PGDATA), sysconfdir, includedir, and docdir, unless one of the\n> following is true:\n> \n> 1) The user specified the particular directory manually (--sysconfdir,\n> etc.), or\n> \n> 2) The expanded directory name already contains the string \"pgsql\" or\n> \"postgres\" somewhere.\n> \n> I'd say that most users currently fall under exception 2), so they would\n> not be affected. Those brave enough to try an install into /usr/local\n> would finally get reasonable behaviour.\n> \n> One fine day we might also want to consider changing the default directory\n> names from \"pgsql\" to \"postgresql\". It's not nice to use two different\n> names, and the tarball is already named \"postgresql\". Is that a reasonable\n> possibility?\n\nthis one I have no problem with, specially since I would guess most\neveryone's shells have tab-completion, and therefore it isn't as if they\nhave to type any more ...\n\nI do have issues with the whole /usr/local/include/pgsql concept though\n... that is one of the things that I *really* hate about FreeBSD ports\nwhere they install qt 1.x in /usr/X11R6/include/qt and qt 2 in\n/usr/X11R6/include/qt2, and ... *roll eyes* I like the fact that\neverything goes into one place ...\n\n", "msg_date": "Fri, 4 Aug 2000 19:31:00 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installation layout idea" }, { "msg_contents": "The Hermit Hacker writes:\n\n> I do have issues with the whole /usr/local/include/pgsql concept though\n> ... that is one of the things that I *really* hate about FreeBSD ports\n> where they install qt 1.x in /usr/X11R6/include/qt and qt 2 in\n> /usr/X11R6/include/qt2, and ... *roll eyes* I like the fact that\n> everything goes into one place ...\n\nWhether or not this is a good idea is not really on trial here, although\nI'd certainly hate to have the 200 Qt include files cluttering\n/usr/include or some such directory. The fact is that various file system\nstandards, including FHS and GNU (and apparently FreeBSD) request\n(require) such behaviour.\n\nAs long as we install semi-internal files such as config.h we better put\nthem in a non-shared location. The current state is broken in my mind.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 20 Aug 2000 00:37:45 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Installation layout idea" } ]
[ { "msg_contents": "Jan Wieck writes:\n\n> Anyway, it's good to hear you're still on it. What's the\n> estimated time you think it'll be ready to get patched in?\n\nNext release. I would hope we can get the current stuff into beta in a\nmonth or so, whereas this project would break open a lot of things.\n\n\n> The thing users actually complain about is the requirement of\n> UPDATE permissions to REFERENCE a table. This could be fixed\n> with making RI triggers setuid functions for 7.1 and check\n> that the user at least has SELECT permission on the\n> referenced table during constraint creation. This would also\n> remove the actual DOS problem, that a user could potentially\n> create a referencing table and not giving anyone who can\n> update the referenced one update permissions on it too.\n> \n> I think it's worth doing it now, and couple it later with\n> your general access control things.\n\nTrue. I had already looked into this, it's not fundamentally difficult,\nbut there's a lot of code that will need to be touched.\n\nIf you want to go for it, be my guest; I agree that it is fairly\northogonal to the rest of the privilege system. I'll put it on my priority\nlist if no one's taking it.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 5 Aug 2000 00:01:42 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New Privilege model purposal" } ]
[ { "msg_contents": "Tom Lane writes:\n\n> Oh, that's interesting. What platform do you use? If RAND_MAX applies\n> to random() on some machines that'd probably explain why the code is\n> written like it is. But on my box (HPUX) the rand() function is old\n> and crufty and considerably different from random().\n\nrand() and RAND_MAX are defined by ANSI C, random() is a BSD-ism. I\nsuggest you use the former. Also, while you're at it, this is a snippet\nfrom the C FAQ:\n\n13.16: How can I get random integers in a certain range?\n \nA: The obvious way,\n \n rand() % N /* POOR */\n \n (which tries to return numbers from 0 to N-1) is poor, because\n the low-order bits of many random number generators are\n distressingly *non*-random. (See question 13.18.) A better\n method is something like\n \n (int)((double)rand() / ((double)RAND_MAX + 1) * N)\n \n If you're worried about using floating point, you could use\n \n rand() / (RAND_MAX / N + 1)\n \n Both methods obviously require knowing RAND_MAX (which ANSI\n #defines in <stdlib.h>), and assume that N is much less than\n RAND_MAX.\n \n (Note, by the way, that RAND_MAX is a *constant* telling you\n what the fixed range of the C library rand() function is. You\n cannot set RAND_MAX to some other value, and there is no way of\n requesting that rand() return numbers in some other range.)\n \n If you're starting with a random number generator which returns\n floating-point values between 0 and 1, all you have to do to get\n integers from 0 to N-1 is multiply the output of that generator\n by N.\n \n References: K&R2 Sec. 7.8.7 p. 168; PCS Sec. 11 p. 172.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 5 Aug 2000 00:03:19 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [GENERAL] random() function produces wrong range" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> rand() and RAND_MAX are defined by ANSI C, random() is a BSD-ism. I\n> suggest you use the former.\n\nUnfortunately, except on a few platforms like Linux, the typical\nrand() implementation is vastly inferior to the typical random()\nimplementation. BSD wouldn't have bothered to roll their own if\nthe older code hadn't been so godawful. But unless you are using\nglibc, you probably have a bug-compatible rand().\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Aug 2000 22:05:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] random() function produces wrong range " } ]
[ { "msg_contents": "At 23:59 4/08/00 +0200, Peter Eisentraut wrote:\n>Philip Warner writes:\n>\n>> Is there any reason that a security model does not exist for psql that\n>> allows Unix user 'fred' to log in as PG user 'fred' with no password etc,\n>> but any user trying to log on as someone other than themselves has to\n>> provide a password?\n>\n>Short of someone sitting down and making it happen I don't see any. You'd\n>only need to implement some sort of fall-through in `pg_hba.conf', which\n>in my estimate can't be exceedingly hard.\n>\n\nI'd prefer not to overrule pg_hba.conf; I was thinking along the lines of\nadding another security type which falls back to password auth. if it can't\nget the username, or if the client process is not a valid user.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 05 Aug 2000 10:16:17 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Security choices..." } ]
[ { "msg_contents": "At 23:59 4/08/00 +0200, Peter Eisentraut wrote:\n>> \n>> Do we already have a function to represent PG types as ANSI standard types?\n>\n>format_type(pg_type.oid, pg_attribute.atttypmod)\n\nGood; I thought that might be why you sent me the info...\n\n\n>\n>I wrote format_type for exactly this purpose a few weeks ago, but I\n>figured I let you get the glory for making pg_dump use it. :-) psql uses\n>it as well.\n>\n\nIs it in 7.0.2?\n\n\n>Note: IMO it should not be an *option* to use format_type, it should\n>always be used. It's nice for encapsulation on the programming side and\n>there's no reason for using any other type names.\n\nSounds good to me, as long as we can be sure that the backend will\nunderstand its output...one reason for making it an option was to allow\nISO SQL exports for porting to other DBs when we know we have different\nformats (but no examples spring to mind).\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 05 Aug 2000 10:21:21 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump and ANSI types (TODO item)" } ]
[ { "msg_contents": "I've been looking at SQL99 while reviewing a book, and stumbled across\nsome (new to me) behavior for double-quoted identifiers. The SQL99 way\nto embed a double-quote into a quoted identifier is to put in two\nadjacent double-quotes (much like is done for embedding single-quotes\ninto string literals in SQL9x). \n\nI've modified (but not yet committed) the pieces in parser (scan.l) and\npg_dump (common.c) to accept and emit appropriate stuff for the embedded\ndouble-quote case. An example is\n\n create table \"hi\"\"there\" (i int);\n \\d\n List of relations\n Name | Type | Owner\n ----------|-------|----------\n hi\"there | table | lockhart\n (1 row)\n\nCurrently, pg_dump escapes this by embedding a backslash/double-quote\npair, and I'm proposing that it emit two adjacent double-quotes instead\n(btw, scan.l does not seem to accept this for input at the moment ;).\nAny objections to committing this? Are there other cases which must be\nconsidered?\n\nOn another somewhat related point:\n\nString literals can contain escaped characters, which postgres removes\nearly in the parsing stage. These escapes are re-inserted *every time\nthe string is returned in a query*. imho this is the wrong behavior,\nsince the escapes were present in the input only to get around SQL9x\nsyntax for allowable input characters (or to get around some postgres\noddity). This is not an issue when sending strings back out from the\nserver, except for perhaps the special case of the null character. And\nit's pretty silly to enter escaped strings, remove the escapes, then\nre-escape them for all user-visible interactions with the string. Except\nfor perhaps string comparisons, there is hardly any point in bothering\nto unescape the string internally!\n\nI propose that we move the responsibility for re-escaping literal\nstrings to pg_dump, which is the only utility with the charter to\ngenerate strings which are fully symmetric with input strings. 
We can\nprovide libpq with an \"escape routine\" to assist other apps with this\ntask if they need it.\n\nComments?\n\n - Thomas\n", "msg_date": "Sat, 05 Aug 2000 04:05:20 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Quoting fun" }, { "msg_contents": "> On another somewhat related point:\n\nHmm. In looking at pg_dump it seems that escaping string literals is\nalready done there. So never mind on this second point...\n\n - Thomas\n", "msg_date": "Sat, 05 Aug 2000 04:29:55 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Quoting fun" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I've been looking at SQL99 while reviewing a book, and stumbled across\n> some (new to me) behavior for double-quoted identifiers. The SQL99 way\n> to embed a double-quote into a quoted identifier is to put in two\n> adjacent double-quotes (much like is done for embedding single-quotes\n> into string literals in SQL9x). \n\nI looked at doing that a while ago, not because I knew it was in SQL99\nbut just because it seemed like a nice idea. I backed off though when\nI realized that there are a *lot* of places that will break. scan.l\nand pg_dump are just the tip of the iceberg --- there are many other\nplaces, and probably lots of applications, that assume printing \"%s\"\nis sufficient to protect an identifier. Be prepared for a lot of\nmop-up work if you want to press forward with this.\n\n> Currently, pg_dump escapes this by embedding a backslash/double-quote\n> pair,\n\npg_dump is mistaken --- as you say, the backend doesn't accept\nbackslashes in doublequoted idents. (Since there is no way to get a\ndoublequote into an ident currently, pg_dump's check is dead code,\nwhich is why no one noticed it was broken.)\n\n> String literals can contain escaped characters, which postgres removes\n> early in the parsing stage. 
These escapes are re-inserted *every time\n> the string is returned in a query*.\n\nAu contraire, the backend never re-inserts escapes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Aug 2000 22:19:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quoting fun " } ]
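The quote-doubling rule discussed in the thread above (SQL99 embeds a double-quote in a quoted identifier by writing two adjacent double-quotes) can be sketched in a few lines. This is an illustrative Python sketch, not code from the PostgreSQL tree; the function names quote_ident and unquote_ident are invented for the example, while the real work happens in scan.l and pg_dump as the thread says.

```python
def quote_ident(name):
    # SQL99 rule: wrap the identifier in double-quotes and double any
    # embedded double-quote, so  hi"there  becomes  "hi""there".
    return '"' + name.replace('"', '""') + '"'

def unquote_ident(quoted):
    # Inverse operation: strip the outer quotes, then collapse each
    # doubled double-quote back to a single character.
    if not (quoted.startswith('"') and quoted.endswith('"')):
        raise ValueError("not a quoted identifier")
    return quoted[1:-1].replace('""', '"')
```

With these rules, the thread's create table "hi""there" (i int) example yields the relation name hi"there, and the round trip through quote_ident/unquote_ident is lossless.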
[ { "msg_contents": "It finally dawned on me how to easily implement the LIKE/ESCAPE clause.\nCurrently, LIKE is transformed to the \"~~\" operator in the parser. For\nLIKE/ESCAPE, we should instead transform it to a three-parameter\nfunction call. The rest of the implementation is likely to be trivial\n(as is this parsing solution).\n\nDoes anyone see a problem with this solution? Should I also change the\nexisting \"two parameter\" implementation to look for a function call\ninstead of an operator (I think so, but...)?\n\nSomeone has been working on an \"SQL generator function\", which will be\nused to generate output (??). The \"like()\" function should be\ntransformed back to the SQL9x clause; any hints on where to look (or\nvolunteers to fix that part)?\n\n - Thomas\n", "msg_date": "Sat, 05 Aug 2000 06:23:55 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "LIKE/ESCAPE implementation" }, { "msg_contents": "At 01:23 AM 8/5/2000, Thomas Lockhart wrote:\n>It finally dawned on me how to easily implement the LIKE/ESCAPE clause.\n>Currently, LIKE is transformed to the \"~~\" operator in the parser. For\n>LIKE/ESCAPE, we should instead transform it to a three-parameter\n>function call. The rest of the implementation is likely to be trivial\n>(as is this parsing solution).\n\nWhile you're at it... :)\n\nWould there be anything like an ILIKE for a case insensitive like \nsearch? Or maybe insensitive over text/char/varchar datatypes?\n\nJust a thought...\n\nThomas\n\n-\n- Thomas Swan\n- Graduate Student - Computer Science\n- The University of Mississippi\n-\n- \"People can be categorized into two fundamental\n- groups, those that divide people into two groups\n- and those that don't.\"", "msg_date": "Sat, 05 Aug 2000 01:26:00 -0500", "msg_from": "Thomas Swan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE/ESCAPE implementation" }, { "msg_contents": "> Would there be anything like an ILIKE for a case insensitive like \n> search? Or maybe insensitive over text/char/varchar datatypes?\n\nWhat is ILIKE? afaik it is not in SQL9x, so is there any reason to have\nthat rather than the full regular expression case-insensitive operator\n(\"~*\") we already have?\n\n - Thomas\n", "msg_date": "Sat, 05 Aug 2000 06:59:42 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIKE/ESCAPE implementation" }, { "msg_contents": "> > Would there be anything like an ILIKE for a case insensitive like\n> > search? Or maybe insensitive over text/char/varchar datatypes?\n> \n> What is ILIKE?\n\nAs far as I remember it was introduced in Oracle. (I may be mistaken)\n\n> afaik it is not in SQL9x, so is there any reason to have\n> that rather than the full regular expression case-insensitive operator\n> (\"~*\") we already have?\n\nYes. They are. If you use RE you should escape lots of symbols. 
If you do not need\npower of RE ILIKE is really the best choice.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Sat, 5 Aug 2000 14:08:58 +0700", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE/ESCAPE implementation" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> What is ILIKE? afaik it is not in SQL9x, so is there any reason to have\n> that rather than the full regular expression case-insensitive operator\n> (\"~*\") we already have?\n\nJust that a lot of people have asked for it, over and over again ...\nsee the archives ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Aug 2000 22:37:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE/ESCAPE implementation " }, { "msg_contents": "> > What is ILIKE? afaik it is not in SQL9x, so is there any reason to have\n> > that rather than the full regular expression case-insensitive operator\n> > (\"~*\") we already have?\n> Just that a lot of people have asked for it, over and over again ...\n> see the archives ...\n\nI had thought it would be trivial to do ILIKE, but now I'm not sure how\nto handle the multi-byte case. It isn't sufficient to wrap the\nsingle-byte comparison arguments with tolower() is it??\n\nbtw, do the archives have a full discussion of the correct syntax for\nthis? I recall people asking for it, but since it is a non-standard\nfeature what implementation example should I follow? What alternatives\nare there? Is \"check the archives\" sufficient to produce a complete\ndesign discussion? 
What thread??\n\n - Thomas\n", "msg_date": "Sun, 06 Aug 2000 03:45:57 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIKE/ESCAPE implementation" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I had thought it would be trivial to do ILIKE, but now I'm not sure how\n> to handle the multi-byte case. It isn't sufficient to wrap the\n> single-byte comparison arguments with tolower() is it??\n\nI'd be inclined to force both strings to lower case as a whole and\nthen apply normal LIKE. Comments anyone?\n\n> I recall people asking for it, but since it is a non-standard\n> feature what implementation example should I follow? What alternatives\n> are there? Is \"check the archives\" sufficient to produce a complete\n> design discussion?\n\nI do not recall seeing a complete proposal, but wasn't someone just\nopining that Oracle has such a feature? If so, borrowing their spec\nseems the thing to do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Aug 2000 00:10:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE/ESCAPE implementation " }, { "msg_contents": "At 10:45 PM 8/5/2000, Thomas Lockhart wrote:\n> > > What is ILIKE? afaik it is not in SQL9x, so is there any reason to have\n> > > that rather than the full regular expression case-insensitive operator\n> > > (\"~*\") we already have?\n> > Just that a lot of people have asked for it, over and over again ...\n> > see the archives ...\n>\n>I had thought it would be trivial to do ILIKE, but now I'm not sure how\n>to handle the multi-byte case. It isn't sufficient to wrap the\n>single-byte comparison arguments with tolower() is it??\n>\n>btw, do the archives have a full discussion of the correct syntax for\n>this? I recall people asking for it, but since it is a non-standard\n>feature what implementation example should I follow? What alternatives\n>are there? 
Is \"check the archives\" sufficient to produce a complete\n>design discussion? What thread??\n\nI don't know... As far as syntax would go, I would follow the existing LIKE \noperator, doing a case insensitive operation.\n\n\n-\n- Thomas Swan\n- Graduate Student - Computer Science\n- The University of Mississippi\n-\n- \"People can be categorized into two fundamental\n- groups, those that divide people into two groups\n- and those that don't.\"", "msg_date": "Sat, 05 Aug 2000 23:14:52 -0500", "msg_from": "Thomas Swan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE/ESCAPE implementation" }, { "msg_contents": "> > I had thought it would be trivial to do ILIKE, but now I'm not sure how\n> > to handle the multi-byte case. 
It isn't sufficient to wrap the\n> > single-byte comparison arguments with tolower() is it??\n> I'd be inclined to force both strings to lower case as a whole and\n> then apply normal LIKE. Comments anyone?\n\nOK. \"Both strings to lower case as a whole\" doesn't seem to be something\nwhich is multibyte-enabled in our code. Am I just missing seeing some\nfeatures? istm that lots of our code falls over on MB strings...\n\n> I do not recall seeing a complete proposal, but wasn't someone just\n> opining that Oracle has such a feature? If so, borrowing their spec\n> seems the thing to do.\n\nAnyone have suggestions for a reference? Altavista on \"+ilike +oracle\"\ndoesn't seem to do it.\n\n - Thomas\n", "msg_date": "Sun, 06 Aug 2000 04:51:05 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIKE/ESCAPE implementation" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> I'd be inclined to force both strings to lower case as a whole and\n>> then apply normal LIKE. Comments anyone?\n\n> OK. \"Both strings to lower case as a whole\" doesn't seem to be something\n> which is multibyte-enabled in our code. Am I just missing seeing some\n> features?\n\nNot sure that it matters for multibyte, but for sure LOCALE ought to\nmake a difference. Consider German esstet (sp?) --- that beta-looking\nsymbol that lowercases to \"ss\". Do we do this correctly?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Aug 2000 01:03:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE/ESCAPE implementation " }, { "msg_contents": "> Not sure that it matters for multibyte, but for sure LOCALE ought to\n> make a difference. Consider German esstet (sp?) --- that beta-looking\n> symbol that lowercases to \"ss\". Do we do this correctly?\n\nafaict we do none of this. Using tolower() on a char* variable can not\npossibly do the right thing for multiple-byte character sets. 
Your\nexample (single byte to two bytes) can't work either.\n\nTatsuo and others: what is the state of MB for these cases? Should I\njust code the single-byte LOCALE solution for now, or do we have some\nother code I should be referring to?\n\n - Thomas\n", "msg_date": "Sun, 06 Aug 2000 05:38:06 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIKE/ESCAPE implementation" } ]
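Tom's suggestion in the thread above, to force both strings to lower case as a whole and then apply normal LIKE, can be sketched as follows. This is an illustrative Python model, not the backend implementation: the function names are invented, only the single-byte % and _ wildcards are modeled, the ESCAPE clause is omitted, and (as the thread notes for the German eszett) a plain lower() pass does not settle the multibyte and locale questions.

```python
import re

def sql_like(string, pattern):
    # Minimal LIKE matcher: translate the pattern into a regular
    # expression where % matches any sequence, _ matches any single
    # character, and everything else is literal.
    regex = "".join(
        ".*" if ch == "%" else "." if ch == "_" else re.escape(ch)
        for ch in pattern
    )
    return re.fullmatch(regex, string, re.DOTALL) is not None

def sql_ilike(string, pattern):
    # Tom's proposal: lower-case both operands as a whole, then reuse
    # the ordinary LIKE machinery unchanged.
    return sql_like(string.lower(), pattern.lower())
```

Under this model ILIKE needs no pattern machinery of its own; the case-insensitivity is entirely in the one-time lowering of both operands.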
[ { "msg_contents": "\nI noticed that the COALESCE function is implemented as a case statement,\nwith the result that:\n\n update t1 set f = Coalesce( (select fn from t2 x where x.f1 = t1.f1),\nt1.f1)\n\nhas the following plan:\n\nSeq Scan on t1 (cost=0.00..20.00 rows=1000 width=10)\n SubPlan\n -> Seq Scan on t2 x (cost=0.00..22.50 rows=10 width=4)\n -> Seq Scan on t2 x (cost=0.00..22.50 rows=1000 width=4)\n\n\nie. it *seems* to scan t2 twice, because the resulting CASE statement for\nthe subselect is:\n\n case when not (select fn from t2 x where x.f1 = t1.f1) is NULL then\n (select fn from t2 x where x.f1 = t1.f1)\n\t else\n t1.f1\n end\n\nwhich does seem to imply two executions of the same select statement.\n\nI realize that the standard says:\n\n 2) COALESCE (V(1), V(2)) is equivalent to the following <case\nspecification>:\n\n CASE WHEN V(1) IS NOT NULL THEN V(1) ELSE V(2)\nEND\n\n 3) \"COALESCE (V(1), V(2), . . . , V(n))\", for n >= 3, is\nequivalent\n to the following <case specification>:\n\n CASE\nWHEN V(1) IS NOT NULL THEN V(1) \n ELSE COALESCE (V(2), . . . , V(n)) \n END\n\nI was wondering if there was a reason that we interpret this literally,\nrather than implement a function? Or set a flag on the CaseExpr node to\nindicate that the 'result == whenClause', or some such.\n\nI am still hunting through the planner/optimizer to try to understand if\nthis is feasible, and would appreciate any suggestions...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 05 Aug 2000 17:07:46 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "COALESCE implementation question" }, { "msg_contents": "P.S. What's worse, I should have mentioned that the plan *with* indexes\nseems flawed:\n\nCreate Table t1(f1 int);\nCreate Table t1(f1 int, f2 int);\nCreate Unique Index t2f1 on t2(f1);\nCreate Unique Index t2f2 on t2(f2);\n\nexplain update t1 set f1 = Coalesce( (select f2 from t2 x where x.f1 =\nt1.f1), t1.f1);\nNOTICE: QUERY PLAN:\n\nSeq Scan on t1 (cost=0.00..20.00 rows=1000 width=10)\n SubPlan\n -> Index Scan using t2f1 on t2 x (cost=0.00..8.14 rows=10 width=4)\n -> Seq Scan on t2 x (cost=0.00..22.50 rows=1000 width=4)\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 05 Aug 2000 17:22:50 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: COALESCE implementation question" }, { "msg_contents": "At 17:22 5/08/00 +1000, Philip Warner wrote:\n>P.S. 
What's worse, I should have mentioned that the plan *with* indexes\n>seems flawed:\n>\n>Create Table t1(f1 int);\n>Create Table t1(f1 int, f2 int);\n>Create Unique Index t2f1 on t2(f1);\n>Create Unique Index t2f2 on t2(f2);\n>\n>explain update t1 set f1 = Coalesce( (select f2 from t2 x where x.f1 =\n>t1.f1), t1.f1);\n>NOTICE: QUERY PLAN:\n>\n>Seq Scan on t1 (cost=0.00..20.00 rows=1000 width=10)\n> SubPlan\n> -> Index Scan using t2f1 on t2 x (cost=0.00..8.14 rows=10 width=4)\n> -> Seq Scan on t2 x (cost=0.00..22.50 rows=1000 width=4)\n>\n\nOddly enough, I now think that the EXPLAIN output is a lie; it seems that\nit never does a sequential scan. So I am now even more confused...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 05 Aug 2000 18:17:11 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: COALESCE implementation question" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> explain update t1 set f1 = Coalesce( (select f2 from t2 x where x.f1 =\n> t1.f1), t1.f1);\n> NOTICE: QUERY PLAN:\n\n> Seq Scan on t1 (cost=0.00..20.00 rows=1000 width=10)\n> SubPlan\n> -> Index Scan using t2f1 on t2 x (cost=0.00..8.14 rows=10 width=4)\n> -> Seq Scan on t2 x (cost=0.00..22.50 rows=1000 width=4)\n\nThis is a bug caused by interaction between two planning passes run\non the same Query node. The parser thinks it's cool to generate a\nCASE parsetree with multiple paths to the same sub-select Query node,\nbut in fact it is not cool because planning destructively alters the\nQuery node contents. 
I'm amazed it didn't crash, to tell the truth.\n\nI have a patch but haven't applied it yet (been offline for most of\ntwo days due to telco idiocy :-().\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Aug 2000 22:27:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COALESCE implementation question " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> I realize that the standard says:\n\n> 2) COALESCE (V(1), V(2)) is equivalent to the following <case\n> specification> :\n> CASE WHEN V(1) IS NOT NULL THEN V(1) ELSE V(2) END\n\n> I was wondering if there was a reason that we interpret this literally,\n> rather than implement a function?\n\nWell, the standard is perfectly clear, isn't it? If V(1) has side\neffects then trying to optimize this into just one evaluation of V(1)\nwill generate non-spec-compliant results.\n\nI'd have to agree that two evaluations are pretty annoying, though,\nand I wonder whether the spec authors *really* meant to demand\ndouble evaluation of the \"winning\" case item. Can anyone check\nwhether Oracle and other DBMSes perform double evaluation?\n\nBTW, the \"BETWEEN\" expression has exactly the same issue.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Aug 2000 22:36:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COALESCE implementation question " }, { "msg_contents": "At 22:36 5/08/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> I realize that the standard says:\n>\n>> 2) COALESCE (V(1), V(2)) is equivalent to the following <case\n>> specification> :\n>> CASE WHEN V(1) IS NOT NULL THEN V(1) ELSE V(2) END\n>\n>> I was wondering if there was a reason that we interpret this literally,\n>> rather than implement a function?\n>\n>Well, the standard is perfectly clear, isn't it? 
If V(1) has side\n>effects then trying to optimize this into just one evaluation of V(1)\n>will generate non-spec-compliant results.\n\nAt least with the new function manager, if I feel te need I can write a\n'CoalesceValues' function (at least for fixed numbers of parameters).\n\n\n>I'd have to agree that two evaluations are pretty annoying, though,\n>and I wonder whether the spec authors *really* meant to demand\n>double evaluation of the \"winning\" case item. Can anyone check\n>whether Oracle and other DBMSes perform double evaluation?\n\nIt's very hard to believe that is what they meant, or even if they even\nconsidered the ramifications of their proposed implementation (I'm not\nreally sure why they chose to describe the implementation and specifically\nto implement a 'function' as a case statement). eg. the result of the first\nexecution *could* mean that the second execution returns NULL - fine for\nCASE, lousy for COALESCE. In fact it's pretty easy to write a function that\ncauses COALESCE(f(), 1) to return NULL...\n\nSadly, my usual yard stick (Dec/RDB) seems to evaluate twice (at least\nthat's what it's planner says). And dumping a view with a coalesce\nstatement produces a CASE statement, so it probably has no choice.\n\nJust seems daft to me.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 06 Aug 2000 13:22:03 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: COALESCE implementation question " }, { "msg_contents": "At 22:27 5/08/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> explain update t1 set f1 = Coalesce( (select f2 from t2 x where x.f1 =\n>> t1.f1), t1.f1);\n>> NOTICE: QUERY PLAN:\n>\n>> Seq Scan on t1 (cost=0.00..20.00 rows=1000 width=10)\n>> SubPlan\n>> -> Index Scan using t2f1 on t2 x (cost=0.00..8.14 rows=10 width=4)\n>> -> Seq Scan on t2 x (cost=0.00..22.50 rows=1000 width=4)\n>\n>This is a bug caused by interaction between two planning passes run\n>on the same Query node. The parser thinks it's cool to generate a\n>CASE parsetree with multiple paths to the same sub-select Query node,\n>but in fact it is not cool because planning destructively alters the\n>Query node contents. I'm amazed it didn't crash, to tell the truth.\n>\n>I have a patch but haven't applied it yet (been offline for most of\n>two days due to telco idiocy :-().\n\nThanks for this; I must admit I was very surprised not to get a response\nwithing 24 hours! Is there any chance of sending me the patch - I have been\nlooking at the sources for a while now, and it would be nice to see the\nanswer...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 06 Aug 2000 13:23:29 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: COALESCE implementation question " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n>> Well, the standard is perfectly clear, isn't it? If V(1) has side\n>> effects then trying to optimize this into just one evaluation of V(1)\n>> will generate non-spec-compliant results.\n\n> At least with the new function manager, if I feel te need I can write a\n> 'CoalesceValues' function (at least for fixed numbers of parameters).\n\nMmm ... not really. You could detect nulls all right, but a function-\nbased version of COALESCE would evaluate *all* its arguments exactly\nonce, which is certainly wrong. If you don't stop evaluating with\nthe one you decide to return, you are neither compliant with the spec\nnor safe (later expressions might yield errors if evaluated!)\n\n> Sadly, my usual yard stick (Dec/RDB) seems to evaluate twice (at least\n> that's what it's planner says). And dumping a view with a coalesce\n> statement produces a CASE statement, so it probably has no choice.\n\nSounds like they do it the same as we do, ie, expand COALESCE into the\nspecified CASE equivalent on sight.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Aug 2000 23:28:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COALESCE implementation question " }, { "msg_contents": "At 23:28 5/08/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>>> Well, the standard is perfectly clear, isn't it? 
If V(1) has side\n>>> effects then trying to optimize this into just one evaluation of V(1)\n>>> will generate non-spec-compliant results.\n>\n>> At least with the new function manager, if I feel te need I can write a\n>> 'CoalesceValues' function (at least for fixed numbers of parameters).\n>\n>Mmm ... not really. You could detect nulls all right, but a function-\n>based version of COALESCE would evaluate *all* its arguments exactly\n>once, which is certainly wrong.\n\nGood point. Although in the specific case (two args, one of them constant),\nit's not an issue. I guess I'll just have to live with double-evaluation,\nand a COALESCE than can return NULL. Grumble grumble...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 06 Aug 2000 14:12:38 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: COALESCE implementation question " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n>> This is a bug caused by interaction between two planning passes run\n>> on the same Query node. The parser thinks it's cool to generate a\n>> CASE parsetree with multiple paths to the same sub-select Query node,\n>> but in fact it is not cool because planning destructively alters the\n>> Query node contents. I'm amazed it didn't crash, to tell the truth.\n>> \n>> I have a patch but haven't applied it yet (been offline for most of\n>> two days due to telco idiocy :-().\n\n> Thanks for this; I must admit I was very surprised not to get a response\n> withing 24 hours! 
Is there any chance of sending me the patch - I have been\n> looking at the sources for a while now, and it would be nice to see the\n> answer...\n\nWell, I'm not *proud* of this patch, it's pretty much brute-force.\nBut it will do until we get around to redesigning querytrees.\nSee src/backend/optimizer/plan/subselect.c.\n\nI imagine the diff would apply to 7.0.* if you want to do that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Aug 2000 00:50:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COALESCE implementation question " } ]
[ { "msg_contents": "How about it? The \";\" and \":\" operators were deprecated for 7.0 (and are\nlikely to be little-used anyway). For the 7.0 release, they print a\nnasty warning every time they are used. Can I remove them for 7.1?\n\n - Thomas\n", "msg_date": "Sun, 06 Aug 2000 04:24:10 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "OK to remove operators for exp() and ln()" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> How about it? The \";\" and \":\" operators were deprecated for 7.0 (and are\n> likely to be little-used anyway). For the 7.0 release, they print a\n> nasty warning every time they are used. Can I remove them for 7.1?\n\nI had 'em on my hit list too. Ready ... aim ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Aug 2000 00:57:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OK to remove operators for exp() and ln() " }, { "msg_contents": "On Sun, 6 Aug 2000, Tom Lane wrote:\n\n> Thomas Lockhart <[email protected]> writes:\n> > How about it? The \";\" and \":\" operators were deprecated for 7.0 (and are\n> > likely to be little-used anyway). For the 7.0 release, they print a\n> > nasty warning every time they are used. Can I remove them for 7.1?\n> \n> I had 'em on my hit list too. Ready ... aim ...\n\n... fire\n\n\n", "msg_date": "Sun, 6 Aug 2000 15:10:16 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OK to remove operators for exp() and ln() " }, { "msg_contents": "> > I had 'em on my hit list too. Ready ... aim ...\n> ... fire\n\nAlready hit and sunk.\n\n - Thomas\n", "msg_date": "Mon, 07 Aug 2000 00:00:19 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OK to remove operators for exp() and ln()" } ]
[ { "msg_contents": "\nI think my last message to Tom (and the list)\nabout the foreign key stuff and oids ended up \nin /dev/null due to a problem on the local \nmailer.\n\nTom had suggested storing a more \nunderstandable form of the foreign key constraint\nto make dumping more reasonable in its own table.\nI'd guess like the src stored for check constraints.\nHowever, I noticed a few problems with this and\nwhile thinking about it I had a few germs of\nideas which aren't any kind of proposal yet, but\nI thought someone might be interested in them.\n\nThe problem with storing source is that it doesn't\nget changed when things change. Try altering\na column name that has a check constraint, then\ndump the database. I don't think this is the\nresponsibility of the dumper. If we store source\nwe should be guaranteeing it's correct. \nPlus, right now for FK constraints we do something\nspecific to keep track of the other table referenced\nso we can remove the constraints if the table goes \naway. 
But, what happens when we allow subqueries\nin check constraints, etc...\n\nSo, what I was thinking is, that if we have another\ntable to store this kind of constraint info, it\nshould probably store information for all constraints.\nI was thinking two tables, one (say pg_constraint)\nwhich stores basic information about the constraint\n(what type, the constraint name, primarily constrained\ntable, maybe owner if constraints have owners in SQL)\nand a source form (see more below).\nThe second table stores references from this constraint.\nSo any table, column, index, etc. is stored here.\nProbably something of the form constraintoid, \ntype of thing being referenced (the oid of the table?),\nthe oid of the referenced thing and a number.\n\nThe number comes into the source form that's stored.\nAnywhere that we're referencing something that a name\nis insufficient for (like a column name or table name)\nwe put something into the source form that says \nreferencing column n of the referenced thing m.\n\nThen we create something like \nformat_constraint(constraintoid) which gives out\nan SQL compliant version of the constraint.\n\nAnd it means that if we deleted something, we know fairly \neasily whether or not it is being referenced by some\nconstraint somewhere without writing separate code for\nfk constraints and check constraints, etc. And\nrenaming wouldn't be a problem.\n\n- There are some problems I see right off both conceptually\nand in implementation, but I thought someone might be able \nto come up with a better idea once it was presented (even \nif it's just a \"not worth the effort\" :) )\n\nOne of the problems I see is that if taken to its end,\nwould you store function oids here? 
If so, that might\nmake it harder to allow a drop function/create function\nto ever work transparently in the future.\nPlus, I'm not even really sure if it would be reasonable\nto get a source form like I was thinking of for check\nconstraints really.\n\n\n", "msg_date": "Sun, 6 Aug 2000 10:29:19 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": true, "msg_subject": "Constraint stuff" }, { "msg_contents": "Stephan Szabo <[email protected]> writes:\n> Tom had suggested storing a more \n> understandable form of the foreign key constraint\n> to make dumping more reasonable in its own table.\n> I'd guess like the src stored for check constraints.\n\nI wasn't actually thinking of storing source, but rather precompiled\nexpressions (as I remarked awhile ago, I think pg_relcheck's rcsrc\ncolumn is dead weight; we could and should generate the value on demand\nby reverse-listing rcbin instead). This gets you away from\nrename-induced problems since everything is table OIDs, attribute column\nnumbers, operator and function OIDs, etc.\n\nHowever, digging those references out of the expression tree is a little\nbit painful; you're right that we shouldn't expect applications to do\nthat for themselves. We could store an additional list of referenced\nitems. We wouldn't necessarily have to store that explicitly either,\nthough --- functions to say \"is this OID referenced in this stored\nexpression\" or perhaps \"give me an array of all function OIDs in this\nexpression\" would get the job done AFAICS.\n\n> One of the problems I see is that if taken to its end,\n> would you store function oids here? If so, that might\n> make it harder to allow a drop function/create function\n> to ever work transparently in the future.\n\nI don't think we should worry about that. 
What's actually needed IMHO\nis an \"ALTER FUNCTION\" command that allows you to replace the body of\nan existing function, and perhaps change its name, but NOT its type\nsignature (result type and number/types of arguments). Changing the\nsignature is inherently not a transparent operation because it'd\ninvalidate stored expressions that use the function. ALTER would let\nyou make safe changes to a function without changing its OID and thus\nwithout invalidating references-by-OID.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Aug 2000 12:32:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Constraint stuff " }, { "msg_contents": "Tom Lane wrote:\n> \n> Stephan Szabo <[email protected]> writes:\n> > Tom had suggested storing a more\n> > understandable form of the foreign key constraint\n> > to make dumping more reasonable in its own table.\n> > I'd guess like the src stored for check constraints.\n\n...\n\n> I don't think we should worry about that. What's actually needed IMHO\n> is an \"ALTER FUNCTION\" command that allows you to replace the body of\n> an existing function, and perhaps change its name, but NOT its type\n> signature (result type and number/types of arguments). \n\nIIRC Oracle allows the syntax CREATE OR REPLACE in many places, for \nexample for changing VIEWS and PROCEDURES without affecting the things \ndependent on them.\n\nCREATE OR REPLACE works also for not-yet-existing function which ALTER \nprobably would not.\n\n> Changing the\n> signature is inherently not a transparent operation because it'd\n> invalidate stored expressions that use the function. 
ALTER would let\n> you make safe changes to a function without changing its OID and thus\n> without invalidating references-by-OID.\n> \n\n----------\nHannu\n", "msg_date": "Mon, 07 Aug 2000 20:23:44 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Constraint stuff" }, { "msg_contents": "On Mon, 7 Aug 2000, Tom Lane wrote:\n\n> Stephan Szabo <[email protected]> writes:\n> > Tom had suggested storing a more \n> > understandable form of the foreign key constraint\n> > to make dumping more reasonable in its own table.\n> > I'd guess like the src stored for check constraints.\n> \n> I wasn't actually thinking of storing source, but rather precompiled\n> expressions (as I remarked awhile ago, I think pg_relcheck's rcsrc\n\nI guess you could store the fk_constraint node that is generated for fk\nconstraints, but that's not really an expression... I think I must\nbe missing something, because I can't quite see what the precompiled\nexpression for an fk constraint would be...\n\n> However, digging those references out of the expression tree is a little\n> bit painful; you're right that we shouldn't expect applications to do\n> that for themselves. We could store an additional list of referenced\n> items. We wouldn't necessarily have to store that explicitly either,\n> though --- functions to say \"is this OID referenced in this stored\n> expression\" or perhaps \"give me an array of all function OIDs in this\n> expression\" would get the job done AFAICS.\n\nThe reason I was thinking of storing things was also so you could do\nthings like: is this oid stored in any constraint. 
For example,\nI'm removing a column, is there any constraint that references this\ncolumn, etc, rather than having to code stuff for all of the special\ncases in all places that might need it.\n\n", "msg_date": "Mon, 7 Aug 2000 11:13:08 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Constraint stuff " }, { "msg_contents": "At 08:23 PM 8/7/00 +0300, Hannu Krosing wrote:\n\n>IIRC Oracle allows the syntax CREATE OR REPLACE in many places\n\nYes, Oracle does allow this. \n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 07 Aug 2000 11:33:18 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Constraint stuff" } ]
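[Editor's sketch] The two-table layout Stephan describes in the thread above could look roughly like the following SQL. This is only an illustrative sketch of the proposal as worded in the emails; every table and column name here (pg_constraint_ref, conoid, refno, and so on) is invented for illustration and is not an actual PostgreSQL catalog definition:

```sql
-- Hypothetical sketch of the proposed constraint catalogs.
-- One row per constraint: its type, name, primarily constrained table,
-- and a source form that uses numbered placeholders instead of names.
CREATE TABLE pg_constraint (
    conoid    oid,    -- identifies this constraint
    conname   name,   -- constraint name
    contype   "char", -- 'c' = check, 'f' = foreign key, ...
    conrelid  oid,    -- oid of the primarily constrained table
    consrc    text    -- source form; "column n of thing m" placeholders
);

-- One row per object (table, column, index, ...) that the constraint
-- references, keyed by the number used in the placeholders above.
CREATE TABLE pg_constraint_ref (
    conoid     oid,   -- the constraint doing the referencing
    reftype    oid,   -- what kind of thing is referenced
    refobjoid  oid,   -- the referenced object itself
    refno      int2   -- the number m used by placeholders in consrc
);
```

With such a layout, a question like "does any constraint still reference this column?" becomes a single scan of the reference table rather than per-constraint-type code, and renames never invalidate the stored constraint text, which is the point Stephan makes above.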
[ { "msg_contents": "I've bumped the system catalog version number and committed changes\nwhich:\n o implement LIKE/ESCAPE and related clauses\n o implement a case-insensitive ILIKE and related clauses\n o allow embedded double-quotes in double-quoted identifiers\n o implement CREATE/DROP SCHEMA as a synonym for CREATE DATABASE\n\nAll (more or less ;) per SQL99 standard, with the SCHEMA stuff just a\nstopgap (afaik others are planning real work in this area). A few other\nfixups too...\n\n - Thomas\n\n From the CVS logs:\n\nSupport SQL99 embedded double-quote syntax for quoted identifiers.\nAllow this in the parser and in pg_dump, but it is probably not enough\n for a complete solution. \nBetter to have the feature started than never here.\n\nImplement LIKE/ESCAPE. Change parser to use like()/notlike() \n rather than the \"~~\" operator; this made it easy to add ESCAPE\nfeatures.\nImplement ILIKE, NOT ILIKE, and the ESCAPE clause for them. \n afaict this is not MultiByte clean, but lots of other stuff isn't\neither.\nFix up underlying support code for LIKE/NOT LIKE. \n Things should be faster and do not require internal string copying.\nUpdate regression test to add explicit checks for \n LIKE/NOT LIKE/ILIKE/NOT ILIKE.\nRemove colon and semi-colon operators as threatened in 7.0.\nImplement SQL99 COMMIT/AND NO CHAIN. \n Throw elog(ERROR) on COMMIT/AND CHAIN per spec \n since we don't yet support it.\nImplement SQL99 CREATE/DROP SCHEMA as equivalent to CREATE DATABASE.\n This is only a stopgap or demo since schemas will have another \n implementation soon.\nRemove a few unused production rules to get rid of warnings \n which crept in on the last commit. \nFix up tabbing in some places by removing embedded spaces.\n", "msg_date": "Sun, 06 Aug 2000 18:16:19 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "LIKE/ESCAPE et al, initdb required!" } ]
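[Editor's sketch] A few queries illustrating the pattern-matching features described in the commit notes above. These follow standard SQL99 LIKE/ESCAPE semantics plus the ILIKE extension; treat them as a sketch of intended behavior rather than a transcript of an actual session:

```sql
-- '%' matches any substring, '_' matches exactly one character.
SELECT 'lockhart' LIKE 'lock%';      -- true
SELECT 'lockhart' LIKE 'lockhar_';   -- true

-- ILIKE is the new case-insensitive variant.
SELECT 'LOCKHART' LIKE 'lock%';      -- false
SELECT 'LOCKHART' ILIKE 'lock%';     -- true

-- ESCAPE lets a pattern match a literal '%' or '_'.
-- Here '!' is declared as the escape character, so '!%' means a literal '%'.
SELECT '10% discount' LIKE '10!% %' ESCAPE '!';   -- true

-- SQL99 embedded double quotes: "" inside a quoted identifier is a literal ".
CREATE TABLE "odd""name" (i int);    -- creates a table named: odd"name
```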
[ { "msg_contents": "Is it just me (or my new dial-in drop) or is the scp server daemon not\ncompletely working on hub.org? I get a password prompt but it never\nstarts transferring files.\n\n - Thomas\n", "msg_date": "Sun, 06 Aug 2000 18:19:24 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "scp daemon working?" }, { "msg_contents": "On Sun, 6 Aug 2000, Thomas Lockhart wrote:\n\n> Is it just me (or my new dial-in drop) or is the scp server daemon not\n> completely working on hub.org? I get a password prompt but it never\n> starts transferring files\n\nYou?\n\n> scp udm-diff.gz hub.org:udm-diff.gz\nudm-diff.gz 100% |*****************************| 1320 00:00 \n\n\n", "msg_date": "Sun, 6 Aug 2000 16:08:52 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scp daemon working?" }, { "msg_contents": "> You?\n\nMaybe. It is working now when I explicitly try \"scp1\". Pretty sure I\ntried that earlier.\n\nbtw, when using scp2, things seems to stop when my side requests X11\nforwarding.\n\n - Thomas\n", "msg_date": "Mon, 07 Aug 2000 00:39:01 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: scp daemon working?" }, { "msg_contents": "On Mon, 7 Aug 2000, Thomas Lockhart wrote:\n\n> > You?\n> \n> Maybe. It is working now when I explicitly try \"scp1\". Pretty sure I\n> tried that earlier.\n> \n> btw, when using scp2, things seems to stop when my side requests X11\n> forwarding.\n\nthat one I have no way to test, apparently ... FreeBSD has 'ssh' only,\nwith 'ssh -2' forcing ssh2 mode ... but I can't find anything similar for\nscp. Trying ssh -2 though, I can connect and get logged in ...\n\n\n", "msg_date": "Sun, 6 Aug 2000 22:07:53 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scp daemon working?" 
}, { "msg_contents": "The Hermit Hacker wrote:\n> \n> that one I have no way to test, apparently ... FreeBSD has 'ssh' only,\n> with 'ssh -2' forcing ssh2 mode ... but I can't find anything similar for\n> scp.\n\nThe only way I've been able to influence scp is through .ssh/config\n\n> Trying ssh -2 though, I can connect and get logged in ...\n", "msg_date": "Mon, 07 Aug 2000 08:51:47 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: scp daemon working?" } ]
[ { "msg_contents": "> I suppose we could implement the conversion as \"float8in(float4out(x))\"\n> instead of \"(double) x\" but it'd be several orders of magnitude slower,\n> as well as being *less* useful to those who know what they're doing with\n> float math (since the result would actually be a less-precise conversion\n> except in cases where the intended value has a short decimal\n> representation).\n\nWe only need to maintain the lower-order bit(s). Seems this could be done\na lot easier than by an ascii in-between.\n\nIs there a reason we can't perform the conversion and then copy the\nlow-order bits manually, with some bit-shifting and masking?\n\nIan\n\n", "msg_date": "Sun, 6 Aug 2000 14:36:19 -0700 (PDT)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Trouble with float4 afterupgrading from\n\t6.5.3 to 7.0.2" }, { "msg_contents": "> > Is there a reason we can't perform the conversion and then copy the\n> > low-order bits manually, with some bit-shifting and masking?\n> \n> *What* low-order bits? The fundamental problem is we don't have 'em.\n\nOK, we represent 10.1 (decimal) with 1010.000110011..., repeating the 0011\npattern. In floating point representation, we say this is 2^3 *\n1.010000110011..., repeating the 0011 pattern.\n\nBut this doesn't work very well, because when we print it out it won't be\nexact, it will be some decimal number ending in a 0 or a 5. So when we\nread this out in decimal, we get 10.099999995, with a variable number of\n9's depending on the precision we use.\n\nWhen we print this number out, we perform the decimal conversion, and then\ntruncate the last decimal digit and round.\n\nSo I guess the question is, why can't we perform 4-byte float -> 8-byte\nfloat conversion via a decimal conversion, irrespective of storing the\nthing in ASCII.\n\nMaybe the decimal conversion is too costly? 
Perhaps we could 1) mark 4-byte\nfloats with a flag of some kind indicating whether or not the decimal\nconversion is necessary, and 2) avoid this conversion wherever possible,\nincluding giving people a warning when they use float4s in their tuples.\n\nOr maybe I'm just being dumb.\n\nIan\n\n\n\n", "msg_date": "Sun, 6 Aug 2000 17:04:28 -0700 (PDT)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Trouble with float4 afterupgrading from\n\t6.5.3 to 7.0.2" }, { "msg_contents": "After trying to upgrade PostgreSQL from 6.5.3 to 7.0.2 I got into trouble with float4. I'll try to explain it by example.\n\npostgres@lee: ~$ createdb -E LATIN1 -e testfloat\nCREATE DATABASE \"testfloat\" WITH ENCODING = 'LATIN1'\nCREATE DATABASE\npostgres@lee: ~$ psql testfloat\ntestfloat=# create table tftbl (f1 float4, f2 float4);\nCREATE\ntestfloat=# insert into tftbl values (10, 20);\nINSERT 212682 1\ntestfloat=# select * from tftbl;\n f1 | f2\n----+----\n 10 | 20\n(1 row)\n\ntestfloat=# update tftbl set f1=10.1 where f1=10 and f2=20;\nUPDATE 1\ntestfloat=# update tftbl set f2=20.2 where f1=10.1 and f2=20;\nUPDATE 0\ntestfloat=# select * from tftbl;\n f1 | f2\n------+----\n 10.1 | 20\n(1 row)\n\ntestfloat=# update tftbl set f2=20.2 where f1=float4(10.1) and f2=20;\nUPDATE 1\ntestfloat=# select * from tftbl;\n f1 | f2\n------+------\n 10.1 | 20.2\n(1 row)\n\nIn my real client application (Windows 98, Borland C++ Builder 5.0, BDE 5.1.1.1, PostODBC 06.50.0000) I cannot in all cases use expressions like f1=float4(10.1) instead of simple f1=10.1 because BDE and PostODBC construct queries by themselves when I, for example, update tables from BCB components (BDE complains, that another user changed the record, while I am the only user at the time). They use f1=10.1-like format. 
With PostgreSQL 6.5.3 I have no problems of this kind.\n\nPostgreSQL lives in Debian Linux (woody), kernel 2.2.14, libc6 2.1.3, locales 2.1.3, here is output of locale:\nlee:~# locale\nLANG=C\nLC_CTYPE=\"C\"\nLC_NUMERIC=\"C\"\nLC_TIME=\"C\"\nLC_COLLATE=\"C\"\nLC_MONETARY=\"C\"\nLC_MESSAGES=\"C\"\nLC_ALL=\n\nAny help, tip or hint would be appreciated.\n\nThank you, Mikhail.\n\n\n\n\n", "msg_date": "Mon, 7 Aug 2000 17:04:27 +0600", "msg_from": "\"Romanenko Mikhail\" <[email protected]>", "msg_from_op": false, "msg_subject": "Trouble with float4 after upgrading from 6.5.3 to 7.0.2" }, { "msg_contents": "\"Romanenko Mikhail\" <[email protected]> writes:\n> testfloat=# update tftbl set f1=10.1 where f1=10 and f2=20;\n> UPDATE 1\n> testfloat=# update tftbl set f2=20.2 where f1=10.1 and f2=20;\n> UPDATE 0\n\nThe second update is failing to find any tuple that satisfies f1 = 10.1,\nbecause f1 is a float4 variable whereas 10.1 is implicitly a float8\nconstant. 6.5 also treated 10.1 as float8, but managed to find equality\nanyway.\n\nI think this change in behavior is my fault :-(. About a year ago\nI cleaned up some ugly coding in float.c and (without thinking about\nit very carefully) changed float48eq and related operators so that\nfloat4-vs-float8 comparisons are done in float8 arithmetic not float4.\nThe old code truncated the double input down to float and did a float\nequality check, while the new code promotes the float input to double\nand does a double-precision comparison.\n\nThis behavior is arguably more correct than the old way from a purely\nmathematical point of view, but now that I think about it, it's not\nclear that it's more useful than the old way. 
In particular, in an\nexample like the above, it's now impossible for any float4 value to be\nconsidered exactly equal to the float8 constant 10.1, because the float4\nvalue just hasn't got the right low-order bits after widening.\n\nPerhaps the old way of considering equality only to float accuracy\nis more useful, even though it opens us up to problems like overflow\nerrors in \"float4var = 1e100\". Comments anyone?\n\nA general comment on your table design though: anyone who expects exact\nequality tests on fractional float values to succeed is going to get\nburnt sooner or later. If you must use this style of coding then\nI recommend using numeric fields not float fields, and certainly not\nfloat4 fields.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Aug 2000 11:35:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trouble with float4 after upgrading from 6.5.3 to 7.0.2 " }, { "msg_contents": "At 11:35 7/08/00 -0400, Tom Lane wrote:\n>\n>Perhaps the old way of considering equality only to float accuracy\n>is more useful, even though it opens us up to problems like overflow\n>errors in \"float4var = 1e100\". Comments anyone?\n>\n\nThe following frightened me a little:\n\npjw=# select float4(10.1);\n float4\n--------\n 10.1\n(1 row)\n\npjw=# select float8(float4(10.1));\n float8\n------------------\n 10.1000003814697\n(1 row)\n\n\nI would have expected the latter to be at worst 10.10000000000000 +/-\n.00000000000001.\n\nAm I missing something? \n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 08 Aug 2000 01:57:18 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Trouble with float4 after\n\tupgrading from 6.5.3 to 7.0.2" }, { "msg_contents": "> Perhaps the old way of considering equality only to float accuracy\n> is more useful, even though it opens us up to problems like overflow\n> errors in \"float4var = 1e100\". Comments anyone?\n\nI would not have anticipated this either. I agree that downconverting to\nfloat4 is the right solution.\n\nPossible overflow errors can be checked in advance using the macros or\nroutines already there. This may be an example of why those could be A\nGood Thing in some instances.\n\n - Thomas\n", "msg_date": "Mon, 07 Aug 2000 16:04:54 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Trouble with float4 after upgrading from\n\t6.5.3 to 7.0.2" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> pjw=# select float8(float4(10.1));\n> float8\n> ------------------\n> 10.1000003814697\n> (1 row)\n\n> I would have expected the latter to be at worst 10.10000000000000 +/-\n> .00000000000001.\n\nfloat4 is good to about 7 decimal digits (24 mantissa bits) on\nIEEE-standard machines. 
Thus the above result is actually closer\nthan you have any right to expect.\n\nDon't they teach people about float arithmetic in CS 101 anymore?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Aug 2000 12:11:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Trouble with float4 after upgrading from 6.5.3 to\n\t7.0.2" }, { "msg_contents": "> I would have expected the latter to be at worst 10.10000000000000 +/-\n> .00000000000001.\n> Am I missing something?\n\nWell, yes :)\n\n10.1 can't be represented exactly, so the float8 representation has bits\nset way down at the low end of the mantissa. When converting to float4\nthose low bits get rounded up or down into the lowest bit of the float4\nrepresentation. At that point, you have lost knowledge that this ever\nwas supposed to be *exactly* 10.1. And when converting back to float8,\nthat float4 low bit becomes a middle-range bit in the float8\nrepresentation, with all the bits underneath that zeroed.\n\nBack in the old days, before printf() implementations settled down, you\nwould be reminded of this any time you did anything, since just\nassigning 10.1 and then printing it out would give you some goofy\n10.099999999998 or 10.10000000001 (don't count the number of digits here\ntoo closely, they are only qualitatively correct).\n\n - Thomas\n", "msg_date": "Mon, 07 Aug 2000 16:12:59 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Trouble with float4 afterupgrading from\n\t6.5.3 to 7.0.2" }, { "msg_contents": "At 12:11 7/08/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> pjw=# select float8(float4(10.1));\n>> float8\n>> ------------------\n>> 10.1000003814697\n>> (1 row)\n>\n>> I would have expected the latter to be at worst 10.10000000000000 +/-\n>> .00000000000001.\n>\n>float4 is good to about 7 decimal digits (24 mantissa bits) on\n>IEEE-standard machines. 
Thus the above result is actually closer\n>than you have any right to expect.\n>\n>Don't they teach people about float arithmetic in CS 101 anymore?\n>\n\nNo idea. It's a couple of decades since I did it.\n\nI wasn't complaining about the float4 accuracy; I was complaining about the\nway it was converted to float8. It seems more intuitive to zero-extend base\n10 zeros...\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 08 Aug 2000 02:15:45 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Trouble with float4 after\n\tupgrading from 6.5.3 to 7.0.2" }, { "msg_contents": "At 16:12 7/08/00 +0000, Thomas Lockhart wrote:\n>> I would have expected the latter to be at worst 10.10000000000000 +/-\n>> .00000000000001.\n>> Am I missing something?\n>\n>10.1 can't be represented exactly, so the float8 representation has bits\n>set way down at the low end of the mantissa. When converting to float4\n>those low bits get rounded up or down into the lowest bit of the float4\n>representation. At that point, you have lost knowledge that this ever\n>was supposed to be *exactly* 10.1. 
And when converting back to float8,\n>that float4 low bit becomes a middle-range bit in the float8\n>representation, with all the bits underneath that zeroed.\n>\n\nNow I understand, but it doesn't quite make sense given what was displayed.\nThe float4 value is *displayed* as 10.1, not 10.1000001, so I had assumed\nthat there was a level of either accuracy or display rounding happening.\nWhen this value is converted to float8, I hoped that the result would be\nthe same as:\n\n    Cast( Cast(f4val as varchar(32)) as float8)\n\nMaybe this hope is naive, but it is a lot more useful than the current\nsituation. But now that I understand what is happening, I see that (short\nof varchar conversions!), it is probably quite hard to do since we can't\ntell the 'correct' value.\n\n\n----------------------------------------------------------------\nPhilip Warner                    |     __---_____\nAlbatross Consulting Pty. Ltd.   |----/       -  \\\n(A.C.N. 008 659 498)             |          /(@)   ______---_\nTel: (+61) 0500 83 82 81         |                 _________  \\\nFax: (+61) 0500 83 82 82         |                 ___________ |\nHttp://www.rhyme.com.au          |                /           \\|\n                                 |    --________--\nPGP key available upon request,  |  /\nand from pgp5.ai.mit.edu:11371   |/\n", "msg_date": "Tue, 08 Aug 2000 02:28:37 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Trouble with float4 afterupgrading\n\tfrom 6.5.3 to 7.0.2" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> Now I understand, but it doesn't quite make sense given what was displayed.\n> The float4 value is *displayed* as 10.1, not 10.1000001, so I had assumed\n> that there was a level of either accuracy or display rounding happening.\n\nIn float4-to-ASCII, yes. Modern printf implementations have some\nheuristics about the actual accuracy of float4 and float8 and where they\nought to round off the printed result accordingly. 
But float4 to float8\nis normally done just by appending zeroes to the mantissa.\n\nI suppose we could implement the conversion as \"float8in(float4out(x))\"\ninstead of \"(double) x\" but it'd be several orders of magnitude slower,\nas well as being *less* useful to those who know what they're doing with\nfloat math (since the result would actually be a less-precise conversion\nexcept in cases where the intended value has a short decimal\nrepresentation).\n\nAfter thinking about it some more, I'm of the opinion that the current\nbehavior of float48eq and friends is the right thing, and that people\nwho expect 10.1 to be an exact value should be told to use type NUMERIC.\nWe should not kluge up the behavior of the float operations to try to\nmake it look like inexact values are exact. That will just cause\nfailures in other situations.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Aug 2000 12:53:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Trouble with float4 afterupgrading from 6.5.3 to\n\t7.0.2" }, { "msg_contents": "At 02:15 AM 8/8/00 +1000, Philip Warner wrote:\n\n>I wasn't complaining about the float4 accuracy; I was complaining about the\n>way it was converted to float8. It seems more intuitive to zero-extend base\n>10 zeros...\n\nYou're right, it absolutely is more intuitive - in a base-10 representation,\ni.e. NUMERIC. A float4 is a binary number, and the only zeros available\nare binary ones. The same is true of the non-zeros, for that matter. Not\nthat it matters, adding zero to the right of any number doesn't add\nsignificance,\nsomething they pound into people's heads in the physical sciences.\n\nDoing a type conversion from float4 to float8 is in general not a safe thing\nto do, because you only can depend on 24 bits of mantissa significance \nafterwards anyway. One such conversion will propagate that lesser\nsignificance\nall throughout the expressions using it. 
Take great care when you do this.\n\nAs Tom pointed out you're getting 8 digits of decimal significance in \nyour example (10.100000) due to the particular number involved. You can\nonly expect 24/log2(10) digits, which as he points out is just 7 digits plus\nchange.\n\nThe basic problem is that we evolved with 10 fingers, rather than 8 or 16 :)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 07 Aug 2000 11:29:50 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] Trouble with float4 after\n\tupgrading from 6.5.3 to 7.0.2" }, { "msg_contents": "<[email protected]> writes:\n> Is there a reason we can't perform the conversion and then copy the\n> low-order bits manually, with some bit-shifting and masking?\n\n*What* low-order bits? The fundamental problem is we don't have 'em.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Aug 2000 14:57:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Trouble with float4 afterupgrading from 6.5.3 to\n\t7.0.2" }, { "msg_contents": "At 12:53 7/08/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> Now I understand, but it doesn't quite make sense given what was displayed.\n>> The float4 value is *displayed* as 10.1, not 10.1000001, so I had assumed\n>> that there was a level of either accuracy or display rouding happening.\n>\n>I suppose we could implement the conversion as \"float8in(float4out(x))\"\n>instead of \"(double) x\" but it'd be several orders of magnitude slower,\n>as well as being *less* useful to those who know what they're doing with\n>float math (since the result would actually be a less-precise conversion\n>except in cases where the intended value has a short decimal\n>representation).\n\nWould I be right in saying that 
\"those who know what they're doing with\nfloat math\" would totally avoid mixing float4 & float8? If so, then ISTM\nthat any changes to float4/8 conversions should not affect them.\n\nThere seem to be a few choices:\n\n- go with zero-extending the bits. This is easy, and what would be expected\nfor normal float ops, at least by people who understand float implementations.\n\n- do an intermediate text or numeric conversion. This will produce more\nexpected results, but at the expense of speed. If people complain about\nspeed, then they can change all float types to matching precision, or use\nnumeric data types.\n\n- take the code from 'printf' or 'numeric' and do the appropriate\nbit-munging to get the value to use in conversion. No idea if this would\nwork, but it is probably better than doing a text conversion since we won't\nbe at the mercy of the occasional C library that produces 10.1000001.\n\nWould it be worth having some kind of DB setting for how it handles\nfloat4/8 conversion? Or is it just too much work, when just using all\nfloat8 or numeric is an acceptable workaround?\n\nDo you know how fast 'numeric' is?\n\n\n>That will just cause\n>failures in other situations.\n\nIf there are genuine failures that would be introduced, then clearly it's a\nbad idea. But, since it will only affect people who compare float4/8, it\ndoesn't seem too likely to produce worse failures than the change you have\nalready made. I ask this mainly out of curiosity - I assume there are more\naspects to this issue that I have missed...\n\n\nBye for now,\n\nPhilip.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 08 Aug 2000 13:37:55 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Trouble with float4 afterupgrading\n\tfrom 6.5.3 to 7.0.2" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> A general comment on your table design though: anyone who expects exact\n> equality tests on fractional float values to succeed is going to get\n> burnt sooner or later. If you must use this style of coding then\n> I recommend using numeric fields not float fields, and certainly not\n> float4 fields.\n> \n> regards, tom lane\n\nI try 'numeric', 'decimal', and 'float8' types and only 'float8' works. Both 'decimal' and 'numeric' failed (as 'float4' did) with error message: \"Unable to identify an operator '=' for type numeric and 'float8' You will have to retype this query using an explicite cast\".\n\nAnd though my problem is solved, thank you for the help, I thought perhaps this information will be usefull for you.\n\nMikhail.\n\n", "msg_date": "Tue, 8 Aug 2000 09:59:49 +0600", "msg_from": "\"Romanenko Mikhail\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trouble with float4 after upgrading from 6.5.3 to 7.0.2 " }, { "msg_contents": "At 01:37 PM 8/8/00 +1000, Philip Warner wrote:\n\n>- do an intermediate text or numeric conversion. This will produce more\n>expected results\n\nBy who? I'm serious. I sure wouldn't. I can't think of any language\nimplementation that does this. The standard approach has the advantage\nof maintaining a defined significance. The approach you suggest doesn't,\nyou're actually losing significance. 
It gives the illusion of increasing\nfor the particular example you've chosen, but it is nothing but illusion.\n\n>Would it be worth having some kind of DB setting for how it handles\n>float4/8 conversion?\n\nUse type numeric when you need precise decimal results. Your suggested\nkludge won't give you what you want.\n\n>Do you know how fast 'numeric' is?\n\nNot as fast as float by any means, but there's a reason why they exist \nin all languages which include the financial sphere in their presumed\napplication space.\n\nThe simplest thing is to realize that using float4 leaves you with\njust over 7 significant digits, and to only print out 7 digits.\nThen you'll get the answer you expect (10.100000).\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 08 Aug 2000 05:50:00 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] Trouble with float4 afterupgrading\n\tfrom 6.5.3 to 7.0.2" }, { "msg_contents": "At 05:50 8/08/00 -0700, Don Baccus wrote:\n>\n>The simplest thing is to realize that using float4 leaves you with\n>just over 7 significant digits, and to only print out 7 digits.\n>Then you'll get the answer you expect (10.100000).\n>\n\nYou may have missed the point; my suggestions are only aimed at changing\nthe results of float4/float8 conversions & comparisons. \n\nMy (very vague) recollections of this stuff is that the machine\nrepresentation is only guaranteed to be within a certain machine/language\naccuracy, so the stored value is within +/-(machine error) of the 'real\nvalue'. Further, my recollection is that one or more bits are usually used\nto provide rounding information so that, eg., the 7 digit representations\nare consistent. \n\nGiven this, I have assumed that printf etc use these least significant bits\nto determine the 'correct' representation. 
The idea is to do exactly the\nsame in converting float4 to float8, so that:\n\n '4.1'::float4 = '4.1'::float8\n\nwill be true.\n\nMaybe my recollection is false...\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 08 Aug 2000 23:53:18 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Trouble with float4 afterupgrading\n\tfrom 6.5.3 to 7.0.2" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> Given this, I have assumed that printf etc use these least significant bits\n> to determine the 'correct' representation.\n\nNo. What float4-to-text really does is *discard* information, by\nrounding off the printed result to only 7 digits (when there are\nactually 7-and-change in there). 
This means values that are actually\ndistinct float4 values may get printed as the same thing:\n\nregression=# select 1.234567 :: float4;\n ?column?\n----------\n 1.23457\n(1 row)\n\nregression=# select 1.234568 :: float4;\n ?column?\n----------\n 1.23457\n(1 row)\n\nregression=# select 1.234567 :: float4 = 1.234568 :: float4;\n ?column?\n----------\n f\n(1 row)\n\nregression=# select 1.234567 :: float4 - 1.234568 :: float4;\n ?column?\n--------------\n -9.53674e-07\n(1 row)\n\nI don't much care for this behavior (since it means dump and reload of\nfloat columns is lossy), and I certainly won't hold still for\nintroducing it into other operations on floats.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Aug 2000 10:04:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Trouble with float4 afterupgrading from 6.5.3 to\n\t7.0.2" }, { "msg_contents": "At 10:04 AM 8/8/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> Given this, I have assumed that printf etc use these least significant bits\n>> to determine the 'correct' representation.\n>\n>No. What float4-to-text really does is *discard* information, by\n>rounding off the printed result to only 7 digits (when there are\n>actually 7-and-change in there).\n\nWhich is the standard approach for such conversion routines. If you\nkeep generating digits they're just garbage anyway.\n\nAs far as the rest, Phil, it is true that well-designed floating-point\nhardware such as that which follows the IEEE spec (well, at least\nbetter-designed compared to most of its predecessors) strictly specifies\nhow extra rounding information is to be used during various mathematical\noperations (add, multiply, etc). This is done so that error due to rounding\ncan be strictly bounded. The result (for float4) is a 24-bit mantissa\nwith strictly defined significance.\n\nHowever, the float4 itself once stored only consists of that 24-bit \nmantissa. 
There's no way to know the history of how that 24th bit\nwas generated, i.e. whether all the bits to the right were exactly\nzero or whether it was the result of rounding (or truncation if the\nuser specified it and the hardware supports it). \n\nKludging conversion by using decimal conversion will simply lose\nsignificance. In your 10.1 case you'll be happy because that 24th\nbit becomes zero.\n\nAll you've accomplished, though, is to throw away (at least) one bit.\nYour float8 now has no more than 23 bits of significance rather than\n24. Repeat this process a few times and you could store the result\nin a boolean, in terms of the bits you could guarantee to be \nsignificant ...\n\n>I don't much care for this behavior (since it means dump and reload of\n>float columns is lossy),\n\nA good reason for binary backup programs!\n\n> and I certainly won't hold still for\n>introducing it into other operations on floats.\n\nNo, it flies in the face of not only convention, but a lot of investigation\ninto how to implement floating point arithmetic in a way that's useful\nto those who have to depend on the results having mathematically definable\nerror bounds.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 08 Aug 2000 07:21:08 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] Trouble with float4 afterupgrading\n\tfrom 6.5.3 to 7.0.2" }, { "msg_contents": "At 10:04 8/08/00 -0400, Tom Lane wrote:\n>\n>No. What float4-to-text really does is *discard* information, by\n>rounding off the printed result to only 7 digits (when there are\n>actually 7-and-change in there). This means values that are actually\n>distinct float4 values may get printed as the same thing:\n>\n\nThanks guys for some remarkably patient explanations. 
I now know more than\nI want to know about float values.\n\n\n>I don't much care for this behavior (since it means dump and reload of\n>float columns is lossy), and I certainly won't hold still for\n>introducing it into other operations on floats.\n\nThis makes me think that some kind of binary dump in pg_dump is probably\nnot a bad idea. Has anybody looked at doing a cross-platform binary COPY?\nOr some other way of representing base types - we have <type>in/out maybe\n<type>exp/imp (export/import) might be useful to get a portable, lossless\nrepresentation.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 09 Aug 2000 01:40:16 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Trouble with float4 afterupgrading\n\tfrom 6.5.3 to 7.0.2" }, { "msg_contents": "At 01:40 AM 8/9/00 +1000, Philip Warner wrote:\n\n>This makes me think that some kind of binary dump in pg_dump is probably\n>not a bad idea. 
Has anybody looked at doing a cross-platform binary COPY?\n>Or some other way of representing base types - we have <type>in/out maybe\n><type>exp/imp (export/import) might be useful to get a portable, lossless\n>representation.\n\nAnother way to do it is to dump/restore floats in hex, maintaining the\nactual binary values.\n\nConversion to hex, unlike conversion to decimal, is exact (16 is a power\nof 2 while 10 is not, to add to your \"more knowledge than you want\" about\nfloats!)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 08 Aug 2000 10:27:53 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] Trouble with float4 afterupgrading\n\tfrom 6.5.3 to 7.0.2" } ]
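A footnote on the hex idea in the message above: dumping a float4's raw bytes as hex text round-trips exactly, where 7-digit decimal does not. A minimal sketch (Python for illustration; the helper names are hypothetical, this is not how pg_dump works today):

```python
import struct

def float4_to_hex(x):
    # The four raw bytes of a float4 rendered as hex text: exact, no rounding.
    return struct.pack('<f', x).hex()

def hex_to_float4(s):
    return struct.unpack('<f', bytes.fromhex(s))[0]

v = struct.unpack('<f', struct.pack('<f', 10.1))[0]  # a genuine float4 value
assert hex_to_float4(float4_to_hex(v)) == v          # bit-exact round trip
```

(A portable dump would still need to fix the byte order; little-endian is assumed above.)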
[ { "msg_contents": "I've updated the LIKE code to make it more SQL9x compliant. I've left in\nthe \"permanent backslash\" escape character, but I would like to remove\nit now.\n\nHere's why:\n\nUsually, we would want to preserve the backward compatibility for a\nrelease or so. But in this case, we have to choose backward\ncompatibility or SQL9x compliance. I'd rather move toward compliance and\n(in this case) a richer feature set. If I leave in the backslash, then\nyou can't use SQL9x syntax to specify a pattern match which has a\nliteral backslash in it. So the \"one release grace period\" means that we\nhave one more release which does not support the full SQL92 syntax for\nthis feature.\n\nIf I remove the backslash feature, then instead of matching a literal\npercent sign (\"%\") like this:\n\n ... 'hi%there' LIKE 'hi\\%there' ...\n\nyou would write\n\n ... 'hi%there' LIKE 'hi\\%there' ESCAPE '\\' ...\n\nor of course you could specify another escape character. afaik there is\nno default explicit escape character in SQL99.\n\nComments?\n\n - Thomas\n", "msg_date": "Sun, 06 Aug 2000 23:59:30 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "LIKE pattern matching" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> you would write\n\n> ... 'hi%there' LIKE 'hi\\%there' ESCAPE '\\' ...\n\n> or of course you could specify another escape character. afaik there is\n> no default explicit escape character in SQL99.\n\nI thought the agreement was to assume default ESCAPE '\\' (or really\nESCAPE '\\\\', unless you are proposing to break ALL Postgres applications\nrather than just all the ones that use LIKE?).\n\nTwo points here:\n\n1. I do not think it's acceptable to drop the backslash-quoting behavior\nwith no notice.\n\n2. 
It's not clear to me that the SQL default of \"no quote character\" is\nsuperior to having a default quote character, and therefore I'd actually\nargue that we should NEVER go to 100% SQL-and-nothing-but semantics on\nthis point.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Aug 2000 20:36:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE pattern matching " }, { "msg_contents": "Tom Lane wrote:\n> > ... 'hi%there' LIKE 'hi\\%there' ESCAPE '\\' ...\n> > or of course you could specify another escape character. afaik there is\n> > no default explicit escape character in SQL99.\n> I thought the agreement was to assume default ESCAPE '\\' (or really\n> ESCAPE '\\\\', unless you are proposing to break ALL Postgres applications\n> rather than just all the ones that use LIKE?).\n\nNo, my proposal *only* affects the internal workings of the LIKE support\ncode, *not* the other backslashing which happens at the parser. That is\nanother can of worms as you point out. But...\n\n> 1. I do not think it's acceptable to drop the backslash-quoting behavior\n> with no notice.\n\nNot all quoting behavior, as noted above.\n\n> 2. It's not clear to me that the SQL default of \"no quote character\" is\n> superior to having a default quote character, and therefore I'd actually\n> argue that we should NEVER go to 100% SQL-and-nothing-but semantics on\n> this point.\n\nFor the LIKE constructs, this isn't true. And you point out something\ninteresting which I hadn't noticed: to get the backslash quoting\nbehavior which was implemented in the LIKE code you actually had to use\n*two* backslashes. Yuck.\n\nAnyway, the point is that the effects of this proposed change are\nlimited to internal LIKE behavior only, *and* will give us richer and\nmore consistent features. 
istm that this is to be preferred over some\n\"halfway there\" implementation which isn't exactly backward compatible\nand isn't completely standards compliant.\n\nThat said, it is trivial to clean up the internal code as I propose but\nto *also* support the default backslash (not SQL9x compliant, but what\nthe heck ;) by simply passing the right parameter to the new \"two\nargument\" like() support routines. That parameter could be set back to\nNULL after the next release to get us back to SQL9x compliance.\n\nOh, and I seem to have not committed the new strings regression test\noutput, but will do so soon.\n\n - Thomas\n", "msg_date": "Mon, 07 Aug 2000 01:03:33 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIKE pattern matching" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> That said, it is trivial to clean up the internal code as I propose but\n> to *also* support the default backslash (not SQL9x compliant, but what\n> the heck ;) by simply passing the right parameter to the new \"two\n> argument\" like() support routines. That parameter could be set back to\n> NULL after the next release to get us back to SQL9x compliance.\n\nSure. I'm merely arguing that the default behavior needs to be to treat\nbackslash as escape by default for at least one more release. You need\nto give people warning and time to update their applications to say\n\"LIKE ... ESCAPE '\\\\'\", if that's the behavior they want to have going\nforward.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Aug 2000 12:35:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE pattern matching " }, { "msg_contents": "> > That said, it is trivial to clean up the internal code as I propose but\n> > to *also* support the default backslash (not SQL9x compliant, but what\n> > the heck ;) by simply passing the right parameter to the new \"two\n> > argument\" like() support routines. 
That parameter could be set back to\n> > NULL after the next release to get us back to SQL9x compliance.\n> Sure. I'm merely arguing that the default behavior needs to be to treat\n> backslash as escape by default for at least one more release. You need\n> to give people warning and time to update their applications to say\n> \"LIKE ... ESCAPE '\\\\'\", if that's the behavior they want to have going\n> forward.\n\nOK. I was worried that leaving in the explicit \"escape code\" in the\nroutines will lead to bad behavior wrt both old releases *and* SQL9x.\nBut providing the default argument while still cleaning up the internal\ncode probably does The Right Thing.\n\n - Thomas\n", "msg_date": "Mon, 07 Aug 2000 17:05:34 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIKE pattern matching" } ]
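An editorial aside on the semantics discussed in this thread: the LIKE-with-ESCAPE rules can be pinned down with a tiny matcher. This is written from the SQL92 description, not from like.c, and is only a sketch:

```python
def sql_like(s, pattern, escape=None):
    """Minimal LIKE: '%' matches any run, '_' any single character;
    the character after the escape character matches literally."""
    def match(si, pi):
        if pi == len(pattern):
            return si == len(s)
        c = pattern[pi]
        if escape is not None and c == escape:
            if pi + 1 == len(pattern):
                raise ValueError("pattern must not end with the escape character")
            lit = pattern[pi + 1]
            return si < len(s) and s[si] == lit and match(si + 1, pi + 2)
        if c == '%':
            # try every possible length for the wildcard
            return any(match(i, pi + 1) for i in range(si, len(s) + 1))
        if c == '_':
            return si < len(s) and match(si + 1, pi + 1)
        return si < len(s) and s[si] == c and match(si + 1, pi + 1)
    return match(0, 0)
```

With escape=None there is no escape character (the SQL9x default under discussion); passing escape='\\' gives the historical backslash behavior the thread wants to keep for one more release.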
[ { "msg_contents": "1. like.c doesn't compile cleanly anymore:\n\n$ make\ngcc -c -I../../../../src/include -O1 -Wall -Wmissing-prototypes -Wmissing-declarations -g -o like.o like.c\nlike.c:143: warning: no previous prototype for `inamelike'\nlike.c:155: warning: no previous prototype for `inamenlike'\nlike.c:167: warning: no previous prototype for `inamelike_escape'\nlike.c:180: warning: no previous prototype for `inamenlike_escape'\nlike.c:193: warning: no previous prototype for `itextlike'\nlike.c:205: warning: no previous prototype for `itextnlike'\nlike.c:217: warning: no previous prototype for `itextlike_escape'\nlike.c:230: warning: no previous prototype for `itextnlike_escape'\nlike.c: In function `MatchTextLower':\nlike.c:401: warning: implicit declaration of function `tolower'\n\n2. The strings regress test fails. I think you forgot to commit the\nexpected file to go with the new test file?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Aug 2000 20:40:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "LIKE gripes" }, { "msg_contents": "> 1. like.c doesn't compile cleanly anymore:\n> $ make\n> gcc -c -I../../../../src/include -O1 -Wall -Wmissing-prototypes -Wmissing-declarations -g -o like.o like.c\n> like.c:143: warning: no previous prototype for `inamelike'\n> like.c:155: warning: no previous prototype for `inamenlike'\n> like.c:167: warning: no previous prototype for `inamelike_escape'\n> like.c:180: warning: no previous prototype for `inamenlike_escape'\n> like.c:193: warning: no previous prototype for `itextlike'\n> like.c:205: warning: no previous prototype for `itextnlike'\n> like.c:217: warning: no previous prototype for `itextlike_escape'\n> like.c:230: warning: no previous prototype for `itextnlike_escape'\n> like.c: In function `MatchTextLower':\n> like.c:401: warning: implicit declaration of function `tolower'\n\nOK, will look at it. The first ones I see here; the second I'm not sure\nI do. 
Will see what other files have for #include's.\n\n> 2. The strings regress test fails. I think you forgot to commit the\n> expected file to go with the new test file?\n\nYup. I just noticed that here after updating the tree.\n\n - Thomas\n", "msg_date": "Mon, 07 Aug 2000 01:06:41 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE gripes" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> like.c: In function `MatchTextLower':\n>> like.c:401: warning: implicit declaration of function `tolower'\n\n> OK, will look at it. The first ones I see here; the second I'm not sure\n> I do. Will see what other files have for #include's.\n\nI think <ctype.h> is what's needed here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Aug 2000 21:12:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIKE gripes " }, { "msg_contents": "> > OK, will look at it. The first ones I see here; the second I'm not sure\n> > I do. Will see what other files have for #include's.\n> I think <ctype.h> is what's needed here.\n\nOK, I've updated builtins.h and like.c to eliminate compiler warnings,\nand updated the strings.out regression test result file. I've also\nupdated the like support code to\n\n1) eliminate a gratuitous case statement (actually, two) in favor of an\nif/elseif/else construct.\n\n2) eliminate the extraneous backslash quoting code for like() functions\nonly. 
As I mentioned in the cvs note, this behavior can be put back in\nif we really want it by replacing a NULL with a \"\\\" in two function\ncalls.\n\n - Thomas\n", "msg_date": "Mon, 07 Aug 2000 03:34:24 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE gripes" }, { "msg_contents": "Where has MULTIBYTE Stuff in like.c gone ?\n\nHiroshi Inoue\n\n> -----Original Message-----\n> From: Thomas Lockhart\n> Sent: Monday, August 07, 2000 10:07 AM\n> To: Tom Lane\n> Cc: [email protected]\n> Subject: [HACKERS] Re: LIKE gripes\n> \n> \n> > 1. like.c doesn't compile cleanly anymore:\n> > $ make\n> > gcc -c -I../../../../src/include -O1 -Wall \n> -Wmissing-prototypes -Wmissing-declarations -g -o like.o like.c\n> > like.c:143: warning: no previous prototype for `inamelike'\n> > like.c:155: warning: no previous prototype for `inamenlike'\n> > like.c:167: warning: no previous prototype for `inamelike_escape'\n> > like.c:180: warning: no previous prototype for `inamenlike_escape'\n> > like.c:193: warning: no previous prototype for `itextlike'\n> > like.c:205: warning: no previous prototype for `itextnlike'\n> > like.c:217: warning: no previous prototype for `itextlike_escape'\n> > like.c:230: warning: no previous prototype for `itextnlike_escape'\n> > like.c: In function `MatchTextLower':\n> > like.c:401: warning: implicit declaration of function `tolower'\n> \n> OK, will look at it. The first ones I see here; the second I'm not sure\n> I do. Will see what other files have for #include's.\n> \n> > 2. The strings regress test fails. I think you forgot to commit the\n> > expected file to go with the new test file?\n> \n> Yup. 
I just noticed that here after updating the tree.\n> \n> - Thomas\n> \n", "msg_date": "Tue, 8 Aug 2000 16:40:28 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Re: LIKE gripes" }, { "msg_contents": "> Where has MULTIBYTE Stuff in like.c gone ?\n\nUh, I was wondering where it was in the first place! Will fix it asap...\n\nThere was some string copying stuff in a middle layer of the like()\ncode, but I had thought that it was there only to get a null-terminated\nstring. When I rewrote the code to eliminate the need for null\ntermination (by using the length attribute of the text data type) then\nthe need for copying went away. Or so I thought :(\n\nThe other piece to the puzzle is that the lowest-level like() support\nroutine traversed the strings using the increment operator, and so I\ndidn't understand that there was any MB support in there. I now see that\n*all* of these strings get stuffed into unsigned int arrays during\ncopying; I had (sort of) understood some of the encoding schemes (most\nuse a combination of one to three byte sequences for each character) and\ndidn't realize that this normalization was being done on the fly. \n\nSo, this answers some questions I have related to implementing character\nsets:\n\n1) For each character set, we would need to provide operators for \"next\ncharacter\" and for boolean comparisons for each character set. Why don't\nwe have those now? Answer: because everything is getting promoted to a\n32-bit internal encoding every time a comparison or traversal is\nrequired.\n\n2) For each character set, we would need to provide conversion functions\nto other \"compatible\" character sets, or to a character \"superset\". Why\ndon't we have those conversion functions? Answer: we do! 
There is an\ninternal 32-bit encoding within which all comparisons are done.\n\nAnyway, I think it will be pretty easy to put the MB stuff back in, by\n#ifdef'ing some string copying inside each of the routines (such as\nnamelike()). The underlying routine no longer requires a null-terminated\nstring (using explicit lengths instead) so I'll generate those lengths\nin the same place unless they are already provided by the char->int MB\nsupport code.\n\nIn the future, I'd like to see us use alternate encodings as-is, or as a\ncommon set like UniCode (16 bits wide afaik) rather than having to do\nthis widening to 32 bits on the fly. Then, each supported character set\ncan be efficiently manipulated internally, and only converted to another\nencoding when mixing with another character set.\n\nAny and all advice welcome and accepted (though \"keep your hands off the\nMB code!\" seems a bit too late ;)\n\nSorry for the shake-up...\n\n - Thomas\n", "msg_date": "Tue, 08 Aug 2000 15:19:05 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: LIKE gripes" }, { "msg_contents": "> > Where has MULTIBYTE Stuff in like.c gone ?\n\nI didn't know that:-)\n\n> Uh, I was wondering where it was in the first place! Will fix it asap...\n> \n> There was some string copying stuff in a middle layer of the like()\n> code, but I had thought that it was there only to get a null-terminated\n> string. When I rewrote the code to eliminate the need for null\n> termination (by using the length attribute of the text data type) then\n> the need for copying went away. Or so I thought :(\n> \n> The other piece to the puzzle is that the lowest-level like() support\n> routine traversed the strings using the increment operator, and so I\n> didn't understand that there was any MB support in there. 
I now see that\n> *all* of these strings get stuffed into unsigned int arrays during\n> copying; I had (sort of) understood some of the encoding schemes (most\n> use a combination of one to three byte sequences for each character) and\n> didn't realize that this normalization was being done on the fly. \n> \n> So, this answers some questions I have related to implementing character\n> sets:\n>\n> 1) For each character set, we would need to provide operators for \"next\n> character\" and for boolean comparisons for each character set. Why don't\n> we have those now? Answer: because everything is getting promoted to a\n> 32-bit internal encoding every time a comparison or traversal is\n> required.\n\nMB has something similar to the \"next character\" fucntion called\npg_encoding_mblen. It tells the length of the MB word pointed to so\nthat you could move forward to the next MB word etc.\n\n> 2) For each character set, we would need to provide conversion functions\n> to other \"compatible\" character sets, or to a character \"superset\". Why\n> don't we have those conversion functions? Answer: we do! There is an\n> internal 32-bit encoding within which all comparisons are done.\n\nRight.\n\n> Anyway, I think it will be pretty easy to put the MB stuff back in, by\n> #ifdef'ing some string copying inside each of the routines (such as\n> namelike()). The underlying routine no longer requires a null-terminated\n> string (using explicit lengths instead) so I'll generate those lengths\n> in the same place unless they are already provided by the char->int MB\n> support code.\n\nI have not taken a look at your new like code, but I guess you could use\n\n\t\tpg_mbstrlen(const unsigned char *mbstr)\n\nIt tells the number of words in mbstr (however mbstr needs to null\nterminated).\n\n> In the future, I'd like to see us use alternate encodings as-is, or as a\n> common set like UniCode (16 bits wide afaik) rather than having to do\n> this widening to 32 bits on the fly. 
Then, each supported character set\n> can be efficiently manipulated internally, and only converted to another\n> encoding when mixing with another character set.\n\nIf you are planning to convert everything to Unicode or whatever\nbefore storing them into the disk, I'd like to object the idea. It's\nnot only the waste of disk space but will bring serious performance\ndegration. For example, each ISO 8859 byte occupies 2 bytes after\nconverted to Unicode. I dont't think this two times disk space\nconsuming is acceptable.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 09 Aug 2000 21:45:13 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: LIKE gripes" }, { "msg_contents": "> MB has something similar to the \"next character\" fucntion called\n> pg_encoding_mblen. It tells the length of the MB word pointed to so\n> that you could move forward to the next MB word etc.\n> > 2) For each character set, we would need to provide conversion functions\n> > to other \"compatible\" character sets, or to a character \"superset\". Why\n> > don't we have those conversion functions? Answer: we do! There is an\n> > internal 32-bit encoding within which all comparisons are done.\n> Right.\n\nOK. As you know, I have an interest in this, but little knowledge ;)\n\n> > Anyway, I think it will be pretty easy to put the MB stuff back in, by\n> > #ifdef'ing some string copying inside each of the routines (such as\n> > namelike()). 
The underlying routine no longer requires a null-terminated\n> > string (using explicit lengths instead) so I'll generate those lengths\n> > in the same place unless they are already provided by the char->int MB\n> > support code.\n> I have not taken a look at your new like code, but I guess you could use\n> pg_mbstrlen(const unsigned char *mbstr)\n> It tells the number of words in mbstr (however mbstr needs to null\n> terminated).\n\nTo get the length I'm now just running through the output string looking\nfor a zero value. This should be more efficient than reading the\noriginal string twice; it might be nice if the conversion routines\n(which now return nothing) returned the actual number of pg_wchars in\nthe output.\n\nThe original like() code allocates a pg_wchar array dimensioned by the\nnumber of bytes in the input string (which happens to be the absolute\nupper limit for the size of the 32-bit-encoded string). Worst case, this\nresults in a 4-1 expansion of memory, and always requires a\npalloc()/pfree() for each call to the comparison routines.\n\nI think I have a solution for the current code; could someone test its\nbehavior with MB enabled? It is now committed to the source tree; I know\nit compiles, but afaik am not equipped to test it :(\n\n> > In the future, I'd like to see us use alternate encodings as-is, or as a\n> > common set like UniCode (16 bits wide afaik) rather than having to do\n> > this widening to 32 bits on the fly. Then, each supported character set\n> > can be efficiently manipulated internally, and only converted to another\n> > encoding when mixing with another character set.\n> If you are planning to convert everything to Unicode or whatever\n> before storing them into the disk, I'd like to object the idea. It's\n> not only the waste of disk space but will bring serious performance\n> degration. For example, each ISO 8859 byte occupies 2 bytes after\n> converted to Unicode. 
I dont't think this two times disk space\n> consuming is acceptable.\n\nI am not planning on converting everything to UniCode for disk storage.\nWhat I would *like* to do is the following:\n\n1) support each encoding \"natively\", using Postgres' type system to\ndistinguish between them. This would allow strings with the same\nencodings to be used without conversion, and would both minimize storage\nrequirements *and* run-time conversion costs.\n\n2) support conversions between encodings, again using Postgres' type\nsystem to suggest the appropriate conversion routines. This would allow\nstrings with different but compatible encodings to be mixed, but\nrequires internal conversions *only* if someone is mixing encodings\ninside their database.\n\n3) one of the supported encodings might be Unicode, and if one chooses,\nthat could be used for on-disk storage. Same with the other existing\nencodings.\n\n4) this difference approach to encoding support can coexist with the\nexisting MB support since (1) - (3) is done without mention of existing\nMB internal features. So you can choose which scheme to use, and can\ntest the new scheme without breaking the existing one.\n\nimho this comes closer to one of the important goals of maximizing\nperformance for internal operations (since there is less internal string\ncopying/conversion required), even at the expense of extra conversion\ncost when doing input/output (a good trade since *usually* there are\nlots of internal operations to a few i/o operations).\n\nComments?\n\n - Thomas\n", "msg_date": "Wed, 09 Aug 2000 14:28:26 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: LIKE gripes" }, { "msg_contents": "> To get the length I'm now just running through the output string looking\n> for a zero value. 
This should be more efficient than reading the\n> original string twice; it might be nice if the conversion routines\n> (which now return nothing) returned the actual number of pg_wchars in\n> the output.\n\nSounds resonable. I'm going to enhance them as you suggested.\n\n> The original like() code allocates a pg_wchar array dimensioned by the\n> number of bytes in the input string (which happens to be the absolute\n> upper limit for the size of the 32-bit-encoded string). Worst case, this\n> results in a 4-1 expansion of memory, and always requires a\n> palloc()/pfree() for each call to the comparison routines.\n\nRight.\n\nThere would be another approach to avoid use such that extra memory\nspace. However I am not sure it worth to implement right now.\n\n> I think I have a solution for the current code; could someone test its\n> behavior with MB enabled? It is now committed to the source tree; I know\n> it compiles, but afaik am not equipped to test it :(\n\nIt passed the MB test, but fails the string test. Yes, I know it fails\nbecasue ILIKE for MB is not implemented (yet). I'm looking forward to\nimplement the missing part. Is it ok for you, Thomas?\n\n> I am not planning on converting everything to UniCode for disk storage.\n\nGlad to hear that.\n\n> What I would *like* to do is the following:\n> \n> 1) support each encoding \"natively\", using Postgres' type system to\n> distinguish between them. This would allow strings with the same\n> encodings to be used without conversion, and would both minimize storage\n> requirements *and* run-time conversion costs.\n> \n> 2) support conversions between encodings, again using Postgres' type\n> system to suggest the appropriate conversion routines. 
This would allow\n> strings with different but compatible encodings to be mixed, but\n> requires internal conversions *only* if someone is mixing encodings\n> inside their database.\n> \n> 3) one of the supported encodings might be Unicode, and if one chooses,\n> that could be used for on-disk storage. Same with the other existing\n> encodings.\n> \n> 4) this difference approach to encoding support can coexist with the\n> existing MB support since (1) - (3) is done without mention of existing\n> MB internal features. So you can choose which scheme to use, and can\n> test the new scheme without breaking the existing one.\n> \n> imho this comes closer to one of the important goals of maximizing\n> performance for internal operations (since there is less internal string\n> copying/conversion required), even at the expense of extra conversion\n> cost when doing input/output (a good trade since *usually* there are\n> lots of internal operations to a few i/o operations).\n> \n> Comments?\n\nPlease note that existing MB implementation does not need such an\nextra conversion cost except some MB-aware-functions(text_length\netc.), regex, like and the input/output stage. Also MB stores native\nencodings 'as is' onto the disk.\n\nAnyway, it looks like MB would eventually be merged into/deplicated by\nyour new implementaion of multiple encodings support.\n\nBTW, Thomas, do you have a plan to support collation functions?\n--\nTatsuo Ishii\n\n", "msg_date": "Fri, 11 Aug 2000 17:13:47 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: LIKE gripes" }, { "msg_contents": "> > I think I have a solution for the current code; could someone test its\n> > behavior with MB enabled? It is now committed to the source tree; I know\n> > it compiles, but afaik am not equipped to test it :(\n> It passed the MB test, but fails the string test. Yes, I know it fails\n> becasue ILIKE for MB is not implemented (yet). 
I'm looking forward to\n> implement the missing part. Is it ok for you, Thomas?\n\nWhew! I'm glad \"fails the string test\" is because of the ILIKE/tolower()\nissue; I was afraid you would say \"... because Thomas' bad code dumps\ncore...\" :)\n\nYes, feel free to implement the missing parts. I'm not even sure how to\ndo it! Do you think it would be best in the meantime to disable the\nILIKE tests, or perhaps to separate that out into a different test?\n\n> Please note that existing MB implementation does not need such an\n> extra conversion cost except some MB-aware-functions(text_length\n> etc.), regex, like and the input/output stage. Also MB stores native\n> encodings 'as is' onto the disk.\n\nYes. I am probably getting a skewed view of MB since the LIKE code is an\nedge case which illustrates the difficulties in handling character sets\nin general no matter what solution is used.\n\n> Anyway, it looks like MB would eventually be merged into/deplicated by\n> your new implementaion of multiple encodings support.\n\nI've started writing up a description of my plans (based on our previous\ndiscussions), and as I do so I appreciate more and more your current\nsolution ;) imho you have solved several issues such as storage format,\nclient/server communication, and mixed-encoding comparison and\nmanipulation which would all need to be solved by a \"new\nimplementation\".\n\nMy current thought is to leave MB intact, and to start implementing\n\"character sets\" as distinct types (I know you have said that this is a\nlot of work, and I agree that is true for the complete set). 
Once I have\ndone one or a few character sets (perhaps using a Latin subset of\nUnicode so I can test it by converting between ASCII and Unicode using\ncharacter sets I know how to read ;) then we can start implementing a\n\"complete solution\" for those character sets which includes character\nand string comparison building blocks like \"<\", \">\", and \"tolower()\",\nfull comparison functions, and conversion routines between different\ncharacter sets.\n\nBut that by itself does not solve, for example, client/server encoding\nissues, so let's think about that again once we have some \"type-full\"\ncharacter sets to play with. The default solution will of course use MB\nto handle this.\n\n> BTW, Thomas, do you have a plan to support collation functions?\n\nYes, that is something that I hope will come out naturally from a\ncombination of SQL9x language features and use of the type system to\nhandle character sets. Then, for example (hmm, examples might be better\nin Japanese since you have such a rich mix of encodings ;),\n\n CREATE TABLE t1 (name TEXT COLLATE francais);\n\nwill (or might ;) result in using the \"francais\" data type for the name\ncolumn.\n\n SELECT * FROM t1 WHERE name < _FRANCAIS 'merci';\n\nwill use the \"francais\" data type for the string literal. And\n\n CREATE TABLE t1 (name VARCHAR(10) CHARACTER SET latin1 COLLATE\nfrancais);\n\nwill (might?) use, say, the \"latin1_francais\" data type. Each of these\ndata types will be a loadable module (which could be installed into\ntemplate1 to make them available to every new database), and each can\nreuse underlying support routines to avoid as much duplicate code as\npossible.\n\nMaybe there would be defined a default encoding for a type, say \"latin1\"\nfor \"francais\", so that the backend or some external scripts can help\nset these up. 
There is a good chance we will need (yet another) system\ntable to allow us to tie these types into character sets and collations;\notherwise Postgres might not be able to recognize that a type is\nimplementing these language features and, for example, pg_dump might not\nbe able to reconstruct the correct table creation syntax.\n\nI notice that SQL99 has *a lot* of new specifics on character set\nsupport, which prescribe things like CREATE COLLATION... and DROP\nCOLLATION... This means that there is less thinking involved in the\nsyntax but more work to make those exact commands fit into Postgres.\nSQL92 left most of this as an exercise for the reader. I'd be happier if\nwe knew this stuff *could* be implemented by seeing another DB implement\nit. Are you aware of any that do (besides our own of course)?\n\n - Thomas\n", "msg_date": "Fri, 11 Aug 2000 14:13:59 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: LIKE gripes" }, { "msg_contents": "> > > I think I have a solution for the current code; could someone test its\n> > > behavior with MB enabled? It is now committed to the source tree; I know\n> > > it compiles, but afaik am not equipped to test it :(\n> > It passed the MB test, but fails the string test. Yes, I know it fails\n> > becasue ILIKE for MB is not implemented (yet). I'm looking forward to\n> > implement the missing part. Is it ok for you, Thomas?\n> \n> Whew! I'm glad \"fails the string test\" is because of the ILIKE/tolower()\n> issue; I was afraid you would say \"... because Thomas' bad code dumps\n> core...\" :)\n> \n> Yes, feel free to implement the missing parts. I'm not even sure how to\n> do it! Do you think it would be best in the meantime to disable the\n> ILIKE tests, or perhaps to separate that out into a different test?\n\nDone. 
I have committed changes to like.c.\n\n> > Please note that existing MB implementation does not need such an\n> > extra conversion cost except some MB-aware-functions(text_length\n> > etc.), regex, like and the input/output stage. Also MB stores native\n> > encodings 'as is' onto the disk.\n> \n> Yes. I am probably getting a skewed view of MB since the LIKE code is an\n> edge case which illustrates the difficulties in handling character sets\n> in general no matter what solution is used.\n\nThis time I have slightly modified the way to support MB so that to\neliminate the up-to-4-times-memory-consuming-problem.\n\nThe regression test has passed (including the strings test)\nwith/without MB on Red Hat Linux 5.2. Tests on other platforms are\nwelcome.\n--\nTatsuo Ishii\n\n", "msg_date": "Tue, 22 Aug 2000 15:35:50 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: LIKE gripes" }, { "msg_contents": "> > Yes, feel free to implement the missing parts...\n> Done. I have committed changes to like.c.\n...\n> This time I have slightly modified the way to support MB so that to\n> eliminate the up-to-4-times-memory-consuming-problem.\n\nGreat! btw, would it be OK if I took README.mb, README.locale, and\nREADME.Charsets and consolidated them into a chapter or two in the main\ndocs? istm that they are more appropriate there than in these isolated\nREADME files. I've already gotten a good start on it...\n\nAlso, I propose to consolidate and eliminate README.fsync, which\nduplicates (or will duplicate) info available in the Admin Guide. The\nfact that it hasn't been touched since 1996, and still refers to\nPostgres'95, is a clue that some changes are in order.\n\n - Thomas\n", "msg_date": "Tue, 22 Aug 2000 06:57:51 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE gripes (and charset docs)" }, { "msg_contents": "> Great! 
btw, would it be OK if I took README.mb, README.locale, and\n> README.Charsets and consolidated them into a chapter or two in the main\n> docs? istm that they are more appropriate there than in these isolated\n> README files. I've already gotten a good start on it...\n\nSure, no problem at all for README.mb (Also please feel free to\ncorrect any grammatical mistakes in it). I'm not sure for\nREADME.locale and README.Charsets but I guess Oleg would be glad to\nhear that...\n\n> Also, I propose to consolidate and eliminate README.fsync, which\n> duplicates (or will duplicate) info available in the Admin Guide. The\n> fact that it hasn't been touched since 1996, and still refers to\n> Postgres'95, is a clue that some changes are in order.\n\nAgreed.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 22 Aug 2000 16:22:53 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE gripes (and charset docs)" }, { "msg_contents": "On Tue, 22 Aug 2000, Thomas Lockhart wrote:\n\n> Date: Tue, 22 Aug 2000 06:57:51 +0000\n> From: Thomas Lockhart <[email protected]>\n> To: Tatsuo Ishii <[email protected]>\n> Cc: [email protected], [email protected]\n> Subject: Re: [HACKERS] LIKE gripes (and charset docs)\n> \n> > > Yes, feel free to implement the missing parts...\n> > Done. I have committed changes to like.c.\n> ...\n> > This time I have slightly modified the way to support MB so that to\n> > eliminate the up-to-4-times-memory-consuming-problem.\n> \n> Great! btw, would it be OK if I took README.mb, README.locale, and\n> README.Charsets and consolidated them into a chapter or two in the main\n> docs? istm that they are more appropriate there than in these isolated\n> README files. 
I've already gotten a good start on it...\n> \n\nI think one chapter about internationalization support would be ok.\n\n\tRegards,\n\n\t\tOleg\n\n> Also, I propose to consolidate and eliminate README.fsync, which\n> duplicates (or will duplicate) info available in the Admin Guide. The\n> fact that it hasn't been touched since 1996, and still refers to\n> Postgres'95, is a clue that some changes are in order.\n> \n> - Thomas\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 22 Aug 2000 15:31:13 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE gripes (and charset docs)" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Also, I propose to consolidate and eliminate README.fsync, which\n> duplicates (or will duplicate) info available in the Admin Guide.\n\n?? Peter did that already. README.fsync was \"cvs remove\"d over a\nmonth ago...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 22 Aug 2000 10:14:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIKE gripes (and charset docs) " } ]
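The pg_wchar expansion discussed in this thread — widening a string to fixed 32-bit characters can cost up to 4 bytes per input byte, while the native multibyte encodings stored "as is" stay compact — is easy to illustrate outside the backend. A minimal sketch in Python using only standard codecs; the sample string is just an arbitrary three-character Japanese word:

```python
# Byte cost of the same three-character string under a native multibyte
# encoding (EUC-JP), UTF-8, and a fixed 32-bit-per-character form (the
# worst case a pg_wchar-style widening must allow for).
text = "日本語"

for enc in ("euc_jp", "utf-8", "utf-32-le"):
    encoded = text.encode(enc)
    print(f"{enc:10s} {len(encoded)} bytes for {len(text)} characters")
```

With a 2-byte-per-character native encoding the 32-bit widening doubles the memory; widening a 1-byte encoding quadruples it, which is the 4-1 expansion mentioned above.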
[ { "msg_contents": "At 10:29 6/08/00 -0700, Stephan Szabo wrote:\n>\n>The problem with storing source is that it doesn't\n>get changed when things change. Try altering\n>a column name that has a check constraint, then\n>dump the database.\n\nOr renaming a referenced table - I think the current constraint system will\nhandle this since OIDs don't change.\n\n\n>So, what I was thinking is, that if we have another\n>table to store this kind of constraint info, it\n>should probably store information for all constraints.\n>I was thinking two tables, one (say pg_constraint)\n>which stores basic information about the constraint\n>(what type, the constraint name, primarily constrained\n>table, maybe owner if constraints have owners in SQL)\n>and a source form (see more below).\n\nThis sounds reasonable.\n\n\n>The second table stores references from this constraint.\n>So any table, column, index, etc is stored here.\n>Probably something of the form constraintoid, \n>type of thing being referenced (the oid of the table?),\n>the oid of the referenced thing and a number.\n\nI would prefer to see this generalized: a dependencies table that lists\nboth the referrer OID *and* type, as well as the referenced thing oid &\ntype. This then allows things such as SQL functions to make entries in this\ntable as well as views etc etc. \n\n\n>The number comes in to the source form that's stored.\n>Anywhere that we're referencing something that a name\n>is insufficient for (like a column name or table name)\n>we put something into the source for that says \n>referencing column n of the referenced thing m.\n\nDon't know enough about the internals, but don't we have attr ids for this,\nand/or won't OIDs work in most cases? 
Maybe I'm missing your point.\n\n\n>- There are some problems I see right off both conceptually\n>and implementation, but I thought someone might be able \n>to come up with a better idea once it was presented (even \n>if it's just a \"not worth the effort\" :) )\n\nIt seems to me that:\n\n- 'format_constraint' is a good idea\n- we need the dependency stuff\n- dumping source in canonical form is best put in the backend\n(philosophical point)\n- I presume it's a first part of a full implementation of 'alter table\nadd/drop constraint...'\n\nso I don't think it's a waste of time.\n\n\n>One of the problems I see is that if taken to its end,\n>would you store function oids here? \n\nSounds sensible.\n\n\n>If so, that might\n>make it harder to allow a drop function/create function\n>to ever work transparently in the future.\n\nI *think* it doesn't work now; yes you can drop the function, but AFAIK,\nthe constraint references the old one (at least that's true for normal\ntriggers). What you are proposing makes people aware that they are about to\nbreak more things than they know.\n\n\n>Plus, I'm not even really sure if it would be reasonable\n>to get a source form like I was thinking of for check\n>constraints really.\n\nI suspect it has to depend on how the constraint is actually checked. If\nthey are checked by using table OIDs then you need to store the OID\ndependency and *somehow* reconstruct the source. If they are checked using\ntable names (getting OID each time), then store the name and the raw source\n(maybe). 
COALESCE is translated to CASE in the parser, so a CHECK\nclause that uses COALESCE will never be fully recoverable - although most\npeople would not see this as a problem). This messing around would have to\nbe done in the parser, I would guess. So:\n\n check exists(select * from reftbl where reffld=tbl.origfld)\n\nmight become:\n\n check exists(select * from %%table:<OID>%% where\n%%table-attr:<Table-OID>,<Attd-ID>%% \n = %%table:<OID>%%.%%table-attr:<Table-OID>,<Attd-ID>%%\n\nLooking at this, maybe it's not such a good idea after all...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 07 Aug 2000 11:01:43 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Constraint stuff" }, { "msg_contents": "On Mon, 7 Aug 2000, Philip Warner wrote:\n\n> >The second table stores references from this constraint.\n> >So any table, column, index, etc is stored here.\n> >Probably something of the form constraintoid, \n> >type of thing being referenced (the oid of the table?),\n> >the oid of the referenced thing and a number.\n> \n> I would prefer to see this generalized: a dependencies table that lists\n> both the referrer OID *and* type, as well as the refrerenced thing oid &\n> type. This then allows things such as SQL functions to make entries in this\n> table as well as views etc etc. \n\nThat makes more sense, yes. :) Although not all of those things would\nprobably use it immediately. 
\n\n> \n> >The number comes in to the source form that's stored.\n> >Anywhere that we're referencing something that a name\n> >is insufficient for (like a column name or table name)\n> >we put something into the source for that says \n> >referencing column n of the referenced thing m.\n> \n> Don't know enough about the internals, but don't we have attr ids for this,\n> and/or won't OIDs work in most cases? Maybe I'm missing your point.\n\nI was thinking of it more for getting a textual representation back out\nof the dependencies for constraints and thinking that I might want to \nreference something other than its name that's on something that's\nreferenced. And actually referencing column n meant more like attrno\nn of the row that m refers to. (Sort of like your thing below for\n%%table:OID,attrno)\n\n> >If so, that might\n> >make it harder to allow a drop function/create function\n> >to ever work transparently in the future.\n> \n> I *think* it doesn't work now; yes you can drop the function, but AFAIK,\n> the constraint references the old one (at least that's true for normal\n> triggers). What you are proposing makes people aware that they are about to\n> break more things than they know.\n\nTrue, I just wanted to point it out in case someone had some thought on\nchanging it so that the system somehow fixed such references, but it\nseems like alter function is more likely :)\n\n> >Plus, I'm not even really sure if it would be reasonable\n> >to get a source form like I was thinking of for check\n> >constraints really.\n> \n> I suspect it has to depend on how the constraint is actually checked. If\n> they are checked by using table OIDs then you need to store the OID\n> dependency and *somehow* reconstruct the source. If they are checked using\n> table names (getting OID each time), then store the name and the raw source\n> (maybe). 
You need to handle renaming of tables referenced in CHECK clauses.\n> I hate rename (but I use it).\n>\n> Maybe you can do something nasty like store the source with escape\n> characters and OIDs in place of names. This is not as bad as it sounds, I\n> think. It also gets around the problem that original source may be\n> unrecoverable (eg. COALESCE is translated to CASE in the parser, so a CHECK\n> clause that uses COALESCE will never be fully recoverable - although most\n> people would not see this as a problem). This messing around would have to\n> be done in the parser, I would guess. So:\n> \n> check exists(select * from reftbl where reffld=tbl.origfld)\n> \n> might become:\n> \n> check exists(select * from %%table:<OID>%% where\n> %%table-attr:<Table-OID>,<Attd-ID>%% \n> = %%table:<OID>%%.%%table-attr:<Table-OID>,<Attd-ID>%%\n> \n> Looking at this, maybe it's not such a good idea after all...\n\n:) Basically that's sort of what I was proposing with the %m.n above where\nI was referencing a reference rather than the oid directly and the n was \nbasically attd-id (and the reference stored the type so I didn't need\nit). But if it had to be sent from the parser then your format probably\nmakes more sense. I was thinking about reversing from the stored\nexpression in some fashion (but that wouldn't recover a coalesce or\nsomething like that).\n\n", "msg_date": "Mon, 7 Aug 2000 10:01:30 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Constraint stuff" } ]
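The OID-based scheme sketched in this thread — store constraint source with OID placeholders instead of names, so that renaming a referenced table cannot break the stored text — can be mocked up in a few lines. This is a hypothetical illustration with made-up OIDs, placeholder syntax, and names; it is not the actual catalog layout:

```python
import re

# Hypothetical catalogs: objects addressed by OID, attributes by (OID, attno).
objects = {5001: {"kind": "table", "name": "reftbl"}}
attributes = {(5001, 1): "reffld"}

# Stored source uses %%table:<oid>%% / %%attr:<oid>,<attno>%% placeholders
# in place of names, as proposed in the thread.
stored = "check exists(select * from %%table:5001%% where %%attr:5001,1%% = 0)"

def reconstruct(src):
    # Substitute the *current* names for the OID placeholders.
    src = re.sub(r"%%table:(\d+)%%",
                 lambda m: objects[int(m.group(1))]["name"], src)
    src = re.sub(r"%%attr:(\d+),(\d+)%%",
                 lambda m: attributes[(int(m.group(1)), int(m.group(2)))], src)
    return src

print(reconstruct(stored))           # resolves to the original table name
objects[5001]["name"] = "reftbl_v2"  # rename: the OID stays the same
print(reconstruct(stored))           # stored source follows the rename for free
```

The trade-off Philip notes still applies: what comes back out is a canonical reconstruction, not the user's original spelling (a COALESCE already folded to CASE stays a CASE).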
[ { "msg_contents": "\n> > > I think maybe what needs to be done to fix all this is to \n> restructure\n> > > postgres.c's interface to the parser/rewriter. What we want is to\n> > > run just the yacc grammar initially to produce a list of raw parse\n> > > trees (which is enough to detect begin/commit/rollback, no?) Then\n> > > postgres.c walks down that list, and for each element, if it is\n> > > commit/rollback OR we are not in abort state, do parse analysis,\n> > > rewrite, planning, and execution. (Thomas, any comments here?)\n> > \n> > Sure, why not (restructure postgres.c that is)? I was just thinking\n> > about how to implement \"autocommit\" and was considering \n> doing a hack in\n> > analyze.c which just plops a \"BEGIN\" in front of the \n> existing query. But\n> \n> Man, that is something I would do. :-)\n\nWouldn't the hack be to issue a begin work after connect,\nand then issue begin work after each commit or rollback ?\n\nAndreas\n", "msg_date": "Mon, 7 Aug 2000 15:23:11 +0200 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Questionable coding in proc.c & lock.c" }, { "msg_contents": "> > > ... was considering doing a hack in\n> > > analyze.c which just plops a \"BEGIN\" in front of the\n> > existing query.\n> Wouldn't the hack be to issue a begin work after connect,\n> and then issue begin work after each commit or rollback ?\n\nSure. When hacking all things are possible ;)\n\nBut I do have some preference for not doing work until something is\nrequested (not me personally, but in the code) so would prefer to issue\nthe BEGIN at the first query.\n\n - Thomas\n", "msg_date": "Mon, 07 Aug 2000 15:03:24 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Questionable coding in proc.c & lock.c" } ]
[ { "msg_contents": "\n> Brian Baquiran in the [GENERAL] list recently asked if it was \n> possible to\n> 'throttle-down' pg_dump so that it did not cause an IO bottleneck when\n> copying large tables.\n> \n> Can anyone see a reason not to pause periodically?\n\nHow about a simple renice ? (You mention that CPU goes to 100 % too)\n\n> \n> The only problem I have with pausing is that pg_dump runs in a single\n> transaction, and I have an aversion to keeping TX's open too \n> long, but this\n> is born of experience with other databases, and may not be \n> relevant to PG.\n\nProbably not an issue with postgres.\n\nAndreas\n", "msg_date": "Mon, 7 Aug 2000 15:32:45 +0200 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: pg_dump & performance degradation" } ]
[ { "msg_contents": "\n> > [email protected] writes:\n> > > My goal is to make the backend accept erroneous commands, \n> not falling\n> > > in *ABORT STATE*, but rolling back automatically, & \n> continue accepting\n> > > commands.\n> > \n> > The way you're doing it, you might as well just not use transaction\n> > blocks at all. I don't think wiping out the effects of all \n> preceding\n> > commands within the transaction counts as \"recovering from \n> an error\".\n> \n> Ok, maybe I exagerated, but kind of solves my problem. \n> GeneXus, my CASE tool,\n> will send begin/commit pairs, so I must 'recover' \n> automatically. I aimed\n> DB2-like behaviour, which I was told, aborts on errors within \n> transactions, but\n> remains in a runnable state. Don't you consider it valueable \n> whatsoever?\n\nDB/2 only aborts the one single statement inside this transaction.\nWhat you did will rollback everything since the last begin work,\nand thus is rather dangerous.\n\nAndreas\n", "msg_date": "Mon, 7 Aug 2000 15:50:20 +0200 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Now PostgreSQL recovers from errors within trns" }, { "msg_contents": "Zeugswetter Andreas SB wrote:\n> > Ok, maybe I exagerated, but kind of solves my problem.\n> > GeneXus, my CASE tool,\n> > will send begin/commit pairs, so I must 'recover'\n> > automatically. I aimed\n> > DB2-like behaviour, which I was told, aborts on errors within\n> > transactions, but\n> > remains in a runnable state. Don't you consider it valueable\n> > whatsoever?\n> \n> DB/2 only aborts the one single statement inside this transaction.\n> What you did will rollback everything since the last begin work,\n> and thus is rather dangerous.\n\nOk, I understand. 
About this, when PostgreSQL has nested transactions, is it\nalready planned that every single statement runs within its own transaction, so\nthat if it goes bad, it doesn't abort the surrounding transaction?\n\nDoes anybody know where I can find more info on Write Ahead Log, i.e. when it\nwill be ready, and what its features will be?\n\nThanks.\n\nRegards,\nHaroldo.\n\n\n-- \n----------------------+------------------------\n Haroldo Stenger | [email protected]\n Montevideo, Uruguay. | [email protected]\n----------------------+------------------------\n Visit UYLUG Web Site: http://www.linux.org.uy\n-----------------------------------------------\n", "msg_date": "Thu, 10 Aug 2000 23:00:40 -0300", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: AW: Now PostgreSQL recovers from errors within trns" } ]
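The DB/2 behaviour Andreas describes — abort only the single failing statement and keep the surrounding transaction alive — is essentially what savepoints later provided. A sketch of the idea using SQLite through Python's sqlite3 module, chosen here only because it makes a self-contained example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (i INTEGER PRIMARY KEY)")

cur.execute("INSERT INTO t VALUES (1)")
cur.execute("SAVEPOINT stmt")                  # mark the start of one statement
try:
    cur.execute("INSERT INTO t VALUES (1)")    # duplicate key -> error
except sqlite3.IntegrityError:
    cur.execute("ROLLBACK TO SAVEPOINT stmt")  # undo only the failed statement
cur.execute("RELEASE SAVEPOINT stmt")

cur.execute("INSERT INTO t VALUES (2)")        # transaction is still usable
con.commit()

print(cur.execute("SELECT i FROM t ORDER BY i").fetchall())
```

The earlier work before the error (the first INSERT) survives; only the failing statement is rolled back, unlike the auto-ROLLBACK-everything approach criticized above.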
[ { "msg_contents": "\nIs this a known problem? Is there a work-around?\n\ncreate table t (c char, i int);\ninsert into t values ('a', 1);\ninsert into t values ('a', 1);\ninsert into t values ('a', 2);\ninsert into t values ('b', 2);\n\nselect distinct on (c, i) c, count(i) from t group by c;\nERROR: Attribute t.i must be GROUPed or used in an aggregate function\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n", "msg_date": "Mon, 07 Aug 2000 11:07:03 -0400", "msg_from": "\"Mark Hollomon\" <[email protected]>", "msg_from_op": true, "msg_subject": "'GROUP BY' / 'DISTINCT' interaction bug?" }, { "msg_contents": "\"Mark Hollomon\" <[email protected]> writes:\n> select distinct on (c, i) c, count(i) from t group by c;\n> ERROR: Attribute t.i must be GROUPed or used in an aggregate function\n\nWhy do you think that's a bug? The DISTINCT clause requires use of\nthe ungrouped value of i, doesn't it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Aug 2000 11:59:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 'GROUP BY' / 'DISTINCT' interaction bug? " } ]
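Tom's point above is that `DISTINCT ON (c, i)` references the ungrouped value of `i`, so the query is rightly rejected; grouping by `c` alone already yields one row per `c`, making the DISTINCT redundant. A sketch of the plain grouped form against SQLite via Python's sqlite3 (`DISTINCT ON` itself is PostgreSQL-specific, so only the corrected query is shown):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (c TEXT, i INT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [("a", 1), ("a", 1), ("a", 2), ("b", 2)])

# One row per c; COUNT(i) aggregates the grouped values, so no
# ungrouped column leaks into the output.
rows = con.execute("SELECT c, COUNT(i) FROM t GROUP BY c ORDER BY c").fetchall()
print(rows)
```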
[ { "msg_contents": "(copied to -hackers)\n\n> you were right about the wrong compiler-options: it was not the -O3 but\n> the -ffast-math option that made the wrong rounding (and other stuff) ;-)\n> I compiled a ManDrake source RPM that used the default rpmrc in\n> /usr/lib/rpm/rpmrc, in which the optflags are set incorrectly:\n> \n> ... -O3 ... -ffast-math ...\n> \n> But this behaviour may be found in any distribution based onto the RedHat\n> system, at least when utilizing the RPM-utility unchanged. My RPM is \n> version 3.0.3 here.\n> The manpage of gcc does not suggest to use the -ffast-math with any of the\n> -O options.\n> When I removed the -ffast-math option, your time and timestamp datatypes\n> worked perfectly.\n> A hot fix for the SPEC-file would be, to eliminate that option by\n> RPM_OPT_FLAGS=`echo $RPM_OPT_FLAGS | sed -e \"s/-ffast-math//\"`\n> after the %build line. On the long term there should be a discussion, if\n> that option is realy OK for doing all the packages...\n> BTW: I'm using ManDrake 7.0 and took a postgresql-7.0.2-2mdk.src.rpm from a\n> server and compiled it for a target i586-pc-linux-gnu on an i686.\n\nThanks for tracking this down. I'll look at my RPM setup for\nMandrake-7.1 to confirm the behavior, and then perhaps Lamar can\nintroduce an appropriate fix to the canonical spec file.\n\n - Thomas\n", "msg_date": "Mon, 07 Aug 2000 15:35:23 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: was: bad datatypes time and timestamp - wrong gcc option" }, { "msg_contents": "Thomas Lockhart wrote:\n> > you were right about the wrong compiler-options: it was not the -O3 but\n> > the -ffast-math option that made the wrong rounding (and other stuff) ;-)\n> > I compiled a ManDrake source RPM that used the default rpmrc in\n> > /usr/lib/rpm/rpmrc, in which the optflags are set incorrectly:\n\n> > ... -O3 ... 
-ffast-math ...\n\n> > But this behaviour may be found in any distribution based onto the RedHat\n> > system, at least when utilizing the RPM-utility unchanged. My RPM is\n> > version 3.0.3 here.\n\nOn my RedHat 6.2 buildbox, I get -O2 -m486 -fno-strength-reduce (using\nthe very useful 'rpm --showrc|grep optflags' -- the optflags macro gets\nplaced in the RPM_OPT_FLAGS envvar during rpm startup) -- sounds like a\nMandrakeism.\n\n> > A hot fix for the SPEC-file would be, to eliminate that option by\n> > RPM_OPT_FLAGS=`echo $RPM_OPT_FLAGS | sed -e \"s/-ffast-math//\"`\n \n> Thanks for tracking this down. I'll look at my RPM setup for\n> Mandrake-7.1 to confirm the behavior, and then perhaps Lamar can\n> introduce an appropriate fix to the canonical spec file.\n\nI will look into doing this, although Mandrake really should fix this,\nif it is not recommended by the gcc people to combine -O stuff and\n-ffast-math.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 07 Aug 2000 11:56:35 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: was: bad datatypes time and timestamp - wrong gcc option" } ]
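The spec-file hot fix quoted above simply strips -ffast-math out of the optflags string with sed. The same substitution mirrored in Python as a quick self-contained check; the flag list here is a made-up Mandrake-style example, not the exact distribution defaults:

```python
import re

# Hypothetical Mandrake-style optflags string containing the problem option.
rpm_opt_flags = "-O3 -fomit-frame-pointer -ffast-math -fno-exceptions"

# Equivalent of: sed -e "s/-ffast-math//", plus surrounding-space cleanup.
fixed = re.sub(r"\s*-ffast-math", "", rpm_opt_flags)
print(fixed)
```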
[ { "msg_contents": "Is this a bug or have I just not noticed a nuance with SQL\n\nAssume I have created the two tables\n\ncreate table foo (\n id int4\n);\n\ncreate table foo_child (\n name text\n) inherits (foo);\n\nIf I do\n\nselect id, name from foo_child union select id, null as name from foo;\nit works\n\nselect id, null as text from foo union select id, name from foo_child;\nfails with\n\nunable to trasform {insert whatever type here} into unknown\n Each UNION | EXCEPT | INTERSECT clause must have compatible target \ntypes\n\nIf this isn't a bug, it would be a nice feature to be able to \ncoax a data type into an 'unknown' field...\n\nI know it would make my life easier... :)\n\n\n- Thomas Swan\n- Graduate Student - Computer Science\n- The University of Mississippi\n-\n- \"People can be categorized into two fundamental\n- groups, those that divide people into two groups\n- and those that don't.\"-", "msg_date": "Mon, 07 Aug 2000 11:01:36 -0500", "msg_from": "Thomas Swan <[email protected]>", "msg_from_op": true, "msg_subject": "UNIONS" }, { "msg_contents": "Thomas Swan <[email protected]> writes:\n> select id, null as text from foo union select id, name from foo_child;\n> fails with\n> unable to trasform {insert whatever type here} into unknown\n> Each UNION | EXCEPT | INTERSECT clause must have compatible target \n> types\n\nThe UNION type-resolution code could use some work; right now I think\nthe algorithm is to use the types of the first SELECT and force\neverything else into that. A more symmetrical\npromote-to-common-supertype approach would be nice. The UNION code is\nsuch a mess that I haven't wanted to touch it until we do querytree\nrevisions in 7.2, though.\n\nIn the meantime, you should force the NULL to have the datatype you want\nwith something like \"null::text\" or \"cast (null as text)\". 
Note that\nthe way you have it above is only assigning a column label that happens\nto be \"text\"; it's not a type coercion.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Aug 2000 14:07:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UNIONS " }, { "msg_contents": "At 01:07 PM 8/7/2000, Tom Lane wrote:\n>Thomas Swan <[email protected]> writes:\n> > select id, null as text from foo union select id, name from foo_child;\n> > fails with\n> > unable to trasform {insert whatever type here} into unknown\n> > Each UNION | EXCEPT | INTERSECT clause must have compatible \n> target\n> > types\n>\n>The UNION type-resolution code could use some work; right now I think\n>the algorithm is to use the types of the first SELECT and force\n>everything else into that. A more symmetrical\n>promote-to-common-supertype approach would be nice. The UNION code is\n>such a mess that I haven't wanted to touch it until we do querytree\n>revisions in 7.2, though.\n>\n>In the meantime, you should force the NULL to have the datatype you want\n>with something like \"null::text\" or \"cast (null as text)\". Note that\n>the way you have it above is only assigning a column label that happens\n>to be \"text\"; it's not a type coercion.\n\nThe reason I was asking is that I had an idea for doing the select ** from \ntablename* that would expand.\n\nIt could be macro of sorts but part of it depending on creating a null \ntable or the equivalent of it with nothing but a null column for each \ndifferent column of the set. 
I had a reverse traversal of the classes set \nup, but it didn't work because I could allow for all the columns of all the \nchildren.\n\nIf you could recommend a place to start, I wouldn't mind looking at the \nexisting code and seeing what I could do.\n\n-\n- Thomas Swan\n- Graduate Student - Computer Science\n- The University of Mississippi\n-\n- \"People can be categorized into two fundamental\n- groups, those that divide people into two groups\n- and those that don't.\"", "msg_date": "Mon, 07 Aug 2000 13:56:31 -0500", "msg_from": "Thomas Swan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: UNIONS " }, { "msg_contents": "Thomas Swan <[email protected]> writes:\n> The reason I was asking is that I had an idea for doing the select ** from \n> tablename* that would expand.\n\n> It could be macro of sorts but part of it depending on creating a null \n> table or the equivalent of it with nothing but a null column for each \n> different column of the set.\n\nWhat happens when two different child tables have similarly-named\ncolumns of different types?\n\nIn any case, this wouldn't be a very satisfactory solution because you\ncouldn't tell the difference between a null stored in a child table and\nthe lack of any column at all. We really need to do it the hard way,\nie, issue a new tuple descriptor as we pass into each new child table.\n\nThere appears to have once been support for that back in the Berkeley\ndays; you might care to dig through Postgres 4.2 or so to see how they\ndid it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Aug 2000 15:02:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UNIONS " } ]
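The casting workaround Tom Lane describes in the thread above can be sketched as follows. This is a sketch of the 7.0-era behavior discussed in the messages; the foo/foo_child tables mirror the ones in the original report, and the exact failure text is the poster's paraphrase, not a verified error message.

```sql
-- Tables roughly as in the original report
CREATE TABLE foo (id int4);
CREATE TABLE foo_child (name text) INHERITS (foo);

-- Fails: the bare NULL has type "unknown"; "AS text" only sets
-- the output column's label, it is not a type coercion
SELECT id, null AS text FROM foo
UNION
SELECT id, name FROM foo_child;

-- Works: coerce the NULL explicitly to the wanted type
SELECT id, null::text AS name FROM foo
UNION
SELECT id, name FROM foo_child;
-- equivalently: CAST(null AS text)
```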
[ { "msg_contents": "\n> > If I remember correctly,this has been only in case of SQL functions.\n> \n> True, the tlist is ignored except in SQL functions --- another reason\n> why attaching it to all function nodes is a waste. I believe that's\n> itself a bug, since it seems like PL functions ought to be capable\n> of returning tuples (whether they actually can or not is \n> another story,\n> but it sure seems like plpgsql ought to be close to being able to).\n> By separating the fieldselect operation into another node, we can fix\n> that bug too, since it wouldn't matter what the function's\n> implementation language is.\n\nSounds like a huge win !\n\nAndreas\n", "msg_date": "Mon, 7 Aug 2000 18:15:59 +0200 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Anyone particularly wedded to func_tlist mechanism?\t " } ]
[ { "msg_contents": "\n> I've bumped the system catalog version number and committed changes\n> which:\n\n> o implement CREATE/DROP SCHEMA as a synonym for CREATE DATABASE\n\nThis is imho wrong ! \nPlease do not preassume results to a discussion that is not over or has \nnot reached a consensus.\n\nSome of us (me) where of the opinion that schema must be a hierarchy below\nour database. \nAccess to different schemas within one session is mandatory.\nNon default schemas need to be syntactically specified with:\n\tschemaname.tabname\n\nSo I ask you to please revert the above change unless we reach a consensus.\n\nAndreas\n", "msg_date": "Mon, 7 Aug 2000 18:43:13 +0200 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: LIKE/ESCAPE et al, initdb required!" }, { "msg_contents": "> So I ask you to please revert the above change unless we reach a consensus.\n\nDon't panic! I did this with the *explicit* mention that it is a\nstopgap, and it does not preclude the better long term solution which\nyou have been discussing. The CVS log reflects this, as does my\nannouncement to the list.\n\nThis will be an unannounced feature in our next release (if at all), so\nfolks can't claim to have grown fond of it ;)\n\n - Thomas\n", "msg_date": "Mon, 07 Aug 2000 17:00:26 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: LIKE/ESCAPE et al, initdb required!" }, { "msg_contents": "On Mon, 7 Aug 2000, Thomas Lockhart wrote:\n\n> > So I ask you to please revert the above change unless we reach a consensus.\n> \n> Don't panic! I did this with the *explicit* mention that it is a\n> stopgap, and it does not preclude the better long term solution which\n> you have been discussing. 
The CVS log reflects this, as does my\n> announcement to the list.\n> \n> This will be an unannounced feature in our next release (if at all), so\n> folks can't claim to have grown fond of it ;)\n\nSo, if its a stopgap and won't be announced ... why implement it? *raised\neyebrow* *grin*\n\n\n", "msg_date": "Mon, 7 Aug 2000 20:45:42 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: LIKE/ESCAPE et al, initdb required!" } ]
[ { "msg_contents": "Jan Wieck <[email protected]> writes:\n> PL/Tcl and PL/pgSQL will load a function's source only once a\n> session. The functions loaded are identified by OID, so if\n> you drop/create a function, the PL handler will simply load a\n> different function too (from his point of view). At the time\n> we are able to ALTER a function, we might want to include\n> some version counter to pg_proc and in the fmgr_info?\n\nMore generally, the PL functions need to be able to deal with\nrecomputing saved plans after an ALTER of a table referenced by the\nfunction. I haven't really thought about how to do that ... but it\nseems like Stephan's idea of a table showing referencers and referencees\nmight help detect the cases where plans have to be flushed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Aug 2000 16:23:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Constraint stuff " }, { "msg_contents": "\nOn Mon, 7 Aug 2000, Tom Lane wrote:\n\n> Jan Wieck <[email protected]> writes:\n> > PL/Tcl and PL/pgSQL will load a function's source only once a\n> > session. The functions loaded are identified by OID, so if\n> > you drop/create a function, the PL handler will simply load a\n> > different function too (from his point of view). At the time\n> > we are able to ALTER a function, we might want to include\n> > some version counter to pg_proc and in the fmgr_info?\n> \n> More generally, the PL functions need to be able to deal with\n> recomputing saved plans after an ALTER of a table referenced by the\n> function. I haven't really thought about how to do that ... but it\n\n More and more generally, IMHO all saved plans need some validity \n checking and not only for ALTER, but also for all operation those\n changing relevant system tables. 
And this is not problem for PL only,\n but for all what is based on SPI (and VIEWs?).\n\n IMHO correct solution is _one_ space and one method for plans saving\n = query/plan cache, and some common validity-checker that will work\n over this cache. But how implement validity-checker is unknown...\n (? Call from all command that changing system tables some handler, that\n check plans in a cache ?)\n\n\t\t\t\t\t\tKarel\n\nBTW. - first step in this problem is (or can be) on the way --- \n query cache...\n\n", "msg_date": "Tue, 8 Aug 2000 10:26:06 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Constraint stuff " } ]
[ { "msg_contents": "\nI need to learn how to type headers I think... :(\n\nStephan Szabo\[email protected]\n\nOn Mon, 7 Aug 2000, Jan Wieck wrote:\n\n> > More generally, the PL functions need to be able to deal with\n> > recomputing saved plans after an ALTER of a table referenced by the\n> > function. I haven't really thought about how to do that ... but it\n> > seems like Stephan's idea of a table showing referencers and referencees\n> > might help detect the cases where plans have to be flushed.\n> \n> More more generally, you cannot tell which objects are\n> referenced from saved plans inside of a PL function. Might be\n> possible for PL/pgSQL, where we could use a specialized\n> invocation of the PL compiler to determine. But there's no\n> way to do it for PL/Tcl or the like.\n\n> We might end up with some general catalog sequence, bumped\n> any time a schema change happens, and require each function\n> to forget about all of it's saved plans for the next\n> transaction. Ugly, but the only way I see to be consistent.\n\nAs a dumb question to help me understand better...\nWhat exactly is saved in the plans and how are the plan saved for a\nPL/Tcl function that does something where it generates a query that\nyou say don't know the table of until run time?\n\n\n", "msg_date": "Mon, 7 Aug 2000 16:29:33 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Constraint stuff (fwd)" } ]
[ { "msg_contents": "> > select id, null as text from foo union select id, name from \n> foo_child;\n> > fails with\n> > unable to trasform {insert whatever type here} into unknown\n> > Each UNION | EXCEPT | INTERSECT clause must have \n> compatible target \n> > types\n> \n> The UNION type-resolution code could use some work; right now I think\n> the algorithm is to use the types of the first SELECT and force\n> everything else into that.\n\nImho this is expected behavior (maybe even standard). Very easy to\nunderstand.\n \n> A more symmetrical\n> promote-to-common-supertype approach would be nice.\n\nWhile this sounds sexy it is not what people would expect (imho),\nand would have a magic touch to it.\n\nAndreas\n", "msg_date": "Tue, 8 Aug 2000 09:39:55 +0200 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: UNIONS " }, { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n>> The UNION type-resolution code could use some work; right now I think\n>> the algorithm is to use the types of the first SELECT and force\n>> everything else into that.\n\n> Imho this is expected behavior (maybe even standard).\n\nWrong. Read the spec (see 9.3 in SQL99).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Aug 2000 09:21:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: UNIONS " } ]
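The difference between the two resolution rules debated above can be seen with a small mixed-type sketch. The described outcomes follow the claims in the messages (first-SELECT-wins in 7.0, common supertype per SQL99 9.3); they are not verified against either implementation.

```sql
SELECT 1 AS x      -- first SELECT: integer
UNION
SELECT 2.5;        -- second SELECT: numeric

-- "First SELECT wins" (the 7.0 behavior Tom Lane describes):
--   the result column is forced to integer, so 2.5 gets coerced
--   into that type.
-- SQL99 rules (section 9.3, cited above):
--   the result type is the common supertype of all branches,
--   here numeric, so 2.5 is preserved.
```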
[ { "msg_contents": "I realized that the GRANT/REVOKE statements don't change the *seq tables. \nSo, if I call a currval(...) function in a transaction, I must use the\nGRANT for the sequence table, too. I find it as a bug. Is it? I suggest\ndoing some other GRANTs for the sequence tables of the relation if a GRANT\nis called on it.\n\nZoltan\n\n", "msg_date": "Tue, 8 Aug 2000 09:47:46 +0200 (CEST)", "msg_from": "Kovacs Zoltan Sandor <[email protected]>", "msg_from_op": true, "msg_subject": "ACL inheritance for sequences" } ]
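Zoltan's observation can be illustrated like this. Table, sequence, and user names are made up; the privilege names follow the 7.0-era model in which nextval()/currval() access the sequence relation directly, so this is a sketch rather than a tested session.

```sql
CREATE TABLE items (id serial, note text);
-- "serial" creates an implicit sequence, here items_id_seq

GRANT SELECT, INSERT ON items TO fred;
-- Not sufficient: fred's nextval()/currval() calls still fail,
-- because the GRANT on the table did not touch the sequence

GRANT SELECT, UPDATE ON items_id_seq TO fred;
-- Only now can fred use the sequence behind the serial column
```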
[ { "msg_contents": "When the WHERE clause includes a sub query the query plan seems to ignore\nindexes.\nSee the examples below. \nTable R1684 has one column, stockno, which is the same type as the stockno\nin the books_fti table. There is no index on R1684.\nIn the first case the index on books_fti(stockno) is not used but in the\nsecond case it is.\n\n \n\n=============================== Query 1\n======================================= \nexplain select * from books_fti where stockno in (select stockno from R1684);\n\nSeq Scan on books_fti (cost=79300.27 rows=1024705 width=160)\n SubPlan\n -> Seq Scan on r1684 (cost=43.00 rows=1000 width=12) \n\n================================ Query 2\n=======================================\n explain select * from books_fti where stockno in\n('0815171161','1857281012','0419251901');\n\nIndex Scan using allbooks_isbn, allbooks_isbn, allbooks_isbn on books_fti\n(cost\n=6.15 rows=5 width=160)\n \n-- \nthorNET - Internet Consultancy, Services & Training\nPhone: 01454 854413\nFax: 01454 854412\nhttp://www.thornet.co.uk \n", "msg_date": "Tue, 08 Aug 2000 12:22:12 +0100", "msg_from": "Steve Heaven <[email protected]>", "msg_from_op": true, "msg_subject": "Query plan and sub-queries" }, { "msg_contents": "Steve Heaven wrote:\n> \n> When the WHERE clause includes a sub query the query plan seems to ignore\n> indexes.\n\nThis is a FAQ:\n\n4.23) Why are my subqueries using IN so slow?\n\nCurrently, we join subqueries to outer queries by sequential\nscanning the result of the subquery for each row of the outer\nquery. A workaround is to replace IN with EXISTS: \n\n SELECT *\n FROM tab\n WHERE col1 IN (SELECT col2 FROM TAB2)\n\nto: \n\n SELECT *\n FROM tab\n WHERE EXISTS (SELECT col2 FROM TAB2 WHERE col1 = col2)\n\nWe hope to fix this limitation in a future release. 
\n\nHope that helps, \n\nMike Mascari\n", "msg_date": "Tue, 08 Aug 2000 08:24:28 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan and sub-queries" }, { "msg_contents": "At 08:24 08/08/00 -0400, you wrote:\n> A workaround is to replace IN with EXISTS: \n\nThis still does a sequential rather that indexed scan:\n\nexplain select * from books_fti where exists \n (select R1684.stockno from R1684,books_fti where\nR1684.stockno=books_fti.stockno ); \n\nResult (cost=79300.27 rows=0 width=0)\n InitPlan\n -> Nested Loop (cost=2093.00 rows=1024706 width=24)\n -> Seq Scan on r1684 (cost=43.00 rows=1000 width=12)\n -> Index Scan using allbooks_isbn on books_fti (cost=2.05\nrows=1024705 width=12)\n -> Seq Scan on books_fti (cost=79300.27 rows=1024705 width=160)\n \n-- \nthorNET - Internet Consultancy, Services & Training\nPhone: 01454 854413\nFax: 01454 854412\nhttp://www.thornet.co.uk \n", "msg_date": "Tue, 08 Aug 2000 13:47:34 +0100", "msg_from": "Steve Heaven <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query plan and sub-queries" }, { "msg_contents": "Steve Heaven wrote:\n> \n> At 08:24 08/08/00 -0400, you wrote:\n> > A workaround is to replace IN with EXISTS:\n> \n> This still does a sequential rather that indexed scan:\n> \n> explain select * from books_fti where exists\n> (select R1684.stockno from R1684,books_fti where\n> R1684.stockno=books_fti.stockno );\n\nFirstly, a simple join would yield the same results:\n\nSELECT books_fti.* FROM books_fti, R1684 WHERE\nbooks_fti.stockno = R1684.stockno;\n\nSecondly, you've listed the target table twice in the above\nquery, which might be causing a problem with the planner.\nInstead, it should read:\n\nSELECT * FROM books_fti WHERE EXISTS (\n SELECT R1684.stockno FROM R1684 WHERE R1684.stockno =\nbooks_fti.stockno\n);\n\nThat should result in 1 sequential scan on one of the tables, and\n1 index scan on the inner table. 
The plan should look something\nlike:\n\nSeq Scan on R1684 (cost=9.44 rows=165 width=12)\n SubPlan\n -> Index Scan using allbooks_isbn on books_fti (cost=490.59\nrows=7552 width=12)\n\n\nHope that helps, \n\nMike Mascari\n", "msg_date": "Tue, 08 Aug 2000 10:17:23 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan and sub-queries" }, { "msg_contents": "At 10:17 08/08/00 -0400, Mike Mascari wrote:\n>\n>Firstly, a simple join would yield the same results:\n>\n>SELECT books_fti.* FROM books_fti, R1684 WHERE\n>books_fti.stockno = R1684.stockno;\n\nYes that gives me:\nNested Loop (cost=2093.00 rows=1024706 width=172)\n -> Seq Scan on r1689 (cost=43.00 rows=1000 width=12)\n -> Index Scan using allbooks_isbn on books_fti (cost=2.05 rows=1024705\nwidth\n=160) \n\n\nBut the 'EXISTS' sub-query you suggest still doesnt use the index.\n\n>SELECT * FROM books_fti WHERE EXISTS (\n> SELECT R1684.stockno FROM R1684 WHERE R1684.stockno =\n>books_fti.stockno\n>);\n>\n>That should result in 1 sequential scan on one of the tables, and\n>1 index scan on the inner table. 
The plan should look something\n>like:\n>\n>Seq Scan on R1684 (cost=9.44 rows=165 width=12)\n> SubPlan\n> -> Index Scan using allbooks_isbn on books_fti (cost=490.59\n>rows=7552 width=12)\n>\n\nNo actually I'm getting:\nSeq Scan on books_fti (cost=79300.27 rows=1024705 width=160)\n SubPlan\n -> Seq Scan on r1684 (cost=43.00 rows=2 width=12) \n-- \nthorNET - Internet Consultancy, Services & Training\nPhone: 01454 854413\nFax: 01454 854412\nhttp://www.thornet.co.uk \n", "msg_date": "Tue, 08 Aug 2000 15:36:51 +0100", "msg_from": "Steve Heaven <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query plan and sub-queries" }, { "msg_contents": "Has something happened to the list server ?\n\nI am only subscribed to the general list, but after two days of nothing I'm\nnow getting the hackers list stuff.\n\nSteve\n\n\n-- \nthorNET - Internet Consultancy, Services & Training\nPhone: 01454 854413\nFax: 01454 854412\nhttp://www.thornet.co.uk \n", "msg_date": "Thu, 14 Sep 2000 08:59:53 +0100", "msg_from": "Steve Heaven <[email protected]>", "msg_from_op": true, "msg_subject": "List funnies ?" }, { "msg_contents": "> Has something happened to the list server ?\n> \n> I am only subscribed to the general list, but after two days of nothing I'm\n> now getting the hackers list stuff.\n> \nSo it's not just me?\n\nHow sad, I was hoping I had be promoted to Hacker status... ;-)\n\n\n/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\n\nFabrizio Ermini Alternate E-mail:\nC.so Umberto, 7 [email protected]\nloc. Meleto Valdarno Mail on GSM: (keep it short!)\n52020 Cavriglia (AR) [email protected]\n", "msg_date": "Thu, 14 Sep 2000 12:26:56 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: List funnies ?" }, { "msg_contents": "\nokay, this is most odd ... 
according to the list software, you are still\nonly subscribed to the general list:\n\n Address: [email protected]\n Address is valid.\n Address is registered as:\n [email protected]\n Registered at Fri Sep 1 15:33:13 2000 GMT.\n Registration data last changed at Fri Sep 1 15:33:13 2000 GMT.\n Address is subscribed to 1 list:\n pgsql-general:\n Subscribed at Fri Sep 1 15:33:13 2000 GMT.\n Receiving each message as it is posted.\n Subscriber flags:\n noeliminatecc\n nohide\n prefix\n replyto\n selfcopy\n norewritefrom\n noackstall\n noackdeny\n noackpost\n noackreject\n Data last changed at Fri Sep 1 15:33:13 2000 GMT.\n\ncan you forward me a copy of the next 'hackers' message you receive, along\nwith its *full* headers? Just to make sure, [email protected]\nhasn't been inadvertently subscribed to hackers, so we aren't getting a\ncross there:\n\nMajordomo>show [email protected]\n\n Address: [email protected]\n Address is valid.\n Address is not registered.\n\n\n\nOn Thu, 14 Sep 2000 [email protected] wrote:\n\n> > Has something happened to the list server ?\n> > \n> > I am only subscribed to the general list, but after two days of nothing I'm\n> > now getting the hackers list stuff.\n> > \n> So it's not just me?\n> \n> How sad, I was hoping I had be promoted to Hacker status... ;-)\n> \n> \n> /\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\n> \n> Fabrizio Ermini Alternate E-mail:\n> C.so Umberto, 7 [email protected]\n> loc. Meleto Valdarno Mail on GSM: (keep it short!)\n> 52020 Cavriglia (AR) [email protected]\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 14 Sep 2000 09:01:50 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: List funnies ?" 
}, { "msg_contents": "When last we left our intrepid adventurers...\n\n > Has something happened to the list server ?\n >\n > I am only subscribed to the general list, but after two days of nothing I'm\n > now getting the hackers list stuff.\n >\n >So it's not just me?\n\nIt's not just you... this morning, I was surprised that my filters hadn't \nfiltered the pgsql-hackers messages to another folder, when I realized, \nHEY! I'm not ON the hackers list...\n\nSo for lack of anything better to to, I unsubbed, and got a return message \nthat it was successful. This, in spite of the fact that I'd never \nsubscribed. Hmmm...\n\nNow, back to our regularly scheduled programming.\n\nDavid Veatch - [email protected]\n\n\"Many people would sooner die than think.\nIn fact, they do.\" - Bertrand Russell\n\n", "msg_date": "Thu, 14 Sep 2000 07:31:32 -0500", "msg_from": "David Veatch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: List funnies ?" }, { "msg_contents": "On Thu, Sep 14, 2000 at 09:01:50AM -0300, The Hermit Hacker wrote:\n> \n> okay, this is most odd ... according to the list software, you are still\n> only subscribed to the general list:\n\nMarc\n\nI can also confirm that I had no message on pgsql-general for about\ntwo days until the thread 'List Funnies' started. Some -general has\nbeen vanishing into a black hole. (Including one message I know a\nfriend of mine, 'Richard Poole <[email protected]>' sent recently).\n\nJules\n", "msg_date": "Thu, 14 Sep 2000 13:37:02 +0100", "msg_from": "Jules Bean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: List funnies ?" }, { "msg_contents": "\nthere was a problem with database corruption in pgsql-general that we\nfixed last night ... if anyone else is interested in helping, I'm going to\nbe working with the Mj2 guys on moving the backend from BerkeleyDB ->\nPostgreSQL ... 
if anyone is interested in helping out, let me know ...\n\nOn Thu, 14 Sep 2000, Jules Bean wrote:\n\n> On Thu, Sep 14, 2000 at 09:01:50AM -0300, The Hermit Hacker wrote:\n> > \n> > okay, this is most odd ... according to the list software, you are still\n> > only subscribed to the general list:\n> \n> Marc\n> \n> I can also confirm that I had no message on pgsql-general for about\n> two days until the thread 'List Funnies' started. Some -general has\n> been vanishing into a black hole. (Including one message I know a\n> friend of mine, 'Richard Poole <[email protected]>' sent recently).\n> \n> Jules\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 14 Sep 2000 10:02:45 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: List funnies ?" } ]
[ { "msg_contents": "I know that I've seen this answer before but can't seem to find it for\n7.0.2 in the archives. Which file(s) need to be changed to have Postgres\ndefault to 32K size row limits rather than 8K? Has anyone run into any\nhorror stories after going to 32K?\n\nThanks.\n-Tony\n\np.s. Could the procedure for increasing to 32K rows be added to the FAQ?\n(Hopefully, it won't be necessary post-TOAST).\n\n\n", "msg_date": "Tue, 08 Aug 2000 10:24:43 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Extending to 32K row limit" }, { "msg_contents": "At 12:24 PM 8/8/2000, G. Anthony Reina wrote:\n>I know that I've seen this answer before but can't seem to find it for\n>7.0.2 in the archives. Which file(s) need to be changed to have Postgres\n>default to 32K size row limits rather than 8K? Has anyone run into any\n>horror stories after going to 32K?\n\nI've been running it for a while and fairly heavily without any problems...\n\nin src/include/config.h modify the following section AFTER running configure.\n\n/*\n * Size of a disk block --- currently, this limits the size of a tuple.\n * You can set it bigger if you need bigger tuples.\n */\n/* currently must be <= 32k bjm */\n#define BLCKSZ 8192\n\nchange to\n\n#define BLCKSZ 32768\n\nThis has worked for me....\n-\n- Thomas Swan\n- Graduate Student - Computer Science\n- The University of Mississippi\n-\n- \"People can be categorized into two fundamental\n- groups, those that divide people into two groups\n- and those that don't.\"", "msg_date": "Tue, 08 Aug 2000 12:26:37 -0500", "msg_from": "Thomas Swan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extending to 32K row limit" }, { "msg_contents": "", "msg_date": "Tue, 08 Aug 2000 10:34:04 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extending to 32K row limit" }, { "msg_contents": "At 11:17 AM 8/8/00 -0700, G. Anthony Reina wrote:\n>Thanks Don. One more question: Does Postgres set aside an entire 8 K (or\n32 K) of\n>hard disk space for the row; or, does it just use what's needed to store the\n>information? For example, if I only have one integer value in a row, does\nPostgres\n>set aside 8K of harddrive space or just sizeof(int) space (with some\npointer where\n>other values in the row could be placed on the disk)?\n\nNo, it does not allocate a fixed 8K (or 32K) block per row. The size of a\nrow is dependent on the data stored within the row. 
Each row also contains\na header of modest length.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 08 Aug 2000 10:51:16 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extending to 32K row limit" }, { "msg_contents": "> I know that I've seen this answer before but can't seem to find it for\n> 7.0.2 in the archives. Which file(s) need to be changed to have Postgres\n> default to 32K size row limits rather than 8K? Has anyone run into any\n> horror stories after going to 32K?\n\nBumping it to 32K on AIX 4.1 broke the disk drivers here, so I would say it\ndepends on your platform. Going to 16K worked fine, but after the jump to\n32K, some fsck'ing was required to fix up our drives.\n\nThe problem was definitely in AIX since many other platforms have reported\nno problems with the 32K setting. If another use has bumped it up\nsuccessfully for the same platform as yours, then I'd feel confident in\ndoing it. If you don't get a reply to that effect or can't find it in the\narchives that someone has done it, I'd recommend putting it to 32K on a test\nmachine first.\n\nJust my $.02 worth after trying it on AIX 4.1.\n\nDarren\n\n", "msg_date": "Tue, 8 Aug 2000 14:14:50 -0400", "msg_from": "\"Darren King\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Extending to 32K row limit" }, { "msg_contents": "Thanks Don. One more question: Does Postgres set aside an entire 8 K (or 32 K) of\nhard disk space for the row; or, does it just use what's needed to store the\ninformation? 
For example, if I only have one integer value in a row, does Postgres\nset aside 8K of harddrive space or just sizeof(int) space (with some pointer where\nother values in the row could be placed on the disk)?\n\nI just wanted to make sure that my old data at 8 K wasn't going to take up 4 times as\nmuch harddrive space after the 32K conversion.\n\nThanks.\n-Tony\n\n\n\n\nDon Baccus wrote:\n\n> At 12:26 PM 8/8/00 -0500, Thomas Swan wrote:\n> >>>>\n>\n> At 12:24 PM 8/8/2000, G. Anthony Reina wrote:\n>\n> I know that I've seen this answer before but can't seem to find it for\n> 7.0.2 in the archives. Which file(s) need to be changed to have Postgres\n> default to 32K size row limits rather than 8K? Has anyone run into any\n> horror stories after going to 32K?\n>\n> I've been running it for a while and fairly heavily without any problems...\n>\n> <<<<\n>\n> Folks using the OpenACS toolkit, which includes the AOLserver site run by\n> AOL (did y'all know that's a Postgres site now? - just the AOLserver site,\n> not all of AOL, don't get TOO excited) run with a 16KB blocksize if they\n> follow our instructions.\n>\n> I've been running a couple of sites like that.\n>\n> Zero problems.\n>\n> - Don Baccus, Portland OR <[email protected]>\n> Nature photos, on-line guides, Pacific Northwest\n> Rare Bird Alert Service and other goodies at\n> http://donb.photo.net.\n\n--\n///////////////////////////////////////////////////\n// G. Anthony Reina, MD //\n// The Neurosciences Institute //\n// 10640 John Jay Hopkins Drive //\n// San Diego, CA 92121 //\n// Phone: (858) 626-2132 //\n// FAX: (858) 626-2199 //\n////////////////////////////////////////////\n\n\n\n", "msg_date": "Tue, 08 Aug 2000 11:17:19 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extending to 32K row limit" }, { "msg_contents": "Thomas,\n\n I've re-done my database with the 32K tuple limit-- looks good.\nHowever, I seem to be having trouble with binary cursors. 
I think it may\nbe with the number of bytes in the tuple header (used to be 16 bytes\nwith the 8K limit). I've tried 16, 32, and 64, but haven't seemed to\nfind it. Have you used binary cursors with this setup?\n\nThanks.\n-Tony\n\n\n\n\nThomas Swan wrote:\n\n> At 12:24 PM 8/8/2000, G. Anthony Reina wrote:\n>\n>> I know that I've seen this answer before but can't seem to find it\n>> for\n>> 7.0.2 in the archives. Which file(s) need to be changed to have\n>> Postgres\n>> default to 32K size row limits rather than 8K? Has anyone run into\n>> any\n>> horror stories after going to 32K?\n>\n>\n> I've been running it for a while and fairly heavily without any\n> problems...\n>\n> in src/include/config.h modify the following section AFTER running\n> configure.\n>\n> /*\n> * Size of a disk block --- currently, this limits the size of a\n> tuple.\n> * You can set it bigger if you need bigger tuples.\n> */\n> /* currently must be <= 32k bjm */\n> #define BLCKSZ 8192\n>\n> change to\n>\n> #define BLCKSZ 32768\n>\n> This has worked for me....\n> -\n> - Thomas Swan\n> - Graduate Student - Computer Science\n> - The University of Mississippi\n> -\n> - \"People can be categorized into two fundamental\n> - groups, those that divide people into two groups\n> - and those that don't.\"\n\n", "msg_date": "Tue, 08 Aug 2000 15:49:49 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extending to 32K row limit" }, { "msg_contents": "Sorry. I just figured out it was an endianess problem rather than a header\nsize problem. Works fine now. Looks like 16 is still the magic number.\nPlease disregard the last question.\n\n-Tony\n\n\n\n\n\n\n\"G. Anthony Reina\" wrote:\n\n> Thomas,\n>\n> I've re-done my database with the 32K tuple limit-- looks good.\n> However, I seem to be having trouble with binary cursors. I think it may\n> be with the number of bytes in the tuple header (used to be 16 bytes\n> with the 8K limit). 
I've tried 16, 32, and 64, but haven't seemed to\n> find it. Have you used binary cursors with this setup?\n>\n> Thanks.\n> -Tony\n>\n> Thomas Swan wrote:\n>\n> > At 12:24 PM 8/8/2000, G. Anthony Reina wrote:\n> >\n> >> I know that I've seen this answer before but can't seem to find it\n> >> for\n> >> 7.0.2 in the archives. Which file(s) need to be changed to have\n> >> Postgres\n> >> default to 32K size row limits rather than 8K? Has anyone run into\n> >> any\n> >> horror stories after going to 32K?\n> >\n> >\n> > I've been running it for a while and fairly heavily without any\n> > problems...\n> >\n> > in src/include/config.h modify the following section AFTER running\n> > configure.\n> >\n> > /*\n> > * Size of a disk block --- currently, this limits the size of a\n> > tuple.\n> > * You can set it bigger if you need bigger tuples.\n> > */\n> > /* currently must be <= 32k bjm */\n> > #define BLCKSZ 8192\n> >\n> > change to\n> >\n> > #define BLCKSZ 32768\n> >\n> > This has worked for me....\n> > -\n> > - Thomas Swan\n> > - Graduate Student - Computer Science\n> > - The University of Mississippi\n> > -\n> > - \"People can be categorized into two fundamental\n> > - groups, those that divide people into two groups\n> > - and those that don't.\"\n\n", "msg_date": "Tue, 08 Aug 2000 15:57:44 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extending to 32K row limit" } ]
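The `config.h` edit quoted in the thread above can be scripted. The sketch below is a hypothetical illustration (the `sed` invocation is an assumption, not from the thread), demonstrated on a stand-in copy of the relevant `config.h` lines rather than a real source tree; in a real build the edit happens in `src/include/config.h` after `./configure` and before `make`, and an existing cluster must be dumped, re-initdb'd, and restored, since data files on disk keep their old block size.

```shell
# Stand-in for the relevant lines of src/include/config.h:
cat > config.h <<'EOF'
/*
 * Size of a disk block --- currently, this limits the size of a tuple.
 * You can set it bigger if you need bigger tuples.
 */
/* currently must be <= 32k bjm */
#define BLCKSZ 8192
EOF

# Bump the block size to the 32K maximum discussed in the thread:
sed 's/#define BLCKSZ 8192/#define BLCKSZ 32768/' config.h > config.h.tmp
mv config.h.tmp config.h

grep '#define BLCKSZ' config.h   # prints: #define BLCKSZ 32768
```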
[ { "msg_contents": "On Tue, 8 Aug 2000, Jan Wieck wrote:\n\n> That's not exactly what I said.\n> \n> PL/Tcl has spi_exec and spi_prepare/spi_execp commands. And\n> of course, when the function call's spi_prepare, it is known\n> which objects it uses in this one query. But in contrast to\n> PL/pgSQL, PL/Tcl could use an argument as a tablename,\n> attribute, function, whatnot and build a saved plan for it\n> (would need to do so again for each different argument\n> value). So it CAN possibly reference almost every object in\n> the entire database, and you have no chance to know that,\n> even after a hundredthousand invocations of the function.\n\nOkay, that's actually what I had thought, but I wasn't sure after\nyour previous message. But the saved plans themselves do actually\nreference particular objects, even if the function doesn't, right?\nSo you wouldn't necessarily need to recompile all saved plans, \njust ones that reference the changed objects, although it might\nbe easier to just force all of them.\n \n> And, you're not living isolated in your backend (I know -\n> everything would be so easy :-). There's life in other\n> processes too, and you need to tell them that they possibly\n> have to recompile saved plans for the next Xact. How\n> complicated do you want this to be?\n\nOnly as complicated as necessary... :) But it seems that it is \nnecessary to have really functional full set of alter commands. \nI mean, if a particular plan references a column whose type has \nchanged, that sounds like it would be bad to use an old saved plan.\n(Oops, that's not a varchar anymore... it's an integer...)\n\nI guess a question is, what is the correct/desired behavior in\ncertain cases... And what cases are reasonably our problem and what\ncan we say is the admin's problem? 
Obviously, we're not going to get\nfar trying to deal with the possibility that a user changes a column\ntype and does something that is no longer correct, except to error,\nwe probably can't and shouldn't fix it, but what things should we handle\nautomatically?\n\nIf you have a function that makes a query like select * from foo that\nisn't done via arguments and you rename foo, what *should* happen if\nanything? What if you remove it entirely? What about a constraint that\nreferences a function that's renamed, does the constraint follow the name\nchange? If the function is removed, do we want to remove the now broken\nconstraint?\n\nAdmittedly, the initial thought behind this whole thing was to allow\nconstraints to properly dump after renames and to make dropping tables or\ncolumns easier to handle for removing referencing constraints (assuming\nthat we were going to be at some point handling subqueries inside check\nconstraints). All the rest of this is way past where I initially thought\nit to...\n\n\n", "msg_date": "Tue, 8 Aug 2000 11:51:51 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Constraint stuff (fwd)" } ]
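The PL/Tcl pattern Jan describes, where the query text is assembled from an argument so the function can come to reference almost any object, can be made concrete with a small sketch. This is a hypothetical illustration only: the function name and body are invented, and it follows the old PL/Tcl convention that `spi_exec` exposes each result column as a Tcl variable.

```sql
CREATE FUNCTION count_rows(text) RETURNS int4 AS '
    # The table name arrives as argument $1 at call time, so which
    # relation this function touches cannot be known at CREATE time,
    # which is exactly the dependency-tracking problem discussed above.
    spi_exec "SELECT count(*) AS n FROM $1"
    return $n
' LANGUAGE 'pltcl';

-- Each distinct argument makes the function reference a different object:
--   SELECT count_rows('table1');
--   SELECT count_rows('pg_class');
```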
[ { "msg_contents": "Seems that it's not possible to combine arrays and foreign keys ?\n\nCREATE TABLE table1 (\n fld1 integer NOT NULL,\n number integer,\n level integer,\n PRIMARY KEY (fld1)\n);\n\nCREATE TABLE table2 (\n pkey integer NOT NULL,\n arvar integer[],\n PRIMARY KEY (pkey),\n FOREIGN KEY (arvar) REFERENCES table1(fld1)\n);\n\n\nThis works, but the following insert complains that \n\nERROR: Unable to identify an operator '=' for types 'int4' and '_int4'\n You will have to retype this query using an explicit cast\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2582\nHowitzvej 75 Åben 14.00-18.00 Email: [email protected]\n2000 Frederiksberg Lørdag 11.00-17.00 Web: www.suse.dk\n", "msg_date": "Tue, 8 Aug 2000 22:03:11 +0200", "msg_from": "Kaare Rasmussen <[email protected]>", "msg_from_op": true, "msg_subject": "Arrays and foreign keys" }, { "msg_contents": "\nWell, the two types aren't the same (one is an integer the\nother an integer array,) so I wouldn't expect it to work. 
Note: \nThis shows another thing it probably should check before allowing \nthe constraint to be created.\n\nI don't know if these belong in TODO, but this might\nbe the appropriate entry.\n* Make sure that types used in foreign key constraints\n are comparable.\n\nStephan Szabo\[email protected]\n\nOn Tue, 8 Aug 2000, Kaare Rasmussen wrote:\n\n> Seems that it's not possible to combine arrays and foreign keys ?\n> \n> CREATE TABLE table1 (\n> fld1 integer NOT NULL,\n> number integer,\n> level integer,\n> PRIMARY KEY (fld1)\n> );\n> \n> CREATE TABLE table2 (\n> pkey integer NOT NULL,\n> arvar integer[],\n> PRIMARY KEY (pkey),\n> FOREIGN KEY (arvar) REFERENCES table1(fld1)\n> );\n> \n> \n> This works, but the following insert complains that \n> \n> ERROR: Unable to identify an operator '=' for types 'int4' and '_int4'\n> You will have to retype this query using an explicit cast\n\n", "msg_date": "Wed, 9 Aug 2000 10:52:17 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "I get exactly the same behavior; it would be really helpful if foreign key\nconstraints were available for array types!\n\nTim\n\nKaare Rasmussen wrote:\n\n> Seems that it's not possible to combine arrays and foreign keys ?\n>\n> CREATE TABLE table1 (\n> fld1 integer NOT NULL,\n> number integer,\n> level integer,\n> PRIMARY KEY (fld1)\n> );\n>\n> CREATE TABLE table2 (\n> pkey integer NOT NULL,\n> arvar integer[],\n> PRIMARY KEY (pkey),\n> FOREIGN KEY (arvar) REFERENCES table1(fld1)\n> );\n>\n> This works, but the following insert complains that\n>\n> ERROR: Unable to identify an operator '=' for types 'int4' and '_int4'\n> You will have to retype this query using an explicit cast\n>\n> --\n> Kaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\n> Kaki Data tshirts, merchandize Fax: 3816 2582\n> Howitzvej 75 Åben 14.00-18.00 Email: [email protected]\n> 2000 Frederiksberg Lørdag 11.00-17.00 Web: 
www.suse.dk\n\n--\nTimothy H. Keitt\nNational Center for Ecological Analysis and Synthesis\n735 State Street, Suite 300, Santa Barbara, CA 93101\nPhone: 805-892-2519, FAX: 805-892-2510\nhttp://www.nceas.ucsb.edu/~keitt/\n\n\n\n", "msg_date": "Wed, 09 Aug 2000 11:32:04 -0700", "msg_from": "\"Timothy H. Keitt\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "\nThis is an interesting point. Originally postgres integrity rules were\nbased on a very general rules system where many things were possible to\nspecify. I'm curious about the more recent addition of referential\nintegrity to postgres (I know little about it), why it is such a\nspecific solution and is not based on the more general postgres rules\nsystem?\n\nThere are some functions somewhere in contrib that allow you to say\nwhether something is somewhere within an array, which is generally\nuseful for an ODBMS style data model and also the example below. Ideally\nit could somehow be linked into integrity checks.\n\n\n\n\"Timothy H. 
Keitt\" wrote:\n> \n> I get exactly the same behavior; it would be really helpful if foreign key\n> constraints were available for array types!\n> \n> Tim\n> \n> Kaare Rasmussen wrote:\n> \n> > Seems that it's not possible to combine arrays and foreign keys ?\n> >\n> > CREATE TABLE table1 (\n> > fld1 integer NOT NULL,\n> > number integer,\n> > level integer,\n> > PRIMARY KEY (fld1)\n> > );\n> >\n> > CREATE TABLE table2 (\n> > pkey integer NOT NULL,\n> > arvar integer[],\n> > PRIMARY KEY (pkey),\n> > FOREIGN KEY (arvar) REFERENCES table1(fld1)\n> > );\n> >\n> > This works, but the following insert complains that\n> >\n> > ERROR: Unable to identify an operator '=' for types 'int4' and '_int4'\n> > You will have to retype this query using an explicit cast\n> >\n> > --\n> > Kaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\n> > Kaki Data tshirts, merchandize Fax: 3816 2582\n> > Howitzvej 75 Åben 14.00-18.00 Email: [email protected]\n> > 2000 Frederiksberg Lørdag 11.00-17.00 Web: www.suse.dk\n> \n> --\n> Timothy H. Keitt\n> National Center for Ecological Analysis and Synthesis\n> 735 State Street, Suite 300, Santa Barbara, CA 93101\n> Phone: 805-892-2519, FAX: 805-892-2510\n> http://www.nceas.ucsb.edu/~keitt/\n", "msg_date": "Thu, 10 Aug 2000 09:40:38 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "\nOn Thu, 10 Aug 2000, Chris Bitmead wrote:\n\n> This is an interesting point. Originally postgres integrity rules were\n> based on a very general rules system where many things were possible to\n> specify. 
I'm curious about the more recent addition of referential\n> integrity to postgres (I know little about it), why it is such a\n> specific solution and is not based on the more general postgres rules\n> system?\n\nBecause unfortunately the SQL spec for referential integrity cannot really\nbe implemented in the current rules system (or at least not in a way that\nis terribly nice). One problem is the fact that they need the option to\nbe deferred to end of transaction (which we still have problems with now),\nplus I'm not sure that MATCH PARTIAL with referential integrity would be\npossible with the rewrites without having 2^(number of key elements) rules\nper action per constraint (that's the not terribly nice part). And there\nare rules about not letting a piece of data get multiply changed due to\ncircular dependencies that you'd need to work in as well. All in all,\nit's a mess.\n \n> There are some functions somewhere in contrib that allow you to say\n> whether something is somewhere within an array, which is generally\n> useful for an ODBMS style data model and also the example below. Ideally\n> it could somehow be linked into integrity checks.\nFor now, you should be able to define the element in array as the equality\noperator between integer and array of integers which would probably do\nit. \n\nThe spec generally says that the referenced and referencing values should\nbe equal (well, there are exceptions for NULLs in various cases). We'd\nhave to decide whether we'd want to extend that to be equal, except in the\ncase that the referenced value is an array in which case we use in array\ninstead. It'd probably be fairly easy to make the change\nassuming it's easy to tell if a column is an array.\n\n", "msg_date": "Wed, 9 Aug 2000 17:52:47 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "Stephan Szabo wrote:\n> > This is an interesting point. 
Originally postgres integrity rules were\n> > based on a very general rules system where many things were possible to\n> > specify. I'm curious about the more recent addition of referential\n> > integrity to postgres (I know little about it), why it is such a\n> > specific solution and is not based on the more general postgres rules\n> > system?\n> \n> Because unfortunately the SQL spec for referential integrity cannot really\n> be implemented in the current rules system (or at least not in a way that\n> is terribly nice). \n\nSo it wasn't feasible to extend the current rules system to support\nthese oddities, instead of implementing the specific solution?\n\n> One problem is the fact that they need the option to\n> be deferred to end of transaction (which we still have problems with now),\n> plus I'm not sure that MATCH PARTIAL with referential integrity would be\n> possible with the rewrites without having 2^(number of key elements) rules\n> per action per constraint (that's the not terribly nice part). And there\n> are rules about not letting a piece of data get multiply changed due to\n> circular dependencies that you'd need to work in as well. All in all,\n> it's a mess.\n> \n> > There are some functions somewhere in contrib that allow you to say\n> > whether something is somewhere within an array, which is generally\n> > useful for an ODBMS style data model and also the example below. Ideally\n> > it could somehow be linked into integrity checks.\n> For now, you should be able define the element in array as the equality\n> operator between integer and array of integers which would probably do\n> it.\n> \n> The spec generally says that the referenced and referencing values should\n> be equal (well, there are exceptions more NULLs in various cases). We'd\n> have to decide whether we'd want to extend that to be equal, except in the\n> case that the referenced value is an array in which case we use in array\n> instead. 
It'd probably be fairly easy probably to make the change\n> assuming it's easy to tell if a column is an array.\n", "msg_date": "Thu, 10 Aug 2000 10:57:38 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "At 10:57 AM 8/10/00 +1000, Chris Bitmead wrote:\n>Stephan Szabo wrote:\n>> > This is an interesting point. Originally postgres integrity rules were\n>> > based on a very general rules system where many things were possible to\n>> > specify. I'm curious about the more recent addition of referential\n>> > integrity to postgres (I know little about it), why it is such a\n>> > specific solution and is not based on the more general postgres rules\n>> > system?\n>> \n>> Because unfortunately the SQL spec for referential integrity cannot really\n>> be implemented in the current rules system (or at least not in a way that\n>> is terribly nice). \n>\n>So it wasn't feasible to extend the current rules system to support\n>these oddities, instead of implementing the specific solution?\n\nSince Jan apparently knows more about the current rules system than anyone\nelse on the planet (he's done a lot of work in that area in the past), and\nsince he designed the RI system, my guess is that the simple answer to your\nquestion is \"yes\".\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Wed, 09 Aug 2000 18:03:17 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "> Well, the two types aren't the same (one is an integer the\n> other an integer array,) so I wouldn't expect it to work. Note: \n\nEh, I could figure that out myself. 
What I'm asking for is if there is a way to\ncombine arrays with foreign keys?\n\nI believe the answer for now is 'no', but would like to get it confirmed, and\nalso draw attention to this if someone wants to make it.\n\n> * Make sure that types used in foreign key constraints\n> are comparable.\n\nAnd maybe \n* Add foreign key constraint for arrays\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2582\nHowitzvej 75 Åben 14.00-18.00 Email: [email protected]\n2000 Frederiksberg Lørdag 11.00-17.00 Web: www.suse.dk\n", "msg_date": "Thu, 10 Aug 2000 16:53:23 +0200", "msg_from": "Kaare Rasmussen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "On Thu, 10 Aug 2000, Kaare Rasmussen wrote:\n\n> > Well, the two types aren't the same (one is an integer the\n> > other an integer array,) so I wouldn't expect it to work. Note: \n> \n> Eh, I could figure that out myself. 
Probably defining an equals operator to make\nthe two types comparable for equality would allow the constraint to work.\n\n> I believe the answer for now is 'no', but did like to get it confirmed, and\n> also draw attention to this if someone wants to make it.\n> \n> > * Make sure that types used in foreign key constraints\n> > are comparable.\n> \n> And maybe \n> * Add foreign key constraint for arrays\n\nActually, it would be:\n* Change foreign key constraint for array -> element to mean element\n in array,\nsince the constraints seem to work on arrays (make two integer\narrays and reference them and it seems to work in my two minute test).\n\nThe question is whether or not we want to extend the spec in this way.\nIt would probably be easy to do, but it's definately an extension, since\nthe spec says that the two things should be equal, and I don't generally\nthink of element in array as equality. And, what do we do if neither\nthe in operator nor equals is defined between array and element?\n\n\n", "msg_date": "Thu, 10 Aug 2000 09:52:18 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "Kaare Rasmussen wrote:\n> > Well, the two types aren't the same (one is an integer the\n> > other an integer array,) so I wouldn't expect it to work. Note:\n>\n> Eh, I could figure that out myself. What I'm asking for is if there is a way to\n> combine arrays with foreign keys?\n>\n> I believe the answer for now is 'no', but did like to get it confirmed, and\n> also draw attention to this if someone wants to make it.\n>\n> > * Make sure that types used in foreign key constraints\n> > are comparable.\n>\n> And maybe\n> * Add foreign key constraint for arrays\n\n The major problem isn't that we do not have a comparision\n operator for int4 vs. _int4. 
The bigger one is that there is\n no easy way to build an index on them, and that there is no\n way to define what a referential action should really do in\n the case of cascaded operations.\n\n For a primary key containing an array, the values of all\n array elements of all rows must be unique and NOT NULL. So\n there must be a unique index on the elements, the array\n itself cannot be NULL, no element of the array can be NULL\n and there must be at least one element.\n\n And for a foreign key containing an array, what to do when ON\n DELETE CASCADE is requested? DELETE the FK row? Remove the\n element from the array? DELETE the row then when the array\n get's empty or not?\n\n Are these questions answered by the standard? If not, do we\n want to answer them ourself and take the risk the standard\n someday answers them different?\n\n For the meantime, I suggest normalize your schema if you want\n referential integrity.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Thu, 10 Aug 2000 16:16:46 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "Stephan Szabo wrote:\n\n> Actually, it would be:\n> * Change foreign key constraint for array -> element to mean element\n> in array,\n> since the constraints seem to work on arrays (make two integer\n> arrays and reference them and it seems to work in my two minute test).\n> \n> The question is whether or not we want to extend the spec in this way.\n> It would probably be easy to do, but it's definately an extension, since\n> the spec says that the two things should be equal, and I don't generally\n> think of element in array as equality. 
And, what do we do if neither\n> the in operator nor equals is defined between array and element?\n\nMaybe the syntax should be extended to support this concept. Thus\ninstead of having....\n\n\nCREATE TABLE table2 (\n pkey integer NOT NULL,\n arvar integer[],\n PRIMARY KEY (pkey),\n FOREIGN KEY (arvar) REFERENCES table1(fld1)\n);\n\nWe instead have....\n\nCREATE TABLE table2 (\n pkey integer NOT NULL,\n arvar integer[],\n PRIMARY KEY (pkey),\n FOREIGN KEY (arvar) REFERENCES table1(fld1[])\n);\n\nThe extra [] meaning that it references a member of fld1, but we don't\nknow which. That would leave strict equality intact, but still provide\nthis very useful extension.\n", "msg_date": "Fri, 11 Aug 2000 09:43:13 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "On Fri, 11 Aug 2000, Chris Bitmead wrote:\n\n> Stephan Szabo wrote:\n> \n> > Actually, it would be:\n> > * Change foreign key constraint for array -> element to mean element\n> > in array,\n> > since the constraints seem to work on arrays (make two integer\n> > arrays and reference them and it seems to work in my two minute test).\n> > \n> > The question is whether or not we want to extend the spec in this way.\n> > It would probably be easy to do, but it's definately an extension, since\n> > the spec says that the two things should be equal, and I don't generally\n> > think of element in array as equality. And, what do we do if neither\n> > the in operator nor equals is defined between array and element?\n> \n> Maybe the syntax should be extended to support this concept. 
Thus\n> instead of having....\n> \n> \n> CREATE TABLE table2 (\n> pkey integer NOT NULL,\n> arvar integer[],\n> PRIMARY KEY (pkey),\n> FOREIGN KEY (arvar) REFERENCES table1(fld1)\n> );\n> \n> We instead have....\n> \n> CREATE TABLE table2 (\n> pkey integer NOT NULL,\n> arvar integer[],\n> PRIMARY KEY (pkey),\n> FOREIGN KEY (arvar) REFERENCES table1(fld1[])\n> );\n> \n> The extra [] meaning that it references a member of fld1, but we don't\n> know which. That would leave strict equality intact, but still provide\n> this very useful extension.\n\nActually, it's the other way around right, arvar is the array, fld1 is\njust an integer, so I'd guess\nFOREIGN KEY (arvar[]) REFERENCES table1(fld1) \nwould be it.\n\nThere are the issues of the referential integrity actions. If I were\nto hazard a guess at the behavior one would expect from this, I'd guess...\n\nON UPDATE CASCADE - The particular referencing element changes.\nON UPDATE SET NULL - The particular referencing element is set null\nON UPDATE SET DEFAULT - For now the same as set null since i don't think\n array elements can default\nON UPDATE NO ACTION|RESTRICT - disallow changing of the value if there\n exists an array element reference\nON DELETE CASCADE - Remove referencing element, drop row if the array\n is emptied\nON DELETE ... 
- Pretty much as on update.\n\nBut (and this is a really big but) -- This is going to be slow as hell,\nand perhaps slower than that, since for any update or delete, you would\nhave to go through every row on the other table doing the array in until\nwe can get an index on all the elements in all of the arrays.\n\nThen there are other problematic issues like:\n{1,2,3} -> {1,3,4} -- Is this a delete of 2 and an insert of 4 or\n two updates?\n{1,2,3} -> {3,4,1} -- What about this one?\n\n---\nThis of course brings up, well, what about an element that wants to\nreference an array, or what about arrays that you want to say, this array\nmust be a subset of the referenced array, but we can get into that\nlater... :)\n\n", "msg_date": "Thu, 10 Aug 2000 18:03:16 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "Stephan Szabo wrote:\n\n> But (and this is a really big but) -- This is going to be slow as hell,\n> and perhaps slower than that, since for any update or delete, you would\n> have to go through every row on the other table doing the array in until\n> we can get an index on all the elements in all of the arrays.\n> \n> Then there are other problematic issues like:\n> {1,2,3} -> {1,3,4} -- Is this a delete of 2 and an insert of 4 or\n> two updates?\n> {1,2,3} -> {3,4,1} -- What about this one?\n\nProbably the only useful use of arrays in conjunction with referential\nintegrity is to treat the array as an unordered collection. \n\n{1,2,3} -> {1,3,4} -- Is a delete of 2 and an insert of 4.\n \n{1,2,3} -> {3,4,1} -- Is a delete of 2 and an insert of 4.\n\nFor that reason I'm not sure that it has to be slow. 
When an array is\nupdated find the elements that have changed (according to the above\ndefinition of changed) and only check on those ones.\n", "msg_date": "Fri, 11 Aug 2000 11:19:59 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "\nOn Fri, 11 Aug 2000, Chris Bitmead wrote:\n\n> Stephan Szabo wrote:\n> \n> > But (and this is a really big but) -- This is going to be slow as hell,\n> > and perhaps slower than that, since for any update or delete, you would\n> > have to go through every row on the other table doing the array in until\n> > we can get an index on all the elements in all of the arrays.\n> > \n> > Then there are other problematic issues like:\n> > {1,2,3} -> {1,3,4} -- Is this a delete of 2 and an insert of 4 or\n> > two updates?\n> > {1,2,3} -> {3,4,1} -- What about this one?\n> \n> Probably the only useful use of arrays in conjunction with referential\n> integrity is to treat the array as an unordered collection. \n> \n> {1,2,3} -> {1,3,4} -- Is a delete of 2 and an insert of 4.\n> \n> {1,2,3} -> {3,4,1} -- Is a delete of 2 and an insert of 4.\n>\n> For that reason I'm not sure that it has to be slow. When an array is\n> updated find the elements that have changed (according to the above\n> definition of changed) and only check on those ones.\n\nRemember, his structure was the array referenced the integer, not the\nother way around. So, if you say, delete one of the integers from the\nreferenced table you need to find any array element that referenced that\ninteger in all rows of the referencing table, that's the slow part.\n\n\n\n\n", "msg_date": "Thu, 10 Aug 2000 18:33:43 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "Stephan Szabo wrote:\n\n> Remember, his structure was the array referenced the integer, not the\n> other way around. 
So, if you say, delete one of the integers from the\n> referenced table you need to find any array element that referenced that\n> integer in all rows of the referencing table, that's the slow part.\n\nAh yes. I guess that's a problem crying out for a new indexing solution.\n", "msg_date": "Fri, 11 Aug 2000 11:42:59 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "On Fri, 11 Aug 2000, Chris Bitmead wrote:\n\n> Stephan Szabo wrote:\n> \n> > Remember, his structure was the array referenced the integer, not the\n> > other way around. So, if you say, delete one of the integers from the\n> > referenced table you need to find any array element that referenced that\n> > integer in all rows of the referencing table, that's the slow part.\n> \n> Ah yes. I guess that's a problem crying out for a new indexing solution.\n\nYeah, and it would probably need some associated cost estimation stuff,\nsince you'd need to know something about the element value rarity\ninstead of the array value rarity if you wanted to make intelligent guesses\nas to whether the index scan is better than the sequential scan.\n\nYou could kind of store the information in a secondary relation, but that\nseems like a major point of locking contention, plus it'd either end up\nbeing the reverse index (element->array of oids) or the normalized,\nelement->oid rows at which point are you better off than if you\nnormalized the original relation.\n\nDoes any version of SQL have meaningful arrays, and do they actually\nspecify any behavior for this? Or for that matter, what about other\ndbs. What do they do with these cases...\n\n\n", "msg_date": "Thu, 10 Aug 2000 19:02:13 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "Stephan Szabo wrote:\n\n> > Ah yes. 
I guess that's a problem crying out for a new indexing solution.\n> \n> Yeah, and it would probably need some associated cost estimation stuff,\n> since you'd need to know something about the element value rarity\n> instead of the array value rarity if you wanted to make intelligent guesses\n> as to whether the index scan is better than the sequential scan.\n\nYou could probably do some kind of quick hack with regular indexes, just\nhave more than one entry for each tuple when indexing arrays.\n\n> You could kind of store the information in a secondary relation, but that\n> seems like a major point of locking contention, plus it'd either end up\n> being the reverse index (element->array of oids) or the normalized,\n> element->oid rows at which point are you better off than if you\n> normalized the original relation.\n> \n> Does any version of SQL have meaningful arrays, and do they actually\n> specify any behavior for this? Or for that matter, what about other\n> dbs. What do they do with these cases...\n\nAll ODBMSes by necessity support arrays. I'm not aware of any attempt to\nindex them in this way or support referential integrity. It would\nprobably be a postgresql first.\n", "msg_date": "Fri, 11 Aug 2000 12:06:39 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "On Fri, 11 Aug 2000, Chris Bitmead wrote:\n\n> You could probably do some kind of quick hack with regular indexes, just\n> have more than one entry for each tuple when indexing arrays.\n\nMaybe, it depends on how the code is structured. 
Plus, it may mean\nchanges to the stuff that handles arrays as well, since you're not\nindexing the data value, but the set (actually, not a set i guess since\nthere's nothing preventing duplicates) that's there, so {1,2}->{1,3} means\nan index delete for the 2 and index insert for the 3.\n\n> > You could kind of store the information in a secondary relation, but that\n> > seems like a major point of locking contention, plus it'd either end up\n> > being the reverse index (element->array of oids) or the normalized,\n> > element->oid rows at which point are you better off than if you\n> > normalized the original relation.\n> > \n> > Does any version of SQL have meaningful arrays, and do they actually\n> > specify any behavior for this? Or for that matter, what about other\n> > dbs. What do they do with these cases...\n> \n> All ODBMSes by necessity support arrays. I'm not aware of any attempt to\n> index them in this way or support referential integrity. It would\n> probably be a postgresql first.\n\nWell, one of Jan's concerns was defining all of this behavior in a way\nthat was different from a current or reasonably likely spec (I'd guess he\nwas most concerned with SQL, but...). \n\nI think perhaps we're overreaching for the moment. The ri stuff isn't\neven completely finished for the cases that are specified by the SQL\nspecification, and there are still problems with what's there, so we\nshould probably get it working with an eye towards this possible\ndirection.\n\nAnd whatever is done should leave arrays with the same meaning they\ncurrently have for people who use them in other ways. I'm almost\nthinking that we want a set rather than an array here where sets have\ndifferent semantics that make more sense for this sort of behavior.\nIt just seems to make more sense to me that a set would be indexed\nby its elements than array, since position is supposed to be meaningful\nfor arrays, and that set(1,2) is equal to the set(2,1) whereas the\nindexes aren't. 
Of course, I guess that's not much different from\nthe normalized table case.\n\n", "msg_date": "Thu, 10 Aug 2000 19:35:40 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "Stephan Szabo wrote:\n\n> And whatever is done should leave arrays with the same meaning they\n> currently have for people who use them in other ways. I'm almost\n> thinking that we want a set rather than an array here where sets have\n> different semantics that make more sense for this sort of behavior.\n> It just seems to make more sense to me that a set would be indexed\n> by its elements than array, since position is supposed to be meaningful\n> for arrays, and that set(1,2) is equal to the set(2,1) whereas the\n> indexes aren't. Of course, I guess that's not much different from\n> the normalized table case.\n\nProbably a collection rather than a set. No sense in excluding\nduplicates.\n\nWhat often happens in an ODBMS is that some general purpose collection\nclasses are written based on arrays. A simple example would be...\n\nclass Set<type> {\n RefArray<type> array;\n}\n\nWhere RefArray<Object> gets mapped to something like oid[] in the odbms.\nThen when you want a class that has a set..\n\nclass Person {\n Set<Car> owns;\n}\n\nwhich gets flattened and mapped to\ncreate table Person (owns oid[]);\n\nThe set semantics being enforced by the language bindings.\n", "msg_date": "Fri, 11 Aug 2000 14:39:11 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "On Fri, 11 Aug 2000, Chris Bitmead wrote:\n\n> Stephan Szabo wrote:\n> \n> > And whatever is done should leave arrays with the same meaning they\n> > currently have for people who use them in other ways. 
I'm almost\n> > thinking that we want a set rather than an array here where sets have\n> > different semantics that make more sense for this sort of behavior.\n> > It just seems to make more sense to me that a set would be indexed\n> > by its elements than array, since position is supposed to be meaningful\n> > for arrays, and that set(1,2) is equal to the set(2,1) whereas the\n> > indexes aren't. Of course, I guess that's not much different from\n> > the normalized table case.\n> \n> Probably a collection rather than a set. No sense in excluding\n> duplicates.\n\nProbably not, at least for the referencing thing anyway. (To do this\nto a referenced object would require that the values in all elements\nof all the sets be unique, not just within one since the spec we're\ngoing with assumes unique key values.)\n \n> What often happens in an ODBMS is that some general purpose collection\n> classes are written based on arrays. A simple example would be...\n> \n> class Set<type> {\n> RefArray<type> array;\n> }\n> \n> Where RefArray<Object> gets mapped to something like oid[] in the odbms.\n> Then when you want a class that has a set..\n> \n> class Person {\n> Set<Car> owns;\n> }\n> \n> which gets flattened and mapped to\n> create table Person (owns oid[]);\n> \n> The set semantics being enforced by the language bindings.\n\nRight, but doing something like this ri stuff would require some\ncollection semantics being enforced by the database, since we'd\nbe treating this array as a set in some cases, even if it wasn't\na set. It might not matter so much for this case, but let's say\nthat at some point someone wanted to extend general purpose triggers\nin some similar fashion. 
Then it would become important whether\nsomething was a delete or update, and treating an array as a set\nin that case would be a bad idea.\n\n\n", "msg_date": "Thu, 10 Aug 2000 23:24:16 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "> > Well, the two types aren't the same (one is an integer the\n> > other an integer array,) so I wouldn't expect it to work. Note: \n> \n> Eh, I could figure that out myself. What I'm asking for is if there is a way to\n> combine arrays with foreign keys?\n> \n> I believe the answer for now is 'no', but did like to get it confirmed, and\n> also draw attention to this if someone wants to make it.\n> \n> > * Make sure that types used in foreign key constraints\n> > are comparable.\n> \n> And maybe \n> * Add foreign key constraint for arrays\n\nAdded to TODO.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 11 Oct 2000 23:04:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "> > And maybe \n> > * Add foreign key constraint for arrays\n> \n> Actually, it would be:\n> * Change foreign key constraint for array -> element to mean element\n> in array,\n\nTODO updated.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 11 Oct 2000 23:05:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "There is some stuff which last time I looked is in contrib that allows\nqueries to test if something is in an array. Something vaguely like\nSELECT * from part, box where IN(part.num, box.array).\n\nHaving this integrated in the foreign key stuff would certainly be\nimportant for object databases, which by definition use these kinds of\narrays.\n\nBruce Momjian wrote:\n> \n> > > Well, the two types aren't the same (one is an integer the\n> > > other an integer array,) so I wouldn't expect it to work. Note:\n> >\n> > Eh, I could figure that out myself. What I'm asking for is if there is a way to\n> > combine arrays with foreign keys?\n> >\n> > I believe the answer for now is 'no', but did like to get it confirmed, and\n> > also draw attention to this if someone wants to make it.\n> >\n> > > * Make sure that types used in foreign key constraints\n> > > are comparable.\n> >\n> > And maybe\n> > * Add foreign key constraint for arrays\n> \n> Added to TODO.\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 13 Oct 2000 00:56:58 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" }, { "msg_contents": "\nI think that was the agreement on the best way to do it (although the\noperator is even easier looking, just replace = with whatever the operator\nis.). This would mean moving the array code from contrib into the real\nsource tree probably though, or having the foreign key stuff figure out if\nyou had it installed and use it only in those cases. 
\n\nOn Fri, 13 Oct 2000, Chris wrote:\n\n> There is some stuff which last time I looked is in contrib that allows\n> queries to test if something is in an array. Something vaguely like\n> SELECT * from part, box where IN(part.num, box.array).\n> \n> Having this integrated in the foreign key stuff would certainly be\n> important for object databases, which by definition use these kinds of\n> arrays.\n> \n> Bruce Momjian wrote:\n> > \n> > > > Well, the two types aren't the same (one is an integer the\n> > > > other an integer array,) so I wouldn't expect it to work. Note:\n> > >\n> > > Eh, I could figure that out myself. What I'm asking for is if there is a way to\n> > > combine arrays with foreign keys?\n> > >\n> > > I believe the answer for now is 'no', but did like to get it confirmed, and\n> > > also draw attention to this if someone wants to make it.\n> > >\n> > > > * Make sure that types used in foreign key constraints\n> > > > are comparable.\n> > >\n> > > And maybe\n> > > * Add foreign key constraint for arrays\n> > \n> > Added to TODO.\n\n", "msg_date": "Thu, 12 Oct 2000 09:53:18 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and foreign keys" } ]
[ { "msg_contents": "Hi,\n\nI think I have found a bug in the handling of 'deferred\nconstraints'. I have attached a small SQL script that reproduces the\nerror.\n\nThe first transaction succeeds, but the second one fails with\n'psql:bug.sql:58: ERROR: <unnamed> referential integrity violation -\nkey in p still referenced from c'\n\nIsn't it supposed to succeed just like the first one ??\n\n-- \nBest regards,\nDavid Jack Olrik <[email protected]> http://david.olrik.dk\nGnuPG key C290 0A4A 0CCC CBA8 2B37 E18D 01D2 F6EF 2E61 9894\n[ GNU Software: 'The source will be with you ... Always!' ]", "msg_date": "Wed, 9 Aug 2000 14:54:10 +0200", "msg_from": "David Jack Olrik <[email protected]>", "msg_from_op": true, "msg_subject": "Possible bug in 'set constraints all deferred';" }, { "msg_contents": "David Jack Olrik wrote:\n> Hi,\n>\n> I think I have found a bug in the handling of 'deferred\n> constraints'. I have attached a small SQL script that reproduces the\n> error.\n>\n> The first transaction succeeds, but the second one fails with\n> 'psql:bug.sql:58: ERROR: <unnamed> referential integrity violation -\n> key in p still referenced from c'\n>\n> Isn't it supposed to succeed just like the first one ??\n\n[Attachment, skipping...]\n\n WRT the primary key constraint, this should be considered a\n \"triggered data change violation\", and the system should\n already complain at the INSERT attempts for p.\n\n We don't track it that precisely. So the error message isn't\n the right one and it happens too late. But the overall\n transactional behaviour is correct.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n\n\n", "msg_date": "Wed, 9 Aug 2000 14:20:45 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible bug in 'set constraints all deferred';" } ]
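The SQL script attached to the original report was dropped by the archive ("[Attachment, skipping...]"). A hypothetical reconstruction of the kind of script described — the table names p and c come from the quoted error message, but the exact statements are guessed:

```sql
-- Guessed reconstruction of the lost bug.sql attachment.
CREATE TABLE p (id integer PRIMARY KEY);
CREATE TABLE c (pid integer REFERENCES p (id) DEFERRABLE);

-- First transaction: child row inserted before its parent;
-- legal because the check is deferred to COMMIT.
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
INSERT INTO c VALUES (1);
INSERT INTO p VALUES (1);
COMMIT;                     -- succeeds

-- Second transaction: parent deleted and re-inserted before COMMIT.
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
DELETE FROM p WHERE id = 1; -- still referenced by c
INSERT INTO p VALUES (1);
COMMIT;                     -- 7.0 raised the RI violation here
```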
[ { "msg_contents": "I would like to build Postgres from the srpm, but am unsure how to\nenable sending logging information to syslog. How would I do this in\nthe specfile?\n\n -Mike\n\n", "msg_date": "Wed, 9 Aug 2000 09:14:05 -0400", "msg_from": "\"Michael Mayo\" <[email protected]>", "msg_from_op": true, "msg_subject": "Activating USE_SYSLOG from srpm?" }, { "msg_contents": "----- Original Message -----\nFrom: Michael Mayo <[email protected]>\n\n\n> I would like to build Postgres from the srpm, but am unsure how to\n> enable sending logging information to syslog. How would I do this\nin\n> the specfile?\n>\n> -Mike\n\n Oops, nevermind...it's obvious; I should have RTFM'ed in the first\nplace. Sorry. =)\n\n -Mike\n\n", "msg_date": "Wed, 9 Aug 2000 10:46:56 -0400", "msg_from": "\"Michael Mayo\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Activating USE_SYSLOG from srpm?" } ]
[ { "msg_contents": "Hi,\n\nI tried to implement fulltext search using a linguistic approach,\nfor example, using ispell like udmsearch does. We also save the position\ninformation of each lexeme in the document to calculate relevancy\n(it's a C function using the SPI interface). We're still testing different\nstrategies but found several problems with the optimizer; just look at the plan - \nvery strange numbers and no indices used (I did run vacuum analyze)\n\nexplain\nselect\n txt.tid\nfrom\n txt, txt_lexem1 tl1_0, txt_lexem11 tl11_0\nwhere\ntl1_0.lid =17700\nOR\ntl11_0.lid =172751\n;\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=0.00..16275952180.00 rows=512819420786 width=12)\n -> Nested Loop (cost=0.00..891369556.42 rows=512819421 width=8)\n -> Seq Scan on txt_lexem11 tl11_0 (cost=0.00..2596.92 rows=132292 width=4)\n -> Seq Scan on txt_lexem1 tl1_0 (cost=0.00..3815.95 rows=194795 width=4)\n -> Seq Scan on txt (cost=0.00..20.00 rows=1000 width=4)\n\nEXPLAIN\n\nfulltext=# \\d txt\n Table \"txt\"\n Attribute | Type | Modifier \n-----------+---------+----------\n tid | integer | not null\nIndex: txt_pkey\n\ntables txt_lexemX look like:\n\nfulltext=# \\d txt_lexem1\n Table \"txt_lexem1\"\n Attribute | Type | Modifier \n-----------+-----------+----------\n tid | integer | not null\n lid | integer | not null\n did | integer | not null\n count | integer | not null\n pos | integer[] | not null\nIndex: txt_lexem1_key\n\nWe have rewritten it using EXISTS and the plan looks better!\n\nselect\n txt.tid\nfrom\n txt\nwhere\nEXISTS ( select tid from txt_lexem1 tl1_0 where tl1_0.lid=17700 and tl1_0.did=0\nand txt.tid=tl1_0.tid )\nOR\nEXISTS ( select tid from txt_lexem11 tl11_0 where tl11_0.lid=172751 and \ntl11_0.did=0 and txt.tid=tl11_0.tid )\n;\n\nNOTICE: QUERY PLAN:\n\nSeq Scan on txt (cost=0.00..7416.48 rows=1000 width=4)\n SubPlan\n -> Index Scan using txt_lexem1_key on txt_lexem1 tl1_0 (cost=0.00..3.95 rows=1 width=4)\n -> Index Scan using txt_lexem11_key on txt_lexem11 tl11_0 (cost=0.00..3.45 rows=1 
width=4)\n\nEXPLAIN\n\nI've tested on plain 7.0.2 and the CVS version.\nI recall there was an old problem with OR. Does the optimizer still have\nthat problem?\n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 9 Aug 2000 18:31:16 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "VERY strange query plan (LONG)" } ]
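Besides the EXISTS form shown in this thread, a common workaround for OR clauses that defeat planners of this vintage is to split the OR into a UNION, so each arm carries its own join restriction and can use its own index. A sketch against the schema quoted above:

```sql
-- Each branch restricts its own lexeme table; UNION removes
-- duplicate tids, mimicking the intended OR semantics.
SELECT txt.tid
FROM txt, txt_lexem1 tl1_0
WHERE tl1_0.lid = 17700 AND tl1_0.did = 0 AND txt.tid = tl1_0.tid
UNION
SELECT txt.tid
FROM txt, txt_lexem11 tl11_0
WHERE tl11_0.lid = 172751 AND tl11_0.did = 0 AND txt.tid = tl11_0.tid;
```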
[ { "msg_contents": "Hi -\n\tSomeone on -general suggested I bring this up here. I'll try and\nexplain as much as I can. If you need more information from me, please\nlet me know. I think the easiest way to illustrate this is to just paste\nin the output. This is all happening on FreeBSD 3.4 running 7.0.2.\n\n--------------------------------------------------------------------------\ndevloki=> CREATE TABLE test (field VARCHAR(10));\nCREATE\ndevloki=> \\d test \n Table \"test\"\n Attribute | Type | Modifier \n-----------+-------------+----------\n field | varchar(10) | \n\ndevloki=> INSERT INTO test VALUES ('test string');\nINSERT 110505 1\ndevloki=> SELECT field FROM test;\n field \n------------\n test strin\n(1 row)\n\ndevloki=> SELECT UPPER(field) FROM test;\n upper \n------------\n TEST STRIN\n(1 row)\n\ndevloki=> CREATE INDEX test_idx ON test (field);\nCREATE\ndevloki=> CREATE INDEX test_upper_idx ON test (UPPER(field));\nERROR: DefineIndex: function 'upper(varchar)' does not exist\n--------------------------------------------------------------------------\n\nIs there any other information I can provide? Should I send this on to\n-bugs?\n\nThanks,\n\n-philip\n\n", "msg_date": "Wed, 9 Aug 2000 20:54:25 -0700 (PDT)", "msg_from": "Philip Hallstrom <[email protected]>", "msg_from_op": true, "msg_subject": "CREATE INDEX test_idx ON test (UPPER(varchar_field)) doesn't work..." }, { "msg_contents": "On Wed, 9 Aug 2000, Philip Hallstrom wrote:\n\n> devloki=> SELECT UPPER(field) FROM test;\n> upper \n> ------------\n> TEST STRIN\n> (1 row)\n> \n> devloki=> CREATE INDEX test_idx ON test (field);\n> CREATE\n> devloki=> CREATE INDEX test_upper_idx ON test (UPPER(field));\n> ERROR: DefineIndex: function 'upper(varchar)' does not exist\n> --------------------------------------------------------------------------\n> \n> Is there any other information I can provide? 
Should I send this on to\n> -bugs?\n\nI think the reason for this is that the function is\nupper(text) returns text. The select is willing to \ndo the type conversion for you but the index creation \nis not.\n\nI'm not 100% sure it's a good idea, but IIRC text and\nvarchar are binary compatible. You probably could\nget away with adding an entry in pg_proc for\nupper(varchar) returns varchar using the same function\nby adding a new row with only the prorettype and proargtypes \nchanged.\n\n\n", "msg_date": "Wed, 9 Aug 2000 21:49:34 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE INDEX test_idx ON test (UPPER(varchar_field))\n\tdoesn't work..." }, { "msg_contents": "Philip Hallstrom <[email protected]> writes:\n> devloki=> CREATE INDEX test_upper_idx ON test (UPPER(field));\n> ERROR: DefineIndex: function 'upper(varchar)' does not exist\n\nThis is a known bug. There is indeed no upper(varchar) function\ndeclared in pg_proc, but the parser knows that varchar is \"binary\nequivalent\" to type text, so when you ask for upper(varchar) in\nmost contexts it will silently substitute upper(text) instead.\nThe bug is that CREATE INDEX does not provide the same leeway;\nit wants to find an exact type-signature match. 
It should accept\nfunctions that are binary-compatible with the type being indexed.\n\nThis is on the to-do list and might make a good first backend-hacking\nproject, if anyone is annoyed enough by it to work on it before the\ncore developers get 'round to it.\n\nBTW, I did just read over the discussion in pg-general (was out of town\nso couldn't answer sooner) and I believe you could have made your\nfunction work safely if it read\n\n\tCREATE FUNCTION upper(VARCHAR) RETURNS TEXT AS '\n\t...\n\tRETURN UPPER($1::text);\n\t...\n\nAs you wrote it it's an infinite recursion, because as soon as you\nprovide a function upper(varchar), that will be selected in preference\nto upper(text) for any varchar input value --- so \"RETURN UPPER($1)\" is\na self-reference. But with the type coercion you should get a call to\nthe built-in upper(text) instead.\n\nA faster way is the one someone else suggested: just create another row\nin pg_proc that declares upper(varchar) as an alias for the built-in\nupper(text). For example,\nCREATE FUNCTION upper(VARCHAR) RETURNS TEXT AS 'upper' LANGUAGE 'internal';\n\n(You have to first look in pg_proc to confirm that the internal function\nis in fact named 'upper' at the C level --- look at the 'prosrc' field.)\n\nThe infinite recursion should not have \"locked up\" your machine; if it\ndid I'd say that's a bad weakness in FreeBSD. What I see on HPUX is a\ncoredump due to stack limit overrun within a second or two of invoking\nan infinitely-recursive function. Performance of other processes\ndoesn't seem to be hurt materially... although HPUX does take an\nunreasonably long time to actually execute a coredump of a process\nthat's grown to a large size...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Aug 2000 10:38:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE INDEX test_idx ON test (UPPER(varchar_field)) doesn't\n\twork..." } ]
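For reference, later PostgreSQL releases lifted the limitation discussed in this thread: from 7.4 on, CREATE INDEX accepts arbitrary expressions and resolves binary-compatible argument types the same way ordinary expressions do, so no pg_proc alias is needed:

```sql
-- Accepted directly in PostgreSQL 7.4 and later:
CREATE INDEX test_upper_idx ON test (upper(field));

-- A query must repeat the indexed expression to be able to use it:
SELECT field FROM test WHERE upper(field) = 'TEST STRIN';
```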
[ { "msg_contents": "\n\n> very strange numbers and no indices used) (I did run vacuume analyze)\n> \n> explain\n> select\n> txt.tid\n> from\n> txt, txt_lexem1 tl1_0, txt_lexem11 tl11_0\n> where\n> tl1_0.lid =17700\n> OR\n> tl11_0.lid =172751\n> ;\n> NOTICE: QUERY PLAN:\n\nDid you forget to join the tids together, and the did=0 restrictions ?\n\nYour statement looks very strange (cartesian product), and has nothing in \ncommon with the subselect statements you quoted.\n\nAndreas\n", "msg_date": "Thu, 10 Aug 2000 10:14:42 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: VERY strange query plan (LONG)" }, { "msg_contents": "On Thu, 10 Aug 2000, Zeugswetter Andreas SB wrote:\n\n> Date: Thu, 10 Aug 2000 10:14:42 +0200\n> From: Zeugswetter Andreas SB <[email protected]>\n> To: 'Oleg Bartunov' <[email protected]>\n> Cc: \"'[email protected]'\" <[email protected]>\n> Subject: AW: [HACKERS] VERY strange query plan (LONG)\n> \n> \n> \n> > very strange numbers and no indices used) (I did run vacuume analyze)\n> > \n> > explain\n> > select\n> > txt.tid\n> > from\n> > txt, txt_lexem1 tl1_0, txt_lexem11 tl11_0\n> > where\n> > tl1_0.lid =17700\n> > OR\n> > tl11_0.lid =172751\n> > ;\n> > NOTICE: QUERY PLAN:\n> \n> Did you forget to join the tids together, and the did=0 restrictions ?\n> \n> Your statement looks very strange (cartesian product), and has nothing in \n> common with the subselect statements you quoted.\n\nYou're right, I simplified original query just to show plans.\nHere is original query:\nexplain\nselect\n txt.tid,\n tl1_0.count, tl1_0.pos[1] as pos\nfrom\n txt, txt_lexem1 tl1_0, txt_lexem11 tl11_0\nwhere\n (\n( tl1_0.lid in (17700) and tl1_0.did=0 and txt.tid=tl1_0.tid )\nOR\n( tl11_0.lid in (172751) and tl11_0.did=0 and txt.tid=tl11_0.tid ))\n\norder by count desc, pos asc;\n\nand plan:\n\nNOTICE: QUERY PLAN:\n\nSort (cost=1278139131.36..1278139131.36 rows=1 width=44)\n -> Nested Loop 
(cost=0.00..1278139131.35 rows=1 width=44)\n -> Nested Loop (cost=0.00..1277916858.52 rows=4041 width=40)\n -> Seq Scan on txt_lexem11 tl11_0 (cost=0.00..2596.92 rows=132292 width=12)\n -> Seq Scan on txt_lexem1 tl1_0 (cost=0.00..3815.95 rows=194795 width=28)\n -> Seq Scan on txt (cost=0.00..20.00 rows=1000 width=4)\n\nEXPLAIN\n\nInteresting that the plan for AND looks realistic (and uses indices):\nexplain\nselect\n txt.tid,\n tl1_0.count, tl1_0.pos[1] as pos\nfrom\n txt, txt_lexem1 tl1_0, txt_lexem11 tl11_0\nwhere\n (\n( tl1_0.lid in (17700) and tl1_0.did=0 and txt.tid=tl1_0.tid )\nAND\n( tl11_0.lid in (172751) and tl11_0.did=0 and txt.tid=tl11_0.tid ))\n\norder by count desc, pos asc;\nNOTICE: QUERY PLAN:\n\nSort (cost=109.05..109.05 rows=1 width=28)\n -> Nested Loop (cost=0.00..109.04 rows=1 width=28)\n -> Nested Loop (cost=0.00..87.69 rows=3 width=24)\n -> Index Scan using txt_lexem11_key on txt_lexem11 tl11_0 (cost=0.00..35.23 rows=13 width=4)\n -> Index Scan using txt_lexem1_key on txt_lexem1 tl1_0 (cost=0.00..3.95 rows=1 width=20)\n -> Index Scan using txt_pkey on txt (cost=0.00..8.14 rows=10 width=4)\n\nEXPLAIN\n\n\nWe could live with fulltext search using only AND, but the very strange\nplan for OR worries me.\n\n\n\tRegards,\n\n\t\tOleg\n\n> \n> Andreas\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Thu, 10 Aug 2000 11:41:13 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: VERY strange query plan (LONG)" } ]
[ { "msg_contents": "\n> > Your statement looks very strange (cartesian product), and \n> has nothing in \n> > common with the subselect statements you quoted.\n> \n> You're right, I simplified original query just to show plans.\n> Here is original query:\n> explain\n> select\n> txt.tid,\n> tl1_0.count, tl1_0.pos[1] as pos\n> from\n> txt, txt_lexem1 tl1_0, txt_lexem11 tl11_0\n> where\n> (\n> ( tl1_0.lid in (17700) and tl1_0.did=0 and txt.tid=tl1_0.tid )\n> OR\n> ( tl11_0.lid in (172751) and tl11_0.did=0 and txt.tid=tl11_0.tid ))\n> \n> order by count desc, pos asc;\n\nThat still does not lead to the same result as your subselect.\nLooks like the subselect is really what you want in the first place.\nThe problem with above is that for the two or'ed clauses there\nis no restriction for the respective 3rd table, thus still producing\na cartesian product (the and'ed clauses wont produce that,\nthus correct plan for and).\n\nA little better, but still not same result would be:\nwhere\ntxt.tid=tl1_0.tid and txt.tid=tl11_0.tid and\n(( tl1_0.lid in (17700) and tl1_0.did=0)\nOR\n( tl11_0.lid in (172751) and tl11_0.did=0))\n\nAndreas\n", "msg_date": "Thu, 10 Aug 2000 11:15:29 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: VERY strange query plan (LONG)" } ]
[ { "msg_contents": "Hello,\n\nSuddenly I am getting errors with the following function:\n\n SELECT incr(max_price($1),0.05)\n\n 000810.17:20:41.181 [2246] ERROR: Bad float8 input format '0.05'\n 000810.17:20:41.181 [2246] AbortCurrentTransaction\n\nWhere incr() is defined as:\n\n CREATE FUNCTION \"incr\" (float8,float8 ) RETURNS float8 AS '\n SELECT CASE WHEN $1 < dpow(10,int8(log($1))+1)/2 \n THEN (dpow(10,int8(log($1)))) * $2 \n ELSE (dpow(10,int8(log($1))+1)/2) * $2 \n END\n ' LANGUAGE 'SQL';\n\nStrangely enough the function call works fine when called from psql but\nfails (but not always!) from a C trigger.\n\nThanks in advance for any help,\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.org\n\n \"Kill a man, and you are an assassin. Kill millions of men, and you\n are a conqueror. Kill everyone, and you are a god.\" -- Jean Rostand\n", "msg_date": "Thu, 10 Aug 2000 17:26:46 +0200", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "problem with float8 input format" }, { "msg_contents": "Louis-David Mitterrand <[email protected]> writes:\n> Strangely enough the function call works fine when called from psql but\n> fails (but not always!) from a C trigger.\n\nMay we see the C trigger? I'm suspicious it's doing something wrong...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Aug 2000 11:42:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with float8 input format " }, { "msg_contents": "On Fri, Aug 11, 2000 at 11:42:06AM -0400, Tom Lane wrote:\n> Louis-David Mitterrand <[email protected]> writes:\n> > Strangely enough the function call works fine when called from psql but\n> > fails (but not always!) from a C trigger.\n> \n> May we see the C trigger? I'm suspicious it's doing something wrong...\n> \n\nPlease find the trigger attached to this message as well as the .sql\nfile containing the full DB schema including the functions. 
Here is a\ntypical log entry of the error:\n\n 000811.18:02:03.555 [1673] query: \n SELECT incr(max_price($1),0.05)\n\n 000811.18:02:03.556 [1673] ERROR: Bad float8 input format '0.05'\n\nThanks for your help,\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.org\n\n Slight disorientation after prolonged system\n uptime is normal for new Linux users. Please do\n not adjust your browser.", "msg_date": "Fri, 11 Aug 2000 22:07:39 +0200", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problem with float8 input format" }, { "msg_contents": "Louis-David Mitterrand <[email protected]> writes:\n>> May we see the C trigger? I'm suspicious it's doing something wrong...\n\n> Please find the trigger attached to this message\n\nAlthough I don't see an obvious connection to the error message you are\ngetting, I am suspicious that the problem happens because you are\nexpecting CurrentTriggerData to stay valid throughout the execution of\nyour trigger --- through executions of sub-queries, in fact.\n\nCurrentTriggerData is a global and should be considered extremely\nvolatile, because it will get changed if any other trigger is fired\nby the sub-query, and may get zeroed anyway if certain paths through\nthe function manager get taken.\n\nI recommend this coding pattern for user-defined triggers:\n\n1. Copy CurrentTriggerData into a local variable, say\n\tTriggerData *trigdata;\n*immediately* upon entry to your trigger function, and then reset\nCurrentTriggerData = NULL before doing anything else.\n\n2. 
Subsequently, use \"trigdata\" not CurrentTriggerData.\n\nAside from not causing problems for recursive trigger calls, this\napproach will also be a lot easier to convert to 7.1 code --- a word\nto the wise eh?\n\nIf you still see flaky behavior after making this change, please let me\nknow and I'll probe more deeply.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Aug 2000 20:35:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with float8 input format " }, { "msg_contents": "On Fri, Aug 11, 2000 at 08:35:03PM -0400, Tom Lane wrote:\n> Although I don't see an obvious connection to the error message you are\n> getting, I am suspicious that the problem happens because you are\n> expecting CurrentTriggerData to stay valid throughout the execution of\n> your trigger --- through executions of sub-queries, in fact.\n> \n> CurrentTriggerData is a global and should be considered extremely\n> volatile, because it will get changed if any other trigger is fired\n> by the sub-query, and may get zeroed anyway if certain paths through\n> the function manager get taken.\n> \n> I recommend this coding pattern for user-defined triggers:\n> \n> 1. Copy CurrentTriggerData into a local variable, say\n> \tTriggerData *trigdata;\n> *immediately* upon entry to your trigger function, and then reset\n> CurrentTriggerData = NULL before doing anything else.\n\nI did just that and the error keeps happening.\n\n\nOn an unrelated matter I have this expression in the trigger:\n\n int stop_date = DatumGetInt32(SPI_getbinval(\n SPI_tuptable->vals[0],\n SPI_tuptable->tupdesc,\n SPI_fnumber(SPI_tuptable->tupdesc,\"date_part\"),\n &isnull));\n\nwhere \"date_part\" comes from \"date_part('epoch', stopdate)\" in a\nprevious query. 
The problem is the value of stop_date is not the number\nof seconds since the epoch but some internal representation of the data.\nSo I can't compare stop_date with the output of\nGetCurrentAbsoluteTime().\n\nWhat function should I use to convert the Datum to a C int?\nDatumGetInt32 doesn't seem to work here.\n\nAnd what is the method for float8 Datum conversion to C double? I\ncouldn't find any clearcut examples in the trigger examples.\n\nThanks in advance,\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.org\n\nConscience is what hurts when everything else feels so good.\n", "msg_date": "Sat, 12 Aug 2000 11:48:13 +0200", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problem with float8 input format" }, { "msg_contents": "On Fri, Aug 11, 2000 at 10:07:39PM +0200, Louis-David Mitterrand wrote:\n> On Fri, Aug 11, 2000 at 11:42:06AM -0400, Tom Lane wrote:\n> > Louis-David Mitterrand <[email protected]> writes:\n> > > Strangely engough the function call works fine when called from psql but\n> > > fails (but not always!) from a C trigger.\n> > \n> > May we see the C trigger? I'm suspicious it's doing something wrong...\n> > \n> \n> Please find the trigger attached to this message as well as the .sql\n> file containing the full DB schema including the functions. 
Here is a\n> typicall log entry of the error:\n\nFinally I found the problem:\n\n bindtextdomain(\"apartia_com\", \"/usr/local/auction/locale\");\n textdomain(\"apartia_com\");\n setlocale(LC_ALL, seller_locale);\n\nWhen \"seller_locale\" is, for instance, \"de_DE\", then I get theses\nerrors:\n\n ERROR: Bad float8 input format '0.05'\n\nIs Postgres expecting the float as 0,05 (notice the comma) because of\nthe locale?\n\nWhen \"seller_locale\" is \"en_US\" all is well.\n\n(C trigger is attached)\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.org\n\n \"When I give food to the poor I am called a saint, when I ask why\n they go hungry I am called a communist\"\n --Bishop Helder Camara", "msg_date": "Sat, 12 Aug 2000 16:57:17 +0200", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "solution! (was: Re: problem with float8 input format)" }, { "msg_contents": "Louis-David Mitterrand <[email protected]> writes:\n> When \"seller_locale\" is, for instance, \"de_DE\", then I get theses\n> errors:\n> ERROR: Bad float8 input format '0.05'\n> Is Postgres expecting the float as 0,05 (notice the comma) because of\n> the locale?\n\nI'm sure that's the issue. If you look at the source of the message\n(float8in() in src/backend/utils/adt/float.c) you'll see that it's\njust relying on strtod() to parse the input. If your local strtod() is\nlocale-sensitive then the expected input format changes accordingly.\nNot sure whether that's a feature or a bug, but it's how Postgres\nhas always worked.\n\nIMPORTANT: changing the backend's locale on-the-fly is an EXTREMELY\nDANGEROUS thing to do, and I strongly recommend that you find another\nway to solve your problem. 
Running with a different locale changes the\nexpected sort order for indices, which means that your indices will\nbecome corrupted as items get inserted out of order compared to other\nitems (for one definition of \"order\" or the other), leading to failure\nto find items that should be found in later searches.\n\nGiven that your trigger has been exiting with the changed locale still\nin force, I'm surprised your DB is still functional at all (perhaps you\nhave no indexes on textual columns?). But it'd be extremely dangerous\neven if you were to restore the old setting before exit --- what happens\nif there's an elog(ERROR) before you can restore?\n\nAt present, the only safe way to handle locale is to set it in the\npostmaster's environment, never in individual backends. What's more,\nyou'd better be careful that the postmaster is always started with the\nsame locale setting for a given database. You can find instances of\npeople being burnt by this sort of problem in the archives :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 Aug 2000 12:15:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: solution! (was: Re: problem with float8 input format) " }, { "msg_contents": "Louis-David Mitterrand <[email protected]> writes:\n> where \"date_part\" comes from \"date_part('epoch', stopdate)\" in a\n> previous query. The problem is the value of stop_date is not the number\n> of seconds since the epoch but some internal representation of the data.\n> So I can't compare stop_date with the output of\n> GetCurrentAbsoluteTime().\n\nGetCurrentAbsoluteTime yields an \"abstime\", so you should coerce the\n\"timestamp\" result of date_part() to abstime and then you will get a\nvalue you can compare directly.\n\n> What function should I use to convert the Datum to a C int?\n> DatumGetInt32 doesn't seem to work here.\n\nNo, because timestamps are really floats. 
(abstime is an int though.)\n\n> And what is the method for float8 Datum conversion to C double?\n\n\tdouble x = * DatumGetFloat64(datum);\n\nThis is pretty grotty because it exposes the fact that float8 datums\nare pass-by-reference (ie, pointers). 7.1 will let you write\n\n\tdouble x = DatumGetFloat8(datum);\n\nwhich is much cleaner. (I am planning that on 64-bit machines it will\nsomeday be possible for float8 and int64 to be pass-by-value, so it's\nimportant to phase out explicit knowledge of the representation in user\nfunctions.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 Aug 2000 12:41:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with float8 input format " }, { "msg_contents": "On Sat, Aug 12, 2000 at 12:15:26PM -0400, Tom Lane wrote:\n> Louis-David Mitterrand <[email protected]> writes:\n> > When \"seller_locale\" is, for instance, \"de_DE\", then I get theses\n> > errors:\n> > ERROR: Bad float8 input format '0.05'\n> > Is Postgres expecting the float as 0,05 (notice the comma) because of\n> > the locale?\n> \n> I'm sure that's the issue. If you look at the source of the message\n> (float8in() in src/backend/utils/adt/float.c) you'll see that it's\n> just relying on strtod() to parse the input. If your local strtod() is\n> locale-sensitive then the expected input format changes accordingly.\n> Not sure whether that's a feature or a bug, but it's how Postgres\n> has always worked.\n\nSo using \"setlocale(LC_MESSAGES, seller_locale)\" instead of \"LC_ALL\"\nshould be safe? It doesn't touch numeric formatting.\n\n> IMPORTANT: changing the backend's locale on-the-fly is an EXTREMELY\n> DANGEROUS thing to do, and I strongly recommend that you find another\n> way to solve your problem. \n\nThe \"problem\" I am trying to solve is to send e-mail notifications to\nauction bidders in their own language with the proper number formatting,\netc. 
From what you are saying I'll probably have to move these\nnotifications to the mod_perl layer of the application. Too bad... not\nbeing a C programmer it took me a while to be able to send mail from the\ntrigger. Oh well.\n\n> Running with a different locale changes the expected sort order for\n> indices, which means that your indices will become corrupted as items\n> get inserted out of order compared to other items (for one definition\n> of \"order\" or the other), leading to failure to find items that should\n> be found in later searches.\n\nYou mean the indices change because accented characters can come into\nplay w.r.t the sort order?\n\n> Given that your trigger has been exiting with the changed locale still\n> in force, I'm surprised your DB is still functional at all (perhaps\n> you have no indexes on textual columns?). \n\nRight, not yet.\n\n> But it'd be extremely dangerous even if you were to restore the old\n> setting before exit --- what happens if there's an elog(ERROR) before\n> you can restore?\n\n> At present, the only safe way to handle locale is to set it in the\n> postmaster's environment, never in individual backends. What's more,\n> you'd better be careful that the postmaster is always started with the\n> same locale setting for a given database. 
You can find instances of\n> people being burnt by this sort of problem in the archives :-(\n\nMany thanks for the thorough and clear explanation of the issues.\n\nCheers,\n\n[much relieved at having found \"why\"]\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.org\n\n\"Of course Australia was marked for glory, for its people had been\nchosen by the finest judges in England.\"\n", "msg_date": "Sat, 12 Aug 2000 18:51:06 +0200", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "dangers of setlocale() in backend (was: problem with float8 input\n\tformat)" }, { "msg_contents": "Louis-David Mitterrand <[email protected]> writes:\n>> IMPORTANT: changing the backend's locale on-the-fly is an EXTREMELY\n>> DANGEROUS thing to do, and I strongly recommend that you find another\n>> way to solve your problem. \n\n> The \"problem\" I am trying to solve is to send e-mail notifications to\n> auction bidders in their own language with the proper number formatting,\n> etc. 
From what you are saying I'll probably have to move these\n> notifications to the mod_perl layer of the application.\n\nWell, you could fork a subprocess to issue the mail and change locale\nonly once you're safely inside the subprocess.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 Aug 2000 13:10:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dangers of setlocale() in backend (was: problem with float8 input\n\tformat)" }, { "msg_contents": "\nOn Sat, 12 Aug 2000, Louis-David Mitterrand wrote:\n\n> On Sat, Aug 12, 2000 at 12:15:26PM -0400, Tom Lane wrote:\n> > Louis-David Mitterrand <[email protected]> writes:\n> > > When \"seller_locale\" is, for instance, \"de_DE\", then I get theses\n> > > errors:\n> > > ERROR: Bad float8 input format '0.05'\n> > > Is Postgres expecting the float as 0,05 (notice the comma) because of\n> > > the locale?\n\n The postgreSQL allows to work with locale-numbers. See to_char() \nand to_number() functions.\n\ntest=# select to_char(1234.456, '9G999D999');\n to_char\n------------\n 1ďż˝234,456\n(1 row)\n\ntest=# select to_number('1 234,457', '9G999D999');\n to_number\n-----------\n 1234.457\n(1 row)\n\n\n And your backend will out of next Tom's note :-)\n\n> > IMPORTANT: changing the backend's locale on-the-fly is an EXTREMELY\n> > DANGEROUS thing to do, and I strongly recommend that you find another\n> > way to solve your problem. \n\n\t\t\t\t\tKarel\n\n", "msg_date": "Mon, 14 Aug 2000 14:06:50 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dangers of setlocale() in backend (was: problem with float8 input\n\tformat)" }, { "msg_contents": "On Sat, Aug 12, 2000 at 01:10:01PM -0400, Tom Lane wrote:\n> Louis-David Mitterrand <[email protected]> writes:\n> >> IMPORTANT: changing the backend's locale on-the-fly is an EXTREMELY\n> >> DANGEROUS thing to do, and I strongly recommend that you find another\n> >> way to solve your problem. 
\n> \n> > The \"problem\" I am trying to solve is to send e-mail notifications to\n> > auction bidders in their own language with the proper number formatting,\n> > etc. From what you are saying I'll probably have to move these\n> > notifications to the mod_perl layer of the application.\n> \n> Well, you could fork a subprocess to issue the mail and change locale\n> only once you're safely inside the subprocess.\n\nCould you give a minimal example of how forking a subprocess in a PG\ntrigger is done? Or maybe give a pointer to an existing example?\n\n\nOn an unrelated subject I have to maintain and update a table containing\ncurrency rates for the auction site (URL in .sig):\n\n Table \"currency\"\n Attribute | Type | Modifier \n-----------+--------+--------------------\n USD | float4 | not null default 1\n FRF | float4 | \n AUD | float4 | \n CAD | float4 | \n EUR | float4 | \n GBP | float4 | \n DEM | float4 | \n JPY | float4 | \n CHF | float4 | \n\nTo update it I wrote a quick perl script that grabs data from Yahoo's\ncurrency web page (attached). This script has to be installed and run as\na cron job, but I'd like to integrate that functionality in the DB\nbackend as a trigger that performs the data refresh every n'th SELECT on\nthe table:\n\t- either convert that perl script to C (maybe using libwww and a\n\t regex C library);\n\t- or simpy launch that perl script from the trigger;\nThe former solution is not easy, and is probably a good programming\nexercise, the latter is quick but how would one go about launching a\nperl script from C without waiting for its completion?\n\nThanks in advance,\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.org\n\n Parkinson's Law: Work expands to fill the time alloted it.", "msg_date": "Tue, 15 Aug 2000 11:07:05 +0200", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "forking a process and grabbing web site data from a C trigger?" } ]
[ { "msg_contents": "Don Baccus wrote:\n> At 10:57 AM 8/10/00 +1000, Chris Bitmead wrote:\n> >Stephan Szabo wrote:\n> >> > This is an interesting point. Originally postgres integrity rules were\n> >> > based on a very general rules system where many things were possible to\n> >> > specify. I'm curious about the more recent addition of referential\n> >> > integrity to postgres (I know little about it), why it is such a\n> >> > specific solution and is not based on the more general postgres rules\n> >> > system?\n> >>\n> >> Because unfortunately the SQL spec for referential integrity cannot really\n> >> be implemented in the current rules system (or at least not in a way that\n> >> is terribly nice).\n> >\n> >So it wasn't feasible to extend the current rules system to support\n> >these oddities, instead of implementing the specific solution?\n>\n> Since Jan apparently knows more about the current rules system than anyone\n> else on the planet (he's done a lot of work in that area in the past), and\n> since he designed the RI system, my guess is that the simple answer to your\n> question is \"yes\".\n\n \"Yes\"\n\n Rules are fired before the original query is executed. This\n is because otherwise a DELETE (for example) already stamped\n it's XID and CID into the max fields of the tuples to delete\n and the command counter gets incremented. So the rules scans\n would never be able to find them again. 
From the visibility\n point of view they are deleted.\n\n To make rules deferrable in this visibility system, someone\n would need to remember the command ID of the original query,\n and when later executing the deferred queries modify all the\n scan-command ID's of those rangetable-entries, coming from\n the original query, to have the original queries CID, while\n leaving the others at the current.\n\n Theoretically possible up to here, but as soon as there are\n any functions invoked in that query which use SPI, it's over.\n\n Finally there is that problem about \"triggered data change\n violation\". Since only \"changing the effective value\" of an\n FK or PK is considered to be a \"data change\", each individual\n tuple must be checked for it. This cannot be told on the\n query level.\n\n I'm sure it cannot be done with the rule system. Thus we\n created this \"specific solution\".\n\n And it is true that with the \"very general rules system\" of\n the \"original Postgres 4.2\" many things where possible to\n specify. But most of them never worked until v6.4. I know\n definitely, because I found it out the hard way - fixing it.\n And still, many things don't work.\n\n Take some look at the short description of the rule system\n internals in the programmers guide. After that, you maybe\n come to the same conclusions as I did. Otherwise correct me\n by reimplementing SQL3 RI with rules.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n", "msg_date": "Thu, 10 Aug 2000 11:25:18 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Arrays and foreign keys" } ]
[ { "msg_contents": "I recently increased the default tuple size to 32K on Postgres 7.0.2\nwith no problems. My colleague, however, told me that I can't pass a\ntext string greater than 16 K into the PQexec C function. So his claim\nis that the only way I can actually get > 16 K into the tuple is through\na binary cursor. Anyone know if this is correct? Is there some\nconfiguration variable I can change to up this to the same size as the\nmaximum tuple length?\n\nThanks.\n-Tony\n\n\n", "msg_date": "Thu, 10 Aug 2000 10:01:44 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Input strings > 16 K?" }, { "msg_contents": "\"G. Anthony Reina\" <[email protected]> writes:\n> I recently increased the default tuple size to 32K on Postgres 7.0.2\n> with no problems. My colleague, however, told me that I can't pass a\n> text string greater than 16 K into the PQexec C function.\n\nThere *was* such a restriction in libpq, before 7.0. His info is\nobsolete...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Aug 2000 10:48:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Input strings > 16 K? " }, { "msg_contents": "Tom Lane wrote:\n\n> \"G. Anthony Reina\" <[email protected]> writes:\n> > I recently increased the default tuple size to 32K on Postgres 7.0.2\n> > with no problems. My colleague, however, told me that I can't pass a\n> > text string greater than 16 K into the PQexec C function.\n>\n> There *was* such a restriction in libpq, before 7.0. His info is\n> obsolete...\n>\n> regards, tom lane\n\nThanks Tom. Good to know that the restriction was lifted.\n\n-Tony\n\n\n\n", "msg_date": "Fri, 11 Aug 2000 09:19:42 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Input strings > 16 K?" } ]
[ { "msg_contents": "Before I run off and figure out how to synchronize a \"backup\" database\nwith a live one, I was wondering if anyone had any pointers on\nimplementing a delayed-synchronization way of backing up a database\nwhile it's live. I'd really like to be able to transfer only\nmodified rows, rather than the entire database.\n\nAny clues, pointers that could help me out?\n\nthanks,\n-Alfred\n", "msg_date": "Thu, 10 Aug 2000 15:25:12 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "Live incremental backups?" }, { "msg_contents": "At 15:25 10/08/00 -0700, Alfred Perlstein wrote:\n>Before I run off and figure out how to synchronize a \"backup\" database\n>with a live one, I was wondering if anyone had any pointers on\n>implementing a delayed-synchronization way of backing up a database\n>while it's live. I'd really like to be able to transfer only\n>modified rows, rather than the entire database.\n>\n\nMy *guess* is that the WAL should add a few options here - especially for\nbackup. However, if your replication is only partial, it may not help as much.\n\nOn other systems I have used, you 'truncate' the WAL periodically and save\nit as part of the backup. These pieces of WAL can be applied to a prior\ncopy of the database to produce a version of the DB at the time the WAL was\ntruncated.\n\nObviously this requires a little more than just the WAL, but it might be\nthe right way to go in the future for on-line backup and whole-database,\nread-only replication.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 11 Aug 2000 11:02:41 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Live incremental backups?" } ]
[ { "msg_contents": "I didn't hear anything back on this. Does someone have a little time or\na pointer to a good resource that will clarify the use of the SELECT FOR\nUPDATE syntax?\n\nTim\n\n-------- Original Message --------\nSubject: Re: haven't forgotten about you...\nDate: Mon, 07 Aug 2000 16:08:29 -0700\nFrom: Tim Perdue <[email protected]>\nTo: Benjamin Adida <[email protected]>\nCC: [email protected]\nReferences: <B5934C52.708E%[email protected]>\n\nBenjamin Adida wrote:\n> \n> on 7/13/00 10:39 AM, Tim Perdue at [email protected] wrote:\n> \n> > I wouldn't really worry about that right now.\n> \n> Oh okay, I thought this was an emergency because you were looking at\n> switching possibly to another DB. I hope you won't make the Oracle jump!\n> \n> > I *would* like to see an article on transactions though.\n> \n> Okay, fair enough. I'll get working on that ASAP.\n\n\nAre you going to do this?\n\nI've been recently asked to write an article for Linux Journal about\n\"Deploying a Serious Application With PHP\". I'd like to use postgres for\na \"serious\" application rather than MySQL, but I would like to see this\ntutorial to understand the nuances first. (as I mentioned, I don't think\nI understand the SELECT * FOR UPDATE syntax)\n\nTim\n\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n", "msg_date": "Thu, 10 Aug 2000 20:36:14 -0700", "msg_from": "Tim Perdue <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: Re: haven't forgotten about you...]" }, { "msg_contents": "Ben Adida wrote:\n> begin transaction\n> select balance from accounts where account_id=2 for update\n> \n> will select the balance and lock the row for account #2\n> You can then perform some math on the balance, and do something like:\n> \n> update accounts set balance= $new_balance where account_id=2\n> end transaction\n> \n\nGreat - I assume end transaction is going to do a commit. 
If you don't\ndo an end transaction and you don't issue a rollback... I assume it\nrolls back?\n\nThis is pretty slick - over the last month or so I've come up with about\n8 different places where I really wish I had transactions/rollbacks on\nSourceForge. Also running into lots of places where I really, really\nwish I had fscking subselects...\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n", "msg_date": "Thu, 10 Aug 2000 21:13:52 -0700", "msg_from": "Tim Perdue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Fwd: Re: haven't forgotten about you...]" }, { "msg_contents": "At 09:13 PM 8/10/00 -0700, Tim Perdue wrote:\n>Ben Adida wrote:\n>> begin transaction\n>> select balance from accounts where account_id=2 for update\n>> \n>> will select the balance and lock the row for account #2\n>> You can then perform some math on the balance, and do something like:\n>> \n>> update accounts set balance= $new_balance where account_id=2\n>> end transaction\n>> \n>\n>Great - I assume end transaction is going to do a commit. If you don't\n>do an end transaction and you don't issue a rollback... I assume it\n>rolls back?\n\nIt is best not to assume, and to do so explicitly. I base this on the\ntheory that you ought to know what your code does, and what it did to\nget there.\n\n(end transaction is indeed \"commit\", you can use \"commit\" if you prefer).\n\n>This is pretty slick - over the last month or so I've come up with about\n>8 different places where I really wish I had transactions/rollbacks on\n>SourceForge.\n\nYes. 
That's the realization one comes to when working on complex database\napps.\n\n>Also running into lots of places where I really, really\n>wish I had fscking subselects...\n\nAs someone who uses Oracle, I feel the same way, but Postgres doesn't\nmake me feel that way nearly as often as MySQL would :)\n\n(and actually, Oracle's outer join syntax requires subselects if you are\nto mix and match inner and outer joins and control the priority of execution\norder - which the vastly superior SQL92 syntax solves in a reasonably \nelegant manner).\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 10 Aug 2000 21:34:50 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Fwd: Re: haven't forgotten about you...]" }, { "msg_contents": "Tim Perdue wrote:\n\n> I didn't hear anything back on this. Does someone have a little time or\n> a pointer to a good resource that will clarify the use of the SELECT FOR\n> UPDATE syntax?\n\nUggh, just when I finally had some time to answer :) Let me attempt to answer\nit anyways. SELECT for UPDATE is a means of explicitly locking a row for\nlater updating within the same transaction. For example (this is a simplified\nexample):\n\nbegin transaction\nselect balance from accounts where account_id=2 for update\n\nwill select the balance and lock the row for account #2\nYou can then perform some math on the balance, and do something like:\n\nupdate accounts set balance= $new_balance where account_id=2\nend transaction\n\nThus, this construct makes this safe in a multi-client environment. 
Even if\ntwo clients perform these actions simultaneously, the \"for update\" will\nguarantee that one of the two locks that row at the select statement level,\nand the second waits until the first transaction commits (at which point the\nlock is transparently released).\n\nNote that if you *didn't* have the \"for update\", no lock would be acquired at\nthe select level, and you could run into a race condition where two processes\ngrab the same balance from the account, and independently update that amount,\nthereby losing the effect of one of those updates (and probably robbing you\nof money).\n\nNote also that the lock acquired is row-level, which means that if two\nprocesses are updating two different accounts, both processes can proceed\nwithout blocking each other. This will thus behave not only correctly, but as\nefficiently as possible.\n\nI hope this clears things up. I am writing that article about transactions\nand locking, it's on its way, I swear.\n\n-Ben\n\n", "msg_date": "Fri, 11 Aug 2000 00:37:08 -0400", "msg_from": "Ben Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Fwd: Re: haven't forgotten about you...]" }, { "msg_contents": "Tim Perdue wrote:\n\n> Great - I assume end transaction is going to do a commit. If you don't\n> do an end transaction and you don't issue a rollback... I assume it\n> rolls back?\n\nYes, when I said end transaction, I meant commit.\n\nThe precise behavior you're inquiring about is dependent on your web server\n/ driver setup. In AOLserver's Postgres driver, if a database handle is\nreleased when a transaction is still open, the transaction is rolled back.\nI can imagine other drivers behaving differently, but implicit commits\nsound very dangerous to me.\n\n> This is pretty slick - over the last month or so I've come up with about\n> 8 different places where I really wish I had transactions/rollbacks on\n> SourceForge. 
Also running into lots of places where I really, really\n> wish I had fscking subselects...\n\nYes, Postgres is definitely pretty slick...\n\n-Ben\n\n", "msg_date": "Fri, 11 Aug 2000 00:54:09 -0400", "msg_from": "Ben Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Fwd: Re: haven't forgotten about you...]" } ]
[ { "msg_contents": "I've found a problem in pg_dump which results in unreliable dumps under the\nfollowing conditions:\n* The serial datatype is used\n* Either the table or a column name is mixed case\n\nExample:\nCREATE TABLE \"Test1\" (\n    id serial,\n    dummy int\n);\n\npg_dump thisdb\nCREATE SEQUENCE \"Test1_id_seq\" start 1 increment 1 maxvalue 2147483647\nminvalue 1 cache 1 ;\nCREATE TABLE \"Test1\" (\n    \"id\" int4 DEFAULT nextval('Test1_id_seq'::text) NOT NULL,\n    \"dummy\" int4\n);\nCOPY \"Test1\" FROM stdin;\n\\.\nCREATE UNIQUE INDEX \"Test1_id_key\" on \"Test1\" using btree ( \"id\" \"int4_ops\"\n);\n\nThe error is in the line\n    \"id\" int4 DEFAULT nextval('Test1_id_seq'::text) NOT NULL,\nwhich should read\n    \"id\" int4 DEFAULT nextval('\"Test1_id_seq\"'::text) NOT NULL,\n\nI've tried to fix the error in pg_dump.c, but noticed that I would have to go\ndeeper, because pg_dump itself is not the problem; the fault lies somewhere else.\nSo I would be grateful if somebody can either tell me where to start bug hunting\nor fix this bug, which should be really easy (put quotes around the sequence name).\n\nWhen replying, please send me a CC; my subscription to this list is not yet\nprocessed. 
Thanks.\n\nMario Weilguni\n", "msg_date": "Fri, 11 Aug 2000 08:01:02 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": true, "msg_subject": "Identified a problem in pg_dump with serial data type and mixed case" }, { "msg_contents": "Mario Weilguni <[email protected]> writes:\n> I've found a problem in pg_dump which results in unreliable dumps under the\n> following conditions:\n\nWhat version are you using? This appears to work correctly in current\nsources --- I get\n\nCREATE TABLE \"Test1\" (\n    \"id\" int4 DEFAULT nextval('\"Test1_id_seq\"'::text) NOT NULL,\n    \"dummy\" int4\n);\n\nNot sure offhand how long ago it was fixed, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Aug 2000 09:38:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identified a problem in pg_dump with serial data type and mixed\n\tcase" } ]
[ { "msg_contents": "Hi you all,\n\nI'm convincing my boss to drop Sybase and use Postgres exclusively for\nthe Partnership for Peace database (see www.ppc.pims.org if you are\ninterested in an account just fill in the form and use my name as the\nPOC). But is there someplace that has a comparisons of Postgres to\nthings like Oracle Ingress Sybase...\n\nAlso I need to take the show on the road. So I got permission to buy a\n\"good\" notebook PC. I want to run Netscape Enterprise Server (and mess\naround with Apache) and Postgres over Linux (analog modem and LAN card)\n-- very good performance and a super display/display driver (so I can\nplay games in the hotel room). Can anyone recommend a super notebook\nthat has good compatibility for this configuration?\n\nThanks a lot for your help and keep up the good work.\n\nPostgres is super,\n\nAllan in Belgium\n\n", "msg_date": "Fri, 11 Aug 2000 11:37:52 +0200", "msg_from": "\"Allan Huffman\" <[email protected]>", "msg_from_op": true, "msg_subject": "db Comparisons - Road Show" } ]
[ { "msg_contents": "I'm trying to write a function that takes a text input and returns\na text output. I can get this to work. The problem is that I want\nto return NULL when the input string doesn't match the criteria I\ndesire. Unfortunately, returning NULL seems to crash the backend.\ni.e. if I did\n\n#include \"postgres.h\"\ntext * andytest ( text * str )\n{\n return NULL;\n}\n\nThe backend would quit unexpectantly when I ran\nselect andytest('fds');\nor select andytest(NULL);\n\nObviously, there must be some way to create a NULL text * return \nvariable, but I haven't been able to find it. I've looked at all\nthe code I've been able to find to no avail.\n\n-Andy\n", "msg_date": "Fri, 11 Aug 2000 15:29:21 -0500", "msg_from": "Andrew Selle <[email protected]>", "msg_from_op": true, "msg_subject": "Returning null from Userdefined C function" }, { "msg_contents": "Andrew Selle <[email protected]> writes:\n> Obviously, there must be some way to create a NULL text * return \n> variable,\n\nYou would think that, but you'd be wrong :-( --- at least for current\nreleases; this problem has been fixed for 7.1 by creating a new API\nfor user-defined functions.\n\nWith the old API, for the case of single-argument functions, you can\nfake it via an ugly kluge: declare a second argument \"bool *isNull\"\n(at the C level only, not in the SQL definition) and set *isNull to\nTRUE if you want to return a NULL. This does not work if the function\ndoesn't have exactly one SQL argument, however.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Aug 2000 20:17:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Returning null from Userdefined C function " } ]
[ { "msg_contents": "\nI have a table for which the SEQSCAN and INDEXSCAN estimates are the same\nup to a point, after which the SEQSCAN estimates remain fixed, and the\nindexscan estimates continue to grow. However, the actual speed of the\nindex scan is superior for a much greater period than the optimizer predicts.\n\nThe database has a table 'ping' with various fields including a 'pingtime\ntimestamp'; it also has a btree indexe on the date and has been 'vacuum\nanalyze'-ed. There are about 200000 rows and the data is evenly distributed\nin 5 minute intervals.\n\nThese are the results from 'explain':\n\n------ 1 ----\nuptime=# explain select * from ping where pingtime>'1-aug-1999' and\npingtime<'1-aug-1999';\nNOTICE: QUERY PLAN:\n\nIndex Scan using ping_ix1 on ping (cost=0.00..4.28 rows=1 width=52)\n------\n\nThis seems fine, even if the query is bogus.\n\n\n------ 2 ----\nuptime=# explain select * from ping where pingtime>'1-aug-1999' and\npingtime<'2-aug-1999';\nNOTICE: QUERY PLAN:\n\nIndex Scan using ping_ix1 on ping (cost=0.00..1679.29 rows=561 width=52)\n------\n\nAlso looks OK.\n\n\n------ 3 ----\nuptime=# explain select * from ping where pingtime>'1-aug-1999' and\npingtime<'3-aug-1999';\nNOTICE: QUERY PLAN:\n\nIndex Scan using ping_ix1 on ping (cost=0.00..3091.18 rows=1123 width=52)\n------\n\nThis seems OK; the estimate is roughly double the previous, which is to be\nexpected, I think.\n\n\n------- 5 ----\nuptime=# explain select * from ping where pingtime>'1-aug-1999' and\npingtime<'5-aug-1999';\nNOTICE: QUERY PLAN:\n\nIndex Scan using ping_ix1 on ping (cost=0.00..5386.70 rows=2245 width=52)\n------\n\nAgain. 
this is OK, although I am a little surprised at the continuing\nnon-linearity of the estimates.\n\nNow it starts getting very strange:\n\n------- 5+a bit ----\nuptime=# explain select * from ping where pingtime>'1-aug-1999' and\npingtime<'5-aug-1999 20:25';\nNOTICE: QUERY PLAN:\n\nSeq Scan on ping (cost=0.00..6208.68 rows=2723 width=52)\n-------\n\nOK so far, but look at the following (the costs are the same):\n\n------- 3 Months ----\nuptime=# explain select * from ping where pingtime>'1-aug-1999' and\npingtime<'1-nov-1999';\nNOTICE: QUERY PLAN:\n\nSeq Scan on ping (cost=0.00..6208.68 rows=51623 width=52)\n-------\n\nand\n\n------- 5 + a YEAR ----\nuptime=# explain select * from ping where pingtime>'1-aug-1999' and\npingtime<'5-aug-2000 20:25';\nNOTICE: QUERY PLAN:\n\nSeq Scan on ping (cost=0.00..6208.68 rows=208184 width=52)\n------\n\n\nWhat is also strange is that if I set ENABLE_SEQSCAN=OFF, then the\nestimates up to '5+a bit' are the *same*, but the running time is\nsubstantially better for index scan. In fact the running time is better for\nindex scans up to an interval of about three months. I presume there is\nsomething wrong with the selectivity estimates for the index.\n\nI really don't want to have the code call 'SET ENABLE_SEQSCAN=OFF/ON'\naround this statement, since for a longer period, I do want a sequential\nscan.
And building my own 'query optimizer' which says 'if time diff > 3\nmonths, then enable seqscan' seems like a very bad idea.\n\nI would be interested to know (a) if there is any way I can influence the\noptimizer choice when it considers using the index in question, and (b) if\nthe fixed seqscan cost estimate is a bug.\n\n\nFWIW, the output of a 3 month period with ENABLE_SEQSCAN=OFF is:\n\n-----\nuptime=# set enable_seqscan=off;\nuptime=# explain select * from ping where pingtime>'1-aug-1999' and\npingtime<'1-nov-1999';\nNOTICE: QUERY PLAN:\n\nIndex Scan using ping_ix1 on ping (cost=0.00..27661.01 rows=51623 width=52)\n-----\n\nAny help, explanation, etc would be appreciated.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 12 Aug 2000 15:27:26 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizer confusion?" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> Again. this is OK, although I am a little surprised at the continuing\n> non-linearity of the estimates.\n\nIndexscan estimates are supposed to be nonlinear, actually, to account\nfor the effects of caching. I doubt the shapes of the curves are right\nin detail, but I haven't had time to do any research about it.\n\n> I would be interested to know (a) if there is any way I can influence the\n> optimizer choice when it considers using the index in question,\n\nYou could push random_page_cost and effective_cache_size around to try\nto match your platform better. Let me know if that helps...\n\n> (b) if the fixed seqscan cost estimate is a bug.\n\nI don't think so. 
A seqscan will touch every page and every tuple once,\ntherefore the costs should be pretty much independent of the number of\ntuples that actually get selected, no? (Note that the time spent\nreturning tuples to the frontend is deliberately ignored by the\noptimizer, on the grounds that every correct plan for a given query\nwill have the exact same output costs. So if you want to try to compare\nthe planner's cost estimates to real elapsed time, you might want to\nmeasure the results for \"select count(*) from ...\" instead of \"select *\nfrom ...\" so that output costs are held fixed.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 Aug 2000 02:15:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Optimizer confusion? " }, { "msg_contents": "At 02:15 12/08/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> Again. this is OK, although I am a little surprised at the continuing\n>> non-linearity of the estimates.\n>\n>Indexscan estimates are supposed to be nonlinear, actually, to account\n>for the effects of caching. I doubt the shapes of the curves are right\n>in detail, but I haven't had time to do any research about it.\n\nOf course - I was thinking it was a seqscan, which is even sillier. \n\nAs to the non-linearity, I would have thought the majority of I/Os by far\nwould be reading rows, and with retrieval by index, you may not get much\nbuffering benefit on table pages. For a large table, ISTM linear estimates\nwould be a good estimate.\n\n\n>> I would be interested to know (a) if there is any way I can influence the\n>> optimizer choice when it considers using the index in question,\n>\n>You could push random_page_cost and effective_cache_size around to try\n>to match your platform better. Let me know if that helps...\n\nSetting it to 0.1 works (it was 4). But this (I think) just highlights the\nfact that the index is sorted by date, and the rows were added in date\norder. 
As a result (for this table, in this query), the index scan get's a\nmuch better cache-hit rate, so the actual IO cost is low. \n\nDoes that sound reasonable? Does the optimizer know if I have used\nclustering? If so, maybe I should just use the clustering command. If not,\nthen probably it's best to go with an index scan always for this query. Is\nthere any way I can code the query with \"{enable_seqscan=off}\" to apply\nonly to the current query? Or, perhaps more usefully, ask it (politely) to\nuse a given index for part of the predicate?\n\nISTM setting random_page_cost to 0.1 would be a bad idea in general...\n\n\n>> (b) if the fixed seqscan cost estimate is a bug.\n>\n>I don't think so. \n\nI think you're right; equating cost to row-IO means seqscan cost is fixed.\n\n\n> So if you want to try to compare\n>the planner's cost estimates to real elapsed time, you might want to\n>measure the results for \"select count(*) from ...\" instead of \"select *\n>from ...\" so that output costs are held fixed.)\n\nThis actually makes the indexscan even more desirable - the time interval\nhas to be more than 7 months before indexscan is slower, but at this point\nit gets hard to tell how much benefit is being made from buffering etc, so\nthe 'elapsed time' comparison is probably pretty dodgy. \n\nI don't suppose I can get the backend to tell me how many logical IOs and\nhow much CPU it used?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 12 Aug 2000 17:46:53 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Optimizer confusion? 
" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n>> Indexscan estimates are supposed to be nonlinear, actually, to account\n>> for the effects of caching. I doubt the shapes of the curves are right\n>> in detail, but I haven't had time to do any research about it.\n\n> As to the non-linearity, I would have thought the majority of I/Os by far\n> would be reading rows, and with retrieval by index, you may not get much\n> buffering benefit on table pages. For a large table, ISTM linear estimates\n> would be a good estimate.\n\nThe estimate is linear, for number-of-tuples-to-fetch << number-of-pages-\nin-table. See src/backend/optimizer/path/costsize.c for the gory details.\n\n>> You could push random_page_cost and effective_cache_size around to try\n>> to match your platform better. Let me know if that helps...\n\n> Setting it to 0.1 works (it was 4).\n\nUnfortunately, random_page_cost < 1 is ridiculous on its face ... but\nthat's not where the problem is anyway, as your next comment makes clear.\n\n> But this (I think) just highlights the\n> fact that the index is sorted by date, and the rows were added in date\n> order. As a result (for this table, in this query), the index scan get's a\n> much better cache-hit rate, so the actual IO cost is low. \n\n> Does that sound reasonable?\n\nQuite. The cost estimates are based on the assumption that the tuples\nvisited by an indexscan are scattered randomly throughout the table.\nObviously, if that's wrong then the estimates will be way too high.\n\n> Does the optimizer know if I have used clustering?\n\nNope. 
To quote from the code:\n\n * XXX if the relation has recently been \"clustered\" using this index,\n * then in fact the target tuples will be highly nonuniformly\n * distributed, and we will be seriously overestimating the scan cost!\n * Currently we have no way to know whether the relation has been\n * clustered, nor how much it's been modified since the last\n * clustering, so we ignore this effect. Would be nice to do better\n * someday.\n\nThe killer implementation problem here is keeping track of how much the\ntable ordering has been altered since the last CLUSTER command. We have\ntalked about using an assumption of \"once clustered, always clustered\",\nie, ignore the issue of sort order degrading over time. That's pretty\nugly but it might still be more serviceable than the current state of\nignorance. For a table like this one, where rows are added in date\norder and (I imagine) seldom updated, the sort order isn't going to\ndegrade anyway. For other tables, you could assume that you're going\nto run CLUSTER on a periodic maintenance basis to keep the sort order\nfairly good.\n\nI have not yet done anything about this, mainly because I'm unwilling to\nencourage people to use CLUSTER, since it's so far from being ready for\nprime time (see TODO list). Once we've done something about table\nversioning, we can rewrite CLUSTER so that it's actually reasonable to\nuse on a regular basis, and at that point it'd make sense to make the\noptimizer CLUSTER-aware.\n\n> I don't suppose I can get the backend to tell me how many logical IOs and\n> how much CPU it used?\n\nYes you can. Run psql with\n\tPGOPTIONS=\"-s\"\nand look in the postmaster log. There's also -tparse, -tplan,\n-texec if you'd rather see the query time broken down by stages.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 Aug 2000 13:45:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Optimizer confusion? 
" }, { "msg_contents": "At 13:45 12/08/00 -0400, Tom Lane wrote:\n>> But this (I think) just highlights the\n>> fact that the index is sorted by date, and the rows were added in date\n>> order. As a result (for this table, in this query), the index scan get's a\n>> much better cache-hit rate, so the actual IO cost is low. \n>\n>> Does that sound reasonable?\n>\n>Quite. The cost estimates are based on the assumption that the tuples\n>visited by an indexscan are scattered randomly throughout the table.\n\nInterestingly, while testing a truly random index on a table with 4M rows,\nthe index estimates are actually way too optimistic (contrary to my other\nexample), even for a small retrieval. I'm still playing, but I'll send some\nfigures soon.\n\n\n>\n>> Does the optimizer know if I have used clustering?\n>\n>The killer implementation problem here is keeping track of how much the\n>table ordering has been altered since the last CLUSTER command. We have\n>talked about using an assumption of \"once clustered, always clustered\",\n\nThis *might* be appropriate to set as an index attribute of some kind, most\nparticularly for time-series indexes etc (as you suggest).\n\n\n>I have not yet done anything about this, mainly because I'm unwilling to\n>encourage people to use CLUSTER, since it's so far from being ready for\n>prime time (see TODO list). Once we've done something about table\n>versioning, we can rewrite CLUSTER so that it's actually reasonable to\n>use on a regular basis, and at that point it'd make sense to make the\n>optimizer CLUSTER-aware.\n\nThere might be a way to side-step the issue here. I assume that the index\nnodes contain a pointer to a record in a file, which has some kind of file\nposition. 
By comparing the file positions on one leaf node, and then\naveraging the node cluster values, you might be able to get a pretty good\nidea of the *real* clustering.\n\nDoes this sound worthwhile?\n\nIt has the advantage of working for all tables, and is presumably updated\nby Vacuum.\n\n\n>> I don't suppose I can get the backend to tell me how many logical IOs and\n>> how much CPU it used?\n>\n>Yes you can. Run psql with\n>\tPGOPTIONS=\"-s\"\n>and look in the postmaster log. There's also -tparse, -tplan,\n>-texec if you'd rather see the query time broken down by stages.\n\nThanks for this; I see almost no file IO, but lots of paging; is this a\nfeature of the way Linux does file buffering?\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 13 Aug 2000 22:41:19 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizer confusion? " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> There might be a way to side-step the issue here. I assume that the index\n> nodes contain a pointer to a record in a file, which has some kind of file\n> position. By comparing the file positions on one leaf node, and then\n> averaging the node cluster values, you might be able to get a pretty good\n> idea of the *real* clustering.\n\nHmm. I have been thinking that an easier way of gathering statistics\nfor the optimizer (on a column for which there is a btree index) is to\nscan the index sequentially. 
This makes it trivial to determine the\ncolumn min, max, and most common value, whereas right now we have very\nlittle chance of getting accurate MCV stats if there are more than a\nfew distinct values. If we do that we could also calculate some\nstatistic about how well-ordered the pointers to main-table tuples are.\n\nThe nifty thing about doing this during ANALYZE is that you'd only have\nto read the index, not the main table, so it should be reasonably quick.\nIn most contexts that would be tres uncool because you'd not be able to\ntell index entries for deleted tuples from those for live tuples --- but\nfor ANALYZE I think it'd be perfectly acceptable to just count 'em all.\nIndeed one could argue that it's *more* accurate to include the deleted\nindex entries than not, since they'll still provoke main-table accesses\nwhen scanned, which is exactly the thing we're trying to estimate.\n\n>>> I don't suppose I can get the backend to tell me how many logical IOs and\n>>> how much CPU it used?\n>> \n>> Yes you can. Run psql with\n>> PGOPTIONS=\"-s\"\n>> and look in the postmaster log. There's also -tparse, -tplan,\n>> -texec if you'd rather see the query time broken down by stages.\n\n> Thanks for this; I see almost no file IO, but lots of paging; is this a\n> feature of the way Linux does file buffering?\n\nCould be. IIRC, it's possible to tell from the stats how many page\naccesses are short-circuited by Postgres' own disk buffers (vs being\ngiven to the kernel) and at least on HPUX it's also possible to tell\nhow many of the kernel requests actually resulted in physical reads\n(vs being satisfied out of kernel disk buffers). But it takes a certain\namount of reading between the lines 'cause the numbers aren't real well\nlabeled. Dunno about how it works on Linux --- comments anyone?\n\n\t\t\tregards, tom lane\n\nPS: I am leaving town in an hour to go to LinuxWorld. 
Will be seeing\nemail erratically if at all this week, so don't be surprised at lack of\nresponse. Any of y'all planning to be at LinuxWorld, don't forget to\nstop by the Great Bridge booth!\n", "msg_date": "Sun, 13 Aug 2000 10:02:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer confusion? " }, { "msg_contents": "At 10:02 13/08/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> There might be a way to side-step the issue here. I assume that the index\n>> nodes contain a pointer to a record in a file, which has some kind of file\n>> position. By comparing the file positions on one leaf node, and then\n>> averaging the node cluster values, you might be able to get a pretty good\n>> idea of the *real* clustering.\n>\n>Hmm. I have been thinking that an easier way of gathering statistics\n>for the optimizer (on a column for which there is a btree index) is to\n>scan the index sequentially. This makes it trivial to determine the\n>column min, max, and most common value, whereas right now we have very\n>little chance of getting accurate MCV stats if there are more than a\n>few distinct values. If we do that we could also calculate some\n>statistic about how well-ordered the pointers to main-table tuples are.\n\nThere are probably a couple of things to look for, in the sense that\nwell-ordered is not important, and neither is a partial sub-ordering - what\nyou need is 'well-clumped'.\n\nie. some indication of how many pages would need to be read in order to\nread all records pointed to by an index node (unfortunately, this ignores\nIOs from toasted values that are stored elsewhere).\n\nThe process is complicated a little by how you handle nodes with records in\nadjacent pages (does this count as one IO), and in a related manner, by how\nbig the index entries are. But these seem pretty easy to deal with in a\nVACUUM pass. 
Finally, you may also want to see how well sequential nodes\nare clumped, but this is probably only important if the individual index\nentries are large.\n\n\n>Indeed one could argue that it's *more* accurate to include the deleted\n>index entries than not, since they'll still provoke main-table accesses\n>when scanned, which is exactly the thing we're trying to estimate.\n\nAnd if we ever start reusing space automatically, then the index tree can\nbe updated appropriately to reflect the new costs.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 14 Aug 2000 11:42:19 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizer confusion? " } ]
[ { "msg_contents": "\n> I have not yet done anything about this, mainly because I'm unwilling to\n> encourage people to use CLUSTER, since it's so far from being ready for\n> prime time (see TODO list).\n\nWell imho making the optimizer cluster-aware is a step that has to be done\nanyway. No need to advertise the cluster feature, but if a user does\nalready take the chances of the current cluster implementation,\nhe deserves the fruits, no ?\n\nActually I would be using the cluster command on a freshly created table\nwith an index that I know corresponds to insert order. No risk here.\n\nImho the assumption that the DBA guards the cluster state of his tables\nis better than assuming random distribution of a clustered table.\n\nOf course a real statistic for all indices as we discussed before would be\nbetter, but assuming a perfectly clustered state for a clustered index would be\na good first step in the absence of such a statistic.\n\nAndreas\n", "msg_date": "Mon, 14 Aug 2000 10:24:39 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Optimizer confusion? " } ]
[ { "msg_contents": "Hello,\n\nI just finished a new C trigger that updated a \"modified\" column with\nthe current time upon an UPDATE event. It seems to work OK but I just\nwanted to bounce this off you guys to check for some non-kosher stuff or\nbetter way of doing it. Thanks in advance.\n\nHeapTuple update_modified() {\n\tTupleDesc\ttupdesc;\n\tHeapTuple\trettuple;\n\tint\t\ti;\n\tTriggerData *trigdata = CurrentTriggerData;\n\n\t/* Get the current datetime. */\n\tTimestamp *tstamp = timestamp_in(\"now\");\n\tDatum newdt = PointerGetDatum(tstamp);\n\n\tCurrentTriggerData = NULL;\n\n\tif (!trigdata)\n\t\telog(NOTICE, \"bid_control.c: triggers are not initialized\");\n\n\tif (!TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))\n\t\telog(ERROR, \"bid_control.c: trigger should only be called on UPDATE\");\n\n\tif (!TRIGGER_FIRED_BEFORE(trigdata->tg_event))\n\t\telog(ERROR, \"bid_control.c: trigger should only be called BEFORE\");\n\n\trettuple = trigdata->tg_trigtuple;\n\ttupdesc = trigdata->tg_relation->rd_att;\n\n\tif ((i = SPI_connect()) < 0)\n\t\telog(NOTICE, \"bid_control.c: SPI_connect returned %d\", i);\n\n\ti = SPI_fnumber(tupdesc, \"modified\");\n\trettuple = SPI_modifytuple(\n\t\t\ttrigdata->tg_relation,\n\t\t\trettuple,\n\t\t\t1,\n\t\t\t&i,\n\t\t\t&newdt,\n\t\t\tNULL);\n\n\tSPI_finish();\n\treturn rettuple;\n}\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.org\n\nLord, protect me from your followers.\n", "msg_date": "Mon, 14 Aug 2000 16:47:33 +0200", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "modifying a timestamp in a C trigger" }, { "msg_contents": "At 04:47 PM 8/14/00 +0200, Louis-David Mitterrand wrote:\n>Hello,\n>\n>I just finished a new C trigger that updated a \"modified\" column with\n>the current time upon an UPDATE event. It seems to work OK but I just\n>wanted to bounce this off you guys to check for some non-kosher stuff or\n>better way of doing it.
Thanks in advance.\n\nThis could easily be done in PL/pgSQL. Your C trigger will have to be modified\nif the details of the trigger or the function call protocol change, while the\nPL/pgSQL source will work forever without change.\n\nAnd since the expense is in the \"update\" itself, I'd be surprised if you\ncould measure any speed difference between the two approaches.\n\nUnless you're doing this to learn how to write C triggers for the heck\nof it or to do stuff you can't do in PL/pgSQL, the PL/pgSQL approach is\nmuch better.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 14 Aug 2000 10:45:19 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: modifying a timestamp in a C trigger" }, { "msg_contents": "On Mon, Aug 14, 2000 at 10:45:19AM -0700, Don Baccus wrote:\n> At 04:47 PM 8/14/00 +0200, Louis-David Mitterrand wrote:\n> >I just finished a new C trigger that updated a \"modified\" column with\n> >the current time upon an UPDATE event. It seems to work OK but I just\n> >wanted to bounce this off you guys to check for some non-kosher stuff or\n> >better way of doing it. Thanks in advance.\n> \n> This could easily be done in PL/pgSQL. Your C trigger will have to be modified\n> if the details of the trigger or the function call protocol change, while the\n> PL/pgSQL source will work forever without change.\n> \n> And since the expense is in the \"update\" itself, I'd be surprised if you\n> could measure any speed difference between the two approaches.\n> \n> Unless you're doing this to learn how to write C triggers for the heck\n> of it or to do stuff you can't do in PL/pgSQL, the PL/pgSQL approach is\n> much better.\n\nYes, that's the main reason: being able to program triggers in C, to be\nprepared for the moment when only C will cut it for certain features I\nam thinking about.
It's a kind of training as I don't have a programming\nbackground.\n\nPL/pgsql is very nice and quick, granted, but sometimes a bit hard to\ndebug: 'parser error near \"\"' messages sometimes occur and leave you\nwondering where in your code the error is.\n\nThanks for your input, cheers,\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.org\n\nMACINTOSH == Most Applications Crash If Not The Operatings System Hangs\n", "msg_date": "Tue, 15 Aug 2000 08:45:28 +0200", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "Re: modifying a timestamp in a C trigger" } ]
[ { "msg_contents": "Gah, typo'ed the name of pgsql-hackers. This should be better. Sorry\nto those who got this twice, once on GENERAL, once on HACKERS.\n\nRoss\n\nOn Mon, Aug 14, 2000 at 02:33:55PM +1000, Tim Allen wrote:\n> I'm just trying out PG7.0.2, with a view to upgrading from 6.5.3, and I've\n> found one quirk a little troublesome. Not sure whether I'll get any\n> sympathy, but I shall ask anyway :).\n> \n> We find it convenient to be able to store +/- infinity for float8 values\n> in some database tables. With Postgres 6.5.3, we were able to get away\n> with this by using the values -1.79769313486232e+308 for -Inf and\n> 1.79769313486232e+308 for Inf. This is probably not very portable, but\n> anyway, it worked fine for us, on both x86 Linux and SGI IRIX. One thing,\n> though, to get these numbers past the interface we had to put them in\n> quotes. It seemed as though there was one level of parsing that didn't\n> like these particular numbers, and one level of parsing that coped OK, and\n> using quotes got it past the first level.\n> \n> Now, however (unfortunately for us), this inconsistency in the interface\n> has been \"fixed\", and now we can't get this past the interface, either\n> quoted or not. Fixing inconsistencies is, of course, in general, a good\n> thing, which is why I'm not confident of getting much sympathy :).\n> \n\nBreaking working apps is never a good thing, but that's part of why it went\nfrom 6.X to 7.X. \n\n> So, any suggestions as to how we can store +/- infinity as a valid float8\n> value in a database table?\n> \n\nRight: the SQL standard doesn't say anything about what to do for these\ncases for floats (except by defining the syntax of an approximate numeric\nconstant as basically a float), but the IEEE754 does: as you discovered\nbelow, they're NaN, -Infinity, and +Infinity.\n\n> I notice, btw, that 'NaN' is accepted as a valid float8. 
Is there any\nparticular reason why something similar for, eg '-Inf' and 'Inf' doesn't\nalso exist? Just discovered, there is a special number 'Infinity', which\nseems to be recognised, except you can't insert it into a table because it\nreports an overflow error. Getting warm, it seems, but not there yet. And\nthere doesn't seem to be a negative equivalent.\n\nAnd this is a bug. From looking at the source, I see that Thomas added\ncode to accept 'NaN' and 'Infinity' (but not '-Infinity'), and Tom Lane\ntweaked it, but it's never been able to get an Infinity all the way to\nthe table, as far as I can see: the value gets set to HUGE_VAL, but the\ncall to CheckFloat8Val compares against FLOAT8_MAX (and FLOAT8_MIN),\nand complains, since HUGE_VAL is _defined_ to be larger than DBL_MAX.\n\nAnd, there's no test case in the regression tests for inserting NaN or\nInfinity. (Shame on Thomas ;-)\n\nI think the right thing to do is move the call to CheckFloat8Val into a\nbranch of the test for NaN and Infinity, thereby not calling it if we've\nbeen passed those constants. I'm compiling up a test of this right now,\nand I'll submit a patch to Bruce if it passes regression. Looks like\nthat function hasn't been touched in a while, so the patch should apply\nto 7.0.X as well as current CVS.\n\n<some time later>\n\nLooks like it works, and passes the regression tests as they are. I'm\npatching the tests to include the cases 'NaN', 'Infinity', and '-Infinity'\nas valid float8s, and 'not a float' as an invalid representation, and\nrerunning to get output to submit with the patch. This might be a bit\nhairy, since there are 5 different expected/float8* files.
Should I try\nto hand patch them to deal with the new rows, or let them be regenerated\nby people with the appropriate platforms?\n\n<later again>\n\nBigger problem with changing the float8 regression tests: a lot of our\nmath functions seem to be guarded with CheckFloat8Val(result), so, if we\nallow these values in a float8 column, most of the math functions will\nelog(). It strikes me that there must have been a reason for this at one\ntime. There's even a #define UNSAFE_FLOATS, to disable these checks. By\nreading the comments in old copies of float.c, it looks like this was\nadded for an old, buggy linux/Alpha libc that would throw floating point\nexceptions, otherwise.\n\nIs there an intrinsic problem with allowing values outside the range\nFLOAT8_MIN <= x <= FLOAT8_MAX? 'ORDER BY' seems to still work, with\n'Infinity' and '-Infinity' sorting properly. Having a 'NaN' in there\nbreaks sorting however. That's a current, live bug. Could be fixed\nby treating 'NaN' as a different flavor of NULL. Probably a fairly deep\nchange, however. Hmm, NULL in a float8 sorts to the end, regardless of\nASC or DESC, is that right?\n\nAnyway, here's the patch for just float.c, if anyone wants to look\nat it. As I said, it passes the existing float8 regression tests, but\nraises a lot of interesting questions.\n\nRoss\n-- \nRoss J.
Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "> > So, any suggestions as to how we can store +/- infinity as a valid float8\n> > value in a database table?\n> Right: the SQL standard doesn't say anything about what to do for these\n> cases for floats (except by defining the syntax of an approximate numeric\n> constant as basically a float), but the IEEE754 does: as you discovered\n> below, they're NaN, -Infinity, and +Infinity.\n\nNot all computers fully support IEEE754, though many new ones do.\n\n> > I notice, btw, that 'NaN' is accepted as a valid float8. Is there any\n> > particular reason why something similar for, eg '-Inf' and 'Inf' doesn't\n> > also exist? Just discovered, there is a special number 'Infinity', which\n> > seems to be recognised, except you can't insert it into a table because it\n> > reports an overflow error. Getting warm, it seems, but not there yet. And\n> > there doesn't seem to be a negative equivalent.\n> And this is a bug. From looking at the source, I see that Thomas added\n> code to accept 'NaN' and 'Infinity' (but not '-Infinity'), and Tom Lane\n> tweaked it, but it's never been able to get an Infinity all the way to\n> the table, as far as I can see: the value gets set to HUGE_VAL, but the\n> call to CheckFloat8Val compares against FLOAT8_MAX (and FLOAT8_MIN),\n> and complains, since HUGE_VAL is _defined_ to be larger than DBL_MAX.\n> And, there's no test case in the regression tests for inserting NaN or\n> Infinity. (Shame on Thomas ;-)\n\nAh, I'm just trying to leave some rewarding work for other folks ;)\n\n> I think the right thing to do is move the call to CheckFloat8Val into a\n> branch of the test for NaN and Infinity, thereby not calling it if we've\n> been passed those constants. I'm compiling up a test of this right now,\n> and I'll submit a patch to Bruce if it passes regression. 
Looks like\n> that function hasn't been touch in a while, so the patch should apply\n> to 7.0.X as well as current CVS.\n\nistm that the existing protection (or something like it) is required for\nsome platforms, while other platforms may be able to handle NaN and\n+/-Inf just fine. Seems like a job for autoconf to determine the FP\ncapabilities of a system, unless Posix defines some way to tell. Of\ncourse, even then we'd need an autoconf test to deal with non-Posix\nplatforms.\n\n> Looks like it works, and passes the regression tests as they are. I'm\n> patching the tests to include the cases 'NaN', 'Infinity', and '-Infinity'\n> as valid float8s, and 'not a float' as an invalid representation, and\n> rerunning to get output to submit with the patch. This might be a bit\n> hairy, since there are 5 different expected/float8* files. Should I try\n> to hand patch them to deal with the new rows, or let them be regenerated\n> by people with the appropriate platforms?\n\nHow about setting up a separate test (say, ieee754.sql) so that\nnon-compliant platforms can still pass the original FP test suite. Then\nother platforms can be added in as they are tested.\n\nSome platforms may need their compiler switches tweaked; I haven't\nchecked the Alpha/DUnix configuration but I recall needing to fix some\nflags to get compiled code to move these edge cases around even just\nthrough subroutine calls. One example was in trying to call finite(),\nwhich threw an error during the call to it if the number was NaN or\nInfinity. Which sort of defeated the purpose of the call :)\n\n> Bigger problem with changing the float8 regression tests: a lot of our\n> math functions seem to be guarded with CheckFloat8Val(result), so, if we\n> allow these values in a float8 column, most of the math functions with\n> elog(). It strikes me that there must have been a reason for this at one\n> time. There's even a #define UNSAFE_FLOATS, to disable these checks. 
By\n> reading the comments in old copies of float.c, it looks like this was\n> added for an old, buggy linux/Alpha libc that would throw floating point\n> exceptions, otherwise.\n\nThere are still reasons on some platforms, as noted above...\n\n> Is there an intrinsic problem with allowing values outside the range\n> FLOAT8_MAX <= x =>FLOAT8_MIN ? 'ORDER BY' seems to still work, with\n> 'Infinity' and '-Infinity' sorting properly. Having a 'NaN' in there\n> breaks sorting however. That's a current, live bug. Could be fixed\n> by treating 'NaN' as a different flavor of NULL. Probably a fairly deep\n> change, however. Hmm, NULL in a float8 sorts to the end, regardless of\n> ASC or DESC, is that right?\n\nNULL and NaN are not quite the same thing imho. If we are allowing NaN\nin columns, then it is *known* to be NaN.\n\n> Anyway, here's the patch for just float.c , if anyone wants to look\n> at it. As I said, it passes the existing float8 regression tests, but\n> raises a lot of interesting questions.\n\nAre you interested in pursuing this further? It seems like we might be\nable to move in the direction you suggest on *some* platforms, but we\nwill need to scrub the math functions to be able to handle these edge\ncases.\n\n - Thomas\n", "msg_date": "Tue, 15 Aug 2000 03:27:55 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "On Tue, Aug 15, 2000 at 03:27:55AM +0000, Thomas Lockhart wrote:\n> \n> Not all computers fully support IEEE754, though many new ones do.\n\nTrue - the question becomes: how new is new? Are we supporting ones\nthat aren't? If so, that's fine. If not, it's a lot easier to fix. ;-)\n\n> > And, there's no test case in the regression tests for inserting NaN or\n> > Infinity. (Shame on Thomas ;-)\n> \n> Ah, I'm just trying to leave some rewarding work for other folks ;)\n\nAnd we appreciate the crumbs. 
Actually, it _was_ good practice grovelling\nout versions from CVS and matching log messages.\n\n> \n> istm that the existing protection (or something like it) is required for\n> some platforms, while other platforms may be able to handle NaN and\n> +/-Inf just fine. Seems like a job for autoconf to determine the FP\n> capabilities of a system, unless Posix defines some way to tell. Of\n> course, even then we'd need an autoconf test to deal with non-Posix\n> platforms.\n\nYeah, need to get Peter Eisentraut involved, perhaps. Should actually be\npretty simple: the #define is already there: UNSAFE_FLOATS. Define that,\nand the CheckFloat[48]Val functions just return true. \n\n> \n> How about setting up a separate test (say, ieee754.sql) so that\n> non-compliant platforms can still pass the original FP test suite. Then\n> other platforms can be added in as they are tested.\n\nHmm, I wish we had a clue what other systems might be non-compliant, and how.\nThe question becomes one of whether it's _possible_ to support NaN, +/-Inf on\nsome platforms. Then, we end up with a difference in functionality.\n\n> \n> > Is there an intrinsic problem with allowing values outside the range\n> > FLOAT8_MAX <= x =>FLOAT8_MIN ? 'ORDER BY' seems to still work, with\n> > 'Infinity' and '-Infinity' sorting properly. Having a 'NaN' in there\n> > breaks sorting however. That's a current, live bug. Could be fixed\n> > by treating 'NaN' as a different flavor of NULL. Probably a fairly deep\n> > change, however. Hmm, NULL in a float8 sorts to the end, regardless of\n> > ASC or DESC, is that right?\n> \n> NULL and NaN are not quite the same thing imho. If we are allowing NaN\n> in columns, then it is *known* to be NaN.\n\nFor the purposes of ordering, however, they are very similar. Neither one\ncan be placed correctly with respect to the other values: NULL, because\nwe don't know where it really is, NaN because we know it's not even on\nthis axis. 
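Giving these unplaceable values a stable position only requires the comparator to impose an arbitrary total order — for instance, grouping every NaN after all other values, the way NULLs sort to the end. A hypothetical sketch (not the backend's actual btree comparison routine; assumes a C99-style isnan()):

```c
#include <math.h>

/* Hypothetical NaN-aware comparator -- a sketch, not backend code.
 * Imposes a total order by sorting every NaN after all other values,
 * the way NULLs sort to the end. */
static int
float8_cmp_nan_last(double a, double b)
{
    if (isnan(a))
        return isnan(b) ? 0 : 1;    /* NaN sorts last */
    if (isnan(b))
        return -1;

    /* -Infinity and +Infinity order correctly under plain
     * comparisons on IEEE754 hardware. */
    if (a < b)
        return -1;
    if (a > b)
        return 1;
    return 0;
}
```

Wired into qsort() or the type's sort support, something like this would keep a stray NaN from partitioning an ORDER BY result into separately sorted runs.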
I'm suggesting that we fix sorting on floats to treat NaN as\nNULL, and sort it to the end. As it stands, sorting is broken, since it\nreturns the NaN rows wherever they happen to be. This causes them to\nact as barriers, partitioning the returned set into separately sorted\nsubsequences.\n\n> \n> > Anyway, here's the patch for just float.c , if anyone wants to look\n> > at it. As I said, it passes the existing float8 regression tests, but\n> > raises a lot of interesting questions.\n> \n> Are you interested in pursuing this further? It seems like we might be\n> able to move in the direction you suggest on *some* platforms, but we\n> will need to scrub the math functions to be able to handle these edge\n> cases.\n\nSure. I'm no great floating point wiz, but I'm sure Tom and Don will\njump on anything I get wrong. Duping the float tests and feeding them\nNaN/+/-Inf as a separate test set is probably a good idea.\n\nThe existing patch moves the call to CheckFloat8Val() inside float8in\nso it is only called if strtod() consumes all its input, and does not\nset errno. Seems safe to me: if strtod() doesn't consume its input,\nwe check to make sure it's not NaN/+/-Inf (gah, need a shorthand word\nfor those three values), else elog(). If it does, but sets errno,\nwe catch that. Then, in belt-and-suspenders style, we call\nCheckFloat8Val(). For that check to fail, strtod() would have to consume\nits entire input, return +/-HUGE_VAL, and _not_ set errno to ERANGE.\n\nBTW, this also brings up something that got discussed before the last\nrelease, but never implemented. The original problem from Tim Allen had\nto do with using a workaround for not having +/- Inf: storing the values\n-1.79769313486232e+308 and 1.79769313486232e+308. He was having trouble,\nsince a pg_dump/restore cycle broke, due to rounding the values out of\nrange for floats. 
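That dump/restore failure is easy to reproduce outside the backend: printing DBL_MAX with 15 significant digits (roughly the precision float8out used by default) rounds the last digit up, and the resulting string no longer fits in a double. A small hypothetical demo (the function name is made up for illustration):

```c
#include <errno.h>
#include <float.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical demo of the dump/restore overflow described above.
 * Printing DBL_MAX with 15 significant digits yields
 * "1.79769313486232e+308" -- rounded up past DBL_MAX -- so parsing
 * it back overflows.  Returns 1 if the round trip overflows on
 * this platform. */
static int
float8_roundtrip_overflows(void)
{
    char buf[64];
    double restored;

    snprintf(buf, sizeof buf, "%.15g", DBL_MAX);   /* "dump" */
    errno = 0;
    restored = strtod(buf, NULL);                  /* "restore" */
    return errno == ERANGE || restored > DBL_MAX;
}
```

Printing with 17 significant digits instead round-trips IEEE754 doubles exactly, which is one fix on the output side that doesn't involve accepting non-finite input.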
This wasn't caught by the current regression tests,\nbut would have been caught by the dump/restore/dump/compare\ncycle someone suggested for exercising the *out and *in functions.\n\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Tue, 15 Aug 2000 11:21:57 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "Thomas - \nA general design question. There seems to be a good reason to allow +/-Inf\nin float8 columns: Tim Allen has a need for them, for example. That's\npretty straightforward, they seem to act properly if the underlying float\nlibs handle them.\n\nI'm not convinced NaN gives us anything useful, especially given how\nbadly it breaks sorting. I've been digging into that code a little,\nand it's not going to be pretty. It strikes me as wrong to embed\ntype-specific info into the generic sorting routines.\n\nSo, anyone have any ideas what NaN would be useful for? Especially given\nwe have NULL available, which most (non DB) numeric applications don't.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n\n", "msg_date": "Tue, 15 Aug 2000 11:33:27 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "> So, anyone have any ideas what NaN would be useful for? Especially given\n> we have NULL available, which most (non DB) numeric applications don't.\n\nHmm. With Tom Lane's new fmgr interface, you *can* return NULL if you\nspot a NaN result. 
Maybe that is the best way to go about it; we'll\nstipulate that NaN and NULL are equivalent. And we'll further stipulate\nthat if you are messing with NaN then you deserve what you get ;)\n\n - Thomas\n", "msg_date": "Tue, 15 Aug 2000 16:53:26 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "I can't say whether its worth the trouble to add NaN, but I will say that NaN\nis not the same as NULL. NULL is missing data; NaN is 0./0. The difference\nis significant for numerical applications.\n\nTim\n\n\"Ross J. Reedstrom\" wrote:\n\n> Thomas -\n> A general design question. There seems to be a good reason to allow +/-Inf\n> in float8 columns: Tim Allen has an need for them, for example. That's\n> pretty straight forward, they seem to act properly if the underlying float\n> libs handle them.\n>\n> I'm not convinced NaN gives us anything useful, especially given how\n> badly it breaks sorting. I've been digging into that code a little,\n> and it's not going to be pretty. It strikes me as wrong to embed type\n> specific info into the generic sorting routines.\n>\n> So, anyone have any ideas what NaN would be useful for? Especially given\n> we have NULL available, which most (non DB) numeric applications don't.\n>\n> Ross\n> --\n> Ross J. Reedstrom, Ph.D., <[email protected]>\n> NSBRI Research Scientist/Programmer\n> Computer and Information Technology Institute\n> Rice University, 6100 S. Main St., Houston, TX 77005\n\n--\nTimothy H. Keitt\nNational Center for Ecological Analysis and Synthesis\n735 State Street, Suite 300, Santa Barbara, CA 93101\nPhone: 805-892-2519, FAX: 805-892-2510\nhttp://www.nceas.ucsb.edu/~keitt/\n\n\n\n", "msg_date": "Tue, 15 Aug 2000 09:57:30 -0700", "msg_from": "\"Timothy H. 
Keitt\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "\n>I'm not convinced NaN gives us anything useful, especially given how\n>badly it breaks sorting. I've been digging into that code a little,\n>and it's not going to be pretty. It strikes me as wrong to embed type\n>specific info into the generic sorting routines.\n>\n>So, anyone have any ideas what NaN would be useful for? Especially given\n>we have NULL available, which most (non DB) numeric applications don't.\n>\n>Ross\n\nJust a wild guess, NaN could be used to indicated invalid numeric \ndata. However, this seems odd because it should have been checked prior to \nbeing put in the DB.\n\nNULL is no value, +/- infinity could be just that or out of range, unless \nyou want NaN to be out of range. Depending on your scheme for \nrepresentation you could take an out of range value and store it as +/i \ninfinity.\n\nThese are just suggestions.\n\nThomas\n\n", "msg_date": "Tue, 15 Aug 2000 12:07:39 -0500", "msg_from": "Thomas Swan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "Hi,\n\njust to add my opinion on NaN in the IEEE standard. As far as I\nremember, IEEE numbers work as follows:\n\n1 bit sign\nsome bits base\nsome bits exponent\n\nThis allows you to do several things:\n\ninterpret the exp bits as a normal integer and get\n- exp=below half: negative exponents\n- exp=half: exponent=0\n- exp=above half: positive exponents\n- exp=all set: NaN, quite a few at that\n\nFor all of these the sign can be either positive or negative, leading\nto pos/neg zero (quite a strange concept).\n\nWith the NaNs, you get quite a few possibilities, but notably:\n- base=0 (NaN -- this is not a number, but an animal)\n- base=max (pos/neg infinity, depending on sign)\n\nSomeone mentioned a representation for 0/0 and I might add that there\nare four possibilities:\n\t(( 1.)*0.) 
/ (( 1.)*0.)\n\t(( 1.)*0.) / ((-1.)*0.)\n\t((-1.)*0.) / (( 1.)*0.)\n\t((-1.)*0.) / ((-1.)*0.)\nThese (given commutativity, except that we're dealing with a finite\nrepresentation, but predictable in that it is actually possible to\nfactor out the sign) can be reduced to:\n\t( 1) * (0./0.)\n\t(-1) * (0./0.)\nwhich amounts to pos/neg infinity of some sort.\n\nNow my take on NULL vs NaN is that there should be a whole bunch of\nNULL, just like there is a whole bunch of NaN. Just off the top of my\nhead, I could imagine \"unknown\", \"unknowable\", \"out of range in\ndirection X\". But, alas, the SQL standard doesn't provide for such\nthings (though the storage implementation would: but what would you do\nwith comparisons, conversions and displays?).\n\nso long,\n\nOliver\n", "msg_date": "15 Aug 2000 19:25:06 +0200", "msg_from": "Oliver Seidel <[email protected]>", "msg_from_op": false, "msg_subject": "NaN" }, { "msg_contents": "(Side note: Folks, we need a real bug/issue-tracking system. We just\ndiscussed this a month ago (\"How PostgreSQL's floating point hurts\neveryone everywhere\"). If anyone's interested in porting Bugzilla or some\nother such system to PostgreSQL and putting it into use, let me know.)\n\nRoss J. Reedstrom writes:\n\n> Yeah, need to get Peter Eisentraut involved, perhaps. Should actually be\n> pretty simple: the #define is already there: UNSAFE_FLOATS. Define that,\n> and the CheckFloat[48]Val functions just return true. \n\nShow me a system where it doesn't work and we'll get it to work.\nUNSAFE_FLOATS as it stands it probably not the most appropriate behaviour;\nit intends to speed things up, not make things portable.\n\n\n> > NULL and NaN are not quite the same thing imho. If we are allowing NaN\n> > in columns, then it is *known* to be NaN.\n> \n> For the purposes of ordering, however, they are very similar.\n\nThen we can also treat them similar, i.e. 
sort them all last or all first.\nIf you have NaN's in your data you wouldn't be interested in ordering\nanyway.\n\n\n> we check to make sure it's not NaN/+/-Inf (gah, need a shorthand word\n> for those three values), else elog().\n\n\"non-finite values\"\n\n\nSide note 2: The paper \"How Java's floating point hurts everyone\neverywhere\" provides for good context reading.\n\nSide note 3: Once you read that paper you will agree that using floating\npoint with Postgres is completely insane as long as the FE/BE protocol is\ntext-based.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 20 Aug 2000 00:33:00 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "Thomas Lockhart writes:\n\n> > So, anyone have any ideas what NaN would be useful for? Especially given\n> > we have NULL available, which most (non DB) numeric applications don't.\n> \n> Hmm. With Tom Lane's new fmgr interface, you *can* return NULL if you\n> spot a NaN result. Maybe that is the best way to go about it; we'll\n> stipulate that NaN and NULL are equivalent. And we'll further stipulate\n> that if you are messing with NaN then you deserve what you get ;)\n\nI beg to differ, this behaviour would not be correct. 
Instead, this should\nhappen:\n\nNULL < NULL\t=> NULL\nNULL < 1.0\t=> NULL\nNULL < NaN\t=> NULL\n1.0 < NULL\t=> NULL\n1.0 < NaN\t=> false\nNaN < NULL\t=> NULL\nNaN < 1.0\t=> false\n\nThen all the NaN's sort either all first or all last before or after the\nNULLs.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 20 Aug 2000 00:33:49 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "At 12:33 AM 8/20/00 +0200, Peter Eisentraut wrote:\n>(Side note: Folks, we need a real bug/issue-tracking system. We just\n>discussed this a month ago (\"How PostgreSQL's floating point hurts\n>everyone everywhere\"). If anyone's interested in porting Bugzilla or some\n>other such system to PostgreSQL and putting it into use, let me know.)\n\nOpenACS and arsDigita are using Ben Adida's software development manager,\nwhich includes a ticket-tracking module. It's still under development,\nbut you can take a look at www.openacs.org/sdm to see how we're using\nit.\n\nIt was developed for Postgres (which is what you see at the above URL)\nthen ported to Oracle (which is what arsDigita does). aD has also\nadded some functionality which is supposed to be ported back to the\nPostgres version.\n\nAmong other things it integrates with a todo list manager that maintains\nindividual todo lists for developers ... 
you're assigned a bug, it\nends up on your todo list.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sat, 19 Aug 2000 16:26:44 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "\nBen has an account on Hub, and aolserver has been installed, so if you\nguys want to install and get this working, just tell me what else you need\nas far as software and/or configurations are concerned and \"it shall be\ndone\" ...\n\nOn Sat, 19 Aug 2000, Don Baccus wrote:\n\n> At 12:33 AM 8/20/00 +0200, Peter Eisentraut wrote:\n> >(Side note: Folks, we need a real bug/issue-tracking system. We just\n> >discussed this a month ago (\"How PostgreSQL's floating point hurts\n> >everyone everywhere\"). If anyone's interested in porting Bugzilla or some\n> >other such system to PostgreSQL and putting it into use, let me know.)\n> \n> OpenACS and arsDigita are using Ben Adida's software development manager,\n> which includes a ticket-tracking module. It's still under development,\n> but you can take a look at www.openacs.org/sdm to see how we're using\n> it.\n> \n> It was developed for Postgres (which is what you see at the above URL)\n> then ported to Oracle (which is what you arsDigita does). aD has also\n> added some functionality which is supposed to be ported back to the\n> Postgres version.\n> \n> Among other things it integrates with a todo list manager that maintains\n> individual todo lists for developers ... you're assigned a bug, it\n> ends up on your todo list.\n> \n> \n> \n> \n> - Don Baccus, Portland OR <[email protected]>\n> Nature photos, on-line guides, Pacific Northwest\n> Rare Bird Alert Service and other goodies at\n> http://donb.photo.net.\n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 19 Aug 2000 21:01:37 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "At 09:01 PM 8/19/00 -0300, The Hermit Hacker wrote:\n>\n>Ben has an account on Hub, and aolserver has been installed, so if you\n>guys want to install and get this working, just tell me what else you need\n>as far as software and/or configurations are concerned and \"it shall be\n>done\" ...\n\nI've e-mailed Ben a \"heads-up\", though he monitors this list and will probably\nsee your note.\n\nI'll be gone about five of the next 6-7 weeks mostly doing my annual stint as\na field biologist where I'm only accessible once a day via radio by BLM\nDispatch\nin Elko, Nevada so I'm afraid this (like many other things at the moment) will\nfall on Ben's shoulders...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sat, 19 Aug 2000 17:07:01 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "\nhttp://www.postgresql.org/bugs/\n\nI was about to implement it once before and the directory disappeared.\nBut anyway it's there.\n\nVince.\n\n\nOn Sat, 19 Aug 2000, The Hermit Hacker wrote:\n\n> \n> Ben has an account on Hub, and aolserver has been installed, so if you\n> guys want to install and get this working, just tell me what else you need\n> as far as software and/or configurations are concerned and \"it shall be\n> done\" ...\n> \n> On Sat, 19 Aug 2000, Don Baccus wrote:\n> \n> > At 12:33 AM 8/20/00 +0200, Peter Eisentraut wrote:\n> > >(Side note: Folks, we need a real 
bug/issue-tracking system. We just\n> > >discussed this a month ago (\"How PostgreSQL's floating point hurts\n> > >everyone everywhere\"). If anyone's interested in porting Bugzilla or some\n> > >other such system to PostgreSQL and putting it into use, let me know.)\n> > \n> > OpenACS and arsDigita are using Ben Adida's software development manager,\n> > which includes a ticket-tracking module. It's still under development,\n> > but you can take a look at www.openacs.org/sdm to see how we're using\n> > it.\n> > \n> > It was developed for Postgres (which is what you see at the above URL)\n> > then ported to Oracle (which is what you arsDigita does). aD has also\n> > added some functionality which is supposed to be ported back to the\n> > Postgres version.\n> > \n> > Among other things it integrates with a todo list manager that maintains\n> > individual todo lists for developers ... you're assigned a bug, it\n> > ends up on your todo list.\n> > \n> > \n> > \n> > \n> > - Don Baccus, Portland OR <[email protected]>\n> > Nature photos, on-line guides, Pacific Northwest\n> > Rare Bird Alert Service and other goodies at\n> > http://donb.photo.net.\n> > \n> \n> Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sat, 19 Aug 2000 20:34:43 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "At 08:34 PM 8/19/00 -0400, Vince Vielhaber wrote:\n>\n>http://www.postgresql.org/bugs/\n>\n>I was about to implement it once before and the directory disappeared.\n>But anyway it's there.\n\nCool, I tried it and broke it on my second click ... any particular reason\nto roll your own rather than use something that's already being used by\nseveral other development projects and is under active development for\nthat reason? (i.e. the SDM)\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sat, 19 Aug 2000 17:43:13 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "Don Baccus wrote:\n> \n> >(Side note: Folks, we need a real bug/issue-tracking system. We just\n> >discussed this a month ago (\"How PostgreSQL's floating point hurts\n> >everyone everywhere\"). If anyone's interested in porting Bugzilla or some\n> >other such system to PostgreSQL and putting it into use, let me know.)\n\nistm that it is *not* that easy. 
We tried (very briefly) a bug tracking\nsystem. Whatever technical problems it had (which other tools may not),\nthe fundamental problem is that the mailing lists do a *great* job of\nscreening problem reports while also supporting and enhancing the\n\"Postgres culture\", whereas a \"bug report tool\" eliminates that traffic\nand requires one or a few people to pay attention to the bug list to\nmanage new and existing bug reports.\n\nThis has (or could have) a *huge* impact on the culture and tradition of\nPostgres development, which imho is one of the most satisfying,\npleasant, and effective environments in open source development. So if\nwe try to do something with a bug tracking system, we will need to\nfigure out:\n\no how to retain a free and helpful discussion on the mailing lists, and\nto not degrade into a \"shut up and check the bug reports\" response.\n\no how to filter or qualify bug reports so that developers don't spend\ntime having to do that.\n\nAll imho of course ;)\n\n - Thomas\n", "msg_date": "Sun, 20 Aug 2000 01:22:39 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "At 01:22 AM 8/20/00 +0000, Thomas Lockhart wrote:\n\n>istm that it is *not* that easy. We tried (very briefly) a bug tracking\n>system. Whatever technical problems it had (which other tools may not),\n>the fundamental problem is that the mailing lists do a *great* job of\n>screening problem reports while also supporting and enhancing the\n>\"Postgres culture\", whereas a \"bug report tool\" eliminates that traffic\n>and requires one or a few people to pay attention to the bug list to\n>manage new and existing bug reports.\n\nIn the SDM you can, of course, ask to be notified of various events\nby e-mail. And there's a commenting facility so in essence a bug\nreport or feature request starts a conversation thread. 
\n\nI don't recall saying that the SDM is simply a \"bug report tool\". There's\nquite a bit more to it, and the goal is to INCREASE interactivity, not\ndecrease it.\n\n>This has (or could have) a *huge* impact on the culture and tradition of\n>Postgres development, which imho is one of the most satisfying,\n>pleasant, and effective environments in open source development. So if\n>we try to do something with a bug tracking system, we will need to\n>figure out:\n>\n>o how to retain a free and helpful discussion on the mailing lists, and\n>to not degrade into a \"shut up and check the bug reports\" response.\n\nThis is a social, not software, engineering issue.\n\n>o how to filter or qualify bug reports so that developers don't spend\n>time having to do that.\n\nDevelopers don't have to filter or qualify bug reports e-mailed to the\nbugs list today? Who's doing it, then, and why can't they continue doing\nso if another tool is used to manage bug reports? \n\nThe SDM allows a little more granularity than the single e-mail list\naproach allows for. You can designate modules within a package. For\ninstance, psql might be a module with Peter assigned as an administrator,\nand he might elect to get e-mail alerts whenever a bug is submitted to\nfor psql.\n\nBut he might not, for instance, be particularly interested in getting\ne-mail alerts on (say) the JDBC driver.\n\nThere's a certain amount of delegation inherent in an approach like\nthis, and developers focused on narrow portions of the product (and\nPeter came to mind because of psql, I'm not suggesting he only has\na narrow interest in the product) can arrange to only get nagged, if\nyou will, for stuff they've taken responsibility for.\n\nMy guess is that such a system probably isn't as cozy and useful for\ndevelopers, as you're implying.\n\nI think it might well be more friendly for users, though. 
Certainly\nthe OpenACS and arsDigita communities - both fairly large though\nnot as long in the tooth as PG, I might add - seem to appreciate having\naccess to such a system.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sat, 19 Aug 2000 18:28:54 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "On Sat, 19 Aug 2000, Don Baccus wrote:\n\n> At 08:34 PM 8/19/00 -0400, Vince Vielhaber wrote:\n> >\n> >http://www.postgresql.org/bugs/\n> >\n> >I was about to implement it once before and the directory disappeared.\n> >But anyway it's there.\n> \n> Cool, I tried it and broke it on my second click ... any particular reason\n> to roll your own rather than use something that's already being used by\n> several other development projects and is under active development for\n> that reason? (i.e. the SDM)\n\nLike I said, the dir disappeared before I could commit it, probably some\nconfig stuff too. We tried a couple of already in use items and frankly\nI got tired of learning a new package that noone used anyway. I figured\nat least this one could be more of what we needed. It logs the problem\nin the database and emails the bugs list (I may have the wrong list\naddr in there too). The status can be changed, entries can be made as\nto the status. 
\n\nWhat did you do to break it and what broke?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sat, 19 Aug 2000 22:52:52 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "\nIf Ben and gang are willing to work on the bug tracking system to get it\nto fit what we want/need, it would give a good example of OpenACS and gang\nfor them to use ... we could just shot \"wants\" at them and they could\nimprove as we go along?\n\nOn Sat, 19 Aug 2000, Vince Vielhaber wrote:\n\n> On Sat, 19 Aug 2000, Don Baccus wrote:\n> \n> > At 08:34 PM 8/19/00 -0400, Vince Vielhaber wrote:\n> > >\n> > >http://www.postgresql.org/bugs/\n> > >\n> > >I was about to implement it once before and the directory disappeared.\n> > >But anyway it's there.\n> > \n> > Cool, I tried it and broke it on my second click ... any particular reason\n> > to roll your own rather than use something that's already being used by\n> > several other development projects and is under active development for\n> > that reason? (i.e. the SDM)\n> \n> Like I said, the dir disappeared before I could commit it, probably some\n> config stuff too. We tried a couple of already in use items and frankly\n> I got tired of learning a new package that noone used anyway. I figured\n> at least this one could be more of what we needed. It logs the problem\n> in the database and emails the bugs list (I may have the wrong list\n> addr in there too). The status can be changed, entries can be made as\n> to the status. 
\n> \n> What did you do to break it and what broke?\n> \n> Vince.\n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n> 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 20 Aug 2000 00:02:35 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "On Sat, 19 Aug 2000, Don Baccus wrote:\n\n> >o how to filter or qualify bug reports so that developers don't spend\n> >time having to do that.\n> \n> Developers don't have to filter or qualify bug reports e-mailed to the\n> bugs list today? Who's doing it, then, and why can't they continue\n> doing so if another tool is used to manage bug reports?\n\nthe problem as I see it with any bug tracking tool is someone has to close\noff those bugs when fixed ... right now, someone commits a bug fix, and\nthen fires off an email to the list stating its fixed ... 
with a bug\ntracking system, then they have to go one more step, open up a web\nbrowser, login to the system, find the bug report and close it ...\n\n\n", "msg_date": "Sun, 20 Aug 2000 00:07:21 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "On Sun, 20 Aug 2000, The Hermit Hacker wrote:\n\n> \n> If Ben and gang are willing to work on the bug tracking system to get it\n> to fit what we want/need, it would give a good example of OpenACS and gang\n> for them to use ... we could just shot \"wants\" at them and they could\n> improve as we go along?\n\nFine by me, BUT I have no desire to learn how it works. If that's gonna\nbe the end result then rm -Rf is my preference. This'll be the third or\nforth one so far, the others were pushed off the edge of the earth. Sorry\nto be so harsh but no matter what the bug tool is I can't see it lasting\nvery long. This group has shown repeatedly that it's not as desired as\nit appears to be.\n\nVince.\n\n> \n> On Sat, 19 Aug 2000, Vince Vielhaber wrote:\n> \n> > On Sat, 19 Aug 2000, Don Baccus wrote:\n> > \n> > > At 08:34 PM 8/19/00 -0400, Vince Vielhaber wrote:\n> > > >\n> > > >http://www.postgresql.org/bugs/\n> > > >\n> > > >I was about to implement it once before and the directory disappeared.\n> > > >But anyway it's there.\n> > > \n> > > Cool, I tried it and broke it on my second click ... any particular reason\n> > > to roll your own rather than use something that's already being used by\n> > > several other development projects and is under active development for\n> > > that reason? (i.e. the SDM)\n> > \n> > Like I said, the dir disappeared before I could commit it, probably some\n> > config stuff too. We tried a couple of already in use items and frankly\n> > I got tired of learning a new package that noone used anyway. I figured\n> > at least this one could be more of what we needed. 
It logs the problem\n> > in the database and emails the bugs list (I may have the wrong list\n> > addr in there too). The status can be changed, entries can be made as\n> > to the status. \n> > \n> > What did you do to break it and what broke?\n> > \n> > Vince.\n> > -- \n> > ==========================================================================\n> > Vince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n> > 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n> > Online Campground Directory http://www.camping-usa.com\n> > Online Giftshop Superstore http://www.cloudninegifts.com\n> > ==========================================================================\n> > \n> > \n> > \n> > \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sat, 19 Aug 2000 23:20:08 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "FWIW, I agree with Thomas' comments: our last try at a bug-tracking\nsystem was a spectacular failure, and rather than just trying again\nwith a technically better bug-tracker, we need to understand why we\nfailed before.\n\nI think we do need to look for a better answer than what we have, but\nI do not have any faith in \"install system FOO and all your problems\nwill be solved\".\n\nMy take is that\n\n(a) a bug *tracking* system is not the same as a bug 
*reporting*\nsystem. A tracking system will be useless if it gets cluttered\nwith non-bug reports, duplicate entries, etc. There must be a human\nfilter controlling what gets entered into the system.\n\n(b) our previous try (with Keystone) was a failure in part because\nit was not even effective as a bug reporting system: it did not\nencourage people to fill in our standard \"bug report form\", with the\nresult that bug reports were seriously incomplete w.r.t. version\nnumbers, platforms, etc. This is a relatively simple technical\ndeficiency, not a social-engineering problem, but it does point up\nthe fact that one-size-fits-all solutions fit nobody.\n\n(c) fill-in-the-web-form reporting systems suck. They make it\ndifficult to copy-and-paste query output, dump files, etc.\nAlso, the window for entering text is always either too small or too\nlarge. Email with attachments is fundamentally superior.\n\n(d) divorcing the bug reporting system from the existing mailing\nlist community is foolish, as Thomas pointed out. When a bug report\nis a non-bug (user error, etc) or fixed in a later version or just\na duplicate, we tend to rely on the rest of the community to give\nthe reporter a helpful response. Funneling reports into a separate\nsystem that is only read by a few key developers will lose.\n\nI'm not sure what I want, but I know what I don't want...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 Aug 2000 23:22:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "The Hermit Hacker wrote:\n\n> the problem as I see it with any bug tracking tool is someone has to close\n> off those bugs when fixed ... right now, someone commits a bug fix, and\n> then fires off an email to the list stating its fixed ... 
with a bug\n> tracking system, then they have to go one more step, open up a web\n> browser, login to the system, find the bug report and close it ...\n\nWith something like the SDM, the developer can also just mark that bug fixed\nonline and whoever requested notifications (like maybe the pgsql-hackers\nmailing list) can get a one-liner automated email. This is great for\nindividual users who are particularly interested in seeing one bug fixed:\nthey request notification on that bug, and only get notifications that\npertain to it...\n\n-Ben\n\n", "msg_date": "Sun, 20 Aug 2000 01:10:54 -0400", "msg_from": "Ben Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "Tom Lane wrote:\n\n> FWIW, I agree with Thomas' comments: our last try at a bug-tracking\n> system was a spectacular failure, and rather than just trying again\n> with a technically better bug-tracker, we need to understand why we\n> failed before.\n\nOkay, well then this is an interesting discussion: the OpenACS project\nand my company (OpenForce) would very much be interested in discussing\nwhat you guys think would be a useful bug tracking system. What kind of\nfeatures would it need? Nothing is too fancy here, the idea is to have\nthe best possible system, one that focused developers like the PG team\nwould use.\n\nMaybe we should take this offline? 
I'm happy to keep this discussion\ngoing and hear everything you've got to say: understanding what it would\ntake to build a system you guys would use is *very* important data.\n\n-Ben\n\n", "msg_date": "Sun, 20 Aug 2000 01:15:57 -0400", "msg_from": "Ben Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "Perhaps this is foolhardy, but it might be worth making a list of\nrequirements, at least so we can tick boxes when considering any system...\n\n\n>(a) a bug *tracking* system is not the same as a bug *reporting*\n>system. A tracking system will be useless if it gets cluttered\n>with non-bug reports, duplicate entries, etc. There must be a human\n>filter controlling what gets entered into the system.\n\n1. Human filtering of 'incoming' reports.\n\n2. Separation of 'bug reports' from 'bugs'. \n\n\n>(b) our previous try (with Keystone) was a failure in part because\n>it was not even effective as a bug reporting system: it did not\n>encourage people to fill in our standard \"bug report form\", with the\n>result that bug reports were seriously incomplete w.r.t. version\n>numbers, platforms, etc. This is a relatively simple technical\n>deficiency, not a social-engineering problem, but it does point up\n>the fact that one-size-fits-all solutions fit nobody.\n\n3. Web and email submissions should do data verification and reject\nincomplete reports (giving reasons).\n\n\n>(c) fill-in-the-web-form reporting systems suck. They make it\n>difficult to copy-and-paste query output, dump files, etc.\n>Also, the window for entering text is always either too small or too\n>large. Email with attachments is fundamentally superior.\n\n[I disagree with the above (web *can* work), but...]\n\n4. 
Must support email AND web submissions, or at least email submissions\nand web reporting.\n\n\n>(d) divorcing the bug reporting system from the existing mailing\n>list community is foolish, as Thomas pointed out. \n\n5. Must integrate with mailing lists.\n\n\nAnd to add some of my own (suggested) requirements:\n\n6. Require: name, email address, OS & version, PG version, short description.\n\n7. Optional: compiler & version, long description, file attachments.\n\n8. Creation of 'bug reports' is a public function. Creation of 'bug\nentries' is a priv'd function.\n\n9. Simple reporting - unprocessed bug reports, open bugs, bugs by module etc. \n\n\nI have tried to keep this relatively simple in an effort to define what we\nneed to make it work in the current context. But I'm sure I've missed\nthings.... \n\n\n<YABRS>\nAs it happens, I have a Perl/PGSql based bug-tracking system that I give my\nclients access to, which does about 80-90% of this, and I would be willing\nto GPL it.\n\nBefore anybody asks, the reason I rolled my own was because there weren't\nmany that supported email and web submission, and the ones that did, did\nnot support PG easily. This system also implements my preferred model for\nbug reporting which is to separate (to use my terminology) 'Incidents' from\n'Issues': an Incident is an event (or set of events) that causes a user to\nmake a report. An Issue is an individual item that needs attention (usually\na single bug). Typically users send email (or web forms) and I create one\nor more Issues. When two Incidents (bug reports) are made about the same\nIssue, then the system allows the two to be linked, so that when the Issue\nis fixed, all Incident owners are notified etc.\n\nThe email integration does very little validation, and it is *not*\nintegrated with a mailing list (but it is on my ToDo list). 
I had planned\nto do the following:\n\n- When a message is received and the subject does not start with 'Re:' (or\n'Aw:'!), submit it as a bug report. If the bug report code returns an\nerror, then reject it from the list. If the bug report works, then send it\non to Majordomo & Mhonarc.\n\n- If the message starts with 'Re:' then just submit it to the list.\n\nLet me know if anyone would be interested, and I can set up a sample\n'product' for people to play with and then, if it is still worth pursuing,\nmake the code a little more presentable (and decrease it's reliance on my\nown perl libraries).\n\nBear in mind that this is designed for an Apache/mod-perl environment, so\nmay not be acceptable to some people. \n</YABRS>\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 20 Aug 2000 15:20:19 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "On Sun, 20 Aug 2000, Ben Adida wrote:\n\n> The Hermit Hacker wrote:\n> \n> > the problem as I see it with any bug tracking tool is someone has to close\n> > off those bugs when fixed ... right now, someone commits a bug fix, and\n> > then fires off an email to the list stating its fixed ... with a bug\n> > tracking system, then they have to go one more step, open up a web\n> > browser, login to the system, find the bug report and close it ...\n> \n> With something like the SDM, the developer can also just mark that bug\n> fixed online and whoever requested notifications (like maybe the\n> pgsql-hackers mailing list) can get a one-liner automated email. 
This\n> is great for individual users who are particularly interested in\n> seeing one bug fixed: they request notification on that bug, and only\n> get notifications that pertain to it...\n\nwhat we'd need would be soemthing where a person enters the bug report, it\ngets email'd out to -bugs so that developers see it ... if a developer\ncomments on it, it should go from taht email into the database ... if a\ndeveloper fixes it, there should be an easy email mechanism whereby the\ndeveloper can close it ... \n\nmaybe somethign so simple as those with commit access, who can fix the\nbugs, have a passwd that they can include, like majordomo, in their email\nto a central 'admin' mailbox that they can send a mesage to like:\n\npasswd <passwd> close #1\n\n\n", "msg_date": "Sun, 20 Aug 2000 02:25:16 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "Tom Lane writes:\n\n> (a) a bug *tracking* system is not the same as a bug *reporting*\n> system. A tracking system will be useless if it gets cluttered\n> with non-bug reports, duplicate entries, etc. There must be a human\n> filter controlling what gets entered into the system.\n\nLetting any user submit bug reports directly into any such system is\ncertainly not going to work, we'd have \"query does not use index\" 5 times\na day. I consider the current *reporting* procedure pretty good; web forms\nare overrated in my mind.\n\nWhat I had in mind was more a databased incarnation of the TODO list. I\nmean, who are we kidding, we are writing a database and maintain the list\nof problems in flat text. The TODO list has already moved to the\nTODO.detail extension, but we could take it a bit further.\n\nI think currently too many issues get lost, or discussed over and over\nagain. Many developers maintain their own little lists. 
The TODO list\noften cannot be deciphered by end users and hence does not get read.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 20 Aug 2000 11:18:28 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Tom Lane writes:\n> \n> > (a) a bug *tracking* system is not the same as a bug *reporting*\n> > system. A tracking system will be useless if it gets cluttered\n> > with non-bug reports, duplicate entries, etc. There must be a human\n> > filter controlling what gets entered into the system.\n> \n> Letting any user submit bug reports directly into any such system is\n> certainly not going to work, we'd have \"query does not use index\" 5 times\n> a day. I consider the current *reporting* procedure pretty good; web forms\n> are overrated in my mind.\n> \n> What I had in mind was more a databased incarnation of the TODO list. I\n> mean, who are we kidding, we are writing a database and maintain the list\n> of problems in flat text. The TODO list has already moved to the\n> TODO.detail extension, but we could take it a bit further.\n> \n\nI maintain my todo items for my projects in a postgres database. 
But\nthere are a lot of issues to consider there too:\n\n- a table of projects (or topics)\n- a table of todo items with synopsis, full description, ...\n- a table of versions (item is planned to be solved in version, x.x.x,\nactually solved in y.y.y)\n- a table of developers\n- assign table (projects -> developers, items -> developers)\n- change type: bug,doc,rfe (request for enhancement),idc (internal\ndesign change), ...\n- change state (accepted, evaluated, fixed, rejected, incomplete,\ncommitted, ...\n- severity or priority of each item, project\n- search functionality\n\nRegards\nWim\n", "msg_date": "Sun, 20 Aug 2000 13:19:35 +0200", "msg_from": "Wim Ceulemans <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "On Sun, 20 Aug 2000, Peter Eisentraut wrote:\n\n> Tom Lane writes:\n> \n> > (a) a bug *tracking* system is not the same as a bug *reporting*\n> > system. A tracking system will be useless if it gets cluttered\n> > with non-bug reports, duplicate entries, etc. There must be a human\n> > filter controlling what gets entered into the system.\n> \n> Letting any user submit bug reports directly into any such system is\n> certainly not going to work, we'd have \"query does not use index\" 5 times\n> a day. I consider the current *reporting* procedure pretty good; web forms\n> are overrated in my mind.\n> \n> What I had in mind was more a databased incarnation of the TODO list. I\n> mean, who are we kidding, we are writing a database and maintain the list\n> of problems in flat text. The TODO list has already moved to the\n> TODO.detail extension, but we could take it a bit further.\n> \n> I think currently too many issues get lost, or discussed over and over\n> again. Many developers maintain their own little lists. 
The TODO list\n> often cannot be deciphered by end users and hence does not get read.\n\nA TODO list that one can add comments to ...\n\n", "msg_date": "Sun, 20 Aug 2000 11:17:28 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "I'm putting on my suits-type suit for just a moment.\n \nIn order to Conquer The Universe(tm) why don't we just call it \"PG\"?\n \n -dlj.\n \n\n\n", "msg_date": "Sun, 20 Aug 2000 16:37:30 -0400", "msg_from": "\"David Lloyd-Jones\" <[email protected]>", "msg_from_op": false, "msg_subject": "How Do You Pronounce \"PostgreSQL\"?" }, { "msg_contents": "On Sun, Aug 20, 2000 at 12:33:00AM +0200, Peter Eisentraut wrote:\n<snip side comment about bug tracking. My input: for an email controllable\nsystem, take a look at the debian bug tracking system>\n\n> Show me a system where it doesn't work and we'll get it to work.\n> UNSAFE_FLOATS as it stands it probably not the most appropriate behaviour;\n> it intends to speed things up, not make things portable.\n> \n\nI agree. In the previous thread on this, Thomas suggested creating a flag\nthat would allow control turning the CheckFloat8Val function calls into\na macro NOOP. Sound slike a plan to me.\n\n> \n> > > NULL and NaN are not quite the same thing imho. If we are allowing NaN\n> > > in columns, then it is *known* to be NaN.\n> > \n> > For the purposes of ordering, however, they are very similar.\n> \n> Then we can also treat them similar, i.e. sort them all last or all first.\n> If you have NaN's in your data you wouldn't be interested in ordering\n> anyway.\n\nRight, but the problem is that NULLs are an SQL language feature, and\nthere for rightly special cased directly in the sorting apparatus. NaN is\ntype specific, and I'd be loath to special case it in the same place. 
As\nit happens, I've spent some time this weekend groveling through the sort\n(and index, as it happens) code, and have an idea for a type specific fix.\n\nHere's the deal, and an actual, honest to goodness bug in the current code.\n\nAs it stands, we allow one non-finite to be stored in a float8 field:\nNaN, with partial parsing of 'Infinity'.\n\nAs I reported last week, NaNs break sorts: they act as barriers, creating\nsorted subsections in the output. As those familiar with the code have\nalready guessed, there is a more serious bug: NaNs break indices on\nfloat8 fields, essentially chopping the index off at the first NaN.\n\nFixing this turns out to be a one liner to btfloat8cmp.\n\nFixing sorts is a bit trickier, but can be done: Currently, I've hacked\nthe float8lt and float8gt code to sort NaN to after +/-Infinity. (since\nNULLs are special cased, they end up sorting after NaN). I don't see\nany problems with this solution, and it gives the desired behavior.\n\nI've attached a patch which fixes all the sort and index problems, as well\nas adding input support for -Infinity. This is not a complete solution,\nsince I haven't done anything with the CheckFloat8Val test. On my\nsystem (linux/glibc2.1) compiling with UNSAFE_FLOATS seems to work fine \nfor testing.\n\n> \n> Side note 2: The paper \"How Java's floating point hurts everyone\n> everywhere\" provides for good context reading.\n\nhttp://http/cs.berkeley.edu/~wkahan/JAVAhurt.pdf ? I'll take a look at it\nwhen I get in to work Monday.\n\n> \n> Side note 3: Once you read that paper you will agree that using floating\n> point with Postgres is completely insane as long as the FE/BE protocol is\n> text-based.\n\nProbably. But it's not our job to enforce sanity, right? Another way to think\nabout it is fixing the implementation so the deficiencies of the FE/BE stand\nout in a clearer light. ;-)\n\nRoss\n-- \nRoss J. 
Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005", "msg_date": "Sun, 20 Aug 2000 17:08:28 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "\nwhat's wrong wth \"Post-Gres-QL\"?\n\nI find it soooo simple to pronounce *shrug*\n\nOn Sun, 20 Aug 2000, David Lloyd-Jones wrote:\n\n> I'm putting on my suits-type suit for just a moment.\n> \n> In order to Conquer The Universe(tm) why don't we just call it \"PG\"?\n> \n> -dlj.\n> \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 20 Aug 2000 19:48:23 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"?" }, { "msg_contents": "At 01:15 20/08/00 -0400, Ben Adida wrote:\n>Okay, well then this is an interesting discussion: the OpenACS project\n>and my company (OpenForce) would very much be interested in discussing\n>what you guys think would be a useful bug tracking system. \n\nSo am I.\n\n>\n>Maybe we should take this offline? \n\nIf you do decide to go offline, I'd appreciate some CCs...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 21 Aug 2000 11:44:49 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "If you all don't mind, I think it'd be great to keep this discussion on the\nlist. As some of you know, Great Bridge is working on its own project\nhosting site (including a PG-backed bug tracking module). We're not quite\nready to pull back the curtain yet, but are getting close, and will be\nactively soliciting input (and hacks) from the community.\n\nThe process of getting hacker requirements for such a system is a very\nuseful one, IMHO...\n\nRegards,\nNed\n\n\nPhilip Warner wrote:\n\n> At 01:15 20/08/00 -0400, Ben Adida wrote:\n> >Okay, well then this is an interesting discussion: the OpenACS project\n> >and my company (OpenForce) would very much be interested in discussing\n> >what you guys think would be a useful bug tracking system.\n>\n> So am I.\n>\n> >\n> >Maybe we should take this offline?\n>\n> If you do decide to go offline, I'd appreciate some CCs...\n>\n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Sun, 20 Aug 2000 22:22:27 -0400", "msg_from": "Ned Lilly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> ! 
\tswitch (isinf(num))\n> ! \t{\n> ! \t\tcase -1:\n> ! \t\t\tPG_RETURN_CSTRING(strcpy(ascii, \"-Infinity\"));\n> ! \t\t\tbreak;\n> ! \t\tcase 1:\n> ! \t\t\tPG_RETURN_CSTRING(strcpy(ascii, \"Infinity\"));\n> ! \t\t\tbreak;\n> ! \t\tdefault:\n> ! \t\t\tbreak;\n> ! \t}\n\nMy man page for isinf() sez:\n\n isinf() returns a positive integer if x is +INFINITY, or a negative\n integer if x is -INFINITY. Otherwise it returns zero.\n\nso the above switch statement is making an unportable assumption about\nexactly which positive or negative value will be returned.\n\n> + \tif (isnan(arg2)) PG_RETURN_BOOL(1); \n\nPG_RETURN_BOOL(true), please...\n\n> ! \tif (isnan(a))\n> ! \t\tPG_RETURN_INT32(1);\n\nDo not like this at all --- doesn't it make the result of btint4cmp(NaN,\nNaN) dependent on which argument chances to be first? Seems to me that\nyou must consider two NaNs to be equal, unless you want to subdivide\nthe category of NaNs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 Aug 2000 01:25:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's " }, { "msg_contents": "At 10:22 PM 8/20/00 -0400, Ned Lilly wrote:\n>If you all don't mind, I think it'd be great to keep this discussion on the\n>list. As some of you know, Great Bridge is working on its own project\n>hosting site (including a PG-backed bug tracking module). We're not quite\n>ready to pull back the curtain yet, but are getting close, and will be\n>actively soliciting input (and hacks) from the community.\n\nSo - again - why roll your own instead of build upon a base which at least\nis already seeing some use, by some fairly large organizations (arsDigita\nis larger and more deeply funded than Great Bridge, making profits to boot,\nand I won't even start talking about AOL)? 
arsDigita is putting some\ndeveloper\neffort into the SDM, so it's no longer just Ben and whoever he can rope into\nhelping out.\n\nCouldn't you guys more profitably spend time, say, working on outer joins\nrather than doing something like this?\n\nThe folks working on the SDM have a LOT more web/db development experience\nthan whoever's rolling your bug tracking system. \n\nI keep sniffing the odor of NIH ...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 21 Aug 2000 06:59:06 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "\n>At 10:22 PM 8/20/00 -0400, Ned Lilly wrote:\n>>If you all don't mind, I think it'd be great to keep this discussion on the\n>>list. As some of you know, Great Bridge is working on its own project\n>>hosting site (including a PG-backed bug tracking module). We're not quite\n>>ready to pull back the curtain yet, but are getting close, and will be\n>>actively soliciting input (and hacks) from the community.\n\nAnother implication which missed me first time 'round is that Great Bridge\nmight be planning to have its own bug reporting system, separate from \nthat used by the development community at large?\n\nI hope not. There should be one central place for bug reporting. 
If \nGreat Bridge wants to run it, fine, also if Great Bridge wants to be able \nto incorporate some sort of prioritization system for those with paid\nsupport (or some other discriminatory system) it is still probably better\nto figure out a way to accommodate it rather than have two separate \nbug reporting systems.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 21 Aug 2000 07:35:10 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "On Mon, 21 Aug 2000, Don Baccus wrote:\n\n> At 10:22 PM 8/20/00 -0400, Ned Lilly wrote:\n> >If you all don't mind, I think it'd be great to keep this discussion on the\n> >list. As some of you know, Great Bridge is working on its own project\n> >hosting site (including a PG-backed bug tracking module). We're not quite\n> >ready to pull back the curtain yet, but are getting close, and will be\n> >actively soliciting input (and hacks) from the community.\n> \n> So - again - why roll your own instead of build upon a base which at least\n> is already seeing some use, by some fairly large organizations (arsDigita\n> is larger and more deeply funded than Great Bridge, making profits to boot,\n> and I won't even start talking about AOL)? arsDigita is putting some\n> developer\n> effort into the SDM, so it's no longer just Ben and whoever he can rope into\n> helping out.\n> \n> Couldn't you guys more profitably spend time, say, working on outer joins\n> rather than doing something like this?\n> \n> The folks working on the SDM have a LOT more web/db development experience\n> than whoever's rolling your bug tracking system. 
\n> \n> I keep sniffing the odor of NIH ...\n\nCould it be possible that folks are shying away because of having\nto install and learn an entire webserver and tools and then the \nbug tracker on top of that?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 21 Aug 2000 10:48:17 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "Vince Vielhaber wrote:\n\n> Could it be possible that folks are shying away because of having\n> to install and learn an entire webserver and tools and then the\n> bug tracker on top of that?\n\nAnd so the solution is to build a totally new system?\n\nComing from the Postgres team, this is relatively surprising. Postgres is a great\ntool. I dropped Oracle and learned Postgres because I thought it would eventually\nbecome a better tool, and because it was already better in many ways. It took\ntime and effort to do so, but eventually it was the right thing to do because I\ncan now make full use of a very powerful open-source database.\n\nIt seems to me that the whole point of Open-Source vs. Not Invented Here is that\nyou are *supposed* to go out and make the effort necessary to learn new tools\nthat can then become extremely useful. If you accept the attitude of \"it's not\nApache/mod-perl so I'm not using it,\" then it's time to stop all criticism of\nthose who use MySQL, Oracle, Windows, etc... 
Those people are *used* to their\ntechnology, and the only reason they refuse to switch is that they don't want to\nspend time learning something new.\n\nJust my 2 cents.... the useful tools are not always the ones everyone is using.\n\n-Ben\n\n", "msg_date": "Mon, 21 Aug 2000 11:01:49 -0400", "msg_from": "Ben Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "On Mon, 21 Aug 2000, Ben Adida wrote:\n\n> Vince Vielhaber wrote:\n> \n> > Could it be possible that folks are shying away because of having\n> > to install and learn an entire webserver and tools and then the\n> > bug tracker on top of that?\n> \n> And so the solution is to build a totally new system?\n\nIn some cases yes. \n \n> Coming from the Postgres team, this is relatively surprising. Postgres is a great\n> tool. I dropped Oracle and learned Postgres because I thought it would eventually\n> become a better tool, and because it was already better in many ways. It took\n> time and effort to do so, but eventually it was the right thing to do because I\n> can now make full use of a very powerful open-source database.\n\nI am *NOT* \"the Postgres team\". But have you listened to what you & Don\nare suggesting that we, or for that matter anyone else in need of a bug\ntracking system, do? You want us to install the full blown arsDigita with\nall the bells and whistles just for a bug tracker. That's like saying I \nneed a pickup truck to move a chair so I'm going to go out and get a new\nFreightLiner with a 55' trailer to do the job.\n\n> It seems to me that the whole point of Open-Source vs. Not Invented Here is that\n> you are *supposed* to go out and make the effort necessary to learn new tools\n> that can then become extremely useful. If you accept the attitude of \"it's not\n> Apache/mod-perl so I'm not using it,\" then it's time to stop all criticism of\n> those who use MySQL, Oracle, Windows, etc...
Those people are *used* to their\n> technology, and the only reason they refuse to switch is that they don't want to\n> spend time learning something new.\n\nYou missed the point. It's called overkill. You needed a full blown\ndatabase for your project. We need (although _want_ may be another story)\na bug tracker - not a new webserver.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 21 Aug 2000 11:12:51 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "Tom - \nThanks for the review. Here's a new version of the patch, fixing the two\nyou objected to. Unfortunately, I seem to have found another corner case\nin the existing code that needs fixing. Here's the one line version:\n\nUse of an index in an ORDER BY DESC result changes placement of NULLs\n(and NaNs, now) from last returned to first returned tuples\n\nLong version:\n\nWhile examining the output from ORDER BY queries, both using and not using\nan index, I came across a discrepancy: the explicit handling of NULLs in\nthe tuplesort case always sorts NULLs to the end, regardless of direction\nof sort. Intellectually, I kind of like that: \"We don't know what these are,\nlet's just tack them on the end\". I implemented NaN sorting to emulate that\nbehavior.
This also has the pleasant property that NULL (or NaN) are never\nreturned as > or < any other possible value, as should be expected.\n\nHowever, if an index is involved, the index gets built, and the NULL\nvalues are stored at one end of the index. So, when an ORDER BY DESC is\nrequested, the index is just read backwards, sending the NULLs (and NaNs)\nfirst. (They're still not returned from a query with a clause such as\nWHERE f1 > 0.)\n\nAn example of the output is attached, from the regress float8 table (with\na NULL and non-finites added. Don't need the non-finites to display\nthe problem, though, since it's NULLs as well) Note the blank row,\nwhich is the NULL, moves from the bottom to the top in the last case,\nusing the index.\n\nSo, what way should we go here? Make ASC/DESC actual mirrors of each other\nin the direct sort case, as well? Hack the index scan to know about nodes\nthat always go to the end? Document it as a quirk? (Not likely: selection of\nplan should never affect output.)\n\nTo make the direct sort the same as the index read would work for NULL,\nbut for NaN would either require allowing NaN to be returned as >\nInfinity, which doesn't happen now, or add another ordering operator\nthat is only used for the sort case (use of '>' and '<' seems to be\nhardcoded all the way to the parser)\n\nThoughts?\n\nRoss\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005", "msg_date": "Mon, 21 Aug 2000 11:59:13 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "On Mon, Aug 21, 2000 at 07:35:10AM -0700, Don Baccus wrote:\n> \n> >At 10:22 PM 8/20/00 -0400, Ned Lilly wrote:\n> >>If you all don't mind, I think it'd be great to keep this discussion on the\n> >>list.
As some of you know, Great Bridge is working on its own project\n> >>hosting site (including a PG-backed bug tracking module). We're not quite\n> >>ready to pull back the curtain yet, but are getting close, and will be\n> >>actively soliciting input (and hacks) from the community.\n> \n> Another implication which missed me first time 'round is that Great Bridge\n> might be planning to have its own bug reporting system, separate from \n> that used by the development community at large?\n\n\tCool your conspiracy theories. I'm not yet involved with either side\nof this discussion, but before it runs out of control...\n\n> I hope not. There should be one central place for bug reporting. If \n> Great Bridge wants to run it, fine, also if Great Bridge wants to be able \n> to incorporate some sort of prioritization system for those with paid\n> support (or some other discriminatory system) it is still probably better\n> to figure out a way to accommodate it rather than have two separate \n> bug reporting systems.\n\n\tThe fact is that postgres already has a very good system for keeping\ntrack of issues from report to fix to verification. So far the main defect\nis the obvious one of \"People don't know the history unless they troll the\nmessage archives or lurk\". Everyone here is leery of \"fixing\" a working\nsystem. Especially when it entails modifying the working system to deal\nwith a new issue database.\n\n\tBug Database/Issue Trackers can be done in two ways.\n\nSomeone can grab an off-the-shelf system like Bugzilla or this ArsTechnica \nthing and then try to make the project conform to it. So far, everyone \nI've talked to who has touched Bugzilla has said that it sucks.
I don't \nknow anything about this other proposed system but it will probably require\na lot of time to even get people to use it regularly, much less use it well.\n\nThe other method is to create the system to match the process in place.\nSince the postgres project is already very well organized, I personally would\nlike to see the custom system, rather than make Bruce throw away his TODO\nlist and use someone else's idea of an issue tracking system. It takes a lot\nmore work--someone has to pay attention to what is going on with the project\nand make sure the database stays in sync, but in the long run, it is less\ndisruptive and smoother to integrate into an already working project like\nthis one.\n\n\n-- \nAdam Haberlach | \"A farm tractor is not a motorcycle.\"\[email protected] | --California DMV 1999\nhttp://www.newsnipple.com/ | Motorcycle Driver Handbook\n", "msg_date": "Mon, 21 Aug 2000 10:28:18 -0700", "msg_from": "Adam Haberlach <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> While examining the output from ORDER BY queries, both using and not using\n> an index, I came across a discrepancy: the explicit handling of NULLs in\n> the tuplesort case always sorts NULLs to the end, regardless of direction\n> of sort.\n\nYeah. I think that's widely considered a bug --- we have a TODO item to\nfix it.
You might care to dig in the archives for prior discussions.\n\n> > To make the direct sort the same as the index read would work for NULL,\n> > but for NaN would either require allowing NaN to be returned as >\n> > Infinity, which doesn't happen now,\n\nSeems to me the sort order should be\n\n\t-Infinity\n\tnormal values\n\t+Infinity\n\tother types of NaN\n\tNULL\n\nand the reverse in a descending sort.\n\n> > or add another ordering operator that is only used for the sort case\n> > (use of '>' and '<' seems to be hardcoded all the way to the parser)\n\ndon't even think about that...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 Aug 2000 13:39:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's " }, { "msg_contents": "At 10:48 AM 8/21/00 -0400, Vince Vielhaber wrote:\n\n>Could it be possible that folks are shying away because of having\n>to install and learn an entire webserver and tools and then the \n>bug tracker on top of that?\n\nLearning to use AOLserver is going to be harder than writing a\nbugtracker and associated tools from scratch? I find that hard to\nbelieve. \n\nIf it's true, of course they could run Apache, since arsDigita\nprovides a module which implements the AOLserver API in Apache\nfor exactly this reason, thus making it possible to run the\ntoolkit (including the SDM) under Apache.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 21 Aug 2000 10:47:41 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "At 11:12 AM 8/21/00 -0400, Vince Vielhaber wrote:\n\n>I am *NOT* \"the Postgres team\".
But have you listened to what you & Don\n>are suggesting that we, or for that matter anyone else in need of a bug\n>tracking system, do? You want us to install the full blown arsDigita with\n>all the bells and whistles just for a bug tracker. That's like saying I \n>need a pickup truck to move a chair so I'm going to go out and get a new\n>FreightLiner with a 55' trailer to do the job.\n\nA rather dubious analogy. \n\nIt takes me less than half a day to install AOLserver, Postgres and the\ntoolkit on a virgin system, including setting up user accounts, etc.\n\nAnd ... you never know. Other parts of the toolkit might turn out to be\nuseful.\n\nIf not, just leave them turned off. Take a look at openacs.org - do you\nfind any traces of the e-commerce module there? The intranet company\nmanagement module? What do you see? You see the use of perhaps 10% of\nthe toolkit.\n\nThis is slightly different than hauling an 18-wheeler around. Software\nand trucks bear little resemblance to each other, though Freightliner does\nhave its home here in Portland, OR.\n\nAnd, of course, the SDM has a bit more functionality than a simple bugtracker,\nwhich is just one piece. It will be gaining more functionality over time,\nincluding increased integration with CVS (there is already some integration,\ni.e. the ability to browse the tree).\n\n>You missed the point. It's called overkill. You needed a full blown\n>database for your project.
We need (although _want_ may be another story)\n>a bug tracker - not a new webserver.\n\nThen run it under Apache.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 21 Aug 2000 11:00:44 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "At 10:28 AM 8/21/00 -0700, Adam Haberlach wrote:\n>On Mon, Aug 21, 2000 at 07:35:10AM -0700, Don Baccus wrote:\n\n>> Another implication which missed me first time 'round is that Great Bridge\n>> might be planning to have its own bug reporting system, separate from \n>> that used by the development community at large?\n>\n>\tCool your conspiracy theories.\n\nI'm making an observation, that's all. Cool your own wild theories, please.\n\n>\n>> I hope not. There should be one central place for bug reporting. If \n>> Great Bridge wants to run it, fine, also if Great Bridge wants to be able \n>> to incorporate some sort of prioritization system for those with paid\n>> support (or some other discriminatory system) it is still probably better\n>> to figure out a way to accommodate it rather than have two separate \n>> bug reporting systems.\n>\n>\tThe fact is that postgres already has a very good system for keeping\n>track of issues from report to fix to verification.\n\n<shrug> I didn't raise the subject, it was a core developer who started\nthis thread with a semi-rant about it being about time that the project\nhad decent bug tracking software.\n\nSo apparently not everyone in the community agrees with your analysis.
If\nthere were consensus that the current system's great then Great Bridge\nwouldn't\nbe looking at implementing something different, and Peter wouldn't be ranting\nthat something better is needed.\n\n> So far the main defect\n>is the obvious one of \"People don't know the history unless they troll the\n>message archives or lurk\". Everyone here is leery of \"fixing\" a working\n>system.\n\nThere seems to be some disagreement about how well it works. Again, I didn't\nraise the issue, I simply responded with a possible solution when one of the\ncore developers raised it. And I know that Great Bridge wants to do something\nweb-based - this isn't some fantasy I dreamed up when in a psychotic state.\n\nI'm only saying that if a different approach is to be taken, why not build\non something that exists, is under active development, and is being driven\nby folks who are VERY open to working with the project to make the tool\nfit the project rather than vice-versa?\n\n>\n>Someone can grab an off-the-shelf system like Bugzilla or this ArsTechnica \n\narsDigita\n\n>thing and then try to make the project conform to it.\n\nRead above. Ben's already posted that he's eager for design input.
aD has\nalready enhanced the thing based on their own needs, and there's no reason\nwhy Great Bridge and the Postgres crew can't do the same.\n\n>I don't \n>know anything about this other proposed system but it will probably require\n>a lot of time to even get people to use it regularly, much less use it well.\n\nStrange, OpenACS folk use it regularly and all we've done is put a \"report\na bug\" link on the home page.\n\nI haven't heard so many arguments against change since the VT100 started\nreplacing the KSR 35!\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 21 Aug 2000 11:34:45 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "* Don Baccus <[email protected]> [000821 11:48] wrote:\n> At 11:12 AM 8/21/00 -0400, Vince Vielhaber wrote:\n> \n> >You missed the point. It's called overkill. You needed a full blown\n> >database for your project.
We need (although _want_ may be another story)\n> >a bug tracker - not a new webserver.\n> \n> Then run it under Apache.\n\nSorry to jump in without reading the entire thread, but has GNATS\n(what the FreeBSD team uses) or Bugzilla come up as options?\n\nGNATS is a bit crusty but works pretty ok for us.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Mon, 21 Aug 2000 11:50:13 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "On Mon, 21 Aug 2000, Don Baccus wrote:\n\n> At 10:48 AM 8/21/00 -0400, Vince Vielhaber wrote:\n> \n> >Could it be possible that folks are shying away because of having\n> >to install and learn an entire webserver and tools and then the \n> >bug tracker on top of that?\n> \n> Learning to use AOLserver is going to be harder than writing a\n> bugtracker and associated tools from scratch? I find that hard to\n> believe. \n\nLearning how to use it is only a tiny part of it. You still have to\nmigrate your website to it. It's not a drop in replacement. So\nwriting a bugtracker that will fit the environment vs learning a new\nwebserver & migrating your website & rebuilding or rewriting custom \napps ... For the average, busy admin, don't count too heavily on\nthe latter. They're more likely to stick with what they know and\ntrust regardless of how good something else is reported to be.\n\n> If it's true, of course they could run Apache, since arsDigita\n> provides a module which implements the AOLserver API in Apache\n> for exactly this reason, thus making it possible to run the\n> toolkit (including the SDM) under Apache.\n\nFirst I heard of this, but I'd also have concerns of its reliability.\nIt has to be real new. And if it fails it's not ars that looks bad,\nit's the site that's running it.
Remember the flack over udmsearch?\nPostgreSQL was slammed over and over because udmsearch wasn't working\nright. \n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 21 Aug 2000 14:54:12 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "On Mon, 21 Aug 2000, Don Baccus wrote:\n\n> At 11:12 AM 8/21/00 -0400, Vince Vielhaber wrote:\n> \n> >I am *NOT* \"the Postgres team\". But have you listened to what you & Don\n> >are suggesting that we, or for that matter anyone else in need of a bug\n> >tracking system, do? You want us to install the full blown arsDigita with\n> >all the bells and whistles just for a bug tracker. That's like saying I \n> >need a pickup truck to move a chair so I'm going to go out and get a new\n> >FreightLiner with a 55' trailer to do the job.\n> \n> A rather dubious analogy. \n> \n> It takes me less than half a day to install AOLserver, Postgres and the\n> toolkit on a virgin system, including setting up user accounts, etc.\n\nHow familiar are you with it as opposed to most others on the net?\n \n> And ... you never know. Other parts of the toolkit might turn out to be\n> useful.\n> \n> If not, just leave them turned off. Take a look at openacs.org - do you\n> find any traces of the e-commerce module there? The intranet company\n> management module? What do you see? You see the use of perhaps 10% of\n> the toolkit.\n> \n> This is slightly different than hauling an 18-wheeler around.
Software\n> and trucks bear little resemblance to each other, though Freightliner does\n> have its home here in Portland, OR.\n\nYou really don't get it do you? I'm not comparing software and trucks,\nI'm comparing tools to do a job. Once you grasp that concept you'll be\nable to catch on to what I'm talking about - until then I'm just wasting\nmy time.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 21 Aug 2000 14:58:03 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "At 02:54 PM 8/21/00 -0400, Vince Vielhaber wrote:\n>On Mon, 21 Aug 2000, Don Baccus wrote:\n\n>> Learning to use AOLserver is going to be harder than writing a\n>> bugtracker and associated tools from scratch? I find that hard to\n>> believe. \n\n>Learning how to use it is only a tiny part of it. You still have to\n>migrate your website to it. It's not a drop in replacement. So\n>writing a bugtracker that will fit the environment vs learning a new\n>webserver & migrating your website & rebuilding or rewriting custom \n>apps ... For the average, busy admin, don't count too heavily on\n>the latter. They're more likely to stick with what they know and\n>trust regardless of how good something else is reported to be.\n\nSo run the development portion of the site on a different server.
Who\nsaid anything about migrating the entire postgres site to AOLserver???\n\n>> If it's true, of course they could run Apache, since arsDigita\n>> provides a module which implements the AOLserver API in Apache\n>> for exactly this reason, thus making it possible to run the\n>> toolkit (including the SDM) under Apache.\n>\n>First I heard of this, but I'd also have concerns of its reliability.\n>It has to be real new.\n\nYep. Written by Robert Thau, one of the original eight core Apache\ndevelopers, under contract to aD.\n\n>And if it fails it's not ars that looks bad,\n>it's the site that's running it.\n\narsDigita gets on average greater than $500,000 to develop and deploy\na website.\n\nIf aD deploys one on Apache+mod_aolserver (they paid for the development\nof this module) and it falls over, do you really believe aD won't look\nbad?\n\nSeeing as they'd very likely be sued, I think you're wrong.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 21 Aug 2000 12:24:49 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "At 02:58 PM 8/21/00 -0400, Vince Vielhaber wrote:\n>On Mon, 21 Aug 2000, Don Baccus wrote:\n\n>> It takes me less than half a day to install AOLserver, Postgres and the\n>> toolkit on a virgin system, including setting up user accounts, etc.\n>\n>How familiar are you with it as opposed to most others on the net?\n\nBen has already offered to help out and has an account on hub. If I weren't\nleaving town for five of the next six or so weeks, I would too.\n\nStill, we have strangers to all three pieces installing everything in a\nweekend, usually with a bit of help.
If you were to install it, you'd\nbe familiar with Postgres which is about 1/3 of the confusion for newcomers.\n\n>You really don't get it do you?\n\nYes, I do.\n\n> I'm not comparing software and trucks,\n>I'm comparing tools to do a job. Once you grasp that concept you'll be\n>able to catch on to what I'm talking about - until then I'm just wasting\n>my time.\n\nDo I detect a flame? Ohhh...\n\nIt's a toolkit, Vincent. Once you grasp the notion that using a wrench\nout of your toolbox doesn't mean you have to use every tool in the\ntoolbox you'll be able to grasp what I'm talking about. \n\nI answered your previous question plainly. If you're not capable of\nunderstanding the answer, don't answer with an invitation for a flamefest\nyou have no chance of winning, OK?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 21 Aug 2000 12:30:46 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "On Mon, Aug 21, 2000 at 01:39:32PM -0400, Tom Lane wrote:\n> \"Ross J. Reedstrom\" <[email protected]> writes:\n> > While examining the output from ORDER BY queries, both using and not using\n> > an index, I came across a discrepancy: the explicit handling of NULLs in\n> > the tuplesort case always sorts NULLs to the end, regardless of direction\n> > of sort.\n> \n> Yeah. I think that's widely considered a bug --- we have a TODO item to\n> fix it.
You might care to dig in the archives for prior discussions.\n\nI'll take a look.\n\n> \n> > To make the direct sort the same as the index read would work for NULL,\n> > but for NaN would either require allowing NaN to be returned as >\n> > Infinity, which doesn't happen now,\n> \n> Seems to me the sort order should be\n> \n> \t-Infinity\n> \tnormal values\n> \t+Infinity\n> \tother types of NaN\n> \tNULL\n> \n> and the reverse in a descending sort.\n> \n> > or add another ordering operator that is only used for the sort case\n> > (use of '>' and '<' seems to be hardcoded all the way to the parser)\n> \n> don't even think about that...\n\nSure, but any ideas _how_ to get 'NaN > +Infinity' to happen only during a\nsort operation, and not when '>' is called explicitly as a WHERE condition,\nby touching only type specific code? 'Cause I'd call it a bug to be able\nto say:\n\nSELECT * from foo where f8 > 'Infinity';\n\nand get anything at all back.\n\nNULL is taken care of by special casing in the sort code, as I already mentioned,\nand can be fixed immediately.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Mon, 21 Aug 2000 14:32:08 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "At 04:34 PM 8/21/00 -0300, The Hermit Hacker wrote:\n\n>Ummm, just stupid question here, but I already posted that AOLserver was\n>installed and ready for you guys to implement this ... Ben already has an\n>account to get in and do it ... instead of arguing about it, why not just\n>do it? If nobody likes it/uses it, so be it ... its no skin off my\n>back.
But right now the arguments that are going back and forth seem to\n>be sooooooo useless since they *seem* to involve technical issues that\n>aren't issues ...\n\nWell, this is a breath of fresh air ...\n\nHopefully Ben will have time soon to do so. Unfortunately (well, not from\nmy personal point of view!) I'm about to leave for five of the next six\nweeks, four of those spent where the internet doesn't reach (where nothing\nbut BLM radio dispatch doesn't reach, to be more precise) so I'm not going\nto be able to help.\n\nOtherwise I'd just jump in.\n\nOf course, putting it up will just make its shortcomings apparent, and \nthose who resist even the concept of change will ignore the fact that\nBen and I have both stated that modifying the SDM to fit the project\nrather than modifying the project to fit the SDM and say, \"see, this\ndoesn't fit our needs!\" blah blah blah.\n\nJust as the red herring you've mentioned has been raised.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 21 Aug 2000 12:36:12 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "\nWell, just to throw another piece of information into the\nmix, there is a new bug-tracking system under development by\nthe folks at collab.net. They call it \"scarab\", and last I\ntalked to them that they thought it would be ready for\nproduction use Real Soon Now: \n\n http://scarab.tigris.org/\n\nI guess they regard this as a replacement for bugzilla. \n\n\nSome personal opinions: \n\n(1) I would actually like to see bugzilla fixed rather than\nreplaced. The collab.net guys are into java servlets\nbecause they're CS geeks who are down on perl.
Me, I'm a\nperl loyalist who thinks that Larry Wall is onto something\n-- mathematical elegance may not be the right standard to\njudge a computer language.\n\n(2) And maybe it'd be nice if the \"religious wars\" could be\ndropped in favor of pure objective technical decision\nmaking, but i don't think they can be: the social and the\ntechnical don't neatly split into two little piles. (A case\nin point: the argument that using a mailing list for bug\ncontrol is somehow \"warmer\" or \"more human\" than a bug\ndatabase.)\n \n", "msg_date": "Mon, 21 Aug 2000 12:40:19 -0700", "msg_from": "Joe Brenner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's) " }, { "msg_contents": "[first off, I got rid of that awful cc: list..... ARggghhh....]\nDon Baccus wrote:\n> At 04:34 PM 8/21/00 -0300, The Hermit Hacker wrote:\n> >Ummm, just stupid question here, but I already posted that AOLserver was\n> >installed and ready for you guys to implement this ... Ben already has an\n> >account to get in and do it ... instead of arguing about it, why not just\n \n> Well, this is a breath of fresh air ...\n \n> Hopefully Ben will have time soon to do so.
Unfortunately (well, not from\n\nI'm available to help some, and also have a hub account (as well as an\noperational AOLserver+PostgreSQL+OpenACS site to work on here....).\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 21 Aug 2000 15:45:36 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "On Mon, 21 Aug 2000, Don Baccus wrote:\n\n> At 02:58 PM 8/21/00 -0400, Vince Vielhaber wrote:\n> >On Mon, 21 Aug 2000, Don Baccus wrote:\n> \n> >> It takes me less than half a day to install AOLserver, Postgres and the\n> >> toolkit on a virgin system, including setting up user accounts, etc.\n> >\n> >How familiar are you with it as opposed to most others on the net?\n> \n> Ben has already offered to help out and has an account on hub. If I weren't\n> leaving town for five of the next six or so weeks, I would too.\n> \n> Still, we have strangers to all three pieces installing everything in a\n> weekend, usually with a bit of help. If you were to install it, you'd\n> be familiar with Postgres which is about 1/3 of the confusion for newcomers.\n> \n> >You really don't get it do you?\n> \n> Yes, I do.\n> \n> > I'm not comparing software and trucks,\n> >I'm comparing tools to do a job. Once you grasp that concept you'll be\n> >able to catch on to what I'm talking about - until then I'm just wasting\n> >my time.\n> \n> Do I detect a flame? Ohhh...\n> \n> It's a toolkit, Vincent. Once you grasp the notion that using a wrench\n> out of your toolbox doesn't mean you have to use every tool in the\n> toolbox you'll be able to grasp what I'm talking about. \n\nYet you insist on shoving the entire toolbox down our throats every time\nthere's a task to be done, Donnie.\n\n> I answered your previous question plainly.
If you're not capable of\n> understanding the answer, don't answer with an invitation for a flamefest\n> you have no chance of winning, OK?\n\nWhen the day comes that you actually answer a question without telling\nthe world that openacs, arsdigita, aolserver or whatever you want to call\nit is the answer and saviour to everything from world peace to who cares\nwhat else, I'll believe you answered a \"question plainly\". But that's\nnot likely to ever happen. And since you think that anything I've said\nso far has been a flame I doubt you'd know one if it slapped you with a\ntrout. \n\nOne more thing, Donnie... ***PLONK***\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 21 Aug 2000 15:47:33 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "On Mon, 21 Aug 2000, Don Baccus wrote:\n\n> At 04:34 PM 8/21/00 -0300, The Hermit Hacker wrote:\n> \n> >Ummm, just stupid question here, but I already posted that AOLserver was\n> >installed and ready for you guys to implement this ... Ben already has an\n> >account to get in and do it ... instead of arguing about it, why not just\n> >do it? If nobody likes it/uses it, so be it ... its no skin off my\n> >back. But right now the arguments that are going back and forth seem to\n> >be sooooooo useless since they *seem* to involve technical issues that\n> >aren't issues ...\n> \n> Well, this is a breath of fresh air ...\n> \n> Hopefully Ben will have time soon to do so. 
Unfortunately (well, not from\n> my personal point of view!) I'm about to leave for five of the next six\n> weeks, four of those spent where the internet doesn't reach (where nothing\n> but BLM radio dispatch doesn't reach, to be more precise) so I'm not going\n> to be able to help.\n> \n> Otherwise I'd just jump in.\n> \n> Of course, putting it up will just make its shortcomings apparent, and \n> those who resist even the concept of change will ignore the fact that\n> Ben and I have both stated that modifying the SDM to fit the project\n> rather than modifying the project to fit the SDM and say, \"see, this\n> doesn't fit our needs!\" blah blah blah.\n> \n> Just as the red herring you've mentioned has been raised.\n\nRight now, you've thrown out \"it can do this, it can do that\" ... I've\nput forth the resources so that you can prove that it will work, its in\nyour court now :)\n\n\n", "msg_date": "Mon, 21 Aug 2000 16:54:04 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "On Mon, 21 Aug 2000, Vince Vielhaber wrote:\n\n> On Mon, 21 Aug 2000, Don Baccus wrote:\n> \n> > At 10:48 AM 8/21/00 -0400, Vince Vielhaber wrote:\n> > \n> > >Could it be possible that folks are shying away because of having\n> > >to install and learn an entire webserver and tools and then the \n> > >bug tracker on top of that?\n> > \n> > Learning to use AOLserver is going to be harder than writing a\n> > bugtracker and associated tools from scratch? I find that hard to\n> > believe. \n> \n> Learning how to use it is only a tiny part of it. You still have to\n> migrate your website to it. It's not a drop in replacement. So\n> writing a bugtracker that will fit the environment vs learning a new\n> webserver & migrating your website & rebuilding or rewriting custom\n> apps ... For the average, busy admin, don't count too heavily on the\n> latter. 
They're more likely to stick with what they know and trust\n> regardless of how good something else is reported to be.\n\nwhy not just run aolserver on a different port like we do?\n\n\n", "msg_date": "Mon, 21 Aug 2000 17:04:37 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "At 7:48 PM -0300 8/20/00, The Hermit Hacker wrote:\n>what's wrong wth \"Post-Gres-QL\"?\n>\n>I find it soooo simple to pronounce *shrug*\n>\n>On Sun, 20 Aug 2000, David Lloyd-Jones wrote:\n>\n> > I'm putting on my suits-type suit for just a moment.\n> >\n> > In order to Conquer The Universe(tm) why don't we just call it \"PG\"?\n> >\n\nIMNSHO the \"QL\" are silent and you pronounce it Postgres. I consider \nthe SQL query language to be merely a (major) feature added during \nthe evolution of the system.\n\n\nSignature held pending an ISO 9000 compliant\nsignature design and approval process.\[email protected], or [email protected]\n", "msg_date": "Mon, 21 Aug 2000 13:09:54 -0700", "msg_from": "\"Henry B. Hotz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"?" }, { "msg_contents": "On Mon, 21 Aug 2000, The Hermit Hacker wrote:\n\n> On Mon, 21 Aug 2000, Vince Vielhaber wrote:\n> \n> > On Mon, 21 Aug 2000, Don Baccus wrote:\n> > \n> > > At 10:48 AM 8/21/00 -0400, Vince Vielhaber wrote:\n> > > \n> > > >Could it be possible that folks are shying away because of having\n> > > >to install and learn an entire webserver and tools and then the \n> > > >bug tracker on top of that?\n> > > \n> > > Learning to use AOLserver is going to be harder than writing a\n> > > bugtracker and associated tools from scratch? I find that hard to\n> > > believe. \n> > \n> > Learning how to use it is only a tiny part of it. You still have to\n> > migrate your website to it. It's not a drop in replacement. 
So\n> > writing a bugtracker that will fit the environment vs learning a new\n> > webserver & migrating your website & rebuilding or rewriting custom\n> > apps ... For the average, busy admin, don't count too heavily on the\n> > latter. They're more likely to stick with what they know and trust\n> > regardless of how good something else is reported to be.\n> \n> why not just run aolserver on a different port like we do?\n\nIf it has the functionality you desire and are looking for, sure. My\ncontention is that the average site isn't going to do it for a couple\nof features and would roll their own instead.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 21 Aug 2000 16:13:47 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "On Mon, 21 Aug 2000, Jan Wieck wrote:\n\n> The Hermit Hacker wrote:\n> >\n> > what's wrong wth \"Post-Gres-QL\"?\n> >\n> > I find it soooo simple to pronounce *shrug*\n> \n> Mee too. And I'm not sure if anybody pronounces PG the same\n> way. Is it PeeGee or PiG (in which case we'd have the wrong\n> animal in our logo).\n\nActually, the one that gets me is those that refer to it as Postgres\n... postgres was a project out of Berkeley way back in the 80's, early\n90's ... hell, it was based on a PostQuel language ... 
this ain't\npostgres, its only based on it :(\n\n\n\n \n> > \n> Jan\n> \n> >\n> > On Sun, 20 Aug 2000, David Lloyd-Jones wrote:\n> >\n> > > I'm putting on my suits-type suit for just a moment.\n> > >\n> > > In order to Conquer The Universe(tm) why don't we just call it \"PG\"?\n> > >\n> > > -dlj.\n> > >\n> > >\n> > >\n> >\n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org\n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n> >\n> \n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #================================================== [email protected] #\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 21 Aug 2000 17:15:37 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"?" }, { "msg_contents": "On Mon, 21 Aug 2000, Henry B. Hotz wrote:\n\n> At 7:48 PM -0300 8/20/00, The Hermit Hacker wrote:\n> >what's wrong wth \"Post-Gres-QL\"?\n> >\n> >I find it soooo simple to pronounce *shrug*\n> >\n> >On Sun, 20 Aug 2000, David Lloyd-Jones wrote:\n> >\n> > > I'm putting on my suits-type suit for just a moment.\n> > >\n> > > In order to Conquer The Universe(tm) why don't we just call it \"PG\"?\n> > >\n> \n> IMNSHO the \"QL\" are silent and you pronounce it Postgres. I consider \n> the SQL query language to be merely a (major) feature added during \n> the evolution of the system.\n\nas stated in another email, it is not pronounced Postgres ... postgres was\na whole different beast based on a completely different query language\n... 
if you refer to Postgres, you are, IMHO, refering to the only\nPostgres 4.2 which was the grand-daddy to what its evolved into ...\n\n\n", "msg_date": "Mon, 21 Aug 2000 17:16:55 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"?" }, { "msg_contents": "[trimmed cc: list]\n\nVince Vielhaber wrote:\n> On Mon, 21 Aug 2000, Don Baccus wrote:\n> > Vince wrote:\n> > >You really don't get it do you?\n\n> > Yes, I do.\n\nVince, Don really does 'get it' -- he's just pretty vehement about his\n'getting it'.\n\n> > It's a toolkit, Vincent. Once you grasp the notion that using a wrench\n> > out of your toolbox doesn't mean you have to use every tool in the\n> > toolbox you'll be able to grasp what I'm talking about.\n \n> Yet you insist on shoving the entire toolbox down our throats every time\n> there's a task to be done, Donnie.\n\nOne of the many useful features of OpenACS is that you get the whole\ntoolbox -- a toolbox, as opposed to a 'box of tools' -- Jensen makes a\nnice profit selling toolboxes with matched tools -- neat, clean, trim,\nand don't look anything like my four-drawer toolbox made up of a melange\nof tools and a Wal-mart toolbox.\n\nOpenACS is like the Jensen toolset -- you get a matched case, and\nhigh-quality tools matches to the case. With OpenACS you get a\nframework that tools can be plugged into -- tools that were designed to\nbe plugged in that way (well, it's not perfect -- but nothing is). \nEverything can be covered by the system-wide authentication module, user\ngroup module, etc. Everything is designed to work smoothly together. \nso you only use authentication+SDM -- so what. 
You can expand as you\nneed to -- and it doesn't take up _that_ much space.\n\nPostgreSQL is much like OpenACS (barring the funny capitalization, but I\ndigress): PostgreSQL is a toolbox of database routines and modules, tied\ntogether by a SQL parser and a large set of clients with matching\narbiter/backends. Download postgresql-version.tar.gz, and you _have_ to\nget the C++ code -- even if you don't want it. You have to get pgaccess\n-- even if you won't use it. If you want to do meaningful development,\nyou have to keep nearly the whole source tree around......etc. How is\nthis different from the OpenACS model? Don't want a part of OpenACS? \nNobody is preventing you from nuking that part from your installation\ntree -- just like no one is preventing someone from installing\nPostgreSQL in a client-only sort of way (much easier with an RPMset, but\nI again digress.....).\n\nPostgreSQL requires libpq -- OpenACS requires (but doesn't include)\nAOLserver -- that analogy is not perfect, but close. \n\nAnd, OpenACS can run just fine under Apache with mod_aolserver. \nAlthough, since Marc has an AOLserver available and running.... :-) and\na killer db server (bigger :-))...\n\nVince, Don: sparring like this is not productive. Both of you are\nexcellent hackers -- I've seen both of your code. Let's just make it\nwork, and see what the hackers think of it.\n \n> > I answered your previous question plainly. If you're not capable of\n> > understanding the answer, don't answer with an invitation for a flamefest\n> > you have no chance of winning, OK?\n\nFlamefests are unwinnable. All parties to flamwars get burned -- either\ndirectly, or indirectly. I have seen too many flamewars -- and it's not\nworth the risk to reputation to go too far with one. 
This one is mild\nso far -- on a scale from one to ten, this makes it to one-and-a-half\nthus far (I've been on news.groups and news.admin (and cross-posted to\nalt.flame) more than once several years back....).\n \n> When the day comes that you actually answer a question without telling\n> the world that openacs, arsdigita, aolserver or whatever you want to call\n> it is the answer and saviour to everything from world peace to who cares\n\nVince, try it. You might like it. But, it does require some different\nthinking -- which if you don't have time to do, your loss. Or, to put\nit differently -- I just recently learned how to write CGI scripts. \nThat may seem laughable -- but I had already written many dynamic web\npages using AOLserver TCL -- who needs CGI? The current rush on PHP\nprogrammers shows this -- many PHP programmers wouldn't have a clue how\nto go about writing a CGI script -- and, guess what -- with PHP you\ndon't need CGI. The AOLserver TCL API is like PHP on steroids in many\ncase -- and, in many other cases, PHP out-API's AOLserver. \n\nAnd I do remember that you _have_ tried AOLserver, about a year ago....\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 21 Aug 2000 16:17:30 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "On Mon, 21 Aug 2000, Vince Vielhaber wrote:\n\n> > why not just run aolserver on a different port like we do?\n> \n> If it has the functionality you desire and are looking for, sure. My\n> contention is that the average site isn't going to do it for a couple\n> of features and would roll their own instead.\n\nSounds like a big waste of time investment when few of us have much of it\nto start with ...\n\nI've talked to Ben and he's going to work on getting the OpenACS version\nup and running ... 
once he has that going, he's going to talk to you\n(Vince) about getting the look to match what our web site looks like\n... once we have that in place, the next step will be to customize it so\nthat it does what *we* want it to do ... \n\n\n", "msg_date": "Mon, 21 Aug 2000 17:20:59 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "[cc: list trimmed]\n\nVince Vielhaber wrote:\n> On Mon, 21 Aug 2000, The Hermit Hacker wrote:\n\n> > why not just run aolserver on a different port like we do?\n \n> If it has the functionality you desire and are looking for, sure. My\n> contention is that the average site isn't going to do it for a couple\n> of features and would roll their own instead.\n\nIsn't this the biggest reason people give for using MySQL and not\nPostgreSQL???? (re: transaction support, triggers, etc).\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 21 Aug 2000 16:22:08 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "The Hermit Hacker wrote:\n>\n> what's wrong wth \"Post-Gres-QL\"?\n>\n> I find it soooo simple to pronounce *shrug*\n\n Mee too. And I'm not sure if anybody pronounces PG the same\n way. Is it PeeGee or PiG (in which case we'd have the wrong\n animal in our logo).\n\n\nJan\n\n>\n> On Sun, 20 Aug 2000, David Lloyd-Jones wrote:\n>\n> > I'm putting on my suits-type suit for just a moment.\n> >\n> > In order to Conquer The Universe(tm) why don't we just call it \"PG\"?\n> >\n> > -dlj.\n> >\n> >\n> >\n>\n> Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n>\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Mon, 21 Aug 2000 15:24:04 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"?" }, { "msg_contents": "On Mon, 21 Aug 2000, The Hermit Hacker wrote:\n\n> On Mon, 21 Aug 2000, Vince Vielhaber wrote:\n> \n> > > why not just run aolserver on a different port like we do?\n> > \n> > If it has the functionality you desire and are looking for, sure. My\n> > contention is that the average site isn't going to do it for a couple\n> > of features and would roll their own instead.\n> \n> Sounds like a big waste of time investment when few of us have much of it\n> to start with ...\n> \n> I've talked to Ben and he's going to work on getting the OpenACS version\n> up and running ... once he has that going, he's going to talk to you\n> (Vince) about getting the look to match what our web site looks like\n> ... once we have that in place, the next step will be to customize it so\n> that it does what *we* want it to do ... 
\n\nWho is going to customize it?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 21 Aug 2000 16:26:59 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "On Mon, 21 Aug 2000, Vince Vielhaber wrote:\n\n> On Mon, 21 Aug 2000, The Hermit Hacker wrote:\n> \n> > On Mon, 21 Aug 2000, Vince Vielhaber wrote:\n> > \n> > > > why not just run aolserver on a different port like we do?\n> > > \n> > > If it has the functionality you desire and are looking for, sure. My\n> > > contention is that the average site isn't going to do it for a couple\n> > > of features and would roll their own instead.\n> > \n> > Sounds like a big waste of time investment when few of us have much of it\n> > to start with ...\n> > \n> > I've talked to Ben and he's going to work on getting the OpenACS version\n> > up and running ... once he has that going, he's going to talk to you\n> > (Vince) about getting the look to match what our web site looks like\n> > ... once we have that in place, the next step will be to customize it so\n> > that it does what *we* want it to do ... \n> \n> Who is going to customize it?\n\nHe will \n\n\n", "msg_date": "Mon, 21 Aug 2000 17:32:01 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "\"Ross J. 
Reedstrom\" <[email protected]> writes:\n>>>> or add another ordering operator that is only used for the sort case\n>>>> (use of '>' and '<' seems to be hardcoded all the way to the parser)\n>> \n>> don't even think about that...\n\n> Sure, but any ideas _how_ to get 'NaN > +Infinity' to happen only during a\n> sort operation, and not when '>' is called explicity as a WHERE condition,\n> by touching only type specific code?\n\nThat's exactly what you shouldn't even think about. The entire index\nand sorting system is predicated on the assumption that '<' and related\noperators agree with the order induced by a btree index. You do not get\nto make the operators behave differently in the free-standing case than\nwhen they are used with an index.\n\n> 'Cause I'd call it a bug to be able to say:\n> SELECT * from foo where f8 > 'Infinity';\n> and get anything at all back.\n\nI agree it's pretty arbitrary to define NaN as > Infinity, but the sort\nordering is necessarily arbitrary. We can special-case NULL because\nthat's a type-independent concept, but special-casing NaN is out of the\nquestion IMHO. Pick where you want it in the type-specific order, and\nlive with it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 Aug 2000 16:37:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's " }, { "msg_contents": "On Mon, 21 Aug 2000, The Hermit Hacker wrote:\n\n> On Mon, 21 Aug 2000, Vince Vielhaber wrote:\n> \n> > On Mon, 21 Aug 2000, The Hermit Hacker wrote:\n> > \n> > > On Mon, 21 Aug 2000, Vince Vielhaber wrote:\n> > > \n> > > > > why not just run aolserver on a different port like we do?\n> > > > \n> > > > If it has the functionality you desire and are looking for, sure. 
My\n> > > > contention is that the average site isn't going to do it for a couple\n> > > > of features and would roll their own instead.\n> > > \n> > > Sounds like a big waste of time investment when few of us have much of it\n> > > to start with ...\n> > > \n> > > I've talked to Ben and he's going to work on getting the OpenACS version\n> > > up and running ... once he has that going, he's going to talk to you\n> > > (Vince) about getting the look to match what our web site looks like\n> > > ... once we have that in place, the next step will be to customize it so\n> > > that it does what *we* want it to do ... \n> > \n> > Who is going to customize it?\n> \n> He will \n\nworks for me.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 21 Aug 2000 16:37:42 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "On Mon, Aug 21, 2000 at 02:32:08PM -0500, Ross J. Reedstrom wrote:\n> On Mon, Aug 21, 2000 at 01:39:32PM -0400, Tom Lane wrote:\n> > \n> > Seems to me the sort order should be\n> > \n> > \t-Infinity\n> > \tnormal values\n> > \t+Infinity\n> > \tother types of NaN\n> > \tNULL\n> > \n> > and the reverse in a descending sort.\n> > \n> \n> NULL is taken care of by special casing in the sort code, as I already mentioned,\n> and can be fixed immediately.\n> \n\nGrr, I take this back. 
By the time comparetup_* see the tuples, we've no idea\nwhich order we're sorting in, just a pointer to the appropriate sortop.\n\n<whine mode> \nWhy does every thing I touch in pgsql end up pulling in down into the\nguts of the whole system? Even something that looks nicely factored\nat first, like the type system? I guess this stuff is _hard_.\n</whine mode>\n\nSigh, back to fixing up referential integrity violations in the DB\nI'm finally upgrading from 6.5 to 7.0.X. (DBA life lesson number XX:\nimplementing RI in the backend _from the very start of a project_ is a\nvery good thing. Cleansing the data of cruft left from _not_ having RI in\nthe backend, is a bad thing. And those clever, recursive self-referencing\ntable structures for representing trees are a pain in the DBA to reload.)\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Mon, 21 Aug 2000 15:49:51 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "At 5:16 PM -0300 8/21/00, The Hermit Hacker wrote:\n>On Mon, 21 Aug 2000, Henry B. Hotz wrote:\n>\n> > At 7:48 PM -0300 8/20/00, The Hermit Hacker wrote:\n> > >what's wrong wth \"Post-Gres-QL\"?\n> > >\n> > >I find it soooo simple to pronounce *shrug*\n> > >\n> > >On Sun, 20 Aug 2000, David Lloyd-Jones wrote:\n> > >\n> > > > I'm putting on my suits-type suit for just a moment.\n> > > >\n> > > > In order to Conquer The Universe(tm) why don't we just call it \"PG\"?\n> > > >\n> >\n> > IMNSHO the \"QL\" are silent and you pronounce it Postgres. I consider\n> > the SQL query language to be merely a (major) feature added during\n> > the evolution of the system.\n>\n>as stated in another email, it is not pronounced Postgres ... 
postgres was\n>a whole different beast based on a completely different query language\n>... if you refer to Postgres, you are, IMHO, refering to the only\n>Postgres 4.2 which was the grand-daddy to what its evolved into ...\n\n;-)\n\n\nSignature held pending an ISO 9000 compliant\nsignature design and approval process.\[email protected], or [email protected]\n", "msg_date": "Mon, 21 Aug 2000 14:01:32 -0700", "msg_from": "\"Henry B. Hotz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"?" }, { "msg_contents": "Hi Joe,\n\nWe're looking at elements of the Tigris architecture for the site\nthat I referenced in an earlier email (so Don, we're not reinventing\nthe wheel). Scarab has apparently been in development for awhile\nnow - the Tigris project is currently using Bugzilla until Scarab\nitself is ready for prime time, then they intend to switch it over.\n\nWe've also spent some time looking at an app called BugRat (even\nported it to Postgres from MySQL)... it seems to offer a pretty good\nmix of features as well.\n\nThe greatbridge.org site will likely get started with either BugRat\nor Bugzilla, then transition to Scarab when/if it's ready. BTW,\nthe goal of the site (which I didn't really explain too well in my\nearlier message) will be to provide a hosting infrastructure for\nsome related projects (like interfaces, or the apps and tools we're\nlisting at http://www.greatbridge.com/tools/toollist.php)... also,\nall the software we develop internally will go up on that site.\n\nSo please don't interpret our interest as getting in the middle of\nwhat the main Postgres project uses for bug tracking - although the\nrequirements outlined in the earlier messages were helpful just for\nour own development of the greatbridge.org site. 
If Ben and the\nOpenACS gang are willing to put the time into a bug tracker for the\nproject, I think that couldn't help but be a good thing.\n\nRegards,\nNed\n\n\n\nJoe Brenner wrote:\n\n> Well, just to throw another piece of information into the\n> mix, there is a new bug-tracking system under development by\n> the folks at collab.net. They call it \"scarab\", and last I\n> talked to them that they thought it would be ready for\n> production use Real Soon Now:\n>\n> http://scarab.tigris.org/\n>\n> I guess they regard this as a replacement for bugzilla.\n>\n> Some personal opinions:\n>\n> (1) I would actually like to see bugzilla fixed rather than\n> replaced. The collab.net guys are into java servlets\n> because they're CS geeks who are down on perl. Me, I'm a\n> perl loyalist who thinks that Larry Wall is onto something\n> -- mathematical elegance may not be the right standard to\n> judge a computer language.\n>\n> (2) And maybe it'd be nice if the \"religous wars\" could be\n> dropped in favor of pure objective technical decision\n> making, but i don't think they can be: the social and the\n> technical don't neatly split into two little piles. (A case\n> in point: the argument that using a mailing list for bug\n> control is somehow \"warmer\" or \"more human\" than a bug\n> database.\n>\n\n", "msg_date": "Mon, 21 Aug 2000 17:24:49 -0400", "msg_from": "Ned Lilly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking" }, { "msg_contents": "At 1:25 AM -0400 8/21/00, Tom Lane wrote:\n>\"Ross J. Reedstrom\" <[email protected]> writes:\n> > ! \tif (isnan(a))\n> > ! \t\tPG_RETURN_INT32(1);\n>\n>Do not like this at all --- doesn't it make the result of btint4cmp(NaN,\n>NaN) dependent on which argument chances to be first? Seems to me that\n>you must consider two NaNs to be equal, unless you want to subdivide\n>the category of NaNs.\n\nI'm pretty sure IEEE 754 says that NaN does not compare equal to \nanything, including themselves. 
I also believe that Infinity isn't \nequal to itself either, it's just bigger than anything else except \nNaN (which isn't littler either).\n\nWithout having seen the start of this thread I think the biggest \nproblem is that some of the results of compares depend on the mode \nthat the FP hardware is put in. IEEE specifies some modes, but not \nhow you set the mode you want on the system you are actually running \non. For example I think comparing zero and -Infinity may return \nthree possible results: 0 > -Infinity, 0 < -Infinity (because it was \ntold to ignore the sign of Infinity), or an illegal value exception. \nLikewise signalling/non-signalling NaN's act different depending on \nmode settings.\n\nWe need to first figure out what floating point behavior we want to \nsupport. Then we figure what mode settings provide that behavior \nwith a minimum of overhead. Then we need to figure out how to set \nthose modes on all the platforms we support. We will probably \ndiscover that not all required modes actually exist on all hardware \nplatforms. I know that 68000 and SPARC are pretty good, but PowerPC \npunted some stuff to exception handlers which may not be correct on \nall OS's. I've heard that Java has some portability issues because \nIntel fudged some stuff in the newer hardware.\n\nDoes anyone feel like tracing down how to set the modes for all the \ndifferent systems that we try to run on? If there is interest then I \nmight poke at a couple/three NetBSD ports and Solaris/SPARC. But \nonly if there is interest.\n\nSun has put some code out under GPL which will let you test for these \nspecial values and handle them, but that seems like a big hit for \nwhat should be a simple compare. I assume that we can't put GPL code \ninto the main sources any more than the *BSD's do. 
Perhaps we could \nget away with it if it is only included if configure can't figure out \nhow to set the modes properly?\n\n\nSignature held pending an ISO 9000 compliant\nsignature design and approval process.\[email protected], or [email protected]\n", "msg_date": "Mon, 21 Aug 2000 14:34:50 -0700", "msg_from": "\"Henry B. Hotz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "On Mon, Aug 21, 2000 at 04:37:21PM -0400, Tom Lane wrote:\n> \"Ross J. Reedstrom\" <[email protected]> writes:\n> >>>> or add another ordering operator that is only used for the sort case\n> >>>> (use of '>' and '<' seems to be hardcoded all the way to the parser)\n> >> \n> >> don't even think about that...\n> \n> > Sure, but any ideas _how_ to get 'NaN > +Infinity' to happen only during a\n> > sort operation, and not when '>' is called explicity as a WHERE condition,\n> > by touching only type specific code?\n> \n> That's exactly what you shouldn't even think about. The entire index\n> and sorting system is predicated on the assumption that '<' and related\n> operators agree with the order induced by a btree index. You do not get\n> to make the operators behave differently in the free-standing case than\n> when they are used with an index.\n\nOh really? Then why do btrees have their own comparator functions,\nseparate from heap sorts, and datum sorts, and explicit use of '<' ? The\ncurrent code infrastructure allows for the possibility that these may need\nto diverge, requiring the coders to keep them in sync. Annoying, that, but\nuseful for edge cases.\n\nSince btree already uses its own comparator, the only reason I can\nsee that the parser drops in '<' and '>' as the name of the sorting\noperator to use for ORDER BY is convenience: the functions are there,\nand have the (mostly) correct behavior. 
\n\nChanging this would only require writing another set of operators for\nthe parser to drop in, that are used only for sorting, so that the\nsort behavior could diverge slightly, by knowing how to sort NULLs and\nNaNs. Yes, it'd be a third set of operators to keep in sync with the\nbtree and default ones, but it could give completely correct behavior.\n\nHmm, I had another thought: all the comparator code assumes (a<b || a>b || a==b)\nand therefore only tests 2 of the three conditions, falling through to the \nthird. In the three places I just looked, two fall through to the equal case,\nand one to the 'less than' case. If all three fell through to the 'greater than'\ncase, it might work with no tweaking at all. I'll have to try that, first.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Mon, 21 Aug 2000 17:30:21 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "On Mon, Aug 21, 2000 at 05:30:21PM -0500, Ross J. Reedstrom wrote:\n> \n> Hmm, I had another thought: all the comparator code assumes (a<b || a>b || a==b)\n> and therefore only tests 2 of the three conditions, falling through to the \n> third. In the three places I just looked, two fall through to the equal case,\n> and one to the 'less than' case. If all three fell through to the 'greater than'\n> case, it might work with no tweaking at all. I'll have to try that, first.\n\nLooking again, I realize that the sort comparetup_* code doesn't have\naccess to an operator to test for equality, so can't do this. Sigh. Time\nto go home, I think.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. 
Main St., Houston, TX 77005\n\n", "msg_date": "Mon, 21 Aug 2000 17:59:02 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> On Mon, Aug 21, 2000 at 04:37:21PM -0400, Tom Lane wrote:\n>> That's exactly what you shouldn't even think about. The entire index\n>> and sorting system is predicated on the assumption that '<' and related\n>> operators agree with the order induced by a btree index. You do not get\n>> to make the operators behave differently in the free-standing case than\n>> when they are used with an index.\n\n> Oh really? Then why do btree's have their own comparator functions,\n> seperate from heap sorts, and datum sorts, and explicit use of '<' ?\n\nStrictly and only to save a few function-call cycles. Some paths in the\nbtree code need a three-way comparison (is A<B, or A=B, or A>B?) and\nabout half the time you'd need two calls to type-specific comparator\nfunctions to make that determination if you only had the user-level\noperators available. This does *not* mean that you have license to make\nthe 3-way comparator's behavior differ from the operators, because the\noperators are used too. Note also that it is a three-way comparison\nfunction, not four-way: there is no provision for answering \"none of the\nabove\" (except when a NULL is involved, and that only works because it's\nspecial-cased without calling type-specific code at all).\n\nThe reason the sort code doesn't use the comparator routine is strictly\nhistorical, AFAICT. It really should, for speed reasons; but there may\nnot be a 3-way comparator associated with a given '<' operator, and\nwe've got a longstanding convention that a user-selected sort order is\nspecified by naming a particular '<'-like operator. 
It may also be\nworth pointing out that the sort code still assumes trichotomy: it\ntests A<B, and if that is false it tries B<A, and if that's also false\nthen it assumes A=B. There's still no room for an \"unordered\" response.\n\n> The current code infrastructure allows for the possibility that these\n> may need to diverge, requiring the coders to keep them in\n> sync. Annoying, that, but useful for edge cases.\n\nIt is annoying. Many of the datatypes where comparison is nontrivial\nactually use an underlying 3-way comparison routine that the boolean\ncomparators call, so as to avoid code-divergence problems.\n\n> Changing this would only require writing another set of operators for\n> the parser to drop in, that are used only for sorting,\n\nNo, because *the user-level operators must match the index*. How many\ntimes do I have to repeat that? The transformation that allows, say,\n\tSELECT * FROM tab WHERE foo > 33 AND foo < 42\nto be implemented by an indexscan (of an index on foo) is fundamentally\ndependent on the assumption that the operators '>' and '<' induce the\nsame ordering of data values as is stored in the index. Otherwise you\ncan't scan a subrange of the index and know that you've hit all the\nmatching rows. The planner actually takes considerable care to verify\nthat the operators appearing in WHERE *do* match the index ordering ---\nthat's what pg_opclass and pg_amop are all about. 
If you invent an\ninternal set of operators that provide a different index ordering,\nyou will find that the planner ignores your index.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 Aug 2000 19:12:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's " }, { "msg_contents": "On Mon, Aug 21, 2000 at 11:34:45AM -0700, Don Baccus wrote:\n\n> Strange, OpenACS folk use it regularly and all we've done is put a \"report\n> a bug\" link on the home page.\n\n\t...I'm not sure you noticed, but this project isn't OpenACS. It is\nan established project with decent management that only needs a few features.\nSwitching everything over to a canned solution, no matter how good of a\ntoolbox you feel it is, is not necessarily going to solve the few problems\nwe have without causing a whole host of new ones...\n\n> I haven't heard so many arguments against change since the VT100 started\n> replacing the KSR 35!\n\n\tOh, and in case you didn't hear it earlier... PLONK.\n\n-- \nAdam Haberlach | \"A farm tractor is not a motorcycle.\"\[email protected] | --California DMV 1999\nhttp://www.newsnipple.com/ | Motorcycle Driver Handbook\n", "msg_date": "Mon, 21 Aug 2000 17:35:22 -0700", "msg_from": "Adam Haberlach <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "On Mon, 21 Aug 2000, Adam Haberlach wrote:\n\n> On Mon, Aug 21, 2000 at 11:34:45AM -0700, Don Baccus wrote:\n> \n> > Strange, OpenACS folk use it regularly and all we've done is put a \"report\n> > a bug\" link on the home page.\n> \n> \t...I'm not sure you noticed, but this project isn't OpenACS. 
It is\n> an established project with decent management that only needs a few features.\n> Switching everything over to a canned solution, no matter how good of a\n> toolbox you feel it is, is not necessarily going to solve the few problems\n> we have without causing a whole host of new ones...\n\nUmmmm, who was talking about switching anything over to a canned\nsolution? *raised eyebrow* we are talking about allowing the OpenACS\nfolks set up a bug tracking system for us and seeing if it serves us\nbetter than other attempts have in the past ... \n\n\n", "msg_date": "Mon, 21 Aug 2000 21:40:38 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "At 09:40 PM 8/21/00 -0300, The Hermit Hacker wrote:\n>On Mon, 21 Aug 2000, Adam Haberlach wrote:\n>\n>> On Mon, Aug 21, 2000 at 11:34:45AM -0700, Don Baccus wrote:\n>> \n>> > Strange, OpenACS folk use it regularly and all we've done is put a\n\"report\n>> > a bug\" link on the home page.\n>> \n>> \t...I'm not sure you noticed, but this project isn't OpenACS. It is\n>> an established project with decent management that only needs a few\nfeatures.\n>> Switching everything over to a canned solution, no matter how good of a\n>> toolbox you feel it is, is not necessarily going to solve the few problems\n>> we have without causing a whole host of new ones...\n>\n>Ummmm, who was talking about switching anything over to a canned\n>solution? *raised eyebrow* we are talking about allowing the OpenACS\n>folks set up a bug tracking system for us and seeing if it serves us\n>better than other attempts have in the past ... 
\n\nWhich it probably won't - customization is the key, we're not quite as\nstupid as Adam makes us out to be.\n\nHere's an idea:\n\nHow about ignoring whether or not a new solution is born, or whether the SDM\ncan be customized to fit your needs with less work, and concentrating\non what features you want to see?\n\nHas anyone thought about this, or was the last attempt such a failure that\npeople have given it no further thought?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 21 Aug 2000 17:50:05 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "\nThe Hermit Hacker <[email protected]> wrote:\n\n> On Mon, 21 Aug 2000, Jan Wieck wrote:\n> \n> > The Hermit Hacker wrote:\n> > >\n> > > what's wrong wth \"Post-Gres-QL\"?\n> > >\n> > > I find it soooo simple to pronounce *shrug*\n> > \n> > Mee too. And I'm not sure if anybody pronounces PG the same\n> > way. Is it PeeGee or PiG (in which case we'd have the wrong\n> > animal in our logo).\n> \n> Actually, the one that gets me is those that refer to it as Postgres\n> ... postgres was a project out of Berkeley way back in the 80's, early\n> 90's ... hell, it was based on a PostQuel language ... this ain't\n> postgres, its only based on it :(\n\nPersonally, I say P-G-Sequel or Post-Grease-S-Q-L.\n\nAnd I have to say that \"postgresql\" has one of the worst\nnames of any software I've ever encountered. I'm entirely\nin sympathy with the \"suit\" who suggested calling it \"PG\". \n\nI would go further and say that in the near future when some\nmilestone is reached (say, the addition of outer joins?) it \nmight be a good idea to mark the occasion with a name change\nof some sort. \n\nI cringe at the thought of the hassles involved with\nchoosing a new name though. 
\n\nOpenbase? Freebase? ACIDtrip?\n\nI think I like \"Grease\".\n\n", "msg_date": "Mon, 21 Aug 2000 19:55:00 -0700", "msg_from": "Joe Brenner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"? " }, { "msg_contents": "On Mon, 21 Aug 2000, Joe Brenner wrote:\n\n> \n> The Hermit Hacker <[email protected]> wrote:\n> \n> > On Mon, 21 Aug 2000, Jan Wieck wrote:\n> > \n> > > The Hermit Hacker wrote:\n> > > >\n> > > > what's wrong wth \"Post-Gres-QL\"?\n> > > >\n> > > > I find it soooo simple to pronounce *shrug*\n> > > \n> > > Mee too. And I'm not sure if anybody pronounces PG the same\n> > > way. Is it PeeGee or PiG (in which case we'd have the wrong\n> > > animal in our logo).\n> > \n> > Actually, the one that gets me is those that refer to it as Postgres\n> > ... postgres was a project out of Berkeley way back in the 80's, early\n> > 90's ... hell, it was based on a PostQuel language ... this ain't\n> > postgres, its only based on it :(\n> \n> Personally, I say P-G-Sequel or Post-Grease-S-Q-L.\n> \n> And I have to say that \"postgresql\" has one of the worst\n> names of any software I've ever encountered. I'm entirely\n> in sympathy with the \"suit\" who suggested calling it \"PG\". \n> \n> I would go further and say that in the near future when some milestone\n> is reached (say, the addition of outer joins?) it might be a good idea\n> to mark the occasion with a name change of some sort.\n\nI don't think so ... we changed the name 4+ years ago, and, quite frankly,\nhave worked for 4+ years at building an identity around that ... ppl know\nwhat PostgreSQL is, and what it represents ... could you imagine someone\nchanging Apache, or Linux, or Oracle? I really don't see what is so hard\nabout pronouncing Post-Gres-QL ... *shrug*\n\n\n", "msg_date": "Tue, 22 Aug 2000 00:19:36 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"? 
" }, { "msg_contents": "Don Baccus wrote:\n>\n> How about ignoring whether or not a new solution is borne, or the SDM\n> can be customized to fit your needs with less work than, and concentrating\n> on what features you want to see?\n> \n> Has anyone thought about this, or was the last attempt such a failure that\n> people have given it no further thought?\n> \n\nI don't know what the major code contributors need beyond the\nTODO list. But I remember what went wrong with the older system\n-- people would post non-bug issues, and in large numbers, as\nbugs. And the system would \"pend\" those non-issues, assigning\nthem to core developers, who, at the time, were very busy\nimplementing MVCC and crushing real bugs by the hundreds. It\nseems all Peter wants is a system whereby authorized users\n(presumably those with CVS privileges) would have the ability to\npost and close bugs. Perhaps such a system might have prevented\nthe duplicate work done recently on the \"binary compatibility WRT\nfunctional indexes\" issue. Just from lurking, I think the core\ndevelopers' consensus was that anything which allows Joe User to\nopen tickets, without a \"front-line\" of advanced users/minor code\ncontributors willing to act as filters, would consume too much\ntime. People with great frequency ignore the note on the web-site\nwhich reads \"Note: You must post elsewhere first\" with respect to\nthe pgsql-hackers list.\n\nSo I don't think it was an issue with the technology, but the\nprocess. Although, from what I've read, I suspect ArsDigita is\nlight-years ahead of the Keystone software that was the\n\"PostgreSQL Bug Tracking System\".\n\nP.S.: I've been looking forward to seeing ArsDigita running on\npostgresql.org for some time. I suspect there would be some\nshort-term pain, but substantial long-term gain. 
:-)\n\nMike Mascari\n", "msg_date": "Mon, 21 Aug 2000 23:40:26 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking (was Re: +/- Inf for float8's)" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> I don't think so ... we changed the name 4+ years ago, and, quite frankly,\n> have worked for 4+ years at building an identity around that ... ppl know\n> what PostgreSQL is, and what it represents ... could you imagine someone\n> changing Apache, or Linux, or Oracle? I really don't see what is so hard\n> about pronouncing Post-Gres-QL ... *shrug*\n\nThe name is certainly ugly, but it's got history behind it: it gives\nappropriate credit to those who went before us. (Don't forget that\nthe roots of this project go back twenty-odd years.) There's unlikely\nto be much support around here for changing the name, no matter what\nalternative is offered.\n\nFWIW, I say \"post-gres-cue-ell\", same as Marc.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 Aug 2000 23:51:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"? " }, { "msg_contents": "On Mon, 21 Aug 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > I don't think so ... we changed the name 4+ years ago, and, quite frankly,\n> > have worked for 4+ years at building an identity around that ... ppl know\n> > what PostgreSQL is, and what it represents ... could you imagine someone\n> > changing Apache, or Linux, or Oracle? I really don't see what is so hard\n> > about pronouncing Post-Gres-QL ... *shrug*\n> \n> The name is certainly ugly, but it's got history behind it: it gives\n> appropriate credit to those who went before us. (Don't forget that\n> the roots of this project go back twenty-odd years.) 
There's unlikely\n> to be much support around here for changing the name, no matter what\n> alternative is offered.\n> \n> FWIW, I say \"post-gres-cue-ell\", same as Marc.\n\nI think we need to get this put up on the main page in big bold letters as\none of those \"dictionary pronunciation\" sort of things so that it's the\nfirst thing that ppl learn when they hit our site :)\n\n\n", "msg_date": "Tue, 22 Aug 2000 00:59:14 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"? " }, { "msg_contents": "On Tue, 22 Aug 2000, The Hermit Hacker wrote:\n\n> On Mon, 21 Aug 2000, Tom Lane wrote:\n> \n> > The Hermit Hacker <[email protected]> writes:\n> > > I don't think so ... we changed the name 4+ years ago, and, quite frankly,\n> > > have worked for 4+ years at building an identity around that ... ppl know\n> > > what PostgreSQL is, and what it represents ... could you imagine someone\n> > > changing Apache, or Linux, or Oracle? I really don't see what is so hard\n> > > about pronouncing Post-Gres-QL ... *shrug*\n> > \n> > The name is certainly ugly, but it's got history behind it: it gives\n> > appropriate credit to those who went before us. (Don't forget that\n> > the roots of this project go back twenty-odd years.) There's unlikely\n> > to be much support around here for changing the name, no matter what\n> > alternative is offered.\n> > \n> > FWIW, I say \"post-gres-cue-ell\", same as Marc.\n> \n> I think we need to get this put up on the main page in big bold letters as\n> one of those \"dictionary pronunciation\" sort of things so that it's the\n> first thing that ppl learn when they hit our site :)\n\nOr an embedded audio file? 
:)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 22 Aug 2000 05:44:41 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"? " }, { "msg_contents": "On Tue, Aug 22, 2000 at 02:16:44PM +0200, Peter Eisentraut wrote:\n> Ross J. Reedstrom writes:\n> \n> > Fixing sorts is a bit tricker, but can be done: Currently, I've hacked\n> > the float8lt and float8gt code to sort NaN to after +/-Infinity. (since\n> > NULLs are special cased, they end up sorting after NaN). I don't see\n> > any problems with this solution, and it gives the desired behavior.\n> \n> SQL 99, part 5, section 17.2 specifies that the sort order for ASC and\n> DESC is defined in terms of the particular type's < and > operators.\n> Therefore the NaN's must always be at the end. (Before or after NULL is\n> implementation-defined, btw.)\n\n\nI'm not sure what you're suggesting, Peter. Which is 'the end'? And how does\n'Therefore' follow from considering the type behavior of NaN and the < and\n> operators ? \n\nI think you're suggesting that NaN always sort to one end, either greater\nthan Infinity or less than -Infinity, regardless of sort direction. \nTherefore, depending on the direction of ORDER BY, NaNs will be returned\neither first or last, not always last, as I've currently implemented.\n\nI agree with this, but my reason comes from the required treatment of NULLs. 
\n\nMy reasoning is as follows:\n\nThe standard says (17.2):\n\n The relative position of rows X and Y in the result is determined by\n comparing XV(i) and YV(i) according to the rules of Subclause 8.2,\n \"<comparison predicate>\", in ISO/IEC 9075-2, where the <comp op>\n is the applicable <comp op> for K(i), [...]\n\nand Subclause 8.2 says:\n\n 2) Numbers are compared with respect to their algebraic value.\n\nHowever, NaN is _not_ algebraically > or < any other number: in fact,\nGeneral Rule 1. of subclause 8.2 does deal with this:\n\n 5) X <comp op> Y is_unknown if X <comp op> Y is neither\n true_ nor false_ .\n\nSo, we're left with not knowing where to put NaN.\n\nHowever, the only other case where the comparision is unknown is:\n\n a) If either XV or YV is the null value, then \n X <comp op> Y is unknown_ .\n\nAnd, going back to section 17.2:\n\n [...] where the <comp op> is the applicable <comp op> for K(i),\n with the following special treatment of null values. Whether a\n sort key value that is null is considered greater or less than a\n non-null value is implementation-defined, but all sort key values\n that are null shall either be considered greater than all non-null\n values or be considered less than all non-null values.\n\nSo, NULLs go at one end (less or greater), always, so NaN should as well.\nAnd NULL will go outside them, since NULLs are required to be considered\ngreater than (in our case) all non-null values (including NaN).\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n\n> Ross J. Reedstrom writes:\n> \n> > Fixing sorts is a bit tricker, but can be done: Currently, I've hacked\n> > the float8lt and float8gt code to sort NaN to after +/-Infinity. (since\n> > NULLs are special cased, they end up sorting after NaN). 
I don't see\n> > any problems with this solution, and it give the desired behavior.\n> \n> SQL 99, part 5, section 17.2 specifies that the sort order for ASC and\n> DESC is defined in terms of the particular type's < and > operators.\n> Therefore the NaN's must always be at the end. (Before or after NULL is\n> implementation-defined, btw.)\n> \n> \n> -- \n> Peter Eisentraut Sernanders v�g 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n> \n", "msg_date": "Tue, 22 Aug 2000 10:46:35 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Mon, 21 Aug 2000, Tom Lane wrote:\n> \n> > The Hermit Hacker <[email protected]> writes:\n> > > I don't think so ... we changed the name 4+ years ago, and, quite frankly,\n> > > have worked for 4+ years at building an identity around that ... ppl know\n> > > what PostgreSQL is, and what it represents ... could you imagine someone\n> > > changing Apache, or Linux, or Oracle? I really don't see what is so hard\n> > > about pronouncing Post-Gres-QL ... *shrug*\n> >\n> > The name is certainly ugly, but it's got history behind it: it gives\n> > appropriate credit to those who went before us. (Don't forget that\n> > the roots of this project go back twenty-odd years.) There's unlikely\n> > to be much support around here for changing the name, no matter what\n> > alternative is offered.\n> >\n> > FWIW, I say \"post-gres-cue-ell\", same as Marc.\n> \n> I think we need to get this put up on the main page in big bold letters as\n> one of those \"dictionary pronounciation\" sort of things so that it the\n> first thing that ppl learn when they hit our site :)\n\nWe in spanish speaking areas, are clueless most of the times when pronouncing\nenglish names when speaking to non-english speakers, or when mixing english\nwords with spanish words. 
As to PostgreSQL, I pronounce it (in spanish)\n\"postgres\" \"s\" \"q\" \"l\" inevitably doubling the \"s\", and hardly saying the \"t\".\nAlternatively, I'd just say \"postgres\" as most of the people hasn't ever heard\nabout Berkeley's ages old project. Pronouncing it \"postgre\" \"s\" \"q\" \"l\" is hard\nin spanish as the \"name\" of the letter \"s\" is \"ese\", and so saying \"postgre\"\n\"ese\" \"q\" \"l\" is harder (and uglier) to pronounce than \"postgres\" \"s\" \"q\" \"l\".\n\nRegards,\nHaroldo.\n", "msg_date": "Tue, 22 Aug 2000 14:02:46 -0300", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"?" }, { "msg_contents": "On Tue, Aug 22, 2000 at 08:22:21PM +0200, Peter Eisentraut wrote:\n> \n> Hmm, I'm getting the feeling that perhaps at this point we should\n> explicitly *not* support NaN at all. After all, the underlying reason for\n> offering them is to provide IEEE 754 compliant floating point arithmetic,\n> but if we start making compromises such as NaN == NaN or NaN > +Infinity\n> then we might as well not do it. In these cases I opine that if you can't\n> do something correctly then you should perhaps be honest and don't do\n> it. After all, users that want a \"not-a-number\" can use NULL in most\n> cases, and hard-core floating point users are going to fail miserably\n> with the FE/BE protocol anyway.\n> \n\nPretty much where I have come to on this, as well. The point is to get\nthe existing NaN to not break indices or sorting. The simplest way is\nto disable it.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Tue, 22 Aug 2000 14:34:31 -0500", "msg_from": "\"Ross J. 
Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "\n> I would go further and say that in the near future when some\n> milestone is reached (say, the addition of outer joins?) it\n> might be a good idea to mark the occasion with a name change\n> of some sort.\n\nIn my personal experience, out in the real world, people refer to it as\n\"Postgres\". The QL being a mouthful, and contrary to the common practice\nof pronouncing SQL as SEQUEL. While Marc points out that technically\nPostgres died when it left Berkeley, that discontinuity is really only\nsomething we choose to acknowledge. As Henry points out, SQL is only one\nfeature that happened to be added. Apart from not owning the domain\nname, why shouldn't it just be \"Postgres\"?\n", "msg_date": "Wed, 23 Aug 2000 09:53:40 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"?" }, { "msg_contents": "On Wed, 23 Aug 2000, Chris Bitmead wrote:\n\n> \n> > I would go further and say that in the near future when some\n> > milestone is reached (say, the addition of outer joins?) it\n> > might be a good idea to mark the occasion with a name change\n> > of some sort.\n> \n> In my personal experience, out in the real world, people refer to it\n> as \"Postgres\". The QL being a mouthful, and contrary to the common\n> practice of pronouncing SQL as SEQUEL. While Marc points out that\n> technically Postgres died when it left Berkeley, that discontinuity is\n> really only something we choose to acknowledge. As Henry points out,\n> SQL is only one feature that happened to be added. 
Apart from not\n> owning the domain name, why shouldn't it just be \"Postgres\"?\n\n4 years ago we discussed what to rename the project, since Postgres95\nwasn't considerd a very \"long term name\" (kinda like Windows2000), and\nPostgreSQL was choosen, as it both represented our roots as well as what\nwe've grown into ... we've spent 4 years now building up a market presence\nwith that name, getting it known so that ppl know what it is ... changing\nit now is not an option. If PostgreSQL were considered a bad name, maybe\n... look at MySQL with their new \"MaxSQL\" product ... but it isn't, and is\ngrowing stronger ...\n\n\n", "msg_date": "Tue, 22 Aug 2000 21:30:59 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"?" }, { "msg_contents": "The Hermit Hacker wrote:\n\n> 4 years ago we discussed what to rename the project, since Postgres95\n> wasn't considerd a very \"long term name\" (kinda like Windows2000), and\n> PostgreSQL was choosen, as it both represented our roots as well as what\n> we've grown into ... we've spent 4 years now building up a market presence\n> with that name, \n\nQuestion: Did it work? Or are people really calling it Postgres instead?\nKind of like Coca Cola. At some point in time they realised people\nweren't calling it Coca Cola anymore, they were calling it Coke. So\ninstead of resisting the inevitable - trying to educate people to ask\nfor a \"Coca Cola\", they accepted it, trademarked the name \"Coke\", and\nstarted putting \"Coke\" on all their products.\n\n> getting it known so that ppl know what it is ... changing\n> it now is not an option. If PostgreSQL were considered a bad name, maybe\n> ... look at MySQL with their new \"MaxSQL\" product ... 
but it isn't, and is\n> growing stronger ...\n", "msg_date": "Wed, 23 Aug 2000 10:42:28 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"?" }, { "msg_contents": "When someone asks me what RDBMS our company uses for most projects and I say\npost-gray 'Es Queue El' everyone always says \"Huh? Post-what?\"\n\nI love the product but the name is a bitch of a tongue twister. It's strange\nto say and doesn't roll off the tongue very well. Still, after I repeat\nmyself a few times they generally end up calling it \"Postgres\" all on their\nown -- I guess there is some natural inclination for people to move from\npost-gray 'Es Queue El' to plain old \"Postgres\"...\n\nIt's becoming more and more known even if it is a bit of a strange name at\nfirst so I think it will all work out just fine..\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Chris Bitmead\" <[email protected]>\nTo: \"PostgreSQL HACKERS\" <[email protected]>\nSent: Tuesday, August 22, 2000 5:42 PM\nSubject: Re: [HACKERS] How Do You Pronounce \"PostgreSQL\"?\n\n\n> The Hermit Hacker wrote:\n>\n> > 4 years ago we discussed what to rename the project, since Postgres95\n> > wasn't considerd a very \"long term name\" (kinda like Windows2000), and\n> > PostgreSQL was choosen, as it both represented our roots as well as what\n> > we've grown into ... we've spent 4 years now building up a market\npresence\n> > with that name,\n>\n> Question: Did it work? Or are people really calling it Postgres instead?\n> Kind of like Coca Cola. At some point in time they realised people\n> weren't calling it Coca Cola anymore, they were calling it Coke. So\n> instead of resisting the inevitable - trying to educate people to ask\n> for a \"Coca Cola\", they accepted it, trademarked the name \"Coke\", and\n> started putting \"Coke\" on all their products.\n>\n> > getting it known so that ppl know what it is ... changing\n> > it now is not an option. 
If PostgreSQL were considered a bad name,\nmaybe\n> > ... look at MySQL with their new \"MaxSQL\" product ... but it isn't, and\nis\n> > growing stronger ...\n>\n\n", "msg_date": "Tue, 22 Aug 2000 20:21:52 -0700", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"?" }, { "msg_contents": "> On Tue, Aug 22, 2000 at 08:22:21PM +0200, Peter Eisentraut wrote:\n>> Hmm, I'm getting the feeling that perhaps at this point we should\n>> explicitly *not* support NaN at all.\n\nWell ... this is a database, not a number-crunching system. It seems\nto me that we should be able to store and retrieve NaNs (at least on\nIEEE-compliant platforms). But I'm less excited about whether the\nsorting/comparison operators we offer are 100% IEEE-compliant.\n\nIt has been quite a few years since I looked closely at the IEEE FP\nspecs, but I do still recall that they made a distinction between \"IEEE\naware\" and \"non IEEE aware\" comparison operators --- specifically, the\nfirst kind understood about unordered comparisons and the second didn't.\nPerhaps we could satisfy both SQL and IEEE requirements if we stipulate\nthat we implement only IEEE's \"non-aware\" comparisons? Worth looking at\nanyway.\n\n>> ... hard-core floating point users are going to fail miserably\n>> with the FE/BE protocol anyway.\n\nIt would be a mistake to design backend behavior on the assumption that\nwe'll never have an FE/BE protocol better than the one we have today.\n\n(You could actually fix this problem without any protocol change,\njust a SET variable to determine output precision for FP values.\nWell-written platforms will reproduce floats exactly given \"%.17g\"\nor more precision demand in sprintf. 
If that fails it's libc's\nbug not ours.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Aug 2000 02:23:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's " }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Mon, 21 Aug 2000, Jan Wieck wrote:\n> \n> > The Hermit Hacker wrote:\n> > >\n> > > what's wrong wth \"Post-Gres-QL\"?\n> > >\n> > > I find it soooo simple to pronounce *shrug*\n> >\n> > Mee too. And I'm not sure if anybody pronounces PG the same\n> > way. Is it PeeGee or PiG (in which case we'd have the wrong\n> > animal in our logo).\n> \n> Actually, the one that gets me is those that refer to it as Postgres\n\nI suspect that it is still at least 75% postgres codewize, no? \n\n> ... postgres was a project out of Berkeley way back in the 80's, early\n> 90's ... hell, it was based on a PostQuel language ... this ain't\n> postgres, its only based on it :(\n\nso postgres-QL is based on a project that is based on PostQuel\n\nas the ease of initial transition from PostQuel to SQL shows, it is not \n\"based on\" SQL, it just happens to use SQL as a query language.\nIt did not even loose too much features in transition.\n\nIMHO PG is based on solid relational database technology.\n\n-----------\nHannu\n", "msg_date": "Wed, 23 Aug 2000 11:06:47 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"?" }, { "msg_contents": "From: \"Mitch Vincent\" <[email protected]>\n\n> When someone asks me what RDBMS our company uses for most projects and I\nsay\n> post-gray 'Es Queue El' everyone always says \"Huh? Post-what?\"\n>\n> I love the product but the name is a bitch of a tongue twister. 
It's\nstrange\n> to say and doesn't roll off the tongue very well.\n\nJust a small detail for evvybuddy: \"Postgres,\" -- dot-com, dot-net, and I\nstopped checking after that -- have belonged to some company called, uh,\nGreat Bridge, in Norfolk Virginia, since April 20th.\n\nI don't make these strange and wonderful stories up...\n\n -dlj.\n\n", "msg_date": "Sun, 27 Aug 2000 04:20:55 -0400", "msg_from": "\"David Lloyd-Jones\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"?" }, { "msg_contents": "On Mon, Aug 21, 2000 at 07:55:00PM -0700, Joe Brenner wrote:\n... \n> And I have to say that \"postgresql\" has one of the worst\n> names of any software I've ever encountered. I'm entirely\n> in sympathy with the \"suit\" who suggested calling it \"PG\". \n\nLike the tea-bags ;-)\n", "msg_date": "Mon, 28 Aug 2000 16:09:34 +0100", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"?" }, { "msg_contents": "My assumption is that we never came up with any solution to this, right?\n\n\n> On Sun, Aug 20, 2000 at 12:33:00AM +0200, Peter Eisentraut wrote:\n> <snip side comment about bug tracking. My input: for an email controllable\n> system, take a look at the debian bug tracking system>\n> \n> > Show me a system where it doesn't work and we'll get it to work.\n> > UNSAFE_FLOATS as it stands it probably not the most appropriate behaviour;\n> > it intends to speed things up, not make things portable.\n> > \n> \n> I agree. In the previous thread on this, Thomas suggested creating a flag\n> that would allow control turning the CheckFloat8Val function calls into\n> a macro NOOP. Sound slike a plan to me.\n> \n> > \n> > > > NULL and NaN are not quite the same thing imho. 
If we are allowing NaN\n> > > > in columns, then it is *known* to be NaN.\n> > > \n> > > For the purposes of ordering, however, they are very similar.\n> > \n> > Then we can also treat them similar, i.e. sort them all last or all first.\n> > If you have NaN's in your data you wouldn't be interested in ordering\n> > anyway.\n> \n> Right, but the problem is that NULLs are an SQL language feature, and\n> there for rightly special cased directly in the sorting apparatus. NaN is\n> type specific, and I'd be loath to special case it in the same place. As\n> it happens, I've spent some time this weekend groveling through the sort\n> (and index, as it happens) code, and have an idea for a type specific fix.\n> \n> Here's the deal, and an actual, honest to goodness bug in the current code.\n> \n> As it stands, we allow one non-finite to be stored in a float8 field:\n> NaN, with partial parsing of 'Infinity'.\n> \n> As I reported last week, NaNs break sorts: they act as barriers, creating\n> sorted subsections in the output. As those familiar with the code have\n> already guessed, there is a more serious bug: NaNs break indicies on\n> float8 fields, essentially chopping the index off at the first NaN.\n> \n> Fixing this turns out to be a one liner to btfloat8cmp.\n> \n> Fixing sorts is a bit tricker, but can be done: Currently, I've hacked\n> the float8lt and float8gt code to sort NaN to after +/-Infinity. (since\n> NULLs are special cased, they end up sorting after NaN). I don't see\n> any problems with this solution, and it give the desired behavior.\n> \n> I've attached a patch which fixes all the sort and index problems, as well\n> as adding input support for -Infinity. This is not a complete solution,\n> since I haven't done anything with the CheckFloat8Val test. 
On my\n> system (linux/glibc2.1) compiling with UNSAFE_FLOATS seems to work fine \n> for testing.\n> \n> > \n> > Side note 2: The paper \"How Java's floating point hurts everyone\n> > everywhere\" provides for good context reading.\n> \n> http://http/cs.berkeley.edu/~wkahan/JAVAhurt.pdf ? I'll take a look at it\n> when I get in to work Monday.\n> \n> > \n> > Side note 3: Once you read that paper you will agree that using floating\n> > point with Postgres is completely insane as long as the FE/BE protocol is\n> > text-based.\n> \n> Probably. But it's not our job to enforce sanity, right? Another way to think\n> about it is fixing the implementation so the deficiencies of the FE/BE stand\n> out in a clearer light. ;-)\n> \n> Ross\n> -- \n> Ross J. Reedstrom, Ph.D., <[email protected]> \n> NSBRI Research Scientist/Programmer\n> Computer and Information Technology Institute\n> Rice University, 6100 S. Main St., Houston, TX 77005\n> \n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 12 Oct 2000 00:16:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "Bruce Momjian writes:\n\n> My assumption is that we never came up with any solution to this, right?\n\nIt stopped when we noticed that proper support for non-finite values will\nbreak indexing, because the relational trichotomy doesn't hold.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 12 Oct 2000 16:56:57 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "[ continuing a discussion from last August ]\n\nPeter Eisentraut <[email protected]> writes:\n> Bruce Momjian writes:\n>> My assumption is that we never came up with any solution to this, right?\n\n> It stopped when we noticed that proper support for non-finite values will\n> break indexing, because the relational trichotomy doesn't hold.\n\nI believe that's not a problem anymore. The current form of the float\ncomparison functions will perform sorting and comparisons according to\nthe sequence\n\n\t-infinity < normal values < infinity < NaN < NULL\n\nwith all NaNs treated as equal. This may not be exactly what an IEEE\npurist would like, but given that we have to define *some* consistent\nsort order, it seems as reasonable as we can get.\n\nAccordingly, I suggest that Ross go back to work on persuading the code\nto treat infinities and NaNs properly in other respects. IIRC, there\nare still open issues concerning whether we still need/want\nCheckFloat8Val/CheckFloat4Val, what the I/O conversion functions should\ndo on non-IEEE machines, etc. 
They all seemed soluble, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 02 Jun 2001 16:31:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's " }, { "msg_contents": "Tom Lane writes:\n\n> [ continuing a discussion from last August ]\n[I was *just* thinking about this. Funny.]\n\n> I believe that's not a problem anymore. The current form of the float\n> comparison functions will perform sorting and comparisons according to\n> the sequence\n>\n> \t-infinity < normal values < infinity < NaN < NULL\n\nI was thinking about making NaN equivalent to NULL. That would give\nconsistency in ordering, and probably also in arithmetic. Additionally,\nif the platform supports it we ought to make the Invalid Operation FP\nexception (which yields NaN) configurable: either get NULL or get an\nerror.\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 2 Jun 2001 22:50:03 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I was thinking about making NaN equivalent to NULL.\n\nMumble ... in the thread last August, someone made the point that SQL's\nidea of NULL (\"unknown value\") is not really the same as a NaN (\"I know\nthat this is not a well-defined number\"). 
Even though there's a lot of\nsimilarity in the behaviors, I'd be inclined to preserve that semantic\ndistinction.\n\nIf we did want to do this, the implication would be that all\nfloat-returning functions would be required to make sure they were not\nreturning NaNs:\n\tif (isnan(x))\n\t\tPG_RETURN_NULL();\n\telse\n\t\tPG_RETURN_FLOAT8(x);\nPossibly this logic could be folded right into the PG_RETURN_FLOAT\nmacros.\n\n> if the platform supports it we ought to make the Invalid Operation FP\n> exception (which yields NaN) configurable: either get NULL or get an\n> error.\n\nSeems like we could equally well offer the switch as \"either get NaN\nor get an error\".\n\nSomething to be kept in mind here is the likelihood of divergence in\nour behavior between IEEE and non-IEEE platforms. I don't object to\nthat --- it's sort of the point --- but we should be aware of how much\ndifference we're creating, and try to avoid unnecessary differences.\nHmm ... I suppose an attraction of a NULL-vs-error, as opposed to NaN-\nvs-error, option is that it could theoretically be supported on NaN-less\nhardware. But is that realizable in practice? SIGFPE is messy.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 02 Jun 2001 17:11:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's " }, { "msg_contents": "> > -infinity < normal values < infinity < NaN < NULL\n> I was thinking about making NaN equivalent to NULL. That would give\n> consistency in ordering, and probably also in arithmetic. Additionally,\n> if the platform supports it we ought to make the Invalid Operation FP\n> exception (which yields NaN) configurable: either get NULL or get an\n> error.\n\nI'd like to see the distinction between NaN and NULL retained, since the\ntwo \"values\" arise from different circumstances and under different\nconditions. 
If a particular app needs them to be equivalent, then that\nis easy enough to do with SQL or triggers.\n\n - Thomas\n\nOn a modestly related note, I'm come over to the notion that the\ndate/time value 'current' could be ripped out eventually. Tom, isn't\nthat the only case for those types which bolluxes up caching of\ndate/time types?\n", "msg_date": "Tue, 05 Jun 2001 03:07:04 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> On a modestly related note, I'm come over to the notion that the\n> date/time value 'current' could be ripped out eventually. Tom, isn't\n> that the only case for those types which bolluxes up caching of\n> date/time types?\n\nYes, I believe so. At least, that was the consideration that led me\nto mark those functions noncachable ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Jun 2001 00:21:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's " } ]
[ { "msg_contents": "\nJust letting y'all know that tonight around 9pm EST we are upgrading the\nmain server from a Single PIII-500 to a Dual PIII-700 to better handle\nboth web, mail and cvs traffic ...\n\nWe're expecting no more than 2 hours of downtime, and hoping for much less\nthan that ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 14 Aug 2000 16:15:13 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Heads up: Server Upgrade ..." } ]
[ { "msg_contents": "Forwarded without comment; this remote connection is too slow to want\nto check into it. I doubt the shmem failure is due to the GRANT thing\nthough. Has anyone ever seen that before?\n\n\t\t\tregards, tom lane\n\n\n------- Forwarded Message\n\nDate: Mon, 14 Aug 2000 08:03:33 -0700 (PDT)\nFrom: Marius Andreiana <[email protected]>\nTo: [email protected]\nSubject: Bugs with GRANT ?\n\nHello, please forward this to pg devel list, I'm\nnot subscribed. Got your address on the website.\nThanks for help!\n\nHi, I'm using postgresql 7.02\n\nHere's the problem : I created an user \"marius\" with\ncreate database permission. From now on I work with\n\"marius\".\n\nI created a database, populated, ... Now in psql :\n\n\\z \n\tlists objects and Access permissions column is empty.\ngrant select on members to me;\n\tCHANGE\n\\z\n\tmembers' permissions are {\"=\",\"me=r\"}\nselect * from member;\n\tERROR: members: Permission denied.\n\n\nSo, when granting a permission to another user, the\nowner of the database\nlooses the ALL permission he has on that object; even\nafter revoking\nthe perm, it's shown as {\"=\"} and nobody can access it\n(I have to use GRANT to give the owner the permission)\n\nI don't think this is right, is it ? 
IMHO, the owner\nof the database\nhas always ALL permission on all objects, and I can't\nmess with that.\n(nor GRANT, neither REVOKE)\n\n2 minutes later I shut down postmaster as usual, then\nwhen trying to \nstart it I get\n\n000812.12:50:04.683 [991] IpcMemoryCreate: shmget\nfailed (Identifier removed) key=5432010, size=144,\npermission=700\nThis type of error is usually caused by an improper\nshared memory or System V IPC semaphore configuration.\nFor more information, see the FAQ and\nplatform-specific\nFAQ's in the source directory pgsql/doc or on our\nweb site at http://www.postgresql.org.\n000812.12:50:04.766 [991] IpcMemoryIdGet: shmget\nfailed (Identifier removed) key=5432010, size=144,\npermission=0\n000812.12:50:04.796 [991] IpcMemoryAttach: shmat\nfailed (Invalid argument) id=-2\n000812.12:50:04.822 [991] FATAL 1: \nAttachSLockMemory: could not attach segment\n\nThis never happened to me, I use postgres from 6.5\nversion; could this be\nbecause of the same GRANT/REVOKE ? (strange\ncoincidence if not...)\nAfter reinstalling postgres, it still didn't worked\n(same error). After\nrestarting the machine (Linux 2.2.14) it worked.\n\nThanks,\nMarius\n\n------- End of Forwarded Message\n", "msg_date": "Mon, 14 Aug 2000 17:15:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "FWD: Bugs with GRANT ?" } ]
[ { "msg_contents": "> I wouldn't say that this is exactly the first time we've heard\n> about problems with MySQL's famed \"speed\". Take the Tim Perdue\n> article that came out a while back:\n\n> http://www.phpbuilder.com/columns/tim20000705.php3?page=1\n\nYes, but the conclusion at that time was that PostgreSQL in general was\nslower, but scaled better. So on a heavily loaded site, they would\nperform equally well, because PostgreSQL could handle 5 times the load\nof MySQL, but MySQL was 5 times faster than PostgreSQL (ot something\nlike that).\n\nThis is the first benchmark saying that PostgreSQL is actually faster\nthan MySQL. And as we all know, benchmarks can be stretched any way you\nlike it, so that's why I'd like some comments before I go out and\nadvocate too strongly :-)\n\n", "msg_date": "Tue, 15 Aug 2000 11:12:22 +0200", "msg_from": "Kaare Rasmussen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Open Source Database Routs Competition in New Benchmark\n Tests" }, { "msg_contents": "At 11:12 AM 8/15/00 +0200, Kaare Rasmussen wrote:\n\n>This is the first benchmark saying that PostgreSQL is actually faster\n>than MySQL. And as we all know, benchmarks can be stretched any way you\n>like it, so that's why I'd like some comments before I go out and\n>advocate too strongly :-)\n\nGood scaling characteristics are a lot more important than raw speed\nfor the web environment, at least, where short, quick queries to\npersonalize content, etc are the rule. If only a couple of folks\nare using the site simultaneously, who cares if it takes an\nextra 50 milliseconds to return the page? If I've got a hundred\nusers on my site, though, and the database engine \"starts falling\napart around 40-50 users\", then I'm in deep doo-doo.\n\nIn practice, MySQL users have to implement the atomic updating of\na set of tables \"by hand\" using special locking tables, etc. 
All\nthe cruft surrounding this is not very likely to be more efficient\nthan the built-in transaction code of a real RDBMS. When people\ntalk about the raw speed of MySQL they forget that working around\nits table locking granularity and lack of transaction semantics\nis a pain that costs CPU as well as programmer cycles.\n\nI came back to Postgres after rejecting it for website development\nwork when I heard that MVCC was replacing the older table-level\nlocking model. I've never been excited about MySQL for the same\nreason (among many others).\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 15 Aug 2000 06:13:35 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open Source Database Routs Competition in New Benchmark Tests " } ]
[ { "msg_contents": "On Tue, Aug 15, 2000 at 09:23:15AM +0200, Karel Zak wrote:\n> > Thank for the pointer to these functions, which are indeed convenient.\n> > But the problem remains of not being able to change the backend's locale\n> > on-the-fly. For example if an auction user is spanish and the next one\n> > is german the locale needs to change several times during the life of\n> > the DB, which raises some larger index-related issues apparently.\n>\n> Before some weeks ago I sent to -patches list patch that allows to change\n> locales on-the-fly via 'SET LOCALE' command. But as say Tom L. it's\n> VERY DANGEROUS. Solution is locales per columns or something like this, but\n> nobody works on this :-)\n\nBut your patch sounds incredibly useful :-) Has it been integrated in\nthe mainline code yet? How does one use this functionality?\n\nAlso what is the main difference with using the standard gettext call?\n\n\tsetlocale(LC_ALL, \"en_US\");\n\nThanks,\n\n--\nLouis-David Mitterrand - [email protected] - http://www.apartia.org\n", "msg_date": "Tue, 15 Aug 2000 11:14:52 +0200", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "Re: dangers of setlocale() in backend (was: problem with float8 input\n\tformat)" }, { "msg_contents": "\n> But your patch sounds incredibly useful :-) Has it been integrated in\n> the mainline code yet? How does one use this functionality?\n\n It never will integrated into the PG standard main tree, because it is \nstupid patch for common usege :-( (and I feel ashamed of this :-)\n\n> Also what is the main difference with using the standard gettext call?\n> \n> \tsetlocale(LC_ALL, \"en_US\");\n\n This ('LC_ALL') call load support for all locales categories (numbers,\ntext, currency...etc.). 
Inside postgreSQL it's dangerous, because it\nchange for example float numbers deciamal point..etc.\n\n In the PostgreSQL are used (only):\n\n setlocale(LC_CTYPE, \"\"); \n setlocale(LC_COLLATE, \"\");\n setlocale(LC_MONETARY, \"\");\n\nFor more information see the file ustils/adt/pg_locale.c in PG sources, that\nallows you to change and load *all* locales catg. and set it back to previous \nstate. It is used for to_char() that needs load LC_NUMERIC informations. \n\n But again: after your functions you must always set correct locales. \n\nAnd in 7.1 it will more important, because CurrentLocaleConv struct in\npg_locale.c that use to_char() is load only once --- it's performance\noption. And not is a way how this struct change if it's already filled,\nbecause we not expect on-the-fly locales.... \n\n\t\t\t\t\tKarel\n\n", "msg_date": "Tue, 15 Aug 2000 11:30:11 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dangers of setlocale() in backend (was: problem with float8 input\n\tformat)" }, { "msg_contents": "On Tue, Aug 15, 2000 at 11:30:11AM +0200, Karel Zak wrote:\n> \n> > But your patch sounds incredibly useful :-) Has it been integrated in\n> > the mainline code yet? How does one use this functionality?\n> \n> It never will integrated into the PG standard main tree, because it is \n> stupid patch for common usege :-( (and I feel ashamed of this :-)\n> \n> > Also what is the main difference with using the standard gettext call?\n> > \n> > \tsetlocale(LC_ALL, \"en_US\");\n> \n> This ('LC_ALL') call load support for all locales categories (numbers,\n> text, currency...etc.). Inside postgreSQL it's dangerous, because it\n> change for example float numbers deciamal point..etc.\n\n[SNIP very interesting info on PG internal locale processing]\n\nConsidering that would it then be safe to only use LC_NUMERIC and\nLC_MESSAGES in setlocale() calls? 
The dangers Tom Lane talks about in\nreference to changing locale in the backend seem to be related to\nLC_COLLATE stuff, right?\n\nThanks for your input, cheers,\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.org\n\nI don't build computers, I'm a cooling engineer.\n -- Seymour Cray, founder of Cray Inc. \n", "msg_date": "Tue, 15 Aug 2000 17:53:28 +0200", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "Re: dangers of setlocale() in backend (was: problem with float8 input\n\tformat)" }, { "msg_contents": "\nOn Tue, 15 Aug 2000, Louis-David Mitterrand wrote:\n\n> [SNIP very interesting info on PG internal locale processing]\n> \n> Considering that would it then be safe to only use LC_NUMERIC and\n> LC_MESSAGES in setlocale() calls? The dangers Tom Lane talks about in\n> reference to changing locale in the backend seem to be related to\n> LC_COLLATE stuff, right?\n\n Not sure that use the LC_NUMERIC is correct. For example next routine\nis inside PG:\n\nDatum\nfloat4out(PG_FUNCTION_ARGS)\n{\n float4 num = PG_GETARG_FLOAT4(0);\n char *ascii = (char *) palloc(MAXFLOATWIDTH + 1);\n\n sprintf(ascii, \"%.*g\", FLT_DIG, num);\n PG_RETURN_CSTRING(ascii);\n}\n\n What happen here with/without LC_NUMERIC?\n\ntype 'man sprintf':\n\n For some numeric conversion a radic character (Decimal\n point') or thousands' grouping character is used. The\n actual character used depends on the LC_NUMERIC part of\n ^^^^^^^^^^^\n the locale. The POSIX locale uses .' as radix character,\n and does not have a grouping character. 
Thus,\n printf(\"%'.2f\", 1234567.89);\n results in `1234567.89' in the POSIX locale, in\n `1234567,89' in the nl_NL locale, and in `1.234.567,89' in\n the da_DK locale.\n\n\n Very similar it's in the float4in() with strtod() ...etc.\n\n\t\t\t\t\t\tKarel\n\n\n \n\n", "msg_date": "Wed, 16 Aug 2000 07:51:05 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dangers of setlocale() in backend (was: problem with float8 input\n\tformat)" } ]
[ { "msg_contents": "Hi,\n\nI've just found a good news on slashdot, it links to a good test published on Apache Today\nwhere PostgreSQL is tested against some other DBMS:\n\n\"In the AS3AP tests, PostgreSQL 7.0 significantly outperformed both the leading commercial and open source applications in speed and scalability. In the tested configuration, Postgres peaked at 1127.8 transactions per second with five users, and still processed at a steady rate of 1070.4 with 100 users.\nThe proprietary leader also performed consistently, with a high of 314.15\ntransactions per second with eight users, which fell slightly to 288.37\ntransactions per second with 100 users. The other leading proprietary\ndatabase also demonstrated consistency, running at 200.21 transactions per\nsecond with six users and 197.4 with 100.\" \n\nhttp://apachetoday.com/news_story.php3?ltsn=2000-08-14-008-01-PR-MR-SW\n\nI'm sure that it is to add to your news section.\n\n\nBye, \\fer\n", "msg_date": "Tue, 15 Aug 2000 13:16:08 +0200 (CEST)", "msg_from": "Ferruccio <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL wins against some other SQL RDBMS" } ]
[ { "msg_contents": "Given all this performance discussion, has anyone seen any numbersregarding the \nspeed of PostgreSQl vs Oracle? \n\nThanks.\n- Brandon\n\n--\nsixdegrees.com\nw 212.375.2688\nc 917.723.1981\n", "msg_date": "Tue, 15 Aug 2000 12:08:17 -0400", "msg_from": "merlin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Open Source Database Routs Competition in New Benchmark Tests" }, { "msg_contents": "> Given all this performance discussion, has anyone seen any numbers regarding \n> the speed of PostgreSQl vs Oracle?\n\nUm, er...\n\nNo, but perhaps you could consider \"one of the leading closed source\ndatabase products\" which was compared in the Great Bridge test to be\nsimilar in speed and features to The Database You Have Named. *hint\nhint*\n\n - Thomas\n", "msg_date": "Tue, 15 Aug 2000 16:56:04 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open Source Database Routs Competition in New Benchmark\n Tests" }, { "msg_contents": "Is \"Uracle\" called \"Proprietary 1\" or \"Proprietary 2\"? I can't remember :-)\n\nAnd which other RDBMS is proprietary? Could it be M$ql Server....?\n\nHint, hint\nPoul L. Christiansen\n\nmerlin wrote:\n\n> Given all this performance discussion, has anyone seen any numbersregarding the\n> speed of PostgreSQl vs Oracle?\n>\n> Thanks.\n> - Brandon\n>\n> --\n> sixdegrees.com\n> w 212.375.2688\n> c 917.723.1981\n\n", "msg_date": "Tue, 15 Aug 2000 18:17:46 +0100", "msg_from": "\"Poul L. 
Christiansen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open Source Database Routs Competition in New Benchmark\n Tests" }, { "msg_contents": "On Tue, 15 Aug 2000, merlin wrote:\n\n> Given all this performance discussion, has anyone seen any\n> numbersregarding the speed of PostgreSQl vs Oracle?\n\nOracle and MS SQL Server must have been the two\n\"leading commercial RDBMSes\" mentioned in the\narticle.\n\nThe licencing of both of those expressly forbids\npublishing benchmark results (including, we can\nprobably assume from the wording of the article,\neven referring directly to such).\n\nThat said, it would be nice to see some actual\nnumbers and configurations for the postgresql end\nof things; What hardware was used? What versions\nof what system software? What OS tuning was done?\nWhat parameters were supplied to postgres?\n\nMatthew.\n\n", "msg_date": "Tue, 15 Aug 2000 18:26:25 +0100 (BST)", "msg_from": "Matthew Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open Source Database Routs Competition in New Benchmark\n Tests" }, { "msg_contents": "There are a thousand RDBMS products that might fall under that heading, the\nreason the names weren't published is that most commercial RDBMS product\nprohibit the publishing of benchmarks when you buy it. The guys that did\nthis benchmark weren't trying to hide who it was just for the sake of hiding\nit, they really can't *legally* say.\n\nI know some people that have benchmarked Oracle and PostgreSQL... Oracle\nwon, that's all I'll say..\n\nPeople still need to use whatever RDBMS makes their life easier, one could\nmake an argument for virtually all existing products (commercial or not) on\nthat product's individual strengths and weaknesses.\n\n-Mitch\n----- Original Message -----\nFrom: \"Poul L. 
Christiansen\" <[email protected]>\nTo: \"merlin\" <[email protected]>\nCc: <[email protected]>\nSent: Tuesday, August 15, 2000 10:17 AM\nSubject: Re: [HACKERS] Open Source Database Routs Competition in New\nBenchmark Tests\n\n\n> Is \"Uracle\" called \"Proprietary 1\" or \"Proprietary 2\"? I can't remember\n:-)\n>\n> And which other RDBMS is proprietary? Could it be M$ql Server....?\n>\n> Hint, hint\n> Poul L. Christiansen\n>\n> merlin wrote:\n>\n> > Given all this performance discussion, has anyone seen any\nnumbersregarding the\n> > speed of PostgreSQl vs Oracle?\n> >\n> > Thanks.\n> > - Brandon\n> >\n> > --\n> > sixdegrees.com\n> > w 212.375.2688\n> > c 917.723.1981\n>\n>\n\n", "msg_date": "Tue, 15 Aug 2000 10:29:09 -0700", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open Source Database Routs Competition in New Benchmark Tests" }, { "msg_contents": "At 06:17 PM 8/15/00 +0100, Poul L. Christiansen wrote:\n>Is \"Uracle\" called \"Proprietary 1\" or \"Proprietary 2\"? I can't remember :-)\n>\n>And which other RDBMS is proprietary? Could it be M$ql Server....?\n\nInformix, possibly, I know they have the restrictive clause regarding\nbenchmarking in their contract.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 15 Aug 2000 10:30:21 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open Source Database Routs Competition in New\n Benchmark Tests" }, { "msg_contents": "Hi Matthew,\n\nWe'll pull together some more background info on the specifics you\nmentioned, and put them on the website shortly.\n\nRegards,\nNed\n\n\n\nMatthew Kirkwood wrote:\n\n> That said, it would be nice to see some actual\n> numbers and configurations for the postgresql end\n> of things; What hardware was used? 
What versions\n> of what system software? What OS tuning was done?\n> What parameters were supplied to postgres?\n>\n> Matthew.\n\n", "msg_date": "Tue, 15 Aug 2000 19:26:47 -0400", "msg_from": "Ned Lilly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open Source Database Routs Competition in New\n BenchmarkTests" }, { "msg_contents": "this may be interesting ned.. and others..\n\nhttp://www.devshed.com/BrainDump/MySQL_Benchmarks/\n\nit will probably just fuel a fire. but at the same\ntime something can probably be learned.\n\njeff\n\nOn Tue, 15 Aug 2000, Ned Lilly wrote:\n\n> Hi Matthew,\n> \n> We'll pull together some more background info on the specifics you\n> mentioned, and put them on the website shortly.\n> \n> Regards,\n> Ned\n> \n> \n> \n> Matthew Kirkwood wrote:\n> \n> > That said, it would be nice to see some actual\n> > numbers and configurations for the postgresql end\n> > of things; What hardware was used? What versions\n> > of what system software? What OS tuning was done?\n> > What parameters were supplied to postgres?\n> >\n> > Matthew.\n> \n\nJeff MacDonald,\n\n-----------------------------------------------------\nPostgreSQL Inc\t\t| Hub.Org Networking Services\[email protected]\t\t| [email protected]\nwww.pgsql.com\t\t| www.hub.org\n1-902-542-0713\t\t| 1-902-542-3657\n-----------------------------------------------------\nFascimile : 1 902 542 5386\nIRC Nick : bignose\n\n", "msg_date": "Tue, 15 Aug 2000 23:47:01 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open Source Database Routs Competition in New\n BenchmarkTests" }, { "msg_contents": "At 11:47 PM 8/15/00 -0300, Jeff MacDonald wrote:\n>this may be interesting ned.. and others..\n>\n>http://www.devshed.com/BrainDump/MySQL_Benchmarks/\n\nHe's full of shit the first moment he talks about them always trying\nto design fair tests.\n\nSorry ... 
I would love to see just one example where DEFAULT\ntable locking is better (as he claims) - in PG I can of course\nlock a table if I want.\n\nI was recently asked to check out an Oracle site that was dying\ndue to system loads escalating > 70.0 (the decimal point, sadly,\nis properly placed). Turns out they were doing by-hand pessimistic\ntable locking because they didn't understand that Oracle wasn't\nMySQL, so to speak, and under load (generating a digest) threads\nstacked up (not helped by an Oracle client library bug that causes\nweird spinlock deadlocks, not discovered by me but earlier by arsDigita).\n\nPessimistic locking is available in PG and real RDBMS systems like\nOracle. That's not proof that pessimistic locking is the right thing\nto do as not only your default locking but your only locking.\n\nMonty's not fair, and I think most people here know it. They lie,\nobfuscate, refuse to update comparison charts to new versions, etc\netc etc.\n\nI wouldn't trust him to pack my parachute, that's for sure.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 15 Aug 2000 20:39:36 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open Source Database Routs Competition in New Benchmark Tests" }, { "msg_contents": "(re-posted to GENERAL)\n\nAll,\n\nPlease see http://www.greatbridge.com/news/p_081620001.html for more\nbackground info about how the tests were conducted.\n\nBest,\n\nNed Lilly\nVP Hacker Relations\nGreat Bridge\n\n\nMatthew Kirkwood wrote:\n\n> That said, it would be nice to see some actual\n> numbers and configurations for the postgresql end\n> of things; What hardware was used? What versions\n> of what system software? 
What OS tuning was done?\n> What parameters were supplied to postgres?\n>\n> Matthew.\n\n", "msg_date": "Wed, 16 Aug 2000 03:10:32 -0400", "msg_from": "Ned Lilly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Open Source Database Routs Competition in New\n\tBenchmarkTests" } ]
[ { "msg_contents": "[email protected] wrote:\n> \n> Our non-profit organization needs a skilled volunteer to create a\n> searchable database of abusive cops. Any SQL-type database program is\n> OK- others may be suitable also. We've already compiled the\n> questions/outline around which the database inputs will be structured.\n> \n> Programming credit will be prominently given upon request, and/or\n> references will be provided to parties selected by the programmer.\n> \n> This is not a left-wing, knee-jerk anti-cop endeavor, but a serious,\n> long-term attempt to foster professional accountability within the ranks\n> of law enforcement. We've already been mentioned in the Village Voice\n> and the LA Weekly.\n> \n> Please forward this query to any acquaintances who may be interested in\n> this type of subject.\n> \n> Thanks in advance for your assistance.\n\nhave you had a suitable response? i might be able to help.\n\nthanks,\n\nq\n", "msg_date": "Tue, 15 Aug 2000 11:47:57 -0700", "msg_from": "Qiron Adhikary <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Copwatch database" } ]
[ { "msg_contents": "On v7.0.2:\n\nI have a function preferred(text, text). It returns the second argument \nif the second is not null or the first if the second is null.\nI understand I can use coalesce, but this is a simple case and not \npractical but illustrates the point.\n\nIf I do select col1, col2, preferred(col1, col2) as col3 col3 only contains \nvalues where col2 had a non-null value.\n\ncreate function preferred(text, text)\nreturns text\nas '\ndeclare\n first alias for $1;\n second alias for $2;\nbegin\n if second isnull\n then\n return first;\n else\n return second;\n end if;\nend;'\nlanguage 'plpgsql';\n\ne.g.\n\ncol1|col2\n----+----\n Am | y\n Ba |NULL\n Ca | t\n\nI expect\n\ncol1|col2|col3\n----+----+-----\n Am | y | Amy\n Ba |NULL| Ba\n Ca | t | Cat\n\nI get\n\ncol1|col2|col3\n----+----+-----\n Am | y | Amy\n Ba |NULL|NULL\n Ca | t | Cat\n\nMy major question is how to pass NULL values or values that could be \npotentially NULL into the function and get a reliable result.\n\n From what I can gather the function only gets called when both values are \npresent and not when any of them are NULL. Is it because there isn't a \nmatch for preferred(text, NULL) or is it something else?\n\n\n-\n- Thomas Swan\n- Graduate Student - Computer Science\n- The University of Mississippi\n-\n- \"People can be categorized into two fundamental\n- groups, those that divide people into two groups\n- and those that don't.\"", "msg_date": "Tue, 15 Aug 2000 14:04:39 -0500", "msg_from": "Thomas Swan <[email protected]>", "msg_from_op": true, "msg_subject": "Functions and Null Values" }, { "msg_contents": "Thomas Swan <[email protected]> writes:\n> From what I can gather the function only gets called when both values are \n> present and not when any of them are NULL.\n\nIt's sillier than that: the function does actually get called, and then\nthe return value is thrown away and replaced with a NULL. 
This is an\ninherent limitation of the old function-call interface. It is fixed for\n7.1 but I don't know of any good workaround for 7.0.* or before.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Aug 2000 23:20:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Functions and Null Values " } ]
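The COALESCE workaround Thomas mentions, and the pre-7.1 behaviour Tom describes, can be modelled outside the database. A minimal Python sketch, using None to stand in for SQL NULL (this models only the observable behaviour, not the fmgr internals):

```python
def coalesce(*args):
    """SQL COALESCE: the first non-NULL argument, else NULL (None here)."""
    return next((a for a in args if a is not None), None)

def preferred(first, second):
    """Thomas's function: the second argument unless it is NULL."""
    return second if second is not None else first

def call_pre71(func, *args):
    """Pre-7.1 call convention as Tom describes it: the function still
    runs, but with any NULL argument its result is discarded and NULL
    is returned in its place."""
    result = func(*args)
    return None if any(a is None for a in args) else result

# The row ('Ba', NULL) comes back NULL under the old interface,
# while COALESCE(col2, col1) yields the intended 'Ba'.
assert call_pre71(preferred, "Ba", None) is None
assert coalesce(None, "Ba") == "Ba"
```

On 7.0.x, `SELECT coalesce(col2, col1)` should sidestep the limitation, since COALESCE is expanded by the parser (into a CASE expression) rather than dispatched through the old function-call interface.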
[ { "msg_contents": ">Thomas -\n>A general design question. There seems to be a good reason to\n>allow +/-Inf in float8 columns: Tim Allen has a need for them, for\n>example. That's pretty straightforward, they seem to act properly if\n>the underlying float libs handle them.\n\nThanks for pursuing this, Ross. I shall look forward to not having to use\na workaround in future versions.\n\n>I'm not convinced NaN gives us anything useful, especially given how\n>badly it breaks sorting. I've been digging into that code a little,\n>and it's not going to be pretty. It strikes me as wrong to embed type\n>specific info into the generic sorting routines.\n\nActually, we also have a use for NaN. The main thing we're using this for\nis to store \"fields\", ie general descriptions of particular items of\nmetadata that occur in our application. Each field can have a validity\ncondition (ie min <= X <= max), and for open ranges we find the easiest\nway to handle that without needing any extra bool flags or whatever is\njust to set the minimum value to -infinity and/or the max to +infinity.\n\nOur fields also have a default value, used in case the user didn't\nactually enter anything. However, we want to be able to support something\nlike a NULL, so that if the user doesn't enter anything then in some cases\nthat means \"there is no value\". These values get passed around inside our\napplications in various ways, in subsystems that don't have any direct\ncommunication with the database, so using a database NULL doesn't do the\njob. An NaN, however, is perfect for the job, because you can transmit\nNaN's down sockets between processes, you can copy them around without\nneeding any special handling, and you can (currently) write them to and\nread them from the database. So, for what it's worth, here's one vote for\nkeeping NaN's. As for sorting, we don't really care how they sort. 
Any\nconsistent behaviour will do for us.\n\nYes, there is a difference between an NaN and a NULL, but that difference\nis not important in our application. We never do any serious arithmetic on\nour float8 values, we just store them in the database and allow users to\nview and edit the values.\n\n>So, anyone have any ideas what NaN would be useful for? Especially given\n>we have NULL available, which most (non DB) numeric applications don't.\n\nIt's this last point that makes NaN's useful; most non DB numeric\napplications don't have a NULL, and NaN can make an adequate substitute.\nOne thing we could do, I suppose, is add code to our db interface layer to\ntranslate NaN's to NULL's and vice versa. But if we don't have to, we'd be\nhappier :-).\n\n>Ross\n\nTim\n\n--\n-----------------------------------------------\nTim Allen [email protected]\nProximity Pty Ltd http://www.proximity.com.au/\n http://www4.tpg.com.au/users/rita_tim/\n\n", "msg_date": "Wed, 16 Aug 2000 11:16:24 +1000 (EST)", "msg_from": "Tim Allen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" }, { "msg_contents": "On Wed, Aug 16, 2000 at 11:16:24AM +1000, Tim Allen wrote:\n> \n> Thanks for pursuing this, Ross. I shall look forward to not having to use\n> a workaround in future versions.\n\nSee, Tim knows how to get work out of Open Source programmers. Flattery, not\nflames ;-)\n\n> \n> Actually, we also have a use for NaN. The main thing we're using this for\n<snip>\n> read them from the database. So, for what it's worth, here's one vote for\n> keeping NaN's. As for sorting, we don't really care how they sort. Any\n> consistent behaviour will do for us.\n> \n\nRight. Currently, NaN's break sorting of everything else in the column. \nNot good. But Thomas mentioned a possible clever work around. 
I've got to\ndig into the code a bit more to see if it'll work.\n\n> Yes, there is a difference between an NaN and a NULL, but that difference\n> is not important in our application. We never do any serious arithmetic on\n> our float8 values, we just store them in the database and allow users to\n> view and edit the values.\n> \n> >So, anyone have any ideas what NaN would be useful for? Especially given\n> >we have NULL available, which most (non DB) numeric applications don't.\n> \n> It's this last point that makes NaN's useful; most non DB numeric\n> applications don't have a NULL, and NaN can make an adequate substitute.\n> One thing we could do, I suppose, is add code to our db interface layer to\n> translate NaN's to NULL's and vice versa. But if we don't have to, we'd be\n> happier :-).\n\nWell, this is open source: all we need is one customer, if the idea\nis sound. (Usually, that's the coder themselves, but not always. And\nconversely, if it's a lousy idea, it doesn't matter how many people\nwant it!) I had forgotten that the DB often interacts with non-DB\ncode. (What, you mean psql and hand typed queries isn't good enough for\nyour clients?) 'Course, I'm the type that's been known to code figures\ndirectly in Postscript because the drawing package wouldn't do what I\nwanted to.\n\nI'll definitely look into this some more. If we can solve the sort\nproblem without too big a kludge, I think we might be able to let people\ndo serious math in the backend, let the non-finites fly!\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Tue, 15 Aug 2000 21:26:54 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] +/- Inf for float8's" } ]
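The sorting breakage Ross describes, and the open-range trick Tim uses, both follow directly from IEEE 754 comparison rules. Python floats are IEEE 754 doubles, like float8, so the behaviour can be illustrated outside the backend:

```python
import math

inf = float("inf")
nan = float("nan")

# Tim's open validity ranges: infinite endpoints compare cleanly,
# so min <= x <= max works without any extra "unbounded" bool flags.
assert -inf <= 42.0 <= inf
assert inf > 1e308            # +Inf orders above every finite value

# NaN is unordered: every comparison is false, including with itself.
assert not (nan < 1.0) and not (nan > 1.0) and not (nan == nan)

# That confuses any comparison-based sort: with CPython's sort, the
# finite values around a NaN are simply left out of order.
out = sorted([3.0, nan, 1.0, 2.0])
finite = [x for x in out if not math.isnan(x)]
assert finite != sorted(finite)   # the column is no longer ordered
```

This is why ±Inf can go in with little trouble while NaN needs special handling in the sort routines.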
[ { "msg_contents": "\nhttp://slashdot.org/article.pl?sid=00/08/16/0010230&mode=thread\n\nhttp://www.devshed.com/BrainDump/MySQL_Benchmarks/\n", "msg_date": "Wed, 16 Aug 2000 14:26:31 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "MySQL disputes benchmarks.." }, { "msg_contents": "> http://slashdot.org/article.pl?sid=00/08/16/0010230&mode=thread\n> http://www.devshed.com/BrainDump/MySQL_Benchmarks/\n\nThe thing that's especially funny to me is that Monty is the first one\nto cry, but has been making qualitative, unsubstantiated claims about\nPostgres' supposed shortcomings for years (OK, he thinks they are\nsubstantiated by *his* single-user benchmarks :). And contrary to his\nclaims in the past, imho Postgres has been quite silent on the issue of\ntesting fairness and MySQL attributes, trying to let the product do the\ntalking instead.\n\nGreat Bridge (not \"the Postgres people\" as he claims) actually went to\nthe trouble to do specific multi-user stress tests using published\nindustry-standard benchmarks, which is the closest thing to a fair test\nwe've ever seen. Everyone was surprised by the results. It will be nice\nto see a few more of these kinds of fair tests in the future.\n\nAll imho of course...\n\n - Thomas\n", "msg_date": "Wed, 16 Aug 2000 06:56:13 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL disputes benchmarks.." } ]
[ { "msg_contents": "\n> The thing that was the most fun about this (the PostgreSQL steering\n> committee got a sneak preview of the results a couple of \n> months ago) was\n> that we have never made an effort to benchmark Postgres against other\n> databases, so we had no quantitative measurement on how we were doing.\n> And we are doing pretty good!\n\nI think what would be of most value is a benchmark that joe-programmer can download\nand test for himself. Is this benchmark available for private use ?\nWhy would a benchmark need a tool (like Benchmark Factory) ?\nI think an easy do-it-yourself benchmark would be an extremely good\nadvertising effort if it holds what Xpert Inc. says.\n\nAndreas\n", "msg_date": "Wed, 16 Aug 2000 10:32:32 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Open Source Database Routs Competition in New Bench\n\tmark Tests" } ]
[ { "msg_contents": "\n> > Given all this performance discussion, has anyone seen any\n> > numbers regarding the speed of PostgreSQL vs Oracle?\n> \n> Oracle and MS SQL Server must have been the two\n> \"leading commercial RDBMSes\" mentioned in the\n> article.\n\nThey mention Linux as one of the OS'es tested. Don't tell me they compared\nnumbers under different OS's, like PostgreSQL on RedHat and M$Sql on NT.\nThus my conclusion would be it can't be M$sql.\n\nAndreas \n", "msg_date": "Wed, 16 Aug 2000 10:39:53 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Open Source Database Routs Competition in New Bench\n\tmark Tests" }, { "msg_contents": "On Wed, 16 Aug 2000, Zeugswetter Andreas SB wrote:\n\n> \n> > > Given all this performance discussion, has anyone seen any\n> > > numbers regarding the speed of PostgreSQL vs Oracle?\n> > \n> > Oracle and MS SQL Server must have been the two\n> > \"leading commercial RDBMSes\" mentioned in the\n> > article.\n> \n> They mention Linux as one of the OS'es tested. Don't tell me they compared\n> numbers under different OS's, like PostgreSQL on RedHat and M$Sql on NT.\n> Thus my conclusion would be it can't be M$sql.\n\nIMHO, and I think this is pretty common across most ppl in the computer\nfield, *any* benchmark generated by *anyone* has to be taken with a very\nvery large grain of salt. I don't care if it's Progress benchmarking\nMySQL against the rest, or Great Bridge benchmarking PostgreSQL against the\nrest, or Oracle doing their own against the rest ... the results of any\nbenchmark are going to favor whom the benchmarker wants to favor,\nperiod. Not because of any malicious act on the benchmarker side, but\nbecause the results that are presented generally don't show the whole\npicture, or the benchmarker spent a bit more time tweaking the server that\nthey care about, or ... 
101 other reasons ...\n\n From what I've gathered in the threads, the tests that GB did were mainly\nSELECT based ... fill a table, vacuum it and then run SELECTs against that\n... but if GB were to release their exact tests, could the MySQL folks\nre-run those same tests and have them come out in their favor? My guess\nis probably ... same with Oracle ... same with Informix ...\n\nNow, a *good* benchmark would be for all the various vendors getting\ntogether, agreeing on a set of benchmark tests as well as agreeing on the\nenvironment (ie. everyone runs on a Dual-PIII 500 with 512Meg of RAM, and\nthese drives, this OS, etc) and they each run their own tests ... then\neach vendor could sit down and optimize their software as only they really\nknow how and *then* see how each compares against the other ...\n\n*that* is a test that I'd love to see the results of ...\n\nthat's just my opinion ... it's nice to finally see some tests out there\nthat do show us in front, and I thank GB for going through the trouble\nof doing this, but I'm more a believer in what I can *see* in a real life\nenvironment vs what a test environment shows, which is why I started in\nwith PostgreSQL in the first place, and why I've stuck with it all these\nyears *shrug*\n\n\n", "msg_date": "Wed, 16 Aug 2000 10:05:56 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Open Source Database Routs Competition in New Bench mark\n Tests" }, { "msg_contents": "At 10:05 AM 8/16/00 -0300, The Hermit Hacker wrote:\n\n>From what I've gathered in the threads, the tests that GB did were mainly\n>SELECT based ... fill a table, vacuum it and then run SELECTs against that\n>... but if GB were to release their exact tests, could the MySQL folks\n>re-run those same tests and have them come out in their favor? My guess\n>is probably ... \n\nI wouldn't bet on it. 
Even MySQL's own benchmark page shows Postgres 6.5\nbeating MySQL for selects with JOIN, quite handily. Since real-world\ndatabase usage depends heavily on JOINs I wouldn't be surprised if the\nstandard benchmark used by Xperts contains lots and lots of joins,\nwhich would tend to make MySQL run slow.\n\nMySQL is good at one thing, and one thing only: running simple queries\nin single-user mode.\n\n>Now, a *good* benchmark would be for all the various vendors getting\n>together, agreeing on a set of benchmark tests as well as agreeing on the\n>environment (ie. everyone runs on a Dual-PIII 500 with 512Meg of RAM, and\n>these drives, this OS, etc) and they each run their own tests ... then\n>each vendor could sit down and optimize their software as only they really\n>know how and *then* see how each compares against the other ...\n\n>*that* is a test that I'd love to see the results of ...\n\nThat's how the normal TPC testing is done, I believe. Except on huge \nhonkin' hardware.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Wed, 16 Aug 2000 07:42:18 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Open Source Database Routs Competition in New Bench mark\n Tests" }, { "msg_contents": "On Wed, 16 Aug 2000, Don Baccus wrote:\n\n> >*that* is a test that I'd love to see the results of ...\n> \n> That's how the normal TPC testing is done, I believe. Except on huge \n> honkin' hardware.\n\nOkay, my understanding of the 'normal TPC testing' is that Oracle goes\nout, buys this major system to run their tests on and submits those\n... then MicroSloth goes out and buys one that happens to be bigger and\nfaster to run theirs on and submits those ...\n\n... 
the idea being that you basically invest the money into the hardware\nrequired to make yours look good, but all run the same test suite ...\n\n", "msg_date": "Wed, 16 Aug 2000 12:24:29 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Open Source Database Routs Competition in New\n\tBench mark Tests" }, { "msg_contents": "> That's how the normal TPC testing is done, I believe. Except on huge\n> honkin' hardware.\n\nRight. And it's on huge honkin' hardware because you won't see a number\npublished which doesn't *win* in the commercial wars. That said, I\nsuppose you wouldn't have seen the GB results if they turned out sucky\nfor Postgres. You probably wouldn't see GB anywhere if Postgres wasn't\ncompetitive in performance during their evaluation phase of the company\nstartup :)\n\notoh, GB *did* do the tests on hardware representative of equipment\nsmall- and medium-sized companies would be using, and has been (afaik)\nforthcoming about the test setup (at least to the extent that they can\ngiven the restrictive licensing of some of the tested products).\n\n - Thomas\n", "msg_date": "Wed, 16 Aug 2000 15:26:54 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Open Source Database Routs Competition inNew Bench mark Tests" }, { "msg_contents": "At 03:26 PM 8/16/00 +0000, Thomas Lockhart wrote:\n>> That's how the normal TPC testing is done, I believe. Except on huge\n>> honkin' hardware.\n>\n>Right. And it's on huge honkin' hardware because you won't see a number\n>published which doesn't *win* in the commercial wars. That said, I\n>suppose you wouldn't have seen the GB results if they turned out sucky\n>for Postgres. You probably wouldn't see GB anywhere if Postgres wasn't\n>competitive in performance during their evaluation phase of the company\n>startup :)\n\nExactly! 
Little Stick Over River, maybe, but not Great Bridge with \n$25M of funding!\n\n>otoh, GB *did* do the tests on hardware representative of equipment\n>small- and medium-sized companies would be using, and has been (afaik)\n>forthcoming about the test setup (at least to the extent that they can\n>given the restrictive licensing of some of the tested products).\n\nThey even split index and data files onto different platters in\nOra...oops \"Proprietary 1, V8.1.5\", seems more than fair ...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Wed, 16 Aug 2000 10:47:01 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Open Source Database Routs Competition inNew Bench mark\n Tests" } ]
[ { "msg_contents": "Hello,\n\nMy name is Jimmy Wu. I work for a startup company called eServ. Our OS is\nFreeBSD, web server is Apache, and I use PHP to write some web applications\naccessing a PostgreSQL database (version 7.0.2).\nI have some questions:\n1. I explicitly locked a table and the program somehow crashed. Then that\npostgres process just sits there and never returns the lock. How can I prevent\nthis from happening?\n2. After the crash, I don't know why...there are two duplicate rows in that\ntable. This should never happen since there are primary keys.\n\nPlease tell me what could cause this? Thank you very much for your time.\n\nJimmy\n\n", "msg_date": "Wed, 16 Aug 2000 08:37:52 -0500", "msg_from": "\"Jimmy Wu\" <[email protected]>", "msg_from_op": true, "msg_subject": "pls help" }, { "msg_contents": "\nOn Wed, 16 Aug 2000, Jimmy Wu wrote:\n\n> Hello,\n> \n> My name is Jimmy Wu. I work for a startup company called eServ. Our OS is\n> FreeBSD, web server is Apache, and I use PHP to write some web applications\n> accessing a PostgreSQL database (version 7.0.2).\n> I have some questions:\n> 1. I explicitly locked a table and the program somehow crashed. Then that\n> postgres process just sits there and never returns the lock. How can I prevent\n> this from happening?\n\nWhich program crashed? The php application? I'm guessing from the\nmention below of the postgres process sitting there that it didn't.\nWell, if the postgres process crashed (or maybe if you can SIGSEGV it in\nthat state), you hopefully will get a core you can back trace. Also,\ndo your backend logs have any warning or error messages from before it\noccurred?\n\n> 2. After the crash, I don't know why...there are two duplicate rows in that\n> table. 
This should never happen since there are primary keys.\n\nDid the session that crashed insert a value into the table?\n\n", "msg_date": "Wed, 16 Aug 2000 09:50:20 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pls help" } ]
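For Jimmy's first question: an explicit LOCK TABLE is only held until its transaction ends, so the usual client-side defence is to take the lock inside a transaction and guarantee a ROLLBACK on every error path; if the client truly dies mid-transaction, the backend releases the lock when the connection closes. A sketch of that pattern (the `FakeConn` class below is an invented stand-in for a real driver connection, just to make the shape of the pattern concrete):

```python
from contextlib import contextmanager

class FakeConn:
    """Invented stand-in for a driver connection; tracks the open txn."""
    def __init__(self):
        self.in_txn = False
    def execute(self, sql):
        if sql == "BEGIN":
            self.in_txn = True
    def commit(self):
        self.in_txn = False
    def rollback(self):
        self.in_txn = False

@contextmanager
def locked_table(conn, table):
    conn.execute("BEGIN")
    conn.execute("LOCK TABLE %s" % table)  # held until COMMIT/ROLLBACK
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()   # the error path still ends the txn -> lock freed
        raise

conn = FakeConn()
try:
    with locked_table(conn, "accounts"):
        raise RuntimeError("application crashed")
except RuntimeError:
    pass
assert not conn.in_txn    # no transaction (and so no lock) left behind
```

The same shape can be written in PHP with a try/catch around the locked section; the point is that every exit path ends the transaction.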
[ { "msg_contents": "Is anyone working on a set of types and transformations suitable for\ngeographic coordinates, e.g., to use in mapping or GIS applications?\nDo there exist any such types already? Note that I'm familiar with\nthe \"normal\" geometry types, but these are not really suitable for\n\"real\" mapping/GIS applications.\n\nThanks for any pointers.\n\nCheers,\nBrook\n\n", "msg_date": "Wed, 16 Aug 2000 07:55:59 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": true, "msg_subject": "geographic coordinate types?" }, { "msg_contents": "\nFor example? We use the point type for doing long/lat coordinates, and\nfor calculating distance between places on one of our projects ... but I\ntake it you are thinking of something more elaborate than that?\n\nOn Wed, 16 Aug 2000, Brook Milligan wrote:\n\n> Is anyone working on a set of types and transformations suitable for\n> geographic coordinates, e.g., to use in mapping or GIS applications?\n> Do there exist any such types already? Note that I'm familiar with\n> the \"normal\" geometry types, but these are not really suitable for\n> \"real\" mapping/GIS applications.\n> \n> Thanks for any pointers.\n> \n> Cheers,\n> Brook\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 16 Aug 2000 11:10:45 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: geographic coordinate types?" }, { "msg_contents": " For example? We use the point type for doing long/lat coordinates, and\n for calculating distance between places on one of our projects ... but I\n take it you are thinking of something more elaborate than that?\n\nYes. 
I was thinking of a 3D coordinate system and the various\ntransformations needed to span the globe, take into account\nmeasurement errors, etc.\n\nCheers,\nBrook\n", "msg_date": "Wed, 16 Aug 2000 08:49:02 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: geographic coordinate types?" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> For example? We use the point type for doing long/lat coordinates, and\n> for calculating distance between places on one of our projects ... but I\n> take it you are thinking of something more elaborate then that?\n> \n\ni got the impression that the \"real\" mapping types includes\ntransformation functions -- i.e., we have data in both state plane and\nlat/long (UTM) and they need to play nice together. some more details\nwould be nice -- there are free packages out there to do the\ntransformation at the application level (search for PROJ and gctpc for a\ncouple of pointers), but nobody i know of has needed anything like that\nenough to actually put it in the database itself. i think it would be\ninteresting to put out there exactly what you need to see what isn't\nthere, what can be worked around, what would be a good target for future\ndevelopment, etc. i've had a few \"wouldn't it be nice thoughts\" about\nsimilar things, but it's always been easier to workaround because as far\nas i can tell there's not a whole lot of interest in those features. \n\n-- \n\nJeff Hoffmann\nPropertyKey.com\n", "msg_date": "Wed, 16 Aug 2000 09:51:02 -0500", "msg_from": "Jeff Hoffmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: geographic coordinate types?" }, { "msg_contents": "There is a group working on a GIS front end to PostgreSQL over at:\n\n\thttp://fmaps.sourceforge.net/\n\nAlso, take a look at:\n\n\thttp://www.ossim.org/\n\nMore information can be found at:\n\n\thttp://freegis.org/\n\nT.\n\nBrook Milligan wrote:\n> \n> For example? 
We use the point type for doing long/lat coordinates, and\n> for calculating distance between places on one of our projects ... but I\n> take it you are thinking of something more elaborate than that?\n> \n\ni got the impression that the \"real\" mapping types include\ntransformation functions -- i.e., we have data in both state plane and\nlat/long (UTM) and they need to play nice together. some more details\nwould be nice -- there are free packages out there to do the\ntransformation at the application level (search for PROJ and gctpc for a\ncouple of pointers), but nobody i know of has needed anything like that\nenough to actually put it in the database itself. i think it would be\ninteresting to put out there exactly what you need to see what isn't\nthere, what can be worked around, what would be a good target for future\ndevelopment, etc. i've had a few \"wouldn't it be nice\" thoughts about\nsimilar things, but it's always been easier to work around because as far\nas i can tell there's not a whole lot of interest in those features. \n\n-- \n\nJeff Hoffmann\nPropertyKey.com\n", "msg_date": "Wed, 16 Aug 2000 09:51:02 -0500", "msg_from": "Jeff Hoffmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: geographic coordinate types?" }, { "msg_contents": "There is a group working on a GIS front end to PostgreSQL over at:\n\n\thttp://fmaps.sourceforge.net/\n\nAlso, take a look at:\n\n\thttp://www.ossim.org/\n\nMore information can be found at:\n\n\thttp://freegis.org/\n\nT.\n\nBrook Milligan wrote:\n> \n> For example? 
Add in the\naforementioned projection code and you have a reasonable start on a GIS.\n\nTim\n\nJeff Hoffmann wrote:\n> \n> i got the impression that the \"real\" mapping types includes\n> transformation functions -- i.e., we have data in both state plane and\n> lat/long (UTM) and they need to play nice together. some more details\n> would be nice -- there are free packages out there to do the\n> transformation at the application level (search for PROJ and gctpc for a\n> couple of pointers), but nobody i know of has needed anything like that\n> enough to actually put it in the database itself. i think it would be\n> interesting to put out there exactly what you need to see what isn't\n> there, what can be worked around, what would be a good target for future\n> development, etc. i've had a few \"wouldn't it be nice thoughts\" about\n> similar things, but it's always been easier to workaround because as far\n> as i can tell there's not a whole lot of interest in those features.\n> \n\n-- \nTimothy H. Keitt\nNational Center for Ecological Analysis and Synthesis\n735 State Street, Suite 300, Santa Barbara, CA 93101\nPhone: 805-892-2519, FAX: 805-892-2510\nhttp://www.nceas.ucsb.edu/~keitt/\n", "msg_date": "Wed, 16 Aug 2000 11:45:11 -0700", "msg_from": "\"Timothy H. Keitt\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: geographic coordinate types?" } ]
[ { "msg_contents": "At 10:39 AM 8/16/00 +0200, Zeugswetter Andreas SB wrote:\n>\n>> > Given all this performance discussion, has anyone seen any\n>> > numbersregarding the speed of PostgreSQl vs Oracle?\n>> \n>> Oracle and MS SQL Server must have been the two\n>> \"leading commercial RDBMSes\" mentioned in the\n>> article.\n>\n>They mention Linux as one of the OS'es tested. Dont tell me they compared\n>numbers under different OS's, like PostgreSQL on RedHat and M$Sql on NT.\n>Thus my conclusion would be it can't be M$sql.\n\nYes, they did, and they explicitly state that operating system differences\nmust be considered in the case of the one RDBMS that only runs under\nNT.\n\nAll the others were run on an identical Linux box.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Wed, 16 Aug 2000 07:38:31 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: Open Source Database Routs Competition in New Benchmark Tests" } ]
[ { "msg_contents": "Hi !!!\n\n\nA very little patch, but very important :D\n\nFiles like `global1.bki.source' and\n`local1_template1.bki.source' are generated with 'ame' (the\ncorrect is 'name').\n\n\n---------------------------------------------------------\n\ndiff -uNr postgresql-7.0.2.orig/src/backend/catalog/genbki.sh.in postgresql-7.0.2/src/backend/catalog/genbki.sh.in\n--- postgresql-7.0.2.orig/src/backend/catalog/genbki.sh.in\nTue Jan 11 02:02:28 2000\n+++ postgresql-7.0.2/src/backend/catalog/genbki.sh.in Wed\nAug 16 14:30:59 2000\n@@ -86,7 +86,7 @@\n -e \"s/[ ]Oid/\\ oid/g\" \\\n -e \"s/[ ]NameData/\\ name/g\" \\\n -e \"s/^Oid/oid/g\" \\\n- -e \"s/^NameData/\\name/g\" \\\n+ -e \"s/^NameData/name/g\" \\\n -e \"s/(NameData/(name/g\" \\\n -e \"s/(Oid/(oid/g\" \\\n -e \"s/NAMEDATALEN/$NAMEDATALEN/g\" \\\n\n---------------------------------------------------------\n\n\n--\nPaulo Henrique Rodrigues Pinheiro <[email protected]>\nUsu�rio Linux registrado com o n�mero 173191\nFa�a seu computador feliz: LINUX nele !!!\nhttp://www.conectiva.com.br/~nulo\n\n\n", "msg_date": "Wed, 16 Aug 2000 15:01:47 -0300 (BRST)", "msg_from": "Paulo Henrique Rodrigues Pinheiro <[email protected]>", "msg_from_op": true, "msg_subject": "Patch in pgsql 7.0.2 - genbki.sh.in" } ]
[ { "msg_contents": "Howdy,\n\nI am having a little trouble with the runcheck regression testing of the compilation that I just performed. I am running a Linux Red-Hat 6.2 OS on an Alpha box and currently have Postgres 6.5.3 running in non-default directories and on a non-default port. I wanted to install Postgres 7.0.2 in the default areas and convert over. I configured the default area and compiled the new version. Then I go to perform the runcheck regression test and it fails on the initdb step. In the initdb.log the error that is provided is: \n\nFATAL: s_lock(20306f80) at spinc:116, stuck spinlock. Aborting \n\nThis seems serious, but I have no idea where to start to fix this. I posted the question on general and was pointed to the archives of general and hackers. I found some references to spinlocks in hackers, regarding too many backend processes running at once and running out of kernel space...but I don't think that this is what is happening here. When I was performing the regression test, there was only one process running to the 6.5.3 installation, and just the runcheck on the 7.0.2. The 6.5.3 is configured to handle 34 backends and the 7.0.2 is configured to handle 64. The box has a max of 4 MB of shared memory, which should be plenty to handle this.\n\nPlease point me in the right direction if you can.\n\n\nThanks!\nDarrin\n\n\n\n\n\n\n\nHowdy,I am having \na little trouble with the runcheck regression testing of the compilation that I \njust performed.  I am running a Linux Red-Hat 6.2 OS on an Alpha box and \ncurrently have Postgres 6.5.3 running in non-default directories and on a \nnon-default port.  I wanted to install Postgres 7.0.2 in the default areas \nand convert over.  I configured the default area and compiled the new \nversion.  Then I go to perform the runcheck regression test and it fails on \nthe initdb step.  In the initdb.log the error that is provided is: \nFATAL: s_lock(20306f80) at spinc:116, stuck spinlock. 
Aborting \nThis seems serious, but I have no idea where to start to fix this.  \nI posted the question on general and was pointed to the archives of general and \nhackers.  I found some references to spinlocks in hackers, regarding too \nmany backend processes running at once and running out of kernel space...but I \ndon't think that this is what is happening here.  When I was performing the \nregression test, there was only one process running to the 6.5.3 installation, \nand just the runcheck on the 7.0.2.  The 6.5.3 is configured to handle 34 \nbackends and the 7.0.2 is configured to handle 64.  The box has a max of 4 \nMB of shared memory, which should be plenty to handle this.\n \nPlease point me in the right direction if you \ncan.\n \n \nThanks!\nDarrin", "msg_date": "Wed, 16 Aug 2000 14:04:38 -0500", "msg_from": "\"Darrin Ladd\" <[email protected]>", "msg_from_op": true, "msg_subject": "regression test failure on initdb" } ]
[ { "msg_contents": "Hello\n\nPostgreSQL 7.0.2 on Red Hat 6.2 & 5.2 & SuSe ?\n\nA long time ago it was advised that I use datetime_in(\"now\") to get the\ntime for use in contrib/spi/moddatetime.c.\n\n...\n Datum newdt; /* The current datetime. */\n...\n /* Get the current datetime. */\n newdt = datetime_in(\"now\"); // This is line 67\n...\n rettuple = SPI_modifytuple(rel, rettuple, 1, &attnum, &newdt, NULL);\n...\n\nBut now when I try to compile:\n\ngcc -I../../src/include -I../../src/backend -O2 -Wall\n-Wmissing-prototypes -Wmissing-declarations -fpic -I../../src/include \n-c -o moddatetime.o moddatetime.c\nmoddatetime.c: In function `moddatetime':\nmoddatetime.c:67: warning: implicit declaration of function\n`datetime_in'\nmoddatetime.c:89: `DATETIMEOID' undeclared (first use in this function)\nmoddatetime.c:89: (Each undeclared identifier is reported only once\nmoddatetime.c:89: for each function it appears in.)\nmake: *** [moddatetime.o] Error 1\nrm timetravel.o\n\nDoes the function datetime_in() still exist?\nIf not, what should be used in it's stead?\n\nI've looked around the docs and mail archives, but found nothing useful\non this.\n\nThanks\n\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner http://www.shell-connection.com\nProudly powered by R H Linux, Apache, PHP, PostgreSQL\n\"If you don't know where you are going, how can you get there?\"\n", "msg_date": "Wed, 16 Aug 2000 16:39:43 -0400", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "datetime_in()" }, { "msg_contents": "> Does the function datetime_in() still exist?\n> If not, what should be used in it's stead?\n\nUse timestamp_in() and TIMESTAMPOID instead. 
It is related to the topic\nin the release notes for 7.0 that datetime has been deprecated and the\n(formerly barely functional) timestamp has taken its place.\n\n - Thomas\n", "msg_date": "Thu, 17 Aug 2000 04:41:18 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: datetime_in()" } ]
[ { "msg_contents": "Howdy,\n\nI am having a little trouble with the runcheck regression testing of the compilation that I just performed. I am running a Linux Red-Hat 6.2 OS on an Alpha box and currently have Postgres 6.5.3 running in non-default directories and on a non-default port. I wanted to install Postgres 7.0.2 in the default areas and convert over. I configured the default area and compiled the new version. Then I go to perform the runcheck regression test and it fails on the initdb step. In the initdb.log the error that is provided is: \n\nFATAL: s_lock(20306f80) at spinc:116, stuck spinlock. Aborting \n\nThis seems serious, but I have no idea where to start to fix this. I posted the question on general and was pointed to the archives of general and hackers. I found some references to spinlocks in hackers, regarding too many backend processes running at once and running out of kernel space...but I don't think that this is what is happening here. When I was performing the regression test, there was only one process running to the 6.5.3 installation, and just the runcheck on the 7.0.2. The 6.5.3 is configured to handle 34 backends and the 7.0.2 is configured to handle 64. The box has a max of 4 MB of shared memory, which should be plenty to handle this.\n \nPlease point me in the right direction if you can.\n \n \nThanks!\nDarrin\n\n\n\n\n\n\n\nHowdy,I am having a little trouble with the runcheck regression \ntesting of the compilation that I just performed.  I am running a Linux \nRed-Hat 6.2 OS on an Alpha box and currently have Postgres 6.5.3 running in \nnon-default directories and on a non-default port.  I wanted to install \nPostgres 7.0.2 in the default areas and convert over.  I configured the \ndefault area and compiled the new version.  Then I go to perform the \nruncheck regression test and it fails on the initdb step.  In the \ninitdb.log the error that is provided is: FATAL: s_lock(20306f80) at \nspinc:116, stuck spinlock. 
Aborting This seems serious, but I have no \nidea where to start to fix this.  I posted the question on general and was \npointed to the archives of general and hackers.  I found some references to \nspinlocks in hackers, regarding too many backend processes running at once and \nrunning out of kernel space...but I don't think that this is what is happening \nhere.  When I was performing the regression test, there was only one \nprocess running to the 6.5.3 installation, and just the runcheck on the \n7.0.2.  The 6.5.3 is configured to handle 34 backends and the 7.0.2 is \nconfigured to handle 64.  The box has a max of 4 MB of shared memory, which \nshould be plenty to handle this.\n \nPlease point me in the right direction if you \ncan.\n \n \nThanks!\nDarrin", "msg_date": "Wed, 16 Aug 2000 15:56:06 -0500", "msg_from": "\"Darrin Ladd\" <[email protected]>", "msg_from_op": true, "msg_subject": "regression test failure on initdb" }, { "msg_contents": "We just got a report of a successful installation on Linux/Alpha from\nthe current development tree. Earlier versions, including 7.0.2,\nrequired patching. The easiest course for you to take is to install from\nsrc or binary RPMs, which already contain the patches for Alpha boxes.\nThe RPMs are available on the Postgres ftp site, and Lamar Owens is just\nabout ready to release an update, though I'll guess that the current RPM\nwill work just fine for you.\n\n7.1 should run out of the box for you, since it has a new \"64-bit\nfriendly\" function manager interface.\n\n - Thomas\n", "msg_date": "Thu, 17 Aug 2000 06:18:01 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: regression test failure on initdb" } ]
[ { "msg_contents": "I ask myself about the following problem:\n\nWhen PostgreSQL generates an index of a string column: is this\ncolumn not only used for equal-tests but also for greater-than\nor smaller-than tests ?\n\n\nMarten Feldtmann\n", "msg_date": "Wed, 16 Aug 2000 22:01:24 +0100", "msg_from": "[email protected] (Marten Feldtmann)", "msg_from_op": true, "msg_subject": "Index-Function for Strings ...." } ]
[ { "msg_contents": "============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\n\nYour name\t\t:\tNeil Bloomer\nYour email address\t:\[email protected]\n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) \t: Pentium II 266\n\n Operating System (example: Linux 2.0.26 ELF) \t: Red Hat Linux 6.1\n\n PostgreSQL version (example: PostgreSQL-7.0): PostgreSQL 7.0.2 on\ni686-pc-linux-gnu, (Red Hat RPM)\n\n Compiler used (example: gcc 2.8.0)\t\t: compiled by gcc\negcs-2.91.66\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\nThe to_timestamp function is not working as per the documentation. See the\nexamples below.\n\n\n\n\nPlease describe a way to repeat the problem. Please try to provide a\nconcise reproducible example, if at all possible: \n----------------------------------------------------------------------\nThe following results were returned when the queries were executed through\nipgsql, and similar results are returned through psql.\n\nselect to_timestamp('20000816000001', 'YYYYMMDDHH24MISS') returns\n'30/12/1899' (wrong)\nselect to_timestamp('2000 0816000001', 'YYYY MMDDHH24MISS') returns\n'16/08/2000 00:00:01' (ok)\nselect to_timestamp('000816000001', 'YYMMDDHH24MISS') returns '16/08/0001'\n(wrong)\n\n\nIf you know how this problem might be fixed, list the solution below:\n---------------------------------------------------------------------\n?\n\n\n\n\n\n\nBug in to_timestamp()\n\n\n============================================================================\n                        POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\n\nYour name               :       Neil Bloomer\nYour email address      :       [email protected]\n\n\nSystem Configuration\n---------------------\n  
Architecture (example: Intel Pentium)         : Pentium II 266\n\n  Operating System (example: Linux 2.0.26 ELF)  : Red Hat Linux 6.1\n\n  PostgreSQL version (example: PostgreSQL-7.0):   PostgreSQL 7.0.2 on i686-pc-linux-gnu, (Red Hat RPM)\n\n  Compiler used (example:  gcc 2.8.0)           : compiled by gcc egcs-2.91.66\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\nThe to_timestamp function is not working as per the documentation. See the examples below.\n\n\n\n\nPlease describe a way to repeat the problem.   Please try to provide a\nconcise reproducible example, if at all possible: \n----------------------------------------------------------------------\nThe following results were returned when the queries were executed through ipgsql, and similar results are returned through psql.\nselect to_timestamp('20000816000001', 'YYYYMMDDHH24MISS') returns '30/12/1899' (wrong)\nselect to_timestamp('2000 0816000001', 'YYYY MMDDHH24MISS')  returns '16/08/2000 00:00:01' (ok)\nselect to_timestamp('000816000001', 'YYMMDDHH24MISS')  returns '16/08/0001' (wrong)\n\n\nIf you know how this problem might be fixed, list the solution below:\n---------------------------------------------------------------------\n?", "msg_date": "Wed, 16 Aug 2000 22:41:18 +0100", "msg_from": "\"Gqms2 Galway\" <[email protected]>", "msg_from_op": true, "msg_subject": "Bug in to_timestamp()" }, { "msg_contents": "On Wed, 16 Aug 2000, Gqms2 Galway wrote:\n\n> \n> Please enter a FULL description of your problem:\n> ------------------------------------------------\n> The to_timestamp function is not working as per the documentation. See the\n> examples below.\n> \n\n No. It is not bug. 
Where is your example in the documentation?\n\n Instead, the documentation says:\n\n\tYYYY = year (4 or more digits)\n ^^^^^^^^^^^\n\n The timestamp range is 4714 BC -- 1465001 AD.\n\n> select to_timestamp('20000816000001', 'YYYYMMDDHH24MISS') returns\n> '30/12/1899' (wrong)\n\n PostgreSQL does not directly limit the year, so to_timestamp() stops \nparsing YYYY at the first non-digit char. \n\n> select to_timestamp('2000 0816000001', 'YYYY MMDDHH24MISS') returns\n> '16/08/2000 00:00:01' (ok)\n\nYes, that's right.\n\n If you want to store a full timestamp in one big number, it is better to\nkeep the year at the end of that number, like:\n\ntest=# select to_timestamp('08160000012000', 'MMDDHH24MISSYYYY');\n to_timestamp\n------------------------\n 2000-08-16 00:00:01+02\n\n\n And YYY, YY, Y ... it's *hell*, and we support it because Oracle has it \ntoo. What number would you create from:\n\t\n\t'01' -- 'YY' ---> 2001, 1901 or 0001 .. grrrr\n\n to_timestamp() uses the last possibility.\n\nAny comments/suggestions about years greater than 9999 in \nto_timestamp() / to_date()?\n\n\nThanks,\n \t\t\tKarel\n\n", "msg_date": "Thu, 17 Aug 2000 12:47:43 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in to_timestamp()" } ]
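Karel's point about the greedy YYYY field in the thread above (year is 4 *or more* digits, since the timestamp range runs well past year 9999) can be reproduced outside the database. The regexes below are stand-ins invented for illustration, not the actual to_timestamp() parser.

```python
import re

s = "20000816000001"

# A "4 or more digits" year field is greedy: it swallows the whole number,
# leaving nothing for the month/day/time fields -- hence the wrong results.
greedy_year = re.match(r"(?P<year>\d{4,})", s).group("year")

# With the year moved to the end (MMDDHH24MISSYYYY), every earlier field is
# fixed-width, so an open-ended year at the tail is unambiguous.
m = re.match(r"(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})(?P<year>\d{4,})$", "08160000012000")
```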
[ { "msg_contents": "As postgres becomes better and more in the spotlight there are a couple of \nissues I think that the hacker group might want to address to better prepare \nit for the enterprise and highend production systems. Currently postgres \nwill support an incredible amount of tables whereas Interbase only supports \n64K, but the efficiency and performance of the pg backend quickly \ndegenerates after 1000 tables. I know that most people will think that \nfilesystem will be the bottleneck but as XFS nears completion the problem \nwill shift back to pg. It is my understanding that the system tables where \nlookups on tables occur are always done sequentially and not using any more \noptimized (btree etc) solution. I also think this may be applicable to the \ntoastable objects where large # of objects occur. I want to start to look \nat the code to maybe help out but have a few questions:\n1) When referencing a table is it only looked up once and then cached or \ndoes a scan of the system table occur only once per session.\n2) Which files should I look at in tree.\n3) Any tips, suggestions, pitfalls I should remember.\n\nThanx for the pointers,\nCarl Garland\n________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com\n\n", "msg_date": "Thu, 17 Aug 2000 06:17:01 EDT", "msg_from": "\"carl garland\" <[email protected]>", "msg_from_op": true, "msg_subject": "Large # of Tables, Getting ready for the enterprise" }, { "msg_contents": "On Thu, Aug 17, 2000 at 06:17:01AM -0400, carl garland wrote:\n> As postgres becomes better and more in the spotlight there are a couple of \n> issues I think that the hacker group might want to address to better prepare \n> it for the enterprise and highend production systems. 
Currently postgres \n> will support an incredible amount of tables whereas Interbase only supports \n> 64K, but the efficiency and performance of the pg backend quickly \n> degenerates after 1000 tables. I know that most people will think that \n> filesystem will be the bottleneck but as XFS nears completion the problem \n\nRealize that pg is a cross platform program. The existence (or not) of a \nparticular filesystem cannot be assumed, nor does the release of a new FS\non linux, for example, impact the entire postgresql community.\n\n> will shift back to pg. It is my understanding that the system tables where \n> lookups on tables occur are always done sequentially and not using any more \n> optimized (btree etc) solution. I also think this may be applicable to the \n\nNope. The pg_class table has indices on oid and relname, which are used when\nappropriate. In addition, there is a cache for system tables (syscache).\n\n> toastable objects where large # of objects occur. I want to start to look \n> at the code to maybe help out but have a few questions:\n> 1) When referencing a table is it only looked up once and then cached or \n> does a scan of the system table occur only once per session.\n\nThis would be the syscache, for the system tables, or the relcache, which\ncaches relation (i.e. table) descriptors for the general case. There's\nbeen some discussion and work regarding cleaning up the separation between\naccessing relations and storing them in particular files. At one point,\nTom Lane mentioned that he was wondering if the knowledge of the relcache\nneeds to be moved out of the bufmgr/smgr interfaces (that's the buffer\nmanager and storage manager, respectively) as part of that cleanup.\n\n> 2) Which files should I look at in tree.\n\nHmm, an awful lot of them. \n\nfind . -name \\*.[chyl] |xargs grep -l syscache | wc -l\n 98\nfind . 
-name \\*.[chyl] |xargs grep -l relcache | wc -l\n 43 \n\nAnd that's not nearly exhaustive.\n\n> 3) Any tips, suggestions, pitfalls I should remember.\n\nTake a look at:\n\nhttp://postgresql.org/docs/faq-dev-english.html\n\nand \n\nfile:/where/ever/you/put/pgsql/src/tools/backend/index.html\n\nVery useful for getting the broad overview and links into your\nlocal filetree, with descriptions.\n\nThis is a very complex bit of code, right at the heart of the\ndatabase. I've found it a mighty steep learning curve. Perhaps not the\nbest place to start for a first backend coding project. But dig in: for\nme, there's no better way to learn than breaking code, and fixing it. The\ncore developers are good at not letting stupid ideas get committed to CVS.\n\nOne thing not mentioned above: memory management is quite complex, as well,\nand underlies all the other code (of course). Take a look at the mmgr code.\n\nIn fact, if you've got any skill at code analysis and documentation,\nfurther diagramming and describing of the interrelationships between\nthe different caches and managers (syscache, relcache, bufcache, bufmgr,\nsmgr, mmgr etc.) would be a welcome addition to the work Bruce has done\nwith the developers FAQ, above.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n\n", "msg_date": "Thu, 17 Aug 2000 10:35:47 -0500", "msg_from": "\"Ross J. 
Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large # of Tables, Getting ready for the enterprise" }, { "msg_contents": "\"carl garland\" <[email protected]> writes:\n> Currently postgres will support an incredible amount of tables whereas\n> Interbase only supports 64K, but the efficiency and performance of the\n> pg backend quickly degenerates after 1000 tables.\n\nCurrent sources fix some problems with large numbers of indexes\n(pg_index was being sequentially scanned in several places). Offhand\nI'm not aware of any other significant real-world performance problems\nin this area; can you be more specific about what's bothering you?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 Aug 2000 01:03:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large # of Tables, Getting ready for the enterprise " } ]
[ { "msg_contents": "> The thing that's especially funny to me is that Monty is the first one\n\n> to cry, but has been making qualitative, unsubstantiated claims about\n> Postgres' supposed shortcomings for years (OK, he thinks they are\n\nRemembering \"crash-me\" ...\n\n> substantiated by *his* single-user benchmarks :). And contrary to his\n> claims in the past, imho Postgres has been quite silent on the issue\nof\n> testing fairness and MySQL attributes, trying to let the product do\nthe\n> talking instead.\n\nOnly when somebody comes to a pgsql-* list and starts talking about it,\nthey get some (well, sometimes heated) argumentation.\n\nI would strongly urge people to let this argumentation go - don't get\ninvolved in a pissing contest. Instead, put the test and any other\nrelevant documentation on the home page.\n\n> Great Bridge (not \"the Postgres people\" as he claims) actually went to\n\n> the trouble to do specific multi-user stress tests using published\n> industry-standard benchmarks, which is the closest thing to a fair\ntest\n> we've ever seen. Everyone was suprised by the results. It will be nice\n\n> to see a few more of these kinds of fair tests in the future.\n\nAt least the test shows that PostgreSQL is fastest under _some_\ncircumstances. I think it would be far fetched to claim PostgreSQL to be\nfastest for all purposes, but that's not what I've been reading.\n\nIt would be nice to see PostgreSQL go through a real TPC test, so it\ncould claim a place on the \"most transactions for the buck\" list. But I\nbelieve it's very expensive ?\n\n--\nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\n\nKaki Data tshirts, merchandize Fax: 3816 2501\n\nHowitzvej 75 �ben 14.00-18.00 Email: [email protected]\n\n2000 Frederiksberg L�rdag 11.00-17.00 Web: www.suse.dk\n\n\n\n", "msg_date": "Thu, 17 Aug 2000 08:40:34 +0200", "msg_from": "Kaare Rasmussen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MySQL disputes benchmarks.." } ]
[ { "msg_contents": "Hi all,\n\nI downloaded the current ( 12 Aug ) development snapshot of the 7.1\ndevelopment software.\n\nIt runs great ! - except I am having trouble with the ssl enabled\nconnections...\n\nI used a certificate that I have generated via a mod_ssl installation\nto provide 2 files :\n\nserver.key\nserver.crt\n\nin the $PGDATA directory. This enabled the postmaster to start ok.\n\nUnfortunatly I cannot connect to the server usng psql ( or I suspect\nanything else ). I get an error like :\n\"couldn't send SSL negotiation packet (not connected) \". This appears to\nbe coming from fe-connect.c.\n\nAm I geting this because this fearture is not yet implemented, or am I\njust being a plonker and not configured ssl properly...( I wondered if\nI needed a client certificate too...) ?\n\nAnyway\n\nThanks for providing acecss at an interesting stage of your development\n\nregards\n\nMark\n\n", "msg_date": "Thu, 17 Aug 2000 20:09:35 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "Connections Implementing SSL in 7.1 Dev" } ]
[ { "msg_contents": "in C, I work on a database (4 table).\n\n\tCOPY FROM file ;\n\tSELECT, INSERT, UPDATE, DELETE for a result in the last \t\t\t\ttable.\n\tCOPY TO file ;\n\nin the file, are stored 10000 informations.\n\n-> It's slowly !!!\nCan you help me for optimization this?\n\nI didn't use Oid, trigger and function SQL.\n\nThanks.\nJerome.\n", "msg_date": "Thu, 17 Aug 2000 10:45:37 +0200", "msg_from": "Jerome Raupach <[email protected]>", "msg_from_op": true, "msg_subject": "optimization in C" } ]
[ { "msg_contents": "Hi,\n\nI've installed PostgreSQL 7.0.2 on Solaris following the INSTALL file\nthat comes with the source.\n\nWhen I do:\n\nmake runtest\n\nit gives out the following error message in regress.out:\n\npostmaster must already be running for the regression tests to succeed.\nThe time zone is set to PST8PDT for these tests by the client frontend.\nPlease report any apparent problems to [email protected]\nSee regress/README for more information.\n\n=============== dropping old regression database... =================\nDROP DATABASE\n=============== creating new regression database... =================\nCREATE DATABASE\n=============== installing languages... =================\ninstalling PL/pgSQL .. createlang: missing required argument PGLIB\ndirectory\n(This is the directory where the interpreter for the procedural\nlanguage is stored. Traditionally, these are installed in whatever\n'lib' directory was specified at configure time.)\nfailed\n\n\nAlso, when I do:\n\nmake runcheck\n\nthe following message is in the postmaster.log\n\nIpcSemaphoreCreate: semget failed (No space left on device)\nkey=65432015, num=16, permission=600\nThis type of error is usually caused by an improper\nshared memory or System V IPC semaphore configuration.\nFor more information, see the FAQ and platform-specific\nFAQ's in the source directory pgsql/doc or on our\nweb site at http://www.postgresql.org.\nFATAL 1: InitProcGlobal: IpcSemaphoreCreate failed\n", "msg_date": "Thu, 17 Aug 2000 17:00:49 +0800", "msg_from": "Paul Juliano <[email protected]>", "msg_from_op": true, "msg_subject": "Regression Tests" }, { "msg_contents": "On Thu, Aug 17, 2000 at 05:00:49PM +0800, Paul Juliano wrote:\n\n> installing PL/pgSQL .. 
createlang: missing required argument PGLIB\n> directory\n\nI don't know about this one.\n\n> Also, when I do:\n> \n> make runcheck\n> \n> the following message is in the postmaster.log\n> \n> IpcSemaphoreCreate: semget failed (No space left on device)\n> key=65432015, num=16, permission=600\n> This type of error is usually caused by an improper\n> shared memory or System V IPC semaphore configuration.\n> For more information, see the FAQ and platform-specific\n> FAQ's in the source directory pgsql/doc or on our\n> web site at http://www.postgresql.org.\n> FATAL 1: InitProcGlobal: IpcSemaphoreCreate failed\n\nSimplest solution: quit the other postmaster, then make runcheck. Otherwise\ndouble your IPC settings - depending on the OS this might involve building\na new kernel.\n\nCheers,\n\nPatrick\n", "msg_date": "Thu, 17 Aug 2000 11:46:56 +0100", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression Tests" }, { "msg_contents": "On Thu, 17 Aug 2000, Paul Juliano wrote:\n\n> =============== installing languages... =================\n> installing PL/pgSQL .. createlang: missing required argument PGLIB\n> directory\n\n It's easy: \n\n$ export PGLIB=/path/to/postgresql/lib\n$ make runtest\n\n\nBTW.:\n\n Hackers, why doesn't the regression test handle this 'export' itself, \nfrom the $libdir that is already defined in Makefile.global? \n\n Needs a patch? 
:-)\n\n\t\t\t\t\tKarel\n\n", "msg_date": "Thu, 17 Aug 2000 15:39:09 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression Tests" }, { "msg_contents": "\nOn Thu, 17 Aug 2000, Paul Juliano wrote:\n\n> Hi,\n> \n> I've installed PostgreSQL 7.0.2 on Solaris following the INSTALL file\n> that comes with the source.\n> \n> When I do:\n> \n> make runtest\n> \n> it gives out the following error message in regress.out:\n> \n> postmaster must already be running for the regression tests to succeed.\n> The time zone is set to PST8PDT for these tests by the client frontend.\n> Please report any apparent problems to [email protected]\n> See regress/README for more information.\n> \n> =============== dropping old regression database... =================\n> DROP DATABASE\n> =============== creating new regression database... =================\n> CREATE DATABASE\n> =============== installing languages... =================\n> installing PL/pgSQL .. createlang: missing required argument PGLIB\n> directory\n> (This is the directory where the interpreter for the procedural\n> language is stored. Traditionally, these are installed in whatever\n> 'lib' directory was specified at configure time.)\n> failed\nDo you have a PGLIB environment variable set?\nYou may need one so that createlang can find the procedural\nlanguage information. 
It's probably /usr/local/pgsql/lib\nif you didn't change the locations.\n\nI thought this information was in the INSTALL, but it's not there any\nmore in any case.\n\n> Also, when I do:\n> \n> make runcheck\n> \n> the following message is in the postmaster.log\n> \n> IpcSemaphoreCreate: semget failed (No space left on device)\n> key=65432015, num=16, permission=600\n> This type of error is usually caused by an improper\n> shared memory or System V IPC semaphore configuration.\n> For more information, see the FAQ and platform-specific\n> FAQ's in the source directory pgsql/doc or on our\n> web site at http://www.postgresql.org.\n> FATAL 1: InitProcGlobal: IpcSemaphoreCreate failed\n\nSounds like you don't have a large enough shared memory block\nconfigured on the machine. I don't know enough about solaris\nto help here, but I believe people have posted shared memory\nconfigurations on either -general or -hackers in the past.\nYou might be able to find more info in the archives.\n\n", "msg_date": "Thu, 17 Aug 2000 08:30:39 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression Tests" } ]
[ { "msg_contents": "\ntest=# CREATE TABLE rrr (id int);\nCREATE\ntest=# CREATE RULE rrr_r AS ON DELETE TO rrr \n\tDO INSTEAD SELECT 'Not Delete';\nCREATE\ntest=# INSERT INTO rrr VALUES (1);\nINSERT 161557 1\ntest=# INSERT INTO rrr VALUES (2);\nINSERT 161558 1\ntest=# DELETE FROM rrr;\n ?column?\n------------\n Not Delete\n(1 row)\n\n\nWell, all is right. I add 'WHERE OLD.id = 2' to rule definition \nand:\n\ntest=# DROP RULE rrr_r;\nDROP\ntest=# CREATE RULE rrr_r AS ON DELETE TO rrr WHERE OLD.id = 2 \n\tDO INSTEAD SELECT 'Not Delete';\nCREATE\ntest=# DELETE FROM rrr WHERE id = 2;\nDELETE 0\n#\n\n\nThe RULE works (nothing is deleted), but where is a output from SELECT?\n\nIt's in 7.1 and 6.5 too. Is it right? \n\n\n\t\t\t\t\tKarel\n\n", "msg_date": "Thu, 17 Aug 2000 13:17:50 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "right RULE?" } ]
[ { "msg_contents": "> \n> > > \tThe patch is attached, just adds a line to the resultmap file as\n> > > the geometry-solaris-precision.out file matched the \n> Linux/Alpha output.\n> > > Also, the geometry-cygwin-precision.out is an exact match to the\n> > > geometry-solaris-precision.out file if anyone is \n> interested in reducing\n> > > the number of geometry files. :)\n\nI have checked the results for geometry tests with diff and\ngeometry-cygwin-precision.out is the same as geometry-solaris-precision.out.\nSo one file can be removed and the line in resultmap can be changed.\n\n\t\tDan\n", "msg_date": "Thu, 17 Aug 2000 14:00:44 +0200", "msg_from": "=?iso-8859-1?Q?Hor=E1k_Daniel?= <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Linux/Alpha Regression Test Patch" } ]
[ { "msg_contents": "Hello,\n\ncan our autoconf guru create a test for checking the availability of AF_UNIX\nsockets? It could be defined in config.h as HAVE_UNIX_SOCKET or similar. It\nwill enable to use them in the newest cygwin where are this sockets\nimplemented.\n\n\t\tDan\n\n----------------------------------------------\nDaniel Horak\nnetwork and system administrator\ne-mail: [email protected]\nprivat e-mail: [email protected] ICQ:36448176\n----------------------------------------------\n", "msg_date": "Thu, 17 Aug 2000 14:06:52 +0200", "msg_from": "=?iso-8859-2?Q?Hor=E1k_Daniel?= <[email protected]>", "msg_from_op": true, "msg_subject": "autoconf check for AF_UNIX sockets" }, { "msg_contents": "Horák Daniel writes:\n\n> can our autoconf guru create a test for checking the availability of AF_UNIX\n> sockets? It could be defined in config.h as HAVE_UNIX_SOCKET or similar. It\n> will enable to use them in the newest cygwin where are this sockets\n> implemented.\n\nI'll check into it. (No presumptions about guru status made... :-) )\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 20 Aug 2000 00:54:17 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autoconf check for AF_UNIX sockets" }, { "msg_contents": "I wrote:\n\n> > can our autoconf guru create a test for checking the availability of AF_UNIX\n> > sockets? It could be defined in config.h as HAVE_UNIX_SOCKET or similar. It\n> > will enable to use them in the newest cygwin where are this sockets\n> > implemented.\n> \n> I'll check into it.\n\nA classical Autoconf test is impractical. First of all there's no reliable\ncompile-time evidence regarding these Unix sockets so we'd have to run a\nprogram from configure. 
That's already a semi-no-no because it will break\ncross-compilation and it also sounds a bit like a security concern.\nMoreover, it's still doubtful whether you could learn a lot this way,\nperhaps the user that runs configure cannot create these sockets or not\nwhere configure is trying to create it, etc.\n\nI have added a HAVE_UNIX_SOCKETS symbol into config.h.in that currently\nchecks !defined(__CYGWIN__) && !defined(__QNX__) in the accustomed manner.\nYou could extend it with specific Cygwin version checks.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 20 Aug 2000 13:59:14 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autoconf check for AF_UNIX sockets" }, { "msg_contents": "On Sun, 20 Aug 2000, Peter Eisentraut wrote:\n\n> I wrote:\n> \n> > > can our autoconf guru create a test for checking the availability of AF_UNIX\n> > > sockets? It could be defined in config.h as HAVE_UNIX_SOCKET or similar. It\n> > > will enable to use them in the newest cygwin where are this sockets\n> > > implemented.\n> > \n> > I'll check into it.\n> \n> A classical Autoconf test is impractical. First of all there's no reliable\n> compile-time evidence regarding these Unix sockets so we'd have to run a\n> program from configure. That's already a semi-no-no because it will break\n> cross-compilation and it also sounds a bit like a security concern.\n> Moreover, it's still doubtful whether you could learn a lot this way,\n> perhaps the user that runs configure cannot create these sockets or not\n> where configure is trying to create it, etc.\n\ncan't you just do a link test that checks that AF_UNIX is defined?\n\n\n", "msg_date": "Sun, 20 Aug 2000 11:21:38 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autoconf check for AF_UNIX sockets" } ]
[ { "msg_contents": "This is because I never updated the SSL support after I initially added it.\nSomebody later added async support, and in the process broke SSL.\nThe machines I have running with SSL still runs 6.5+SSL-patch, so I haven't\nhad the time to fix it (yet).\n\nI've said for a long time I hope to fix this soon, and haven't found the\ntime. BUt well, I still hope to fix it before 7.1 :-)\n\n//Magnus\n\n> -----Original Message-----\n> From: Mark Kirkwood [mailto:[email protected]]\n> Sent: den 17 augusti 2000 10:10\n> To: [email protected]\n> Subject: [HACKERS] Connections Implementing SSL in 7.1 Dev\n> \n> \n> Hi all,\n> \n> I downloaded the current ( 12 Aug ) development snapshot of the 7.1\n> development software.\n> \n> It runs great ! - except I am having trouble with the ssl enabled\n> connections...\n> \n> I used a certificate that I have generated via a mod_ssl \n> installation\n> to provide 2 files :\n> \n> server.key\n> server.crt\n> \n> in the $PGDATA directory. This enabled the postmaster to start ok.\n> \n> Unfortunatly I cannot connect to the server usng psql ( or I suspect\n> anything else ). I get an error like :\n> \"couldn't send SSL negotiation packet (not connected) \". This \n> appears to\n> be coming from fe-connect.c.\n> \n> Am I geting this because this fearture is not yet implemented, or am I\n> just being a plonker and not configured ssl properly...( I \n> wondered if\n> I needed a client certificate too...) ?\n> \n> Anyway\n> \n> Thanks for providing acecss at an interesting stage of your \n> development\n> \n> regards\n> \n> Mark\n> \n", "msg_date": "Thu, 17 Aug 2000 15:22:59 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Connections Implementing SSL in 7.1 Dev" } ]
[ { "msg_contents": "This solution isn't good when there are +10000 tuples in the table, it's\nslowly...\nanybody can help me ? :\n\n\n string = \"SELECT service, noeud, rubrique FROM table\" ;\n res = PQexec( conn, string.data() ) ;\n if ( (! res) || (status = PQresultStatus( res ) !=\nPGRES_TUPLES_OK) )\n {\n cerr << _ERROR << \"Problem SELECT ! \" << endl ;\n cerr << _ERROR << \"Error : \" << PQresStatus( status ) <<\nendl ;\n cerr << _ERROR << \"Error : \" << PQresultErrorMessage(\nres ) << endl ;\n PQclear( res ) ;\n }\n else\n {\n for (int m=0; m < PQntuples( res ); m++)\n {\n service = PQgetvalue( resultat1, m, 0 ) ;\n noeud = PQgetvalue( resultat1, m, 1 ) ;\n rubrique = PQgetvalue( resultat1, m, 2 ) ;\n\n commande = \"SELECT SUM(date) FROM table WHERE\nservice='\" + service +\n\"' AND noeud='\" + noeud + \"' AND rubrique='\"+ rubrique + \"'\" ;\n res1 = PQexec( conn, string.data() ) ;\n if ( (! res1) || (status = PQresultStatus( res1\n) != PGRES_TUPLES_OK)\n)\n {\n cerr << _ERROR << \"Problem SUM ! \" <<\nendl ;\n cerr << _ERROR << \"Error : \" <<\nPQresStatus( status ) << endl ;\n cerr << _ERROR << \"Error : \" <<\nPQresultErrorMessage( res1 ) << endl\n;\n PQclear( res1 ) ;\n }\n else\n {\n cout << _TRACE << \"SUM ok.\" << endl ;\n PQclear( res1 ) ;\n }\n }\n PQclear( res ) ;\n }\n\nThanks. 
jerome.\n", "msg_date": "Thu, 17 Aug 2000 17:26:19 +0200", "msg_from": "Jerome Raupach <[email protected]>", "msg_from_op": true, "msg_subject": "Optimization in C" }, { "msg_contents": "Is the thing you're trying to do really different from\nSELECT service, noeud, rubrique, sum(date) FROM table\ngroup by service, noeud, rubrique, assuming table is the \nsame in both queries of course.\n\nAlso, since you aren't distincting the outside query, wouldn't\nyou be doing the same sequence of service, noeud and rubrique\nmore than once in the inner loop if it had more than one date \n(if it's only got one, why bother summing?)\n\nStephan Szabo\[email protected]\n\nOn Thu, 17 Aug 2000, Jerome Raupach wrote:\n\n> This solution isn't good when there are +10000 tuples in the table, it's\n> slowly...\n> anybody can help me ? :\n> \n> \n> string = \"SELECT service, noeud, rubrique FROM table\" ;\n> res = PQexec( conn, string.data() ) ;\n> if ( (! res) || (status = PQresultStatus( res ) !=\n> PGRES_TUPLES_OK) )\n> {\n> cerr << _ERROR << \"Problem SELECT ! \" << endl ;\n> cerr << _ERROR << \"Error : \" << PQresStatus( status ) <<\n> endl ;\n> cerr << _ERROR << \"Error : \" << PQresultErrorMessage(\n> res ) << endl ;\n> PQclear( res ) ;\n> }\n> else\n> {\n> for (int m=0; m < PQntuples( res ); m++)\n> {\n> service = PQgetvalue( resultat1, m, 0 ) ;\n> noeud = PQgetvalue( resultat1, m, 1 ) ;\n> rubrique = PQgetvalue( resultat1, m, 2 ) ;\n> \n> commande = \"SELECT SUM(date) FROM table WHERE\n> service='\" + service +\n> \"' AND noeud='\" + noeud + \"' AND rubrique='\"+ rubrique + \"'\" ;\n> res1 = PQexec( conn, string.data() ) ;\n> if ( (! res1) || (status = PQresultStatus( res1\n> ) != PGRES_TUPLES_OK)\n> )\n> {\n> cerr << _ERROR << \"Problem SUM ! 
\" <<\n> endl ;\n> cerr << _ERROR << \"Error : \" <<\n> PQresStatus( status ) << endl ;\n> cerr << _ERROR << \"Error : \" <<\n> PQresultErrorMessage( res1 ) << endl\n> ;\n> PQclear( res1 ) ;\n> }\n> else\n> {\n> cout << _TRACE << \"SUM ok.\" << endl ;\n> PQclear( res1 ) ;\n> }\n> }\n> PQclear( res ) ;\n> }\n> \n> Thanks. jerome.\n> \n\n\n", "msg_date": "Thu, 17 Aug 2000 08:58:11 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimization in C" } ]
[ { "msg_contents": "Hi Postgresql Developers!\n\nA few weeks ago I posted a message on the pgsql-general list asking about a\npossible port of Postgresql to OpenVMS. \n\nBruce Momjian (who has a lot of VMS experience) wrote me back explaining\nthe difficulties involved in such a port.\n\nWell, since Bruce's note, I have been poking aroung in the VMS community\nand lo and behold, one of my VMS colleagues (David Mathog at Caltech) had\nactually begun to look at this in conjunction with my efforts.\n\nI had an email exchange with David today and I have attached it below. I\nhope the Postgresql team may have an opportunity to look it over and\n(perhaps) consider revisiting the possibility of a \"VMS port\" of Postgresql\nor at least maybe a \"VMS team\".\n\nHere is a condensed synopsis of my emails with Dave.\n\nBest Regards (and THANKS FOR A DYNAMITE DATABASE!!!),\n\nJim\n....................................................... \n I've been looking for an affordable database system for our OpenVMS system,\n so far, with no luck. In that quest I've examined some of the freeware\n database systems. I couldn't make heads or tails of MySql or Interbase\n (especially their build procedures). But a recent slashdot thread on the\n excellent performance of Postgresql convinced me to give it a try.\n \n I only allowed myself one workday for this - if it didn't all fall\n substantially into place by then I wasn't going to pursue it. And it didn't\n get far enough for me to go on within that time limit but somebody else\n may want to pick up the pieces (which will at least save them from having \n to start totally from scratch). The distribution is here: \n \n ftp://seqaxp.bio.caltech.edu/software/pgsql_vms_partial_port.zip\n \n (It's big - 16872 blocks!) Look at aaa_vms_port_notes.txt in the top\n directory, and then to do the partial build do \n \n $ set def [.src]\n $ set ver\n $ @make_vms\n \n The partial port is based on an edited log of a build on Linux/Intel. 
It is\n pretty depressingly for those of us who hope that OpenVMS has a future to\n watch a large package like this build without a hitch on Linux/Intel, and\n to compile at roughly 100x the rate that is obtained on OpenVMS (due\n entirely to the file IO wrapped up in the 100 or so #include operations\n typical for each module - and the current lack of an effective file caching\n system on OpenVMS.) Anyway, the Linux build supplied enough information to\n put together most (not all) of an OpenVMS build procedure. \n \n The good news is that most of the code seems to be pretty well written - \n for Unix code. By that I mean that while it doesn't conform to any one\n C language standard, the vast majority of it compiled with some combination\n of defined language standards, and the rest could be built with\n /standard=relaxed so long as they didn't also require some Unix API not\n present on VMS. (Just don't expect /warn=(enable=all) to be silent, even\n though it flies through gcc -Wall on linux quietly.) In short, something\n like 95% of it could be compiled cleanly \"out of the box\", and most of the \n rest of that 5% were routines that need to be replaced anyway. That's a \n lot better than most packages I see. (See aaa_vms_port_notes.txt\n for some of the potential bugs I saw - none of which were resolved.)\n \n The bad news is that the entire IPC section needs to be rewritten to use\n native VMS APIs - but that does also open up the opportunity to make it\n more \"cluster\" aware. (Thanks to Dan O'Reilly for identifying the \"UNIX\n domain sockets\" pieces, which I'd never seen before.) Those few of you \n who know your way around the innards of multithreaded OpenVMS web servers\n may be able to make quick work of the rest of the port - the issues left\n unresolved by my work are those already resolved in such servers.\n\nAfter looking at the Postgresql code it looks like a review their code\n for missing ifdefs, especially HAVE_SYS_PARAM_H may help. 
Those sorts of\nthings are set up by \"configure\" but I found numerous instances where the\ncode did not apply the appropriate #ifdef and I had to put it in. I think\nit also would be good have a look at my port notes as the VMS compiler\nflagged a bunch of things that were either outright bugs or were done in a\nnonportable manner. \n\nFor instance \n \n if(foo == -1) \n \n when foo (at least on OpenVMS) is an unsigned int.\n \n The build procedure for this is the poster child for the speed limitations \n currently built into VMS systems. It took forever to chunk through this \n pile of code on my DS10, and something like a minute to do it on RH 5.2/\n Intel on a 400 Mhz PII. (But if you do some line like\n \n $ cc.... source.c\n \n and\n \n $ cc..../preproc source.c\n $ cc.... source.i\n \n you'll find that the compilation of the .i file is about 6 times faster than \n that of the .c, ie, it's all file access time grinding through the includes.)\n \n \n Anyway, good luck to whoever wants to have a shot at this next.\n \n\n\n \n\n ,-,-. ,-,-. ,-,-. ,-,-. ,-\n/ / \\ \\ / / \\ \\ / / \\ \\ / / \\ \\ / /\n \\ \\ / / \\ \\ / / \\ \\ / / \\ \\ / / \n `-'-' `-'-' `-'-' `-'-' \n--------------------------------------------------------\nFSC - Building Better Information Technology Solutions-\n From the Production Floor to the Customer's Door.\n--------------------------------------------------------\n\nJim Jennis, Technical Director, Commercial Systems\nFuentez Systems Concepts, Inc.\n1 Discovery Place, Suite 2\nMartinsburg, WV. 
25401 USA.\n\nPhone: +001 (304) 263-0163 ext 235\nFAX: +001 (304) 263-0702\n\nEmail: [email protected]\n [email protected]\nWeb: http://www.discovery.fuentez.com/\n--------------------------------------------------- \n\n", "msg_date": "Thu, 17 Aug 2000 18:05:48 -0400", "msg_from": "Jim Jennis <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres for OpenVMS" }, { "msg_contents": "This is a very nice summary of OpenVMS work for PostgreSQL. That IPC\nstuff can be very difficult, so I imagine it would take an experienced\nVMS person to get that done.\n\n\n> Hi Postgresql Developers!\n> \n> A few weeks ago I posted a message on the pgsql-general list asking about a\n> possible port of Postgresql to OpenVMS. \n> \n> Bruce Momjian (who has a lot of VMS experience) wrote me back explaining\n> the difficulties involved in such a port.\n> \n> Well, since Bruce's note, I have been poking aroung in the VMS community\n> and lo and behold, one of my VMS colleagues (David Mathog at Caltech) had\n> actually begun to look at this in conjunction with my efforts.\n> \n> I had an email exchange with David today and I have attached it below. I\n> hope the Postgresql team may have an opportunity to look it over and\n> (perhaps) consider revisiting the possibility of a \"VMS port\" of Postgresql\n> or at least maybe a \"VMS team\".\n> \n> Here is a condensed synopsis of my emails with Dave.\n> \n> Best Regards (and THANKS FOR A DYNAMITE DATABASE!!!),\n> \n> Jim\n> ....................................................... \n> I've been looking for an affordable database system for our OpenVMS system,\n> so far, with no luck. In that quest I've examined some of the freeware\n> database systems. I couldn't make heads or tails of MySql or Interbase\n> (especially their build procedures). 
But a recent slashdot thread on the\n> excellent performance of Postgresql convinced me to give it a try.\n> \n> I only allowed myself one workday for this - if it didn't all fall\n> substantially into place by then I wasn't going to pursue it. And it didn't\n> get far enough for me to go on within that time limit but somebody else\n> may want to pick up the pieces (which will at least save them from having \n> to start totally from scratch). The distribution is here: \n> \n> ftp://seqaxp.bio.caltech.edu/software/pgsql_vms_partial_port.zip\n> \n> (It's big - 16872 blocks!) Look at aaa_vms_port_notes.txt in the top\n> directory, and then to do the partial build do \n> \n> $ set def [.src]\n> $ set ver\n> $ @make_vms\n> \n> The partial port is based on an edited log of a build on Linux/Intel. It is\n> pretty depressingly for those of us who hope that OpenVMS has a future to\n> watch a large package like this build without a hitch on Linux/Intel, and\n> to compile at roughly 100x the rate that is obtained on OpenVMS (due\n> entirely to the file IO wrapped up in the 100 or so #include operations\n> typical for each module - and the current lack of an effective file caching\n> system on OpenVMS.) Anyway, the Linux build supplied enough information to\n> put together most (not all) of an OpenVMS build procedure. \n> \n> The good news is that most of the code seems to be pretty well written - \n> for Unix code. By that I mean that while it doesn't conform to any one\n> C language standard, the vast majority of it compiled with some combination\n> of defined language standards, and the rest could be built with\n> /standard=relaxed so long as they didn't also require some Unix API not\n> present on VMS. (Just don't expect /warn=(enable=all) to be silent, even\n> though it flies through gcc -Wall on linux quietly.) 
In short, something\n> like 95% of it could be compiled cleanly \"out of the box\", and most of the \n> rest of that 5% were routines that need to be replaced anyway. That's a \n> lot better than most packages I see. (See aaa_vms_port_notes.txt\n> for some of the potential bugs I saw - none of which were resolved.)\n> \n> The bad news is that the entire IPC section needs to be rewritten to use\n> native VMS APIs - but that does also open up the opportunity to make it\n> more \"cluster\" aware. (Thanks to Dan O'Reilly for identifying the \"UNIX\n> domain sockets\" pieces, which I'd never seen before.) Those few of you \n> who know your way around the innards of multithreaded OpenVMS web servers\n> may be able to make quick work of the rest of the port - the issues left\n> unresolved by my work are those already resolved in such servers.\n> \n> After looking at the Postgresql code it looks like a review their code\n> for missing ifdefs, especially HAVE_SYS_PARAM_H may help. Those sorts of\n> things are set up by \"configure\" but I found numerous instances where the\n> code did not apply the appropriate #ifdef and I had to put it in. I think\n> it also would be good have a look at my port notes as the VMS compiler\n> flagged a bunch of things that were either outright bugs or were done in a\n> nonportable manner. \n> \n> For instance \n> \n> if(foo == -1) \n> \n> when foo (at least on OpenVMS) is an unsigned int.\n> \n> The build procedure for this is the poster child for the speed limitations \n> currently built into VMS systems. It took forever to chunk through this \n> pile of code on my DS10, and something like a minute to do it on RH 5.2/\n> Intel on a 400 Mhz PII. (But if you do some line like\n> \n> $ cc.... source.c\n> \n> and\n> \n> $ cc..../preproc source.c\n> $ cc.... 
source.i\n> \n> you'll find that the compilation of the .i file is about 6 times faster than \n> that of the .c, ie, it's all file access time grinding through the includes.)\n> \n> \n> Anyway, good luck to whoever wants to have a shot at this next.\n> \n> \n> \n> \n> \n> ,-,-. ,-,-. ,-,-. ,-,-. ,-\n> / / \\ \\ / / \\ \\ / / \\ \\ / / \\ \\ / /\n> \\ \\ / / \\ \\ / / \\ \\ / / \\ \\ / / \n> `-'-' `-'-' `-'-' `-'-' \n> --------------------------------------------------------\n> FSC - Building Better Information Technology Solutions-\n> From the Production Floor to the Customer's Door.\n> --------------------------------------------------------\n> \n> Jim Jennis, Technical Director, Commercial Systems\n> Fuentez Systems Concepts, Inc.\n> 1 Discovery Place, Suite 2\n> Martinsburg, WV. 25401 USA.\n> \n> Phone: +001 (304) 263-0163 ext 235\n> FAX: +001 (304) 263-0702\n> \n> Email: [email protected]\n> [email protected]\n> Web: http://www.discovery.fuentez.com/\n> --------------------------------------------------- \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 12 Oct 2000 14:43:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres for OpenVMS" } ]
[ { "msg_contents": "Here's two ideas I had for optimizing vacuum, I apologize in advance\nif the ideas presented here are niave and don't take into account\nthe actual code that makes up postgresql.\n\n================\n\n#1\n\nReducing the time vacuum must hold an exlusive lock on a table:\n\nThe idea is that since rows are marked deleted it's ok for the\nvacuum to fill them with data from the tail of the table as\nlong as no transaction is in progress that has started before\nthe row was deleted.\n\nThis may allow the vacuum process to copyback all the data without\na lock, when all the copying is done it then aquires an exlusive lock\nand does this:\n\nAquire an exclusive lock.\nWalk all the deleted data marking it as current.\nTruncate the table.\nRelease the lock.\n\nSince the data is still marked invalid (right?) even if valid data\nis copied into the space it should be ignored as long as there's no\ntransaction occurring that started before the data was invalidated.\n\n================\n\n#2\n\nReducing the amount of scanning a vaccum must do:\n\nIt would make sense that if a value of the earliest deleted chunk\nwas kept in a table then vacuum would not have to scan the entire\ntable in order to work, it would only need to start at the 'earliest'\ninvalidated row.\n\nThe utility of this (at least for us) is that we have several tables\nthat will grow to hundreds of megabytes, however changes will only\nhappen at the tail end (recently added rows). If we could reduce the\namount of time spent in a vacuum state it would help us a lot.\n\n================\n\nI'm wondering if these ideas make sense and may help at all.\n\nthanks,\n--\n-Alfred Perlstein - [[email protected]|[email protected]]\n", "msg_date": "Thu, 17 Aug 2000 17:01:18 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "VACUUM optimization ideas." 
}, { "msg_contents": "Alfred Perlstein wrote:\n> #1\n> \n> Reducing the time vacuum must hold an exlusive lock on a table:\n> \n> The idea is that since rows are marked deleted it's ok for the\n> vacuum to fill them with data from the tail of the table as\n> long as no transaction is in progress that has started before\n> the row was deleted.\n> \n> This may allow the vacuum process to copyback all the data without\n> a lock, when all the copying is done it then aquires an exlusive lock\n> and does this:\n> \n> Aquire an exclusive lock.\n> Walk all the deleted data marking it as current.\n> Truncate the table.\n> Release the lock.\n> \n> Since the data is still marked invalid (right?) even if valid data\n> is copied into the space it should be ignored as long as there's no\n> transaction occurring that started before the data was invalidated.\n\nYes, but nothing prevents newer transactions from modifying the _origin_ side of\nthe copied data _after_ it was copied, but before the Lock-Walk-Truncate-Unlock\ncycle takes place, and so it seems unsafe. Maybe locking each record before\ncopying it up ...\n\nRegards,\nHaroldo.\n\n-- \n----------------------+------------------------\n Haroldo Stenger | [email protected]\n Montevideo, Uruguay. | [email protected]\n----------------------+------------------------\n Visit UYLUG Web Site: http://www.linux.org.uy\n-----------------------------------------------\n", "msg_date": "Fri, 18 Aug 2000 02:18:49 -0300", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: VACUUM optimization ideas." }, { "msg_contents": "Alfred Perlstein wrote:\n\n> The idea is that since rows are marked deleted it's ok for the\n> vacuum to fill them with data from the tail of the table as\n> long as no transaction is in progress that has started before\n> the row was deleted.\n\nWell, isn't one of the advantages of vacuuming in the reordering it\ndoes? 
With a \"fill deleted chunks\" logic, we'd have far less order in\nthe databases. \n\n> This may allow the vacuum process to copyback all the data without\n> a lock, \n\nNope. Another process might update the values in between move and mark,\nif the record is not locked. We'd either have to write-lock the entire\ntable for that period, write lock every item as it is moved, or lock,\nmove and mark on a per-record base. The latter would be slow, but it\ncould be done in a permanent low priority background process, utilizing\nempty CPU cycles. Besides, it probably could not only be done simply\nfilling from the tail, but also moving up the records in a sorted\nfashion. \n\n> #2\n> \n> Reducing the amount of scanning a vaccum must do:\n> \n> It would make sense that if a value of the earliest deleted chunk\n> was kept in a table then vacuum would not have to scan the entire\n> table in order to work, it would only need to start at the 'earliest'\n> invalidated row.\n\nTrivial to do. But of course #1 may imply that the physical ordering is\neven less likely to be related to the logical ordering in a way where\nthis helps. \n\n> The utility of this (at least for us) is that we have several tables\n> that will grow to hundreds of megabytes, however changes will only\n> happen at the tail end (recently added rows). \n\nThe tail is a relative position - except for the case where you add\ntemporary records to a constant default set, everything in the tail will\nmove, at least relatively, to the head after some time.\n\n> If we could reduce the\n> amount of time spent in a vacuum state it would help us a lot.\n\nRather: If we can reduce the time spent in a locked state while\nvacuuming, it would help a lot. Being in a vacuum is not the issue -\neven permanent vacuuming need not be an issue, if the locks it uses are\nsuitably short-time. 
\n\nSevo\n\n-- \[email protected]\n", "msg_date": "Fri, 18 Aug 2000 15:25:12 +0200", "msg_from": "Sevo Stille <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM optimization ideas." }, { "msg_contents": "\"Alfred Perlstein\" <[email protected]>\n> Here's two ideas I had for optimizing vacuum, I apologize in advance\n> if the ideas presented here are niave and don't take into account\n> the actual code that makes up postgresql.\n\n * * *\n\nThis is the fist time I have dared to file in the exalted realm of\n[HACKERS]. On the other hand I wrote a memo to Bill Gates a couple of years\nago which apparently resulted in C#, which is really worth a little bit of\nattention, given the number of VB writers out there. I'm not quite as stupid\nas I look.\n\nWhy doesn't `vacuum' happen all the time, instantly?\n\nLike, does everybody feel psychologically more secure if a \"commit\" is not\nreally a commit, it's there for some Emergency Refind to find?\n\n(If there are olde hardware reasons, or software -- \"Well, uh, back at\nBBN...\" -- type reasons, I'd be happy to hear them.)\n\nScrew it. \"Is that your final answer?\" is your final answer. Commit and\nrebuild; optimize memory use all the time in the spare milliseconds; no\nhuman is needed to make obvious calls.\n\n -dlj.\n\n\n\n\n\n\n\n", "msg_date": "Sat, 19 Aug 2000 08:05:32 -0400", "msg_from": "\"David Lloyd-Jones\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM optimization ideas." 
}, { "msg_contents": "> Alfred Perlstein wrote:\n> > #1\n> > \n> > Reducing the time vacuum must hold an exlusive lock on a table:\n> > \n> > The idea is that since rows are marked deleted it's ok for the\n> > vacuum to fill them with data from the tail of the table as\n> > long as no transaction is in progress that has started before\n> > the row was deleted.\n> > \n> > This may allow the vacuum process to copyback all the data without\n> > a lock, when all the copying is done it then aquires an exlusive lock\n> > and does this:\n> > \n> > Aquire an exclusive lock.\n> > Walk all the deleted data marking it as current.\n> > Truncate the table.\n> > Release the lock.\n> > \n> > Since the data is still marked invalid (right?) even if valid data\n> > is copied into the space it should be ignored as long as there's no\n> > transaction occurring that started before the data was invalidated.\n> \n> Yes, but nothing prevents newer transactions from modifying the _origin_ side of\n> the copied data _after_ it was copied, but before the Lock-Walk-Truncate-Unlock\n> cycle takes place, and so it seems unsafe. Maybe locking each record before\n> copying it up ...\n\nSeems a read-lock would be necessary during the moving, but still a win.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 12 Oct 2000 14:56:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM optimization ideas." 
}, { "msg_contents": "> #2\n> \n> Reducing the amount of scanning a vaccum must do:\n> \n> It would make sense that if a value of the earliest deleted chunk\n> was kept in a table then vacuum would not have to scan the entire\n> table in order to work, it would only need to start at the 'earliest'\n> invalidated row.\n> \n> The utility of this (at least for us) is that we have several tables\n> that will grow to hundreds of megabytes, however changes will only\n> happen at the tail end (recently added rows). If we could reduce the\n> amount of time spent in a vacuum state it would help us a lot.\n\nBut you have to update that every time a row is modified. Seems a\nsequential scan by vacuum is fast enough.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 12 Oct 2000 14:57:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM optimization ideas." }, { "msg_contents": "> Here's two ideas I had for optimizing vacuum, I apologize in advance\n> if the ideas presented here are niave and don't take into account\n> the actual code that makes up postgresql.\n> \n> ================\n> \n> #1\n> \n> Reducing the time vacuum must hold an exlusive lock on a table:\n> \n> The idea is that since rows are marked deleted it's ok for the\n> vacuum to fill them with data from the tail of the table as\n> long as no transaction is in progress that has started before\n> the row was deleted.\n> \n> This may allow the vacuum process to copyback all the data without\n> a lock, when all the copying is done it then aquires an exlusive lock\n> and does this:\n> \n> Aquire an exclusive lock.\n> Walk all the deleted data marking it as current.\n> Truncate the table.\n> Release the lock.\n> \n> Since the data is still marked invalid (right?) 
even if valid data\n> is copied into the space it should be ignored as long as there's no\n> transaction occurring that started before the data was invalidated.\n\nAdded to TODO:\n\n* Reduce VACUUM lock time by moving tuples with read lock, then write \n lock and truncate table [vacuum] \n\nThe read-lock is required because other transactions must be prevented\nfrom modifying the rows, and the index is also an issue.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 12 Oct 2000 14:58:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM optimization ideas." } ]
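The copy-back scheme proposed in this thread (fill dead slots with live tuples from the tail, then truncate) can be sketched in miniature. This is a hypothetical model of the idea only, not PostgreSQL's vacuum code; the list-of-slots representation and the `compact` helper are invented for illustration:

```python
# Minimal model of the proposed copy-back vacuum. A "table" is a list of
# slots; None marks a dead (reclaimable) slot. Live tuples from the tail
# are copied into the earliest dead slots, after which the empty tail can
# be truncated in one short, lock-holding step.

def compact(table):
    """Move tail tuples into dead slots, then truncate the dead tail."""
    dead_slots = [i for i, slot in enumerate(table) if slot is None]
    for hole in dead_slots:
        # Drop any dead slots already at the tail before moving a tuple.
        while table and table[-1] is None:
            table.pop()
        if len(table) - 1 <= hole:
            break                      # nothing live remains past the hole
        table[hole] = table.pop()      # copy the tail tuple into the hole
    while table and table[-1] is None: # truncate a trailing dead region
        table.pop()
    return table

print(compact(['a', None, 'b', None, 'c', 'd']))   # ['a', 'd', 'b', 'c']
```

The short exclusive-lock phase the posters discuss corresponds to the final truncate step; everything before it touches only slots already marked dead.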
[ { "msg_contents": "Hi,\n\nI'm having a bit of trouble with the pg_attribute table growing larger\nand larger and larger. Actually that's now the real problem, it's \nthe indexes that are the real problem. I run a site that get's a fair\namount of traffic and we use temporary table extensively for some more\ncomplex queries (because by breaking down the queries into steps, we can get \nbetter performance than by letting postgres plan the query poorly) I \nassume that creating a temporary table and then dropping it will cause \nthe pg_attribute table to grow because our pg_attribute grows by about 15MB \nper day and if it isn't vacuumed nightly the system slows down very \nquickly. After \"vacuum analyze pg_attribute\", the pg_attribute table is \nback to it's normal small size. However, the two indexes on pg_attribute do \nnot shrink at all. The only way I've found to get around this is to \ndump, drop, create, reload the database. I don't really want to trust \nthat to a script and I don't really like having the system down that much.\n\n\nMy questions are:\n\n\t1) is this problem being worked on?\n\t2) are there any better work arounds that what I'm doing?\n\t3) if this problem isn't being worked on, is it too complex\n\t for a non-experienced postgres coder to tackle?\n\n\t4) if answers to #3 are no & no, any advice on where to start?\n\n\nSystem info\n\tpsql: 7.0.2\n\tPIII 400, Linux 6.2, 512MB memory, etc, etc...\n\n-- \nThe world's most ambitious and comprehensive PC game database project.\n\n http://www.mobygames.com\n", "msg_date": "Fri, 18 Aug 2000 01:03:48 -0500", "msg_from": "Brian Hirt <[email protected]>", "msg_from_op": true, "msg_subject": "pg_attribute growing and growing and growing" }, { "msg_contents": "> -----Original Message-----\n> From: Brian Hirt\n>\n> Hi,\n>\n> I'm having a bit of trouble with the pg_attribute table growing larger\n> and larger and larger. Actually that's now the real problem, it's\n> the indexes that are the real problem. 
I run a site that get's a fair\n> amount of traffic and we use temporary table extensively for some more\n> complex queries (because by breaking down the queries into steps,\n> we can get\n> better performance than by letting postgres plan the query poorly) I\n> assume that creating a temporary table and then dropping it will cause\n> the pg_attribute table to grow because our pg_attribute grows by\n> about 15MB\n> per day and if it isn't vacuumed nightly the system slows down very\n> quickly. After \"vacuum analyze pg_attribute\", the pg_attribute table is\n> back to it's normal small size. However, the two indexes on\n> pg_attribute do\n> not shrink at all. The only way I've found to get around this is to\n> dump, drop, create, reload the database. I don't really want to trust\n> that to a script and I don't really like having the system down that much.\n>\n\nIf you could stop postmaster,you could reacreate indexes\nof pg_attribute as follows.\n\n1) shutdown postmaster(using pg_ctl stop etc).\n2) backup the index files of pg_attributes somewhere for safety.\n3) invoke standalone postgres\n\tpostgres -P -O your_database_name\n4) recreate indexes of pg_attribute\n\treindex table pg_attribute force;\n5) exit standalone postgres\n6) restart postmaster\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Fri, 18 Aug 2000 17:12:29 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: pg_attribute growing and growing and growing" }, { "msg_contents": "Brian Hirt <[email protected]> writes:\n> I run a site that get's a fair amount of traffic and we use temporary\n> table extensively for some more complex queries (because by breaking\n> down the queries into steps, we can get better performance than by\n> letting postgres plan the query poorly) I assume that creating a\n> temporary table and then dropping it will cause the pg_attribute table\n> to grow because our pg_attribute grows by about 15MB per day and if 
it\n> isn't vacuumed nightly the system slows down very quickly. After\n> \"vacuum analyze pg_attribute\", the pg_attribute table is back to it's\n> normal small size. However, the two indexes on pg_attribute do not\n> shrink at all.\n\nIndexes in general are not shrunk by vacuum. The only clean solution\nI see for this is to convert vacuum to do the \"drop/rebuild index\"\nbusiness internally --- but AFAICS we can't do that safely without some\nsort of file versioning solution. See past threads in pghackers.\n\nPossibly a better short-term attack is to eliminate the need for so\nmany temp tables. What's your gripe about bad planning, exactly?\n\nAnother possibility, which just screams HACK but might fix your problem,\nis to swap the order of the columns in the two indexes on pg_attribute:\n\nfoo=# \\d pg_attribute_relid_attnam_index\nIndex \"pg_attribute_relid_attnam_index\"\n Attribute | Type\n-----------+------\n attrelid | oid\n attname | name\nunique btree\n\nfoo=# \\d pg_attribute_relid_attnum_index\nIndex \"pg_attribute_relid_attnum_index\"\n Attribute | Type\n-----------+----------\n attrelid | oid\n attnum | smallint\nunique btree\n\nSince table OIDs keep increasing, this formulation ensures that new\nentries will always sort to the end of the index, and so space freed\ninternally in the indexes can never get re-used. Swapping the column\norder may eliminate that problem --- but I'm not sure what if any\nspeed penalty would be incurred. Thoughts anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 Aug 2000 01:22:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_attribute growing and growing and growing " } ]
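Tom Lane's observation about monotonically increasing OIDs can be illustrated with a toy page model. This assumes a simplified fixed-capacity page layout invented for the example; nothing here reflects the real btree code:

```python
# Toy model of Tom Lane's point: if keys (table OIDs) only ever increase,
# every new entry lands on the rightmost index page, so pages emptied by
# deletions in the middle are never refilled and the index only grows.

PAGE_SIZE = 4

def insert_sorted(pages, key):
    """Insert into the page whose range covers `key`, splitting on overflow."""
    for i, page in enumerate(pages):
        if i == len(pages) - 1 or key <= page[-1]:
            page.append(key)
            page.sort()
            if len(page) > PAGE_SIZE:            # split the overflowing page
                mid = len(page) // 2
                pages[i:i + 1] = [page[:mid], page[mid:]]
            return

pages = [[]]
for oid in range(1, 13):                         # ever-increasing OIDs
    insert_sorted(pages, oid)
for page in pages:
    page[:] = [k for k in page if k > 8]         # "drop" the old temp tables
print(pages)   # early pages are empty but still occupy space
```

Because every new key is larger than all existing keys, splits only ever happen on the rightmost page, and the emptied left-hand pages stay dead weight until the index is rebuilt, which matches the REINDEX workaround suggested above.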
[ { "msg_contents": "This solution isn't good when there are +10000 tuples in the table, it's\nslowly...\nanybody can help me ? :\n\n\n string = \"SELECT service, noeud, rubrique FROM table\" ;\n res = PQexec( conn, string.data() ) ;\n if ( (! res) || (status = PQresultStatus( res ) !=\nPGRES_TUPLES_OK) )\n {\n cerr << _ERROR << \"Problem SELECT ! \" << endl ;\n cerr << _ERROR << \"Error : \" << PQresStatus( status ) <<\nendl ;\n cerr << _ERROR << \"Error : \" << PQresultErrorMessage(\nres ) << endl ;\n PQclear( res ) ;\n }\n else\n {\n for (int m=0; m < PQntuples( res ); m++)\n {\n service = PQgetvalue( resultat1, m, 0 ) ;\n noeud = PQgetvalue( resultat1, m, 1 ) ;\n rubrique = PQgetvalue( resultat1, m, 2 ) ;\n\n commande = \"SELECT SUM(date) FROM table WHERE\nservice='\" + service +\n\"' AND noeud='\" + noeud + \"' AND rubrique='\"+ rubrique + \"'\" ;\n res1 = PQexec( conn, string.data() ) ;\n if ( (! res1) || (status = PQresultStatus( res1\n) != PGRES_TUPLES_OK)\n)\n {\n cerr << _ERROR << \"Problem SUM ! \" <<\nendl ;\n cerr << _ERROR << \"Error : \" <<\nPQresStatus( status ) << endl ;\n cerr << _ERROR << \"Error : \" <<\nPQresultErrorMessage( res1 ) << endl\n;\n PQclear( res1 ) ;\n }\n else\n {\n cout << _TRACE << \"SUM ok.\" << endl ;\n PQclear( res1 ) ;\n }\n }\n PQclear( res ) ;\n }\n\nThanks. jerome.", "msg_date": "Fri, 18 Aug 2000 10:14:07 +0200", "msg_from": "Jerome Raupach <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: Optimization in C]" }, { "msg_contents": "> This solution isn't good when there are +10000 tuples in the table, it's\n> slowly... anybody can help me ? :\n\nSomeone already responded, and asked some questions about what you are\nreally trying to do. 
If you didn't get the message, let us know or check\nthe mail archives.\n\nRegards.\n\n - Thomas\n", "msg_date": "Fri, 18 Aug 2000 13:44:34 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: Optimization in C]" } ]
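The slow loop in this thread issues one SUM() query per result row, a classic N+1 pattern; a single GROUP BY query computes every sum in one pass. Here is a sketch using Python's stdlib sqlite3 driver as a stand-in (the table layout and the `duree` column are invented for the example; the original code sums a column named `date`):

```python
# The loop in the thread runs one SUM() query per result row (an N+1
# pattern); a single GROUP BY computes every sum in one pass. Sketch
# with the stdlib sqlite3 driver; table layout and the `duree` column
# are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (service TEXT, noeud TEXT, rubrique TEXT, duree INT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?)",
                 [("s1", "n1", "r1", 10), ("s1", "n1", "r1", 5),
                  ("s2", "n1", "r2", 7)])

rows = conn.execute(
    "SELECT service, noeud, rubrique, SUM(duree)"
    " FROM t GROUP BY service, noeud, rubrique").fetchall()
print(sorted(rows))   # [('s1', 'n1', 'r1', 15), ('s2', 'n1', 'r2', 7)]
```

In the original C++ code the same rewrite applies directly: replace the inner PQexec with one outer query of the form `SELECT service, noeud, rubrique, SUM(...) FROM table GROUP BY service, noeud, rubrique`.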
[ { "msg_contents": "I seem to remember that 7.1 was scheduled for August. I believe this\nwill not happen, giving the lack of activity on the list about this\ntopic.\n\nAny plans about when it scheduled for now?\n\nNote that I'm not pushing for it to happen sooner, it's just one of\nthese things that are nice to know.\n\n--\nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\n\nKaki Data tshirts, merchandize Fax: 3816 2501\n\nHowitzvej 75 �ben 14.00-18.00 Email: [email protected]\n\n2000 Frederiksberg L�rdag 11.00-17.00 Web: www.suse.dk\n\n\n\n", "msg_date": "Fri, 18 Aug 2000 10:16:07 +0200", "msg_from": "Kaare Rasmussen <[email protected]>", "msg_from_op": true, "msg_subject": "ETA for 7.1 ?" }, { "msg_contents": "At 03:16 AM 8/18/2000, Kaare Rasmussen wrote:\n>I seem to remember that 7.1 was scheduled for August. I believe this\n>will not happen, giving the lack of activity on the list about this\n>topic.\n>\n>Any plans about when it scheduled for now?\n\nI hope no sooner than it is stable... :)\n\n", "msg_date": "Fri, 18 Aug 2000 08:51:36 -0500", "msg_from": "Thomas Swan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ETA for 7.1 ?" }, { "msg_contents": "On Fri, 18 Aug 2000, Kaare Rasmussen wrote:\n\n> I seem to remember that 7.1 was scheduled for August. I believe this\n> will not happen, giving the lack of activity on the list about this\n> topic.\n> \n> Any plans about when it scheduled for now?\n\nlast time we discussed this, it was looking at Oct/Nov for release ...\n\n\n", "msg_date": "Fri, 18 Aug 2000 11:53:02 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ETA for 7.1 ?" } ]
[ { "msg_contents": "I have plpgsql function that updates the table and may have (and also may\nnot have) null parameters:\n\ncreate function f_test(int4, int4) returns int4 as\n'\n declare\n\t\tv1 alias for $1;\n\tv2 alias for $2;\n\tv3 int4;\n begin\n\traise notice ''v1 = %'', v1;\n\tupdate _testtable set a = v1 where b = v2;\n\tselect into v3 a from _testtable where b = v2;\n\traise notice ''v3 = %'', v3;\n\treturn v3;\n end;\n' language 'plpgsql';\n\nColumn a in _testable can have null value. When I do ‘select f_test(1,1);’\nthen everything is working fine. The row is updated (v1 = v3 = 1).\nBut when I do ‘select f_test(null,1);’ nothing happens :-(. No errors\nreported, but ‘select a from _testable where b=1;’ returns 1 (v1 = v3 =\n<null>).\nWhere am I wrong?\n\nCONFIDENTIALITY NOTICE\nThis email and any files transmitted with it are confidential and are\nintended solely for the use of the individual or entity to whom they are\naddressed. This communication represents the originator's personal views and\nopinions, which do not necessarily reflect those of FastNet Solutions\nCompany. If you are not the original recipient or the person responsible for\ndelivering the email to the intended recipient, be advised that you have\nreceived this email in error, and that any use, dissemination, forwarding,\nprinting, or copying of this email is strictly prohibited. 
If you received\nthis email in error, please immediately notify [email protected].\n\n\n\n\n\nPL/pgSQL how to pass null values to the functions?\n\n\nI have plpgsql function that updates the table and may have (and also may not have) null parameters:\ncreate function f_test(int4, int4) returns int4 as\n'\n    declare\n\nv1 alias for $1;\n\n        v2 alias for $2;\n        v3 int4;\n    begin\n        raise notice ''v1 = %'', v1;\n        update _testtable set a = v1 where b = v2;\n        select into v3 a from _testtable where b = v2;\n        raise notice ''v3 = %'', v3;\n        return v3;\n    end;\n' language 'plpgsql';\nColumn a in _testable can have null value. When I do ‘select f_test(1,1);’ then everything is working fine. The row is updated (v1 = v3 = 1).\nBut when I do ‘select f_test(null,1);’ nothing happens L. No errors reported, but ‘select a from _testable where b=1;’ returns 1 (v1 = v3 = <null>).\nWhere am I wrong?\nCONFIDENTIALITY NOTICE\nThis email and any files transmitted with it are confidential and are intended solely for the use of the individual or entity to whom they are addressed. This communication represents the originator's personal views and opinions, which do not necessarily reflect those of FastNet Solutions Company. If you are not the original recipient or the person responsible for delivering the email to the intended recipient, be advised that you have received this email in error, and that any use, dissemination, forwarding, printing, or copying of this email is strictly prohibited. If you received this email in error, please immediately notify [email protected].", "msg_date": "Fri, 18 Aug 2000 16:01:58 +0400", "msg_from": "\"Kuvyrkin, Nick\" <[email protected]>", "msg_from_op": true, "msg_subject": "PL/pgSQL how to pass null values to the functions?" } ]
[ { "msg_contents": "configure seems to be having a problem with --without-CXX\nWhen given, the C++ compiler is set to 'no'.\n\nThis was with CVS as of about 10:00 EDT\n\nplatform is linux i386 running Debian 'woody'.\n \n --- commandline ---\n \n ./configure --enable-debug --enable-cassert --enable-depend --without-python --without-perl --without-CXX\n \n --- OUTPUT ----\n \n creating cache ./config.cache\n checking host system type... i586-pc-linux-gnu\n checking which template to use... linux\n checking whether to build with locale support... no\n checking whether to build with Cyrillic recode support... no\n checking whether to build with multibyte character support... no\n checking for default port number... 5432\n checking for default soft limit on number of connections... 32\n checking for gcc... gcc\n checking whether the C compiler (gcc ) works... yes\n checking whether the C compiler (gcc ) is a cross-compiler... no\n checking whether we are using GNU C... yes\n checking whether gcc accepts -g... yes\n using CFLAGS=-O2\n checking whether the C compiler (gcc -O2 ) works... yes\n checking whether the C compiler (gcc -O2 ) is a cross-compiler... no\n checking how to run the C preprocessor... gcc -E\n checking whether gcc needs -traditional... no\n checking setting debug compiler flag... yes\n checking setting USE_TCL... disabled\n checking whether to build Perl modules... no\n checking whether to build Python modules... no\n checking whether to build the ODBC driver... no\n checking setting ASSERT CHECKING... enabled\n checking whether to build C++ modules... yes\n checking for c++... no\n checking whether the C++ compiler (no ) works... 
no\n configure: error: installation or configuration problem: C++ compiler cannot create executables.\n \n ------ config.log --------\n \n This file contains any messages produced by compilers while\n running configure, to aid debugging if configure makes a mistake.\n \n configure:619: checking host system type\n configure:642: checking which template to use\n configure:787: checking whether to build with locale support\n configure:809: checking whether to build with Cyrillic recode support\n configure:832: checking whether to build with multibyte character support\n configure:872: checking for default port number\n configure:901: checking for default soft limit on number of connections\n configure:939: checking for gcc\n configure:1052: checking whether the C compiler (gcc ) works\n configure:1068: gcc -o conftest conftest.c 1>&5\n configure:1094: checking whether the C compiler (gcc ) is a cross-compiler\n configure:1099: checking whether we are using GNU C\n configure:1108: gcc -E conftest.c\n configure:1127: checking whether gcc accepts -g\n configure:1163: checking whether the C compiler (gcc -O2 ) works\n configure:1179: gcc -o conftest -O2 conftest.c 1>&5\n configure:1205: checking whether the C compiler (gcc -O2 ) is a cross-compiler\n configure:1210: checking how to run the C preprocessor\n configure:1231: gcc -E conftest.c >/dev/null 2>conftest.out\n configure:1291: checking whether gcc needs -traditional\n configure:1374: checking setting debug compiler flag\n configure:1429: checking setting USE_TCL\n configure:1482: checking whether to build Perl modules\n configure:1499: checking whether to build Python modules\n configure:1759: checking whether to build the ODBC driver\n configure:1795: checking setting ASSERT CHECKING\n configure:1843: checking whether to build C++ modules\n configure:1857: checking for c++\n configure:1889: checking whether the C++ compiler (no ) works\n configure:1905: no -o conftest conftest.C 1>&5\n ./configure: no: command not 
found\n configure: failed program was:\n \n #line 1900 \"configure\"\n #include \"confdefs.h\"\n \n int main(){return(0);}\n\n-- \n\nMark Hollomon\[email protected]\n", "msg_date": "Fri, 18 Aug 2000 11:00:22 -0400", "msg_from": "\"Mark Hollomon\" <[email protected]>", "msg_from_op": true, "msg_subject": "configure CXX bug" }, { "msg_contents": "\"Mark Hollomon\" <[email protected]> writes:\n> configure seems to be having a problem with --without-CXX\n> When given, the C++ compiler is set to 'no'.\n\n> This was with CVS as of about 10:00 EDT\n\nPeter E. has been hacking the configure stuff since 7.0. The behavior\nis now more consistent with GNU/autoconf standard practice, but not 100%\nbackwards compatible. I think that --without-CXX is now the default,\nand you need to say --with-CXX [=compiler] if you want the C++ code\nbuilt.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 Aug 2000 01:39:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: configure CXX bug " }, { "msg_contents": "Mark Hollomon writes:\n\n> configure seems to be having a problem with --without-CXX\n> When given, the C++ compiler is set to 'no'.\n\nFixed. Thanks.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 20 Aug 2000 01:42:22 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: configure CXX bug" } ]
[ { "msg_contents": "I have a friend who works at IBM on DB2, i showed\nher \"the benchmark\" just for kicks.. (we teach each\nother over who's is better etc.. all good fun)\n\nanyway she sends me back this article about how DB2\nblows the lid off of TPC-C.. just proof that any\nbenchmark can mean anything..\n\nat the same time i talked to a friend, who's a strong\npostgres proponent, he was curious as to what size\nthe databases the postgres benchmarks were done on..\n\nned maybe you can field this second question.\n\n\nJeff MacDonald,\n\n-----------------------------------------------------\nPostgreSQL Inc\t\t| Hub.Org Networking Services\[email protected]\t\t| [email protected]\nwww.pgsql.com\t\t| www.hub.org\n1-902-542-0713\t\t| 1-902-542-3657\n-----------------------------------------------------\nFascimile : 1 902 542 5386\nIRC Nick : bignose\n\n", "msg_date": "Fri, 18 Aug 2000 13:11:08 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": true, "msg_subject": "benchmarks - anyone can make em" }, { "msg_contents": "course i could definatly include the url\n\nhttp://www.zdnet.com/eweek/stories/general/0,11011,2599679,00.html\n\njeff\n\nOn Fri, 18 Aug 2000, Jeff MacDonald wrote:\n\n> I have a friend who works at IBM on DB2, i showed\n> her \"the benchmark\" just for kicks.. (we teach each\n> other over who's is better etc.. all good fun)\n> \n> anyway she sends me back this article about how DB2\n> blows the lid off of TPC-C.. 
just proof that any\n> benchmark can mean anything..\n> \n> at the same time i talked to a friend, who's a strong\n> postgres proponent, he was curious as to what size\n> the databases the postgres benchmarks were done on..\n> \n> ned maybe you can field this second question.\n> \n> \n> Jeff MacDonald,\n> \n> -----------------------------------------------------\n> PostgreSQL Inc\t\t| Hub.Org Networking Services\n> [email protected]\t\t| [email protected]\n> www.pgsql.com\t\t| www.hub.org\n> 1-902-542-0713\t\t| 1-902-542-3657\n> -----------------------------------------------------\n> Fascimile : 1 902 542 5386\n> IRC Nick : bignose\n> \n\nJeff MacDonald,\n\n-----------------------------------------------------\nPostgreSQL Inc\t\t| Hub.Org Networking Services\[email protected]\t\t| [email protected]\nwww.pgsql.com\t\t| www.hub.org\n1-902-542-0713\t\t| 1-902-542-3657\n-----------------------------------------------------\nFascimile : 1 902 542 5386\nIRC Nick : bignose\n\n", "msg_date": "Fri, 18 Aug 2000 13:17:12 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": true, "msg_subject": "Re: benchmarks - anyone can make em" }, { "msg_contents": "At 01:11 PM 8/18/00 -0300, Jeff MacDonald wrote:\n>I have a friend who works at IBM on DB2, i showed\n>her \"the benchmark\" just for kicks.. (we teach each\n>other over who's is better etc.. all good fun)\n>\n>anyway she sends me back this article about how DB2\n>blows the lid off of TPC-C.. 
just proof that any\n>benchmark can mean anything..\n\nSure they do, IBM still makes mainframes far more powerful\nthan puny dual-processor Linux boxes...\n\nThe \"proprietary 1\", mySQL and Postgres tests were all run\non the same operating system/hardware combination, at\nleast.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Fri, 18 Aug 2000 11:11:37 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: benchmarks - anyone can make em" }, { "msg_contents": "At 01:17 PM 8/18/00 -0300, Jeff MacDonald wrote:\n>course i could definatly include the url\n>\n>http://www.zdnet.com/eweek/stories/general/0,11011,2599679,00.html\n\nOK, so they weren't running on a large mainframe.\n\nThey were running on a cluster of 32 servers. I should HOPE they'd\nbeat Postgres on a single server!\n\nThis is FAR more \"apples to oranges\" than comparing engines on the\nsame machine, same Linux release ...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Fri, 18 Aug 2000 11:14:08 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: benchmarks - anyone can make em" }, { "msg_contents": "What size?\n\nAccording to the engineer who ran the test:\n\n> We set it up for 100 users. There is a default loadset for that\n> number of users, which loads up the tables to various levels. It's\ndescribed\n> in the docs. 
Some tables had 100,000 rows.\n\nMore info on the AS3AP test can be found at\nhttp://www.benchmarkresources.com/handbook/5-3.html\n\nMore info on the TPC-C at http://www.tpc.org/faq_TPCC.html.\n\nRegards,\nNed\n\n\n\nJeff MacDonald wrote:\n\n> at the same time i talked to a friend, who's a strong\n> postgres proponent, he was curious as to what size\n> the databases the postgres benchmarks were done on..\n>\n> ned maybe you can field this second question.\n\n", "msg_date": "Fri, 18 Aug 2000 14:58:55 -0400", "msg_from": "Ned Lilly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: benchmarks - anyone can make em" } ]
[ { "msg_contents": "Using Postgresql 7.0.2 (Linux x86, 2.2.16)\n\nCREATE FUNCTION foo(text)\n\nCREATE TABLE bar(\n fud TEXT CHECK (foo(fud))\n);\n\nDROP FUNCTION foo(TEXT);\nCREATE FUNCTION foo( .....);\n\nINSERT INTO bar VALUES ('Hey'); results in the following error\n\nERROR init_fcache: Cache lookup failed for procedure 128384\n\nIs this particular to postgres or is this a normal SQLxx standard behavior?\n-\n- Thomas Swan\n- Graduate Student - Computer Science\n- The University of Mississippi\n-\n- \"People can be categorized into two fundamental\n- groups, those that divide people into two groups\n- and those that don't.\"", "msg_date": "Fri, 18 Aug 2000 14:35:05 -0500", "msg_from": "Thomas Swan <[email protected]>", "msg_from_op": true, "msg_subject": "possible constraint bug?" }, { "msg_contents": "\nThis is particular to postgres, although the \nSQL behavior would have either dropped \nthe constraint or prevented the drop in the\nfirst place.\n\nThere's been some talk of an ALTER FUNCTION\nthat would let you change the code behind\na function without a drop/create.\n\nGenerally you have to re-generate things that\nreference functions that have been dropped\nand re-created. 
This is a pain right now\nfor constraints, since it requires a dump\nand restore of the table.\n\nOn Fri, 18 Aug 2000, Thomas Swan wrote:\n\n> \n> Using Postgresql 7.0.2 (Linux x86, 2.2.16)\n> \n> CERATE FUNCTION foo(text)\n> \n> CREATE TABLE bar(\n> fud TEXT CHECK (foo(fud))\n> );\n> \n> DROP FUNCTION foo(TEXT);\n> CREATE FUNCTION foo( .....);\n> \n> INSERT INTO bar VALUES ('Hey'); results in the following error\n> \n> ERROR init_fcache: Cache lookup failed for procedure 128384\n> \n> Is this particular to postgres or is this a normal SQLxx standard behavior?\n\n", "msg_date": "Fri, 18 Aug 2000 12:58:09 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: possible constraint bug?" }, { "msg_contents": "Is this addressed in 7.1?\n\n> \n> This is particular to postgres, although the \n> SQL behavior would have either dropped \n> the constraint or prevented the drop in the\n> first place.\n> \n> There's been some talk of an ALTER FUNCTION\n> that would let you change the code behind\n> a function without a drop/create.\n> \n> Generally you have to re-generate things that\n> reference functions that have been dropped\n> and re-created. This is a pain right now\n> for constraints, since it requires a dump\n> and restore of the table.\n> \n> On Fri, 18 Aug 2000, Thomas Swan wrote:\n> \n> > \n> > Using Postgresql 7.0.2 (Linux x86, 2.2.16)\n> > \n> > CERATE FUNCTION foo(text)\n> > \n> > CREATE TABLE bar(\n> > fud TEXT CHECK (foo(fud))\n> > );\n> > \n> > DROP FUNCTION foo(TEXT);\n> > CREATE FUNCTION foo( .....);\n> > \n> > INSERT INTO bar VALUES ('Hey'); results in the following error\n> > \n> > ERROR init_fcache: Cache lookup failed for procedure 128384\n> > \n> > Is this particular to postgres or is this a normal SQLxx standard behavior?\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 12 Oct 2000 15:24:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: possible constraint bug?" }, { "msg_contents": "\nOn Thu, 12 Oct 2000, Bruce Momjian wrote:\n\n> Is this addressed in 7.1?\n\nNot as far as I know.\nIt would require one or more of:\n ALTER FUNCTION\n ALTER TABLE ... DROP CONSTRAINT\n a reference system that automatically drops/\n restricts based on objects referencing the \n thing you drop (and this wouldn't make sense\n for constraints without alter ... drop \n constraint anyway).\n\n", "msg_date": "Thu, 12 Oct 2000 12:45:45 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: possible constraint bug?" }, { "msg_contents": "> \n> On Thu, 12 Oct 2000, Bruce Momjian wrote:\n> \n> > Is this addressed in 7.1?\n> \n> Not as far as I know.\n> It would require one or more of:\n> ALTER FUNCTION\n> ALTER TABLE ... DROP CONSTRAINT\n> a reference system that automatically drops/\n> restricts based on objects referencing the \n> thing you drop (and this wouldn't make sense\n> for constraints without alter ... drop \n> constraint anyway).\n> \n> \n\nAdded to TODO:\n\n* Add ALTER FUNCTION\n* Add ALTER TABLE ... DROP CONSTRAINT\n* Automatically drop constraints/functions when object is dropped\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 12 Oct 2000 16:25:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: possible constraint bug?" } ]
[ { "msg_contents": "Hi,\n\nI'm using the postgresql 7.0.2., the JDBC interface. I\nneed to optimise the database storage speeds in my\napplication. In order to do this I considered having\nmore than one connection so that I can have separate\ntransactions for performing a group of inserts into a\nspecific table - 1 transaction/connection for one\ntable. But this seems to take the same time or even a\nlittle longer than having the transactions occur\nsequentially, contrary to my expectation especially\nconsidering that these are inserts into separate\ntables. \nCould you shed some light on this, and what I need to\ndo to make inserts using JDBC faster ?\n\nThanks,\nRini\n\n__________________________________________________\nDo You Yahoo!?\nSend instant messages & get email alerts with Yahoo! Messenger.\nhttp://im.yahoo.com/\n", "msg_date": "Fri, 18 Aug 2000 13:55:55 -0700 (PDT)", "msg_from": "Rini Dutta <[email protected]>", "msg_from_op": true, "msg_subject": "multiple transactions" } ]
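For the insert-speed question in this thread, the usual first lever is not extra connections but doing all the inserts inside a single transaction, batched. Sketched here with Python's stdlib sqlite3 driver as a stand-in for the JDBC calls (setAutoCommit(false), addBatch()/executeBatch(), commit()); the table name and row data are made up:

```python
# One transaction plus one batched statement: the analogue of JDBC's
# setAutoCommit(false) + addBatch()/executeBatch() + commit(). Uses the
# stdlib sqlite3 driver as a stand-in; table name and data are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, payload TEXT)")

rows = [(i, f"row-{i}") for i in range(1000)]

with conn:                                      # commits once at the end
    conn.executemany("INSERT INTO events VALUES (?, ?)", rows)

print(conn.execute("SELECT count(*) FROM events").fetchone()[0])   # 1000
```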
[ { "msg_contents": "\n\nI've coded up a module that implements python as a procedural language\nfor postgresql. A tarball can be downloaded at\n\nhttp://users.ids.net/~bosma\n\nIf anyone is interested, take a look at it. Comments and suggestions\nwould be appreciated. Functions and triggers work. The SPI\ninterface can run plain queries (SPI_exec), but not plans. I'm sure\nthere's a bug or three still lurking in there. Unfortunately it\ndoesn't use the new function interface.\n\nThanks\nAndrew Bosma\n\n-- \n\n\n", "msg_date": "Sat, 19 Aug 2000 02:50:24 -0400", "msg_from": "[email protected] (Andrew Bosma)", "msg_from_op": true, "msg_subject": "python as procedural language" } ]
[ { "msg_contents": "Here is a patch to bring SSL support back working. Sorry for the long delay\n:-(\n\nI also added the function sslinfo() to get information about the SSL\nconnection.\n (I'm not 100% sure I got that one right, though. Is it enough to put an \n entry in pg_proc.h, or do I need it anywhere else?\n Also, I picked \"lowest oid in highest free block\" - correct?)\n\nExample of the function for non-SSL connection:\ntemplate1=# select sslinfo();\n sslinfo\n-------------------------------\n SSL not active on connection.\n(1 row)\n\nExample of the function for SSL connection:\ntemplate1=# select sslinfo();\n sslinfo\n-------------------------------------\n SSL cipher: DES-CBC3-SHA, bits: 168\n(1 row)\n\n\n//Magnus", "msg_date": "Sat, 19 Aug 2000 17:25:11 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Patch - SSL back to working" }, { "msg_contents": "Magnus Hagander <[email protected]> writes:\n> Here is a patch to bring SSL support back working. Sorry for the long delay\n> :-(\n\nThis is good ;-)\n\n> I also added the function sslinfo() to get information about the SSL\n> connection.\n\nThat strikes me as a very bizarre way of doing things. Why not add an\ninquiry function to the libpq API, instead?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 Aug 2000 16:25:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch - SSL back to working " }, { "msg_contents": "On Sat, 19 Aug 2000, Tom Lane wrote:\n\n> Magnus Hagander <[email protected]> writes:\n> > Here is a patch to bring SSL support back working. Sorry for the long delay\n> > :-(\n> \n> This is good ;-)\n> \n> > I also added the function sslinfo() to get information about the SSL\n> > connection.\n> \n> That strikes me as a very bizarre way of doing things. 
Why not add an\n> inquiry function to the libpq API, instead?\n\nwhat's the difference between 'select sslinfo()' and 'select version()',\nor is 'select version()' part of the libpq API also? \n\n\n", "msg_date": "Sat, 19 Aug 2000 17:44:48 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch - SSL back to working " }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n>>>> I also added the function sslinfo() to get information about the SSL\n>>>> connection.\n>> \n>> That strikes me as a very bizarre way of doing things. Why not add an\n>> inquiry function to the libpq API, instead?\n\n> what's the difference between 'select sslinfo()' and 'select version()',\n\nWell, (1) backend version is not known directly to libpq; the backend\n*must* be queried in some fashion for that info. I suppose the SSL\nconnection info is known equally well at both ends of the connection,\nso it seems bizarre to inquire of the backend information that would\nbe available without any round-trip query.\n\n(2) Transport-level info should be available without having to deal with\nconcerns like whether you have a half-issued query already, or are in\nabort transaction state and can't get the backend to execute a SELECT,\netc. This is a confusion of protocol-stack levels; it's like asking\nthe backend what the client's IP address is.\n\n(3) version() is a constant, more or less, but sslinfo() will vary\ndepending on how you have connected. 
That bothers me, although I can't\nquite put my finger on the reason why.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 Aug 2000 17:19:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch - SSL back to working " }, { "msg_contents": "\nall good points, thanks :)\n\nOn Sat, 19 Aug 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> >>>> I also added the function sslinfo() to get information about the SSL\n> >>>> connection.\n> >> \n> >> That strikes me as a very bizarre way of doing things. Why not add an\n> >> inquiry function to the libpq API, instead?\n> \n> > what's the difference between 'select sslinfo()' and 'select version()',\n> \n> Well, (1) backend version is not known directly to libpq; the backend\n> *must* be queried in some fashion for that info. I suppose the SSL\n> connection info is known equally well at both ends of the connection,\n> so it seems bizarre to inquire of the backend information that would\n> be available without any round-trip query.\n> \n> (2) Transport-level info should be available without having to deal with\n> concerns like whether you have a half-issued query already, or are in\n> abort transaction state and can't get the backend to execute a SELECT,\n> etc. This is a confusion of protocol-stack levels; it's like asking\n> the backend what the client's IP address is.\n> \n> (3) version() is a constant, more or less, but sslinfo() will vary\n> depending on how you have connected. That bothers me, although I can't\n> quite put my finger on the reason why.\n> \n> \t\t\tregards, tom lane\n> \n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 19 Aug 2000 18:24:37 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch - SSL back to working " }, { "msg_contents": "Magnus Hagander writes:\n\n> Here is a patch to bring SSL support back working. Sorry for the long delay\n> :-(\n\nAny chance we can get a `diff -cr' patch?\n\nBtw., a while ago I was wondering about the postmaster `-l' option: I\nthink it should be removed and the job should be done in pg_hba.conf\nalone. Instead I would like an option (possibly -l) that turns off SSL\ncompletely. Currently you can't even start the postmaster without the\ncertificate files etc. (Some docs on how to do that would be nice as\nwell.)\n\nBtw.2: Where do you get the documentation? I have been looking for SSL API\ndocs all over.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n(note to self: change signature, you don't live there anymore...)\n\n", "msg_date": "Sun, 20 Aug 2000 01:06:14 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Patch - SSL back to working" } ]
[ { "msg_contents": "I think the create and drop schema commands should throw a \"not\nimplemented\" error, rather than fiddle around with databases. Consider\nusers that try these commands just on luck.\n\nBtw., the grammar for these commands isn't implemented correctly either.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 20 Aug 2000 10:59:55 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "CREATE/DROP SCHEMA considered harmful" }, { "msg_contents": "> I think the create and drop schema commands should throw a \"not\n> implemented\" error, rather than fiddle around with databases. Consider\n> users that try these commands just on luck.\n> Btw., the grammar for these commands isn't implemented correctly either.\n\nSure. I'm not too worried about them, since what you suggest can be done\nat any time before release.\n\nAt the moment, I'm having trouble just getting a database installed. Did\nsomething change recently? I *thought* I was tracking down a problem\nwhere updated system catalogs did not get propagated into the\nshare/*.bki files. I did an update from cvsup a few minutes ago, and now\nthat seems to be fixed, but psql cannot create a database :(\n\nThe message when running createdb is:\n\npsql: ConnectDBStart() -- connect() failed: Invalid argument\n\tIs the postmaster running...\n\nAh, a \"make distclean\" seems to have fixed it for me. 
Not sure why it\nwould need to be that drastic; are we missing a dependency somewhere?\n\n - Tom\n", "msg_date": "Sun, 20 Aug 2000 18:16:01 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE/DROP SCHEMA considered harmful" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> The message when running createdb is:\n> psql: ConnectDBStart() -- connect() failed: Invalid argument\n> \tIs the postmaster running...\n> Ah, a \"make distclean\" seems to have fixed it for me. Not sure why it\n> would need to be that drastic; are we missing a dependency somewhere?\n\nI bet what happened is that you pulled Peter's latest change\n(introducing HAVE_UNIX_SOCKETS config symbol) and didn't rerun\nconfigure, so you had config.h without HAVE_UNIX_SOCKETS defined,\nand the unix-socket support got compiled out. Two thoughts here:\n\n1. There is no make dependency from config.h.in to config.h, and\nprobably can't be since make is not responsible for running configure\n(and should not be, IMHO). But perhaps we could have a script or\nsomething that warns you that you probably need to rerun configure\nif configure or any .in file is newer than the configure output files.\n\n2. It sounds like libpq does not fail very gracefully if you try to\nconnect via unix-sockets when that code is ifdef'd out. Someone needs\nto look at that combination --- any volunteers?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 Aug 2000 14:57:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE/DROP SCHEMA considered harmful " } ]
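Tom's diagnosis above suggests a simple rule of thumb for people building from CVS. This is just the obvious rebuild sequence implied by the thread, not project policy:

```sh
# After pulling changes that touch configure or any *.in file,
# config.h can be stale; "make distclean" forces it to be rebuilt.
make distclean
./configure         # regenerates config.h (e.g. HAVE_UNIX_SOCKETS)
make
```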
[ { "msg_contents": "> Well, this is how it is supposed to work. \"select for update\" only\n> works within a transaction and holds the lock until the transaction\n> is complete.\n>\n> What exactly is it that you're trying to do?\n\n Suppose that we have 2 transactions, and both attempt to lock the\nsame row. One of them will wait until the other one\nis committed or rolled back.\n\n Oracle has \"select for update nowait\": instead of waiting for\nthe conclusion of the other transaction, it gives back an error to us saying\nthat the row has already been locked.\n\n I am looking for something similar to this, or failing that, a way to know\nwhether a row has been locked, to avoid these waits\n\n Juan Carlos Perez Vazquez\n [email protected]\n\n\n", "msg_date": "Sun, 20 Aug 2000 11:23:46 +0200", "msg_from": "=?iso-8859-1?Q?Juan_Carlos_P=E9rez_V=E1zquez?= <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Row Level Locking Problem" }, { "msg_contents": "Hi\n\n I have a big problem: when I try to lock a row that was locked previously, it waits\nuntil a commit / rollback operation.\n\n How could I lock a row if it is not locked already?\n\n Could I know if a row is locked?\n\n Could I get some error message from postgres when I do 'select ....\nfor update' on a locked row instead of waiting for commit / rollback?\n\n\[email protected]\n\n\n\n\n", "msg_date": "Sun, 20 Aug 2000 14:33:50 +0200", "msg_from": "\"Cray2\" <[email protected]>", "msg_from_op": false, "msg_subject": "Row Level Locking Problem" }, { "msg_contents": "At 02:33 PM 8/20/00 +0200, Cray2 wrote:\n> I have a big problem: when I try to lock a row that was locked previously, it waits\n>until a commit / rollback operation.\n>\n> How could I lock a row if it is not locked already?\n>\n> Could I know if a row is locked?\n>\n> Could I get some error message from postgres when I do 'select ....\n>for update' on a locked row instead of waiting for commit / rollback?\n\nWell, this is how it is supposed to work. 
\"select for update\" only\nworks within a transaction and holds the lock until the transaction\nis complete.\n\nWhat exactly is it that you're trying to do?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sun, 20 Aug 2000 17:31:03 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Row Level Locking Problem" }, { "msg_contents": "Cray2 wrote:\n> \n> Hi\n> \n> I have a big problem: when I try to lock a row that was locked previously, it waits\n> until a commit / rollback operation.\n> \n> How could I lock a row if it is not locked already?\n> \n> Could I know if a row is locked?\n> \n> Could I get some error message from postgres when I do 'select ....\n> for update' on a locked row instead of waiting for commit / rollback?\n\nIt is theoretically possible to write a function is_locked and then do\n\nselect .... for update where not is_locked(); \n\nand thereby lock only not-yet-locked rows.\n\n------------------\nHannu\n", "msg_date": "Mon, 21 Aug 2000 18:40:05 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Row Level Locking Problem" } ]
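To make Hannu's suggestion concrete, a client wanting Oracle-style NOWAIT behaviour might use it as below. Note that `is_locked` is hypothetical — no such function exists in PostgreSQL, and it would have to be supplied by the user (most likely written in C, inspecting the row's lock state) — and the table and column names are invented. There is also an inherent race: another backend can lock the row between the `is_locked` check and the FOR UPDATE.

```sql
BEGIN;
-- Grab the row only if nobody else currently holds it.  Getting zero
-- rows back means the row was already locked (or absent), so the
-- client can give up immediately instead of blocking.
SELECT *
  FROM accounts                       -- invented table
 WHERE id = 42
   AND NOT is_locked('accounts', 42)  -- hypothetical user function
   FOR UPDATE;
COMMIT;
```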
[ { "msg_contents": "> > I also added the function sslinfo() to get information about the SSL\n> > connection.\n> \n> That strikes me as a very bizarre way of doing things. Why not add an\n> inquiry function to the libpq API, instead?\n\nWell. I did it mostly so I wouldn't have to change the API :-)\nBut your points are very good :-) I'll add something to the frontend\nlibrary, remove the function, and send a new patch.\n\n\nPeter wrote:\n> Any chance we can get a `diff -cr' patch?\nSure, I'll do that next time. I just used the 'difforig' script that is\nincluded in the backend. If this is not the preferred format of the patch,\nmaybe it should be updated?\n\n> Btw., a while ago I was wondering about the postmaster `-l' option: I\n> think it should be removed and the job should be done in pg_hba.conf\n> alone. Instead I would like an option (possibly -l) that turns off SSL\n> completely. Currently you can't even start the postmaster without the\n> certificate files etc. (Some docs on how to do that would be nice as\n> well.)\nHm. Yeah. It's actually handled at both stages right now. You can use the\n\"-l\" option to reject *all* non-SSL INET connections at an early stage,\nbefore even looking at pg_hba.conf. But everything can be handled in\npg_hba.conf already.\nI'll look at fixing that up as well :-)\n\n> Btw.2: Where do you get the documentation? I have been looking for SSL API\n> docs all over.\nActually, nowhere... I got it looking through other programs' source when\ndeveloping a \"poor man's VPN\" solution for work. Then I just took what I had\nthere and applied it to postgresql. There is a serious lack of documentation of\nthat API...\n\n//Magnus\n", "msg_date": "Sun, 20 Aug 2000 12:31:12 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [PATCHES] Patch - SSL back to working " }, { "msg_contents": "Magnus Hagander writes:\n\n> Well. 
I did it mostly so I wouldn't have to change the API :-)\n> But your points are very good :-) I'll add something to the frontend\n> library, remove the function, and send a new patch.\n\nI think something like what PQhost(), PQport(), etc. are doing would be\nokay.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Sun, 20 Aug 2000 13:56:53 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [PATCHES] Patch - SSL back to working " } ]
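Following Peter's pointer to PQhost() and PQport(), such an inquiry function might look like the sketch below. The function name is invented and the struct is a stand-in — the real PGconn is private to libpq and carries the OpenSSL handle internally — so this only illustrates the calling convention, not an actual implementation:

```c
#include <stddef.h>

/* Stand-in for libpq's private connection struct; in the real
 * library the SSL handle lives inside PGconn. */
typedef struct PGconn
{
    void       *ssl;        /* non-NULL when SSL is active */
    const char *cipher;     /* e.g. "DES-CBC3-SHA" */
} PGconn;

/*
 * Hypothetical inquiry function in the PQhost()/PQport() style:
 * returns the cipher name for an SSL connection, or NULL when the
 * connection is not using SSL (or conn itself is NULL).
 */
const char *
PQsslCipher(const PGconn *conn)
{
    if (conn == NULL || conn->ssl == NULL)
        return NULL;
    return conn->cipher;    /* real code would ask OpenSSL instead */
}
```

A client could then report the cipher without any round trip to the backend, which addresses Tom's objections about query state and protocol layering.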
[ { "msg_contents": "Barring protests, I will instruct configure and the makefiles to only look\nfor and use Flex rather than any old Lex, since the latter won't work\nanyway. If at a later date someone has a burning desire to make things\nwork with FooNix Lex it should be a relatively simple change back -- in\nany case simpler than making that other lex work in the first place.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 20 Aug 2000 12:54:52 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Flex vs Lex" }, { "msg_contents": "On Sun, 20 Aug 2000, Peter Eisentraut wrote:\n\n> Barring protests, I will instruct configure and the makefiles to only look\n> for and use Flex rather than any old Lex, since the latter won't work\n> anyway. If at a later date someone has a burning desire to make things\n> work with FooNix Lex it should be a relatively simple change back -- in\n> any case simpler than making that other lex work in the first place.\n\nokay, just checked FreeBSD, and /usr/bin/lex == /usr/bin/flex, so we're\nsafe here, but what about the other OSs? Any chance one of them has flex\ninstalled as just lex?\n\n\n", "msg_date": "Sun, 20 Aug 2000 11:20:16 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flex vs Lex" }, { "msg_contents": "> On Sun, 20 Aug 2000, Peter Eisentraut wrote:\n> \n> > Barring protests, I will instruct configure and the makefiles to only look\n> > for and use Flex rather than any old Lex, since the latter won't work\n> > anyway. 
If at a later date someone has a burning desire to make things\n> > work with FooNix Lex it should be a relatively simple change back -- in\n> > any case simpler than making that other lex work in the first place.\n> \n> okay, just checked FreeBSD, and /usr/bin/lex == /usr/bin/flex, so we're\n> safe here, but what about the other OSs? Any chance one of them has flex\n> installed as just lex?\n\nBSDI. Don't make the change.\n\n\t#$ flex\n\tbash: flex: command not found\n\t#$ lex\n\t\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 20 Aug 2000 12:29:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flex vs Lex" }, { "msg_contents": "* Peter Eisentraut <[email protected]> [000820 04:06] wrote:\n> Barring protests, I will instruct configure and the makefiles to only look\n> for and use Flex rather than any old Lex, since the latter won't work\n> anyway. If at a later date someone has a burning desire to make things\n> work with FooNix Lex it should be a relatively simple change back -- in\n> any case simpler than making that other lex work in the first place.\n\nI'm not sure if it's still an issue but FreeBSD has removed bison\nfrom the base system (I think NetBSD has as well but it was a long\ntime ago) the problem is that last I checked FreeBSD's yacc can't\nhandle one of postgresql's grammar description files and therefore\nrequires bison to be present.\n\nBasically the configure script ought to abort if bison isn't found,\nyacc doesn't work as a replacement.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Sun, 20 Aug 2000 13:41:36 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "and! 
Bison vs yacc Re: Flex vs Lex" }, { "msg_contents": "Alfred Perlstein <[email protected]> writes:\n> Basically the configure script ought to abort if bison isn't found,\n\nCertainly not, seeing as how the standard distribution doesn't require\nthe recipient to run bison at all. It's only an issue if you are\nworking from CVS sources. Even then, there are vendor yaccs that will\nwork (possibly after some YFLAGS-tuning), and it is not configure's\ncharter to prevent you from using them. I am not sure there are any\nnon-flex lexes that will work, though.\n\nThe nasty part of this is that some OSes seem to have flex or bison\ninstalled only under the name 'lex' or 'yacc'. Probably what configure\nshould do is\n 1. Search for flex; if found, use it.\n 2. Search for lex; if found, test to see if it's really flex\n (eg, by checking \"lex --version\" output).\nand likewise for bison (except go ahead and try to use yacc even\nif it's not bison).\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 Aug 2000 17:00:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: and! Bison vs yacc Re: Flex vs Lex " } ]
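Tom's two-step search could be written as an Autoconf fragment along the following lines. This is a sketch of the idea only — it is not the actual configure.in code, and probing via `--version` output is an assumption:

```m4
dnl Prefer flex; accept a program named "lex" only if it turns out
dnl to be flex in disguise (as on FreeBSD, where lex == flex).
AC_CHECK_PROGS(LEX, flex lex)
if test "$LEX" = lex; then
  AC_MSG_CHECKING([whether lex is really flex])
  if $LEX --version 2>/dev/null | grep flex >/dev/null 2>&1; then
    AC_MSG_RESULT(yes)
  else
    AC_MSG_RESULT(no)
    LEX=""
  fi
fi
dnl For the parser, try bison first but still fall back to vendor
dnl yacc, which may work (possibly after some YFLAGS tuning).
AC_CHECK_PROGS(YACC, bison yacc)
```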
[ { "msg_contents": "It seems that the optimiser is unaware that currval('seq') can be treated as\na constant within \nan expression and thus produces suboptimal plans for WHERE clauses that\nuse currval, \nusing a seq scan instead of an index scan.\n\nIs it possible (planned) to mark functions as returning a constant when\ngiven a constant \nargument and start using it _as a constant_ (pre-evaluated) in queries?\n\nThe current behaviour is not very intuitive.\n\n------------\nHannu\n", "msg_date": "Sun, 20 Aug 2000 14:38:08 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Optimisation deficiency: currval('seq')-->seq scan, constant-->index\n\tscan" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> It seems that the optimiser is unaware that currval('seq') can be treated\n> as a constant within an expression and thus produces suboptimal plans\n> for WHERE clauses that use currval, using a seq scan instead of\n> an index scan.\n\ncurrval() does not qualify to be marked cachable, since it does not\nalways return the same result given the same arguments.\n\nThere are a few functions that are not cachable but could be treated\nas constants within a single transaction, now() being the most obvious\nexample. Currently there is no intermediate function type between\n\"cachable\" and \"noncachable\" but I have toyed with the idea of inventing\none. 
Getting the semantics right could be tricky however.\n\nHowever, even if we had a concept of \"constant within a transaction/\nscan/whatever\", currval() would not qualify --- what if there is a\nnextval() being invoked somewhere else in the query, possibly inside a\nuser-defined function where the optimizer has no chance of seeing it?\n\nIn short, there is no way of optimizing currval() in the way you want\nwithout risking breakage.\n\nFor interactive queries you could fake the behavior you want by creating\na user-defined function that just calls currval(), and then marking this\nfunction cachable. Don't try calling such a function inside a SQL or\nplpgsql function however, or you will be burnt by premature constant-\nfolding. Basically, this technique leaves it in your hands to determine\nwhether the optimization is safe.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 Aug 2000 13:25:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "Hi!\n\nOn Sun, 20 Aug 2000, Hannu Krosing wrote:\n\n> It seems that optimiser is unaware that currval('seq') can be treated as\n> a constant within \n> an expression and thus produces suboptimal plans for WHERE clauses that\n> use currval \n> thus using a seq scan instead of index scan.\n> \n> Is it possible (planned) to mark functions as returning a constant when\n> given a constant \n> argument and start using it _as a constant_ (pre-evaluated) in queries\n\n\nJust one question regarding this:\n\nSuppose you have\nselect ... where x in (select currval('seq')) and y in (select\nnextval('seq'))....\n\n What's the precise semantics of this? Should there be any precise\nsemantics? What's the order of execution? currval before or after\nnextval? 
It seems to me that the declarative nature of SQL means that no\norder whatsoever should be assumed...\n\n In the case of uncorrelated queries, there is the option of\nmaterializing (which I think - after looking at the code - pg does\nnot use) the subqueries' results, as there is no need to recompute them. In\nthis case materializing vs re-executing seems to cause a semantic\ndifference, because with materialization there is only one execution of nextval and with\nre-execution nextval is executed an unknown number of times.\n\n If all this was pre-evaluated, this last problem would disappear.\n\n Side-effects, side-effects, ...\n\nBest regards,\nTiago\nPS - I'm starting the thesis part of an MSc which will be about query\noptimization in pg. Here the thesis part of the MSc takes around one\nyear, so at least for the next year I'll try to work hard on pg.\n\n\n\n\n", "msg_date": "Sun, 20 Aug 2000 18:26:03 +0100 (WEST)", "msg_from": "=?iso-8859-1?Q?Tiago_Ant=E3o?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "At 01:25 PM 8/20/00 -0400, Tom Lane wrote:\n\n>However, even if we had a concept of \"constant within a transaction/\n>scan/whatever\", currval() would not qualify --- what if there is a\n>nextval() being invoked somewhere else in the query, possibly inside a\n>user-defined function where the optimizer has no chance of seeing it?\n>\n>In short, there is no way of optimizing currval() in the way you want\n>without risking breakage.\n\nDoes Postgres guarantee order of execution of functions? Many languages\ndon't other than to follow precedence rules, which is why functions with\nside-effects (such as nextval) can yield \"implementation defined\" or\nwhatever results. \n\nMy point is that such queries may not yield predictable results. 
Perhaps\ntoday due to left-right execution order or the like, but do you want to\nguarantee a defined order of execution in the future?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sun, 20 Aug 2000 10:48:44 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq\n\tscan, constant-->index scan" }, { "msg_contents": "=?iso-8859-1?Q?Tiago_Ant=E3o?= <[email protected]> writes:\n> PS - I'm starting the thesis part of a MSc which will be about query\n> optimization in pg. Here the thesis part of the MSc takes arround one\n> year, so at least for the next year I'll try to work hard on pg.\n\nCool! Please keep us posted on what you're doing or thinking about\ndoing, so that there's not duplicate or wasted effort.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 Aug 2000 14:03:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "Don Baccus <[email protected]> writes:\n> Does Postgres guarantee order of execution of functions?\n\nNo, and I don't recall having seen anything about it in the SQL spec\neither. If you were doing something like\n\n\tselect foo, nextval('seq') from tab where bar < currval('seq')\n\nthen there's no issue of \"order of evaluation\" per se: nextval will be\nevaluated at just those rows where the WHERE clause has already\nsucceeded. However, the results would still depend on the order in\nwhich tuples are scanned, an order which is most definitely not\nguaranteed by the spec nor by our implementation. 
(Also, in a\npipelined implementation it's conceivable that the WHERE clause would\nget evaluated for additional tuples before nextval has been evaluated\nat a matching tuple.)\n\nHowever, that just shows that some patterns of usage of the function\nwill yield unpredictable results. I don't think that translates to an\nargument that the optimizer is allowed to make semantics-altering\ntransformations...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 Aug 2000 14:18:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "Tom Lane wrote:\n> \n> Don Baccus <[email protected]> writes:\n> > Does Postgres guarantee order of execution of functions?\n> \n> No, and I don't recall having seen anything about it in the SQL spec\n> either. If you were doing something like\n> \n> select foo, nextval('seq') from tab where bar < currval('seq')\n> \n> then there's no issue of \"order of evaluation\" per se: nextval will be\n> evaluated at just those rows where the WHERE clause has already\n> succeeded. However, the results would still depend on the order in\n> which tuples are scanned, an order which is most definitely not\n> guaranteed by the spec nor by our implementation. (Also, in a\n> pipelined implementation it's conceivable that the WHERE clause would\n> get evaluated for additional tuples before nextval has been evaluated\n> at a matching tuple.)\n> \n> However, that just shows that some patterns of usage of the function\n> will yield unpredictable results. 
I don't think that translates to an\nargument that the optimizer is allowed to make semantics-altering\ntransformations...\n\nIMHO, if the semantics is undefined then altering it should be OK, no?\n\nWhat I mean is that there is no safe use of nextval and currval in the \nsame sql sentence, even if it is used automatically, as in \"DEFAULT\nNEXTVAL('S')\",\nand thus marking it as constant is as correct as not marking it, only \nmore predictable. \n\nAnd predictability is GOOD ;)\n\nI would even suggest that PG should warn about or even refuse to run\nqueries \nthat have both nextval and currval of the same sequence inside them \n(and pre-evaluate nextval) as only that case has _any_ predictability.\n\n------------\nHannu\n", "msg_date": "Mon, 21 Aug 2000 00:22:11 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "Don Baccus wrote:\n> \n> At 02:18 PM 8/20/00 -0400, Tom Lane wrote:\n> \n> >However, that just shows that some patterns of usage of the function\n> >will yield unpredictable results. I don't think that translates to an\n> >argument that the optimizer is allowed to make semantics-altering\n> >transformations...\n> \n> Very much depends on the language spec, if such usage is \"implementation\n> defined\" you can do pretty much whatever you want. Aggressive optimizers\n> in traditional compilers take advantage of this.\n> \n> In the case given, though, there's no particular reason why an application\n> can't grab \"currval()\" and then use the value returned in the subsequent\n> query.\n> \n> On the other hand, heuristics like \"if there's no nextval() in the\n> query, then currval() can be treated as a constant\" are very common in\n> the traditional compiler world, too ...\n\nAnd it seems to me that even if there are both nextval and currval we can \ngrab \"currval()\" and use the value returned. 
\n\nIt will fail if we have no preceeding nextval inside the session, but so\nwill \nany other plan that tries to evaluate currval before nextval.\n\nSo I don't see that we would be violating the spec more by marking \ncurrval as const than by not doing so.\n\nAnd we do get faster queries, even for the weird queres with undefined\nbehaviour ;)\n\n---------------\nHannu\n", "msg_date": "Mon, 21 Aug 2000 02:27:54 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seqscan,\n\tconstant-->index scan" }, { "msg_contents": "At 02:18 PM 8/20/00 -0400, Tom Lane wrote:\n\n>However, that just shows that some patterns of usage of the function\n>will yield unpredictable results. I don't think that translates to an\n>argument that the optimizer is allowed to make semantics-altering\n>transformations...\n\nVery much depends on the language spec, if such usage is \"implementation\ndefined\" you can do pretty much whatever you want. Agressive optimizers\nin traditional compilers take advantage of this.\n\nIn the case given, though, there's no particular reason why an application\ncan't grab \"currval()\" and then use the value returned in the subsequent\nquery.\n\nOn the other hand, heuristics like \"if there's no nextval() in the\nquery, then currval() can be treated as a constant\" are very common in\nthe traditional compiler world, too ...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sun, 20 Aug 2000 16:49:57 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq\n\tscan, constant-->index scan" }, { "msg_contents": "Hi!\n\nOn Sun, 20 Aug 2000, Tom Lane wrote:\n\n> Cool! 
Please keep us posted on what you're doing or thinking about\n> doing, so that there's not duplicate or wasted effort.\n\n I'm starting to look at pg code, so some comments that follow can be\ncompletely dumb or useless :-).\n One thing that might be interesting (please tell me if you think\notherwise) would be to improve pg with better statistical information, by\nusing, for example, histograms. With this, probably better estimations\ncould be done. I think I'd like to do this (I've not looked through the code\nthoroughly at this stage, so it could really be a bad idea...)...\n BTW, I'm open to suggestions, if you think there is something that would\nbe good to have in the pg optimizer, please tell me.\n\n I'd also like to comment on a matter that is on \nhttp://www.postgresql.org/docs/pgsql/doc/TODO.detail/optimizer, regarding\nvacuum and analyze being together: I think it is a good idea for them to\nbe together because if there is no vacuum then the optimizer should also model\nthe \"clusterization\" of tables; a seqscan on a highly unclustered table\ncould be very expensive. There is a good article regarding this:\nhttp://www.db2mag.com/summer00/programmer.shtml section \"use the index any\nway that works best or use it the normal way?\"\n\nBest regards,\nTiago\n\n", "msg_date": "Mon, 21 Aug 2000 13:18:38 +0100 (WEST)", "msg_from": "=?iso-8859-1?Q?Tiago_Ant=E3o?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "Hi!\n\nOn Mon, 21 Aug 2000, Hannu Krosing wrote:\n\n> And predictability is GOOD ;)\n> \n> I would even suggest that PG would warn about or even refuse to run\n> queries \n> that have both nextval and currval of the same sequence inside them \n> (and pre-evaluate nextval) as only that case has _any_ predictability.\n\n\n Isn't the problem more general than just nextval? Any user defined\nfunction with side-effects would be a problematic one... 
plus a user\ndefined function might not be constant:\nselect ... from ... where x in (select side_effects(x) ...)\n On correlated subqueries there is no guarantee of being constant.\n\n In Prolog, which is a declarative language with some similarities to\nrelational algebra the idea is: \"if you use predicates with side effects,\nthen you're on your own\".\n\nTiago\nPS - Apologies for any irrelevant comment, I'm just starting to look at pg\ncode.\n", "msg_date": "Mon, 21 Aug 2000 13:34:49 +0100 (WEST)", "msg_from": "=?iso-8859-1?Q?Tiago_Ant=E3o?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "At 01:34 PM 8/21/00 +0100, Tiago Antão wrote:\n>Hi!\n>\n>On Mon, 21 Aug 2000, Hannu Krosing wrote:\n>\n>> And predictability is GOOD ;)\n>> \n>> I would even suggest that PG would warn about or even refuse to run\n>> queries \n>> that have both nextval and curval of the same sequence inside them \n>> (and pre-evaluate nextval) as only that case has _any_ predictability.\n>\n>\n> Isn't the problem more general than just nextval?\n\nYes, it is. \n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 21 Aug 2000 07:14:38 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq\n\tscan, constant-->index scan" }, { "msg_contents": "=?iso-8859-1?Q?Tiago_Ant=E3o?= <[email protected]> writes:\n> One thing it might be interesting (please tell me if you think\n> otherwise) would be to improve pg with better statistical information, by\n> using, for example, histograms.\n\nYes, that's been on the todo list for a while.\n\n> There is a good article regarding this:\n> http://www.db2mag.com/summer00/programmer.shtml\n\nInteresting article. 
We do most of what she talks about, but we don't\nhave anything like the ClusterRatio statistic. We need it --- that was\njust being discussed a few days ago in another thread. Do you have any\nreference on exactly how DB2 defines that stat?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 Aug 2000 10:37:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "=?iso-8859-1?Q?Tiago_Ant=E3o?= <[email protected]> writes:\n> Isn't the problem more general than just nextval?\n\nYes it is, and that's why I'm not very excited about the idea of\nadding special-case logic for nextval/currval into the optimizer.\n\nIt's fairly easy to get around this problem in plpgsql, eg\n\n\tdeclare x int;\n\tbegin\n\tx := currval('seq');\n\treturn f1 from foo where seqfld = x;\n\nso I really am going to resist suggestions that the optimizer should\nmake invalid assumptions about currval by itself ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 Aug 2000 10:50:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "Tom Lane wrote:\n> \n> =?iso-8859-1?Q?Tiago_Ant=E3o?= <[email protected]> writes:\n> > Isn't the problem more general than just nextval?\n> \n> Yes it is, and that's why I'm not very excited about the idea of\n> adding special-case logic for nextval/currval into the optimizer.\n> \n> It's fairly easy to get around this problem in plpgsql,\n\nit is, once you know that psql implements volatile currval ;)\n\n> eg\n> \n> declare x int;\n> begin\n> x := currval('seq');\n> return f1 from foo where seqfld = x;\n>\n> so I really am going to resist suggestions that the optimizer should\n> make invalid assumptions about currval by itself ...\n\nWhy is assuming a constant currval any more \"invalid\" 
than not doing so ?\n\nAs the execution order of functions is undefined, can't we safely state that\nall \ncurrval's are evaluated first, before any other functions that could change \nits return value ?\n\ncurrval is not like random which changes its value without any external\nreason.\n\nAfaik, assuming it to return a constant within a single query is at least as \ncorrect as not doing so, only more predictable.\n\n----------------\nHannu\n", "msg_date": "Mon, 21 Aug 2000 18:32:24 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "Tiago Antão wrote:\n> \n> Hi!\n> \n> On Mon, 21 Aug 2000, Hannu Krosing wrote:\n> \n> > And predictability is GOOD ;)\n> >\n> > I would even suggest that PG would warn about or even refuse to run\n> > queries\n> > that have both nextval and curval of the same sequence inside them\n> > (and pre-evaluate nextval) as only that case has _any_ predictability.\n> \n> Isn't the problem more general than just nextval? Any user defined\n> function with side-effects would be a problematic one... plus a user\n> defined function might not be constant:\n> select ... from ... 
where x in (select side_effects(x) ...)\n> On correlated subqueries there is no guarantee of being constant.\n> \n> In Prolog, which is a declarative language with some similarities to\n> relational algebra the ideia is: \"if you use predicates with side effects,\n> then you're on your own\".\n\nAnd you are probably even worse off in SQL where the query plan changes as \ntables are filled up, so you can't even find out what will happen by testing.\n\nwith currval marked as constant I would at least know about what currval \nwill return.\n\n---------------\nHannu\n", "msg_date": "Mon, 21 Aug 2000 18:37:38 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "On Mon, 21 Aug 2000, Tom Lane wrote:\n\n> > One thing it might be interesting (please tell me if you think\n> > otherwise) would be to improve pg with better statistical information, by\n> > using, for example, histograms.\n> \n> Yes, that's been on the todo list for a while.\n\n If it's ok and nobody is working on that, I'll look on that subject.\n I'll start by looking at the analyze portion of vacuum. I'm thinking of\nusing arrays for the histogram (I've never used the array data type of\npostgres).\n Should I use 7.0.2 or the cvs version?\n \n\n> Interesting article. We do most of what she talks about, but we don't\n> have anything like the ClusterRatio statistic. We need it --- that was\n> just being discussed a few days ago in another thread. Do you have any\n> reference on exactly how DB2 defines that stat?\n\n\n I don't remember seeing that information specifically. From what I've\nread I can speculate:\n\n 1. They have clusterratios for both indexes and the relation itself.\n 2. They might use an index even if there is no \"order by\" if the table\nhas a low clusterratio: just to get the RIDs, then sort the RIDs and\nfetch.\n 3. 
One possible way to calculate this ratio:\n a) for tables\n SeqScan\n if tuple points to a next tuple on the same page then it's\n\"good\"\n ratio = # good tuples / # all tuples\n b) for indexes (high speculation ratio here)\n foreach pointed RID in index\n if RID is in same page of next RID in index then mark as\n\"good\"\n\n I suspect that if a tuple size is big (relative to page size) then the\ncluster ratio is always low.\n\n A tuple might also be \"good\" if it pointed to the next page.\n\nTiago\n\n", "msg_date": "Mon, 21 Aug 2000 16:48:08 +0100 (WEST)", "msg_from": "=?iso-8859-1?Q?Tiago_Ant=E3o?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Why is assuming a constant currval any more \"invalid\" than not doing so ?\n\nBecause it's wrong: it changes the behavior from what happens if the\noptimizer does not do anything special with the function.\n\nThe fact that some cases involving currval+nextval (but not all) yield\nunpredictable results is not an adequate argument for causing the\nbehavior of other cases to change. 
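Tiago's rule (a) above is straightforward to prototype outside the backend. The following sketch is purely illustrative (the function name and the page-number-list representation are assumptions, not DB2's or PostgreSQL's actual code):

```python
def table_cluster_ratio(tuple_pages):
    """Rule (a): a tuple is "good" if the next tuple in scan order
    sits on the same page; the ratio is good tuples over all pairs."""
    if len(tuple_pages) < 2:
        return 1.0
    good = sum(1 for a, b in zip(tuple_pages, tuple_pages[1:]) if a == b)
    return good / (len(tuple_pages) - 1)

# Perfectly clustered: four tuples per page, pages filled in order.
print(table_cluster_ratio([0, 0, 0, 0, 1, 1, 1, 1]))  # 6/7 ~= 0.857
# Same tuples scattered: every successor lands on a different page.
print(table_cluster_ratio([0, 1, 0, 1, 0, 1, 0, 1]))  # 0.0
```

A highly clustered table scores near 1.0; a scattered one scores near 0.0, which is exactly the case where fetching many rows through an unordered index gets expensive.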
Especially not when there's a\nperfectly good way for you to make it do what you want...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 Aug 2000 15:07:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <[email protected]> writes:\n> > Why is assuming a constant currval any more \"invalid\" than not doing so ?\n> \n> Because it's wrong: it changes the behavior from what happens if the\n> optimizer does not do anything special with the function.\n\nOptimizer already does \"something special\" regarding the function - it\ndecides the \norder of execution of rows, and when both currval and nextval are\npresent it changes \nthe end result by doing so. If only currval is present currval is\nconstant.\n\nBut the case when \"optimiser does not do anything with the function\" is\ncompletely \nunpredictable in face of optimiser changing the order of things getting\nscanned, \ncolumns getting scanned and functions getting evaluated. 
\n\nAnd I'm somewhat suspicious that we have any regression tests that are\ndependent \non left-to-right or top-to-bottom execution of functions.\n\n> The fact that some cases involving currval+nextval (but not all)\n\nCould you give me a good example of currval+nextval that has a\nSQL[92/99]-defined \nresult, or even a predictable result?\n\n> yield unpredictable results is not an adequate argument for causing the\n> behavior of other cases to change.\n\nAre not all the other cases returning \"undefined\" (by the standard)\nresults ?\n\nI mean that the fact that a seasoned pg coder has a feel for what will\nhappen \nfor some combination of nextval/currval for some combinations of indexes\nand table \nsizes does not make even his assumptions always right or future-proof.\n\n> Especially not when there's a perfectly good way for you to make it do what you want...\n\nYou mean marking it const in my personal copy of pgsql ? ;)\n\nI did\n\nupdate pg_proc set proiscachable='t' where proname = 'currval';\n\nAnd now it seems to do the right thing -\n\namphora2=# explain\namphora2-# select * from item where item_id = currval('item_id_seq');\nNOTICE: QUERY PLAN:\n\nIndex Scan using item_pkey on item (cost=0.00..2.03 rows=1 width=140)\n\n- Thanks.\n\nDo you know of any circumstances where I would get _wrong_ answers by\ndoing the above ?\nBy wrong I mean really wrong, not just different from the case where\nproiscachable='f'.\n\nCan I now trust the optimiser to always pre-evaluate the currval() or\nare there some \ncircumstances where the behaviour is still unpredictable ?\n\n\n\nPS. I would not call plpgsql or temporary tables a perfectly good way ? 
\nPlpgsql is not even installed by default (on linux at least).\n\n-------------\nHannu\n", "msg_date": "Mon, 21 Aug 2000 23:31:58 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Tom Lane wrote:\n>> The fact that some cases involving currval+nextval (but not all)\n\n> Could you give me a good example of currval+nextval that has a\n> SQL[92/99]-defined result, or even a predictable result?\n\ncurrval & nextval aren't in the SQL standard, so asking for a standard-\ndefined result is rather pointless. However, it's certainly possible to\nimagine cases where the result is predictable. For example,\n\n\tUPDATE table SET dataval = foo, seqval = nextval('seq')\n\t\tWHERE seqval = currval('seq')\n\nis predictable if the seqval column is unique. Admittedly in that case\nit wouldn't matter whether we pre-evaluated currval or not. But you'd\nhave to be very careful about what you mean by \"pre-evaluation\". For\nexample, the above could be executed many times within one interactive\nquery --- say, it could be executed inside a trigger function that's\nfired multiple times by an interactive SELECT. Then the results will\nchange depending on just when you pre-evaluate currval. That's why I'd\nrather leave it to the user to evaluate currval separately if he wants\npre-evaluation. That way the user can control what happens. If we\nhard-wire an overly-optimistic pre-evaluation policy into the optimizer\nthen that policy will be wrong for some applications.\n\n>> Especially not when there's a perfectly good way for you to make it do what you want...\n\n> You mean marking it const in my personal copy of pgsql ? 
;)\n\nNo, I meant putting a pre-evaluation into a plpgsql function, as I\nillustrated earlier in this thread.\n\n> Do you know of any circumstances where I would get _wrong_ answers by\n> doing the above ?\n\nI already told you earlier in this thread: it will fail inside sql or\nplpgsql functions, because the optimizer will freeze the value of the\nallegedly constant function sooner than you want, ie during first\nexecution of the sql/plpgsql function (assuming the input argument looks\nlike a constant, of course).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 Aug 2000 18:30:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <[email protected]> writes:\n> > Tom Lane wrote:\n> >> The fact that some cases involving currval+nextval (but not all)\n> \n> > Could you give me a good example of currval+nextval that has a\n> > SQL[92/99]-defined result, or even a predictable result?\n> \n> currval & nextval aren't in the SQL standard,\n\nAre sequences in SQL standard at all ?\n\nIf they are, how are they used ?\n\n> so asking for a standard-defined result is rather pointless.\n> However, it's certainly possible to\n> imagine cases where the result is predictable. For example,\n> \n> UPDATE table SET dataval = foo, seqval = nextval('seq')\n> WHERE seqval = currval('seq')\n> \n> is predictable if the seqval column is unique. 
\n\nAnd if no triggers/rules use nextval('seq') ...\n\nAnd it is also dependent on optimiser decisions, like order of scanning \nthe tuples - for seq being at 10 and seqval in 10,11,12,13,14\nit can either update 1 or 5 tuples depending on the order of scanning the\ntuples.\n\nWhat I'm trying to say is that using currval/nextval in the same query is\ninherently \nundefined if we assume that currval means anything else than the value of\nsequence \nat the start of query\n\n> Admittedly in that case\n> it wouldn't matter whether we pre-evaluated currval or not. But you'd\n> have to be very careful about what you mean by \"pre-evaluation\". \n\nWhat I would want is currval to always return the value of sequence at the \nstart of current transaction. \n\nIf I need anything more complex I'd use plpgsql and save the value of\nnextval()\n\nI _don't_ want to use plpgsql for the simple case.\n\n> For example, the above could be executed many times within one interactive\n> query --- say, it could be executed inside a trigger function that's\n> fired multiple times by an interactive SELECT. Then the results will\n> change depending on just when you pre-evaluate currval. That's why I'd\n> rather leave it to the user to evaluate currval separately if he wants\n> pre-evaluation. That way the user can control what happens. If we\n> hard-wire an overly-optimistic pre-evaluation policy into the optimizer\n> then that policy will be wrong for some applications.\n> \n> >> Especially not when there's a perfectly good way for you to make it do what you want...\n> \n> > You mean marking it const in my personal copy of pgsql ? 
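Hannu's 1-vs-5 example can be checked with a toy simulation (a hypothetical model: a made-up Seq class and a per-row, volatile currval; none of this is backend code):

```python
class Seq:
    """Tiny stand-in for a sequence within a single session."""
    def __init__(self, last):
        self.last = last
    def nextval(self):
        self.last += 1
        return self.last
    def currval(self):
        return self.last

def updated_rows(seqvals, scan_order, start):
    """Simulate UPDATE ... SET seqval = nextval('seq')
    WHERE seqval = currval('seq') with currval evaluated per row."""
    seq = Seq(start)
    rows = list(seqvals)
    updated = 0
    for i in scan_order:
        if rows[i] == seq.currval():
            rows[i] = seq.nextval()
            updated += 1
    return updated

rows = [10, 11, 12, 13, 14]
print(updated_rows(rows, [0, 1, 2, 3, 4], start=10))  # 5: each update chases the next row
print(updated_rows(rows, [4, 3, 2, 1, 0], start=10))  # 1: only the row equal to 10 matches
```

Scanning forward, every nextval advances currval onto the next row so all five rows match; scanning backward, only the first one does — the scan-order dependence being argued about.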
;)\n> \n> No, I meant putting a pre-evaluation into a plpgsql function, as I\n> illustrated earlier in this thread.\n\nThat implies that I have to install plpgsql and probably also need to be \nin transaction and also to use a function instead of query which is somewhat \npainful to do interactively\n\n> > Do you know of any circumstances where I would get _wrong_ answers by\n> > doing the above ?\n> \n> I already told you earlier in this thread: it will fail inside sql or\n> plpgsql functions, because the optimizer will freeze the value of the\n> allegedly constant function sooner than you want, ie during first\n> execution of the sql/plpgsql function (assuming the input argument looks\n> like a constant, of course).\n\nI want curval to freeze the value at the beginning of query ;)\n\nOther people may want it to do something else.\n\nCould we add an additional function with strictly defined behaviour of \nreturning the value of a sequence at the beginning of current query, perhaps\ncalled ccurval()\n\nWould defining an additional function and marking it cacheable do the trick or \ncan such a function also return wrong data under some circumstances.\n\n--------------------\nHannu\n", "msg_date": "Tue, 22 Aug 2000 16:57:05 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Could we add an additional function with strictly defined behaviour of \n> returning the value of a sequence at the beginning of current query, perhaps\n> called ccurval()\n\nNot unless you want the system to run around and read the current value\nof *every* sequence object at the start of *every* transaction, as\ninsurance against the possibility that some bit of code might ask for\nthe value of ccurval('foo') at some point in the transaction.\n\nThis state-saving could doubtless be optimized away to some 
extent,\nbut quite frankly I don't feel a strong need to work on it. You haven't\nyet presented any compelling reason why we should care deeply about the\nperformance of WHERE bar = currval('foo') --- how many people want to do\nthat? Even more to the point, why is this so important that we should\ncare about making it fast with absolutely no help from the user? I have\na hard time accepting an \"I won't use plpgsql\" argument. There are many\nmore pressing performance problems on my to-do list, most of them with\nno such easy workaround.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Aug 2000 00:46:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <[email protected]> writes:\n> > Could we add an additional function with strictly defined behaviour of\n> > returning the value of a sequence at the beginning of current query, perhaps\n> > called ccurval()\n> \n> Not unless you want the system to run around and read the current value\n> of *every* sequence object at the start of *every* transaction, as\n> insurance against the possibility that some bit of code might ask for\n> the value of ccurval('foo') at some point in the transaction.\n> \n> This state-saving could doubtless be optimized away to some extent,\n> but quite frankly I don't feel a strong need to work on it. You haven't\n> yet presented any compelling reason why we should care deeply about the\n> performance of WHERE bar = currval('foo') --- how many people want to do\n> that? \n\nProbably not many. 
It just happened that I had to optimise some code\nthat used it \na lot and it took me some time to figure out why it does a sequential\nscan when index \nscan would be orders of magnitude faster.\n\n> Even more to the point, why is this so important that we should\n> care about making it fast with absolutely no help from the user? \n\nBecause it would be very easy to do by marking curval as cacheable.\n\nAs I demonstrated to you earlier, using nextval and currval in the same\nquery is \ninherently unsafe and anyone doing it deserves the consequences ;)\n\nThus making curval cacheable just replaces almost completely\nnondeterministic \nbehaviour with another, more predictable and arguably more \"correct\"\nbehaviour and \nalso makes life easier for people programming in pure SQL.\n\n> I have a hard time accepting an \"I won't use plpgsql\" argument. \n\nI probably will at some point, but I'd much more like the simple case to\nbe \nfast by default.\n\nBTW, did the fmgr update mend the problem with pl functions taking only\n8/16 arguments ?\n\n> There are many more pressing performance problems on my to-do list,\n> most of them with no such easy workaround.\n\nSure. This one would just be so easy to fix ;)\n\n----------\nHannu\n", "msg_date": "Wed, 23 Aug 2000 09:03:43 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "On Mon, Aug 21, 2000 at 04:48:08PM +0100, Tiago Antão wrote:\n> On Mon, 21 Aug 2000, Tom Lane wrote:\n> \n> > > One thing it might be interesting (please tell me if you think\n> > > otherwise) would be to improve pg with better statistical information, by\n> > > using, for example, histograms.\n> > \n> > Yes, that's been on the todo list for a while.\n> \n> If it's ok and nobody is working on that, I'll look on that subject.\n> I'll start by looking at the analize portion of vacuum. I'm thinking in\n> using arrays for the histogram (I've never used the array data type of\n> postgres).\n\n
I'm thinking in\n> using arrays for the histogram (I've never used the array data type of\n> postgres).\n\nApologies if this is naive; I don't understand the details of the\noptimisation you are discussing. However, I have an optimisation of\nmy own in mind which might be related.\n\nI have in a table a 'category' column which takes a small number of\n(basically fixed) values. Here by 'small', I mean ~1000, while the\ntable itself has ~10 000 000 rows. Some categories have many, many\nmore rows than others. In particular, there's one category which hits\nover half the rows. Because of this (AIUI) postgresql assumes\nthat the query\n\nselect ... from thistable where category='something'\n\nis best served by a seqscan, even though there is an index on\ncategory. I assume this is because it calculates the 'average' number\nof rows per category, and it's too high for an index to be useful.\n\nIn fact, for lots of values of 'something' in the query above, and\nindex scan would be /much/ faster. Many categories have (obviously,\nsince there's ~1000 of them) less that 0.1% of the rows, and an index\nscan would be much faster. [I checked this with set\nenable_seqscan=off, FWIW].\n\nI don't quite know what statistics should be collected here, but\nsomething would be useful...\n\nJules\n", "msg_date": "Wed, 23 Aug 2000 13:34:19 +0100", "msg_from": "Jules Bean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "Jules Bean <[email protected]> writes:\n> I have in a table a 'category' column which takes a small number of\n> (basically fixed) values. Here by 'small', I mean ~1000, while the\n> table itself has ~10 000 000 rows. Some categories have many, many\n> more rows than others. In particular, there's one category which hits\n> over half the rows. Because of this (AIUI) postgresql assumes\n> that the query\n>\tselect ... 
from thistable where category='something'\n> is best served by a seqscan, even though there is an index on\n> category.\n\nYes, we know about that one. We have stats about the most common value\nin a column, but no information about how the less-common values are\ndistributed. We definitely need stats about several top values not just\none, because this phenomenon of a badly skewed distribution is pretty\ncommon.\n\nBTW, if your highly-popular value is actually a dummy value ('UNKNOWN'\nor something like that), a fairly effective workaround is to replace the\ndummy entries with NULL. The system does account for NULLs separately\nfrom real values, so you'd then get stats based on the most common\nnon-dummy value.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Aug 2000 10:30:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "Hi!\n\nOn Wed, 23 Aug 2000, Tom Lane wrote:\n\n> Yes, we know about that one. We have stats about the most common value\n> in a column, but no information about how the less-common values are\n> distributed. We definitely need stats about several top values not just\n> one, because this phenomenon of a badly skewed distribution is pretty\n> common.\n\n\n An end-biased histogram has stats on top values and also on the least\nfrequent values. So if there is a selection on a value that is well\nbelow average, the selectivity estimation will be more accurate. On some\nresearch papers I've read, it's referred that this is a better approach\nthan equi-width histograms (which are said to be the \"industry\" standard).\n\n I'm not sure whether to use a table or an array attribute on pg_stat for\nthe histogram, the problem is what could be expected from the size of the\nattribute (being a text). I'm very afraid of the cost of going through\nseveral tuples on a table (pg_histogram?) 
during the optimization phase.\n\n One other idea would be to only have better statistics for special\nattributes requested by the user... something like \"analyze special\ntable(column)\".\n\nBest Regards,\nTiago\n\n\n", "msg_date": "Wed, 23 Aug 2000 16:03:42 +0100 (WEST)", "msg_from": "=?iso-8859-1?Q?Tiago_Ant=E3o?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "=?iso-8859-1?Q?Tiago_Ant=E3o?= <[email protected]> writes:\n> One other idea would be to only have better statistics for special\n> attributes requested by the user... something like \"analyze special\n> table(column)\".\n\nThis might actually fall out \"for free\" from the cheapest way of\nimplementing the stats. We've talked before about scanning btree\nindexes directly to obtain data values in sorted order, which makes\nit very easy to find the most common values. If you do that, you\nget good stats for exactly those columns that the user has created\nindexes on. A tad indirect but I bet it'd be effective...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Aug 2000 23:56:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "On Wed, Aug 23, 2000 at 10:30:30AM -0400, Tom Lane wrote:\n> Jules Bean <[email protected]> writes:\n> > I have in a table a 'category' column which takes a small number of\n> > (basically fixed) values. Here by 'small', I mean ~1000, while the\n> > table itself has ~10 000 000 rows. Some categories have many, many\n> > more rows than others. In particular, there's one category which hits\n> > over half the rows. Because of this (AIUI) postgresql assumes\n> > that the query\n> >\tselect ... 
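The end-biased histogram Tiago describes can be sketched as follows (a hypothetical layout, names made up for illustration: exact counts for the k most and k least frequent values, one average for everything in between):

```python
from collections import Counter

def end_biased_histogram(values, k=2):
    """Keep exact frequencies for the k most and k least frequent
    values; summarize the middle of the distribution by its average."""
    ordered = Counter(values).most_common()  # descending frequency
    top = dict(ordered[:k])
    bottom = dict(ordered[-k:]) if len(ordered) > 2 * k else {}
    middle = ordered[k:len(ordered) - k]
    avg = sum(c for _, c in middle) / len(middle) if middle else 0.0
    return {"top": top, "bottom": bottom, "avg_middle": avg}

data = ["a"] * 50 + ["b"] * 20 + ["c"] * 10 + ["d"] * 5 + ["e"] * 2 + ["f"]
print(end_biased_histogram(data))
# {'top': {'a': 50, 'b': 20}, 'bottom': {'e': 2, 'f': 1}, 'avg_middle': 7.5}
```

A lookup first checks the exact ends; only values falling in the middle use the uniform assumption, which is where equi-width histograms lose accuracy on skewed data.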
from thistable where category='something'\n> > is best served by a seqscan, even though there is an index on\n> > category.\n> \n> Yes, we know about that one. We have stats about the most common value\n> in a column, but no information about how the less-common values are\n> distributed. We definitely need stats about several top values not just\n> one, because this phenomenon of a badly skewed distribution is pretty\n> common.\n\nISTM that that might be enough, in fact.\n\nIf you have stats telling you that the most popular value is 'xyz',\nand that it constitutes 50% of the rows (i.e. 5 000 000) then you can\nconclude that, on average, other entries constitute a mere 5 000\n000/999 ~~ 5000 entries, and it would definitely be enough.\n(That's assuming you store the number of distinct values somewhere).\n\n\n> BTW, if your highly-popular value is actually a dummy value ('UNKNOWN'\n> or something like that), a fairly effective workaround is to replace the\n> dummy entries with NULL. The system does account for NULLs separately\n> from real values, so you'd then get stats based on the most common\n> non-dummy value.\n\nI can't really do that. Even if I could, the distribution is very\nskewed -- so the next most common makes up a very high proportion of\nwhat's left. I forget the figures exactly.\n\nJules\n", "msg_date": "Thu, 24 Aug 2000 10:11:14 +0100", "msg_from": "Jules Bean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "> =?iso-8859-1?Q?Tiago_Ant=E3o?= <[email protected]> writes:\n> > One thing it might be interesting (please tell me if you think\n> > otherwise) would be to improve pg with better statistical information, by\n> > using, for example, histograms.\n> \n> Yes, that's been on the todo list for a while.\n> \n> > There is a good article regarding this:\n> > http://www.db2mag.com/summer00/programmer.shtml\n> \n> Interesting article. 
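Jules's back-of-envelope arithmetic can be written out as a tiny estimator (a hypothetical helper; the planner's real selectivity code is more involved):

```python
def estimated_rows(total_rows, n_distinct, top_fraction, is_top_value):
    """Subtract the known most-common value's share, then spread the
    remainder evenly over the remaining distinct values."""
    if is_top_value:
        return total_rows * top_fraction
    return total_rows * (1 - top_fraction) / (n_distinct - 1)

# 10M rows, ~1000 categories, most popular category holds 50% of the rows:
print(estimated_rows(10_000_000, 1000, 0.5, True))   # 5000000.0 -> seqscan wins
print(estimated_rows(10_000_000, 1000, 0.5, False))  # ~5005 -> index scan wins
```

Even this single extra statistic (the top value's fraction plus the distinct count) separates the two cases cleanly, which is Jules's point.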
We do most of what she talks about, but we don't\n> have anything like the ClusterRatio statistic. We need it --- that was\n> just being discussed a few days ago in another thread. Do you have any\n> reference on exactly how DB2 defines that stat?\n> \n\nAdded to TODO:\n\n\t* Keep statistics about clustering of table rows\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 14 Oct 2000 00:19:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "> On Mon, 21 Aug 2000, Tom Lane wrote:\n> \n> > > One thing it might be interesting (please tell me if you think\n> > > otherwise) would be to improve pg with better statistical information, by\n> > > using, for example, histograms.\n> > \n> > Yes, that's been on the todo list for a while.\n> \n> If it's ok and nobody is working on that, I'll look on that subject.\n> I'll start by looking at the analize portion of vacuum. I'm thinking in\n> using arrays for the histogram (I've never used the array data type of\n> postgres).\n> Should I use 7.0.2 or the cvs version?\n\nIf you don't like the analyze part, you can blame me. It is\nnon-optimal, but lean.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 14 Oct 2000 00:20:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "> Hi!\n> \n> On Wed, 23 Aug 2000, Tom Lane wrote:\n> \n> > Yes, we know about that one. 
We have stats about the most common value\n> > in a column, but no information about how the less-common values are\n> > distributed. We definitely need stats about several top values not just\n> > one, because this phenomenon of a badly skewed distribution is pretty\n> > common.\n> \n> \n> An end-biased histogram has stats on top values and also on the least\n> frequent values. So if a there is a selection on a value that is well\n> bellow average, the selectivity estimation will be more acurate. On some\n> research papers I've read, it's refered that this is a better approach\n> than equi-width histograms (which are said to be the \"industry\" standard).\n\nI like this. I never liked the equal-size histograms. The lookup time\nwas too slow, and used too much disk space.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 14 Oct 2000 00:25:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" }, { "msg_contents": "On Sat, 14 Oct 2000, Bruce Momjian wrote:\n\n> > On Mon, 21 Aug 2000, Tom Lane wrote:\n> > \n> > > > One thing it might be interesting (please tell me if you think\n> > > > otherwise) would be to improve pg with better statistical information, by\n> > > > using, for example, histograms.\n> > > \n> > > Yes, that's been on the todo list for a while.\n> > \n> > If it's ok and nobody is working on that, I'll look on that subject.\n> > I'll start by looking at the analize portion of vacuum. I'm thinking in\n> > using arrays for the histogram (I've never used the array data type of\n> > postgres).\n> > Should I use 7.0.2 or the cvs version?\n\nand, to answer what appears to have been missed ... 
use the CVS version\nfor anything like this *nod* *grin*\n\n\n", "msg_date": "Sat, 14 Oct 2000 01:37:28 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation deficiency: currval('seq')-->seq scan,\n\tconstant-->index scan" } ]