[ { "msg_contents": "At 21:57 4/07/00 +0200, Peter Eisentraut wrote:\n>I think we need a --with-zlib switch for the new pg_dump. Or at least a\n>--without-zlib switch, if you think that it's widespread enough.\n\nAs far as I am aware, te plan is to make 'configure' check for it...the\ncode obeys the HAVE_ZLIB #define when compiling. \n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 05 Jul 2000 11:34:34 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: zlib for pg_dump" }, { "msg_contents": "On Wed, 5 Jul 2000, Philip Warner wrote:\n\n> At 21:57 4/07/00 +0200, Peter Eisentraut wrote:\n> >I think we need a --with-zlib switch for the new pg_dump. Or at least a\n> >--without-zlib switch, if you think that it's widespread enough.\n> \n> As far as I am aware, te plan is to make 'configure' check for it...the\n> code obeys the HAVE_ZLIB #define when compiling. \n\nthe other way ... autoconf defines it as 'HAVE_LIBZ' ... and configure now\ntests for it ...\n\n\n", "msg_date": "Wed, 5 Jul 2000 14:42:21 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: zlib for pg_dump" }, { "msg_contents": "At 14:42 5/07/00 -0300, The Hermit Hacker wrote:\n>\n>the other way ... autoconf defines it as 'HAVE_LIBZ' ... and configure now\n>tests for it ...\n>\n\nFixed in my copy; I'll send a patch as soon as I hear from Jan regarding\nthe problems he reported.\n\nLet me know if you want a patch sooner.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. 
|----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 06 Jul 2000 16:59:20 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: zlib for pg_dump" } ]
[ { "msg_contents": "I've done a little work on enabling session-specific default behavior\nfor transaction isolation level. I'm thinking about how to extend this\nto default \"database-specific\" behaviors which persist between sessions\n(such as \"DateStyle\", character encoding, etc), perhaps using the ALTER\nSCHEMA command from SQL99. btw, this capability enables, overlaps or\nimpacts upcoming work to support general character encodings (which may\nalso be impacted by the current work on TOAST; we'll see).\n\nAnyway, if these kinds of things can be set via SQL (they are required\nto in SQL99) then istm that they could/should be stored in tables just\nlike everything else. My initial thought was to add columns to\npg_database for each setting, but this is not very extensible. Another\npossibility might be to add routines somewhere as \"trigger-able events\"\nwhich happen when, say, a row is selected from pg_database. I'll guess\nthat this in particular won't work, since pg_database is not opened from\nwithin a fully functioning Postgres backend.\n\nAny thoughts on how to go about this? I assume that Peter's recent\n\"options\" work does not apply, since it is not directly accessible\nthough SQL. But I haven't looked to verify this assumption.\n\n - Thomas\n", "msg_date": "Wed, 05 Jul 2000 06:13:54 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Per-database/schema settings" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> My initial thought was to add columns to\n> pg_database for each setting, but this is not very extensible. Another\n> possibility might be to add routines somewhere as \"trigger-able events\"\n> which happen when, say, a row is selected from pg_database. I'll guess\n> that this in particular won't work, since pg_database is not opened from\n> within a fully functioning Postgres backend.\n\nIIRC, pg_database is rechecked as soon as a new backend is up and\nrunning. 
So it'd be easy enough to extract additional values from\nthe pg_database row at that instant. A trigger wouldn't help though,\nit'd have to be hardwired code. (Even if we tweaked the backend to\nfire a trigger at that point, where would the trigger get the data\nfrom? You'd still need to add columns to pg_database.)\n\nI agree that adding columns to pg_database is a painful way of creating\nper-database options, but I'm not sure what would be better.\n\n> Any thoughts on how to go about this? I assume that Peter's recent\n> \"options\" work does not apply, since it is not directly accessible\n> though SQL. But I haven't looked to verify this assumption.\n\nAFAIR his options stuff does not support per-database settings.\nBut perhaps it could be made to do so ... Peter, any thoughts?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Jul 2000 02:42:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Per-database/schema settings " }, { "msg_contents": "On Wed, 5 Jul 2000, Thomas Lockhart wrote:\n\n> I've done a little work on enabling session-specific default behavior\n> for transaction isolation level.\n\nIs this the START TRANSACTION [ ISOLATION LEVEL ] command?\n\n> I'm thinking about how to extend this to default \"database-specific\"\n> behaviors which persist between sessions (such as \"DateStyle\",\n> character encoding, etc), perhaps using the ALTER SCHEMA command from\n> SQL99. btw,\n\nWhat about something like ALTER DATABASE ... SET DEFAULT foo TO bar; The\nALTER SCHEMA command should be reserved to schema alerations.\n\n> My initial thought was to add columns to pg_database for each setting,\n> but this is not very extensible.\n\nIf it's an attribute of a database, then it should be a pg_database\ncolumn. Notice how the language I chose virtually forces you to do\nthat. 
:) And what's so non-extensible about that?\n\n> I assume that Peter's recent \"options\" work does not apply, since it\n> is not directly accessible though SQL.\n\nThe SHOW command continues to work like it always has. But most, if not\nall, of these options are not really per-database material. They are\neither debugging or developing aids that you turn on temporarily, or\nchoices that the site administrator does only once. These options really\ndon't have a lot to do with the SQL environment.\n\n(Btw: http://www.postgresql.org/docs/admin/runtime-config.htm)\n\nWhat kind of settings are you talking about, besides default character set\nand date style? I would assume that the default charset is the one to be\nused by the NCHAR type? About datestyle, I had thought that this setting\nshould really be deprecated, with the arrival of the to_char() family. If\nyou like a default datestyle, then you can define a view based on\nto_char().\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 5 Jul 2000 08:29:07 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Per-database/schema settings" }, { "msg_contents": "> > I've done a little work on enabling session-specific default behavior\n> > for transaction isolation level.\n> Is this the START TRANSACTION [ ISOLATION LEVEL ] command?\n\nafaict, SQL99 calls it\n\nSET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL ...;\n\n> > I'm thinking about how to extend this to default \"database-specific\"\n> > behaviors which persist between sessions (such as \"DateStyle\",\n> > character encoding, etc), perhaps using the ALTER SCHEMA command from\n> > SQL99. btw,\n> What about something like ALTER DATABASE ... SET DEFAULT foo TO bar; The\n> ALTER SCHEMA command should be reserved to schema alerations.\n\nafaik character collation is a \"schema\" property, but it has been quite\na while since I've looked. 
If my recollection is true, then there is a\npretty big grey area between \"database\" and \"schema\" imho.\n\n> > My initial thought was to add columns to pg_database for each setting,\n> > but this is not very extensible.\n> If it's an attribute of a database, then it should be a pg_database\n> column. Notice how the language I chose virtually forces you to do\n> that. :) And what's so non-extensible about that?\n\nAny time a new attribute needs to be set, a new column needs to be\nadded, requiring a dump/initdb/reload. It would be pretty neat to be\nable to execute arbitrary code during database startup, which could\nset/unset global variables and ?? I guess that was what I was asking\nabout.\n\n> What kind of settings are you talking about, besides default character set\n> and date style? I would assume that the default charset is the one to be\n> used by the NCHAR type?\n\nTransaction isolation level is one. And presumably several other things\nwe haven't yet thought through.\n\n> About datestyle, I had thought that this setting\n> should really be deprecated, with the arrival of the to_char() family. If\n> you like a default datestyle, then you can define a view based on\n> to_char().\n\nEven if we agree that various *output* date styles are not useful, the\nDateStyle setting also affects the interpretation of input date/time\n(e.g. month/day or day/month conventions). istm that a lot of apps do\nneed some flexibility in date/time inputs.\n\n - Thomas\n", "msg_date": "Wed, 05 Jul 2000 14:21:40 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Per-database/schema settings" }, { "msg_contents": ">>>>> \"eisentrp\" == eisentrp <[email protected]> writes:\n\n eisentrp> About datestyle, I had thought that this setting should\n eisentrp> really be deprecated, with the arrival of the to_char()\n eisentrp> family. 
If you like a default datestyle, then you can\n eisentrp> define a view based on to_char().\n\nAs far as datestyle goes, I like Oracle's\n\n alter session set nls_date_format='MON DD, YYYY HH24:MI:SS';\n\nThis doesn't really deal with default for a database, but I've longed\nfr this in Postgres....\n\nroland\n-- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. Roberts, PhD Unix Software Solutions\[email protected] 76-15 113th Street, Apt 3B\[email protected] Forest Hills, NY 11375\n", "msg_date": "05 Jul 2000 15:31:22 -0400", "msg_from": "Roland Roberts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Per-database/schema settings" }, { "msg_contents": "Thomas Lockhart writes:\n\n> afaict, SQL99 calls it\n> \n> SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL ...;\n\nI see. Then we'd need a per-transaction isolation level that goes away\nafter the transaction, and a default transaction isolation level that each\nnew transaction starts out with.\n\n> It would be pretty neat to be able to execute arbitrary code during\n> database startup, which could set/unset global variables and ??\n\nDidn't you say that the default characters set is a schema-property? I\ndon't think there's anything like a schema \"startup\".\n\nAnyway, I could think of a way to hook this into my work (or vice versa).\nWe already have the SET command. All you need to do is to execute a number\nof SET commands for each new connection. It actually does that already,\nonly that the input comes from the PGOPTIONS variable from the client\nside. We'd just have to add one pass that reads the database default\nsettings from somewhere (to be determined, I guess :) and sets these. In\nfact, this should be pretty easy.\n\nI could see a command\n\tSET DATABASE DEFAULT \"name\" TO value;\nor, following your lead,\n\tALTER DATABASE(SCHEMA?) 
SET DEFAULT \"name\" TO value;\n\nI guess we could make a global table pg_databasedefaults:\n\tdboid oid,\n\toptname text,\n\toptval text\nand when you start up database \"dboid\", then you loop through all matching\nrows and effectively execute SET optname = optval for each.\n\n\n> Even if we agree that various *output* date styles are not useful, the\n> DateStyle setting also affects the interpretation of input date/time\n> (e.g. month/day or day/month conventions). istm that a lot of apps do\n> need some flexibility in date/time inputs.\n\nI've been meaning to ask about that, might as well do it now. As you say,\nthe DateStyle setting is overloaded for two separate things: default\noutput style (ISO, \"SQL\", Postgres, German), and month/day vs day/month\nsetting. This has always confused me (and presumably not only me) and it\nis quite tricky to integrate this into my options work -- there is no\nfamily of settings for \"takes a string input and sets two integer\nvariables\".\n\nMaybe we could split this up:\n\n* datetime_output_style: one of ISO, SQL, Postgres, German\n\n(In fact, if we wanted, we could also make this an arbitrary to_char()\nformat string. If it's empty we default to ISO, if it's set then we pass\nit right on to to_char. I guess then we'd need separate parameters for\ndate and time though.)\n\n* day_before_month: true/false\n\nWe can provide backward compatibility by still accepting SET DateStyle,\nbut internally parsing it apart into these two.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 6 Jul 2000 02:12:41 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Per-database/schema settings" }, { "msg_contents": "> I've been meaning to ask about that, might as well do it now. 
As you say,\n> the DateStyle setting is overloaded for two separate things: default\n> output style (ISO, \"SQL\", Postgres, German), and month/day vs day/month\n> setting. This has always confused me (and presumably not only me) and it\n> is quite tricky to integrate this into my options work -- there is no\n> family of settings for \"takes a string input and sets two integer\n> variables\".\n\nPerhaps it is confusing because it tries to cover (regional) cases we\naren't all familiar with?\n\n> Maybe we could split this up:\n> * datetime_output_style: one of ISO, SQL, Postgres, German\n> (In fact, if we wanted, we could also make this an arbitrary to_char()\n> format string. If it's empty we default to ISO, if it's set then we pass\n> it right on to to_char. I guess then we'd need separate parameters for\n> date and time though.)\n\nI've been pretty resistant to having a fully-tailorable native output\ncapability, since it would be possible to generate date strings which\ncan not be correctly interpreted on input. It might interact badly with\npg_dump, for example. It might be a bit slower than the current\nhard-coded technique.\n\n> * day_before_month: true/false\n> We can provide backward compatibility by still accepting SET DateStyle,\n> but internally parsing it apart into these two.\n\n\"German\" doesn't have much meaning with a flipped month/day field. So\nthese aren't entirely decoupled. We could vote quickly to get rid of it\nand hope that those Germans aren't paying attention ;)\n\nI guess that my letting go of what *I* think is important could be or is\nor will be necessary for continued progress on the date/time handling.\nBut stability and predictability is pretty important. Eventually,\nperhaps we should get rid of all of the options, insist on ISO-8601 as\nthe input and output format, and insist that people use to_char() if\nthey want anything more. 
But that seems a bit extreme.\n\n - Thomas\n", "msg_date": "Thu, 06 Jul 2000 03:13:41 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Per-database/schema settings" }, { "msg_contents": "Thomas Lockhart writes:\n\n> I've been pretty resistant to having a fully-tailorable native output\n> capability, since it would be possible to generate date strings which\n> can not be correctly interpreted on input.\n\nGood point. Let them use to_char().\n\n> \"German\" doesn't have much meaning with a flipped month/day field.\n\nThe only DateStyle \"major mode\" that cares about the month/day is\n\"SQL\". German, Postgres, and ISO are not affected, AFAICT.\n\n> We could vote quickly to get rid of it and hope that those Germans\n> aren't paying attention ;)\n\nWell, I'm German, but ... :-)\n\nNo, I'm not proposing to get rid of this, at least not right now. All I'm\nsaying is that there should perhaps be two separate settings:\n\n1. Major output mode\n\n2. Should month-before-day or day-before-month be *preferred* where\n*applicable*? (It's not \"applicable\" in any output mode but SQL, and it's\nnot \"applicable\" on input like '99/12/31'. -- I always thought of it as\n\"tie-breaker\".)\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 6 Jul 2000 23:36:48 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "DateStyle (was Re: Per-database/schema settings)" }, { "msg_contents": "> 1. Major output mode\n> 2. Should month-before-day or day-before-month be *preferred* where\n> *applicable*? (It's not \"applicable\" in any output mode but SQL, and it's\n> not \"applicable\" on input like '99/12/31'. -- I always thought of it as\n> \"tie-breaker\".)\n\nYes, for input it is a \"tie-breaker\"; that's how I've thought of it too.\n\nSure, let's try to break this into two orthogonal attributes. 
Can you\nthink of a good name or keyword? \"European\" and \"US\" sure isn't saying\nthings as clearly as \"day-before-month\" or \"month-before-day\", but they\nare easier to type and make a better shorthand.\n\nLet's find a good syntax for this setting before we go too far...\n\n - Thomas\n", "msg_date": "Fri, 07 Jul 2000 05:58:53 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DateStyle (was Re: Per-database/schema settings)" }, { "msg_contents": "\nOn Thu, 6 Jul 2000, Peter Eisentraut wrote:\n> \n> > Even if we agree that various *output* date styles are not useful, the\n> > DateStyle setting also affects the interpretation of input date/time\n> > (e.g. month/day or day/month conventions). istm that a lot of apps do\n> > need some flexibility in date/time inputs.\n> \n> I've been meaning to ask about that, might as well do it now. As you say,\n> the DateStyle setting is overloaded for two separate things: default\n> output style (ISO, \"SQL\", Postgres, German), and month/day vs day/month\n> setting. This has always confused me (and presumably not only me) and it\n> is quite tricky to integrate this into my options work -- there is no\n> family of settings for \"takes a string input and sets two integer\n> variables\".\n> \n> Maybe we could split this up:\n> \n> * datetime_output_style: one of ISO, SQL, Postgres, German\n> \n> (In fact, if we wanted, we could also make this an arbitrary to_char()\n> format string. If it's empty we default to ISO, if it's set then we pass\n> it right on to to_char. I guess then we'd need separate parameters for\n> date and time though.)\n\n I not sure, but if I good remember for example Oracle has something like\nSET DATESTYLE TO 'arbitrary style', where style is defined via to_char\ntemplates. For example: \n\n\tSET DATESTYLE TO 'YYYY Month-DD HH:MI:SS'\n\n and all date/time (like now()) outputs will formatted via this setting in \nto_char \"engine\". 
IMHO create support for this is possible. I will think \nabout it for 7.2\n\n This solution can forever stop all discussion about styles that PG \nmust/can support.\n\n\t\t\t\t\tKarel\n\n", "msg_date": "Fri, 7 Jul 2000 15:09:30 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Per-database/schema settings" }, { "msg_contents": "\n\nOn Thu, 6 Jul 2000, Peter Eisentraut wrote:\n\n> Thomas Lockhart writes:\n> \n> > I've been pretty resistant to having a fully-tailorable native output\n> > capability, since it would be possible to generate date strings which\n> > can not be correctly interpreted on input.\n> \n> Good point. Let them use to_char().\n> \n\n Small note, to_char() has good friend to_timestamp() and this second\nroutine must allow interpret all output from to_char() to PG internal\ndatetype.\n\ntest=# select to_timestamp( to_char( now(), '\"perverse date/time: \"Y,YYY \nFMMonth-DD HH24:MI:SS'), '\"perverse date/time: \"Y,YYY FMMonth-DD\nHH24:MI:SS') = now();\n ?column?\n----------\n t\n(1 row) \n\n\t\t\t\t\t\tKarel\n\n", "msg_date": "Fri, 7 Jul 2000 15:33:50 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DateStyle (was Re: Per-database/schema settings)" }, { "msg_contents": "> This solution can forever stop all discussion about styles that PG\n> must/can support.\n\nThis is one thing I'm *not* certain about. The problems with a fully\ngeneral, templated formatting function for the backend include\n\n1) It is easy to make a date/time template which *cannot* be used to\nread data back in. So, for example, pg_dump could be fundamentally\nbroken just be this setting. Currently, input and output are always\ncompatible (more or less ;)\n\n2) There may be a performance hit to *always* use a fully general\ntemplate for formatting.\n\n3) If the template is used for output, it should probably be used for\ninput (to minimize the possibility of (1)). 
But then we would be able to\naccept fewer date/time variations than we do now.\n\n - Thomas\n", "msg_date": "Fri, 07 Jul 2000 13:40:20 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Per-database/schema settings" }, { "msg_contents": "Karel Zak wrote:\n> \n> Small note, to_char() has good friend to_timestamp() and this second\n> routine must allow interpret all output from to_char() to PG internal\n> datetype.\n\nYes. The problem is if someone sets the template to\n\n'bad date/time: \"HH24\"'\n\nAt that point, there are not enough fields defined to be able to recover\nthe original date/time value.\n\n - Thomas\n", "msg_date": "Fri, 07 Jul 2000 14:05:21 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DateStyle (was Re: Per-database/schema settings)" }, { "msg_contents": "\nOn Fri, 7 Jul 2000, Thomas Lockhart wrote:\n\n> > This solution can forever stop all discussion about styles that PG\n> > must/can support.\n> \n> This is one thing I'm *not* certain about. The problems with a fully\n> general, templated formatting function for the backend include\n> \n> 1) It is easy to make a date/time template which *cannot* be used to\n> read data back in. So, for example, pg_dump could be fundamentally\n> broken just be this setting. Currently, input and output are always\n> compatible (more or less ;)\n\n full support for in/out is expect in this idea, and we can add check\nthat conterol if defined template is right for timestamp interpretation.\n\n> 2) There may be a performance hit to *always* use a fully general\n> template for formatting.\n\n Not sure. The to_char/timestamp is fast and parsed template is cached, \nIMHO is not big (speed) differention between to_char/timestamp and standard \ndate/time formatting.\n\n> 3) If the template is used for output, it should probably be used for\n> input (to minimize the possibility of (1)). 
But then we would be able to\n> accept fewer date/time variations than we do now.\n\n With this setting is all in user's hands...\n\nI don't know how much problematic it is, but the oracle has this feature\n(NSL_DATE_FORMAT in ALTER SESSION, etc)\n\nBut I not say that we must implement this, it is still open thema :-)\n\n\t\t\t\t\tKarel\n\n", "msg_date": "Mon, 10 Jul 2000 09:11:33 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Per-database/schema settings" }, { "msg_contents": "On Mon, 10 Jul 2000, Karel Zak wrote:\n> \n> I don't know how much problematic it is, but the oracle has this feature\n> (NSL_DATE_FORMAT in ALTER SESSION, etc)\n\n Note, I see Oracle's docs, and NSL_DATE_FORMAT is probably used as\ndefault format template for to_char/to_date, and it allowe\n\n SELECT TO_CHAR(sysdate) FROM DUAL;\n\n(is possible in Oracle formatting/parsing datetime without to_char(), like\npostgresql timestamp_in/out?)\n\n Hmm, PG and Oracle date/time design is probably more different. I was not\ntotal right in the previous letter.\n \n\t\t\t\t\t\tKarel\n\n", "msg_date": "Mon, 10 Jul 2000 09:40:13 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Per-database/schema settings" } ]
[ { "msg_contents": "I've been thinking about what changes are necessary to the libpq\ninterface to support returning variable type tuples. This was\ndiscussed a number of months back but an exact interface wasn't nailed\ndown. \n\nLet me then put forward the following suggestion open for comment. The\nsuggestion is very similar to the original postgres solution to this\nproblem. What I have added is some consideration of how a streaming\ninterface should work, and hopefully I will incorporate that\nenhancement while I'm at it.\n\nInto libpq will be (re)introduced the concept of a group. Tuples which\nare returned will be from a finite number of different layouts.\n\nThus there will be an API PQnfieldsGroup(PGresult, group_num). And\nsimilar for PQftypeGroup etc. There will be a PQgroup(PGresult,\ntuple_num) which will tell you which group any given tuple belongs to.\n\nTo support streaming of results a new function PQflush(PGresult,\ntuple_num) would\nbe introduced. It discards previous results that are cached. PQexec\nwould be changed so that it doesn't absorb the full result set\nstraight away like it does now (*). Instead it would only absorb\nresults on a need to basis when calling say PQgetValue.\n\nCurrently you might read results like this...\n\nPGresult *res = PQexec(\"select * from foo\");\nfor (int i = 0; i < PQntuples(res); i++) {\n printf(\"%s\\n\", PQgetValue(res, i, 0);\n}\n\nIt has the disadvantage that all the results are kept in memory at\nonce. This code would in the future be modified to be...\n\nPGresult *res = PQexec(\"select * from foo\");\nfor (int i = 0; i < PQntuples(res); i++) {\n printf(\"%s\\n\", PQgetValue(res, i, 0); \n PQflush(res) // NEW NEW\n}\n\nNow PQexec doesn't absorb all the results at once. PQgetValue will\nread them on a need-to basis. 
PQflush will discard each result through\nthe loop.\n\nI could also write...\n\nPGresult *res = PQexec(\"select * from foo\");\nfor (int i = 0; i < PQntuples(res); i++) {\n printf(\"%s\\n\", PQgetValue(res, i, 0);\n if (i % 20) {\n PQflush(res, -1)\n }\n}\n\nIn this case the results are cached in chunks of 20. Or I could write...\n\nPGresult *res = PQexec(\"select * from foo\");\nfor (int i = 0; i < PQntuples(res); i++) {\n printf(\"%s\\n\", PQgetValue(res, i, 0);\n PQflush(res, i-20)\n}\n\nIn this case the last 20 tuples are kept in memory in any one time as\na sliding window. If I try to access something out of range of the\ncurrent cache I get a NULL result.\n\nBack to the multiple tuple return types issue. psql code may do\nsomething like...\n\nint currentGroup = -1, group;\nPGresult *res = PQexec(someQuery);\nfor (int i = 0; i < PQntuples(res); i++) {\n group = PQgroup(res, i);\n if (group != currentGroup) \n printHeaders(res, group);\n }\n currentGroup = group;\n for (j = 0; j < PQnfieldsGroup(res, group); j++) {\n printf(\"%s |\", PQgetValue(res, i, j);\n }\n printf(\"\\n\");\n PQflush(res)\n}\n\nprintHeaders(PGresult *res, int group) {\n for (j = 0; j < PQnfieldsGroup(res, group); j++) {\n printf(\"%s |\", PQfnameGroup(res, group));\n }\n printf(\"\\n\");\n}\n\nThis would print different result types with appropriate headers...\ncreate table a (aa text);\ncreate table b under a (bb text);\nselect ** from a;\naa |\n----\nfoo\njar\n\naa | bb\n-------\nbar|baz\nboo|bon\n\n(*) Assuming that this doesn't unduly affect current behaviour. 
I\ncan't see that it would, but if it would another API would be needed\nPQexecStream.\n", "msg_date": "Wed, 05 Jul 2000 17:20:47 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "Proposed new libpq API" }, { "msg_contents": "Follow up: Where it says...\n\nPGresult *res = PQexec(\"select * from foo\");\nfor (int i = 0; i < PQntuples(res); i++) {\n printf(\"%s\\n\", PQgetValue(res, i, 0); \n PQflush(res) // NEW NEW\n}\n\nIt should say...\n\nPGresult *res = PQexec(\"select * from foo\");\nfor (int i = 0; i < PQntuples(res); i++) {\n printf(\"%s\\n\", PQgetValue(res, i, 0); \n PQflush(res, -1) // NEW NEW\n}\n\nThe -1 argument signifying \"flush everything\". A specific number\nsignifying \"flush everything below this threshold\", where the threshold\nis a tuple number.\n", "msg_date": "Wed, 05 Jul 2000 17:31:18 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposed new libpq API" }, { "msg_contents": "Chris Bitmead wrote:\n>\n> I've been thinking about what changes are necessary to the libpq\n> interface to support returning variable type tuples. This was\n> discussed a number of months back but an exact interface wasn't nailed\n> down.\n\nLet me propose an additional possible solution for the most common case \nneeding to return multiple types of tuples, the case of select ** --\njust \nhave a tupletype for each tuple, possibly as an implies field and return\nNULL \nfor missing fields (returning nulls is cheap - each only occupies one\nbit)\nso that\n\nSELECT user\nUNION \nSELECT nextval('myseq');\n\nwould return a result with the following structure\n\ntype() | user (text) | nextval(int)\n-----------------------------------\n t1 | postgres | NULL\n t2 | NULL | 1\n\nsuch way of returning tuples could possibly make also non-OO folks happy \nas the result will still be table-shaped ;)\n \n> Let me then put forward the following suggestion open for comment. 
The\n> suggestion is very similar to the original postgres solution to this\n> problem. What I have added is some consideration of how a streaming\n> interface should work, and hopefully I will incorporate that\n> enhancement while I'm at it.\n> \n> Into libpq will be (re)introduced the concept of a group. Tuples which\n> are returned will be from a finite number of different layouts.\n>\n> Thus there will be an API PQnfieldsGroup(PGresult, group_num). And\n> similar for PQftypeGroup etc. There will be a PQgroup(PGresult,\n> tuple_num) which will tell you which group any given tuple belongs to.\n\nSeems good ;).\n\nWill the group carry only structurte or will it have some \"higher\"\nmeaning -\ni.e. will rows selected form two different tables with the same\nstructure \nbe in the same group ?\n \n----------\nHannu\n", "msg_date": "Wed, 05 Jul 2000 11:05:18 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposed new libpq API" }, { "msg_contents": "Hannu Krosing wrote:\n\n> Let me propose an additional possible solution for the most common case\n> needing to return multiple types of tuples, the case of select ** --\n> just\n> have a tupletype for each tuple, possibly as an implies field and return\n> NULL\n> for missing fields (returning nulls is cheap - each only occupies one\n> bit)\n> so that\n> \n> SELECT user\n> UNION\n> SELECT nextval('myseq');\n> \n> would return a result with the following structure\n> \n> type() | user (text) | nextval(int)\n> -----------------------------------\n> t1 | postgres | NULL\n> t2 | NULL | 1\n> \n> such way of returning tuples could possibly make also non-OO folks happy\n> as the result will still be table-shaped ;)\n\nWhat is the essence of your suggestion? 
The libpq interface, the\nprotocol or the formatting for psql?\n\nThe main problem I can see with the way your idea is going, is that if a\nclass has a few dozen subclasses, each with a few dozen fields, you\ncould end up with a couple of thousand resulting columns.\n\nThat and it doesn't seem very OO.\n\n> Will the group carry only structurte or will it have some \"higher\"\n> meaning -\n> i.e. will rows selected form two different tables with the same\n> structure\n> be in the same group ?\n\nThat is the one thing in my mind I'm not certain of. At the moment I\nwill say that aspect is undefined. Hopefully a clearer answer will\nemerge once it is actually working.\n\n\n-- \nChris Bitmead\nmailto:[email protected]\nhttp://www.techphoto.org - Photography News, Stuff that Matters\n", "msg_date": "Wed, 05 Jul 2000 19:59:13 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposed new libpq API" }, { "msg_contents": "Chris Bitmead wrote:\n> \n> Hannu Krosing wrote:\n> \n> > Let me propose an additional possible solution for the most common case\n> > needing to return multiple types of tuples, the case of select ** --\n> > just\n> > have a tupletype for each tuple, possibly as an implies field and return\n> > NULL\n> > for missing fields (returning nulls is cheap - each only occupies one\n> > bit)\n> > so that\n> >\n> > SELECT user\n> > UNION\n> > SELECT nextval('myseq');\n> >\n> > would return a result with the following structure\n> >\n> > type() | user (text) | nextval(int)\n> > -----------------------------------\n> > t1 | postgres | NULL\n> > t2 | NULL | 1\n> >\n> > such way of returning tuples could possibly make also non-OO folks happy\n> > as the result will still be table-shaped ;)\n> \n> What is the essence of your suggestion? 
The libpq interface, the\n> protocol or the formatting for psql?\n\nI was hoping it to cover all of them, but it may not be that simple on \ncloser ispection ;(\n\n> The main problem I can see with the way your idea is going, is that if a\n> class has a few dozen subclasses, each with a few dozen fields, you\n> could end up with a couple of thousand resulting columns.\n\nYes. In fact I will end up with that number anyway, only that each tuple \ndoes not have all of them in case of returning multiple types of tuples.\n\nI still insist that the _overhead_ from returning such colums is quite \nsmall as each null is only one _bit_\n\n> That and it doesn't seem very OO.\n\nno, it does not, unless we pretend that what \"SELECT **\" returns is all \nsuperobjects which in fact do have all the NULL fields, only they have \nvalue NULL :)\n\notoh, doing things that way could \"hide\" the OO-ness from tools that \ndon't like it.\n\n-------\n\nBTW, how does one subscribe to [email protected] list ?\nI tried, but my response mail said something like \n\"processing your subscription successful, you are _NOT_ subscribed to\nlist\" \n\nI got the same result with other new lists ;(\n\n----------\nHannu\n", "msg_date": "Wed, 05 Jul 2000 15:42:09 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposed new libpq API" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> Okay, first thing off the top of my head ... 
how does this deal with\n> backward compatibility, or have we just blown all old apps out to fhte\n> water?\n\nAs I see it it is compatible until you _need_ the new features, like \nSELECT ** or UNION of selects with different structure.\n\nI have also posted a different/additional proposal that hopefully hides\nthis from old apps completely.\n\n---------\nHannu\n", "msg_date": "Wed, 05 Jul 2000 15:46:54 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposed new libpq API" }, { "msg_contents": "\nOkay, first thing off the top of my head ... how does this deal with\nbackward compatibility, or have we just blown all old apps out to fhte\nwater?\n\n\nOn Wed, 5 Jul 2000, Chris Bitmead wrote:\n\n> I've been thinking about what changes are necessary to the libpq\n> interface to support returning variable type tuples. This was\n> discussed a number of months back but an exact interface wasn't nailed\n> down. \n> \n> Let me then put forward the following suggestion open for comment. The\n> suggestion is very similar to the original postgres solution to this\n> problem. What I have added is some consideration of how a streaming\n> interface should work, and hopefully I will incorporate that\n> enhancement while I'm at it.\n> \n> Into libpq will be (re)introduced the concept of a group. Tuples which\n> are returned will be from a finite number of different layouts.\n> \n> Thus there will be an API PQnfieldsGroup(PGresult, group_num). And\n> similar for PQftypeGroup etc. There will be a PQgroup(PGresult,\n> tuple_num) which will tell you which group any given tuple belongs to.\n> \n> To support streaming of results a new function PQflush(PGresult,\n> tuple_num) would\n> be introduced. It discards previous results that are cached. PQexec\n> would be changed so that it doesn't absorb the full result set\n> straight away like it does now (*). 
Instead it would only absorb\n> results on a need to basis when calling say PQgetValue.\n> \n> Currently you might read results like this...\n> \n> PGresult *res = PQexec(\"select * from foo\");\n> for (int i = 0; i < PQntuples(res); i++) {\n> printf(\"%s\\n\", PQgetValue(res, i, 0);\n> }\n> \n> It has the disadvantage that all the results are kept in memory at\n> once. This code would in the future be modified to be...\n> \n> PGresult *res = PQexec(\"select * from foo\");\n> for (int i = 0; i < PQntuples(res); i++) {\n> printf(\"%s\\n\", PQgetValue(res, i, 0); \n> PQflush(res) // NEW NEW\n> }\n> \n> Now PQexec doesn't absorb all the results at once. PQgetValue will\n> read them on a need-to basis. PQflush will discard each result through\n> the loop.\n> \n> I could also write...\n> \n> PGresult *res = PQexec(\"select * from foo\");\n> for (int i = 0; i < PQntuples(res); i++) {\n> printf(\"%s\\n\", PQgetValue(res, i, 0);\n> if (i % 20) {\n> PQflush(res, -1)\n> }\n> }\n> \n> In this case the results are cached in chunks of 20. Or I could write...\n> \n> PGresult *res = PQexec(\"select * from foo\");\n> for (int i = 0; i < PQntuples(res); i++) {\n> printf(\"%s\\n\", PQgetValue(res, i, 0);\n> PQflush(res, i-20)\n> }\n> \n> In this case the last 20 tuples are kept in memory in any one time as\n> a sliding window. If I try to access something out of range of the\n> current cache I get a NULL result.\n> \n> Back to the multiple tuple return types issue. 
psql code may do\n> something like...\n> \n> int currentGroup = -1, group;\n> PGresult *res = PQexec(someQuery);\n> for (int i = 0; i < PQntuples(res); i++) {\n> group = PQgroup(res, i);\n> if (group != currentGroup) \n> printHeaders(res, group);\n> }\n> currentGroup = group;\n> for (j = 0; j < PQnfieldsGroup(res, group); j++) {\n> printf(\"%s |\", PQgetValue(res, i, j);\n> }\n> printf(\"\\n\");\n> PQflush(res)\n> }\n> \n> printHeaders(PGresult *res, int group) {\n> for (j = 0; j < PQnfieldsGroup(res, group); j++) {\n> printf(\"%s |\", PQfnameGroup(res, group));\n> }\n> printf(\"\\n\");\n> }\n> \n> This would print different result types with appropriate headers...\n> create table a (aa text);\n> create table b under a (bb text);\n> select ** from a;\n> aa |\n> ----\n> foo\n> jar\n> \n> aa | bb\n> -------\n> bar|baz\n> boo|bon\n> \n> (*) Assuming that this doesn't unduly affect current behaviour. I\n> can't see that it would, but if it would another API would be needed\n> PQexecStream.\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 5 Jul 2000 10:35:06 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposed new libpq API" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> Okay, first thing off the top of my head ... how does this deal with\n> backward compatibility, or have we just blown all old apps out to fhte\n> water?\n\nThere's no issue with compatibility, unless you can see one. It's all\nbackwards compatible.\n\n> \n> On Wed, 5 Jul 2000, Chris Bitmead wrote:\n> \n> > I've been thinking about what changes are necessary to the libpq\n> > interface to support returning variable type tuples. This was\n> > discussed a number of months back but an exact interface wasn't nailed\n> > down.\n> >\n> > Let me then put forward the following suggestion open for comment. 
The\n> > suggestion is very similar to the original postgres solution to this\n> > problem. What I have added is some consideration of how a streaming\n> > interface should work, and hopefully I will incorporate that\n> > enhancement while I'm at it.\n> >\n> > Into libpq will be (re)introduced the concept of a group. Tuples which\n> > are returned will be from a finite number of different layouts.\n> >\n> > Thus there will be an API PQnfieldsGroup(PGresult, group_num). And\n> > similar for PQftypeGroup etc. There will be a PQgroup(PGresult,\n> > tuple_num) which will tell you which group any given tuple belongs to.\n> >\n> > To support streaming of results a new function PQflush(PGresult,\n> > tuple_num) would\n> > be introduced. It discards previous results that are cached. PQexec\n> > would be changed so that it doesn't absorb the full result set\n> > straight away like it does now (*). Instead it would only absorb\n> > results on a need to basis when calling say PQgetValue.\n> >\n> > Currently you might read results like this...\n> >\n> > PGresult *res = PQexec(\"select * from foo\");\n> > for (int i = 0; i < PQntuples(res); i++) {\n> > printf(\"%s\\n\", PQgetValue(res, i, 0);\n> > }\n> >\n> > It has the disadvantage that all the results are kept in memory at\n> > once. This code would in the future be modified to be...\n> >\n> > PGresult *res = PQexec(\"select * from foo\");\n> > for (int i = 0; i < PQntuples(res); i++) {\n> > printf(\"%s\\n\", PQgetValue(res, i, 0);\n> > PQflush(res) // NEW NEW\n> > }\n> >\n> > Now PQexec doesn't absorb all the results at once. PQgetValue will\n> > read them on a need-to basis. 
PQflush will discard each result through\n> > the loop.\n> >\n> > I could also write...\n> >\n> > PGresult *res = PQexec(\"select * from foo\");\n> > for (int i = 0; i < PQntuples(res); i++) {\n> > printf(\"%s\\n\", PQgetValue(res, i, 0);\n> > if (i % 20) {\n> > PQflush(res, -1)\n> > }\n> > }\n> >\n> > In this case the results are cached in chunks of 20. Or I could write...\n> >\n> > PGresult *res = PQexec(\"select * from foo\");\n> > for (int i = 0; i < PQntuples(res); i++) {\n> > printf(\"%s\\n\", PQgetValue(res, i, 0);\n> > PQflush(res, i-20)\n> > }\n> >\n> > In this case the last 20 tuples are kept in memory in any one time as\n> > a sliding window. If I try to access something out of range of the\n> > current cache I get a NULL result.\n> >\n> > Back to the multiple tuple return types issue. psql code may do\n> > something like...\n> >\n> > int currentGroup = -1, group;\n> > PGresult *res = PQexec(someQuery);\n> > for (int i = 0; i < PQntuples(res); i++) {\n> > group = PQgroup(res, i);\n> > if (group != currentGroup)\n> > printHeaders(res, group);\n> > }\n> > currentGroup = group;\n> > for (j = 0; j < PQnfieldsGroup(res, group); j++) {\n> > printf(\"%s |\", PQgetValue(res, i, j);\n> > }\n> > printf(\"\\n\");\n> > PQflush(res)\n> > }\n> >\n> > printHeaders(PGresult *res, int group) {\n> > for (j = 0; j < PQnfieldsGroup(res, group); j++) {\n> > printf(\"%s |\", PQfnameGroup(res, group));\n> > }\n> > printf(\"\\n\");\n> > }\n> >\n> > This would print different result types with appropriate headers...\n> > create table a (aa text);\n> > create table b under a (bb text);\n> > select ** from a;\n> > aa |\n> > ----\n> > foo\n> > jar\n> >\n> > aa | bb\n> > -------\n> > bar|baz\n> > boo|bon\n> >\n> > (*) Assuming that this doesn't unduly affect current behaviour. I\n> > can't see that it would, but if it would another API would be needed\n> > PQexecStream.\n> >\n> \n> Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n", "msg_date": "Wed, 05 Jul 2000 23:44:46 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposed new libpq API" }, { "msg_contents": "On Wed, 5 Jul 2000, Chris Bitmead wrote:\n\n> The Hermit Hacker wrote:\n> > \n> > Okay, first thing off the top of my head ... how does this deal with\n> > backward compatibility, or have we just blown all old apps out to fhte\n> > water?\n> \n> There's no issue with compatibility, unless you can see one. It's all\n> backwards compatible.\n\nOkay, I'm definitely missing something then ...\n\n> > > Currently you might read results like this...\n> > >\n> > > PGresult *res = PQexec(\"select * from foo\");\n> > > for (int i = 0; i < PQntuples(res); i++) {\n> > > printf(\"%s\\n\", PQgetValue(res, i, 0);\n> > > }\n> > >\n> > > It has the disadvantage that all the results are kept in memory at\n> > > once. This code would in the future be modified to be...\n> > >\n> > > PGresult *res = PQexec(\"select * from foo\");\n> > > for (int i = 0; i < PQntuples(res); i++) {\n> > > printf(\"%s\\n\", PQgetValue(res, i, 0);\n> > > PQflush(res) // NEW NEW\n> > > }\n> > >\n\nWhat is the PQflush() for here? I took it to mean that it was required,\nbut then reading further down, it just sounds like it flushs what's\nalready been used and would be optional?\n\nDoesn't this just do what CURSORs already do then? Run the query, fetch\nwhat you need, etc?\n\n", "msg_date": "Wed, 5 Jul 2000 11:09:29 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposed new libpq API" }, { "msg_contents": "The Hermit Hacker wrote:\n\n> What is the PQflush() for here? 
I took it to mean that it was required,\nbut then reading further down, it just sounds like it flushes what's\nalready been used and would be optional?\n\nDoesn't this just do what CURSORs already do then? Run the query, fetch\nwhat you need, etc?\n\n", "msg_date": "Wed, 5 Jul 2000 11:09:29 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposed new libpq API" }, { "msg_contents": "The Hermit Hacker wrote:\n\n> What is the PQflush() for here? 
how does this deal with\n> backward compatibility, or have we just blown all old apps out of the\n> water?\n\nWe have a long tradition of blowing old apps out of the water :)\n\nAs Chris mentioned, his suggestion closely resembles code which used to\nbe in libpq but was excised due to lack of interest at the time.\n\nWith the upcoming \"query tree redesign\", the concept of collections of\n\"tuple sets\" will (hopefully) become more clear in the backend, to\nsupport, say, distributed databases and also Chris' OO interests.\n\n - Thomas\n", "msg_date": "Wed, 05 Jul 2000 14:29:31 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposed new libpq API" }, { "msg_contents": "On Thu, 6 Jul 2000, Chris Bitmead wrote:\n\n> The Hermit Hacker wrote:\n> \n> > What is the PQflush() for here? I took it to mean that it was required,\n> > but then reading further down, it just sounds like it flushes what's\n> > already been used and would be optional?\n> > \n> > Doesn't this just do what CURSORs already do then? Run the query, fetch\n> > what you need, etc?\n> \n> There is similarity to cursors, but there is no need to go to the\n> trouble of using a cursor to get a lot of the benefits which is that you\n> don't have to slurp it all into memory at once. I believe this is how\n> most DBMS interfaces work, like MySQL, you can only fetch the next\n> record, you can't get random access to the whole result set. This means\n> memory usage is very small. Postgres memory usage will be huge. It\n> shouldn't be necessary to resort to cursors to scale.\n> \n> So what PQflush is proposed to do is limit the amount that is cached. It\n> discards earlier results. If you flush after every sequential access\n> then you only have to use enough memory for a single record. 
If you use\n> PQflush you no longer have random access to earlier results.\n> \n> Other designs are possible, like some interface for getting the next\n> record one at a time and examining it. The idea of this proposal is to\n> make the current random access interface and a streaming interface very\n> interoperable and be able to mix and match them together. You can take a\n> current postgres app, and provided it doesn't actually rely on random\n> access, which I would hazard to say most don't, and just by adding the\n> one line of code PQflush greatly reduce memory consumption. Or you can\n> mix and match and see a sliding window of the most recent X tuples. Or\n> you can just ignore this and use the current features.\n\nOkay, just playing devil's advocate here, that's all ... not against the\nchanges, just want to make sure that all bases are covered ...\n\nOne last comment .. when you say 'random access', are you saying that I\ncan't do a 'PQexec()' to get the results for a SELECT, use a for loop to\ngo through those results, and then start from i=0 to go through that loop\nagain without having to do a new SELECT on it? \n\n > \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 5 Jul 2000 11:41:43 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposed new libpq API" }, { "msg_contents": "The Hermit Hacker wrote:\n\n> One last comment .. when you say 'random access', are you saying that I\n> can't do a 'PQexec()' to get the results for a SELECT, use a for loop to\n> go through those results, and then start from i=0 to go through that loop\n> again without having to do a new SELECT on it?\n\nRandom access means that the whole query result is in memory. If you\nchoose to use PQflush then you will no longer be able to go back to 0\nand re-iterate. 
If you don't use PQflush then you can do what you do now\nwhich is go back and iterate through. If you use PQflush it means that\nyou don't need to do that.\n", "msg_date": "Thu, 06 Jul 2000 09:42:00 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposed new libpq API" }, { "msg_contents": "If I were implementing this in C++, I would have the result object\nreturn a different generic STL iterator (forward, random access, etc.)\ndepending on how I wanted to access the data. Perhaps you could emulate\nthis in C. I generally don't like the one-interface-fits-all approach;\nyou get a much cleaner and extensible interface if you introduce a type\nfor each class of behavior being modeled.\n\nT.\n\n-- \nTimothy H. Keitt\nNational Center for Ecological Analysis and Synthesis\n735 State Street, Suite 300, Santa Barbara, CA 93101\nPhone: 805-892-2519, FAX: 805-892-2510\nhttp://www.nceas.ucsb.edu/~keitt/\n", "msg_date": "Wed, 05 Jul 2000 17:18:50 -0700", "msg_from": "\"Timothy H. Keitt\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposed new libpq API" }, { "msg_contents": "\"Timothy H. Keitt\" wrote:\n> \n> If I were implementing this in C++, I would have the result object\n> return a different generic STL iterator (forward, random access, etc.)\n> depending on how I wanted to access the data. Perhaps you could emulate\n> this in C. I generally don't like the one-interface-fits-all approach;\n> you get a much cleaner and extensible interface if you introduce a type\n> for each class of behavior being modeled.\n\nIf we want to relegate the current API to the status of \"legacy\", and\nbuild something all-new and well thought out, then this could be done.\nI'd certainly be willing to do this, but what is the consensus? If I\ncame up with something completely different but better would the rest of\nthe team be happy to make the current interface legacy? 
Or do we want a\ncompromise (like what Peter Eisentraut suggests perhaps), or do we want\nsomething that slots into the current world view with minimum\ndisruption? (what I have suggested).\n", "msg_date": "Thu, 06 Jul 2000 10:52:11 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposed new libpq API" }, { "msg_contents": "On Thu, 6 Jul 2000, Chris Bitmead wrote:\n\n> \"Timothy H. Keitt\" wrote:\n> > \n> > If I were implementing this in C++, I would have the result object\n> > return a different generic STL iterator (forward, random access, etc.)\n> > depending on how I wanted to access the data. Perhaps you could emulate\n> > this in C. I generally don't like the one-interface-fits-all approach;\n> > you get a much cleaner and extensible interface if you introduce a type\n> > for each class of behavior being modeled.\n> \n> If we want to relagate the current API to the status of \"legacy\", and\n> build something all-new and well thought out, then this could be done.\n> I'd certainly be willing to do this, but what is the consensus? If I\n> came up with something completely different but better would the rest of\n> the team be happy to make the current interface legacy? Or do we want a\n> compromise (like what Peter Eisentraut suggests perhaps), or do we want\n> something that slots into the current world view with minimum\n> disruption? (what I have suggested).\n\nCould we create some sort of libpq2? Maintain libpq for a release or two\nand then slow fade it out? Or maybe have a configure switch\n(--enable-libpq-compat)?\n\n\n\n", "msg_date": "Wed, 5 Jul 2000 22:08:25 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposed new libpq API" }, { "msg_contents": "Chris Bitmead wrote:\n> \n> \"Timothy H. 
Keitt\" wrote:\n> >\n> > If I were implementing this in C++, I would have the result object\n> > return a different generic STL iterator (forward, random access, etc.)\n> > depending on how I wanted to access the data. Perhaps you could emulate\n> > this in C. I generally don't like the one-interface-fits-all approach;\n> > you get a much cleaner and extensible interface if you introduce a type\n> > for each class of behavior being modeled.\n> \n> If we want to relagate the current API to the status of \"legacy\", and\n> build something all-new and well thought out, then this could be done.\n> I'd certainly be willing to do this, but what is the consensus? If I\n> came up with something completely different but better would the rest of\n> the team be happy to make the current interface legacy? Or do we want a\n> compromise (like what Peter Eisentraut suggests perhaps), or do we want\n> something that slots into the current world view with minimum\n> disruption? (what I have suggested).\n\nBeing designed to be extensible RDBMS, postgres should IMHO also support \nmultiple protocol modules. I would like one that follows standard \nCLI/ODBC/JDBC conventions, also XML-RPC based one would be nice. \nWe could do it by giving the requested protocol at connection startup \nand then talking to that backend module afretwards, or we could have \ndifferent protocols listening on different ports.\n\n-----------\nHannu\n", "msg_date": "Thu, 06 Jul 2000 09:36:47 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposed new libpq API" }, { "msg_contents": "On Thu, 6 Jul 2000, Chris Bitmead wrote:\n\n> The Hermit Hacker wrote:\n> \n> > One last comment .. 
when you say 'random access', are you saying that I\n> > can't do a 'PQexec()' to get the results for a SELECT, use a for loop to\n> > go through those results, and then start from i=0 to go through that loop\n> > again without having to do a new SELECT on it?\n> \n> Random access means that the whole query result is in memory. If you\n> choose to use PQflush then you will no longer be able to go back to 0\n> and re-iterate. If you don't use PQflush then you can do what you do now\n> which is go back and iterate through. If you use PQflush it means that\n> you don't need to do that.\n\nOkay, that sounds cool ... since nobody does the PQflush() during a\nfor() iteration now (I don't believe), then old apps are fine, and this\ndoes add a nice level of functionality as far as memory usage is concerned\n...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 6 Jul 2000 09:10:05 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS-OO] Re: Proposed new libpq API" } ]
[ { "msg_contents": "\n> [email protected] writes:\n> > ventasge2000=# create table t1 (c1 char(10),c2 varchar(10));\n> > CREATE\n> > ventasge2000=# insert into t1 values('XXX666','XXX666');\n> > INSERT 182218 1\n> > ventasge2000=# select * from t1 where c1 ~ '666$';\n> > c1 | c2 \n> > ----+----\n> > (0 rows)\n> > ventasge2000=# select * from t1 where c2 ~ '666$';\n> > c1 | c2 \n> > ------------+--------\n> > XXX666 | XXX666\n> > (1 row)\n> \n> > Doesn't regular expressions(in particular the $ metachar) \n> work properly \n> > with char columns????\n> \n> I see no bug there --- you've forgotten about the trailing spaces in\n> the char(10) column. Try c1 ~ '666 *$' if you want to match against a\n> variable amount of padding in a char(N) column.\n\nNo, imho char(n) is defined to return trailing blanks but be insensitive to \nthe actual amount of trailing spaces. Thus I do see that this behavior can \nbe interpreted as a bug here.\nIn char(n) speak 'ab' = 'ab ' is supposed to be true.\nImho a change from char(6) to char(8) should only require more storage\nspace in a client program, but be otherwise transparent, \nwhich it currently is not. \n\nImho a similar problem is that char_length does not return the count to\nthe last non space character, which imho also is a bug.\n\nAndreas\n", "msg_date": "Wed, 5 Jul 2000 11:07:56 +0200 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: regular expressions troubles with char cols " } ]
[ { "msg_contents": "\nHaving now flirted with recreating BLOBs (and even DBs) with matching OIDs,\nI find myself thinking it may be a waste of effort for the moment. A\nmodified version of the system used by Pavel Janik in pg_dumplo may be\nsubstantially more reliable than my previous proposal:\n\nTo Dump\n-------\n\nDump all LOs by looking in pg_class for relkind='l'. \n\nDon't bother cross-referencing with actual table entries, since we are\ntrying to do a backup rather than a consistency check. \n\nThe dump will consist of the LO and its original OID.\n\n\nTo Load\n-------\n\nCreate a temporary table, lo_xref, with appropriate indexes.\n\nReload the LOs, storing old & new oid in lo_xref.\n\nNow, disable triggers and sequentially search through all tables that have\none or more oid columns: for each oid column, see if the column value is in\nlo_xref, if it is, update it with the new value.\n\nFor large databases, this system will rely heavily on lo_xref, so my main\nworries are:\n\n1. How are temp tables stored? (eg. if in memory this is a problem -\nDec/Rdb stores temp tables in memory).\n\n2. Are there any limitation on indexes of temp tables (I seem to be able to\ncreate them at least - Dec/Rdb won't even let you do that).\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 05 Jul 2000 23:58:28 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump and LOs (another proposal)" } ]
[ { "msg_contents": "Philip Warner <[email protected]> writes:\n> Having now flirted with recreating BLOBs (and even DBs) with matching OIDs,\n> I find myself thinking it's a waste of effort for the moment. A modified\n> version of the system used by Pavel Janik in pg_dumplo may be substantially\n> more reliable than my previous proposal:\n\nI like this a lot better than trying to restore the original OIDs. For\none thing, the restore-original-OIDs idea cannot be made to work if what\nwe want to do is load additional tables into an existing database.\n\n> For large databases, this system will rely heavily on lo_xref, so my main\n> worries are:\n\n> 1. How are temp tables stored? (eg. if in memory this is a problem -\n> Dec/Rdb stores temp tables in memory).\n\n> 2. Are there any limitation on indexes of temp tables (I seem to be able to\n> create them at least - Dec/Rdb won't even let you do that).\n\nNo problem. A temp table is a table, it's just got a unique name under\nthe hood. (So do its indices, IIRC...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Jul 2000 11:09:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump and LOs (another proposal) " }, { "msg_contents": "At 11:09 5/07/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> Having now flirted with recreating BLOBs (and even DBs) with matching OIDs,\n>> I find myself thinking it's a waste of effort for the moment. A modified\n>> version of the system used by Pavel Janik in pg_dumplo may be substantially\n>> more reliable than my previous proposal:\n>\n>I like this a lot better than trying to restore the original OIDs. For\n>one thing, the restore-original-OIDs idea cannot be made to work if what\n>we want to do is load additional tables into an existing database.\n>\n\nThe thing that bugs me about this is that for 30,000 rows, I do 30,000 updates\nafter the restore. 
It seems *really* inefficient, not to mention slow.\n\nI'll also have to modify pg_restore to talk to the database directly (for\nlo import). As a result I will probably send the entire script directly\nfrom within pg_restore. Do you know if comment parsing ('--') is done in\nthe backend, or psql?\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 06 Jul 2000 02:54:27 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump and LOs (another proposal) " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> The thing that bugs me about this is that for 30,000 rows, I do 30,000 updates\n> after the restore. It seems *really* inefficient, not to mention slow.\n\nShouldn't be a problem. For one thing, I can assure you there are no\ndatabases with 30,000 LOs in them ;-) --- the existing two-tables-per-LO\ninfrastructure won't support it. (I think Denis Perchine has started\nto work on a replacement one-table-for-all-LOs solution, btw.) Possibly\nmore to the point, there's no reason for pg_restore to grovel through\nthe individual rows for itself. Having identified a column that\ncontains (or might contain) LO OIDs, you can do something like\n\n\tUPDATE userTable SET oidcolumn = tmptable.newLOoid WHERE\n\t\toidcolumn = tmptable.oldLOoid;\n\nwhich should be quick enough, especially given indexes.\n\n> I'll also have to modify pg_restore to talk to the database directly (for\n> lo import). As a result I will probably send the entire script directly\n> from within pg_restore. 
Do you know if comment parsing ('--') is done in\n> the backend, or psql?\n\nBoth, I believe --- psql discards comments, but so will the backend.\nNot sure you really need to abandon use of psql, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Jul 2000 13:06:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: pg_dump and LOs (another proposal) " }, { "msg_contents": "Hello Tom,\n\nWednesday, July 05, 2000, 9:06:33 PM, you wrote:\n\nTL> Philip Warner <[email protected]> writes:\n>> The thing that bugs me about this if for 30,000 rows, I do 30,000 updates\n>> after the restore. It seems *really* inefficient, not to mention slow.\n\nTL> Shouldn't be a problem. For one thing, I can assure you there are no\nTL> databases with 30,000 LOs in them ;-) --- the existing two-tables-per-LO\n\nHmmm... I have 127865 LOs at the moment. :-))) But with my patch where\nall LOs are usual files on FS. I will move it to one-table-for-all-LOs\nafter my holidays.\n\nTL> infrastructure won't support it. (I think Denis Perchine has started\nTL> to work on a replacement one-table-for-all-LOs solution, btw.) Possibly\n\nYou can try it. I sent it to pgsql-patches some time ago.\n\nTL> more to the point, there's no reason for pg_restore to grovel through\nTL> the individual rows for itself. Having identified a column that\nTL> contains (or might contain) LO OIDs, you can do something like\n\n-- \nBest regards,\n Denis mailto:[email protected]\n\n\n", "msg_date": "Wed, 5 Jul 2000 23:14:11 +0400", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": false, "msg_subject": "Re[2]: Re: pg_dump and LOs (another proposal)" }, { "msg_contents": "At 13:06 5/07/00 -0400, Tom Lane wrote:\n>Shouldn't be a problem. For one thing, I can assure you there are no\n>databases with 30,000 LOs in them ;-) --- the existing two-tables-per-LO\n>infrastructure won't support it. (I think Denis Perchine has started\n\nEeek! 
Not so long ago I was going to use PG for a database with far more\nthan that many documents (mainly because of no backup, horrible storage\netc). Glad I didn't.\n\n\n>> I'll also have to modify pg_restore to talk to the database directly (for\n>> lo import). As a result I will probably send the entire script directly\n>> from within pg_restore. Do you know if comment parsing ('--') is done in\n>> the backend, or psql?\n>\n>Both, I believe --- psql discards comments, but so will the backend.\n>Not sure you really need to abandon use of psql, though.\n\nDon't plan to abandon it, but I did plan to use lo_creat, lo_write to add\nthe LOs, and that requires no psql, I think. I want this utility to run\ndirect from tape, without lots of temp files.\n\nI'll probably just have a new arg, --blobs, and another --db, which makes a\ndirect DB connection, and --blobs without --db will not be supported. Does\nthis sound OK?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 06 Jul 2000 12:23:40 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump and LOs (another proposal) " }, { "msg_contents": "At 13:06 5/07/00 -0400, Tom Lane wrote:\n>\n>\tUPDATE userTable SET oidcolumn = tmptable.newLOoid WHERE\n>\t\toidcolumn = tmptable.oldLOoid;\n>\n\nIt's actually nastier than this since there could be multiple oid columns,\nimplying, potentially, multiple scans of the table.\n\nI suppose\n\nupdate userTable set\n\toidCol1 = Coalesce( (Select newLOoid from oidxref where oldLOoid = oidCol1\n), oidCol1 ),\n\toidCol2 = Coalesce( (Select newLOoid from oidxref where oldLOoid = oidCol2\n), oidCol2 ),\n\t...\n\nwould work, or at least only update each row once, but it looks slow.\n\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 06 Jul 2000 15:02:23 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump and LOs (another proposal) " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> At 13:06 5/07/00 -0400, Tom Lane wrote:\n>> UPDATE userTable SET oidcolumn = tmptable.newLOoid WHERE\n>> oidcolumn = tmptable.oldLOoid;\n\n> It's actually nastier than this since there could be multiple oid columns,\n> implying, potentially, multiple scans of the table.\n\nSo?\n\n> I suppose\n\n> update userTable set\n> \toidCol1 = Coalesce( (Select newLOoid from oidxref where oldLOoid = oidCol1\n> ), oidCol1 ),\n> \toidCol2 = Coalesce( (Select newLOoid from oidxref where oldLOoid = oidCol2\n> ), oidCol2 ),\n> \t...\n\n> would work, or at least only update each row once, but it looks slow.\n\nAlmost certainly slower than processing each column in a separate\nUPDATE. 
It does not pay to try to be smarter than the planner is ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Jul 2000 02:32:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: pg_dump and LOs (another proposal) " }, { "msg_contents": "Philip Warner writes:\n\n> I'll also have to modify pg_restore to talk to the database directly (for\n> lo import).\n\npsql has \\lo_import.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 6 Jul 2000 18:12:50 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump and LOs (another proposal) " }, { "msg_contents": "At 18:12 6/07/00 +0200, Peter Eisentraut wrote:\n>Philip Warner writes:\n>\n>> I'll also have to modify pg_restore to talk to the database directly (for\n>> lo import).\n>\n>psql has \\lo_import.\n>\n\nThis is true, but if there are 30000 blobs on an archive tape, I cant dump\nthem into /tmp and wait for the user to run the script. At the current time\npg_restore just sends a script to a file or stdout - it has no guarantee of\nwhen a \\lo_import command will be run, so dumping blobs into the same file\nbetween lo_import calls would not be appropriate, since I am in effect\nrequiring a psql attachment. \n\nSo the plan is, in the first pass, to make BLOB restoration dependant on\nhaving a DB connection.\n\nDoes this make more sense?\n\nP.S. I have only half-written the lo dumping code, so this is all quite\nopen...\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 07 Jul 2000 02:22:00 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump and LOs (another proposal) " }, { "msg_contents": "\n> P.S. I have only half-written the lo dumping code, so this is all quite\n> open...\n\n A \"blasphemy\" question, is really needful LO dump if we will have TOAST and\nLO will lonely past? If anyone still need dump LO (for example I) is possible \nuse pg_dumplo from contrib tree, that (as some users say) works very well. \nNot is work on LO dump, after several years and during LO funeral loss of\ntime? (sorry).\n\n\t\t\t\t\t\tKarel\n \n\n", "msg_date": "Fri, 7 Jul 2000 09:30:10 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump and LOs (another proposal) " }, { "msg_contents": "At 09:30 7/07/00 +0200, Karel Zak wrote:\n>\n>> P.S. I have only half-written the lo dumping code, so this is all quite\n>> open...\n>\n> A \"blasphemy\" question, is really needful LO dump if we will have TOAST and\n>LO will lonely past? If anyone still need dump LO (for example I) is\npossible \n>use pg_dumplo from contrib tree, that (as some users say) works very well. \n>Not is work on LO dump, after several years and during LO funeral loss of\n>time? (sorry).\n\nThere are three reasons why I continue:\n\n1. To learn\n\n2. Because I believe that BLOBs will exist after TOAST, although the\nimplementation will have changed. The code to handle the current format\nwill be at least 70% reusable (assuming a similar set of\nlo_open/read/write/close calls).\n\n3. 
We will need a way of exporting old BLOBs and importing them as TOAST\nBLOBs.\n\nI could be wrong about (2), but I think binary data can not be easily\nloaded from a pure text file, and (3) could ultimately be handled by\npg_dump_lo, but I like the idea of an integrated tool.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 07 Jul 2000 18:18:37 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump and LOs (another proposal) " }, { "msg_contents": "Philip Warner writes:\n\n> >psql has \\lo_import.\n\n> This is true, but if there are 30000 blobs on an archive tape, I cant dump\n> them into /tmp and wait for the user to run the script. At the current time\n> pg_restore just sends a script to a file or stdout - it has no guarantee of\n> when a \\lo_import command will be run, so dumping blobs into the same file\n> between lo_import calls would not be appropriate, since I am in effect\n> requiring a psql attachment. \n\nI don't understand. 
How else would you restore a large object if not using\nlibpq's lo_import() call?\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 7 Jul 2000 18:15:48 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump and LOs (another proposal) " }, { "msg_contents": "At 18:15 7/07/00 +0200, Peter Eisentraut wrote:\n>Philip Warner writes:\n>\n>> >psql has \\lo_import.\n>\n>> This is true, but if there are 30000 blobs on an archive tape, I cant dump\n>> them into /tmp and wait for the user to run the script. At the current time\n>> pg_restore just sends a script to a file or stdout - it has no guarantee of\n>> when a \\lo_import command will be run, so dumping blobs into the same file\n>> between lo_import calls would not be appropriate, since I am in effect\n>> requiring a psql attachment. \n>\n>I don't understand. How else would you restore a large object if not using\n>libpq's lo_import() call?\n>\n\nDirect connection to DB and use lo_creat, lo_open, lo_write & lo_close -\nie. what lo_import does under the hood.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 08 Jul 2000 10:36:40 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump and LOs (another proposal) " } ]
[ { "msg_contents": " Date: Wednesday, July 5, 2000 @ 12:17:41\nAuthor: wieck\n\nUpdate of /home/projects/pgsql/cvsroot/pgsql/src/backend/commands\n from hub.org:/tmp/cvs-serv21402/backend/commands\n\nModified Files:\n\tcommand.c vacuum.c \n\n----------------------------- Log Message -----------------------------\n\nChanged TOAST relations to have relkind RELKIND_TOASTVALUE.\n\nSpecial handling of TOAST relations during VACUUM. TOAST relations\nare vacuumed while the lock on the master table is still active.\nThe ANALYZE flag doesn't propagate to their vacuuming because the\ntoaster access routines allways use index access ignoring stats, so\nwhy compute them at all.\n\nProtection of TOAST relations against normal INSERT/UPDATE/DELETE\nwhile offering SELECT for debugging purposes.\n\nJan\n\n", "msg_date": "Wed, 5 Jul 2000 12:17:43 -0400 (EDT)", "msg_from": "Jan Wieck <wieck>", "msg_from_op": true, "msg_subject": "pgsql/src/backend/commands (command.c vacuum.c)" }, { "msg_contents": "> -----Original Message-----\n> From: Jan Wieck\n> Sent: Thursday, July 06, 2000 1:18 AM\n> To: [email protected]\n> Subject: [COMMITTERS] pgsql/src/backend/commands (command.c vacuum.c)\n> \n> \n> Date: Wednesday, July 5, 2000 @ 12:17:41\n> Author: wieck\n> \n> Update of /home/projects/pgsql/cvsroot/pgsql/src/backend/commands\n> from hub.org:/tmp/cvs-serv21402/backend/commands\n> \n> Modified Files:\n> \tcommand.c vacuum.c \n> \n> ----------------------------- Log Message -----------------------------\n> \n> Changed TOAST relations to have relkind RELKIND_TOASTVALUE.\n> \n> Special handling of TOAST relations during VACUUM. TOAST relations\n> are vacuumed while the lock on the master table is still active.\n> \n\nIt seems very dangerous to me.\nWhen VACUUM of a master table was finished, the transaction is\nin already committed state in many cases. 
\n\nRegards.\nHiroshi Inoue\n", "msg_date": "Sat, 9 Dec 2000 22:23:22 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: pgsql/src/backend/commands (command.c vacuum.c)" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> Special handling of TOAST relations during VACUUM. TOAST relations\n>> are vacuumed while the lock on the master table is still active.\n\n> It seems very dangerous to me.\n> When VACUUM of a master table was finished, the transaction is\n> in already committed state in many cases. \n\nI don't see the problem. If the toast table doesn't get vacuumed,\nno real harm is done other than failing to recover space.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 09 Dec 2000 10:52:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql/src/backend/commands (command.c vacuum.c) " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> Special handling of TOAST relations during VACUUM. TOAST relations\n> >> are vacuumed while the lock on the master table is still active.\n> \n> > It seems very dangerous to me.\n> > When VACUUM of a master table was finished, the transaction is\n> > in already committed state in many cases. \n> \n> I don't see the problem. If the toast table doesn't get vacuumed,\n> no real harm is done other than failing to recover space.\n> \n\nHmm,is there any good reason to vacuum toast table in the \ntransaction which was already internally committed by vacuum\nof the master table ? 
Is it possible under WAL ?\n\nRegards.\nHiroshi Inoue\n", "msg_date": "Sun, 10 Dec 2000 08:25:24 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [COMMITTERS] pgsql/src/backend/commands (command.c vacuum.c) " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Hmm,is there any good reason to vacuum toast table in the \n> transaction which was already internally committed by vacuum\n> of the master table ? Is it possible under WAL ?\n\nIt had better be possible under WAL, because vacuuming indexes is\ndone in essentially the same way: we clean the indexes *after* we\ncommit the master's tuple movements.\n\nReally, the TOAST table is being treated the same way we handle\nindexes, and I think that's good.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 09 Dec 2000 18:58:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/src/backend/commands (command.c vacuum.c) " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Hmm,is there any good reason to vacuum toast table in the \n> > transaction which was already internally committed by vacuum\n> > of the master table ? Is it possible under WAL ?\n> \n> It had better be possible under WAL, because vacuuming indexes is\n> done in essentially the same way: we clean the indexes *after* we\n> commit the master's tuple movements.\n> \n\nThere's no command other than VACUUM which continues to\naccess table/index after *commit*. We couldn't process\nsignificant procedures in such an already commiitted state,\ncould we ? \n\n> Really, the TOAST table is being treated the same way we handle\n> indexes, and I think that's good.\n>\n\nIf I recognize correctly,TOAST table is a table not an index and\nis little different from ordinary tables. 
VACUUM now vacuums\n2 tables in a transaction for tables with TOAST columns.\n ^^^^^^^^^^^^^^^^^^\nI don't think it's right and my question is simple.\nWhat's wrong with vacuuming master and the toast table in\nseparate transactions ?\n\nRegards.\nHiroshi Inoue\n", "msg_date": "Sun, 10 Dec 2000 22:48:12 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [COMMITTERS] pgsql/src/backend/commands (command.c vacuum.c) " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> There's no command other than VACUUM which continues to\n> access table/index after *commit*. We couldn't process\n> significant procedures in such an already committed state,\n> could we ? \n\nWhy not? The intermediate state *is valid*. We just haven't\nremoved no-longer-referenced index and TOAST entries yet.\n\n> What's wrong with vacuuming master and the toast table in\n> separate transactions ?\n\nYou'd have to give up the lock on the master table if there were\na true commit. I don't want to do that ... especially not when\nI don't believe there is a problem to fix.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 10 Dec 2000 12:12:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/src/backend/commands (command.c vacuum.c) " }, { "msg_contents": "\n\nTom Lane wrote:\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > There's no command other than VACUUM which continues to\n> > access table/index after *commit*. We couldn't process\n> > significant procedures in such an already committed state,\n> > could we ?\n> \n> Why not? The intermediate state *is valid*. 
We just haven't\n> removed no-longer-referenced index and TOAST entries yet.\n>\n\nDo you mean *already committed* state has no problem and \nVACUUM is always possible in the state ?\nIs VACUUM such a trivial job ?\n\n> > What's wrong with vacuuming master and the toast table in\n> > separate transactions ?\n> \n> You'd have to give up the lock on the master table if there were\n> a true commit. I don't want to do that ... especially not when\n> I don't believe there is a problem to fix.\n> \n\nHmmm,is keeping the lock on master table more important than\nrisking to break consistency ?\n\nRegards.\nHiroshi Inoue\n", "msg_date": "Mon, 11 Dec 2000 09:00:03 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/src/backend/commands (command.c vacuum.c)" }, { "msg_contents": "Hiroshi Inoue <[email protected]> writes:\n> Tom Lane wrote:\n>> Why not? The intermediate state *is valid*. We just haven't\n>> removed no-longer-referenced index and TOAST entries yet.\n\n> Do you mean *already committed* state has no problem and \n> VACUUM is always possible in the state ?\n\nYes. Otherwise VACUUM wouldn't be crash-safe.\n\n> Hmmm,is keeping the lock on master table more important than\n> risking to break consistency ?\n\nI see no consistency risk here. I'd be more worried about potential\nrisks from dropping the lock too soon.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 10 Dec 2000 19:08:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/src/backend/commands (command.c vacuum.c) " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <[email protected]> writes:\n> > Tom Lane wrote:\n> >> Why not? The intermediate state *is valid*. We just haven't\n> >> removed no-longer-referenced index and TOAST entries yet.\n> \n> > Do you mean *already committed* state has no problem and\n> > VACUUM is always possible in the state ?\n> \n> Yes. 
Otherwise VACUUM wouldn't be crash-safe.\n>\n\nWhen VACUUM for a table starts, the transaction is not\ncommitted yet of cource. After *commit* VACUUM has handled\nheap/index tuples very carefully to be crash-safe before\n7.1. Currently another vacuum could be invoked in the\nalready committed transaction. There has been no such\nsituation before 7.1. Yes,VACUUM isn't crash-safe now.\n \n> > Hmmm,is keeping the lock on master table more important than\n> > risking to break consistency ?\n> \n> I see no consistency risk here. I'd be more worried about potential\n> risks from dropping the lock too soon.\n> \n\nThers's no potential risk other than deadlock.\nIf we have to avoid deadlock we could acquire\nthe lock on master table first. Is there any \nproblem ?\n\nRegards.\nHiroshi Inoue\n", "msg_date": "Mon, 11 Dec 2000 12:08:25 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/src/backend/commands (command.c vacuum.c)" }, { "msg_contents": "Hiroshi Inoue <[email protected]> writes:\n> When VACUUM for a table starts, the transaction is not\n> committed yet of cource. After *commit* VACUUM has handled\n> heap/index tuples very carefully to be crash-safe before\n> 7.1. Currently another vacuum could be invoked in the\n> already committed transaction. There has been no such\n> situation before 7.1. Yes,VACUUM isn't crash-safe now.\n \nVadim, do you agree with this argument? If so, I think it's\nsomething we need to fix. I don't see what Hiroshi is worried\nabout, myself, but if there really is an issue here...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Dec 2000 11:57:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Is VACUUM still crash-safe?" } ]
[ { "msg_contents": "The question is how to determine when a type is an array type (and not\nusing the leading-underscore convention). A comment in pg_type.h says:\n\n/*\n * typelem is 0 if this is not an array type. If this is an array\n * type, typelem is the OID of the type of the elements of the array\n * (it identifies another row in Table pg_type).\n */\n\nThe reverse seems to be false. If typelem is not 0, then the type is not\nnecessarily an array type. For example, the typelem entries of text,\nbpchar, and name point to char (the single-byte variant), while box and\nlseg claim to be arrays of \"point\".\n\nHow should this be handled in the context of formatting the types for\nreconsumption?\n\n\nAppendix: The complete list of not-really-array types that have typelem\nset is:\n\nbytea\t\t=> char\nname\t\t=> char\nint2vector\t=> int2\ntext\t\t=> char\noidvector\t=> oid\npoint\t\t=> float8\nlseg\t\t=> point\npath\t\t=> point\nbox\t\t=> point\nfilename\t=> char\nline\t\t=> point\nunknown\t\t=> char\nbpchar\t\t=> char\nvarchar\t\t=> char\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 5 Jul 2000 18:36:43 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Array type confusion" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> The question is how to determine when a type is an array type (and not\n> using the leading-underscore convention). A comment in pg_type.h says:\n> * typelem is 0 if this is not an array type. If this is an array\n> * type, typelem is the OID of the type of the elements of the array\n> * (it identifies another row in Table pg_type).\n> The reverse seems to be false. If typelem is not 0, then the type is not\n> necessarily an array type. 
For example, the typelem entries of text,\n> bpchar, and name point to char (the single-byte variant), while box and\n> lseg claim to be arrays of \"point\".\n\nI don't think that the typelem values presently given in pg_type for\nthese datatypes are necessarily sacrosanct. In fact, some of these\ndemonstrably don't work.\n\nAFAICT, the array-subscripting code supports two cases: genuine arrays\n(variable-size, variable number of dimensions, array header data) and\nfixed-length pseudo-array types like oidvector. So, for example,\nthe fact that oidvector is declared with typelem = oid makes it possible\nto write things like \"select proargtypes[1] from pg_proc\", even though\noidvector is a basic type and not a genuine array.\n\nThe way array_ref tells the difference is that typlen = -1 means a\nreal array, typlen > 0 means one of the pseudo-array types. It does\nnot work to subscript a varlena type that's not really an array.\nFor example, you get bogus results if you try to subscript a text value.\n\nI believe we need to remove the typelem specifications from these\nvarlena datatypes:\n\n 17 | bytea\n 25 | text\n 602 | path\n 705 | unknown\n 1042 | bpchar\n 1043 | varchar\n\nsince subscripting them doesn't work and can't work without additional\ninformation provided to array_ref.\n\nIf we do that then your type formatter can distinguish \"real\" array\ntypes as being those with typelem != 0 and typlen < 0. If typlen > 0\nthen treat it as a basic type regardless of typelem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Jul 2000 02:26:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Array type confusion " } ]
[ { "msg_contents": "Now that we don't -I...backend anymore, this might break:\n\nsrc/include/c.h:\n\n#ifdef FIXADE\n#if defined(hpux)\n#include \"port/hpux/fixade.h\" /* for unaligned access fixup */\n#endif /* hpux */\n#endif\n\nThe comments surrounding this make it pretty unclear whether this is still\nnecessary. Tom, any idea?\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 5 Jul 2000 18:37:12 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Removing src/backend from include path might break on HPUX" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Now that we don't -I...backend anymore, this might break:\n> src/include/c.h:\n\n> #ifdef FIXADE\n> #if defined(hpux)\n> #include \"port/hpux/fixade.h\" /* for unaligned access fixup */\n> #endif /* hpux */\n> #endif\n\n> The comments surrounding this make it pretty unclear whether this is still\n> necessary. Tom, any idea?\n\nFIXADE is not defined on the normal HPUX build, so it's a non-issue as\nfar as I know. AFAICT this file dates back to Berkeley days and exists\nonly to work around HP compiler bugs that existed about ten years ago.\nWe don't support HPUX <= 9.01 anymore anyway; there are plenty of other\nreasons not to.\n\nI have been planning for some time to nuke fixade.h, but haven't got\nround to it. Be my guest (and don't forget to zap the reference to it\nin src/backend/Makefile's install stuff too. 
AFAICT we don't need to\nmake an installed include/port subdirectory at all anymore...)\n\nBTW, there are a couple of other templates that define NOFIXADE, which\nseems to be something completely different; don't mistake it as being\nthe inverse of FIXADE.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Jul 2000 13:39:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Removing src/backend from include path might break on HPUX " } ]
[ { "msg_contents": "Jan Wieck wrote:\n\n> FYI,\n>\n> For now, \"lztext\" is the only test candidate datatype to\n> invoke the toaster. It can hold up to multi-megabytes now.\n> But be warned, this datatype will disappear as soon as \"text\"\n> is toastable.\n>\n\nI have not been following the TOAST discussion, but why would lztext\ndisappear? It seems like a useful datatype independent of TOAST or not\nTOAST?\n\nJeff\n\n\n", "msg_date": "Wed, 05 Jul 2000 12:52:04 -0400", "msg_from": "Jeffery Collins <[email protected]>", "msg_from_op": true, "msg_subject": "Re: update on TOAST status" }, { "msg_contents": "FYI,\n\n during the day I committed a couple of changes to TOAST.\n\n - Secondary relations for the toaster (to move off values)\n are now automatically created during CREATE TABLE, ALTER\n TABLE ... ADD COLUMN and SELECT ... INTO, whenever the\n first toastable attribute appears in the table schema.\n\n - The TOAST tables are now of kind RELKIND_TOASTVALUE.\n\n - TOAST tables cannot be vacuumed separately. They are\n allways vacuumend if their master table is, while VACUUM\n still holds the lock on the master table.\n\n - VACUUM doesn't propagate ANALYZE to TOAST tables.\n Statistics for them are needless because the toast access\n is allways hardcoded indexed.\n\n - TOAST tables are protected against manual INSERT, UPDATE\n and DELETE operations. SELECT is still possible for\n debugging purposes. The name of the TOAST table is\n pg_toast_<oid-of-master>.\n\n - The chunk_data attribute has been changed to type bytea.\n\n For now, \"lztext\" is the only test candidate datatype to\n invoke the toaster. It can hold up to multi-megabytes now.\n But be warned, this datatype will disappear as soon as \"text\"\n is toastable.\n\n Next I'll make pg_dump TOAST-safe. Will only take a couple of\n minutes I think.\n\n Toast tables aren't automatically created for system\n catalogs. Thus I'll add\n\n ALTER TABLE pg_rewrite CREATE TOAST TABLE\n\n to initdb. 
So we'll get unlimited view complexity for free.\n As soon as arrays are toastable, we might want to add\n pg_class because of relacl too.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Wed, 5 Jul 2000 18:56:30 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "update on TOAST status" }, { "msg_contents": "Jeffery Collins wrote:\n> Jan Wieck wrote:\n>\n> > FYI,\n> >\n> > For now, \"lztext\" is the only test candidate datatype to\n> > invoke the toaster. It can hold up to multi-megabytes now.\n> > But be warned, this datatype will disappear as soon as \"text\"\n> > is toastable.\n> >\n>\n> I have not been following the TOAST discussion, but why would lztext\n> disappear? It seems like a useful datatype independent of TOAST or not\n> TOAST?\n\n The \"lztext\" type was something I developed before TOAST was\n born. It's was a \"text\" type that tried to compress the value\n at input time.\n\n In the TOAST world, each input value will be passed around as\n is. Only when it gets down to be stored in a table and the\n resulting heap tuple exceeds 2K, the toaster will try to\n compress toastable attributes and/or move off attributes. The\n behaviour will be configurable on a per tables attribute\n base. So someone can specify \"don't try compression\", \"ignore\n this attribute until all others are toasted\" or \"never toast\n this, instead fail and abort - unwise but possible\".\n\n In the current CVS sources, \"lztext\" already doesn't know\n anything about compression anymore. It's more or less\n equivalent to \"text\" now, where it's lztextin() function\n produces a plain varlena structure like textin() does. 
Only\n    that all its other functions are aware that the values they\n    receive might be toasted ones. It's the toaster that does the\n    compression/move-off for it now.\n\n    So as soon as \"text\" is toastable, there is absolutely no\n    need for \"lztext\" anymore. We will add an alias to the parser\n    for 7.1, which will disappear in 7.2 again. If you\n    pg_dump/restore your databases during the 7.0->7.1 upgrade,\n    all your table schemas will automatically be changed from\n    \"lztext\" to \"text\".\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#================================================== [email protected] #\n\n\n", "msg_date": "Wed, 5 Jul 2000 20:39:37 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: update on TOAST status" }, { "msg_contents": "Jan Wieck wrote:\n\n> Toast tables aren't automatically created for system\n> catalogs. Thus I'll add\n> \n> ALTER TABLE pg_rewrite CREATE TOAST TABLE\n> \n> to initdb. So we'll get unlimited view complexity for free.\n> As soon as arrays are toastable, we might want to add\n> pg_class because of relacl too.\n\nWhy would we want system catalogs toastable?\n", "msg_date": "Thu, 06 Jul 2000 10:02:27 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update on TOAST status" }, { "msg_contents": "Chris Bitmead wrote:\n> \n> Jan Wieck wrote:\n> \n> > Toast tables aren't automatically created for system\n> > catalogs. Thus I'll add\n> >\n> > ALTER TABLE pg_rewrite CREATE TOAST TABLE\n> >\n> > to initdb. 
So we'll get unlimited view complexity for free.\n> > As soon as arrays are toastable, we might want to add\n> > pg_class because of relacl too.\n> \n> Why would we want system catalogs toastable?\n\nI assume this will allow for Views with large rewrite rules which\ncurrently are limited in size.\n\nMike Mascari\n", "msg_date": "Wed, 05 Jul 2000 20:44:02 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update on TOAST status" }, { "msg_contents": "> - VACUUM doesn't propagate ANALYZE to TOAST tables.\n> Statistics for them are needless because the toast access\n> is allways hardcoded indexed.\n\nI don't think statistics are insignificant for TOASTed columns. If I\nsay col=3, the optimizer uses that information for estimating the number\nof rows returned, and figuring out the type of join and order of join to\nperform, not just for \"use index, don't use index\" decisions.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jul 2000 16:29:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update on TOAST status'" }, { "msg_contents": "Bruce Momjian wrote:\n> > - VACUUM doesn't propagate ANALYZE to TOAST tables.\n> > Statistics for them are needless because the toast access\n> > is allways hardcoded indexed.\n>\n> I don't think statistics are insignificant for TOASTed columns. 
If I\n> say col=3, the optimizer uses that information for estimating the number\n> of rows returned, and figuring out the type of join and order of join to\n> perform, not just for \"use index, don't use index\" decisions.\n\n    Ask your boys to give you a training session for \"reading\"\n    when they go to bed tonight - and greet them from the \"police\n    officer\" :-)\n\n    I said \"to TOAST tables\", not \"TOASTed columns\".\n\n    Their master tables will always have the statistics,\n    including those for toasted columns, if you ask for them via\n    ANALYZE.\n\n    In normal operation, no one would ever know if a TOAST table\n    is accessed during his query - not even the planner or\n    optimizer. It's totally transparent and the only one\n    accessing the TOAST tables is the toaster himself - and he\n    knows what he does.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#================================================== [email protected] #\n\n\n", "msg_date": "Thu, 6 Jul 2000 23:21:59 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: update on TOAST status'" }, { "msg_contents": "Mike Mascari wrote:\n> Chris Bitmead wrote:\n> >\n> > Jan Wieck wrote:\n> >\n> > > Toast tables aren't automatically created for system\n> > > catalogs. Thus I'll add\n> > >\n> > > ALTER TABLE pg_rewrite CREATE TOAST TABLE\n> > >\n> > > to initdb. So we'll get unlimited view complexity for free.\n> > > As soon as arrays are toastable, we might want to add\n> > > pg_class because of relacl too.\n> >\n> > Why would we want system catalogs toastable?\n>\n> I assume this will allow for Views with large rewrite rules which\n> currently are limited in size.\n\n    Absolutely correct.\n\n    With the code in place (after a few more fixes) I was able to\n    create a \"SELECT *\" view from a 681 attribute table. 
The\n resulting rule is about 170K! And more complex things are\n possible too now, because the rewrite rule size is not\n limited any longer (as long as you have enough CPU, ram and\n disk space).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Fri, 7 Jul 2000 00:01:43 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: update on TOAST status" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> - VACUUM doesn't propagate ANALYZE to TOAST tables.\n>> Statistics for them are needless because the toast access\n>> is allways hardcoded indexed.\n\n> I don't think statistics are insignificant for TOASTed columns.\n\nHe didn't say that! I think what he meant is that there's no need for\nstatistics associated with the TOAST table itself, and AFAICS that's OK.\n\nBTW, I have thought of a potential problem with indexes on toasted\ncolumns. As I understand Jan's current thinking, the idea is\n\n1. During storage of the tuple in the main table, any oversize fields\nget compressed/moved off.\n\n2. The toasted item in the finished main tuple gets handed to the index\nroutines to be stored in the index.\n\nNow, storing the toasted item in the index tuple seems fine, but what\nI do not like here is the implication that all the comparisons needed\nto find where to *put* the index tuple are done using a pretoasted\nvalue. That seems to imply dozens of redundant decompressions/fetches,\nanother one for each key comparison we have to do.\n\nJan, do you have a way around this that I missed?\n\nOne simple answer that might help for other scenarios too is to keep\na small cache of the last few values that had to be untoasted. 
Maybe\nwe only need it for moved-off values --- it could be that decompression\nis fast enough that we should just do it over rather than trying to\ncache.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Jul 2000 18:08:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update on TOAST status' " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> >> - VACUUM doesn't propagate ANALYZE to TOAST tables.\n> >> Statistics for them are needless because the toast access\n> >> is allways hardcoded indexed.\n>\n> > I don't think statistics are insignificant for TOASTed columns.\n>\n> He didn't say that! I think what he meant is that there's no need for\n> statistics associated with the TOAST table itself, and AFAICS that's OK.\n>\n> BTW, I have thought of a potential problem with indexes on toasted\n> columns. As I understand Jan's current thinking, the idea is\n>\n> 1. During storage of the tuple in the main table, any oversize fields\n> get compressed/moved off.\n>\n> 2. The toasted item in the finished main tuple gets handed to the index\n> routines to be stored in the index.\n\n Right.\n\n> Now, storing the toasted item in the index tuple seems fine, but what\n> I do not like here is the implication that all the comparisons needed\n> to find where to *put* the index tuple are done using a pretoasted\n> value. That seems to imply dozens of redundant decompressions/fetches,\n> another one for each key comparison we have to do.\n\n Dozens - right.\n\n I just did a little gdb session tracing a\n\n SELECT ... WHERE toasted = 'xxx'\n\n The table has 151 rows and an index on 'toasted'. It needed 6\n fetches of the attribute. Better than good, because 2^6 is\n only 64, so btree did a perfect job. 
Anyhow, in the case of a\n    real TOASTed (read burned) value, it'd mean 6 index scans to\n    recreate the on-disk stored representation plus 6\n    decompression loops to get the plain one to compare against.\n    What the hell would an \"IN (SELECT ...)\" cause?\n\n> Jan, do you have a way around this that I missed?\n>\n> One simple answer that might help for other scenarios too is to keep\n> a small cache of the last few values that had to be untoasted. Maybe\n> we only need it for moved-off values --- it could be that decompression\n> is fast enough that we should just do it over rather than trying to\n> cache.\n\n    I'm still arguing that indexing huge values is a hint for a\n    misleading schema. If this is true, propagating toasted\n    attributes into indices is a dead-end street and I'd have to\n    change the heap-access<->toaster interface so that the\n    modified (stored) main tuple isn't visible to the following\n    code (that does the index inserts).\n\n    What is the value of supporting index tuples >2K? Support of\n    braindead schemas? I can live without it!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#================================================== [email protected] #\n\n\n", "msg_date": "Fri, 7 Jul 2000 02:05:07 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: update on TOAST status'" }, { "msg_contents": "> Bruce Momjian wrote:\n> > > - VACUUM doesn't propagate ANALYZE to TOAST tables.\n> > > Statistics for them are needless because the toast access\n> > > is always hardcoded indexed.\n> >\n> > I don't think statistics are insignificant for TOASTed columns. 
If I\n> > say col=3, the optimizer uses that information for estimating the number\n> > of rows returned, and figuring out the type of join and order of join to\n> > perform, not just for \"use index, don't use index\" decisions.\n> \n> Ask your boys to give you a training session for \"reading\"\n> when they go to bed tonight - and greet them from the \"police\n> officer\" :-)\n\nSure.\n\n> \n> I said \"to TOAST tables\", not \"TOASTed columns\".\n> \n> Their master tables will allways have the statistics,\n> including those for toasted columns, if you ask for them via\n> ANALYZE.\n> \n> In normal operation, noone would ever know if a TOAST table\n> is accessed during his query - not even the planner or\n> optimmizer. It's totally transparent and the only one\n> accessing the TOAST tables is the toaster himself - and he\n> knows what he does.\n> \n\nOh, sure, got it. It is the toast table that doesn't need stats.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jul 2000 21:58:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update on TOAST status'" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> Tom Lane wrote:\n>> One simple answer that might help for other scenarios too is to keep\n>> a small cache of the last few values that had to be untoasted. Maybe\n>> we only need it for moved-off values --- it could be that decompression\n>> is fast enough that we should just do it over rather than trying to\n>> cache.\n\n> I'm still argueing that indexing huge values is a hint for a\n> misleading schema. 
If this is true, propagating toasted\n> attributes into indices is a dead end street and I'd have to\n> change the heap-access<->toaster interface so that the\n> modified (stored) main tuple isn't visible to the following\n> code (that does the index inserts).\n\nBut you'll notice that is *not* what I suggested. A detoasted-value\ncache could be useful in more situations than just an index lookup.\nI don't necessarily say we've got to have it in 7.1, but let's keep\nthe idea in mind in case we start finding there is a bottleneck here.\n\n> What is the value of supporting index tuples >2K?\n\nIf you're toasting the whole main tuple down to <2K, you might find\nyourself toasting individual fields that are a good bit less than\nthat. So I don't think indexing a toasted value will be all that\nunusual.\n\nBut this is all speculation for now. Let's get it working bulletproof\nfor 7.1, and then worry about speedups after we know they are needed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Jul 2000 23:03:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update on TOAST status' " }, { "msg_contents": "Tom Lane wrote:\n> [email protected] (Jan Wieck) writes:\n> > Tom Lane wrote:\n> >> One simple answer that might help for other scenarios too is to keep\n> >> a small cache of the last few values that had to be untoasted. Maybe\n> >> we only need it for moved-off values --- it could be that decompression\n> >> is fast enough that we should just do it over rather than trying to\n> >> cache.\n>\n> > I'm still argueing that indexing huge values is a hint for a\n> > misleading schema. If this is true, propagating toasted\n> > attributes into indices is a dead end street and I'd have to\n> > change the heap-access<->toaster interface so that the\n> > modified (stored) main tuple isn't visible to the following\n> > code (that does the index inserts).\n>\n> But you'll notice that is *not* what I suggested. 
A detoasted-value\n\n    Haven't missed it in the first read - of course.\n\n> cache could be useful in more situations than just an index lookup.\n> I don't necessarily say we've got to have it in 7.1, but let's keep\n> the idea in mind in case we start finding there is a bottleneck here.\n>\n> > What is the value of supporting index tuples >2K?\n>\n> If you're toasting the whole main tuple down to <2K, you might find\n> yourself toasting individual fields that are a good bit less than\n> that. So I don't think indexing a toasted value will be all that\n> unusual.\n\n    Exactly that's why I'm asking if we wouldn't be better off by\n    limiting index tuples to (blocksize - overhead) / 4 and\n    always storing plain, untoasted values in indices.\n\n    I've asked now a couple of times \"who really has the need for\n    indexing huge values\"? All responses I got so far were of\n    the kind \"would be nice if we support it\" or \"I don't like\n    such restrictions\". But no one really said \"I need it\".\n\n> But this is all speculation for now. Let's get it working bulletproof\n> for 7.1, and then worry about speedups after we know they are needed.\n\n    Let me speculate too a little.\n\n    The experience I have up to now is that the saved time from\n    requiring fewer blocks in the buffer cache outweighs the cost\n    of decompression. Especially with our algorithm, because it\n    is byte oriented (instead of Huffman coding being based on a\n    bit stream), causing it to be extremely fast on\n    decompression. And the technique of moving off values from\n    the main heap causes the main tuples to be much smaller. 
As\n    long as the toasted values aren't used in qualification or\n    joining, only their references move around through the\n    various executor steps, and only those values that are part\n    of the final result set need to be fetched when sending them\n    to the client.\n\n    Given a limited amount of total memory available for one\n    running postmaster, we save a lot of disk I/O and hold more\n    values in their compressed format in the shared buffers. With\n    the limit on total memory, the size of the buffer cache must\n    be lowered by the size of the new detoasted cache, and that\n    only if we make it shared too. Given further an average of\n    50% compression ratio (what's not unlikely with typical input\n    like html pages), one cached detoasted value would require\n    two compressed ones to go away.\n\n    Wouldn't really surprise me if we gain speed from it in the\n    average query. Even if some operations might slow down\n    (sorting on maybe toasted fields).\n\n    We need to see some results and wait for reports for this.\n    But we know already that it can cause trouble with indexed\n    fields, because these are likely to be used for comparison\n    during scans. So do we want to have indices storing plain\n    values always and limit them in the index-tuple size or not?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#================================================== [email protected] #\n\n\n", "msg_date": "Fri, 7 Jul 2000 13:30:15 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: update on TOAST status'" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> ... So do we want to have indices storing plain\n> values always and limit them in the index-tuple size or not?\n\nI think not: it will be seen as a robustness failure, even (or\nespecially) if it doesn't happen often. 
I can see the bug reports now:\n\"Hey! I tried to insert a long value in my field, and it didn't work!\nI thought you'd fixed this bug?\"\n\nYou make good arguments that we shouldn't be too concerned about the\nspeed of access to toasted index values, and I'm willing to accept\nthat point of view (at least till we have hard evidence about it).\nBut when I say \"it should be bulletproof\" I mean it should *work*,\nwithout imposing arbitrary limits on the user. Arbitrary limits are\nexactly what we are trying to eliminate.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jul 2000 12:03:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update on TOAST status' " }, { "msg_contents": "Tom Lane wrote:\n> [email protected] (Jan Wieck) writes:\n> > ... So do we want to have indices storing plain\n> > values always and limit them in the index-tuple size or not?\n>\n> I think not: it will be seen as a robustness failure, even (or\n> especially) if it doesn't happen often. I can see the bug reports now:\n> \"Hey! I tried to insert a long value in my field, and it didn't work!\n> I thought you'd fixed this bug?\"\n>\n> You make good arguments that we shouldn't be too concerned about the\n> speed of access to toasted index values, and I'm willing to accept\n> that point of view (at least till we have hard evidence about it).\n> But when I say \"it should be bulletproof\" I mean it should *work*,\n> without imposing arbitrary limits on the user. Arbitrary limits are\n> exactly what we are trying to eliminate.\n\n    After debugging something I thought was a bug in the toaster,\n    I've found something really causing headaches.\n\n    TOAST AS IS IS NOT CAPABLE OF HOLDING INDEXED VALUES!\n\n    It appears that btree indices (at least) can keep references\n    to old toast values that survive a VACUUM! Seems these\n    references live in nodes actually not referring to a heap\n    tuple any more, but used during tree traversal in\n    comparisons. 
As if an index tuple delete from a btree does not\n    necessarily cause the index value to disappear from the\n    btree completely. It'll never be returned by an index scan,\n    but the value is still there somewhere.\n\n    Everything is OK with this up to a VACUUM run. The toaster\n    uses SnapShotAny to fetch toast values. So an external value\n    can be fetched by the toaster even if it is already deleted\n    and committed. If he has a reference somewhere, he always\n    has a share or higher lock on the main relation\n    preventing VACUUM from mangling up the toast relation (I moved\n    toast relation vacuuming into the lock time of the main table\n    recently).\n\n    But in the above case it is already vacuumed and not present\n    any more. Now the btree traversal needs to compare against a\n    value, long gone to the bit heaven, and that cannot work with\n    the toast architecture.\n\n    Seems the designs of btree and toast are colliding. As soon\n    as \"text\" is toastable, this'll hurt - be warned.\n\n    AFAICS, we need to detoast values for index inserts always\n    and have another toaster inside the index access methods at\n    some day. In the meantime we should decide a safe maximum\n    index tuple size and emit an explanatory error message on the\n    attempt to insert oversized index entries instead of possibly\n    corrupting the index.\n\n    Comment!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  
#\n#================================================== [email protected] #\n\n\n", "msg_date": "Tue, 11 Jul 2000 14:02:34 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: update on TOAST status'" }, { "msg_contents": "At 14:02 11/07/00 +0200, Jan Wieck wrote:\n> AFAICS, we need to detoast values for index inserts always\n> and have another toaster inside the index access methods at\n> some day.\n\nWe might not need it...at least not in the first pass.\n\n\n> In the meantime we should decide a safe maximum\n> index tuple size and emit an explanatory error message on the\n> attempt to insert oversized index entries instead of possibly\n> corrupting the index.\n\nCan I suggest that we also put out a warning when defining an index using a\nfield with a (potentially) unlimited size? Indexing a text field will\nmostly be a bizarre thing to do, but, eg, indexing the first 255 chars of a\ntext field (via substr) might not be.\n\n\n----------------------------------------------------------------\nPhilip Warner                    | __---_____\nAlbatross Consulting Pty. Ltd.   |----/ - \\\n(A.C.N. 008 659 498)             | /(@) ______---_\nTel: (+61) 0500 83 82 81         | _________ \\\nFax: (+61) 0500 83 82 82         | ___________ |\nHttp://www.rhyme.com.au          | / \\|\n                                 | --________--\nPGP key available upon request,  | /\nand from pgp5.ai.mit.edu:11371   |/\n", "msg_date": "Tue, 11 Jul 2000 22:20:40 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update on TOAST status'" }, { "msg_contents": "Philip Warner wrote:\n> At 14:02 11/07/00 +0200, Jan Wieck wrote:\n> > AFAICS, we need to detoast values for index inserts always\n> > and have another toaster inside the index access methods at\n> > some day.\n>\n> We might not need it...at least not in the first pass.\n\n    The thing is actually broken and needs a fix. 
As soon as\n    \"text\" is toastable, it can happen everywhere that text is\n    toasted even if its actual plain value would perfectly fit\n    into an index tuple. Think of a table with 20 text columns,\n    where the indexed one has a 1024-byte value, while all\n    others hold 512 bytes. In that case, the indexed one is the\n    biggest and gets toasted first. And if all the data is of\n    nature that compression doesn't gain enough, it might still\n    be the biggest one after that step and will be considered for\n    move off ... boom.\n\n    We can't let this in in the first pass!\n\n> > In the meantime we should decide a safe maximum\n> > index tuple size and emit an explanatory error message on the\n> > attempt to insert oversized index entries instead of possibly\n> > corrupting the index.\n>\n> Can I suggest that we also put out a warning when defining an index using a\n> field with a (potentially) unlimited size? Indexing a text field will\n> mostly be a bizarre thing to do, but, eg, indexing the first 255 chars of a\n> text field (via substr) might not be.\n\n    Marking it BOLD somewhere in the release notes, the CREATE\n    INDEX doc and some other places should be enough. Such a\n    message at every CREATE INDEX is annoying.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#================================================== [email protected] #\n\n\n", "msg_date": "Tue, 11 Jul 2000 14:38:18 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: update on TOAST status'" }, { "msg_contents": "At 14:38 11/07/00 +0200, Jan Wieck wrote:\n>> Can I suggest that we also put out a warning when defining an index using a\n>> field with a (potentially) unlimited size? 
Indexing a text field will\n>> mostly be a bizarre thing to do, but, eg, indexing the first 255 chars of a\n>> text field (via substr) might not be.\n>\n> Marking it BOLD somewhere in the release notes, the CREATE\n> INDEX doc and some other places should be enough. Such a\n> message at every CREATE INDEX is annoying.\n\nThe suggestion was only if the index contained a text, lztext etc field,\nbut no problem. The way I read your suggestion was that I'd get a real\nerror when doing an insert if the text was too large.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 11 Jul 2000 22:57:28 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update on TOAST status'" }, { "msg_contents": "Philip Warner wrote:\n> At 14:38 11/07/00 +0200, Jan Wieck wrote:\n> >> Can I suggest that we also put out a warning when defining an index using a\n> >> field with a (potentially) unlimited size? Indexing a text field will\n> >> mostly be a bizarre thing to do, but, eg, indexing the first 255 chars of a\n> >> text field (via substr) might not be.\n> >\n> > Marking it BOLD somewhere in the release notes, the CREATE\n> > INDEX doc and some other places should be enough. Such a\n> > message at every CREATE INDEX is annoying.\n>\n> The suggestion was only if the index contained a text, lztext etc field,\n> but no problem. The way I read your suggestion was that I'd get a real\n> error when doing an insert if the text was too large.\n\n Yes, that's what I'm after. 
It's too fragile IMHO to check on\n multi column indices with char(n) or so if resulting index\n tuples will fit in the future.\n\n The atttypmod field on NUMERIC columns for example doesn't\n tell the easy way how big the internal representation might\n grow. And what about variable size user defined types that\n are marked toastable? Can you estimate the maximum internal\n storage size for them?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Tue, 11 Jul 2000 15:08:57 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: update on TOAST status'" }, { "msg_contents": "At 15:08 11/07/00 +0200, Jan Wieck wrote:\n>\n> The atttypmod field on NUMERIC columns for example doesn't\n> tell the easy way how big the internal representation might\n> grow. And what about variable size user defined types that\n> are marked toastable? Can you estimate the maximum internal\n> storage size for them?\n>\n\nWell, uncompressed size would be a good upper estimate, since you may be\npassed already compressed data...\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 11 Jul 2000 23:27:32 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update on TOAST status'" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> After debugging something I thought was a bug in the toaster,\n> I've found something really causing headaches.\n> TOAST AS IS IS NOT CAPABLE OF HOLDING INDEXED VALUES!\n> It appears that brtee indices (at least) can keep references\n> to old toast values that survive a VACUUM! Seems these\n> references live in nodes actually not referring to a heap\n> tuple any more, but used during tree traversal in\n> comparisions. As if an index tuple delete from a btree not\n> necessarily causes the index value to disappear from the\n> btree completely. It'll never be returned by an index scan,\n> but the value is still there somewhere.\n\nOooh, nasty. Probably the keys you are looking at are in upper-\nlevel btree pages and indicate the ranges of keys found in lower\npages, rather than being pointers to real tuples.\n\nOne answer is to rebuild indexes from scratch during VACUUM,\nbefore we vacuum the TOAST relation. We've been talking about\ndoing that for a long time. Maybe it's time to bite the bullet\nand do it. (Of course that means fixing the relation-versioning\nproblem, which it seems we don't have a consensus on yet...)\n\n> Seems the designs of btree and toast are colliding. As soon\n> as \"text\" is toastable, this'll hurt - be warned.\n\nText *is* marked toastable in current CVS...\n\n> AFAICS, we need to detoast values for index inserts allways\n> and have another toaster inside the index access methods at\n> some day. 
In the meantime we should decide a safe maximum\n> index tuple size and emit an explanative error message on the\n> attempt to insert oversized index entries instead of possibly\n> corrupting the index.\n\nI don't like that --- seems it would put a definite crimp in the\nwhole point of TOAST, which is not to have arbitrary limits on field\nsizes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jul 2000 11:39:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update on TOAST status' " }, { "msg_contents": "Tom Lane wrote:\n> [email protected] (Jan Wieck) writes:\n> > After debugging something I thought was a bug in the toaster,\n> > I've found something really causing headaches.\n> > TOAST AS IS IS NOT CAPABLE OF HOLDING INDEXED VALUES!\n> > It appears that brtee indices (at least) can keep references\n> > to old toast values that survive a VACUUM! Seems these\n> > references live in nodes actually not referring to a heap\n> > tuple any more, but used during tree traversal in\n> > comparisions. As if an index tuple delete from a btree not\n> > necessarily causes the index value to disappear from the\n> > btree completely. It'll never be returned by an index scan,\n> > but the value is still there somewhere.\n>\n> Oooh, nasty. Probably the keys you are looking at are in upper-\n> level btree pages and indicate the ranges of keys found in lower\n> pages, rather than being pointers to real tuples.\n\n So our btree implementation is closer to an ISAM file\n organization than to a real tree? Anyway, either one or the\n other is the reason that an attempt to insert a new value\n results in an lztext_cmp() call that cannot be resolved due\n to a missing toast value.\n\n I added some checks to the detoaster just to throw an\n elog(ERROR) instead of a coredump in such a case earlier\n today.\n\n> One answer is to rebuild indexes from scratch during VACUUM,\n> before we vacuum the TOAST relation. 
We've been talking about\n> doing that for a long time.  Maybe it's time to bite the bullet\n> and do it.  (Of course that means fixing the relation-versioning\n> problem, which it seems we don't have a consensus on yet...)\n\n    Doesn't matter if we do it before or after, because the main\n    heap shouldn't contain any more toast references to deleted\n    (later to be vacuumed) toast entries at that time.\n\n    Anyway, it's a nice idea that should solve the problem. For\n    indices, which can always be rebuilt from the heap data, I\n    don't see such a big need for the versioning. Only that a\n    partially rebuilt index (rebuild crashed in the middle) needs\n    another vacuum before the DB is accessible again. How\n    often does that happen?\n\n    So why not have vacuum truncate the index file to zero\n    and rebuild it from scratch in place? Can anyone access an\n    index while vacuum has a lock on its heap?\n\n>\n> > Seems the designs of btree and toast are colliding. As soon\n> > as \"text\" is toastable, this'll hurt - be warned.\n>\n> Text *is* marked toastable in current CVS...\n\n    Whow - haven't noticed.\n\n    Will run my tests against text ... parallel. Does it have any\n    impact on the regression test execution time? Does any toast\n    table (that should now be there in the regression DB) lose\n    its zero size during the tests?\n\n>\n> > AFAICS, we need to detoast values for index inserts allways\n> > and have another toaster inside the index access methods at\n> > some day. In the meantime we should decide a safe maximum\n> > index tuple size and emit an explanative error message on the\n> > attempt to insert oversized index entries instead of possibly\n> > corrupting the index.\n>\n> I don't like that --- seems it would put a definite crimp in the\n> whole point of TOAST, which is not to have arbitrary limits on field\n> sizes.\n\n    If we can solve it, let's do so. 
If we cannot, let's restrict\n it for 7.1.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Tue, 11 Jul 2000 21:33:08 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: update on TOAST status'" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> Tom Lane wrote:\n>> One answer is to rebuild indexes from scratch during VACUUM,\n>> before we vacuum the TOAST relation. We've been talking about\n>> doing that for a long time. Maybe it's time to bite the bullet\n>> and do it. (Of course that means fixing the relation-versioning\n>> problem, which it seems we don't have a consensus on yet...)\n\n> Doesn't matter if we do it before or after, because the main\n> heap shouldn't contain any more toast references to deleted\n> (later to be vacuumed) toast entries at that time.\n\nNo, we must fix the indexes first, so that they contain no bogus\nvalues if we fail while vacuuming the TOAST relation.\n\n> Anyway, it's a nice idea that should solve the problem. For\n> indices, which can allways be rebuilt from the heap data, I\n> don't see such a big need for the versioning. Only that a\n> partially rebuilt index (rebuild crashed in the middle) needs\n> another vacuum before the the DB is accessible again. How\n> often does that happen?\n\nIf it happens just once on one of your system-table indices, you\nwon't be happy. We've sweated hard to make VACUUM crash-safe,\nand I don't want to throw that away because of TOAST.\n\n>> Text *is* marked toastable in current CVS...\n\n> Whow - haven't noticed.\n\n> Will run my tests against text ... parallel. Does it have any\n> impact on the regression test execution time? 
Does any toast\n> table (that should now be there in the regression DB) loose\n> it's zero size during the tests?\n\nYes, there are some nonzero-size toast files in there. Haven't\ntried to run any timing tests...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jul 2000 17:27:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update on TOAST status' " }, { "msg_contents": "tOM lANE wrote:\n> [email protected] (Jan Wieck) writes:\n> > Tom Lane wrote:\n> >> One answer is to rebuild indexes from scratch during VACUUM,\n> >> before we vacuum the TOAST relation. We've been talking about\n> >> doing that for a long time. Maybe it's time to bite the bullet\n> >> and do it. (Of course that means fixing the relation-versioning\n> >> problem, which it seems we don't have a consensus on yet...)\n>\n> > Doesn't matter if we do it before or after, because the main\n> > heap shouldn't contain any more toast references to deleted\n> > (later to be vacuumed) toast entries at that time.\n>\n> No, we must fix the indexes first, so that they contain no bogus\n> values if we fail while vacuuming the TOAST relation.\n\n Got me.\n\n> > Anyway, it's a nice idea that should solve the problem. For\n> > indices, which can allways be rebuilt from the heap data, I\n> > don't see such a big need for the versioning. Only that a\n> > partially rebuilt index (rebuild crashed in the middle) needs\n> > another vacuum before the the DB is accessible again. How\n> > often does that happen?\n>\n> If it happens just once on one of your system-table indices, you\n> won't be happy. We've sweated hard to make VACUUM crash-safe,\n> and I don't want to throw that away because of TOAST.\n\n Alternatively we could go for both methods. Does any system\n catalog have an index on a varlena field? If not, we could do\n the classic vacuum on anything that is either a catalog or a\n table that doesn't have a toast relation. 
Then do the lazy\n reindex from scratch on anything left.\n\n>\n> >> Text *is* marked toastable in current CVS...\n>\n> > Whow - haven't noticed.\n>\n> > Will run my tests against text ... parallel. Does it have any\n> > impact on the regression test execution time? Does any toast\n> > table (that should now be there in the regression DB) loose\n> > it's zero size during the tests?\n>\n> Yes, there are some nonzero-size toast files in there. Haven't\n> tried to run any timing tests...\n\n No, there aren't. All you've seen are their indices of 16K\n each. But my tests, formerly using lztext, ran smooth with\n text.\n\n I've looked at textout() and, well, your style of detoasting\n arguments looks alot better and easier. From the way it's\n implemented I assume the per tuple memory context is done\n too, no?\n\n\nJan\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Wed, 12 Jul 2000 00:06:46 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: update on TOAST status'" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> I've looked at textout() and, well, your style of detoasting\n> arguments looks alot better and easier. 
From the way it's\n> implemented I assume the per tuple memory context is done\n> too, no?\n\nNot yet --- I'm running regress tests on it right now, though.\nYou're right that I'm assuming the function routines can leak\nmemory without trouble.\n\n(We might need to avoid leaks in the comparison routines that are used\nfor indexes, but otherwise I think this scheme will work comfortably.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jul 2000 18:18:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update on TOAST status' " }, { "msg_contents": "Tom Lane wrote:\n> [email protected] (Jan Wieck) writes:\n> > I've looked at textout() and, well, your style of detoasting\n> > arguments looks alot better and easier. From the way it's\n> > implemented I assume the per tuple memory context is done\n> > too, no?\n>\n> Not yet --- I'm running regress tests on it right now, though.\n> You're right that I'm assuming the function routines can leak\n> memory without trouble.\n>\n> (We might need to avoid leaks in the comparison routines that are used\n> for indexes, but otherwise I think this scheme will work comfortably.)\n\n That sounds bad. At least not very good.\n\n So we better add a PG_FREEARG_xxx(ptr, argno) macro that does\n the pfree if the pointer is different from the one in the\n argument.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n\n\n", "msg_date": "Wed, 12 Jul 2000 00:46:26 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: update on TOAST status'" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n>> (We might need to avoid leaks in the comparison routines that are used\n>> for indexes, but otherwise I think this scheme will work comfortably.)\n\n> That sounds bad. At least not very good.\n\n> So we better add a PG_FREEARG_xxx(ptr, argno) macro that does\n> the pfree if the pointer is different from the one in the\n> argument.\n\nYes, I already borrowed that idea from your original code. I don't\nlike it a whole lot, but as long as the need for it is confined to\nthe indexable comparison operators I think we can tolerate it.\n\nThe alternative is to hack up the index search routines (and also\ntuplesort.c, and perhaps other places?) to maintain a short-term memory\ncontext for evaluating comparison operators, and reset said context\nfairly frequently. That might be doable but I haven't yet looked into\nwhat it would take.\n\nI'm hoping to commit what I have this evening...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jul 2000 21:05:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update on TOAST status' " } ]
[ { "msg_contents": "The Hermit Hacker wrote:\n> \n> Will you accept modifications to this if submit'd, to make better use of\n> features that PostgreSQL has to improve performance? Just downloaded it\n> and am going to look her through, just wondering if it would be a waste of\n> time for me to suggest changes though :)\n\nIf you can figure out an algorithm that shows these nested messages more\nefficiently on postgres, then that would be a pretty compelling reason\nto move SourceForge to Postgres instead of MySQL, which is totally\nreaching its limits on our site. Right now, neither database appears\nlike it will work, so Oracle is starting to loom on the horizon.\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n", "msg_date": "Wed, 05 Jul 2000 10:00:39 -0700", "msg_from": "Tim Perdue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "\nI've taken this offlist with Tim/Ben to see what we can come up with\n... the thread is/has become too \"heated\" to get anything productive done\n...\n\n\nOn Wed, 5 Jul 2000, Tim Perdue wrote:\n\n> The Hermit Hacker wrote:\n> > \n> > Will you accept modifications to this if submit'd, to make better use of\n> > features that PostgreSQL has to improve performance? Just downloaded it\n> > and am going to look her through, just wondering if it would be a waste of\n> > time for me to suggest changes though :)\n> \n> If you can figure out an algorithm that shows these nested messages more\n> efficiently on postgres, then that would be a pretty compelling reason\n> to move SourceForge to Postgres instead of MySQL, which is totally\n> reaching its limits on our site. 
Right now, neither database appears\n> like it will work, so Oracle is starting to loom on the horizon.\n> \n> Tim\n> \n> -- \n> Founder - PHPBuilder.com / Geocrawler.com\n> Lead Developer - SourceForge\n> VA Linux Systems\n> 408-542-5723\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 5 Jul 2000 14:01:38 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "> The Hermit Hacker wrote:\n> > \n> > Will you accept modifications to this if submit'd, to make better use of\n> > features that PostgreSQL has to improve performance? Just downloaded it\n> > and am going to look her through, just wondering if it would be a waste of\n> > time for me to suggest changes though :)\n> \n> If you can figure out an algorithm that shows these nested messages more\n> efficiently on postgres, then that would be a pretty compelling reason\n> to move SourceForge to Postgres instead of MySQL, which is totally\n> reaching its limits on our site. Right now, neither database appears\n> like it will work, so Oracle is starting to loom on the horizon.\n\nAll I can say is, \"Yikes\". Let's see if we can help this guy.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jul 2000 16:45:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "\nWay ahead of you :)\n\n\nBen and I are workign with Tim on this ... I've provided Ben with an\naccount on postgresql.org that he can use, with access to a v7.0 database\nas well as web ... 
\n\n\n\nOn Thu, 6 Jul 2000, Bruce Momjian wrote:\n\n> > The Hermit Hacker wrote:\n> > > \n> > > Will you accept modifications to this if submit'd, to make better use of\n> > > features that PostgreSQL has to improve performance? Just downloaded it\n> > > and am going to look her through, just wondering if it would be a waste of\n> > > time for me to suggest changes though :)\n> > \n> > If you can figure out an algorithm that shows these nested messages more\n> > efficiently on postgres, then that would be a pretty compelling reason\n> > to move SourceForge to Postgres instead of MySQL, which is totally\n> > reaching its limits on our site. Right now, neither database appears\n> > like it will work, so Oracle is starting to loom on the horizon.\n> \n> All I can say is, \"Yikes\". Let's see if we can help this guy.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 6 Jul 2000 18:13:31 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" } ]
[ { "msg_contents": "Uh oh -\n\n on one side I'm a happy camper. pg_dump already ignored pg_*\n tables and since the TOAST tables are named pg_toast_...,\n nothing to be done. I loaded my test DB with TOAST entries,\n dumped and restored it. Anything is there, works perfectly.\n\n But then I added the ALTER TABLE for unlimited rewrite rule\n size to initdb and the problems started. I can create a table\n with 500+ attributes. Also I can create a view on it (the\n rules size is 170K - whow). Anything works pretty well, just\n a pg_dump output is garbage.\n\n Seems the dynamic string buffers used in pg_dump aren't as\n bullet proof as they should. I'm still busy with other\n things, so can someone please take a look on it? Attached is\n an SQL script that creates the table and the view that I\n cannot dump.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #", "msg_date": "Wed, 5 Jul 2000 22:01:49 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "2nd update on TOAST" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> size to initdb and the problems started. I can create a table\n> with 500+ attributes. Also I can create a view on it (the\n> rules size is 170K - whow). Anything works pretty well, just\n> a pg_dump output is garbage.\n\nAre you using current sources? Bruce committed Philip Warner's\npg_dump rewrite a day or so ago (which I thought was way premature,\nbut anyway...). 
Just want to know which pg_dump we're talking about.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Jul 2000 17:18:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2nd update on TOAST " }, { "msg_contents": "At 22:01 5/07/00 +0200, Jan Wieck wrote:\n>\n>     Seems the dynamic string buffers used in pg_dump aren't as\n>     bullet proof as they should. I'm still busy with other\n>     things, so can someone please take a look on it? Attached is\n>     an SQL script that creates the table and the view that I\n>     cannot dump.\n>\n\nIs the TOAST patch on the FTP site the most recent?\n\n\n----------------------------------------------------------------\nPhilip Warner                    |     __---_____\nAlbatross Consulting Pty. Ltd.   |----/       -  \\\n(A.C.N. 008 659 498)             |          /(@)   ______---_\nTel: (+61) 0500 83 82 81         |                 _________  \\\nFax: (+61) 0500 83 82 82         |                 ___________ |\nHttp://www.rhyme.com.au          |                /           \\|\n                                 |    --________--\nPGP key available upon request,  |  /\nand from pgp5.ai.mit.edu:11371   |/\n", "msg_date": "Thu, 06 Jul 2000 12:37:39 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2nd update on TOAST" }, { "msg_contents": "At 17:18 5/07/00 -0400, Tom Lane wrote:\n>[email protected] (Jan Wieck) writes:\n>>     size to initdb and the problems started. I can create a table\n>>     with 500+ attributes. Also I can create a view on it (the\n>>     rules size is 170K - whow). Anything works pretty well, just\n>>     a pg_dump output is garbage.\n>\n>Are you using current sources?  Bruce committed Philip Warner's\n>pg_dump rewrite a day or so ago (which I thought was way premature,\n>but anyway...). 
Just want to know which pg_dump we're talking about.\n\nIt's good that any bugs had a chance to come out ASAP.\n\nHowever, I'd be interested to know what actually happens: I do recall that\npg_dump (both versions) have something like '#define COPY_BUFFER_SIZE\n8192'; if I were to start looking, that's where I'd go first.\n\n\n----------------------------------------------------------------\nPhilip Warner                    |     __---_____\nAlbatross Consulting Pty. Ltd.   |----/       -  \\\n(A.C.N. 008 659 498)             |          /(@)   ______---_\nTel: (+61) 0500 83 82 81         |                 _________  \\\nFax: (+61) 0500 83 82 82         |                 ___________ |\nHttp://www.rhyme.com.au          |                /           \\|\n                                 |    --________--\nPGP key available upon request,  |  /\nand from pgp5.ai.mit.edu:11371   |/\n", "msg_date": "Thu, 06 Jul 2000 12:50:06 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2nd update on TOAST " }, { "msg_contents": "At 12:50 6/07/00 +1000, Philip Warner wrote:\n>>\n>>Are you using current sources?  Bruce committed Philip Warner's\n>>pg_dump rewrite a day or so ago (which I thought was way premature,\n>>but anyway...). Just want to know which pg_dump we're talking about.\n>\n>It's good that any bugs had a chance to come out ASAP.\n>\n>However, I'd be interested to know what actually happens: I do recall that\n>pg_dump (both versions) have something like '#define COPY_BUFFER_SIZE\n>8192'; if I were to start looking, that's where I'd go first.\n>\n\nFurther to this, looking at the code, it now uses 'archputs', which\nreplaces 'fputs', to output the copy buffer (assuming the error is in the\ncopy, not while dumping the definitions); if PQgetline fills the entire\nbuffer (no trailing \\0), then I could imagine both would croak. The\nsimplest test would be to tell PQgetline that the buffer size is 1 byte\nsmaller, and see if it fixes the problem. 
\n\nI'd be interested to hear from someone...at least to know a little more of\nthe circumstances of the error.\n\n\n\n----------------------------------------------------------------\nPhilip Warner                    |     __---_____\nAlbatross Consulting Pty. Ltd.   |----/       -  \\\n(A.C.N. 008 659 498)             |          /(@)   ______---_\nTel: (+61) 0500 83 82 81         |                 _________  \\\nFax: (+61) 0500 83 82 82         |                 ___________ |\nHttp://www.rhyme.com.au          |                /           \\|\n                                 |    --________--\nPGP key available upon request,  |  /\nand from pgp5.ai.mit.edu:11371   |/\n", "msg_date": "Thu, 06 Jul 2000 13:15:55 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2nd update on TOAST " }, { "msg_contents": "At 13:15 6/07/00 +1000, Philip Warner wrote:\n>At 12:50 6/07/00 +1000, Philip Warner wrote:\n>>>\n>>>Are you using current sources?  Bruce committed Philip Warner's\n>>>pg_dump rewrite a day or so ago (which I thought was way premature,\n>>>but anyway...). Just want to know which pg_dump we're talking about.\n>>\n\nOK, I've built from the latest CVS, run initdb etc, created a new DB,\ncreated the megatable & megaview, and done a pg_dump. It works, except that\nI introduced a bug that caused the view to be dumped as a table as well. I\nwill keep looking, and submit a patch shortly. In the mean time, if anyone\ncan reproduce Jan's problem, that would help...\n\n\n----------------------------------------------------------------\nPhilip Warner                    |     __---_____\nAlbatross Consulting Pty. Ltd.   |----/       -  \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 06 Jul 2000 15:28:49 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2nd update on TOAST " }, { "msg_contents": "Philip Warner wrote:\n> At 22:01 5/07/00 +0200, Jan Wieck wrote:\n> >\n> > Seems the dynamic string buffers used in pg_dump aren't as\n> > bullet proof as they should. I'm still busy with other\n> > things, so can someone please take a look on it? Attached is\n> > an SQL script that creates the table and the view that I\n> > cannot dump.\n> >\n> \n> If the TOAST patch on the FTP site the most recent?\n\n No. It's all in the current CVS tree. I removed that patch\n already.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n", "msg_date": "Thu, 6 Jul 2000 11:09:09 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: 2nd update on TOAST" }, { "msg_contents": "> [email protected] (Jan Wieck) writes:\n> > size to initdb and the problems started. I can create a table\n> > with 500+ attributes. Also I can create a view on it (the\n> > rules size is 170K - whow). Anything works pretty well, just\n> > a pg_dump output is garbage.\n> \n> Are you using current sources? Bruce committed Philip Warner's\n> pg_dump rewrite a day or so ago (which I thought was way premature,\n> but anyway...). Just want to know which pg_dump we're talking about.\n\nI committed pg_dump so people could see his changes and start making\nadditions. I can then have him supply incremental patches. 
Keeping\nstuff out of the tree usually causes the author to get frustrated.\n\nOf course, if it comes in too quickly, it can cause chaos if he needs to\nmake major changes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jul 2000 16:35:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2nd update on TOAST" } ]
[ { "msg_contents": "\n\n Small note about build system;\n\n * The ./configure expects to find the template listing in the \"template\" dir,\nbut it is \"src/template\" (see line 132 in configure.in)\n\n * A question: has anyone tried to compile PG after 'make depend'?\n It shows some warning messages. \n \n And a second question: how does one clean the 'depend' files? It is not possible via\n 'make clean' or 'make distclean'. Bug or feature?\n\n IMHO it is not a bad idea to use a \"central clean\" via \n\trm -f `find -name \"depend\"` (or something like this); if I remember\n correctly, the Linux kernel build system uses this \"find\" method, for example.\n \n\n\t\t\t\t\t\tKarel\t\n \n\n", "msg_date": "Wed, 5 Jul 2000 23:44:21 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "build system" }, { "msg_contents": "Karel Zak writes:\n\n> * The ./configure expects to find the template listing in the \"template\" dir,\n> but it is \"src/template\" (see line 132 in configure.in)\n\nNoted.\n\n> * A question: has anyone tried to compile PG after 'make depend'?\n> It shows some warning messages. \n\nElaborate.\n\n> And a second question: how does one clean the 'depend' files?\n\nfind -name depend | xargs rm\n\n> It is not possible via 'make clean' or 'make distclean'. Bug or feature?\n\nConsequence of other features:\n\n* make clean removes all files created by make all, except those that you\n want to include in the distribution.\n\n* make distclean removes all files removed by make clean, plus those\n created by configure\n\n* make maintainer-clean removes all files removed by distclean, as well as\n those that are in the distribution.\n\nThe depend files do not fall into any of these categories. 
Eventually\nwe'll generate dependencies automatically as a side-effect of compilation.\nAt that point they will also fall under the make clean (or\nmaintainer-clean) category.\n\n\n-- \nPeter Eisentraut                  Sernanders väg 10:115\[email protected]                   75262 Uppsala\nhttp://yi.org/peter-e/            Sweden\n\n", "msg_date": "Thu, 6 Jul 2000 18:13:40 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: build system" }, { "msg_contents": "\nOn Thu, 6 Jul 2000, Peter Eisentraut wrote:\n> \n> The depend files do not fall into any of these categories. Eventually\n> we'll generate dependencies automatically as a side-effect of compilation.\n> At that point they will also fall under the make clean (or\n> maintainer-clean) category.\n\n If the build system creates some files, I think the build system\nshould also know how to remove them itself. Expecting some external \n'find ... | rm' is a little dirty...\n\n\t\t\t\t\t\tKarel\n \n\n", "msg_date": "Fri, 7 Jul 2000 09:17:51 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: build system" } ]
[ { "msg_contents": "Hi there,\n\nDoes anyone know if out parameters are supported in pl/pgsql functions?\n\nWhat I mean is something like this:\n\ncreate function f(out int4) returns text as\n'\n\tselect $1 = 3\n\treturn \"\"\n' language 'plpgsql';\n\ndb ==> declare i int4;\ndb ==> select f(i);\ndb ==> select i;\n\ni\n==\n3\n\n\nI have scanned most of the plsql documentation I can lay hands on\nwithout much joy\n- mailing list archives \n- User guide that is shipped with the postgres 7.02\n- Programmer's guide shipped with release 7.02\n- I notice that in the jdbc CallableStatement implementation,\nregisterOutpurParameter merely throws a notImplemented exception - Is\nthis a hint?\n- Bruce Momjian's book - There is a chapter on functions and triggers,\nbut I cannot seem to find this mentioned anywhere.\n\nIf this is not doable, could someone please confirm that so I should\nstop looking. On the other hand, if anyone out there has an idea how to\naccomplish this, please, please help.\n\nKind regards,\n\nRichard.\n", "msg_date": "Wed, 05 Jul 2000 23:06:28 +0000", "msg_from": "Richard Nfor <[email protected]>", "msg_from_op": true, "msg_subject": "pl/pgsql function out parameters" }, { "msg_contents": "> Does anyone know if out parameters are supported in pl/pgsql functions?\n\nYes. They are not supported. 
I've got patches ready to submit which\nrecognize the IN, OUT and INOUT keywords defined in SQL99, but the\npatches will just throw an explicit error if you specify an OUT/INOUT\nparameter.\n\nbtw, everyone: any objections to or comments on the above?\n\n - Thomas\n", "msg_date": "Thu, 06 Jul 2000 13:34:11 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pl/pgsql function out parameters" }, { "msg_contents": "Found it!\ninclude/config.h.in\n#define INDEX_MAX_KEYS 8\n----- Original Message ----- \nFrom: \"Thomas Lockhart\" <[email protected]>\nTo: \"Grigori Soloviov\" <[email protected]>\nSent: Thursday, July 06, 2000 6:30 PM\nSubject: Re: [HACKERS] pl/pgsql function out parameters\n\n\n> > Don't you know how to make it take more than 8 parameters?\n> \n> In the current development tree, the limit is higher than 8 parameters.\n> My recollection is that in earlier code the \"8 parameter limit\" is\n> fairly difficult to work around. Not sure if 7.0 makes it easier; look\n> for a hardcoded #define in include/config.h.\n> \n> - Thomas\n> \n\n", "msg_date": "Thu, 6 Jul 2000 20:00:16 +0400", "msg_from": "\"Grigori Soloviov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pl/pgsql function out parameters" }, { "msg_contents": "Thomas Lockhart wrote:\n\n> > Does anyone know if out parameters are supported in pl/pgsql functions?\n>\n> Yes. They are not supported. I've got patches ready to submit which\n> recognize the IN, OUT and INOUT keywords defined in SQL99, but the\n> patches will just throw an explicit error if you specify an OUT/INOUT\n> parameter.\n>\n> btw, everyone: any objections to or comments on the above?\n>\n> - Thomas\n\nYes, I would like to add my $2/100. 
I do not know what the historical trail\nleading to your functions is, but in the context of a function you normally\nwould not want to define IN/OUT parameters (in Oracle PL/SQL it is\nforbidden (Oracle PL/SQL is clearly derived from Ada)).\nThis forces some discipline on the programmer. I won't say that you never need\nto define something as a subroutine, but wouldn't the following be better :\n- If RETURN is used, then it is a function : forbid IN/OUT parameters\n- If there is an IN/OUT parameter in the declaration, return type of the\n  function may only be opaque\n\nJurgen\n\n", "msg_date": "Thu, 06 Jul 2000 20:09:09 +0200", "msg_from": "Jurgen Defurne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] pl/pgsql function out parameters" } ]
[ { "msg_contents": "I am having the same problem, but I have 'make clean' and run the whole\ninstallation gambit on the latest pgsql package, every single time I got\nthe same answer Undef'd oidvector.\nIs there anything that may be missing from the package?\n\nRob\[email protected]\n\n\nBruce Momjian wrote:\n>\n> Did you do a 'make clean'?\n\nOops, my different installations are getting mixed up. My fault.\n\n\n>\n> > Just downloaded a completely fresh cvs copy. When I\n> > do initdb...\n> >\n> > This user will own all the files and must also own the server\nprocess.\n> >\n> > Creating Postgres database system directory /home/pghack/pgsql/data\n> >\n> > Creating Postgres database system directory\n/home/pghack/pgsql/data/base\n> >\n> > Creating template database in /home/pghack/pgsql/data/base/template1\n\n> > ERROR: Error: unknown type 'oidvector'.\n> >\n> > ERROR: Error: unknown type 'oidvector'.\n> >\n> > syntax error 12 : parse errorinitdb: could not create\ntemplate\n> > database\n> >\n> > ************\n> >\n>\n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n\n", "msg_date": "Wed, 05 Jul 2000 17:10:16 -0700", "msg_from": "rob <[email protected]>", "msg_from_op": true, "msg_subject": "oidvector problem, latest version- has read other oidvector notes" }, { "msg_contents": "rob <[email protected]> writes:\n> I am having the same problem, but I have 'make clean' and run the whole\n> installation gambit on the latest pgsql package, every single time I got\n> the same answer Undef'd oidvector.\n\nAlmost certainly, this indicates that the 7.0 initdb is invoking a 6.5\n(or older) postgres binary. 
Check your PATH.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Jul 2000 09:58:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oidvector problem,\n latest version- has read other oidvector notes " } ]
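Tom's diagnosis -- a stale pre-7.0 `postgres` binary earlier in the PATH shadowing the new one -- can be checked with a short shell loop. A minimal sketch (the directory layout is illustrative, not specific to any install):

```shell
# Walk PATH and report every 'postgres' executable found, in lookup order.
# The first directory listed is the one initdb will invoke; if that is an
# old 6.5 install, it explains the "unknown type 'oidvector'" errors.
IFS=':'
for dir in $PATH; do
  if [ -x "$dir/postgres" ]; then
    echo "postgres found in: $dir"
  fi
done
unset IFS
```

If more than one directory is reported, either fix the PATH ordering or remove the stale installation before re-running initdb.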
[ { "msg_contents": "Hi all- I've recently had to suffer a torturous process of building 7.0.2 on AIX 4.3.2. From my pouring through other newsgroups I've concluded that the problems I've hit exist at least on AIX 4.1.5 and 4.3.3 as well. This note describes the problems I've encountered and how to get around them, as well as a request for some fixes. Please accept my apology if this is known information- I couldn't find anything in this digest here.\n\n1) First of all, you can't use IBM's make utility, gotta use GNU make. So download gmake from http://www-frec.bull.com/docs/download.htm (where its put into an AIX installp package). Be sure to rename /usr/local/bin/make to /usr/local/bin/gmake so as not to confuse it with AIX's make.\n\n2) Now, when you first download postgres and try to run the configure program, if you are using GCC instead of the AIX compiler (which is expensive), you have to use the command:\n\n./configure --with-template=aix_gcc\n\nThis forces GCC. I used GCC 2.95.2. \n\n2a) If you need to install GCC there are some AIX packages you need. I installed GCC and typed \"gcc\" and it responded \"No input files found\". Sounds good. But when I tried the configure command again it still fails to find a compiler. So I write a little test program and try to compile it and it gives me an error \"installation problem, cannot exec `as'\". Well, as it turns out, AIX doesn't install the assembler by default, you have to go and install the package \"bos.adt.tools\". That's a wonderful IBM brainchild. :) Ok, that's that mystery solved.\n\n2b) Next, it gave me errors looking for a c++ compiler, so be sure to download gcc.g++.2.95.2 which installs AFTER gcc.2.95.2.\n\n2c) Well, then I try compiling my test program with g++ and it complains about finding libm.a, so now we have to install the package \"bos.adt.libm\". So dumb. 
:)\n\n3) Ok so we're finally ready to build, and as soon as I type 'gmake', I get this error: \n\nMaking postgres.imp\n ./backend/port/aix/mkldexport.sh postgres /usr/local/bin > postgres.imp nm: postgres: 0654-200 Cannot open the specified file.\n nm: A file or directory in the path name does not exist.\n\nThis is apparently a bug in the make scripts for Postgres. I did find a workaround which was documented in one of the newsgroups. I went into the ./src/backend directory and did 'gmake', and even though it failed I then copied the ./src/backend/postgres.imp file to ./src, and that seemed to correct the problem.\n\n4) So I continue my gmake'ing, and this time it fails badly trying to compile libpq++.so. The same person that suggested the postgres.imp workaround also said that they couldn't get the c++ portion to compile, so I hand edited the Makefile.global file in ./src and commented out the line \"HAVE_Cplusplus=true\"\n\nThis, finally, at long last (and after all the trouble of installing G++ and libm!), got the rest of Postgres 7.0.2 to compile.\n\nOh, and as the make output scrolled by, I see that it failed as well building some plpsql stuff, but it was non fatal. There were also a zillion warnings, many of them about multiple type declarations for int8, int32, etc.\n\nNow on to the installation (which I haven't done yet)!!\n\nI hope this information is helpful to any of you building this on AIX, and hopefully we can have someone from the Postgres team revisit the AIX installation process! The missing AIX packages stuff could be added to the FAQ_AIX file, and hopefully the error in the makefile and the problem linking libpq++.so can be fixed. 
If the latter is a G++ problem (would not surprise me!), keep in mind that many of us AIX users don't use xlC because IBM licenses the damned thing, so GCC is more economical!\n\nBest regards,\n\nRichard Sand ([email protected])\nhttp://www.vgalleries.com", "msg_date": "Wed, 5 Jul 2000 21:47:56 -0400", "msg_from": "[email protected] (Richard Sand)", "msg_from_op": true, "msg_subject": "Lessons learned on how to build 7.0.2 on AIX 4.x" } ]
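Richard's step 4 (commenting out `HAVE_Cplusplus` in `src/Makefile.global`) can be scripted with sed rather than hand-edited. A minimal sketch: steps 2-3 are left as comments because they need the real 7.0.2 source tree, and the step-4 edit is demonstrated against a stand-in `Makefile.global` created in a scratch directory:

```shell
# Condensed AIX 4.x workaround sketch.
#
#   ./configure --with-template=aix_gcc       # step 2: force gcc over xlC
#   (cd src/backend && gmake)                 # step 3: fails, but still
#   cp src/backend/postgres.imp src/          #         emits postgres.imp
#
cd "$(mktemp -d)"                             # scratch dir for the demo
printf 'CC=gcc\nHAVE_Cplusplus=true\n' > Makefile.global    # stand-in file
# Step 4: comment out HAVE_Cplusplus so the failing libpq++ link is skipped.
sed 's/^HAVE_Cplusplus=true/# HAVE_Cplusplus=true/' Makefile.global > Makefile.global.tmp
mv Makefile.global.tmp Makefile.global
cat Makefile.global
```

This leaves the rest of `Makefile.global` untouched, which is safer than editing by hand when the file is regenerated across configure runs.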
[ { "msg_contents": "\nHi ...\n\nPostgreSQL, 4 years ago, became a Proudly Canadian Open Source Project,\nwith developers around the world (16 out of ~21 contributing developers\nbeing non-US citizens). Over those 4 years, we've had various people pop\nup suggesting \"we should be under a GPL license\", to which the almost\ninstantaneous reply being \"over our combined dead bodies\".\n\nEveryone has their own opinions about both the GPL and the BSD licenses,\nwith the religious arguments between the two being as \"interesting\" as\nthose between Linux and FreeBSD ...\n\nWe all have our preferences, and we can argue those until we are blue in\nthe face, but that would make little differences. PostgreSQL falls under\nthe BSD license, and that will not change ... it is the license that\nBerkeley imposed on Postgres from day one, it is the license that Jolly\nand Andrew handed the code over to us under, and it is the one that\nPostgreSQL itself will impose until \"the end of time\".\n\nRecently, Landmark/Great Bridge sent us a proposed revision to our\nexisting license that, from what I can tell, has two paragraphs that\npretty instantly none of the non-US developers felt comfortable with ...\nand that I, personally, could never agree to.\n\nI've read, and re-read, this license since the first time I saw it ... I\nlike the extension of the 'liability/warranty' sections to encompass \"all\ndevelopers\" vs it just encompassing \"University of Berkeley\", and am\nshocked that we never thought of this before, as well as pleased that our\nnew community members (L/GB) took the time to contribute this ...\n\nIncluded below is what I would like to replace our current COPYRIGHT file\nwith, unless any of the developers have any serious concerns about it\nand/or I've mis-read something in it that \"loses\" the BSD License appeal\nto it. I do not believe that *extending* the license reduces/blemishes\nthe BSD openness of the license ... 
maybe I'm wrong ...\n\nIMHO, the current COPYRIGHT we have is/was only good until 1996, when we\ntook over the code ... what is included doesn't change the terms or\nmeaning of the COPYRIGHT, it only extends it to cover those developing the\ncode from '96 on ...\n\nI wish to publicly thank Landmark/Great Bridge for providing the basis for\nthese changes, as their contribution has provided us with a direction to\nfocus on, instead of the usual \"we need to change the license\" that\nhappens bi-yearly, and then dies off with no change ...\n\nI would like to plug this in early next week, unless someone can see\nsomething major that makes them feel uncomfortable ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n===========================================================================\n\n\nPostgreSQL Data Base Management System (formerly known as Postgres95)\n\nThis directory contains the _______ release of PostgreSQL, as well as\nvarious post-release patches in the patches directory. See INSTALL for\nthe installation notes and HISTORY for the changes.\n\nWe also have a WWW home page located at: http://www.postgreSQL.org\n\n-------------------------\n\nPostgreSQL is not public domain software. 
It is copyrighted by the\nUniversity of California but may be used according to the following\nlicensing terms:\n\nPOSTGRES95 Data Base Management System (formerly known as Postgres, then\nas Postgres95).\n\nCopyright (c) 1994-6 Regents of the University of California\n\nPermission to use, copy, modify, and distribute this software and its\ndocumentation for any purpose, without fee, and without a written\nagreement is hereby granted, provided that the above copyright notice and\nthis paragraph and the following two paragraphs appear in all copies.\n\nIN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR\nDIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING\nLOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS\nDOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF\nTHE POSSIBILITY OF SUCH DAMAGE.\n\nTHE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES,\nINCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY\nAND FITNESS FOR A PARTICULAR PURPOSE. 
THE SOFTWARE PROVIDED HEREUNDER IS\nON AN \"AS IS\" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS\nTO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n\n-------------------------\n\nCopyright ( 1996, 1997, 1998, 1999, 2000 by various contributors (as\nidentified in HISTORY) (collectively \"Developers\") which may be used\naccording to the following licensing terms:\n\nWorldwide permission to use, copy, modify, and distribute this software\nand its documentation for any purpose, without fee, and without a written\nagreement is hereby granted, on a non-exclusive basis, provided that the\nabove copyright notice, this paragraph and the following paragraphs appear\nin all copies:\n\nIN NO EVENT SHALL ANY DEVELOPER BE LIABLE TO ANY PARTY FOR DIRECT,\nINDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING,\nWITHOUT LIMITATION, LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE\nAND ITS DOCUMENTATION, EVEN IF THE DEVELOPER HAS BEEN ADVISED OF THE\nPOSSIBILITY OF SUCH DAMAGE.\n\nTHE DEVELOPERS SPECIFICALLY DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED\nINCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE, NEED, OR QUALITY, AND ANY IMPLIED\nWARRANTY FROM COURSE OF DEALING OR USAGE OF TRADE. IN ADDITION, THERE IS\nNO IMPLIED WARRANTY AGAINST INTERFERENCE WITH ENJOYMENT OR AGAINST\nINFRINGEMENT. THE SOFTWARE AND DOCUMENTATION PROVIDED HEREUNDER IS ON AN\n\"AS IS\" BASIS. NO DEVELOPER HAS ANY OBLIGATION TO PROVIDE MAINTENANCE,\nSUPPORT, UPDATES, ENHANCEMENTS OR MODIFICATIONS TO OR FOR THE SOFTWARE OR\nDOCUMENTATION.\n\nBY USING THIS SOFTWARE YOU AGREE TO THESE TERMS AND CONDITIONS. 
IF YOU DO\nNOT AGREE TO THESE TERMS AND CONDITIONS, YOU SHOULD NOT USE THIS SOFTWARE.\n\n\n\n\n", "msg_date": "Wed, 5 Jul 2000 23:11:16 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL & the BSD License " }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> Hi ...\n> \n> PostgreSQL, 4 years ago, became a Proudly Canadian Open Source Project,\n> with developers around the world (16 out of ~21 contributing developers\n> being non-US citizens). Over those 4 years, we've had various people pop\n> up suggesting \"we should be under a GPL license\", to which the almost\n> instantaneous reply being \"over our combined dead bodies\".\n> \n...\n\n> ===========================================================================\n> \n> PostgreSQL Data Base Management System (formerly known as Postgres95)\n> \n> This directory contains the _______ release of PostgreSQL, as well as\n> various post-release patches in the patches directory. See INSTALL for\n> the installation notes and HISTORY for the changes.\n> \n> We also have a WWW home page located at: http://www.postgreSQL.org\n> \n> -------------------------\n> \n> PostgreSQL is not public domain software. 
It is copyrighted by the\n> University of California but may be used according to the following\n> licensing terms:\n> \n> POSTGRES95 Data Base Management System (formerly known as Postgres, then\n> as Postgres95).\n> \n> Copyright (c) 1994-6 Regents of the University of California\n> \n> Permission to use, copy, modify, and distribute this software and its\n> documentation for any purpose, without fee, and without a written\n> agreement is hereby granted, provided that the above copyright notice and\n> this paragraph and the following two paragraphs appear in all copies.\n> \n> IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR\n> DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING\n> LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS\n> DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF\n> THE POSSIBILITY OF SUCH DAMAGE.\n> \n> THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES,\n> INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY\n> AND FITNESS FOR A PARTICULAR PURPOSE. 
THE SOFTWARE PROVIDED HEREUNDER IS\n> ON AN \"AS IS\" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS\n> TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n> \n> -------------------------\n> \n> Copyright ( 1996, 1997, 1998, 1999, 2000 by various contributors (as\n> identified in HISTORY) (collectively \"Developers\") which may be used\n> according to the following licensing terms:\n> \n> Worldwide permission to use, copy, modify, and distribute this software\n> and its documentation for any purpose, without fee, and without a written\n> agreement is hereby granted, on a non-exclusive basis, provided that the\n> above copyright notice, this paragraph and the following paragraphs appear\n> in all copies:\n> \n> IN NO EVENT SHALL ANY DEVELOPER BE LIABLE TO ANY PARTY FOR DIRECT,\n> INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING,\n> WITHOUT LIMITATION, LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE\n> AND ITS DOCUMENTATION, EVEN IF THE DEVELOPER HAS BEEN ADVISED OF THE\n> POSSIBILITY OF SUCH DAMAGE.\n> \n> THE DEVELOPERS SPECIFICALLY DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED\n> INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY,\n> FITNESS FOR A PARTICULAR PURPOSE, NEED, OR QUALITY, AND ANY IMPLIED\n> WARRANTY FROM COURSE OF DEALING OR USAGE OF TRADE. IN ADDITION, THERE IS\n> NO IMPLIED WARRANTY AGAINST INTERFERENCE WITH ENJOYMENT OR AGAINST\n> INFRINGEMENT. THE SOFTWARE AND DOCUMENTATION PROVIDED HEREUNDER IS ON AN\n> \"AS IS\" BASIS. NO DEVELOPER HAS ANY OBLIGATION TO PROVIDE MAINTENANCE,\n> SUPPORT, UPDATES, ENHANCEMENTS OR MODIFICATIONS TO OR FOR THE SOFTWARE OR\n> DOCUMENTATION.\n> \n> BY USING THIS SOFTWARE YOU AGREE TO THESE TERMS AND CONDITIONS. 
IF YOU DO\n> NOT AGREE TO THESE TERMS AND CONDITIONS, YOU SHOULD NOT USE THIS SOFTWARE.\n\nPerfect.\n\nMike Mascari\n", "msg_date": "Wed, 05 Jul 2000 22:18:20 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL & the BSD License" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> I would like to plug this in early next week, unless someone can see\n> something major that makes them feel uncomfortable ...\n\nWhat are you trying to do Marc, foreclose a full discussion? I think\nthis is *way* premature.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Jul 2000 04:22:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL & the BSD License " }, { "msg_contents": "\n>Recently, Landmark/Great Bridge sent us a proposed revision to our\n>existing license that, from what I can tell, has two paragraphs that\n>pretty instantly none of the non-US developers felt comfortable with ...\n>and that I, personally, could never agree to.\n\nSorry to jump in , but which two paragraphs were these and why were they\nobjectionable ?\n", "msg_date": "Thu, 06 Jul 2000 09:12:23 +0000", "msg_from": "Samy Elashmawy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL & the BSD License " }, { "msg_contents": "On Thu, 6 Jul 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > I would like to plug this in early next week, unless someone can see\n> > something major that makes them feel uncomfortable ...\n> \n> What are you trying to do Marc, foreclose a full discussion? I think\n> this is *way* premature.\n\nNo ... what I posted as a replacement for our current COPYRIGHT is the\n*base* that nobody disagrees with ... 
I don't care if everyone wants to\nargue til their are blue in the face for the next 6 months concerning the\ntwo paras that were drop'd, we can always add them in later, its just\nremoving stuff that is a pita ...\n\n... hell, I'll create a pgsql-license mailing list if ppl want, just to\ndiscuss those two paras and centralize the discussions ...\n\n From the feeling I got from those that have posted to the lists, what is\nin the one I posted last night is agreeable to *everyone*, both American\nand non-American, since it doesn't change the gist of the BSD license, it\nonly extends the umbrella of warranty/liability over all of us ...\n\n\n\n", "msg_date": "Thu, 6 Jul 2000 10:07:19 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL & the BSD License " }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Thu, 6 Jul 2000, Tom Lane wrote:\n>> The Hermit Hacker <[email protected]> writes:\n>>>> I would like to plug this in early next week, unless someone can see\n>>>> something major that makes them feel uncomfortable ...\n>> \n>> What are you trying to do Marc, foreclose a full discussion? I think\n>> this is *way* premature.\n\n> No ... what I posted as a replacement for our current COPYRIGHT is the\n> *base* that nobody disagrees with ... I don't care if everyone wants to\n> argue til their are blue in the face for the next 6 months concerning the\n> two paras that were drop'd, we can always add them in later, its just\n> removing stuff that is a pita ...\n\nIt sounded a lot like you were trying to say \"this is what we're going\nto do, end of discussion\". 
I take it that wasn't what you meant, but\nit sure read that way from here ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Jul 2000 10:39:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL & the BSD License " }, { "msg_contents": "On Thu, 6 Jul 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > On Thu, 6 Jul 2000, Tom Lane wrote:\n> >> The Hermit Hacker <[email protected]> writes:\n> >>>> I would like to plug this in early next week, unless someone can see\n> >>>> something major that makes them feel uncomfortable ...\n> >> \n> >> What are you trying to do Marc, foreclose a full discussion? I think\n> >> this is *way* premature.\n> \n> > No ... what I posted as a replacement for our current COPYRIGHT is the\n> > *base* that nobody disagrees with ... I don't care if everyone wants to\n> > argue til their are blue in the face for the next 6 months concerning the\n> > two paras that were drop'd, we can always add them in later, its just\n> > removing stuff that is a pita ...\n> \n> It sounded a lot like you were trying to say \"this is what we're going\n> to do, end of discussion\". I take it that wasn't what you meant, but\n> it sure read that way from here ...\n\nThe more I think on this, the less I'm sure that we *should* be changing\nanything though ... why hasn't FreeBSD (a primarily US based, BSD\nlicensed, Open Source Project) changed it? Has NetBSD? OpenBSD? Why is\nit good enough for them, and all of their commercial clients and\naffiliates, but not good enough for us? Actually, just took a look at the\nCOPYRIGHT that comes with FreeBSD ... shit, wait a second ... didn't the\nBSD COPYRIGHT just *have* a change? ... <insert explicitive here> ...\n\nYa, there was a recent change, that can be seen at:\nftp://ftp.cs.berkeley.edu/pub/4bsd/README.Impt.License.Change ... 
but, if\nyou look at the FreeBSD COPYRIGHT in /usr/src, I'm guessing that we've\nnever kept up with *any* of the changes to the BSD COPYRIGHT ... we just\nused the one that came with Postgres95 originally and assumed that\nBerkeley never changed it ...\n\n===================\n# $FreeBSD: src/COPYRIGHT,v 1.4 1999/09/05 21:33:47 obrien Exp $\n# @(#)COPYRIGHT 8.2 (Berkeley) 3/21/94\n\nAll of the documentation and software included in the 4.4BSD and\n4.4BSD-Lite\nReleases is copyrighted by The Regents of the University of California.\n\nCopyright 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994\n The Regents of the University of California. All rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions\nare met:\n1. Redistributions of source code must retain the above copyright\n notice, this list of conditions and the following disclaimer.\n2. Redistributions in binary form must reproduce the above copyright\n notice, this list of conditions and the following disclaimer in the\n documentation and/or other materials provided with the distribution.\n3. All advertising materials mentioning features or use of this software\n must display the following acknowledgement:\nThis product includes software developed by the University of\nCalifornia, Berkeley and its contributors.\n4. Neither the name of the University nor the names of its contributors\n may be used to endorse or promote products derived from this software\n without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\nARE DISCLAIMED. 
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\nOR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\nHOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\nLIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\nOUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\nSUCH DAMAGE.\n========================\n\nthere is more dealing with X and whatnot ... the ftp URL I gave above\nremoves clause 3 in the above:\n\n=========================\n> more README.Impt.License.Change\n\nJuly 22, 1999\n\nTo All Licensees, Distributors of Any Version of BSD:\n\nAs you know, certain of the Berkeley Software Distribution (\"BSD\") source\ncode files require that further distributions of products containing all or\nportions of the software, acknowledge within their advertising materials\nthat such products contain software developed by UC Berkeley and its\ncontributors.\n\nSpecifically, the provision reads:\n\n\" * 3. All advertising materials mentioning features or use of this software\n * must display the following acknowledgement:\n * This product includes software developed by the University of\n * California, Berkeley and its contributors.\"\n\nEffective immediately, licensees and distributors are no longer required to\ninclude the acknowledgement within advertising materials. 
Accordingly, the\nforegoing paragraph of those BSD Unix files containing it is hereby deleted\nin its entirety.\n\nWilliam Hoskins\nDirector, Office of Technology Licensing\nUniversity of California, Berkeley\n============================\n\n From reading the above COPYRIGHT in FreeBSD, it sounds like our version is\nout of date with the version everyone else is using, and that the changes\nwe are discussing here have already been discussed and made, just nobody\ntold us ...\n\n\n\n", "msg_date": "Thu, 6 Jul 2000 13:02:46 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL & the BSD License " }, { "msg_contents": "The Hermit Hacker writes:\n\n> I've read, and re-read, this license since the first time I saw it ... I\n> like the extension of the 'liability/warranty' sections to encompass \"all\n> developers\" vs it just encompassing \"University of Berkeley\", and am\n> shocked that we never thought of this before,\n\nThat's not true. I recall several separate occasions this was brought up\nin the past. But anyway...\n\nI support the spirit of your suggestion, but just a couple of ideas:\n\n1) The copyright notice from the current developers should come first. It\nshould read something like:\n\n\"PostgreSQL ... Copyright 2000 whoever\n\nContains code from Postgres95, which is subject to the following\nconditions:\n\nCopyright 1996 UCB ...\"\n\n2) \"various contributors (as identified in HISTORY)\" -- Don't do that.\nWhat if someone forks the project and renames HISTORY to\nPASTPRESENTANDFUTURE? Use something like \"all contributors\". Also note\nthat the HISTORY file doesn't actually identify the contributors\nsufficiently.\n\n3) Use the same disclaimer that the UCB used, unless you have a good\nreason to change the wording.\n\n4) \"BY USING THIS SOFTWARE YOU AGREE TO THESE TERMS AND CONDITIONS. 
IF\nYOU DO NOT AGREE TO THESE TERMS AND CONDITIONS, YOU SHOULD NOT USE THIS\nSOFTWARE.\" -- This is not enforceable. If you want to get at this point\n(for which I see no reason), use something like GPL section 5.\n\n5) There also should be a mention that some parts of the distribution may\nbe subject to other conditions, which are identified near that \"part\".\n\n\n\n> ===========================================================================\n> \n> \n> PostgreSQL Data Base Management System (formerly known as Postgres95)\n> \n> This directory contains the _______ release of PostgreSQL, as well as\n> various post-release patches in the patches directory. See INSTALL for\n> the installation notes and HISTORY for the changes.\n> \n> We also have a WWW home page located at: http://www.postgreSQL.org\n> \n> -------------------------\n> \n> PostgreSQL is not public domain software. It is copyrighted by the\n> University of California but may be used according to the following\n> licensing terms:\n> \n> POSTGRES95 Data Base Management System (formerly known as Postgres, then\n> as Postgres95).\n> \n> Copyright (c) 1994-6 Regents of the University of California\n> \n> Permission to use, copy, modify, and distribute this software and its\n> documentation for any purpose, without fee, and without a written\n> agreement is hereby granted, provided that the above copyright notice and\n> this paragraph and the following two paragraphs appear in all copies.\n> \n> IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR\n> DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING\n> LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS\n> DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF\n> THE POSSIBILITY OF SUCH DAMAGE.\n> \n> THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES,\n> INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY\n> AND FITNESS FOR A PARTICULAR PURPOSE. 
THE SOFTWARE PROVIDED HEREUNDER IS\n> ON AN \"AS IS\" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS\n> TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n> \n> -------------------------\n> \n> Copyright ( 1996, 1997, 1998, 1999, 2000 by various contributors (as\n> identified in HISTORY) (collectively \"Developers\") which may be used\n> according to the following licensing terms:\n> \n> Worldwide permission to use, copy, modify, and distribute this software\n> and its documentation for any purpose, without fee, and without a written\n> agreement is hereby granted, on a non-exclusive basis, provided that the\n> above copyright notice, this paragraph and the following paragraphs appear\n> in all copies:\n> \n> IN NO EVENT SHALL ANY DEVELOPER BE LIABLE TO ANY PARTY FOR DIRECT,\n> INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING,\n> WITHOUT LIMITATION, LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE\n> AND ITS DOCUMENTATION, EVEN IF THE DEVELOPER HAS BEEN ADVISED OF THE\n> POSSIBILITY OF SUCH DAMAGE.\n> \n> THE DEVELOPERS SPECIFICALLY DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED\n> INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY,\n> FITNESS FOR A PARTICULAR PURPOSE, NEED, OR QUALITY, AND ANY IMPLIED\n> WARRANTY FROM COURSE OF DEALING OR USAGE OF TRADE. IN ADDITION, THERE IS\n> NO IMPLIED WARRANTY AGAINST INTERFERENCE WITH ENJOYMENT OR AGAINST\n> INFRINGEMENT. THE SOFTWARE AND DOCUMENTATION PROVIDED HEREUNDER IS ON AN\n> \"AS IS\" BASIS. NO DEVELOPER HAS ANY OBLIGATION TO PROVIDE MAINTENANCE,\n> SUPPORT, UPDATES, ENHANCEMENTS OR MODIFICATIONS TO OR FOR THE SOFTWARE OR\n> DOCUMENTATION.\n> \n> BY USING THIS SOFTWARE YOU AGREE TO THESE TERMS AND CONDITIONS. 
IF YOU DO\n> NOT AGREE TO THESE TERMS AND CONDITIONS, YOU SHOULD NOT USE THIS SOFTWARE.\n> \n> \n> \n> \n> \n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Thu, 6 Jul 2000 23:36:32 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL & the BSD License " }, { "msg_contents": "\nas mentioned by another person, it appears that the problem isn't with the\ncopyright, the problem is us :( BSD *has* already done all the revisions\nto the copyright, we've just never upgraded ours to match theirs ... I\nposted, in another thread, a proposed updated COPYRIGHT file based off of\nhttp://www.opensource.org/licenses/bsd-license.html ...\n\n On Thu, 6 Jul 2000, Peter Eisentraut wrote:\n\n> The Hermit Hacker writes:\n> \n> > I've read, and re-read, this license since the first time I saw it ... I\n> > like the extension of the 'liability/warranty' sections to encompass \"all\n> > developers\" vs it just encompassing \"University of Berkeley\", and am\n> > shocked that we never thought of this before,\n> \n> That's not true. I recall several separate occasions this was brought up\n> in the past. But anyway...\n> \n> I support the spirit of your suggestion, but just a couple of ideas:\n> \n> 1) The copyright notice from the current developers should come first. It\n> should read something like:\n> \n> \"PostgreSQL ... Copyright 2000 whoever\n> \n> Contains code from Postgres95, which is subject to the following\n> conditions:\n> \n> Copyright 1996 UCB ...\"\n> \n> 2) \"various contributors (as identified in HISTORY)\" -- Don't do that.\n> What if someone forks the project and renames HISTORY to\n> PASTPRESENTANDFUTURE? Use something like \"all contributors\". 
Also note\n> that the HISTORY file doesn't actually identify the contributors\n> sufficiently.\n> \n> 3) Use the same disclaimer that the UCB used, unless you have a good\n> reason to change the wording.\n> \n> 4) \"BY USING THIS SOFTWARE YOU AGREE TO THESE TERMS AND CONDITIONS. IF\n> YOU DO NOT AGREE TO THESE TERMS AND CONDITIONS, YOU SHOULD NOT USE THIS\n> SOFTWARE.\" -- This is not enforceable. If you want to get at this point\n> (for which I see no reason), use something like GPL section 5.\n> \n> 5) There also should be a mention that some parts of the distribution may\n> be subject to other conditions, which are identified near that \"part\".\n> \n> \n> \n> > ===========================================================================\n> > \n> > \n> > PostgreSQL Data Base Management System (formerly known as Postgres95)\n> > \n> > This directory contains the _______ release of PostgreSQL, as well as\n> > various post-release patches in the patches directory. See INSTALL for\n> > the installation notes and HISTORY for the changes.\n> > \n> > We also have a WWW home page located at: http://www.postgreSQL.org\n> > \n> > -------------------------\n> > \n> > PostgreSQL is not public domain software. 
It is copyrighted by the\n> > University of California but may be used according to the following\n> > licensing terms:\n> > \n> > POSTGRES95 Data Base Management System (formerly known as Postgres, then\n> > as Postgres95).\n> > \n> > Copyright (c) 1994-6 Regents of the University of California\n> > \n> > Permission to use, copy, modify, and distribute this software and its\n> > documentation for any purpose, without fee, and without a written\n> > agreement is hereby granted, provided that the above copyright notice and\n> > this paragraph and the following two paragraphs appear in all copies.\n> > \n> > IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR\n> > DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING\n> > LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS\n> > DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF\n> > THE POSSIBILITY OF SUCH DAMAGE.\n> > \n> > THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES,\n> > INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY\n> > AND FITNESS FOR A PARTICULAR PURPOSE. 
THE SOFTWARE PROVIDED HEREUNDER IS\n> > ON AN \"AS IS\" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS\n> > TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n> > \n> > -------------------------\n> > \n> > Copyright (c) 1996, 1997, 1998, 1999, 2000 by various contributors (as\n> > identified in HISTORY) (collectively \"Developers\") which may be used\n> > according to the following licensing terms:\n> > \n> > Worldwide permission to use, copy, modify, and distribute this software\n> > and its documentation for any purpose, without fee, and without a written\n> > agreement is hereby granted, on a non-exclusive basis, provided that the\n> > above copyright notice, this paragraph and the following paragraphs appear\n> > in all copies:\n> > \n> > IN NO EVENT SHALL ANY DEVELOPER BE LIABLE TO ANY PARTY FOR DIRECT,\n> > INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING,\n> > WITHOUT LIMITATION, LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE\n> > AND ITS DOCUMENTATION, EVEN IF THE DEVELOPER HAS BEEN ADVISED OF THE\n> > POSSIBILITY OF SUCH DAMAGE.\n> > \n> > THE DEVELOPERS SPECIFICALLY DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED\n> > INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY,\n> > FITNESS FOR A PARTICULAR PURPOSE, NEED, OR QUALITY, AND ANY IMPLIED\n> > WARRANTY FROM COURSE OF DEALING OR USAGE OF TRADE. IN ADDITION, THERE IS\n> > NO IMPLIED WARRANTY AGAINST INTERFERENCE WITH ENJOYMENT OR AGAINST\n> > INFRINGEMENT. THE SOFTWARE AND DOCUMENTATION PROVIDED HEREUNDER IS ON AN\n> > \"AS IS\" BASIS. NO DEVELOPER HAS ANY OBLIGATION TO PROVIDE MAINTENANCE,\n> > SUPPORT, UPDATES, ENHANCEMENTS OR MODIFICATIONS TO OR FOR THE SOFTWARE OR\n> > DOCUMENTATION.\n> > \n> > BY USING THIS SOFTWARE YOU AGREE TO THESE TERMS AND CONDITIONS. 
IF YOU DO\n> > NOT AGREE TO THESE TERMS AND CONDITIONS, YOU SHOULD NOT USE THIS SOFTWARE.\n> > \n> > \n> > \n> > \n> > \n> \n> -- \n> Peter Eisentraut Sernanders väg 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 6 Jul 2000 18:44:13 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PostgreSQL & the BSD License " }, { "msg_contents": "The Hermit Hacker wrote:\n\n> Ya, there was a recent change, that can be seen at:\n> ftp://ftp.cs.berkeley.edu/pub/4bsd/README.Impt.License.Change ... but, if\n> you look at the FreeBSD COPYRIGHT in /usr/src, I'm guessing that we've\n> never kept up with *any* of the changes to the BSD COPYRIGHT ... we just\n> used the one that came with Postgres95 originally and assumed that\n> Berkeley never changed it ...\n\nSo let's just swap to the freebsd licence and be done with it.\n", "msg_date": "Fri, 07 Jul 2000 11:22:03 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL & the BSD License" }, { "msg_contents": "> The more I think on this, the less I'm sure that we *should* be changing\n> anything though ... why hasn't FreeBSD (a primarily US based, BSD\n> licensed, Open Source Project) changed it? Has NetBSD? OpenBSD? Why is\n> it good enough for them, and all of their commercial clients and\n> affiliates, but not good enough for us? Actually, just took a look at the\n> COPYRIGHT that comes with FreeBSD ... shit, wait a second ... didn't the\n> BSD COPYRIGHT just *have* a change? ... <insert explicitive here> ...\n> \n> Ya, there was a recent change, that can be seen at:\n> ftp://ftp.cs.berkeley.edu/pub/4bsd/README.Impt.License.Change ... 
but, if\n> you look at the FreeBSD COPYRIGHT in /usr/src, I'm guessing that we've\n> never kept up with *any* of the changes to the BSD COPYRIGHT ... we just\n> used the one that came with Postgres95 originally and assumed that\n> Berkeley never changed it ...\n\nI totally agree with Marc on this. The GB-suggested change would:\n\n\t1) Add confusion by making yet another license\n\t2) Add protection we may not even need\n\t3) Be very US-centric\n\t4) Require obnoxious license approval\n\nThese are all major issues. I think getting the most recent BSD license\nwording is the way to go.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jul 2000 23:19:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL & the BSD License" }, { "msg_contents": "On Thu, 6 Jul 2000, The Hermit Hacker wrote:\n\n> as mentioned by another person, it appears that the problem isn't with the\n> copyright, the problem is us :( BSD *has* already done all the revisions\n> to the copyright, we've just never upgraded ours to match theirs ... I\n> posted, in another thread, a proposed updated COPYRIGHT file based off of\n> http://www.opensource.org/licenses/bsd-license.html ...\n\nNotice how the letter you cited was addressed to all users of 4.4BSD, and\nnot to the users of all software products that ever came out of\nBerkeley. 
Just because some of them got to change their license doesn't\nmean that all the other packages suddenly get to choose what wording\nthey'd like.\n\nThe particular change was the removal of the \"advertisement clause\".\nPostgres doesn't have an advertisement clause.\n\nIf you want to get word from the UCB that we are allowed to insert \"AND\nALL OTHER CONTRIBUTORS\" at strategic places in the current text then we'd\nprobably be served best. But until then we have to leave the UCB license\nuntouched.\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 7 Jul 2000 07:54:33 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL & the BSD License " }, { "msg_contents": "\nPaperwork has already been sent off, gears are in motion :)\n\n\nOn Fri, 7 Jul 2000 [email protected] wrote:\n\n> On Thu, 6 Jul 2000, The Hermit Hacker wrote:\n> \n> > as mentioned by another person, it appears that the problem isn't with the\n> > copyright, the problem is us :( BSD *has* already done all the revisons\n> > to the copyright, we've just never upgraded ours to match theirs ... I\n> > posted, in another thread, a proposed updated COPYRIGHT file based off of\n> > http://www.opensource.org/licenses/bsd-license.html ...\n> \n> Notice how the letter you cited was addressed to all users of 4.4BSD, and\n> not to the users of all software products that every came out of\n> Berkeley. Just because some of them got to change their license doesn't\n> mean that all the other packages suddenly get to choose what wording\n> they'd like.\n> \n> The particular change was the removal of the \"advertisement clause\".\n> Postgres doesn't have an advertisement clause.\n> \n> If you want to get word from the UCB that we are allowed to insert \"AND\n> ALL OTHER CONTRIBUTORS\" at strategic places in the current text then we'd\n> probably be served best. 
But until then we have to leave the UCB license\n> untouched.\n> \n> \n> -- \n> Peter Eisentraut Sernanders vaeg 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 7 Jul 2000 11:21:37 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PostgreSQL & the BSD License " } ]
[ { "msg_contents": "Hi all- I've recently had to suffer a torturous process of building 7.0.2 on AIX 4.3.2. From my pouring through other newsgroups I've concluded that the problems I've hit exist at least on AIX 4.1.5 and 4.3.3 as well. This note describes the problems I've encountered and how to get around them, as well as a request for some fixes. Please accept my apology if this is known information- I couldn't find anything in this digest here. Some of this is in the document FAQ_AIX, but not all of it.\n\n1) First of all, you can't use IBM's make utility, gotta use GNU make. So download gmake from http://www-frec.bull.com/docs/download.htm (where its put into an AIX installp package). Be sure to rename /usr/local/bin/make to /usr/local/bin/gmake so as not to confuse it with AIX's make.\n \n2) Now, when you first download postgres and try to run the configure program, if you are using GCC instead of the AIX compiler (which is expensive), you have to use the command:\n\n./configure --with-template=aix_gcc\n\nThis forces GCC. I used GCC 2.95.2. \n\n2a) If you need to install GCC there are some AIX packages you need. I installed GCC and typed \"gcc\" and it responded \"No input files found\". Sounds good. But when I tried the configure command again it still fails to find a compiler. So I write a little test program and try to compile it and it gives me an error \"installation problem, cannot exec `as'\". Well, as it turns out, AIX doesn't install the assembler by default, you have to go and install the package \"bos.adt.tools\". That's a wonderful IBM brainchild. :) Ok, that's that mystery solved.\n\n2b) Next, it gave me errors looking for a c++ compiler, so be sure to download gcc.g++.2.95.2 which installs AFTER gcc.2.95.2.\n\n2c) Well, then I try compiling my test program with g++ and it complains about finding libm.a, so now we have to install the package \"bos.adt.libm\". So dumb. 
:)\n\n3) Ok so we're finally ready to build, and as soon as I type 'gmake', I get this error: \n\nMaking postgres.imp\n ./backend/port/aix/mkldexport.sh postgres /usr/local/bin > postgres.imp nm: postgres: 0654-200 Cannot open the specified file.\n nm: A file or directory in the path name does not exist.\n\nThis is apparently a bug in the make scripts for Postgres. I did find a workaround which was documented in one of the newsgroups. I went into the ./src/backend directory and did 'gmake', and even though it failed I then copied the ./src/backend/postgres.imp file to ./src, and that seemed to correct the problem.\n\n4) So I continue my gmake'ing, and this time it fails badly trying to compile libpq++.so. The same person that suggested the postgres.imp workaround also said that they couldn't get the c++ portion to compile, so I hand edited the Makefile.global file in ./src and commented out the line \"HAVE_Cplusplus=true\"\n\nThis, finally, at long last (and after all the trouble of installing G++ and libm!), got the rest of Postgres 7.0.2 to compile.\n\nOh, and as the make output scrolled by, I see that it failed as well building some plpsql stuff, but it was non fatal. There were also a zillion warnings, many of them about multiple type declarations for int8, int32, etc.\n\nNow on to the installation!!\n\n5) The install worked pretty smoothly. The only trouble I had was installing the man pages, because it expected to use \"zcat\" to handle its .gz files, which AIX doesn't like. So I had to change zcat to \"/usr/local/bin/gunzip -c\" in the ./src/Makefile.global (of course implying that I had already installed GNU zip). I added the LD_PATH and MANPATH entries for pgsql to my /etc/environment.\n\nI hope this information is helpful to any of you building this on AIX, and hopefully we can have someone from the Postgres team revisit the AIX installation process! 
The missing AIX packages stuff could be added to the FAQ_AIX file, and hopefully the error in the makefile and the problem linking libpq++.so can be fixed. If the latter is a G++ problem (would not surprise me!), keep in mind that many of us AIX users don't use xlC because IBM licenses the damned thing, so GCC is more economical!\n\nBest regards,\n\nRichard Sand ([email protected])\nhttp://www.vgalleries.com", "msg_date": "Wed, 5 Jul 2000 23:20:35 -0400", "msg_from": "[email protected] (Richard Sand)", "msg_from_op": true, "msg_subject": "Lessons learned on how to build 7.0.2 on AIX 4.x" }, { "msg_contents": "> Hi all- I've recently had to suffer a torturous process of building\n> 7.0.2 on AIX 4.3.2. From my pouring through other newsgroups I've\n> concluded that the problems I've hit exist at least on AIX 4.1.5 and\n> 4.3.3 as well. This note describes the problems I've encountered and\n> how to get around them, as well as a request for some fixes. Please\n> accept my apology if this is known information- I couldn't find\n> anything in this digest here. Some of this is in the document\n> FAQ_AIX, but not all of it.\n\nGreat info. Would you have time to update FAQ_AIX? 
Or is there another\nAIX partisan out there who would like to do the honors?\n\nRegards.\n\n - Thomas\n", "msg_date": "Thu, 06 Jul 2000 03:47:03 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lessons learned on how to build 7.0.2 on AIX 4.x" }, { "msg_contents": "Richard Sand writes:\n\n> 1) First of all, you can't use IBM's make utility, gotta use GNU make.\n\nQuoth the installation instructions:\n\n\"Building PostgreSQL requires GNU make. It will not work with other make\nprograms.\"\n\n> you have to use the command:\n> \n> ./configure --with-template=aix_gcc\n\nThat has got to be a bug. The configure script should look for gcc\nfirst. Can you show the relevant lines of configure output (checking for\ncc... etc), when you don't use that option?\n\n\n> Making postgres.imp\n> ./backend/port/aix/mkldexport.sh postgres /usr/local/bin > postgres.imp nm: postgres: 0654-200 Cannot open the specified file.\n> nm: A file or directory in the path name does not exist.\n> \n> This is apparently a bug in the make scripts for Postgres.\n\nCan you describe how to fix it? The AIX shared library stuff is an enigma\nto me.\n\n \n> I hand edited the Makefile.global file in ./src and commented out the\n> line \"HAVE_Cplusplus=true\"\n\nQuoth configure --help:\n\n\" --without-CXX prevent building C++ code\"\n\n\n> Oh, and as the make output scrolled by, I see that it failed as well\n> building some plpsql stuff, but it was non fatal.\n\nIf it failed then it was fatal, and vice versa. Please elaborate.\n\n> There were also a zillion warnings, many of them about multiple type\n> declarations for int8, int32, etc.\n\nI'll make a note of it.\n\n> installing the man pages, because it expected to use \"zcat\" to handle\n> its .gz files, which AIX doesn't like. 
So I had to change zcat to\n> \"/usr/local/bin/gunzip -c\" in the ./src/Makefile.global (of course\n\nNoted.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 6 Jul 2000 23:36:41 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lessons learned on how to build 7.0.2 on AIX 4.x" } ]
[ { "msg_contents": "I'd be honored to! I'll give it a shot- to whom should I submit it?\n\nCould someone (at some point) take a look at the install bugs I (and others)\nhit- i.e. the glitch in postgres.imp and the problem building libpq++ and\nplpsql stuff?\n\nRichard Sand ([email protected])\nhttp://www.vgalleries.com\n-----Original Message-----\nFrom: Thomas Lockhart <[email protected]>\nTo: Richard Sand <[email protected]>\nCc: [email protected] <[email protected]>\nDate: Wednesday, July 05, 2000 11:43 PM\nSubject: Re: [HACKERS] Lessons learned on how to build 7.0.2 on AIX 4.x\n\n\n>> Hi all- I've recently had to suffer a torturous process of building\n>> 7.0.2 on AIX 4.3.2. From my pouring through other newsgroups I've\n>> concluded that the problems I've hit exist at least on AIX 4.1.5 and\n>> 4.3.3 as well. This note describes the problems I've encountered and\n>> how to get around them, as well as a request for some fixes. Please\n>> accept my apology if this is known information- I couldn't find\n>> anything in this digest here. Some of this is in the document\n>> FAQ_AIX, but not all of it.\n>\n>Great info. Would you have time to update FAQ_AIX? Or is there another\n>AIX partisan out there who would like to do the honors?\n>\n>Regards.\n>\n> - Thomas\n>\n\n", "msg_date": "Thu, 6 Jul 2000 00:41:53 -0400", "msg_from": "[email protected] (Richard Sand)", "msg_from_op": true, "msg_subject": "Re: Lessons learned on how to build 7.0.2 on AIX 4.x" } ]
[ { "msg_contents": "\nSome people suggested it might be a good idea to define a new\ninterface, maybe call it libpq2. (Actually this would be a good time\nto abandon pq = postquel in favour of libpg.a ).\n\nI'll therefore put forward the following proposal as a possible\nstarting point. I'm not particularly committed to either this\nproposal, my previous proposal, or perhaps Peter's proposal. A new\ninterface is probably the cleanest, but the current library probably\nisn't all bad either.\n\nMy idea is that there should be a very low level interface that has a\nminimum of bloat and features and caching and copying. This would be\nespecially nice for me writing an ODMG interface because the ODMG\ninterface would be needing to cache and copy things about so having\nlibpq doing it too is extra overhead. It could also form the basis of\na trivial re-implementation of the current libpq in terms of this\ninterface.\n\nSo the criteria I used for the low level interface is...\n\n*) Future-Proof. In preference to a PGconnect routine with 6 different\n arguments, you create an empty PGConnection, set various attributes\n via setter functions and then call connect. That way it is future\n proof against needing more arguments. Similar for execQuery.\n\n*) Speed. Lean and mean. We can write a fancier interface on top of\n this for people who want convenience over speed. At this point I\n havn't attempted to design one. Thus the getValue routine (pg_value\n below), is not null-terminated. The higher level interface can make\n sure of that if needed. In any case some sorts of data may contain\n nulls.\n\nThe main thing I dislike about the current interface is that it's not \nlow-level enough. It won't let me get around the features that I don't \nwant (like caching the entire result).\n\nOk guys, which way do you want me to go? 
Or will someone else come \nup with something better?\n\n/*\n The Postgres Low-Level Interface \n*/\n\ntypedef int PG_ErrorCode;\n/* Just creates an empty connection object. Like C++ new() */\nPG_Connection *pg_newConnection();\nvoid pg_freeConnection(PG_Connection *con);\n/* setter functions */\nvoid pg_setDb(con);\nvoid pg_setUserName(con);\nvoid pg_setPassword(con);\n/* Connect to the database. TRUE for success */\nPG_Boolean pg_connect(con);\n/* Find out the error code for what happened */\n/* In the future there should be a unified error code system */\nPG_ErrorCode pg_connect_error(PG_Connection * con);\n\n/* Just creates an empty query object */\nPGquery * pg_newQuery(PGConnection *con);\nvoid pg_freeQuery(PGquery *q);\n/* setter function */\nvoid pg_setSQL(char *query);\n/* Executes the query */\nPG_Boolean pg_exec_sql(PGquery *q);\n\ntypedef int PG_NextStatus;\n#define PG_NEXT_EOF 0 /* No more records */\n#define PG_NEXT_OK 1 /* Returned a record */\n#define PG_NEXT_ERROR -1 /* Error */\n\n/* get the next record */\nPG_NextStatus pg_next(PG_Connection *con);\n/* did the last record returned mark the start of a new group? 
*/\nPG_Boolean pg_new_group(PG_query *q);\n\ntypedef int PG_Length;\n\n/* Get the data from a field, specifying the field number and\nreturning the length of the data */\nvoid *pg_value(PGquery *q, int field_num, PG_Length *len);\nPG_Boolean pg_is_null(PGquery *q, int field_num);\n/* If update/insert or delete, returns the number of rows affected */\nint pg_num_rows_affected(PGquery *q);\n/* Returns the oid of the last inserted object */\nOid pg_last_oid(PGquery *q);\n/* Get the field name */\nchar *pg_field_name(PGquery *q, int field_num);\n/* Get the field type */\nOid pg_field_type(PGquery *q, int field_num);\n/* Find out the error code for what happened */\n/* In the future there should be a unified error code system */\nPG_ErrorCode pg_query_error(PGquery *q);\n\n/* Get a meaningful Error message for a code */\nchar *pg_errorMessage(PG_ErrorCode);\n", "msg_date": "Thu, 06 Jul 2000 15:50:13 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "Alternative new libpq interface." }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> The main thing I dislike about the current interface is that it's not \n> low-level enough. It won't let me get around the features that I don't \n> want (like caching the entire result).\n\nBear in mind that \"avoiding the features you don't want\" is not\ncost-free. In particular, I have seen no discussion in this thread\nof the implications that streaming read would have for error handling.\n\nIn the current libpq, you either get a complete error-free result set\nor you don't. If there is to be a streaming interface then it must\ntake into account the possibility of an error partway through the\nfetch. 
Applications that use the interface will also incur extra\ncomplexity from having to undo whatever they might have done with\nthe initial part of the result data.\n\nStill, something along the lines of your sketch seems worth pursuing.\nPersonally I've never once had any use for the \"random access to result\nset\" aspect of libpq's API, so it seems like buffering the whole set\nis a pretty high price to pay for a small simplification in error\nhandling.\n\nMy gut feeling about this is that if a complete rewrite is being\nconsidered, it ought to be done as a new interface library that's\nindependent of libpq. libpq has its limitations, but it's moderately\nwell debugged and lots of apps depend on it. A rewrite will need time\nto stabilize and to attract new apps --- unless you want to guarantee\n100.00% backward compatibility, which I bet you won't.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Jul 2000 04:20:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternative new libpq interface. " }, { "msg_contents": "\n-- \n\n> My gut feeling about this is that if a complete rewrite is being\n> considered, it ought to be done as a new interface library that's\n> independent of libpq. \n\nI was thinking more along the lines of massaging the current libpq to\nsupport the new interface/features rather than starting with a blank\nslate. As you say libpq is well debugged and there are a lot of fine\ndetails in there I don't want to mess with.\n\nMy aims are to get the OO features and streaming behaviour working with\na hopefully stable interface.\n\nDoes that affect your gut feeling? Your error observations are\nsignificant and I think they dismiss my 1st suggestion. 
That leaves the\npossibilities of the whole new interface versus massaging the current\ninterface with streaming/grouping APIs.\n", "msg_date": "Thu, 06 Jul 2000 19:19:51 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternative new libpq interface." }, { "msg_contents": "On Thu, 6 Jul 2000, Tom Lane wrote:\n\n> Chris Bitmead <[email protected]> writes:\n> > The main thing I dislike about the current interface is that it's not \n> > low-level enough. It won't let me get around the features that I don't \n> > want (like caching the entire result).\n> \n> Bear in mind that \"avoiding the features you don't want\" is not\n> cost-free. In particular, I have seen no discussion in this thread\n> of the implications that streaming read would have for error handling.\n> \n> In the current libpq, you either get a complete error-free result set\n> or you don't. If there is to be a streaming interface then it must\n> take into account the possibility of an error partway through the\n> fetch. Applications that use the interface will also incur extra\n> complexity from having to undo whatever they might have done with\n> the initial part of the result data.\n> \n> Still, something along the lines of your sketch seems worth pursuing.\n> Personally I've never once had any use for the \"random access to result\n> set\" aspect of libpq's API, so it seems like buffering the whole set\n> is a pretty high price to pay for a small simplification in error\n> handling.\n> \n> My gut feeling about this is that if a complete rewrite is being\n> considered, it ought to be done as a new interface library that's\n> independent of libpq. libpq has its limitations, but it's moderately\n> well debugged and lots of apps depend on it. 
A rewrite will need time\n> to stabilize and to attract new apps --- unless you want to guarantee\n> 100.00% backward compatibility, which I bet you won't.\n\nAgreed, which was why I had suggested going to a libpq2 and leaving the\ncurrent libpq intact ... but, I was always confused as to why pq vs pg, so\nChris going to a libpg.a sounds like a really nice way to accomplish this\nwithout causing any headaches with 'legacy apps' that are tied to libpq\n...\n\nWhat I'd suggest is leave libpq in for a few releases, until libpg\nstabilizes and then look at removing it and directing ppl over to libpq\n...\n\n\n\n", "msg_date": "Thu, 6 Jul 2000 09:33:23 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternative new libpq interface. " }, { "msg_contents": "On Thu, 6 Jul 2000, Chris Bitmead wrote:\n\n> \n> -- \n> \n> > My gut feeling about this is that if a complete rewrite is being\n> > considered, it ought to be done as a new interface library that's\n> > independent of libpq. \n> \n> I was thinking more along the lines of massaging the current libpq to\n> support the new interface/features rather than starting with a blank\n> slate. As you say libpq is well debugged and there are a lot of fine\n> details in there I don't want to mess with.\n> \n> My aims are to get the OO features and streaming behaviour working with\n> a hopefully stable interface.\n> \n> Does that affect your gut feeling? Your error observations are\n> significant and I think they dismiss my 1st suggestion. That leaves the\n> possibilities of the whole new interface versus massaging the current\n> interface with streaming/grouping APIs.\n\ncp -rp libpq libpg;cvs add libpg?\n\nif nothing else, it would give a template to build from without risking\nproblems to current apps using libpq ... I'm not 100% certain that I'm\nreading Tom correct, but by 'independent of libpq', I'm taking it that\nlibpg wouldn't need libpq to compile ... 
?\n\n", "msg_date": "Thu, 6 Jul 2000 09:35:31 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternative new libpq interface." }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n>> My gut feeling about this is that if a complete rewrite is being\n>> considered, it ought to be done as a new interface library that's\n>> independent of libpq. \n\n> I was thinking more along the lines of massaging the current libpq to\n> support the new interface/features rather than starting with a blank\n> slate. As you say libpq is well debugged and there are a lot of fine\n> details in there I don't want to mess with.\n\nNo reason you shouldn't steal liberally from the existing code, of\ncourse.\n\n> My aims are to get the OO features and streaming behaviour working with\n> a hopefully stable interface.\n\n> Does that affect your gut feeling?\n\nThe thing that was bothering me was offhand suggestions about \"let's\nreimplement the existing libpq API atop some redesigned lower layer\".\nI think that's a recipe for trouble, in that it could introduce bugs\nand incompatibilities that will break existing applications. I'd\nrather see us leave libpq alone and start a separate development\nthread for the new version. That also has the advantage that you're\nnot hogtied by compatibility considerations.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Jul 2000 10:30:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternative new libpq interface. " }, { "msg_contents": "Chris Bitmead writes:\n\n> Some people suggested it might be a good idea to define a new\n> interface, maybe call it libpq2.\n\nIf you want to implement a new C API, look at SQL/CLI in ISO/IEC\n9075-3:1999. It would be a shame if we created yet another proprietary\nAPI.\n\nHaving said that, I don't follow the reasoning to create a completely new\nclient library just for streaming results. 
A lot of work was put in the\nexisting one, and if you extend it carefully then you might reap the\nbenefits of that.\n\nCreating a new API is a tedious process that needs to be done very\ncarefully. And also keep in mind that the majority of users these days\ndoesn't use libpq directly. All the other language interfaces would have\nto be converted, that's a major effort that will never get done. What we'd\nend up with are two different APIs that are only half-maintained each. And\na backend that has to support them both.\n\n\n> The main thing I dislike about the current interface is that it's not\n> low-level enough. It won't let me get around the features that I don't\n> want (like caching the entire result).\n\nThen factor out the low-level routines and make them part of the API. You\ncould certainly re-implement the current \"get all rows\" as \"while (rows\nleft) { row = malloc(); read(&row); }\".\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 6 Jul 2000 23:36:22 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternative new libpq interface." }, { "msg_contents": "Peter Eisentraut wrote:\n\n> If you want to implement a new C API, look at SQL/CLI in ISO/IEC\n> 9075-3:1999. It would be a shame if we created yet another proprietary\n> API.\n\nAs usual, our resident standards guru comes and saves the day. :-)\n\nOk, I'm going to implement the SQL3 C API, which is a streaming API. The\none change I'll make is I'll be adding a\nBoolean SQLIsNewGroup(hstmt), so that the OO stuff can tell when a new\nobject type is on the way. Oh and I'll have some appropriate APIs for\npostgres specific extensions, like SQLLastInsertOid().\n", "msg_date": "Fri, 07 Jul 2000 14:09:27 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Alternative new libpq interface." 
}, { "msg_contents": "On Thu, 06 Jul 2000 15:50:13 +1000, Chris Bitmead wrote:\n\n>\n>My idea is that there should be a very low level interface that has a\n>minimum of bloat and features and caching and copying. This would be\n>especially nice for me writing an ODMG interface because the ODMG\n>interface would be needing to cache and copy things about so having\n>libpq doing it too is extra overhead. It could also form the basis of\n>a trivial re-implementation of the current libpq in terms of this\n>interface.\n\n What does it mean: ODMG interface. I've the ODMG 3.0 book in front\nof me and i do not know, what you would like to create ... why is\ncaching and copying a need for ODMG ???\n\n Marten Feldtmann\n\n----\n\nMarten Feldtmann, Germany\n\n", "msg_date": "Mon, 10 Jul 2000 19:37:33 +0100 (MEZ)", "msg_from": "[email protected] (Marten Feldtmann)", "msg_from_op": false, "msg_subject": "Re: Alternative new libpq interface." }, { "msg_contents": "Marten Feldtmann wrote:\n> \n> On Thu, 06 Jul 2000 15:50:13 +1000, Chris Bitmead wrote:\n> \n> >\n> >My idea is that there should be a very low level interface that has a\n> >minimum of bloat and features and caching and copying. This would be\n> >especially nice for me writing an ODMG interface because the ODMG\n> >interface would be needing to cache and copy things about so having\n> >libpq doing it too is extra overhead. It could also form the basis of\n> >a trivial re-implementation of the current libpq in terms of this\n> >interface.\n> \n> What does it mean: ODMG interface. I've the ODMG 3.0 book in front\n> of me and i do not know, what you would like to create ... why is\n> caching and copying a need for ODMG ???\n\nEach programming language has a specified ODMG interface. Database\nobjects are mapped 1:1 with language objects. Every time you read\na database object a language object is created to represent it.\n\nNow if you read the same database object in different places in your\ncode. 
Maybe the same object is \"navigated\" to via different paths,\nyou don't want two objects created in memory to represent that object.\nIf that happened you could have a confusing integrity situation.\n\nSo with an ODMG interface it keeps track of what database objects\nare in memory at any one time - think of it as a cache, and makes\nsure that if you request the same object again, it doesn't construct\na new one but returns the existing one.\n\nOf course when you create one of these language objects, the values\nmust be copied into the fields of the object. That's where the copying\ncomes in. Now some object databases are implemented by just transferring\nwhole database pages to the client side. Obviously they have pretty low\noverhead in terms of memory copying data from one place to another. A \npostgres style architecture _can_ compete with this, but I suspect\nit must try harder in libpq in terms of how many times a bit of \nmemory coming in is copied around the place. (Or maybe not. Maybe that\nis premature optimisation).\n", "msg_date": "Tue, 11 Jul 2000 11:05:04 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Alternative new libpq interface." }, { "msg_contents": "Marten Feldtmann wrote:\n\n> Hmmm, what you want is not that easy. It means, that the object\n> data is stored several times on the client:\n> \n> - you MUST hold an independent cache for each open connection\n> to the database.\n> - you MUST copy the values from the cache to the language\n> dependent representation.\n\nNo it's stored once on the client. The language dependant cache IS\nthe cache.\n \n> And you still do not get the result you want to have: the\n> integrity problem. What happens, if the cache is not big\n> enough. How are cached objects thrown away ? Garbage Collector\n> in the cache system ??\n\nThe most simple scenario is that all objects are discarded upon\ntransaction\ncommit.\n\nBeyond that, there are other scenarios. 
Like if you want to reclaim some\ncache then UPDATE the database with any changes and leave the\ntransaction\nopen. If you need an object again then you read it in again.\n\nBut to a large extent, memory management is based on the model of\nthe programming language that you use, and managing it properly. Even\nif you use JDBC you can't just slurp gigabytes into memory. You have\nto re-use memory according to the conventions of the language in use.\n\n> And another point: this has nothing to do with an ODMG interface.\n> It's just a nice performance hint for database access, but\n> ODMG has nothing to do with it.\n\nWhat has nothing to do with ODMG?\n\n> Normally the identity is assured by the language binding - either\n> by the database (as you would like it) or by the binding of a\n> particular language to this database.\n> \n> To get an ODMG language binding you may use the libpq. You may\n> put a cache system on top of this libpq and you have the thing\n> you perhaps want to have. That's all you really need.\n\nYes, but it's nice to compete on performance too. Whether libpq has\ninefficiencies that prevent that is to be seen. Many commercial\nODBMSes are blindingly fast on object retrieval.\n \n> What indeed would be a big win, it the chance to retrieve different\n> result sets with one query !\n\nI'm working on it.\n", "msg_date": "Tue, 11 Jul 2000 15:50:10 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Alternative new libpq interface." }, { "msg_contents": "On Tue, 11 Jul 2000 11:05:04 +1000, Chris Bitmead wrote:\n\n>\n>Each programming language has a specified ODMG interface. Database\n>objects are mapped 1:1 with language objects. Every time you read\n>a database object a language object is created to represent it.\n>\n\n Ok, this is defined as the language bindings mentioned in this \nbook.\n\n>Now if you read the same database object in different places in your\n>code. 
Maybe the same object is \"navigated\" to via different paths,\n>you don't want two objects created in memory to represent that object.\n>If that happened you could have a confusing integrity situation.\n>\n>So with an ODMG interface it keeps track of what database objects\n>are in memory at any one time - think of it as a cache, and makes\n>sure that if you request the same object again, it doesn't construct\n>a new one but returns the existing one.\n>\n\n Hmmm, what you want is not that easy. It means, that the object\ndata is stored several times on the client:\n\n - you MUST hold an independent cache for each open connection \n to the database.\n - you MUST copy the values from the cache to the language\n dependent representation.\n\n And you still do not get the result you want to have: the\nintegrity problem. What happens, if the cache is not big\nenough. How are cached objects thrown away ? Garbage Collector\nin the cache system ??\n\n And another point: this has nothing to do with an ODMG interface.\nIt's just a nice performance hint for database access, but\nODMG has nothing to do with it.\n\n Normally the identity is assured by the language binding - either\nby the database (as you would like it) or by the binding of a\nparticular language to this database.\n\n To get an ODMG language binding you may use the libpq. You may\nput a cache system on top of this libpq and you have the thing \nyou perhaps want to have. That's all you really need. \n\n What indeed would be a big win, is the chance to retrieve different \nresult sets with one query !\n\n\n Marten\n\n\n----\n\nMarten Feldtmann, Germany\n\n", "msg_date": "Tue, 11 Jul 2000 07:14:34 +0100 (MEZ)", "msg_from": "[email protected] (Marten Feldtmann)", "msg_from_op": false, "msg_subject": "Re: Alternative new libpq interface." }, { "msg_contents": "On Tue, 11 Jul 2000 15:50:10 +1000, Chris Bitmead wrote:\n\n>Marten Feldtmann wrote:\n>\n>> Hmmm, what you want is not that easy. 
It means, that the object\n>> data is stored several times on the client:\n>> \n>> - you MUST hold an independent cache for each open connection\n>> to the database.\n>> - you MUST copy the values from the cache to the language\n>> dependent representation.\n>\n>No it's stored once on the client. The language dependant cache IS\n>the cache.\n\nOk, then the new libpg has no own cache. That was not clear in your\nposting. Databases like Versant and Oracle do have client based\ncaching system, which are NOT the language dependant cache - but\nan overall client based cache. This is mainly due to performance\nimprovements they expect from that feature.\n\n> \n>> And you still do not get the result you want to have: the\n>> integrity problem. What happens, if the cache is not big\n>> enough. How are cached objects thrown away ? Garbage Collector\n>> in the cache system ??\n>\n>The most simple scenario is that all objects are discarded upon\n>transaction\n>commit.\n\n Which is handled by the language binding ... correct ?\n\n I had a strange feeling when you wrote, that you want to write\nan ODMG interface, but never ever mentioned a programming language ! \nTherefore I thought you would like to create a new libpg with \nsome support for an ODMG interface and I asked myself: what is\nso important, that I need to write a new libpq\n\n\n>\n>> Normally the identity is assured by the language binding - either\n>> by the database (as you would like it) or by the binding of a\n>> particular language to this database.\n>> \n>> To get an ODMG language binding you may use the libpq. You may\n>> put a cache system on top of this libpq and you have the thing\n>> you perhaps want to have. That's all you really need.\n>\n>Yes, but it's nice to compete on performance too. Whether libpq has\n>inefficiencies that prevent that is to be seen. Many commercial\n>ODBMSes are blindingly fast on object retrieval.\n> \n\n Hmmm, what should I say. 
I've seen PostgreSQL being blindingly\nfast fetching objects belonging to associations from one\nobject. This has mostly something to do with the object model\nand its mapping into the database ...\n\n Marten\n \n----\n\nMarten Feldtmann, Germany\n\n", "msg_date": "Tue, 11 Jul 2000 17:35:35 +0100 (MEZ)", "msg_from": "[email protected] (Marten Feldtmann)", "msg_from_op": false, "msg_subject": "Re: Alternative new libpq interface." } ]
[ { "msg_contents": " \n> At 11:09 5/07/00 -0400, Tom Lane wrote:\n> >Philip Warner <[email protected]> writes:\n> >> Having now flirted with recreating BLOBs (and even DBs) \n> with matching OIDs,\n> >> I find myself thinking it's a waste of effort for the \n> moment. A modified\n> >> version of the system used by Pavel Janik in pg_dumplo may \n> be substantially\n> >> more reliable than my previous proposal:\n> >\n> >I like this a lot better than trying to restore the original \n> OIDs. For\n> >one thing, the restore-original-OIDs idea cannot be made to \n> work if what\n> >we want to do is load additional tables into an existing database.\n> >\n> \n> The thing that bugs me about this if for 30,000 rows, I do \n> 30,000 updates\n> after the restore. It seems *really* inefficient, not to mention slow.\n> \n> I'll also have to modify pg_restore to talk to the database \n> directly (for\n> lo import). As a result I will probably send the entire \n> script directly\n> from withing pg_restore. Do you know if comment parsing \n> ('--') is done in\n> the backend, or psql?\n\nStrictly speaking you are absolutely safe if you only do one update \nwith the max oid from the 30,000 rows before you start creating the lo's.\nDon't know if you know that beforehand though.\n\nIf you only know afterwards then you have to guarantee that no other \nconnection to this db (actually postmaster if you need the oid's site\nunique)\ndoes anything while you insert the lo's and then update to max oid.\n\nAndreas\n", "msg_date": "Thu, 6 Jul 2000 09:52:32 +0200 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Re: pg_dump and LOs (another proposal) " } ]
[ { "msg_contents": "At 09:52 6/07/00 +0200, Zeugswetter Andreas SB wrote:\n>> \n>> I'll also have to modify pg_restore to talk to the database \n>> directly (for\n>> lo import). As a result I will probably send the entire \n>> script directly\n>> from withing pg_restore. Do you know if comment parsing \n>> ('--') is done in\n>> the backend, or psql?\n>\n>Strictly speaking you are absolutely safe if you only do one update \n>with the max oid from the 30,000 rows before you start creating the lo's.\n>Don't know if you know that beforehand though.\n>\n>If you only know afterwards then you have to guarantee that no other \n>connection to this db (actually postmaster if you need the oid's site\n>unique)\n>does anything while you insert the lo's and then update to max oid.\n>\n\nYou may be confusing the two proposed techniques, the current flavour of\nthe minute is to restore the BLOBs using lo_creat to get a new oid; write\nan entry in a table indicating what the old & new are, then when the table\ndata is loaded, update all oid fields that refer to oids in the xref table.\nIt's pretty nasty, but it has the big advantage of being as vanilla as\npossible. It's also pretty close to what pg_dump_lo does.\n\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 06 Jul 2000 18:20:21 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: Re: pg_dump and LOs (another proposal) " } ]
[ { "msg_contents": "At 11:09 6/07/00 +0200, Jan Wieck wrote:\n>\n> No. It's all in the current CVS tree. I removed that patch\n> already.\n>\n\nOK, I've updated from CVS and rebuilt & it worked. Then, to be sure, I did a\n'make distclean' then a 'make' & 'make install' again, and now postmaster\nwon't start (SIGSEGV). I have rebuilt with '-O0 -g', gone into gdb, but the\nprocess dies so I can't get a backtrace.\n\nI have now rebuilt with SYSLOG support, and get the following:\n\nJul 6 20:54:54 Cerberus2 kernel: Unable to handle kernel NULL pointer\ndereference at virtual address 00000038\nJul 6 20:54:54 Cerberus2 kernel: current->tss.cr3 = 02467000, %cr3 = 02467000\nJul 6 20:54:54 Cerberus2 kernel: *pde = 00000000\nJul 6 20:54:54 Cerberus2 kernel: Oops: 0000\nJul 6 20:54:54 Cerberus2 kernel: CPU: 0\nJul 6 20:54:54 Cerberus2 kernel: EIP: 0010:[fcntl_setlk+327/404]\nJul 6 20:54:54 Cerberus2 kernel: EFLAGS: 00000202\nJul 6 20:54:54 Cerberus2 kernel: eax: 00000000 ebx: c15485b0 ecx:\nc2494000 edx: c0d49a50\nJul 6 20:54:54 Cerberus2 kernel: esi: bffff574 edi: 00000004 ebp:\nfffffff7 esp: c2495f34\nJul 6 20:54:54 Cerberus2 kernel: ds: 0018 es: 0018 ss: 0018\nJul 6 20:54:54 Cerberus2 kernel: Process postmaster (pid: 5661, process\nnr: 50, stackpage=c2495000)\nJul 6 20:54:54 Cerberus2 kernel: Stack: 00000000 c15485b0 c2495f40\n00000001 00000000 00000000 4000bc74 00000000\nJul 6 20:54:54 Cerberus2 kernel: 00000000 00000000 00000000\n00000000 c0d49a50 0000161d 00000000 c15485b0\nJul 6 20:54:54 Cerberus2 kernel: 00000101 00000000 7fffffff\n00000000 00000000 00000000 c012c687 00000004\nJul 6 20:54:54 Cerberus2 kernel: Call Trace: [sys_fcntl+595/772]\n[sys_open+94/124] [system_call+52/56]\nJul 6 20:54:54 Cerberus2 kernel: Code: 8b 50 38 85 d2 74 15 8d 44 24 24 50\nff 74 24 6c 53 ff d2 89\n\n\nIf anyone can give me some tips on tracking this down, I would appreciate\nit....\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross 
Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 06 Jul 2000 21:05:09 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 2nd update on TOAST" }, { "msg_contents": "Philip Warner wrote:\n> At 11:09 6/07/00 +0200, Jan Wieck wrote:\n> >\n> > No. It's all in the current CVS tree. I removed that patch\n> > already.\n> >\n>\n> OK, I've updated from CVS and rebuilt & it worked. The, to be sure, I did a\n> 'make distclean' then a 'make' & 'make install' again, and now postmaster\n> wont start (SIGSEGV). I have rebuild with '-O0 -g', gone into gdb, but the\n> process dies so I cant get a backtrace.\n\n Have the same symptom with a completely fresh cvs checkout.\n\n> If anyone can give me some tips on tracking this down, I would appreciate\n> it....\n\n Bruce applied a patch to configure.in yesterday. Read the\n comments from the cvslog. It tells that it triggers a bug in\n the Linux kernels fcntl(SETLK) code when used with unix\n domain sockets, and that the bug is present in Linux kernels\n <= 2.2.16. I'm running a 2.2.12 here, and so it exactly dies\n in pqcomm.c line 229 on fcntl() against the socket.\n\n Undefining HAVE_FCNTL_SETLK in config.h did it for me\n temporary. Don't know how to deal with it finally.\n\n With this setup I did\n\n initdb\n createdb\n psql <megaview.sql\n pg_dump pgsql >megaview.dump\n\n In the dump file, the first 2183 bytes look OK. What's\n following then looks like internal tables where pg_dump holds\n the info of the schema analyzing.\n\n And don't worry that the view is dumped as table with a later\n CREATE RULE. 
That's correct this way.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Thu, 6 Jul 2000 14:04:14 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: 2nd update on TOAST" }, { "msg_contents": "At 14:04 6/07/00 +0200, Jan Wieck wrote:\n>Philip Warner wrote:\n>\n> In the dump file, the first 2183 bytes look OK. What's\n> following then looks like internal tables where pg_dump holds\n> the info of the schema analyzing.\n\nAny chance you could mail it direct to me? \n\n\n> And don't worry that the view is dumped as table with a later\n> CREATE RULE. That's correct this way.\n\nI figured this out the hard way!\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 06 Jul 2000 22:23:58 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 2nd update on TOAST" }, { "msg_contents": "At 14:04 6/07/00 +0200, Jan Wieck wrote:\n>\n> With this setup I did\n>\n> initdb\n> createdb\n> psql <megaview.sql\n> pg_dump pgsql >megaview.dump\n>\n> In the dump file, the first 2183 bytes look OK. 
What's\n> following then looks like internal tables where pg_dump holds\n> the info of the schema analyzing.\n>\n\nJust in case there is some other factor, can you let me know what your\nchoices in 'configure' were?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 06 Jul 2000 22:33:59 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 2nd update on TOAST" }, { "msg_contents": "Philip Warner wrote:\n> At 14:04 6/07/00 +0200, Jan Wieck wrote:\n> >Philip Warner wrote:\n> >\n> > In the dump file, the first 2183 bytes look OK. What's\n> > following then looks like internal tables where pg_dump holds\n> > the info of the schema analyzing.\n> \n> Any chance you could mail it direct to me? \n\n Attached.\n\n> \n> \n> > And don't worry that the view is dumped as table with a later\n> > CREATE RULE. That's correct this way.\n> \n> I figured this out the hard way!\n\n :-)\n\n Will be off after this until approx. 1:00 UCT. \n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #", "msg_date": "Thu, 6 Jul 2000 14:45:25 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: 2nd update on TOAST" }, { "msg_contents": "Philip Warner wrote:\n> At 14:04 6/07/00 +0200, Jan Wieck wrote:\n> >\n> > With this setup I did\n> >\n> > initdb\n> > createdb\n> > psql <megaview.sql\n> > pg_dump pgsql >megaview.dump\n> >\n> > In the dump file, the first 2183 bytes look OK. What's\n> > following then looks like internal tables where pg_dump holds\n> > the info of the schema analyzing.\n> >\n> \n> Just in case there is some other factor, can you let me know what your\n> choices in 'configure' were?\n\n --with-tcl\n --enable-cassert\n --enable-debug\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n", "msg_date": "Thu, 6 Jul 2000 14:47:14 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: 2nd update on TOAST" }, { "msg_contents": "> Bruce applied a patch to configure.in yesterday. Read the\n> comments from the cvslog. It tells that it triggers a bug in\n> the Linux kernels fcntl(SETLK) code when used with unix\n> domain sockets, and that the bug is present in Linux kernels\n> <= 2.2.16. I'm running a 2.2.12 here, and so it exactly dies\n> in pqcomm.c line 229 on fcntl() against the socket.\n> Undefining HAVE_FCNTL_SETLK in config.h did it for me\n> temporary. Don't know how to deal with it finally.\n\nThanks Jan for the workaround. 
I'll see if this gets me up and going\nagain.\n\n - Thomas\n", "msg_date": "Thu, 06 Jul 2000 13:44:25 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2nd update on TOAST" }, { "msg_contents": "> > OK, I've updated from CVS and rebuilt & it worked. The, to be sure, I did a\n> > 'make distclean' then a 'make' & 'make install' again, and now postmaster\n> > wont start (SIGSEGV). I have rebuild with '-O0 -g', gone into gdb, but the\n> > process dies so I cant get a backtrace.\n> \n> Have the same symptom with a completely fresh cvs checkout.\n> \n> > If anyone can give me some tips on tracking this down, I would appreciate\n> > it....\n> \n> Bruce applied a patch to configure.in yesterday. Read the\n> comments from the cvslog. It tells that it triggers a bug in\n> the Linux kernels fcntl(SETLK) code when used with unix\n> domain sockets, and that the bug is present in Linux kernels\n> <= 2.2.16. I'm running a 2.2.12 here, and so it exactly dies\n> in pqcomm.c line 229 on fcntl() against the socket.\n\nI thought when he said flock() bug, he meant only on the new IA64\nplatform, not on all Linux platforms. Yikes, I enable flock(), and it\nbreaks initdb for all the Linux users. This is a problem!\n\nTom was mentioning the configure check for flock() was broken recently,\nso I was glad to fix it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jul 2000 16:43:26 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2nd update on TOAST" }, { "msg_contents": "Bruce Momjian wrote:\n> > > OK, I've updated from CVS and rebuilt & it worked. The, to be sure, I did a\n> > > 'make distclean' then a 'make' & 'make install' again, and now postmaster\n> > > wont start (SIGSEGV). 
I have rebuild with '-O0 -g', gone into gdb, but the\n> > > process dies so I cant get a backtrace.\n> >\n> > Have the same symptom with a completely fresh cvs checkout.\n> >\n> > > If anyone can give me some tips on tracking this down, I would appreciate\n> > > it....\n> >\n> > Bruce applied a patch to configure.in yesterday. Read the\n> > comments from the cvslog. It tells that it triggers a bug in\n> > the Linux kernels fcntl(SETLK) code when used with unix\n> > domain sockets, and that the bug is present in Linux kernels\n> > <= 2.2.16. I'm running a 2.2.12 here, and so it exactly dies\n> > in pqcomm.c line 229 on fcntl() against the socket.\n>\n> I thought when he said flock() bug, he meant only on the new IA64\n> platform, not on all Linux platforms. Yikes, I enable flock(), and it\n> breaks initdb for all the Linux users. This is a problem!\n\n Not initdb, but postmaster. That's the one who tries (after a\n successful initdb) to do the fcntl(F_SETLK) on the unix\n domain socket. Causing the kernel saying \"go to hell, go\n directly, don't write a core, don't leave useful info in\n gdb\".\n\n The only reason I see for the entire section is to detect if\n it would be safe to unlink the socket because it's left by\n another postmaster in case of abnormal termination. Tell me\n if I've misread it. So why not doing it on the Linux\n platform different, using a separate file like\n .s.PGSQL.5432.LCK?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Thu, 6 Jul 2000 23:51:03 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: 2nd update on TOAST" }, { "msg_contents": "> Not initdb, but postmaster. 
That's the one who tries (after a\n> successful initdb) to do the fcntl(F_SETLK) on the unix\n> domain socket. Causing the kernel saying \"go to hell, go\n> directly, don't write a core, don't leave useful info in\n> gdb\".\n> \n> The only reason I see for the entire section is to detect if\n> it would be safe to unlink the socket because it's left by\n> another postmaster in case of abnormal termination. Tell me\n> if I've misread it. So why not doing it on the Linux\n> platform different, using a separate file like\n> .s.PGSQL.5432.LCK?\n\nBut how do you know if that file still belongs to an active postmaster? \nWhat if it exited before removing the file. Seems we would have to\nwrite the PID into the file, and do a kill() to see if it is running.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jul 2000 22:04:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2nd update on TOAST" }, { "msg_contents": ">> The only reason I see for the entire section is to detect if\n>> it would be safe to unlink the socket because it's left by\n>> another postmaster in case of abnormal termination. Tell me\n>> if I've misread it.\n\nThat's exactly what it's for. We need to tell whether there is\nstill another postmaster running on the same port number. Too\nbad the kernel is not bright enough to unlink the socket file\nautomatically when it's no longer in use...\n\n>> So why not doing it on the Linux\n>> platform different, using a separate file like\n>> .s.PGSQL.5432.LCK?\n\nI think it's a bad idea to do it differently on Linux than other\nplatforms. 
If we fix this (other than by just disabling the fcntl\ncall again on old Linuxen) we should use the new method everywhere.\n\n> But how do you know if that file still belongs to an active postmaster? \n> What if it exited before removing the file. Seems we would have to\n> write the PID into the file, and do a kill() to see if it is running.\n\nWell, if we wanted to continue to depend on fcntl(SETLK) then we could\nuse an empty plain file. I read the bug report as being that old Linux\nkernels fail if fcntl(SETLK) is applied to a Unix-socket file. They'd\nsurely have noticed long before if the feature didn't work on plain\nfiles.\n\nBut if we are going to change this at all, I'd vote for storing pids\nin the lock files the way we are now doing in the data-directory pid\nlock files. Then we wouldn't have to depend on fcntl at all, which\nwould be a Good Thing from a portability point of view.\n\nHowever, I think it would be a really bad idea to keep the lock files\nin /tmp --- that's way too open to accidental removals, not to mention\ndeliberate denial-of-service attacks. They need to be in a more secure\ndirectory; but where? See the past discussions summarized in the\nTODO.detail file.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Jul 2000 22:46:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "fcntl(SETLK) [was Re: 2nd update on TOAST]" }, { "msg_contents": "Tom Lane writes:\n\n> However, I think it would be a really bad idea to keep the lock files\n> in /tmp --- that's way too open to accidental removals, not to mention\n> deliberate denial-of-service attacks. They need to be in a more secure\n> directory; but where? See the past discussions summarized in the\n> TODO.detail file.\n\nQuoth the file system standard:\n\n`sharedstatedir'\n The directory for installing architecture-independent data files\n which the programs modify while they run. 
This should normally be\n `/usr/local/com', but write it as `$(prefix)/com'. (If you are\n using Autoconf, write it as `@sharedstatedir@'.)\n\nThe problem with this approach is making that directory writeable by the\nserver account. Solutions:\n\n1) Making the postmaster executable as root but later drop root\n privileges. (This looks to be the cleanest solution, but it is\n probably a security problem waiting to happen.)\n\n2) Making initdb executable as root but with some --user switch. Have it\n create a subdirectory of $sharedstatedir writable by the server\n account, possibly with sticky bit and whatnot. Use `su' to invoke\n `postgres'.\n\n This approach might be convenient also in terms of creating the data\n directory.\n\n3) Making \"initialize lock file area\" a separate initialization step,\n possibly encapsulated into a shell script.\n\n\nBtw., what would happen if we did start a second postmaster at the same\nTCP port? Or more interestingly, what happens if some completely different\nprogram already runs at that port? How do we protect against that? This\nhas something to do with SO_REUSEADDR, but I don't understand those things\ntoo well.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 7 Jul 2000 21:27:19 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fcntl(SETLK) [was Re: 2nd update on TOAST]" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Quoth the file system standard:\n\n> `sharedstatedir'\n> The directory for installing architecture-independent data files\n> which the programs modify while they run. This should normally be\n> `/usr/local/com', but write it as `$(prefix)/com'. 
(If you are\n> using Autoconf, write it as `@sharedstatedir@'.)\n\n> The problem with this approach is making that directory writeable by the\n> server account.\n\nThe lock directory should certainly be one used only for Postgres locks,\nowned by postgres user and writable only by postgres user.\n\n> 2) Making initdb executable as root but with some --user switch. Have it\n> create a subdirectory of $sharedstatedir writable by the server\n> account, possibly with sticky bit and whatnot. Use `su' to invoke\n> `postgres'.\n\n> This approach might be convenient also in terms of creating the data\n> directory.\n\nWe could do that, or we could just say \"you must have arranged for\ncreation of these directories before you run initdb\". For the truly\nlazy, a small script that could be executed as root could be provided.\n\nPersonally I'd be unwilling to run a script as complex as initdb as\nroot; what if it goes wrong? Keep the stuff that requires root\npermission separate, and as small as possible.\n\nBTW, regardless of where exactly the lock directory lives (and IIRC\nthere were several schools of thought on that), I believe that the\nlock directory pathname has to be wired in at configure time. It\ncan't be an initdb argument because the whole locking thing is useless\nunless all the PG installations on a machine agree on where the port\nlocks are.\n\n> Btw., what would happen if we did start a second postmaster at the same\n> TCP port? Or more interestingly, what happens if some completely different\n> program already runs at that port? How do we protect against that? This\n> has something to do with SO_REUSEADDR, but I don't understand those things\n> too well.\n\nSO_REUSEADDR solves the problem for TCP sockets. 
The problem with Unix\nsockets is that the kernel's detection of conflicts is pretty braindead:\nif there is an existing socket file of the same name, you get an\n\"address in use\" failure from bind(), regardless of whether anyone else\nis actually using the socket. So, if the previous postmaster died\nungracefully and didn't delete its socket file, a new postmaster cannot\nbe started up until the old socket file is removed. What we're trying\nto do here is automate that removal so the admin doesn't have to do it.\nThe trouble is we can't just unlink() the old socket file because\nthat'll succeed even if there is a postmaster actively using the socket!\nSo we need to find out whether the old postmaster is still alive\nto decide whether it's OK to remove the old socket file or whether we\nshould abort startup.\n\nBruce and I were just talking by phone about this, and we realized that\nthere is a completely different approach to making that decision: if you\nwant to know whether there's an old postmaster connected to a socket\nfile, try to connect to the old postmaster! In other words, pretend to\nbe a client and see if your connection attempt is answered. (You don't\nhave to try to log in, just see if you get a connection.) This might\nalso answer Peter's concern about socket files that belong to\nnon-Postgres programs, although I doubt that's really a big issue.\n\nThere are some potential pitfalls here, like what if the old postmaster\nis there but overloaded? But on the whole it seems like it might be\na cleaner answer than fooling around with lockfiles, and certainly safer\nthan relying on fcntl(SETLK) to work on a socket file. 
Comments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jul 2000 19:00:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fcntl(SETLK) [was Re: 2nd update on TOAST] " }, { "msg_contents": "Tom Lane wrote:\n> \n> Bruce and I were just talking by phone about this, and we realized that\n> there is a completely different approach to making that decision: if you\n> want to know whether there's an old postmaster connected to a socket\n> file, try to connect to the old postmaster! In other words, pretend to\n> be a client and see if your connection attempt is answered. (You don't\n> have to try to log in, just see if you get a connection.) This might\n> also answer Peter's concern about socket files that belong to\n> non-Postgres programs, although I doubt that's really a big issue.\n> \n> There are some potential pitfalls here, like what if the old postmaster\n> is there but overloaded? But on the whole it seems like it might be\n> a cleaner answer than fooling around with lockfiles, and certainly safer\n> than relying on fcntl(SETLK) to work on a socket file. Comments anyone?\n\n Like it.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n\n", "msg_date": "Sat, 8 Jul 2000 13:15:02 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: fcntl(SETLK) [was Re: 2nd update on TOAST]" }, { "msg_contents": "* Jan Wieck <[email protected]> [000708 05:47] wrote:\n> Tom Lane wrote:\n> > \n> > Bruce and I were just talking by phone about this, and we realized that\n> > there is a completely different approach to making that decision: if you\n> > want to know whether there's an old postmaster connected to a socket\n> > file, try to connect to the old postmaster! In other words, pretend to\n> > be a client and see if your connection attempt is answered. (You don't\n> > have to try to log in, just see if you get a connection.) This might\n> > also answer Peter's concern about socket files that belong to\n> > non-Postgres programs, although I doubt that's really a big issue.\n> > \n> > There are some potential pitfalls here, like what if the old postmaster\n> > is there but overloaded? But on the whole it seems like it might be\n> > a cleaner answer than fooling around with lockfiles, and certainly safer\n> > than relying on fcntl(SETLK) to work on a socket file. Comments anyone?\n> \n> Like it.\n\nmy $pgsocket = \"/tmp/.s.PGSQL.5432\";\n\n# try to connect to the postmaster\nsocket(SOCK, PF_UNIX, SOCK_STREAM, 0)\n or die \"unable to create unix domain socket: $!\";\n\nconnect(SOCK, sockaddr_un($pgsocket))\n and errexit(\"postmaster is running you must shut it down\");\n\noh yeah... 
:)\n\n-Alfred\n", "msg_date": "Sat, 8 Jul 2000 05:51:39 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fcntl(SETLK) [was Re: 2nd update on TOAST]" }, { "msg_contents": "Alfred Perlstein wrote:\n> \n> * Jan Wieck <[email protected]> [000708 05:47] wrote:\n> > Tom Lane wrote:\n> > >\n> > > Bruce and I were just talking by phone about this, and we realized that\n> > > there is a completely different approach to making that decision: if you\n> > > want to know whether there's an old postmaster connected to a socket\n> > > file, try to connect to the old postmaster! In other words, pretend to\n> > > be a client and see if your connection attempt is answered. (You don't\n> > > have to try to log in, just see if you get a connection.) This might\n> > > also answer Peter's concern about socket files that belong to\n> > > non-Postgres programs, although I doubt that's really a big issue.\n> > >\n> > > There are some potential pitfalls here, like what if the old postmaster\n> > > is there but overloaded? But on the whole it seems like it might be\n> > > a cleaner answer than fooling around with lockfiles, and certainly safer\n> > > than relying on fcntl(SETLK) to work on a socket file. Comments anyone?\n> >\n> > Like it.\n> \n> my $pgsocket = \"/tmp/.s.PGSQL.5432\";\n> \n> # try to connect to the postmaster\n> socket(SOCK, PF_UNIX, SOCK_STREAM, 0)\n> or die \"unable to create unix domain socket: $!\";\n> \n> connect(SOCK, sockaddr_un($pgsocket))\n> and errexit(\"postmaster is running you must shut it down\");\n> \n> oh yeah... :)\n> \n> -Alfred\n\nI don't get this. 
Isn't there a race condition here?\n\nJust curious,\n\nMike Mascari\n", "msg_date": "Sat, 08 Jul 2000 08:54:48 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fcntl(SETLK) [was Re: 2nd update on TOAST]" }, { "msg_contents": "* Mike Mascari <[email protected]> [000708 05:55] wrote:\n> Alfred Perlstein wrote:\n> > \n> > * Jan Wieck <[email protected]> [000708 05:47] wrote:\n> > > Tom Lane wrote:\n> > > >\n> > > > Bruce and I were just talking by phone about this, and we realized that\n> > > > there is a completely different approach to making that decision: if you\n> > > > want to know whether there's an old postmaster connected to a socket\n> > > > file, try to connect to the old postmaster! In other words, pretend to\n> > > > be a client and see if your connection attempt is answered. (You don't\n> > > > have to try to log in, just see if you get a connection.) This might\n> > > > also answer Peter's concern about socket files that belong to\n> > > > non-Postgres programs, although I doubt that's really a big issue.\n> > > >\n> > > > There are some potential pitfalls here, like what if the old postmaster\n> > > > is there but overloaded? But on the whole it seems like it might be\n> > > > a cleaner answer than fooling around with lockfiles, and certainly safer\n> > > > than relying on fcntl(SETLK) to work on a socket file. Comments anyone?\n> > >\n> > > Like it.\n> > \n> > my $pgsocket = \"/tmp/.s.PGSQL.5432\";\n> > \n> > # try to connect to the postmaster\n> > socket(SOCK, PF_UNIX, SOCK_STREAM, 0)\n> > or die \"unable to create unix domain socket: $!\";\n> > \n> > connect(SOCK, sockaddr_un($pgsocket))\n> > and errexit(\"postmaster is running you must shut it down\");\n> > \n> > oh yeah... :)\n> > \n> > -Alfred\n> \n> I don't get this. 
Isn't there a race condition here?\n> \n> Just curious,\n\nSure but it's handled, if there's a postmaster starting at this\nexact instant, however since the script just runs postmaster\nafterwards the conflict will make postmaster abort and I'll get an\nerror return from my invocation of postmaster.\n\n-Alfred\n", "msg_date": "Sat, 8 Jul 2000 05:58:02 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fcntl(SETLK) [was Re: 2nd update on TOAST]" }, { "msg_contents": "> > my $pgsocket = \"/tmp/.s.PGSQL.5432\";\n> > \n> > # try to connect to the postmaster\n> > socket(SOCK, PF_UNIX, SOCK_STREAM, 0)\n> > or die \"unable to create unix domain socket: $!\";\n> > \n> > connect(SOCK, sockaddr_un($pgsocket))\n> > and errexit(\"postmaster is running you must shut it down\");\n> > \n> > oh yeah... :)\n> > \n> > -Alfred\n> \n> I don't get this. Isn't there a race condition here?\n\nThat's a good point. I don't think so because the socket will only\ncreate for one user. Basically, we don't need something bulletproof\nhere. We just need something to prevent admins from accidentally\nstarting two postmasters on the same port.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Jul 2000 09:00:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fcntl(SETLK) [was Re: 2nd update on TOAST]" }, { "msg_contents": "* Bruce Momjian <[email protected]> [000708 06:02] wrote:\n> > > my $pgsocket = \"/tmp/.s.PGSQL.5432\";\n> > > \n> > > # try to connect to the postmaster\n> > > socket(SOCK, PF_UNIX, SOCK_STREAM, 0)\n> > > or die \"unable to create unix domain socket: $!\";\n> > > \n> > > connect(SOCK, sockaddr_un($pgsocket))\n> > > and errexit(\"postmaster is running you must shut it down\");\n> > > \n> > > oh yeah... :)\n> > > \n> > > -Alfred\n> > \n> > I don't get this. Isn't there a race condition here?\n> \n> That's a good point. I don't think so because the socket will only\n> create for one user. Basically, we don't need something bulletproof\n> here. We just need something to prevent admins from accidentally\n> starting two postmasters on the same port.\n\nActually I just remebered the issue here, if one wants to start\npostmaster on an alternate port there will be no conflict and \nall hell may break loose.\n\n-Alfred\n", "msg_date": "Sat, 8 Jul 2000 06:28:00 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fcntl(SETLK) [was Re: 2nd update on TOAST]" }, { "msg_contents": "> > That's a good point. I don't think so because the socket will only\n> > create for one user. Basically, we don't need something bulletproof\n> > here. We just need something to prevent admins from accidentally\n> > starting two postmasters on the same port.\n> \n> Actually I just remebered the issue here, if one wants to start\n> postmaster on an alternate port there will be no conflict and \n> all hell may break loose.\n\nWe already lock the /data directory. 
This is for the port lock.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Jul 2000 09:40:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fcntl(SETLK) [was Re: 2nd update on TOAST]" }, { "msg_contents": "> > But how do you know if that file still belongs to an active postmaster? \n> > What if it exited before removing the file. Seems we would have to\n> > write the PID into the file, and do a kill() to see if it is running.\n\nI believe we already do this (SetPidFile() in\nutils/init/miscinit.c). Isn't it sufficient (1) to prevent starting a\nnew postmaster on the same data dir and (2) to unlink the accidently\nleft socket file?\n--\nTatsuo Ishii\n", "msg_date": "Sat, 08 Jul 2000 22:57:32 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fcntl(SETLK) [was Re: 2nd update on TOAST]" }, { "msg_contents": "> > > But how do you know if that file still belongs to an active postmaster? \n> > > What if it exited before removing the file. Seems we would have to\n> > > write the PID into the file, and do a kill() to see if it is running.\n> \n> I believe we already do this (SetPidFile() in\n> utils/init/miscinit.c). Isn't it sufficient (1) to prevent starting a\n> new postmaster on the same data dir and (2) to unlink the accidently\n> left socket file?\n\nI noticed what I was missing after sending the mail. Sorry for the\nconfusion. 
Seems the idea trying to connect a postmaster looks good.\n--\nTatsuo Ishii\n", "msg_date": "Sat, 08 Jul 2000 23:09:23 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fcntl(SETLK) [was Re: 2nd update on TOAST]" }, { "msg_contents": "Tom Lane writes:\n\n> Bruce and I were just talking by phone about this, and we realized that\n> there is a completely different approach to making that decision: if you\n> want to know whether there's an old postmaster connected to a socket\n> file, try to connect to the old postmaster!\n\nIt seems that that would completely reverse the assumption of risk.\nCurrently, the postmaster may fail to start because there's a stale socket\nfile lying around, out of respect to a running colleague. With this idea\nit would be the running postmaster's job to \"defend\" his socket against\nnewly starting colleagues. That doesn't seem fair.\n\nWhat are our problems?\n\nThere's a possible DoS attack when someone else comes first and creates a\nfile /tmp/.s.PGSQL.5432. But detecting whether there's another program\nrunning on that socket (if it's a socket) isn't going to help because you\nmost likely won't be able to delete it anyway. The solution to this is to\nmake the path of the socket file configurable more easily so that the\nadministrator has the choice of putting it a safer place that he prepared\nappropriately.\n\nA complementary solution is of course to add an option to run without Unix\nsocket, since we don't rely on the socket file for data directory locking\nanymore. In fact, does anybody mind if I add such an option? We can have\n\ntcpip_socket = yes|no\nunix_socket = yes|no\n\n(Security-conscious users may choose to turn off both. :-))\n\nThe other problem is a socket file left behind by a crashed postmaster. I\ndon't consider this such a big problem; a crashed postmaster is not the\nnormal mode of operation. The friendly message we have right now seems\nalright to me. 
And it's a way of tell that the postmaster crashed at all.\n\nOne idea to get the pid in there somewhere is creating a socket file\n\"/tmp/.s.PGSQL.port.pid\" and making /tmp/.s.PGSQL.port a symlink to it.\nThen clients don't know the difference, but the server knows the pid and\ncan take appropriate action. Or make the symlink the other way around, not\nsure.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 8 Jul 2000 16:26:17 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fcntl(SETLK) [was Re: 2nd update on TOAST] " }, { "msg_contents": "> The other problem is a socket file left behind by a crashed postmaster. I\n> don't consider this such a big problem; a crashed postmaster is not the\n> normal mode of operation. The friendly message we have right now seems\n> alright to me. And it's a way of tell that the postmaster crashed at all.\n> \n> One idea to get the pid in there somewhere is creating a socket file\n> \"/tmp/.s.PGSQL.port.pid\" and making /tmp/.s.PGSQL.port a symlink to it.\n> Then clients don't know the difference, but the server knows the pid and\n> can take appropriate action. Or make the symlink the other way around, not\n> sure.\n\nThe symlink is an interesting idea. lstat() on the normal name can give\nthe file name with pid.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Jul 2000 11:03:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fcntl(SETLK) [was Re: 2nd update on TOAST]" }, { "msg_contents": "\n> There's a possible DoS attack when someone else comes first and creates a\n> file /tmp/.s.PGSQL.5432. 
But detecting whether there's another program\n> running on that socket (if it's a socket) isn't going to help because you\n> most likely won't be able to delete it anyway. The solution to this is to\n> make the path of the socket file configurable more easily so that the\n> administrator has the choice of putting it a safer place that he prepared\n> appropriately.\n\nIf you are worried about DoS, I think the only solution is to figure out\na way to be using one of the reserved <1000 ports. I don't think there's\nany way around that is there? Also presumably not using a reserved port\nis a security risk. Not that I'm worried.\n", "msg_date": "Sun, 09 Jul 2000 01:08:27 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fcntl(SETLK) [was Re: 2nd update on TOAST]" }, { "msg_contents": "Mike Mascari <[email protected]> writes:\n> I don't get this. Isn't there a race condition here?\n\nStrictly speaking, there is, but the race window is only a couple\nof kernel calls wide, and as Bruce pointed out we do not need something\nthat is absolutely gold-plated bulletproof. We are just trying to\nprevent dbadmins from accidentally starting two postmasters on the\nsame port number.\n\nThe way this would work is that pqcomm.c would do something like\n\n\tif (socketFileAlreadyExists) {\n\t\ttry to open connection to existing postmaster;\n\t\tif (successful) {\n\t\t\treport port conflict and die;\n\t\t}\n\t\tdelete existing socket file;\n\t}\n\tbind(socket); // kernel creates new socket file here\n\tlisten();\n\nThe race condition here is that if newly-started postmaster A has\nexecuted bind() but not yet listen(), then newly-started postmaster B\ncould come along, observe the existing socket file, try to open\nconnection, fail, delete socket file, proceed. 
AFAIK B will be allowed\nto bind() and create a new socket file, and A ends up listening to a\nport that's lost in hyperspace --- no one else can ever connect to it\nbecause it has no visible representative in the filesystem.\n\nBut as soon as A has executed listen() it's safe --- even though it's\nnot really ready to accept connections yet, the attempted connect from\nB will wait till it does. (We should, therefore, use a plain vanilla\nconnect attempt for the probe --- no non-blocking connect or anything\nfancy.)\n\nThe bind-to-listen delay in pqcomm.c is currently several lines long,\nbut there's no reason they couldn't be successive kernel calls with\nnothing but a test for bind() failure between.\n\nThat strikes me as plenty close enough...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jul 2000 12:13:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fcntl(SETLK) [was Re: 2nd update on TOAST] " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> It seems that that would completely reverse the assumption of risk.\n> Currently, the postmaster may fail to start because there's a stale socket\n> file lying around, out of respect to a running colleague. With this idea\n> it would be the running postmaster's job to \"defend\" his socket against\n> newly starting colleagues. That doesn't seem fair.\n\nTrue, it would reverse the most probable failure mode, but I'm not sure\nthat's a bad thing.\n\n> The other problem is a socket file left behind by a crashed postmaster. I\n> don't consider this such a big problem; a crashed postmaster is not the\n> normal mode of operation. The friendly message we have right now seems\n> alright to me. And it's a way of tell that the postmaster crashed at all.\n\nNo, actually this is a *big* problem. 
That friendly message is no help\nto a system boot script that can't read it (the same point you've made\nrepeatedly w.r.t configure issues; surprised you don't see it here).\n\nIf I do a fast shutdown of my Unix system (the kind where shutdown does\na 'kill -9' on all user processes --- on HPUX systems this is invoked by\nhitting the power switch or by the power supply overtemperature sensor)\nthen the postmaster doesn't get a chance to clean out its socket file.\nAfter reboot, the postmaster fails to start up until I manually\nintervene by removing the socket file. That's not robust and not\nacceptable.\n\nThe way I currently get around this (and I believe it's a pretty popular\nthing to do) is that my postmaster-start script unconditionally deletes\nthe socket file before launching the postmaster. That's actually far\nriskier than what we are discussing, because there is *no* safety check\nfor an already-started postmaster. A connection check would be a big\nimprovement.\n\nI consider failure-to-start during normal system bootup to be a far\ngraver risk than the possibility that a second postmaster will usurp\na first postmaster's Unix socket --- especially since the latter could\nonly happen if the first postmaster isn't answering connections, in\nwhich case allowing it to keep the socket is of dubious value anyhow.\nSo reversing the presumption of innocence seems like a good idea to me.\n\n> ... The solution to this is to make the path of the socket file\n> configurable more easily so that the administrator has the choice of\n> putting it a safer place that he prepared appropriately.\n\nWe talked about that in the original discussion (you might want to\nreview the flock pghackers thread from late August '98). The trouble is\nthat the socket file path is a critical part of the client-to-postmaster\nprotocol: change the path, and existing clients don't know where to\nconnect. Oops. 
So even though /tmp is obviously a pretty bogus place\nto keep the socket, the compatibility headaches of moving it are so\ngreat that no one really wants to bite the bullet.\n\nWe talked about compromises like keeping the real socket in some safer\ndirectory, with a symlink from /tmp for old clients, and I think that's\nwhat will happen eventually. But please note that if the socket file\npath is \"easily configurable\" then the same problem comes right back\nto bite you again. It's *not* \"easy\" to change your mind about where\nthe socket files live; on any given platform that decision had better be\ngraven on stone tablets, because you want all your clients of whatever\nvintage to be able to find your postmaster(s). I'm inclined to think\nthat a configure option might be counterproductive --- nailing it down\nin the per-OS template file seems much less likely to get screwed up.\n\nThe major problem with a hard-wired socket path that's not /tmp is\nthat you can't install the socket directory if you're not root, so the\nability to fire up a postmaster with no root privs whatever would no\nlonger exist. We could get around that if it were possible to run with\nonly TCP connection support, making Unix-domain connections an option\ninstead of the base requirement.\n\n> A complementary solution is of course to add an option to run without Unix\n> socket, since we don't rely on the socket file for data directory locking\n> anymore. In fact, does anybody mind if I add such an option? We can have\n> tcpip_socket = yes|no\n> unix_socket = yes|no\n\nYup, it would make a lot of sense to have an option for no Unix socket\nconnections (we already have that as an #ifdef for a couple of platforms\nwith no Unix socket support, but not as a postmaster start-time choice).\n\n> (Security-conscious users may choose to turn off both. :-))\n\nUh, not at the moment, because we use the port interlock(s) as a proxy\nfor a shared-memory interlock. 
Really there are three resources that\nwe must prevent concurrent postmasters from sharing:\n\t* data directory;\n\t* listen port number;\n\t* shared-memory blocks (and semaphore sets).\nWe have a good solution in place now for locking the data directory, but\nthe port interlock still needs work. Currently we use the port number\nto assign shmem/sema keys, and there is no separate interlock to guard\nagainst shmem conflicts. I believe we had a discussion a few months ago\nabout rejiggering the shmem key assignment method so that shmem\nconflicts would be detected and dealt with cleanly --- might be a good\nidea to make that happen before we go too far with port interlock\nchanges.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jul 2000 13:53:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fcntl(SETLK) [was Re: 2nd update on TOAST] " }, { "msg_contents": "* Bruce Momjian <[email protected]> [000708 06:40] wrote:\n> > > That's a good point. I don't think so because the socket will only\n> > > create for one user. Basically, we don't need something bulletproof\n> > > here. We just need something to prevent admins from accidentally\n> > > starting two postmasters on the same port.\n> > \n> > Actually I just remebered the issue here, if one wants to start\n> > postmaster on an alternate port there will be no conflict and \n> > all hell may break loose.\n> \n> We already lock the /data directory. 
This is for the port lock.\n\nThe whole process could be locked by an fcntl lock on a separate file,\nwhich I think was already mentioned, however I've deleted most of the\nthread unfortunately.\n\n/tmp/.l.PGSQL.5432 <- fcntl lockfile, acquired first.\n/tmp/.s.PGSQL.5432 <- socket.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Sat, 8 Jul 2000 14:41:23 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fcntl(SETLK) [was Re: 2nd update on TOAST]" } ]
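Alfred's two-file scheme above (an fcntl lockfile acquired before the socket is created) can be sketched in a few lines of C. This is an illustrative sketch of the idea under discussion, not actual postmaster code; the function name and lock path are made up:

```c
/* Sketch of the lockfile idea from the thread: take an exclusive
 * advisory fcntl() lock on a hypothetical /tmp/.l.PGSQL.<port> file
 * before creating /tmp/.s.PGSQL.<port>. */
#include <fcntl.h>
#include <unistd.h>

/* Returns an open fd holding the lock, or -1 if another process
 * (e.g. a second postmaster on the same port) already holds it. */
int acquire_port_lock(const char *lockpath)
{
    int fd = open(lockpath, O_RDWR | O_CREAT, 0600);
    struct flock lk;

    if (fd < 0)
        return -1;
    lk.l_type = F_WRLCK;
    lk.l_whence = SEEK_SET;
    lk.l_start = 0;
    lk.l_len = 0;               /* length 0 = lock the whole file */
    if (fcntl(fd, F_SETLK, &lk) < 0)
    {
        close(fd);              /* lock held elsewhere: give up */
        return -1;
    }
    /* Keep fd open: the advisory lock is released automatically when
     * the process exits, so a crashed postmaster leaves no stale lock. */
    return fd;
}
```

A second process making the same call on the same path has F_SETLK fail (errno EACCES or EAGAIN), which is exactly the "port already in use" signal the thread is after, and unlike a pid-file check it cannot go stale.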
[ { "msg_contents": "At 18:12 6/07/00 +0200, Peter Eisentraut wrote:\n>Philip Warner writes:\n>\n>> I'll also have to modify pg_restore to talk to the database directly (for\n>> lo import).\n>\n>psql has \\lo_import.\n>\n\nP.S. Another, possibly minor, advantage of using a direct db connection is\nI can allow the user to stop restoring the database on the first error,\nunlike a script file to psql.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 07 Jul 2000 02:55:20 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: pg_dump and LOs (another proposal) " }, { "msg_contents": "Philip Warner writes:\n\n> >psql has \\lo_import.\n\n> P.S. Another, possibly minor, advantage of using a direct db connection is\n> I can allow the user to stop restoring the database on the first error,\n> unlike a script file to psql.\n\n\\set ON_ERROR_STOP\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 7 Jul 2000 18:15:54 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump and LOs (another proposal) " } ]
[ { "msg_contents": "Please find attached a patch for the pg_dump directory which addresses:\n\n- The problems Jan reported\n\n- incompatibility with configure (now uses HAVE_LIBZ instead of HAVE_ZLIB)\n\n- a problem in auto-detecting archive file format on piped archives\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/", "msg_date": "Fri, 07 Jul 2000 03:01:36 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Fix for pg_dump" }, { "msg_contents": "Philip Warner wrote:\n> \n> Please find attached a patch for the pg_dump directory which addresses:\n> \n> - The problems Jan reported\n> \n> - incompatibility with configure (now uses HAVE_LIBZ instead of HAVE_ZLIB)\n> \n> - a problem in auto-detecting archive file format on piped archives\n> \n\n Applied\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n", "msg_date": "Thu, 6 Jul 2000 20:36:26 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Fix for pg_dump" }, { "msg_contents": "Seems someone has already applied this.\n\n> \n> Please find attached a patch for the pg_dump directory which addresses:\n> \n> - The problems Jan reported\n> \n> - incompatibility with configure (now uses HAVE_LIBZ instead of HAVE_ZLIB)\n> \n> - a problem in auto-detecting archive file format on piped archives\n> \n\n[ Attachment, skipping... 
]\n\n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.C.N. 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jul 2000 21:53:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix for pg_dump" } ]
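The HAVE_ZLIB to HAVE_LIBZ rename in this patch matters because the compression path is compiled conditionally; with the wrong macro spelling the zlib code silently drops out even when configure found the library. A minimal sketch of such a guard (the function name is illustrative, not pg_dump's real API):

```c
/* Illustrative guard: compression support exists only when configure's
 * library test defined HAVE_LIBZ (autoconf's spelling for finding -lz),
 * not the older hand-rolled HAVE_ZLIB.  default_compression() is a
 * made-up name for this sketch. */

/* Pick a default compression level for new archives. */
int default_compression(void)
{
#ifdef HAVE_LIBZ
    return -1;                  /* zlib's Z_DEFAULT_COMPRESSION */
#else
    return 0;                   /* no zlib at build time: store plain */
#endif
}
```

The same pattern lets a build without zlib still read and write uncompressed archives, which is why a --without-zlib configuration can remain functional.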
[ { "msg_contents": "I've occasionally griped that I do not like the coding practice of\nwriting\n\tif (strcmp(foo, bar))\nto mean\n\tif (strcmp(foo, bar) != 0)\nor its inverse\n\tif (!strcmp(foo, bar))\nto mean\n\tif (strcmp(foo, bar) == 0)\n\nMy past objection to this has been purely stylistic: it's too easy\nto read these constructs backwards, eg to think \"!strcmp()\" means\n\"not equal\". However, I've now had my nose rubbed in the fact that\nthis habit is actually dangerous.\n\nUp till just now, ruleutils.c contained code like this:\n\n bool tell_as = FALSE;\n\n /* Check if we must say AS ... */\n if (!IsA(tle->expr, Var))\n tell_as = strcmp(tle->resdom->resname, \"?column?\");\n\n\t/* more code... */\n\n if (tell_as)\n /* do something */\n\nThis is subtly wrong, because it will work as intended on many\nplatforms. But on some platforms, strcmp is capable of yielding\nvalues that are not 0 but whose low 8 bits are all 0. Stuff that\ninto a char-sized \"bool\" variable, and all of a sudden it's zero,\nreversing the intended behavior of the test.\n\nCorrect, portable coding of course is\n\n tell_as = strcmp(tle->resdom->resname, \"?column?\") != 0;\n\nThis error would not have happened if the author of this code had\nbeen in the habit of regarding strcmp's result as something to compare\nagainst 0, rather than as equivalent to a boolean value. So, I assert\nthat the above-mentioned coding practice is dangerous, because it can\nlead you to do things that aren't portable.\n\nI'm not planning to engage in a wholesale search-and-destroy mission\nfor misuses of strcmp and friends just at the moment, but maybe someone\nshould --- we may have comparable portability bugs elsewhere. 
In any\ncase I suggest we avoid this coding practice in future.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Jul 2000 19:03:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Memo on coding practices: strcmp() does not yield bool" }, { "msg_contents": "Tom Lane wrote:\n> \n> I've occasionally griped that I do not like the coding practice of\n> writing\n> \n> to mean\n> if (strcmp(foo, bar) != 0)\n> ...\n> My past objection to this has been purely stylistic: it's too easy\n> to read these constructs backwards, eg to think \"!strcmp()\" means\n> \"not equal\". However, I've now had my nose rubbed in the fact that\n> this habit is actually dangerous.\n> \n> Up till just now, ruleutils.c contained code like this:\n> \n> bool tell_as = FALSE;\n> \n> /* Check if we must say AS ... */\n> if (!IsA(tle->expr, Var))\n> tell_as = strcmp(tle->resdom->resname, \"?column?\");\n> \n> /* more code... */\n> \n> if (tell_as)\n> /* do something */\n> \n> This is subtly wrong, because it will work as intended on many\n> platforms. But on some platforms, strcmp is capable of yielding\n> values that are not 0 but whose low 8 bits are all 0. 
Stuff that\n> into a char-sized \"bool\" variable, and all of a sudden it's zero,\n> reversing the intended behavior of the test.\n\nI see your examples demonstrate the danger of inappropriate or\ninattentive type conversion (e.g., splicing an int into a char), but I'm\nmissing the danger you see, beyond a style offense, of \"if (strcmp(foo,\nbar))\"?\n\nRegards,\nEd Loehr\n", "msg_date": "Thu, 06 Jul 2000 18:37:37 -0500", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memo on coding practices: strcmp() does not yield bool" }, { "msg_contents": "Tom Lane wrote:\n> I've occasionally griped that I do not like the coding practice of\n> writing\n> if (strcmp(foo, bar))\n> to mean\n> if (strcmp(foo, bar) != 0)\n>\n> [...]\n>\n> Up till just now, ruleutils.c contained code like this:\n>\n> bool tell_as = FALSE;\n>\n> /* Check if we must say AS ... */\n> if (!IsA(tle->expr, Var))\n> tell_as = strcmp(tle->resdom->resname, \"?column?\");\n>\n> [...]\n\n Yeah, blame me for that one.\n\n Oh boy. Originally I wrote ruleutils.c as a proof that\n rewrite rules \"can\" tell what the original rule (or view)\n looked like. Someone called it a \"magic piece of software\"\n and we adopted it as a useful thing to dump views and rules\n (what we wheren't able before). Now you blame me for it's\n uglyness.\n\n Well done! I was so proude beeing able to show how much I\n knew about rewrite rules by coding a reverse interpreter for\n them. Just that I was too lazy to code it in a good style.\n\n Remind me on it if I ever ask \"what should I do next\".\n\n :-)\n\n\nJan\n\nPS: I'm sure Tom can read between the lines what I really wanted\n to say. For anyone else: The functionality of ruleutils.c is\n checked in the regression suite. None of the ports reported\n problems so far, so the requested style fixes are more or\n less of cosmetic nature by now. I'm more than busy doing\n other things actually. 
If someone else want's to learn\n abount the internals of rules, touch it and become\n responsible for it.\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Fri, 7 Jul 2000 02:43:27 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Memo on coding practices: strcmp() does not yield bool" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> Oh boy. Originally I wrote ruleutils.c as a proof that\n> rewrite rules \"can\" tell what the original rule (or view)\n> looked like. Someone called it a \"magic piece of software\"\n> and we adopted it as a useful thing to dump views and rules\n> (what we wheren't able before). Now you blame me for it's\n> uglyness.\n\nHey, I didn't mean to sound like I was picking on you in particular.\nThere are a lot of instances of that coding practice in our system.\n\nI just used ruleutils.c as an example because that was where the\nreported bug was --- and yes, this was from a regression test porting\nfailure report; the rules output was missing some AS clauses it\nshould've had. With this fix, we pass regress tests on MkLinux PPC \nat default optimization level. This bug was probably masked before\nbecause we couldn't compile with optimization on that platform,\ndue to the far worse portability bugs in fmgr.\n\nThe way I see it, today we learned one more tidbit about how to produce\nportable C code. You didn't know it before, and neither did I. No\nshame in that.\n\nThe more bugs we fix, the higher our standards become. 
It's all\npart of the process of world domination ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Jul 2000 22:16:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Memo on coding practices: strcmp() does not yield bool " }, { "msg_contents": "I checked through the code and found no other problem cases.\n\n> I've occasionally griped that I do not like the coding practice of\n> writing\n> \tif (strcmp(foo, bar))\n> to mean\n> \tif (strcmp(foo, bar) != 0)\n> or its inverse\n> \tif (!strcmp(foo, bar))\n> to mean\n> \tif (strcmp(foo, bar) == 0)\n> \n> My past objection to this has been purely stylistic: it's too easy\n> to read these constructs backwards, eg to think \"!strcmp()\" means\n> \"not equal\". However, I've now had my nose rubbed in the fact that\n> this habit is actually dangerous.\n> \n> Up till just now, ruleutils.c contained code like this:\n> \n> bool tell_as = FALSE;\n> \n> /* Check if we must say AS ... */\n> if (!IsA(tle->expr, Var))\n> tell_as = strcmp(tle->resdom->resname, \"?column?\");\n> \n> \t/* more code... */\n> \n> if (tell_as)\n> /* do something */\n> \n> This is subtly wrong, because it will work as intended on many\n> platforms. But on some platforms, strcmp is capable of yielding\n> values that are not 0 but whose low 8 bits are all 0. Stuff that\n> into a char-sized \"bool\" variable, and all of a sudden it's zero,\n> reversing the intended behavior of the test.\n> \n> Correct, portable coding of course is\n> \n> tell_as = strcmp(tle->resdom->resname, \"?column?\") != 0;\n> \n> This error would not have happened if the author of this code had\n> been in the habit of regarding strcmp's result as something to compare\n> against 0, rather than as equivalent to a boolean value. 
So, I assert\n> that the above-mentioned coding practice is dangerous, because it can\n> lead you to do things that aren't portable.\n> \n> I'm not planning to engage in a wholesale search-and-destroy mission\n> for misuses of strcmp and friends just at the moment, but maybe someone\n> should --- we may have comparable portability bugs elsewhere. In any\n> case I suggest we avoid this coding practice in future.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jul 2000 23:26:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memo on coding practices: strcmp() does not yield bool" }, { "msg_contents": "My experience has been the same as Tom's (only perhaps more so, he\ndidn't actually admit to being confused by \"if (strcmp(...))\" :)\n\nI've always been uncomfortable with that \"implicit zero\", and have been\nconfused more than once by code which does not include the \"= 0\" or \"!=\n0\". 
When I'm modifying code, I usually will add this stuff in.\n\n - Thomas\n", "msg_date": "Fri, 07 Jul 2000 06:07:01 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memo on coding practices: strcmp() does not yield bool" }, { "msg_contents": "On Thu, 6 Jul 2000, Tom Lane wrote:\n\n> I've occasionally griped that I do not like the coding practice of\n> writing\n> \tif (strcmp(foo, bar))\n> to mean\n> \tif (strcmp(foo, bar) != 0)\n\nWhy not define a macro to avoid the urge to take shortcuts like that in\nthe future?\n\n#define streq(a,b) (strcmp((a), (b))==0)\n\nThis is guaranteed to yield 1 or 0, and it's very readable.\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 7 Jul 2000 07:43:51 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Memo on coding practices: strcmp() does not yield bool" }, { "msg_contents": "Ed Loehr <[email protected]> writes:\n> I see your examples demonstrate the danger of inappropriate or\n> inattentive type conversion (e.g., splicing an int into a char), but I'm\n> missing the danger you see, beyond a style offense, of \"if (strcmp(foo,\n> bar))\"?\n\n\"if (strcmp(foo, bar))\" is portable, no doubt about it. My point is\nthat the idiom encourages one to think of strcmp() as yielding bool,\nwhich leads directly to the sort of thinko I exhibited. It's a\nslippery-slope argument, basically.\n\nI had always disliked the idiom on stylistic grounds, but I never quite\nhad a rational reason why. Now I do: it's a type violation. If C had\na distinction between int and bool then you'd not be allowed to write\nthis. 
As an old Pascal programmer I prefer to think of the two types\nas distinct...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jul 2000 13:05:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Memo on coding practices: strcmp() does not yield bool " } ]
[ { "msg_contents": ">> 1) First of all, you can't use IBM's make utility, gotta use GNU make.\n>\n>Quoth the installation instructions:\n>\n>\"Building PostgreSQL requires GNU make. It will not work with other make\n>programs.\"\n\nYes I was just making sure all of the AIX instructions were combined. :)\n\n>> you have to use the command:\n>>\n>> ./configure --with-template=aix_gcc\n>\n>That has got to be a bug. The configure script should look for gcc\n>first. Can you show the relevant lines of configure output (checking for\n>cc... etc), when you don't use that option?\n\nIt hung up on line 1226. I just tested this again and can confirm, it will\nnot find gcc unless I explicitly use this switch. The output from\nconfig.log is:\n\nconfigure:1102: checking for gcc\nconfigure:1215: checking whether the C compiler\n(xlc -qmaxmem=16384 -qhalt=w -qs\nrcmsg -qlanglvl=extended -qlonglong ) works\nconfigure:1231: xlc -o\nconftest -qmaxmem=16384 -qhalt=w -qsrcmsg -qlanglvl=exten\nded -qlonglong conftest.c 1>&5\n./configure[1230]: xlc: not found\nconfigure: failed program was:\n\n#line 1226 \"configure\"\n#include \"confdefs.h\"\n\nmain(){return(0);}\n\n>> Making postgres.imp\n>> ./backend/port/aix/mkldexport.sh postgres /usr/local/bin > postgres.imp\nnm: postgres: 0654-200 Cannot open the specified file.\n>> nm: A file or directory in the path name does not exist.\n>>\n>> This is apparently a bug in the make scripts for Postgres.\n>\n>Can you describe how to fix it? The AIX shared library stuff is an enigma\n>to me.\n\nWell, all I did was do gmake in another directory (./src/backend) and then I\ncopy the postgres.imp file from that directory back to ./src, and then the\nmake can continue OK. So its got to be a simple bug in the makefile for\n./src that works when you build postgres.imp in another directory. I don't\nknow my way around makefiles except for the very basics, so I'm sorry I\ncan't help more... 
I'll run more tests if you'd like, let me know what you'd\nlike to see.\n\n>\n>> I hand edited the Makefile.global file in ./src and commented out the\n>> line \"HAVE_Cplusplus=true\"\n>\n>Quoth configure --help:\n>\n>\" --without-CXX prevent building C++ code\"\n\nAh, you are wise. :) Yes, that switch is better. BUT, since I do have g++\ninstalled and working, why can't the C++ code be built in the first place?\n\n>\n>\n>> Oh, and as the make output scrolled by, I see that it failed as well\n>> building some plpsql stuff, but it was non fatal.\n>\n>If it failed then it was fatal, and vice versa. Please elaborate.\n\nHere is an excerpt from stdout/err when I do a gmake all from ./src. You'll\nnotice that it starts to build plpgsql and then dies, but the make\ncontinues. I don't know if this is vital (the procedure stuff?)- I haven't\ntried actually doing anything with Postgres yet but I do have postmaster\nrunning, I created a database, and I can connect to it. Of course, the\nregression test failed because I don't have plpgsql! 
Anyway, here's the\nexcept:\n\ngmake[2]: Entering directory `/usr/src/postgresql-7.0.2/src/pl/plpgsql'\ngmake -C src all\ngmake[3]: Entering directory `/usr/src/postgresql-7.0.2/src/pl/plpgsql/src'\n../../../backend/port/aix/mkldexport.sh libplpgsql.a /usr/local/pgsql/lib >\nlibp\nlpgsql.exp\nld -H512 -bM:SRE -bI:../../../backend/postgres.imp -bE:libplpgsql.exp -o\nlibplpg\nsql.so libplpgsql.a -lPW -lcrypt -lld -lnsl -ldl -lm -lcurses -lc\nld: 0711-327 WARNING: Entry point not found: __start\nld: 0711-317 ERROR: Undefined symbol: CurrentMemoryContext\nld: 0711-317 ERROR: Undefined symbol: .MemoryContextAlloc\nld: 0711-317 ERROR: Undefined symbol: .MemoryContextFree\nld: 0711-317 ERROR: Undefined symbol: .MemoryContextRealloc\nld: 0711-317 ERROR: Undefined symbol: .elog\nld: 0711-317 ERROR: Undefined symbol: .SearchSysCacheTuple\nld: 0711-317 ERROR: Undefined symbol: .textout\nld: 0711-317 ERROR: Undefined symbol: .nameout\nld: 0711-317 ERROR: Undefined symbol: .fmgr_info\nld: 0711-317 ERROR: Undefined symbol: .int2in\nld: 0711-317 ERROR: Undefined symbol: .SPI_connect\nld: 0711-317 ERROR: Undefined symbol: CurrentTriggerData\nld: 0711-317 ERROR: Undefined symbol: .SPI_finish\nld: 0711-317 ERROR: Undefined symbol: Warn_restart\nld: 0711-317 ERROR: Undefined symbol: .SPI_palloc\nld: 0711-317 ERROR: Undefined symbol: .textin\nld: 0711-317 ERROR: Undefined symbol: .namein\nld: 0711-317 ERROR: Undefined symbol: .get_temp_rel_by_physicalname\nld: 0711-317 ERROR: Undefined symbol: .SPI_gettypeid\nld: 0711-317 ERROR: Undefined symbol: .SPI_copytuple\nld: 0711-317 ERROR: Undefined symbol: SPI_processed\nld: 0711-317 ERROR: Undefined symbol: SPI_tuptable\nld: 0711-317 ERROR: Undefined symbol: fmgr_pl_finfo\nld: 0711-317 ERROR: Undefined symbol: .SPI_fnumber\nld: 0711-317 ERROR: Undefined symbol: .SPI_getvalue\nld: 0711-317 ERROR: Undefined symbol: .SPI_prepare\nld: 0711-317 ERROR: Undefined symbol: .SPI_saveplan\nld: 0711-317 ERROR: Undefined symbol: .SPI_getbinval\nld: 
0711-317 ERROR: Undefined symbol: .SPI_execp\nld: 0711-317 ERROR: Undefined symbol: .heap_formtuple\nld: 0711-317 ERROR: Undefined symbol: .newNode\nld: 0711-317 ERROR: Undefined symbol: .SPI_push\nld: 0711-317 ERROR: Undefined symbol: .ExecEvalExpr\nld: 0711-317 ERROR: Undefined symbol: .SPI_pop\nld: 0711-317 ERROR: Undefined symbol: .length\nld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more\ninformation.\ngmake[3]: *** [libplpgsql.so] Error 8\ngmake[3]: Leaving directory `/usr/src/postgresql-7.0.2/src/pl/plpgsql/src'\ngmake[2]: [all] Error 2 (ignored)\ngmake[2]: Leaving directory `/usr/src/postgresql-7.0.2/src/pl/plpgsql'\ngmake[1]: Leaving directory `/usr/src/postgresql-7.0.2/src/pl'\nAll of PostgreSQL is successfully made. Ready to install.\n\nI hope this helps!\n\n-Richard\n\n", "msg_date": "Thu, 6 Jul 2000 20:15:48 -0400", "msg_from": "[email protected] (Richard Sand)", "msg_from_op": true, "msg_subject": "Re: Lessons learned on how to build 7.0.2 on AIX 4.x" }, { "msg_contents": "Richard Sand writes:\n\n> >> ./configure --with-template=aix_gcc\n\n> It hung up on line 1226. I just tested this again and can confirm, it will\n> not find gcc unless I explicitly use this switch.\n\nI see. The template matching logic preempts the choice of compiler. 
We'll\nneed to ponder a fix for that.\n\n> >> Making postgres.imp\n> >> ./backend/port/aix/mkldexport.sh postgres /usr/local/bin > postgres.imp\n> nm: postgres: 0654-200 Cannot open the specified file.\n> >> nm: A file or directory in the path name does not exist.\n\n> So its got to be a simple bug in the makefile for ./src that works\n> when you build postgres.imp in another directory.\n\nLet's see: The rule that invokes this is\n\nsrc/backend/Makefile:\n\nall: postgres $(POSTGRES_IMP) ...\n\nThe commands are in src/makefiles/Makefile.aix:\n\n$(POSTGRES_IMP):\n @echo Making $@\n $(MKLDEXPORT) postgres $(BINDIR) > $@\n $(CC) -Wl,-bE:$(SRCDIR)/backend/$@ -o postgres $(OBJS) ../utils/version.o $(LDFLAGS)\n\nNow the error message seems to imply that it can't find the `postgres'\nexecutable, but the postgres executable should exist before this rule\nruns. Now you seems to be saying that you have to moving postgres.imp to\nthe src/ directory corrected this problem, but sorry, this doesn't make\nsense to me. :-( You could maybe help rebuilding completely from scratch\nand showing the complete make output so we can see what is being invoked\nin what order.\n\n\n> BUT, since I do have g++ installed and working, why can't the C++ code\n> be built in the first place?\n\nC++ is so wonderfully incompatible to itself, and the libpq++ interface is\nnot used so much that few people bother fixing it. 
Be our guest.\n\n\n> gmake[2]: Entering directory `/usr/src/postgresql-7.0.2/src/pl/plpgsql'\n> gmake -C src all\n> gmake[3]: Entering directory `/usr/src/postgresql-7.0.2/src/pl/plpgsql/src'\n> ../../../backend/port/aix/mkldexport.sh libplpgsql.a /usr/local/pgsql/lib >\n> libp\n> lpgsql.exp\n> ld -H512 -bM:SRE -bI:../../../backend/postgres.imp -bE:libplpgsql.exp -o\n> libplpg\n> sql.so libplpgsql.a -lPW -lcrypt -lld -lnsl -ldl -lm -lcurses -lc\n> ld: 0711-327 WARNING: Entry point not found: __start\n> ld: 0711-317 ERROR: Undefined symbol: CurrentMemoryContext\n> ld: 0711-317 ERROR: Undefined symbol: .MemoryContextAlloc\n> ld: 0711-317 ERROR: Undefined symbol: .MemoryContextFree\n[more of that]\n\nWell, yes, these symbols are undefined within plpgsql. They are supposed\nto be resolved when you load plpgsql into the server at runtime. Now I am\nventuring a guess here that this postgres.imp file is supposed to contain\na list of symbols that are defined by the postmaster and that the\ndynamically loadable modules such as plpgsql should not worry about, but\nas we saw, this file is not being created correctly. 
(Perhaps you should\ntry to move it back to src/backend for the purposes of building plpgsql.\nThat would at least give it a chance of finding the file.)\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 7 Jul 2000 21:27:24 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lessons learned on how to build 7.0.2 on AIX 4.x" }, { "msg_contents": "> > lpgsql.exp\n> > ld -H512 -bM:SRE -bI:../../../backend/postgres.imp -bE:libplpgsql.exp -o\n> > libplpg\n> > sql.so libplpgsql.a -lPW -lcrypt -lld -lnsl -ldl -lm -lcurses -lc\n> > ld: 0711-327 WARNING: Entry point not found: __start\n> > ld: 0711-317 ERROR: Undefined symbol: CurrentMemoryContext\n> > ld: 0711-317 ERROR: Undefined symbol: .MemoryContextAlloc\n> > ld: 0711-317 ERROR: Undefined symbol: .MemoryContextFree\n> [more of that]\n>\n> Well, yes, these symbols are undefined within plpgsql. They are supposed\n> to be resolved when you load plpgsql into the server at runtime. Now I am\n> venturing a guess here that this postgres.imp file is supposed to contain\n> a list of symbols that are defined by the postmaster and that the\n> dynamically loadable modules such as plpgsql should not worry about, but\n> as we saw, this file is not being created correctly. (Perhaps you should\n> try to move it back to src/backend for the purposes of building plpgsql.\n> That would at least give it a chance of finding the file.)\n\nExactly that...postgres.imp contains a list of symbols that are available\nfor modules to use to resolve in their code.\n\nThe script src/backend/port/aix/mkldexport.sh gathers the symbols from\npostgres.o. Then that list is given to the linker via -bE when making the\nexecutable to allow those symbols to be used by external modules. 
When\ncompiling the other modules, the linker needs to get that file with -bI to\ntell it that any unresolved symbols that are in postgres.imp will be in the\npostgres exectable.\n\nDid you get it to compile without munging the order of the #includes? On my\n4.1.5 system, postgres.h has to the first #include file. Seems there are\ntwo different prototypes for getopt in the aix system includes. If unistd.h\nor math.h are included *before* postgres.h, the bogus prototype is\nencountered first. There's also a function somewhere in libpq that has an\nunsigned char parameter that is at odds with the -qchars=signed flag in the\naix compile.\n\nThese are really only problems because I put in -qhalt=w to cause the\ncompile to treat warnings as errors and stop the compile. I believe Andreas\nwanted to turn this off a while back, but I had always found it useful and\nrather cool that a project as large as postgres could compile without any\nwarnings whatsoever.\n\ndarrenk\n\n", "msg_date": "Fri, 7 Jul 2000 15:42:30 -0400", "msg_from": "\"Darren King\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Lessons learned on how to build 7.0.2 on AIX 4.x" } ]
[ { "msg_contents": "Hello,\n\nI wish to perform the following query in a plsql function upon an\nupdate/insert/delete trigger:\n\nUPDATE entry_stats \nSET entry_maxprice=MAX(item_price) \nFROM item \nWHERE item.item_entry_id=NEW.item_entry_id\n AND item.item_live = 't';\n\nHowever there will be situations where there are no records for\na given item_entry_id with item_live='t'. Currently when I try\ndo update/insert a record so that this occurs I get the following\nerror 'ERROR: ExecutePlan: (junk) 'ctid' is NULL!' and the\ninsert/update attempt is rolled back.\n\nIn this scenario I want entry_stats.entry_maxprice to be set to zero \n(which is also the default value for that column if it's any help).\n\nIs there a good way of going about this or should I just be wrapping\nthe whole thing up in an\n====\nIF (COUNT(*) FROM item \n WHERE item.item_entry_id=NEW.item_entry_id\n AND item.item_live = 't')>0\nTHEN\n UPDATE ... =MAX() ...\nELSE\n UPDATE SET ... =0 ...\nEND IF\n====\n?\n\nThanks\n\n-- \nPaul McGarry mailto:[email protected] \nSystems Integrator http://www.opentec.com.au \nOpentec Pty Ltd http://www.iebusiness.com.au\n6 Lyon Park Road Phone: (02) 9878 1744 \nNorth Ryde NSW 2113 Fax: (02) 9878 1755\n", "msg_date": "Fri, 07 Jul 2000 11:44:15 +1000", "msg_from": "Paul McGarry <[email protected]>", "msg_from_op": true, "msg_subject": "MAX() of 0 records." }, { "msg_contents": "Paul McGarry <[email protected]> writes:\n> However there will be situations where there are no records for\n> a given item_entry_id with item_live='t'. Currently when I try\n> do update/insert a record so that this occurs I get the following\n> error 'ERROR: ExecutePlan: (junk) 'ctid' is NULL!' 
and the\n> insert/update attempt is rolled back.\n\nThis seems like a backend bug to me, but being an overworked hacker\nI'm too lazy to try to reconstruct the scenario from your sketch.\nCould I trouble you to submit a formal bug report with a specific,\nhopefully compact script that triggers the problem?\n\n> Is there a good way of going about this or should I just be wrapping\n> the whole thing up in an\n\nUntil I've isolated the bug I don't want to speculate about whether\nit'll be reasonable to try to back-patch a fix into 7.0.*. Usually\nwe don't risk back-patching complex fixes into stable releases, but\nthe fix might be simple once we know the cause.\n\n\t\t\tregards, tom lane\n\nPS: I trust you're using 7.0.* ?\n", "msg_date": "Thu, 06 Jul 2000 23:29:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MAX() of 0 records. " }, { "msg_contents": "Tom Lane wrote:\n\n> This seems like a backend bug to me, but being an overworked hacker\n> I'm too lazy to try to reconstruct the scenario from your sketch.\n> Could I trouble you to submit a formal bug report with a specific,\n> hopefully compact script that triggers the problem?\n\nI've attached it here, along with the output I see. I am running 7.0.2\nand the problem occurs on both my x86 Linux and Sparc Solaris \ninstallations.\n\nIn addition to the output attached the postmaster console adds:\n====\nDEBUG: Last error occured while executing PL/pgSQL function\nsetentrystats\nDEBUG: line 4 at SQL statement\n====\n\nThanks,\n\n-- \nPaul McGarry mailto:[email protected] \nSystems Integrator http://www.opentec.com.au \nOpentec Pty Ltd http://www.iebusiness.com.au\n6 Lyon Park Road Phone: (02) 9878 1744 \nNorth Ryde NSW 2113 Fax: (02) 9878 1755", "msg_date": "Fri, 07 Jul 2000 15:34:13 +1000", "msg_from": "Paul McGarry <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] MAX() of 0 records." 
}, { "msg_contents": "Paul McGarry <[email protected]> writes:\n> CREATE TABLE entry_stats\n> (\n> entry_id INT4 NOT NULL REFERENCES entry ON DELETE CASCADE,\n> entry_minprice INT4 NOT NULL DEFAULT 0\n> );\n>\n> CREATE TABLE item(\n> item_id INT4 PRIMARY KEY,\n> item_entry_id INT4 NOT NULL REFERENCES entry ON DELETE NO ACTION,\n> item_price INT4 NOT NULL,\n> item_live bool NOT NULL DEFAULT 'n'\n> );\n> \n> [trigger using]\n>\n> UPDATE entry_stats \n> SET entry_minprice=min(item_price)\n> FROM item where item_entry_id=NEW.item_entry_id AND item_live='t';\n>\n> ERROR: ExecutePlan: (junk) `ctid' is NULL!\n\nHmm. There are several things going on here, but one thing that needs\nclarification is whether this UPDATE is written correctly. Since it\nhas no constraint on entry_stats, it seems to me that *every* row of\nentry_stats will have entry_minprice set to the same value, namely\nthe minimum item_price over those item rows that satisfy the WHERE\ncondition. Surely that wasn't what you wanted? Shouldn't there be an\nadditional WHERE clause like entry_id = item_entry_id?\n\nAnyway, the proximate cause of the error message is as follows.\nA cross-table UPDATE like this is actually implemented as if it were\na SELECT:\n\tSELECT entry_stats.ctid, min(item_price)\n\tFROM entry_stats, item WHERE ...;\nFor each row emitted by this underlying SELECT, the executor takes\nthe ctid result column (which identifies the particular target tuple\nin the target table) and updates that tuple by stuffing the additional\nSELECT result column(s) into the specified fields of that tuple.\n\nNow, if you try a SELECT like the above in a situation where there are\nno tuples matching the WHERE clause, what you get out is a row of all\nNULLs --- because that's what you get from SELECT if there's an\naggregate function with no GROUP BY and no input rows. The executor\ngets this dummy row, tries to do a tuple update using it, and chokes\nbecause the ctid is NULL. 
So that explains why the error message is\nwhat it is. Next question is what if anything should be done\ndifferently. We could just have the executor ignore result rows where\nctid is NULL, but that seems like patching around the problem not fixing\nit.\n\nThe thing that jumps out at me is that if you actually try the SELECT\nillustrated above, you do not get any row, null or otherwise; you get\nERROR: Attribute entry_stats.ctid must be GROUPed or used in an\n aggregate function\nwhich is a mighty valid complaint. If you are aggregating rows to get\nthe MIN() then you don't have a unique ctid to deliver, so which row\nought to be updated? This is the system's way of expressing the same\nconcern I started with: this query doesn't seem to be well-posed.\n\nYou don't see this complaint when you try the UPDATE, because ctid\nis added to the implicit select result in a back-door way that doesn't\nget checked for GROUP-BY validity. I wonder whether that is the bug.\nIf so, we'd basically be saying that no query like this is valid\n(since UPDATE doesn't have a GROUP BY option, there'd be no way to\npass the grouping check).\n\nAnother way to look at it is that perhaps an UPDATE involving aggregate\nfunctions ought to be implicitly treated as GROUP BY targetTable.ctid.\nIn other words, the MIN() or other aggregate function is implicitly\nevaluated over only those join tuples that are formed for a single\ntarget tuple. Intuitively that seems to make sense, and it solves the\nproblem you're complaining of, because no matching tuples = no groups =\nno result tuples = update does nothing = no problem. But I have a\nsneaking suspicion that I'm missing some nasty problem with this idea\ntoo.\n\nComments anyone? What exactly *should* be the behavior of an UPDATE\nthat uses an aggregate function and a join to another table? 
Over what\nset of tuples should the aggregate be evaluated?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jul 2000 03:06:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] MAX() of 0 records. " }, { "msg_contents": "Hi Tom,\n\n> Hmm. There are several things going on here, but one thing that needs\n> clarification is whether this UPDATE is written correctly. Since it\n\nMy goofup (you said you wanted a compact script!). \nYou are correct there should have been an extra where condition in\nthe triggered function.\n====\n UPDATE entry_stats \n SET entry_minprice=min(item_price)\n FROM item where item_entry_id=NEW.item_entry_id AND item_live='f';\n====\nShould really have been:\n====\n UPDATE entry_stats \n SET entry_minprice=min(item_price)\n FROM item where item_entry_id=NEW.item_entry_id\n AND entry_stats.entry_id=item_entry_id\n AND item_live='f';\n====\nwhich still generates the same error message (as the 'problem' is\ncaused by the where clause, not what is being updated).\n\nFWIW I've attached the real function that I've implemented to get \naround the error message. In all probability the way I'm handling\nit is the right way:\n\n1. Check I'm going to get a valid response from my aggregate\n2a. If so perform the update with the aggregate\n2b. If not perform the update with zeros(default value)\n\nOriginally I was just wondering if I could do it all in one go,\nTry to perform the update and automatically get the aggregate\nresult if it were 'available' and default to zeros if not.\n\nIf I forget about aggregate functions for a moment and just\nconsider an update where nothing matches the where criterion\nthen I'd still use the same logic above to reset the values\nto their default. 
The only differences between using the\naggregate function and not is that one throws an error and\nthe other just updates 0 rows.\n\n> The thing that jumps out at me is that if you actually try the SELECT\n> illustrated above, you do not get any row, null or otherwise; you get\n> ERROR: Attribute entry_stats.ctid must be GROUPed or used in an\n> aggregate function\n> which is a mighty valid complaint. If you are aggregating rows to get\n> the MIN() then you don't have a unique ctid to deliver, so which row\n> ought to be updated? This is the system's way of expressing the same\n> concern I started with: this query doesn't seem to be well-posed.\n> \n> You don't see this complaint when you try the UPDATE, because ctid\n> is added to the implicit select result in a back-door way that doesn't\n> get checked for GROUP-BY validity. I wonder whether that is the bug.\n> If so, we'd basically be saying that no query like this is valid\n> (since UPDATE doesn't have a GROUP BY option, there'd be no way to\n> pass the grouping check).\n\nWould that mean that any update that used an aggregate function\nwould be invalid? That would be a bit scary seeing as I am doing\nthis in part to get around using aggregate functions in a view.\n\n> Another way to look at it is that perhaps an UPDATE involving aggregate\n> functions ought to be implicitly treated as GROUP BY targetTable.ctid.\n\nWhat exactly is a ctid?\n\nThanks for your response Tom, it has been enlightening. 
I feel I'm\ngetting a better understanding of what's going inside pgsql by the\nday from yourself and other peoples posts on the various lists.\n\n-- \nPaul McGarry mailto:[email protected] \nSystems Integrator http://www.opentec.com.au \nOpentec Pty Ltd http://www.iebusiness.com.au\n6 Lyon Park Road Phone: (02) 9878 1744 \nNorth Ryde NSW 2113 Fax: (02) 9878 1755\n", "msg_date": "Fri, 07 Jul 2000 18:54:37 +1000", "msg_from": "Paul McGarry <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] MAX() of 0 records." }, { "msg_contents": "Here's the attachment I said I was going to attach to the last message.\n\nTFIF!\n\n-- \nPaul McGarry mailto:[email protected] \nSystems Integrator http://www.opentec.com.au \nOpentec Pty Ltd http://www.iebusiness.com.au\n6 Lyon Park Road Phone: (02) 9878 1744 \nNorth Ryde NSW 2113 Fax: (02) 9878 1755", "msg_date": "Fri, 07 Jul 2000 19:00:14 +1000", "msg_from": "Paul McGarry <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] MAX() of 0 records." }, { "msg_contents": "> UPDATE entry_stats \n> SET entry_maxprice=MAX(item_price) \n> FROM item \n> WHERE item.item_entry_id=NEW.item_entry_id\n> AND item.item_live = 't';\n\nTry\n\nCOALESCE(MAX(item_price),0)\n\nChristopher J.D. Currie\nComputer Technician\nDalhousie: DalTech - CTE\n_____________________________________________\nLord, deliver me from the man who never makes a mistake,\nand also from the man who makes the same mistake twice.\n-William James Mayo\n\n\n", "msg_date": "Fri, 7 Jul 2000 09:44:53 -0300", "msg_from": "\"DalTech - Continuing Technical Education\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MAX() of 0 records." 
}, { "msg_contents": "\nAnother observation is that if the WHERE clause is successful, it seems\nto update the first record in the target relation that it finds which is\na pretty random result.\n\npghack=# create table e(ee text, eee integer);\nCREATE\npghack=# create table f(ff text, fff integer);\nCREATE\npghack=# insert into e values('e', 1);\nINSERT 18871 1\npghack=# insert into e values('ee', 2);\nINSERT 18872 1\npghack=# insert into e values('eee', 3);\nINSERT 18873 1\npghack=# insert into f values('fff', 3);\nINSERT 18874 1\npghack=# insert into f values('ff', 2);\nINSERT 18875 1\npghack=# insert into f values('f', 1);\nINSERT 18876 1\npghack=# update e set eee=min(f.fff) from f;\nUPDATE 1\npghack=# select * from e;\n ee | eee \n-----+-----\n ee | 2\n eee | 3\n e | 1\n(3 rows)\n\npghack=# select min(f.fff) from f;\n min \n-----\n 1\n(1 row)\n\npghack=# update e set eee=min(f.fff) from f;\nUPDATE 1\npghack=# select min(f.fff) from f;\n min \n-----\n 1\n(1 row)\n\npghack=# select * from e;\n ee | eee \n-----+-----\n eee | 3\n e | 1\n ee | 1\n(3 rows)\n\npghack=# update e set eee=min(f.fff) from f;\nUPDATE 1\npghack=# select * from e;\n ee | eee \n-----+-----\n e | 1\n ee | 1\n eee | 1\n(3 rows)\n\n\n\n\nTom Lane wrote:\n> \n> Paul McGarry <[email protected]> writes:\n> > CREATE TABLE entry_stats\n> > (\n> > entry_id INT4 NOT NULL REFERENCES entry ON DELETE CASCADE,\n> > entry_minprice INT4 NOT NULL DEFAULT 0\n> > );\n> >\n> > CREATE TABLE item(\n> > item_id INT4 PRIMARY KEY,\n> > item_entry_id INT4 NOT NULL REFERENCES entry ON DELETE NO ACTION,\n> > item_price INT4 NOT NULL,\n> > item_live bool NOT NULL DEFAULT 'n'\n> > );\n> >\n> > [trigger using]\n> >\n> > UPDATE entry_stats\n> > SET entry_minprice=min(item_price)\n> > FROM item where item_entry_id=NEW.item_entry_id AND item_live='t';\n> >\n> > ERROR: ExecutePlan: (junk) `ctid' is NULL!\n> \n> Hmm. 
There are several things going on here, but one thing that needs\n> clarification is whether this UPDATE is written correctly. Since it\n> has no constraint on entry_stats, it seems to me that *every* row of\n> entry_stats will have entry_minprice set to the same value, namely\n> the minimum item_price over those item rows that satisfy the WHERE\n> condition. Surely that wasn't what you wanted? Shouldn't there be an\n> additional WHERE clause like entry_id = item_entry_id?\n> \n> Anyway, the proximate cause of the error message is as follows.\n> A cross-table UPDATE like this is actually implemented as if it were\n> a SELECT:\n> SELECT entry_stats.ctid, min(item_price)\n> FROM entry_stats, item WHERE ...;\n> For each row emitted by this underlying SELECT, the executor takes\n> the ctid result column (which identifies the particular target tuple\n> in the target table) and updates that tuple by stuffing the additional\n> SELECT result column(s) into the specified fields of that tuple.\n> \n> Now, if you try a SELECT like the above in a situation where there are\n> no tuples matching the WHERE clause, what you get out is a row of all\n> NULLs --- because that's what you get from SELECT if there's an\n> aggregate function with no GROUP BY and no input rows. The executor\n> gets this dummy row, tries to do a tuple update using it, and chokes\n> because the ctid is NULL. So that explains why the error message is\n> what it is. Next question is what if anything should be done\n> differently. We could just have the executor ignore result rows where\n> ctid is NULL, but that seems like patching around the problem not fixing\n> it.\n> \n> The thing that jumps out at me is that if you actually try the SELECT\n> illustrated above, you do not get any row, null or otherwise; you get\n> ERROR: Attribute entry_stats.ctid must be GROUPed or used in an\n> aggregate function\n> which is a mighty valid complaint. 
If you are aggregating rows to get\n> the MIN() then you don't have a unique ctid to deliver, so which row\n> ought to be updated? This is the system's way of expressing the same\n> concern I started with: this query doesn't seem to be well-posed.\n> \n> You don't see this complaint when you try the UPDATE, because ctid\n> is added to the implicit select result in a back-door way that doesn't\n> get checked for GROUP-BY validity. I wonder whether that is the bug.\n> If so, we'd basically be saying that no query like this is valid\n> (since UPDATE doesn't have a GROUP BY option, there'd be no way to\n> pass the grouping check).\n> \n> Another way to look at it is that perhaps an UPDATE involving aggregate\n> functions ought to be implicitly treated as GROUP BY targetTable.ctid.\n> In other words, the MIN() or other aggregate function is implicitly\n> evaluated over only those join tuples that are formed for a single\n> target tuple. Intuitively that seems to make sense, and it solves the\n> problem you're complaining of, because no matching tuples = no groups =\n> no result tuples = update does nothing = no problem. But I have a\n> sneaking suspicion that I'm missing some nasty problem with this idea\n> too.\n> \n> Comments anyone? What exactly *should* be the behavior of an UPDATE\n> that uses an aggregate function and a join to another table? Over what\n> set of tuples should the aggregate be evaluated?\n> \n> regards, tom lane\n", "msg_date": "Sat, 08 Jul 2000 01:35:02 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] MAX() of 0 records." 
}, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> Another observation is that if the WHERE clause is successful, it seems\n> to update the first record in the target relation that it finds which is\n> a pretty random result.\n\nWouldn't surprise me --- leastwise, you will get a random one of the\ninput ctid values emitted into the aggregated SELECT row. Offhand I'd\nhave expected the last-scanned one, not the first-scanned, but the\npoint is that the behavior is dependent on the implementation's choice\nof scanning order. This is exactly the uncertainty that the check for\n\"attribute must be GROUPed or used in an aggregate function\" is designed\nto protect you from. But ctid is (currently) escaping that check.\n\nIt seems to me that we have two reasonable ways to proceed:\n\n1. Forbid aggregates at the top level of UPDATE. Then you'd need to do\na subselect, perhaps something like\n\tUPDATE foo\n\tSET bar = (SELECT min(f1) FROM othertab\n\t WHERE othertab.keycol = foo.keycol)\n\tWHERE condition-determining-which-foo-rows-to-update\nif you wanted to use an aggregate. This is pretty ugly, especially so\nif the outer WHERE condition is itself dependent on scanning othertab\nto see if there are matches to the foo row.\n\n2. Do an implicit GROUP BY ctid as I suggested last night. I still\ndon't see any holes in that idea, but I am still worried that there\nmight be one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jul 2000 12:35:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Re: [SQL] MAX() of 0 records. " }, { "msg_contents": "On Fri, 07 Jul 2000, Tom Lane wrote:\n> Chris Bitmead <[email protected]> writes:\n> \tUPDATE foo\n> \tSET bar = (SELECT min(f1) FROM othertab\n> \t WHERE othertab.keycol = foo.keycol)\n> \tWHERE condition-determining-which-foo-rows-to-update\n> if you wanted to use an aggregate. 
This is pretty ugly, especially so\n\nIf you use min(x) or max(x) frequently, isn't it best to make a trigger that\nintercepts x on insert and update, then check it and store it somewhere rather\nthan scanning for it everytime? (not that this fixes any db problem thats being\ndiscussed here)\n\n -- \n\t\t\tRobert\n", "msg_date": "Fri, 7 Jul 2000 12:53:27 -0400", "msg_from": "\"Robert B. Easter\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Re: [SQL] MAX() of 0 records." }, { "msg_contents": "Paul McGarry <[email protected]> writes:\n> Would that mean that any update that used an aggregate function\n> would be invalid? That would be a bit scary seeing as I am doing\n> this in part to get around using aggregate functions in a view.\n\nYou'd have to embed the aggregate in a sub-select if we did things\nthat way. I'd rather not have such a restriction, but only if we can\nunderstand clearly what it means to put an aggregate directly into\nUPDATE. The executive summary of what I said before is \"exactly what\nSHOULD this query do, anyway?\" I think it's not well-defined without\nsome additional assumptions.\n\n>> Another way to look at it is that perhaps an UPDATE involving aggregate\n>> functions ought to be implicitly treated as GROUP BY targetTable.ctid.\n\n> What exactly is a ctid?\n\nPhysical location of the tuple, expressed as block# and tuple# within\nthe file. Try \"select ctid,* from sometable\" ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jul 2000 13:12:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] MAX() of 0 records. " }, { "msg_contents": "\"Robert B. 
Easter\" <[email protected]> writes:\n> If you use min(x) or max(x) frequently, isn't it best to make a\n> trigger that intercepts x on insert and update, then check it and\n> store it somewhere rather than scanning for it everytime?\n\nI believe that's exactly what the original questioner is trying to do...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jul 2000 14:12:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Re: [SQL] MAX() of 0 records. " }, { "msg_contents": "I wrote:\n> Comments anyone? What exactly *should* be the behavior of an UPDATE\n> that uses an aggregate function and a join to another table? Over what\n> set of tuples should the aggregate be evaluated?\n\nFurther note on this: SQL99 specifies:\n\n <update statement: searched> ::=\n UPDATE <target table>\n SET <set clause list>\n [ WHERE <search condition> ]\n\n ...\n\n 5) A <value expression> in a <set clause> shall not directly\n contain a <set function specification>.\n\nso the construct is definitely not SQL-compliant. Maybe we should just\nforbid it. However, if you are joining against another table (which\nitself is not an SQL feature) then it seems like there is some potential\nuse in it. What do people think of my implicit-GROUP-BY-ctid idea?\nThat would basically say that the aggregate is computed over all the\ntuples that join to a single target tuple.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jul 2000 14:35:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] MAX() of 0 records. " }, { "msg_contents": "At 14:35 9/07/00 -0400, Tom Lane wrote:\n>\n>so the construct is definitely not SQL-compliant. Maybe we should just\n>forbid it. However, if you are joining against another table (which\n>itself is not an SQL feature) then it seems like there is some potential\n>use in it. 
What do people think of my implicit-GROUP-BY-ctid idea?\n>That would basically say that the aggregate is computed over all the\n>tuples that join to a single target tuple.\n\nSounds perfect to me...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 10 Jul 2000 10:24:30 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] MAX() of 0 records. " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n>> What do people think of my implicit-GROUP-BY-ctid idea?\n>> That would basically say that the aggregate is computed over all the\n>> tuples that join to a single target tuple.\n\n> Sounds perfect to me...\n\nNote that it would not meet your expectation that\n\n update t1 set f2=count(*) from t2 where t1.f1=2 and t2.f1=t1.f1 ;\n\nmeans the same as\n\n update t1 set f2=(Select Count(*) from t2 where t2.f1=t1.f1) where\nt1.f1 = 2\n\n... at least not without some kind of outer-join support too. With\nan inner join, t1 tuples not matching any t2 tuple wouldn't be modified\nat all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jul 2000 21:21:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] MAX() of 0 records. 
" }, { "msg_contents": "At 21:21 9/07/00 -0400, Tom Lane wrote:\n>\n>> Sounds perfect to me...\n>\n>Note that it would not meet your expectation that\n\nThis seems OK; the 'update...from' syntax does also seemingly implies that\nthe rows affected will only be those rows that match the predicate, so your\ninterpretation is probably more in keeping with intuitive expectation.\n\n>\n>... at least not without some kind of outer-join support too. With\n>an inner join, t1 tuples not matching any t2 tuple wouldn't be modified\n>at all.\n\nThis sounds good, but even when OJ come along, I can't see how I would get\nthe same behaviour as:\n\n update t1 set f2=(Select Count(*) from t2 where t2.f1=t1.f1) \n where t1.f1 = 2\n\nsince in an OJ, count(*) will, I think, always be at least 1.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 10 Jul 2000 11:43:40 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] MAX() of 0 records. " } ]
[ { "msg_contents": "Hello,\n\nI am working on Linux RH 6.0 platform.\nI want use PostgreSQL as a backend. 'C' language as a front-end.\nI am not able to connect to each other.\nI am using libpq.\nThe program is :-\n\n/*conn2.c*/\n#include <stdio.h>\n#include \"/usr/include/pgsql/libpq-fe.h\"\n main()\n {\n char *pghost, *pgport, *pgoptions,*pgtty;\n char *dbName;\n PGconn *conn;\n pghost = NULL; /* host name of the backend server */\n pgport = NULL; /* port of the backend server */\n pgoptions = NULL; /* special options to start up the backend\n * server */\n pgtty = NULL; /* debugging tty for the backend server */\n dbName = \"template1\";\n\n /* make a connection to the database */\n conn = PQsetdb(pghost, pgport, pgoptions, pgtty, dbName);\n}\n\nThe compiling is ok, but linking have error.\n$ gcc conn2.c -c -o conn2\nNo error\n\nThe program compile and linking result :-\n*****************\n$ gcc conn2.c -o conn2\n/tmp/cchKU26L.o: In function `main':\n/tmp/cchKU26L.o(.text+0x47): undefined reference to `PQsetdbLogin'\ncollect2: ld returned 1 exit status\n*****************\n\nHow to remove this linking error, or how to make link between PostgreSQL and\n'C'?\nThanks in advance\nAnuj\n\n\n", "msg_date": "Fri, 7 Jul 2000 10:23:38 +0530", "msg_from": "\"anuj\" <[email protected]>", "msg_from_op": true, "msg_subject": "libpq connectivity" } ]
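A note on the build failure above: the first command succeeds only because -c stops after compilation, before linking. The link step then fails because libpq is never named on the command line; PQsetdb is a macro in libpq-fe.h that expands to a call to PQsetdbLogin(), and that symbol lives in the libpq library. A sketch of the corrected link line, where the header and library paths are assumptions based on a stock RH 6.0 install:

```shell
# -lpq pulls in libpq, which defines PQsetdbLogin; add -L if libpq.so
# is not on the default linker search path (paths here are assumed).
gcc conn2.c -o conn2 -I/usr/include/pgsql -L/usr/lib -lpq
```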
[ { "msg_contents": "Before intalling latest version of postgres we changed the namedatalen\nto 64 so that we could accomodate longer object names. After a bit of\ntesting we found out that if an index name is to long the ODBC driver is\nnot able deliver the name of the index to MS Access and therefore ms\naccess is not able to attach to the table.\n\nJust thought you guys might want to know.\n\n-- \nYou can hit reply if you want \"malcontent\" is a legit email.\n", "msg_date": "Fri, 07 Jul 2000 00:19:52 -0600", "msg_from": "Malcontent <[email protected]>", "msg_from_op": true, "msg_subject": "FYI" } ]
[ { "msg_contents": "I've got patches which:\n\n1) Implement session-specific settings, including isolation level and\ntime zone:\n\nSET SESSION CHARACTERISTICS AS\n TRANSACTION ISOLATION LEVEL SERIALIZABLE,\n TIME ZONE 'PST8PDT';\n\nPer SQL99 spec, the command rejects duplicate or conflicting clauses.\nUnder the covers, it uses the \"SET key = value\" feature, adding\nDefaultXactIsoLevel as an allowed keyword.\n\n2) Implement nested comments per SQL99. This involves (small) changes to\npsql and to scan.l to count the depth of a comment delimiter pair, and\nin the case of scan.l to explicitly recognize \"/*\" while inside a\ncomment. This seems to work in my limited testing.\n\n3) Implement SQL99 IN, OUT, INOUT keywords on arguments for function\ndeclarations. The syntax also allows a \"placeholder name\" to be\nspecified. IN is a noop, OUT and INOUT are rejected with an elog(ERROR).\nThis required that NATIONAL be removed from the list of allowed\ncolumn/table names.\n\n\nI'd like to implement an \"autocommit toggle\" feature, which would allow\none to specify that all queries open a transaction, which is closed with\nan explicit COMMIT. This mode is required by SQL9x, and the toggling\nfeature was available, for example, in Ingres.\n\nAny hints on what needs to be touched internally? I've got the parser\nwork done, so just need to tweak the relevant internals. Does someone\nelse want to pick this up??\n\n - Thomas\n", "msg_date": "Fri, 07 Jul 2000 14:53:20 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Patches coming..." } ]
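For concreteness, the first two items in the message above correspond to syntax like the following. The SET SESSION statement is the message's own example; the nested comment is a minimal illustration of the SQL99 depth-counting rule the scan.l change is described as implementing:

```sql
SET SESSION CHARACTERISTICS AS
    TRANSACTION ISOLATION LEVEL SERIALIZABLE,
    TIME ZONE 'PST8PDT';

/* outer comment /* nested comment */ still inside the outer comment */
SELECT 1;
```

With depth counting, the comment only ends at the second closing delimiter, so the trailing SELECT is the first token the parser sees.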
[ { "msg_contents": "Your name\t\t:\tDavid MacKenzie\nYour email address\t:\[email protected]\n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) \t: Intel x86\n\n Operating System (example: Linux 2.0.26 ELF) \t: BSD/OS 4.0.1\n\n PostgreSQL version (example: PostgreSQL-7.0): PostgreSQL-7.0.2\n\n Compiler used (example: gcc 2.8.0)\t\t: gcc version 2.7.2.1\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\n\nUUNET is looking into offering PostgreSQL as a part of a managed web\nhosting product, on both shared and dedicated machines. We currently\noffer Oracle and MySQL, and it would be a nice middle-ground.\nHowever, as shipped, PostgreSQL lacks the following features we need\nthat MySQL has:\n\n1. The ability to listen only on a particular IP address. Each\n hosting customer has their own IP address, on which all of their\n servers (http, ftp, real media, etc.) run.\n2. The ability to place the Unix-domain socket in a mode 700 directory.\n This allows us to automatically create an empty database, with an\n empty DBA password, for new or upgrading customers without having\n to interactively set a DBA password and communicate it to (or from)\n the customer. This in turn cuts down our install and upgrade times.\n3. The ability to connect to the Unix-domain socket from within a\n change-rooted environment. We run CGI programs chrooted to the\n user's home directory, which is another reason why we need to be\n able to specify where the Unix-domain socket is, instead of /tmp.\n4. The ability to, if run as root, open a pid file in /var/run as\n root, and then setuid to the desired user. (mysqld -u can almost\n do this; I had to patch it, too).\n\nThe patch below fixes problem 1-3. I plan to address #4, also, but\nhaven't done so yet. 
These diffs are big enough that they should give\nthe PG development team something to think about in the meantime :-)\nAlso, I'm about to leave for 2 weeks' vacation, so I thought I'd get\nout what I have, which works (for the problems it tackles), now.\n\nWith these changes, we can set up and run PostgreSQL with scripts the\nsame way we can with apache or proftpd or mysql.\n\nIn summary, this patch makes the following enhancements:\n\n1. Adds an environment variable PGUNIXSOCKET, analogous to MYSQL_UNIX_PORT,\n and command line options -k --unix-socket to the relevant programs.\n2. Adds a -h option to postmaster to set the hostname or IP address to\n listen on instead of the default INADDR_ANY.\n3. Extends some library interfaces to support the above.\n4. Fixes a few memory leaks in PQconnectdb().\n\nThe default behavior is unchanged from stock 7.0.2; if you don't use\nany of these new features, they don't change the operation.\n\nIndex: doc/src/sgml/layout.sgml\n*** doc/src/sgml/layout.sgml\t2000/06/30 21:15:36\t1.1\n--- doc/src/sgml/layout.sgml\t2000/07/02 03:56:05\t1.2\n***************\n*** 55,61 ****\n For example, if the database server machine is a remote machine, you\n will need to set the <envar>PGHOST</envar> environment variable to the name\n of the database server machine. The environment variable\n! <envar>PGPORT</envar> may also have to be set. The bottom line is this: if\n you try to start an application program and it complains\n that it cannot connect to the <Application>postmaster</Application>,\n you must go back and make sure that your\n--- 55,62 ----\n For example, if the database server machine is a remote machine, you\n will need to set the <envar>PGHOST</envar> environment variable to the name\n of the database server machine. The environment variable\n! <envar>PGPORT</envar> or <envar>PGUNIXSOCKET</envar> may also have to be set.\n! 
The bottom line is this: if\n you try to start an application program and it complains\n that it cannot connect to the <Application>postmaster</Application>,\n you must go back and make sure that your\nIndex: doc/src/sgml/libpq++.sgml\n*** doc/src/sgml/libpq++.sgml\t2000/06/30 21:15:36\t1.1\n--- doc/src/sgml/libpq++.sgml\t2000/07/02 03:56:05\t1.2\n***************\n*** 93,98 ****\n--- 93,105 ----\n </listitem>\n <listitem>\n <para>\n+ \t<envar>PGUNIXSOCKET</envar> sets the full Unix domain socket\n+ \tfile name for communicating with the <productname>Postgres</productname>\n+ \tbackend.\n+ </para>\n+ </listitem>\n+ <listitem>\n+ <para>\n \t<envar>PGDATABASE</envar> sets the default \n \t<productname>Postgres</productname> database name.\n </para>\nIndex: doc/src/sgml/libpq.sgml\n*** doc/src/sgml/libpq.sgml\t2000/06/30 21:15:36\t1.1\n--- doc/src/sgml/libpq.sgml\t2000/07/02 03:56:05\t1.2\n***************\n*** 134,139 ****\n--- 134,148 ----\n </varlistentry>\n \n <varlistentry>\n+ <term><literal>unixsocket</literal></term>\n+ <listitem>\n+ <para>\n+ Full path to Unix-domain socket file to connect to at the server host.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n+ <varlistentry>\n <term><literal>dbname</literal></term>\n <listitem>\n <para>\n***************\n*** 545,550 ****\n--- 554,569 ----\n \n <listitem>\n <para>\n+ <function>PQunixsocket</function>\n+ Returns the name of the Unix-domain socket of the connection.\n+ <synopsis>\n+ char *PQunixsocket(const PGconn *conn)\n+ </synopsis>\n+ </para>\n+ </listitem>\n+ \n+ <listitem>\n+ <para>\n <function>PQtty</function>\n Returns the debug tty of the connection.\n <synopsis>\n***************\n*** 1772,1777 ****\n--- 1791,1803 ----\n <envar>PGHOST</envar> sets the default server name.\n If a non-zero-length string is specified, TCP/IP communication is used.\n Without a host name, libpq will connect using a local Unix domain socket.\n+ </para>\n+ </listitem>\n+ <listitem>\n+ <para>\n+ <envar>PGPORT</envar> sets the 
default port or local Unix domain socket\n+ file extension for communicating with the <productname>Postgres</productname>\n+ backend.\n </para>\n </listitem>\n <listitem>\nIndex: doc/src/sgml/start.sgml\n*** doc/src/sgml/start.sgml\t2000/06/30 21:15:37\t1.1\n--- doc/src/sgml/start.sgml\t2000/07/02 03:56:05\t1.2\n***************\n*** 110,117 ****\n will need to set the <acronym>PGHOST</acronym> environment\n variable to the name\n of the database server machine. The environment variable\n! <acronym>PGPORT</acronym> may also have to be set. The bottom\n! line is this: if\n you try to start an application program and it complains\n that it cannot connect to the <application>postmaster</application>,\n you should immediately consult your site administrator to make\n--- 110,117 ----\n will need to set the <acronym>PGHOST</acronym> environment\n variable to the name\n of the database server machine. The environment variable\n! <acronym>PGPORT</acronym> or <acronym>PGUNIXSOCKET</acronym> may also have to be set.\n! 
The bottom line is this: if\n you try to start an application program and it complains\n that it cannot connect to the <application>postmaster</application>,\n you should immediately consult your site administrator to make\nIndex: doc/src/sgml/ref/createdb.sgml\n*** doc/src/sgml/ref/createdb.sgml\t2000/06/30 21:15:37\t1.1\n--- doc/src/sgml/ref/createdb.sgml\t2000/07/04 04:46:45\t1.2\n***************\n*** 58,63 ****\n--- 58,75 ----\n </listitem>\n </varlistentry>\n \n+ <varlistentry>\n+ <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n+ <listitem>\n+ <para>\n+ Specifies the Unix-domain socket on which the\n+ <application>postmaster</application> is running.\n+ Without this option, the socket is created in <filename>/tmp</filename>\n+ based on the port number.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n <varlistentry>\n <term>-U, --username <replaceable class=\"parameter\">username</replaceable></term>\n <listitem>\nIndex: doc/src/sgml/ref/createlang.sgml\n*** doc/src/sgml/ref/createlang.sgml\t2000/06/30 21:15:37\t1.1\n--- doc/src/sgml/ref/createlang.sgml\t2000/07/04 04:46:45\t1.2\n***************\n*** 96,101 ****\n--- 96,113 ----\n </listitem>\n </varlistentry>\n \n+ <varlistentry>\n+ <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n+ <listitem>\n+ <para>\n+ Specifies the Unix-domain socket on which the\n+ <application>postmaster</application> is running.\n+ Without this option, the socket is created in <filename>/tmp</filename>\n+ based on the port number.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n <varlistentry>\n <term>-U, --username <replaceable class=\"parameter\">username</replaceable></term>\n <listitem>\nIndex: doc/src/sgml/ref/createuser.sgml\n*** doc/src/sgml/ref/createuser.sgml\t2000/06/30 21:15:37\t1.1\n--- doc/src/sgml/ref/createuser.sgml\t2000/07/04 04:46:45\t1.2\n***************\n*** 59,64 ****\n--- 59,76 ----\n </listitem>\n </varlistentry>\n \n+ <varlistentry>\n+ 
<term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n+ <listitem>\n+ <para>\n+ Specifies the Unix-domain socket on which the\n+ <application>postmaster</application> is running.\n+ Without this option, the socket is created in <filename>/tmp</filename>\n+ based on the port number.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n <varlistentry>\n <term>-e, --echo</term>\n <listitem>\nIndex: doc/src/sgml/ref/dropdb.sgml\n*** doc/src/sgml/ref/dropdb.sgml\t2000/06/30 21:15:38\t1.1\n--- doc/src/sgml/ref/dropdb.sgml\t2000/07/04 04:46:45\t1.2\n***************\n*** 58,63 ****\n--- 58,75 ----\n </listitem>\n </varlistentry>\n \n+ <varlistentry>\n+ <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n+ <listitem>\n+ <para>\n+ Specifies the Unix-domain socket on which the\n+ <application>postmaster</application> is running.\n+ Without this option, the socket is created in <filename>/tmp</filename>\n+ based on the port number.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n <varlistentry>\n <term>-U, --username <replaceable class=\"parameter\">username</replaceable></term>\n <listitem>\nIndex: doc/src/sgml/ref/droplang.sgml\n*** doc/src/sgml/ref/droplang.sgml\t2000/06/30 21:15:38\t1.1\n--- doc/src/sgml/ref/droplang.sgml\t2000/07/04 04:46:45\t1.2\n***************\n*** 96,101 ****\n--- 96,113 ----\n </listitem>\n </varlistentry>\n \n+ <varlistentry>\n+ <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n+ <listitem>\n+ <para>\n+ Specifies the Unix-domain socket on which the\n+ <application>postmaster</application> is running.\n+ Without this option, the socket is created in <filename>/tmp</filename>\n+ based on the port number.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n <varlistentry>\n <term>-U, --username <replaceable class=\"parameter\">username</replaceable></term>\n <listitem>\nIndex: doc/src/sgml/ref/dropuser.sgml\n*** doc/src/sgml/ref/dropuser.sgml\t2000/06/30 
21:15:38\t1.1\n--- doc/src/sgml/ref/dropuser.sgml\t2000/07/04 04:46:45\t1.2\n***************\n*** 58,63 ****\n--- 58,75 ----\n </listitem>\n </varlistentry>\n \n+ <varlistentry>\n+ <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n+ <listitem>\n+ <para>\n+ Specifies the Unix-domain socket on which the\n+ <application>postmaster</application> is running.\n+ Without this option, the socket is created in <filename>/tmp</filename>\n+ based on the port number.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n <varlistentry>\n <term>-e, --echo</term>\n <listitem>\nIndex: doc/src/sgml/ref/pg_dump.sgml\n*** doc/src/sgml/ref/pg_dump.sgml\t2000/06/30 21:15:38\t1.1\n--- doc/src/sgml/ref/pg_dump.sgml\t2000/07/01 18:41:22\t1.2\n***************\n*** 24,30 ****\n </refsynopsisdivinfo>\n <synopsis>\n pg_dump [ <replaceable class=\"parameter\">dbname</replaceable> ]\n! pg_dump [ -h <replaceable class=\"parameter\">host</replaceable> ] [ -p <replaceable class=\"parameter\">port</replaceable> ]\n [ -t <replaceable class=\"parameter\">table</replaceable> ]\n [ -a ] [ -c ] [ -d ] [ -D ] [ -i ] [ -n ] [ -N ]\n [ -o ] [ -s ] [ -u ] [ -v ] [ -x ]\n--- 24,32 ----\n </refsynopsisdivinfo>\n <synopsis>\n pg_dump [ <replaceable class=\"parameter\">dbname</replaceable> ]\n! pg_dump [ -h <replaceable class=\"parameter\">host</replaceable> ]\n! [ -k <replaceable class=\"parameter\">path</replaceable> ]\n! [ -p <replaceable class=\"parameter\">port</replaceable> ]\n [ -t <replaceable class=\"parameter\">table</replaceable> ]\n [ -a ] [ -c ] [ -d ] [ -D ] [ -i ] [ -n ] [ -N ]\n [ -o ] [ -s ] [ -u ] [ -v ] [ -x ]\n***************\n*** 200,205 ****\n--- 202,222 ----\n \t<application>postmaster</application>\n \tis running. 
Defaults to using a local Unix domain socket\n \trather than an IP connection..\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n+ <varlistentry>\n+ <term>-k <replaceable class=\"parameter\">path</replaceable></term>\n+ <listitem>\n+ <para>\n+ \tSpecifies the local Unix domain socket file path\n+ \ton which the <application>postmaster</application>\n+ \tis listening for connections.\n+ Without this option, the socket path name defaults to\n+ the value of the <envar>PGUNIXSOCKET</envar> environment\n+ \tvariable (if set), otherwise it is constructed\n+ from the port number.\n </para>\n </listitem>\n </varlistentry>\nIndex: doc/src/sgml/ref/pg_dumpall.sgml\n*** doc/src/sgml/ref/pg_dumpall.sgml\t2000/06/30 21:15:38\t1.1\n--- doc/src/sgml/ref/pg_dumpall.sgml\t2000/07/01 18:41:22\t1.2\n***************\n*** 24,30 ****\n </refsynopsisdivinfo>\n <synopsis>\n pg_dumpall\n! pg_dumpall [ -h <replaceable class=\"parameter\">host</replaceable> ] [ -p <replaceable class=\"parameter\">port</replaceable> ] [ -a ] [ -d ] [ -D ] [ -O ] [ -s ] [ -u ] [ -v ] [ -x ]\n </synopsis>\n \n <refsect2 id=\"R2-APP-PG-DUMPALL-1\">\n--- 24,33 ----\n </refsynopsisdivinfo>\n <synopsis>\n pg_dumpall\n! pg_dumpall [ -h <replaceable class=\"parameter\">host</replaceable> ]\n! [ -k <replaceable class=\"parameter\">path</replaceable> ]\n! [ -p <replaceable class=\"parameter\">port</replaceable> ]\n! [ -a ] [ -d ] [ -D ] [ -O ] [ -s ] [ -u ] [ -v ] [ -x ]\n </synopsis>\n \n <refsect2 id=\"R2-APP-PG-DUMPALL-1\">\n***************\n*** 137,142 ****\n--- 140,160 ----\n \t<application>postmaster</application>\n \tis running. 
Defaults to using a local Unix domain socket\n \trather than an IP connection..\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n+ <varlistentry>\n+ <term>-k <replaceable class=\"parameter\">path</replaceable></term>\n+ <listitem>\n+ <para>\n+ \tSpecifies the local Unix domain socket file path\n+ \ton which the <application>postmaster</application>\n+ \tis listening for connections.\n+ Without this option, the socket path name defaults to\n+ the value of the <envar>PGUNIXSOCKET</envar> environment\n+ \tvariable (if set), otherwise it is constructed\n+ from the port number.\n </para>\n </listitem>\n </varlistentry>\nIndex: doc/src/sgml/ref/postmaster.sgml\n*** doc/src/sgml/ref/postmaster.sgml\t2000/06/30 21:15:38\t1.1\n--- doc/src/sgml/ref/postmaster.sgml\t2000/07/06 07:48:31\t1.7\n***************\n*** 24,30 ****\n </refsynopsisdivinfo>\n <synopsis>\n postmaster [ -B <replaceable class=\"parameter\">nBuffers</replaceable> ] [ -D <replaceable class=\"parameter\">DataDir</replaceable> ] [ -N <replaceable class=\"parameter\">maxBackends</replaceable> ] [ -S ]\n! [ -d <replaceable class=\"parameter\">DebugLevel</replaceable> ] [ -i ] [ -l ]\n [ -o <replaceable class=\"parameter\">BackendOptions</replaceable> ] [ -p <replaceable class=\"parameter\">port</replaceable> ] [ -n | -s ]\n </synopsis>\n \n--- 24,32 ----\n </refsynopsisdivinfo>\n <synopsis>\n postmaster [ -B <replaceable class=\"parameter\">nBuffers</replaceable> ] [ -D <replaceable class=\"parameter\">DataDir</replaceable> ] [ -N <replaceable class=\"parameter\">maxBackends</replaceable> ] [ -S ]\n! [ -d <replaceable class=\"parameter\">DebugLevel</replaceable> ]\n! [ -h <replaceable class=\"parameter\">hostname</replaceable> ] [ -i ]\n! 
[ -k <replaceable class=\"parameter\">path</replaceable> ] [ -l ]\n [ -o <replaceable class=\"parameter\">BackendOptions</replaceable> ] [ -p <replaceable class=\"parameter\">port</replaceable> ] [ -n | -s ]\n </synopsis>\n \n***************\n*** 124,129 ****\n--- 126,161 ----\n </varlistentry>\n \n <varlistentry>\n+ <term>-h <replaceable class=\"parameter\">hostName</replaceable></term>\n+ <listitem>\n+ <para>\n+ \tSpecifies the TCP/IP hostname or address\n+ \ton which the <application>postmaster</application>\n+ \tis to listen for connections from frontend applications. Defaults to\n+ \tthe value of the \n+ \t<envar>PGHOST</envar> \n+ \tenvironment variable, or if <envar>PGHOST</envar>\n+ \tis not set, then defaults to \"all\", meaning listen on all configured addresses\n+ \t(including localhost).\n+ </para>\n+ <para>\n+ \tIf you use a hostname or address other than \"all\", do not try to run\n+ \tmultiple instances of <application>postmaster</application> on the\n+ \tsame IP address but different ports. Doing so will result in them\n+ \tattempting (incorrectly) to use the same shared memory segments.\n+ \tAlso, if you use a hostname other than \"all\", all of the host's IP addresses\n+ \ton which <application>postmaster</application> instances are\n+ \tlistening must be distinct in the two last octets.\n+ </para>\n+ <para>\n+ \tIf you do use \"all\" (the default), then each instance must listen on a\n+ \tdifferent port (via -p or <envar>PGPORT</envar>). 
And, of course, do\n+ \tnot try to use both approaches on one host.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n+ <varlistentry>\n <term>-i</term>\n <listitem>\n <para>\n***************\n*** 135,140 ****\n--- 167,201 ----\n </varlistentry>\n \n <varlistentry>\n+ <term>-k <replaceable class=\"parameter\">path</replaceable></term>\n+ <listitem>\n+ <para>\n+ \tSpecifies the local Unix domain socket path name\n+ \ton which the <application>postmaster</application>\n+ \tis to listen for connections from frontend applications. Defaults to\n+ \tthe value of the \n+ \t<envar>PGUNIXSOCKET</envar> \n+ \tenvironment variable, or if <envar>PGUNIXSOCKET</envar>\n+ \tis not set, then defaults to a file in <filename>/tmp</filename>\n+ \tconstructed from the port number.\n+ </para>\n+ <para>\n+ You can use this option to put the Unix-domain socket in a\n+ directory that is private to one or more users using Unix\n+ \tdirectory permissions. This is necessary for securely\n+ \tcreating databases automatically on shared machines.\n+ In that situation, also disallow all TCP/IP connections\n+ \tinitially in <filename>pg_hba.conf</filename>.\n+ \tIf you specify a socket path other than the\n+ \tdefault then all frontend applications (including\n+ \t<application>psql</application>) must specify the same\n+ \tsocket path using either command-line options or\n+ \t<envar>PGUNIXSOCKET</envar>.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n+ <varlistentry>\n <term>-l</term>\n <listitem>\n <para>\nIndex: doc/src/sgml/ref/psql-ref.sgml\n*** doc/src/sgml/ref/psql-ref.sgml\t2000/06/30 21:15:38\t1.1\n--- doc/src/sgml/ref/psql-ref.sgml\t2000/07/02 03:56:05\t1.3\n***************\n*** 1329,1334 ****\n--- 1329,1347 ----\n \n \n <varlistentry>\n+ <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n+ <listitem>\n+ <para>\n+ Specifies the Unix-domain socket on which the\n+ <application>postmaster</application> is running.\n+ Without this option, the socket is 
created in <filename>/tmp</filename>\n+ based on the port number.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n+ \n+ <varlistentry>\n <term>-H, --html</term>\n <listitem>\n <para>\nIndex: doc/src/sgml/ref/vacuumdb.sgml\n*** doc/src/sgml/ref/vacuumdb.sgml\t2000/06/30 21:15:38\t1.1\n--- doc/src/sgml/ref/vacuumdb.sgml\t2000/07/04 04:46:45\t1.2\n***************\n*** 24,30 ****\n </refsynopsisdivinfo>\n <synopsis>\n vacuumdb [ <replaceable class=\"parameter\">options</replaceable> ] [ --analyze | -z ]\n! [ --alldb | -a ] [ --verbose | -v ]\n [ --table '<replaceable class=\"parameter\">table</replaceable> [ ( <replaceable class=\"parameter\">column</replaceable> [,...] ) ]' ] [ [-d] <replaceable class=\"parameter\">dbname</replaceable> ]\n </synopsis>\n \n--- 24,30 ----\n </refsynopsisdivinfo>\n <synopsis>\n vacuumdb [ <replaceable class=\"parameter\">options</replaceable> ] [ --analyze | -z ]\n! [ --all | -a ] [ --verbose | -v ]\n [ --table '<replaceable class=\"parameter\">table</replaceable> [ ( <replaceable class=\"parameter\">column</replaceable> [,...] 
) ]' ] [ [-d] <replaceable class=\"parameter\">dbname</replaceable> ]\n </synopsis>\n \n***************\n*** 128,133 ****\n--- 128,145 ----\n </para>\n </listitem>\n </varlistentry>\n+ \n+ <varlistentry>\n+ <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n+ <listitem>\n+ <para>\n+ Specifies the Unix-domain socket on which the\n+ <application>postmaster</application> is running.\n+ Without this option, the socket is created in <filename>/tmp</filename>\n+ based on the port number.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n \n <varlistentry>\n <term>-U <replaceable class=\"parameter\">username</replaceable></term>\nIndex: src/backend/libpq/pqcomm.c\n*** src/backend/libpq/pqcomm.c\t2000/06/30 21:15:40\t1.1\n--- src/backend/libpq/pqcomm.c\t2000/07/01 18:50:46\t1.3\n***************\n*** 42,47 ****\n--- 42,48 ----\n *\t\tStreamConnection\t- Create new connection with client\n *\t\tStreamClose\t\t\t- Close a client/backend connection\n *\t\tpq_getport\t\t- return the PGPORT setting\n+ *\t\tpq_getunixsocket\t- return the PGUNIXSOCKET setting\n *\t\tpq_init\t\t\t- initialize libpq at backend startup\n *\t\tpq_close\t\t- shutdown libpq at backend exit\n *\n***************\n*** 134,139 ****\n--- 135,151 ----\n }\n \n /* --------------------------------\n+ *\t\tpq_getunixsocket - return the PGUNIXSOCKET setting.\n+ *\t\tIf NULL, default to computing it based on the port.\n+ * --------------------------------\n+ */\n+ char *\n+ pq_getunixsocket(void)\n+ {\n+ \treturn getenv(\"PGUNIXSOCKET\");\n+ }\n+ \n+ /* --------------------------------\n *\t\tpq_close - shutdown libpq at backend exit\n *\n * Note: in a standalone backend MyProcPort will be null,\n***************\n*** 177,189 ****\n /*\n * StreamServerPort -- open a sock stream \"listening\" port.\n *\n! * This initializes the Postmaster's connection-accepting port.\n *\n * RETURNS: STATUS_OK or STATUS_ERROR\n */\n \n int\n! 
StreamServerPort(char *hostName, unsigned short portName, int *fdP)\n {\n \tSockAddr\tsaddr;\n \tint\t\t\tfd,\n--- 189,205 ----\n /*\n * StreamServerPort -- open a sock stream \"listening\" port.\n *\n! * This initializes the Postmaster's connection-accepting port fdP.\n! * If hostName is \"any\", listen on all configured IP addresses.\n! * If hostName is NULL, listen on a Unix-domain socket instead of TCP;\n! * if unixSocketName is NULL, a default path (constructed in UNIX_SOCK_PATH\n! * in include/libpq/pqcomm.h) based on portName is used.\n *\n * RETURNS: STATUS_OK or STATUS_ERROR\n */\n \n int\n! StreamServerPort(char *hostName, unsigned short portNumber, char *unixSocketName, int *fdP)\n {\n \tSockAddr\tsaddr;\n \tint\t\t\tfd,\n***************\n*** 227,233 ****\n \tsaddr.sa.sa_family = family;\n \tif (family == AF_UNIX)\n \t{\n! \t\tlen = UNIXSOCK_PATH(saddr.un, portName);\n \t\tstrcpy(sock_path, saddr.un.sun_path);\n \n \t\t/*\n--- 243,250 ----\n \tsaddr.sa.sa_family = family;\n \tif (family == AF_UNIX)\n \t{\n! \t\tUNIXSOCK_PATH(saddr.un, portNumber, unixSocketName);\n! \t\tlen = UNIXSOCK_LEN(saddr.un);\n \t\tstrcpy(sock_path, saddr.un.sun_path);\n \n \t\t/*\n***************\n*** 259,267 ****\n \t}\n \telse\n \t{\n! \t\tsaddr.in.sin_addr.s_addr = htonl(INADDR_ANY);\n! \t\tsaddr.in.sin_port = htons(portName);\n! \t\tlen = sizeof(struct sockaddr_in);\n \t}\n \terr = bind(fd, &saddr.sa, len);\n \tif (err < 0)\n--- 276,305 ----\n \t}\n \telse\n \t{\n! \t /* TCP/IP socket */\n! \t if (!strcmp(hostName, \"all\")) /* like for databases in pg_hba.conf. */\n! \t saddr.in.sin_addr.s_addr = htonl(INADDR_ANY);\n! \t else\n! \t {\n! \t struct hostent *hp;\n! \n! \t hp = gethostbyname(hostName);\n! \t if ((hp == NULL) || (hp->h_addrtype != AF_INET))\n! \t\t{\n! \t\t snprintf(PQerrormsg, PQERRORMSG_LENGTH,\n! \t\t\t \"FATAL: StreamServerPort: gethostbyname(%s) failed: %s\\n\",\n! \t\t\t hostName, hstrerror(h_errno));\n! \t\t fputs(PQerrormsg, stderr);\n! 
\t\t pqdebug(\"%s\", PQerrormsg);\n! \t\t return STATUS_ERROR;\n! \t\t}\n! \t memmove((char *) &(saddr.in.sin_addr),\n! \t\t (char *) hp->h_addr,\n! \t\t hp->h_length);\n! \t }\n! \n! \t saddr.in.sin_port = htons(portNumber);\n! \t len = sizeof(struct sockaddr_in);\n \t}\n \terr = bind(fd, &saddr.sa, len);\n \tif (err < 0)\nIndex: src/backend/postmaster/postmaster.c\n*** src/backend/postmaster/postmaster.c\t2000/06/30 21:15:42\t1.1\n--- src/backend/postmaster/postmaster.c\t2000/07/06 07:38:21\t1.5\n***************\n*** 136,143 ****\n /* list of ports associated with still open, but incomplete connections */\n static Dllist *PortList;\n \n! static unsigned short PostPortName = 0;\n \n /*\n * This is a boolean indicating that there is at least one backend that\n * is accessing the current shared memory and semaphores. Between the\n--- 136,150 ----\n /* list of ports associated with still open, but incomplete connections */\n static Dllist *PortList;\n \n! /* Hostname of interface to listen on, or 'any'. */\n! static char *HostName = NULL;\n \n+ /* TCP/IP port number to listen on. Also used to default the Unix-domain socket name. */\n+ static unsigned short PostPortNumber = 0;\n+ \n+ /* Override of the default Unix-domain socket name to listen on, if non-NULL. */\n+ static char *UnixSocketName = NULL;\n+ \n /*\n * This is a boolean indicating that there is at least one backend that\n * is accessing the current shared memory and semaphores. Between the\n***************\n*** 274,280 ****\n static void SignalChildren(SIGNAL_ARGS);\n static int\tCountChildren(void);\n static int\n! SetOptsFile(char *progname, int port, char *datadir,\n \t\t\tint assert, int nbuf, char *execfile,\n \t\t\tint debuglvl, int netserver,\n #ifdef USE_SSL\n--- 281,287 ----\n static void SignalChildren(SIGNAL_ARGS);\n static int\tCountChildren(void);\n static int\n! 
SetOptsFile(char *progname, char *hostname, int port, char *unixsocket, char *datadir,\n \t\t\tint assert, int nbuf, char *execfile,\n \t\t\tint debuglvl, int netserver,\n #ifdef USE_SSL\n***************\n*** 370,380 ****\n {\n \textern int\tNBuffers;\t\t/* from buffer/bufmgr.c */\n \tint\t\t\topt;\n- \tchar\t *hostName;\n \tint\t\t\tstatus;\n \tint\t\t\tsilentflag = 0;\n \tbool\t\tDataDirOK;\t\t/* We have a usable PGDATA value */\n- \tchar\t\thostbuf[MAXHOSTNAMELEN];\n \tint\t\t\tnonblank_argc;\n \tchar\t\toriginal_extraoptions[MAXPGPATH];\n \n--- 377,385 ----\n***************\n*** 431,449 ****\n \t */\n \tumask((mode_t) 0077);\n \n- \tif (!(hostName = getenv(\"PGHOST\")))\n- \t{\n- \t\tif (gethostname(hostbuf, MAXHOSTNAMELEN) < 0)\n- \t\t\tstrcpy(hostbuf, \"localhost\");\n- \t\thostName = hostbuf;\n- \t}\n- \n \tMyProcPid = getpid();\n \tDataDir = getenv(\"PGDATA\"); /* default value */\n \n \topterr = 0;\n \tIgnoreSystemIndexes(false);\n! \twhile ((opt = getopt(nonblank_argc, argv, \"A:a:B:b:D:d:ilm:MN:no:p:Ss\")) != EOF)\n \t{\n \t\tswitch (opt)\n \t\t{\n--- 436,447 ----\n \t */\n \tumask((mode_t) 0077);\n \n \tMyProcPid = getpid();\n \tDataDir = getenv(\"PGDATA\"); /* default value */\n \n \topterr = 0;\n \tIgnoreSystemIndexes(false);\n! \twhile ((opt = getopt(nonblank_argc, argv, \"A:a:B:b:D:d:h:ik:lm:MN:no:p:Ss\")) != EOF)\n \t{\n \t\tswitch (opt)\n \t\t{\n***************\n*** 498,506 ****\n--- 496,511 ----\n \t\t\t\tDebugLvl = atoi(optarg);\n \t\t\t\tpg_options[TRACE_VERBOSE] = DebugLvl;\n \t\t\t\tbreak;\n+ \t\t\tcase 'h':\n+ \t\t\t\tHostName = optarg;\n+ \t\t\t\tbreak;\n \t\t\tcase 'i':\n \t\t\t\tNetServer = true;\n \t\t\t\tbreak;\n+ \t\t\tcase 'k':\n+ \t\t\t\t/* Set PGUNIXSOCKET by hand. */\n+ \t\t\t\tUnixSocketName = optarg;\n+ \t\t\t\tbreak;\n #ifdef USE_SSL\n \t\t\tcase 'l':\n \t\t\t\tSecureNetServer = true;\n***************\n*** 545,551 ****\n \t\t\t\tbreak;\n \t\t\tcase 'p':\n \t\t\t\t/* Set PGPORT by hand. */\n! 
\t\t\t\tPostPortName = (unsigned short) atoi(optarg);\n \t\t\t\tbreak;\n \t\t\tcase 'S':\n \n--- 550,556 ----\n \t\t\t\tbreak;\n \t\t\tcase 'p':\n \t\t\t\t/* Set PGPORT by hand. */\n! \t\t\t\tPostPortNumber = (unsigned short) atoi(optarg);\n \t\t\t\tbreak;\n \t\t\tcase 'S':\n \n***************\n*** 577,584 ****\n \t/*\n \t * Select default values for switches where needed\n \t */\n! \tif (PostPortName == 0)\n! \t\tPostPortName = (unsigned short) pq_getport();\n \n \t/*\n \t * Check for invalid combinations of switches\n--- 582,603 ----\n \t/*\n \t * Select default values for switches where needed\n \t */\n! \tif (HostName == NULL)\n! \t{\n! \t\tif (!(HostName = getenv(\"PGHOST\")))\n! \t\t{\n! \t\t\tHostName = \"any\";\n! \t\t}\n! \t}\n! \telse if (!NetServer)\n! \t{\n! \t\tfprintf(stderr, \"%s: -h requires -i.\\n\", progname);\n! \t\texit(1);\n! \t}\n! \tif (PostPortNumber == 0)\n! \t\tPostPortNumber = (unsigned short) pq_getport();\n! \tif (UnixSocketName == NULL)\n! \t\tUnixSocketName = pq_getunixsocket();\n \n \t/*\n \t * Check for invalid combinations of switches\n***************\n*** 622,628 ****\n \n \tif (NetServer)\n \t{\n! \t\tstatus = StreamServerPort(hostName, PostPortName, &ServerSock_INET);\n \t\tif (status != STATUS_OK)\n \t\t{\n \t\t\tfprintf(stderr, \"%s: cannot create INET stream port\\n\",\n--- 641,647 ----\n \n \tif (NetServer)\n \t{\n! \t\tstatus = StreamServerPort(HostName, PostPortNumber, NULL, &ServerSock_INET);\n \t\tif (status != STATUS_OK)\n \t\t{\n \t\t\tfprintf(stderr, \"%s: cannot create INET stream port\\n\",\n***************\n*** 632,638 ****\n \t}\n \n #if !defined(__CYGWIN32__) && !defined(__QNX__)\n! \tstatus = StreamServerPort(NULL, PostPortName, &ServerSock_UNIX);\n \tif (status != STATUS_OK)\n \t{\n \t\tfprintf(stderr, \"%s: cannot create UNIX stream port\\n\",\n--- 651,657 ----\n \t}\n \n #if !defined(__CYGWIN32__) && !defined(__QNX__)\n! 
\tstatus = StreamServerPort(NULL, PostPortNumber, UnixSocketName, &ServerSock_UNIX);\n \tif (status != STATUS_OK)\n \t{\n \t\tfprintf(stderr, \"%s: cannot create UNIX stream port\\n\",\n***************\n*** 642,648 ****\n #endif\n \t/* set up shared memory and semaphores */\n \tEnableMemoryContext(TRUE);\n! \treset_shared(PostPortName);\n \n \t/*\n \t * Initialize the list of active backends.\tThis list is only used for\n--- 661,667 ----\n #endif\n \t/* set up shared memory and semaphores */\n \tEnableMemoryContext(TRUE);\n! \treset_shared(PostPortNumber);\n \n \t/*\n \t * Initialize the list of active backends.\tThis list is only used for\n***************\n*** 664,670 ****\n \t\t{\n \t\t\tif (SetOptsFile(\n \t\t\t\t\t\t\tprogname,\t/* postmaster executable file */\n! \t\t\t\t\t\t\tPostPortName,\t\t/* port number */\n \t\t\t\t\t\t\tDataDir,\t/* PGDATA */\n \t\t\t\t\t\t\tassert_enabled,\t\t/* whether -A is specified\n \t\t\t\t\t\t\t\t\t\t\t\t * or not */\n--- 683,691 ----\n \t\t{\n \t\t\tif (SetOptsFile(\n \t\t\t\t\t\t\tprogname,\t/* postmaster executable file */\n! \t\t\t\t\t\t\tHostName, /* IP address to bind to */\n! \t\t\t\t\t\t\tPostPortNumber,\t\t/* port number */\n! \t\t\t\t\t\t\tUnixSocketName,\t/* PGUNIXSOCKET */\n \t\t\t\t\t\t\tDataDir,\t/* PGDATA */\n \t\t\t\t\t\t\tassert_enabled,\t\t/* whether -A is specified\n \t\t\t\t\t\t\t\t\t\t\t\t * or not */\n***************\n*** 753,759 ****\n \t\t{\n \t\t\tif (SetOptsFile(\n \t\t\t\t\t\t\tprogname,\t/* postmaster executable file */\n! \t\t\t\t\t\t\tPostPortName,\t\t/* port number */\n \t\t\t\t\t\t\tDataDir,\t/* PGDATA */\n \t\t\t\t\t\t\tassert_enabled,\t\t/* whether -A is specified\n \t\t\t\t\t\t\t\t\t\t\t\t * or not */\n--- 774,782 ----\n \t\t{\n \t\t\tif (SetOptsFile(\n \t\t\t\t\t\t\tprogname,\t/* postmaster executable file */\n! \t\t\t\t\t\t\tHostName, /* IP address to bind to */\n! \t\t\t\t\t\t\tPostPortNumber,\t\t/* port number */\n! 
\t\t\t\t\t\t\tUnixSocketName,\t/* PGUNIXSOCKET */\n \t\t\t\t\t\t\tDataDir,\t/* PGDATA */\n \t\t\t\t\t\t\tassert_enabled,\t\t/* whether -A is specified\n \t\t\t\t\t\t\t\t\t\t\t\t * or not */\n***************\n*** 837,843 ****\n--- 860,868 ----\n \tfprintf(stderr, \"\\t-a system\\tuse this authentication system\\n\");\n \tfprintf(stderr, \"\\t-b backend\\tuse a specific backend server executable\\n\");\n \tfprintf(stderr, \"\\t-d [1-5]\\tset debugging level\\n\");\n+ \tfprintf(stderr, \"\\t-h hostname\\tspecify hostname or IP address or 'any' for postmaster to listen on (also use -i)\\n\");\n \tfprintf(stderr, \"\\t-i \\t\\tlisten on TCP/IP sockets as well as Unix domain socket\\n\");\n+ \tfprintf(stderr, \"\\t-k path\\tspecify Unix-domain socket name for postmaster to listen on\\n\");\n #ifdef USE_SSL\n \tfprintf(stderr, \" \\t-l \\t\\tfor TCP/IP sockets, listen only on SSL connections\\n\");\n #endif\n***************\n*** 1318,1328 ****\n--- 1343,1417 ----\n }\n \n /*\n+ * get_host_port -- return a pseudo port number (16 bits)\n+ * derived from the primary IP address of HostName.\n+ */\n+ static unsigned short\n+ get_host_port(void)\n+ {\n+ \tstatic unsigned short hostPort = 0;\n+ \n+ \tif (hostPort == 0)\n+ \t{\n+ \t\tSockAddr\tsaddr;\n+ \t\tstruct hostent *hp;\n+ \n+ \t\thp = gethostbyname(HostName);\n+ \t\tif ((hp == NULL) || (hp->h_addrtype != AF_INET))\n+ \t\t{\n+ \t\t\tchar msg[1024];\n+ \t\t\tsnprintf(msg, sizeof(msg),\n+ \t\t\t\t \"FATAL: get_host_port: gethostbyname(%s) failed: %s\\n\",\n+ \t\t\t\t HostName, hstrerror(h_errno));\n+ \t\t\tfputs(msg, stderr);\n+ \t\t\tpqdebug(\"%s\", msg);\n+ \t\t\texit(1);\n+ \t\t}\n+ \t\tmemmove((char *) &(saddr.in.sin_addr),\n+ \t\t\t(char *) hp->h_addr,\n+ \t\t\thp->h_length);\n+ \t\thostPort = ntohl(saddr.in.sin_addr.s_addr) & 0xFFFF;\n+ \t}\n+ \n+ \treturn hostPort;\n+ }\n+ \n+ /*\n * reset_shared -- reset shared memory and semaphores\n */\n static void\n reset_shared(unsigned short port)\n {\n+ \t/*\n+ \t * A typical 
ipc_key is 5432001, which is port 5432, sequence\n+ \t * number 0, and 01 as the index in IPCKeyGetBufferMemoryKey().\n+ \t * The 32-bit INT_MAX is 2147483647.\n+ \t *\n+ \t * The default algorithm for calculating the IPC keys assumes that all\n+ \t * instances of postmaster on a given host are listening on different\n+ \t * ports. In order to work (prevent shared memory collisions) if you\n+ \t * run multiple PostgreSQL instances on the same port and different IP\n+ \t * addresses on a host, we change the algorithm if you give postmaster\n+ \t * the -h option, or set PGHOST, to a value other than the internal\n+ \t * default of \"any\".\n+ \t *\n+ \t * If HostName is not \"any\", then we generate the IPC keys using the\n+ \t * last two octets of the IP address instead of the port number.\n+ \t * This algorithm assumes that no one will run multiple PostgreSQL\n+ \t * instances on one host using two IP addresses that have the same two\n+ \t * last octets in different class C networks. If anyone does, it\n+ \t * would be rare.\n+ \t *\n+ \t * So, if you use -h or PGHOST, don't try to run two instances of\n+ \t * PostgreSQL on the same IP address but different ports. If you\n+ \t * don't use them, then you must use different ports (via -p or\n+ \t * PGPORT). And, of course, don't try to use both approaches on one\n+ \t * host.\n+ \t */\n+ \n+ \tif (strcmp(HostName, \"any\"))\n+ \t\tport = get_host_port();\n+ \n \tipc_key = port * 1000 + shmem_seq * 100;\n \tCreateSharedMemoryAndSemaphores(ipc_key, MaxBackends);\n \tshmem_seq += 1;\n***************\n*** 1540,1546 ****\n \t\t\t\tctime(&tnow));\n \t\tfflush(stderr);\n \t\tshmem_exit(0);\n! \t\treset_shared(PostPortName);\n \t\tStartupPID = StartupDataBase();\n \t\treturn;\n \t}\n--- 1629,1635 ----\n \t\t\t\tctime(&tnow));\n \t\tfflush(stderr);\n \t\tshmem_exit(0);\n! 
\t\treset_shared(PostPortNumber);\n \t\tStartupPID = StartupDataBase();\n \t\treturn;\n \t}\n***************\n*** 1720,1726 ****\n \t * Set up the necessary environment variables for the backend This\n \t * should really be some sort of message....\n \t */\n! \tsprintf(envEntry[0], \"POSTPORT=%d\", PostPortName);\n \tputenv(envEntry[0]);\n \tsprintf(envEntry[1], \"POSTID=%d\", NextBackendTag);\n \tputenv(envEntry[1]);\n--- 1809,1815 ----\n \t * Set up the necessary environment variables for the backend This\n \t * should really be some sort of message....\n \t */\n! \tsprintf(envEntry[0], \"POSTPORT=%d\", PostPortNumber);\n \tputenv(envEntry[0]);\n \tsprintf(envEntry[1], \"POSTID=%d\", NextBackendTag);\n \tputenv(envEntry[1]);\n***************\n*** 2174,2180 ****\n \tfor (i = 0; i < 4; ++i)\n \t\tMemSet(ssEntry[i], 0, 2 * ARGV_SIZE);\n \n! \tsprintf(ssEntry[0], \"POSTPORT=%d\", PostPortName);\n \tputenv(ssEntry[0]);\n \tsprintf(ssEntry[1], \"POSTID=%d\", NextBackendTag);\n \tputenv(ssEntry[1]);\n--- 2263,2269 ----\n \tfor (i = 0; i < 4; ++i)\n \t\tMemSet(ssEntry[i], 0, 2 * ARGV_SIZE);\n \n! \tsprintf(ssEntry[0], \"POSTPORT=%d\", PostPortNumber);\n \tputenv(ssEntry[0]);\n \tsprintf(ssEntry[1], \"POSTID=%d\", NextBackendTag);\n \tputenv(ssEntry[1]);\n***************\n*** 2254,2260 ****\n * Create the opts file\n */\n static int\n! SetOptsFile(char *progname, int port, char *datadir,\n \t\t\tint assert, int nbuf, char *execfile,\n \t\t\tint debuglvl, int netserver,\n #ifdef USE_SSL\n--- 2343,2349 ----\n * Create the opts file\n */\n static int\n! 
SetOptsFile(char *progname, char *hostname, int port, char *unixsocket, char *datadir,\n \t\t\tint assert, int nbuf, char *execfile,\n \t\t\tint debuglvl, int netserver,\n #ifdef USE_SSL\n***************\n*** 2279,2284 ****\n--- 2368,2383 ----\n \t\treturn (-1);\n \t}\n \tsnprintf(opts, sizeof(opts), \"%s\\n-p %d\\n-D %s\\n\", progname, port, datadir);\n+ \tif (netserver)\n+ \t{\n+ \t\tsprintf(buf, \"-h %s\\n\", hostname);\n+ \t\tstrcat(opts, buf);\n+ \t}\n+ \tif (unixsocket)\n+ \t{\n+ \t\tsprintf(buf, \"-k %s\\n\", unixsocket);\n+ \t\tstrcat(opts, buf);\n+ \t}\n \tif (assert)\n \t{\n \t\tsprintf(buf, \"-A %d\\n\", assert);\nIndex: src/bin/pg_dump/pg_dump.c\n*** src/bin/pg_dump/pg_dump.c\t2000/06/30 21:15:44\t1.1\n--- src/bin/pg_dump/pg_dump.c\t2000/07/01 18:41:22\t1.2\n***************\n*** 140,145 ****\n--- 140,146 ----\n \t\t \" -D, --attribute-inserts dump data as INSERT commands with attribute names\\n\"\n \t\t \" -h, --host <hostname> server host name\\n\"\n \t\t \" -i, --ignore-version proceed when database version != pg_dump version\\n\"\n+ \t\t \" -k, --unixsocket <path> server Unix-domain socket name\\n\"\n \t\" -n, --no-quotes suppress most quotes around identifiers\\n\"\n \t \" -N, --quotes enable most quotes around identifiers\\n\"\n \t\t \" -o, --oids dump object ids (oids)\\n\"\n***************\n*** 158,163 ****\n--- 159,165 ----\n \t\t \" -D dump data as INSERT commands with attribute names\\n\"\n \t\t \" -h <hostname> server host name\\n\"\n \t\t \" -i proceed when database version != pg_dump version\\n\"\n+ \t\t \" -k <path> server Unix-domain socket name\\n\"\n \t\" -n suppress most quotes around identifiers\\n\"\n \t \" -N enable most quotes around identifiers\\n\"\n \t\t \" -o dump object ids (oids)\\n\"\n***************\n*** 579,584 ****\n--- 581,587 ----\n \tconst char *dbname = NULL;\n \tconst char *pghost = NULL;\n \tconst char *pgport = NULL;\n+ \tconst char *pgunixsocket = NULL;\n \tchar\t *tablename = NULL;\n \tbool\t\toids = false;\n 
\tTableInfo *tblinfo;\n***************\n*** 598,603 ****\n--- 601,607 ----\n \t\t{\"attribute-inserts\", no_argument, NULL, 'D'},\n \t\t{\"host\", required_argument, NULL, 'h'},\n \t\t{\"ignore-version\", no_argument, NULL, 'i'},\n+ \t\t{\"unixsocket\", required_argument, NULL, 'k'},\n \t\t{\"no-quotes\", no_argument, NULL, 'n'},\n \t\t{\"quotes\", no_argument, NULL, 'N'},\n \t\t{\"oids\", no_argument, NULL, 'o'},\n***************\n*** 662,667 ****\n--- 666,674 ----\n \t\t\tcase 'i':\t\t\t/* ignore database version mismatch */\n \t\t\t\tignore_version = true;\n \t\t\t\tbreak;\n+ \t\t\tcase 'k':\t\t\t/* server Unix-domain socket */\n+ \t\t\t\tpgunixsocket = optarg;\n+ \t\t\t\tbreak;\n \t\t\tcase 'n':\t\t\t/* Do not force double-quotes on\n \t\t\t\t\t\t\t\t * identifiers */\n \t\t\t\tforce_quotes = false;\n***************\n*** 782,788 ****\n \t\texit(1);\n \t}\n \n- \t/* g_conn = PQsetdb(pghost, pgport, NULL, NULL, dbname); */\n \tif (pghost != NULL)\n \t{\n \t\tsprintf(tmp_string, \"host=%s \", pghost);\n--- 789,794 ----\n***************\n*** 791,796 ****\n--- 797,807 ----\n \tif (pgport != NULL)\n \t{\n \t\tsprintf(tmp_string, \"port=%s \", pgport);\n+ \t\tstrcat(connect_string, tmp_string);\n+ \t}\n+ \tif (pgunixsocket != NULL)\n+ \t{\n+ \t\tsprintf(tmp_string, \"unixsocket=%s \", pgunixsocket);\n \t\tstrcat(connect_string, tmp_string);\n \t}\n \tif (dbname != NULL)\nIndex: src/bin/psql/command.c\n*** src/bin/psql/command.c\t2000/06/30 21:15:46\t1.1\n--- src/bin/psql/command.c\t2000/07/01 18:20:40\t1.2\n***************\n*** 1199,1204 ****\n--- 1199,1205 ----\n \tSetVariable(pset.vars, \"USER\", NULL);\n \tSetVariable(pset.vars, \"HOST\", NULL);\n \tSetVariable(pset.vars, \"PORT\", NULL);\n+ \tSetVariable(pset.vars, \"UNIXSOCKET\", NULL);\n \tSetVariable(pset.vars, \"ENCODING\", NULL);\n \n \t/* If dbname is \"\" then use old name, else new one (even if NULL) */\n***************\n*** 1228,1233 ****\n--- 1229,1235 ----\n \tdo\n \t{\n \t\tneed_pass = false;\n+ \t\t/* 
FIXME use PQconnectdb to support passing the Unix socket */\n \t\tpset.db = PQsetdbLogin(PQhost(oldconn), PQport(oldconn),\n \t\t\t\t\t\t\t NULL, NULL, dbparam, userparam, pwparam);\n \n***************\n*** 1303,1308 ****\n--- 1305,1311 ----\n \tSetVariable(pset.vars, \"USER\", PQuser(pset.db));\n \tSetVariable(pset.vars, \"HOST\", PQhost(pset.db));\n \tSetVariable(pset.vars, \"PORT\", PQport(pset.db));\n+ \tSetVariable(pset.vars, \"UNIXSOCKET\", PQunixsocket(pset.db));\n \tSetVariable(pset.vars, \"ENCODING\", pg_encoding_to_char(pset.encoding));\n \n \tpset.issuper = test_superuser(PQuser(pset.db));\nIndex: src/bin/psql/command.h\nIndex: src/bin/psql/common.c\n*** src/bin/psql/common.c\t2000/06/30 21:15:46\t1.1\n--- src/bin/psql/common.c\t2000/07/01 18:20:40\t1.2\n***************\n*** 330,335 ****\n--- 330,336 ----\n \t\t\tSetVariable(pset.vars, \"DBNAME\", NULL);\n \t\t\tSetVariable(pset.vars, \"HOST\", NULL);\n \t\t\tSetVariable(pset.vars, \"PORT\", NULL);\n+ \t\t\tSetVariable(pset.vars, \"UNIXSOCKET\", NULL);\n \t\t\tSetVariable(pset.vars, \"USER\", NULL);\n \t\t\tSetVariable(pset.vars, \"ENCODING\", NULL);\n \t\t\treturn NULL;\n***************\n*** 509,514 ****\n--- 510,516 ----\n \t\t\t\tSetVariable(pset.vars, \"DBNAME\", NULL);\n \t\t\t\tSetVariable(pset.vars, \"HOST\", NULL);\n \t\t\t\tSetVariable(pset.vars, \"PORT\", NULL);\n+ \t\t\t\tSetVariable(pset.vars, \"UNIXSOCKET\", NULL);\n \t\t\t\tSetVariable(pset.vars, \"USER\", NULL);\n \t\t\t\tSetVariable(pset.vars, \"ENCODING\", NULL);\n \t\t\t\treturn false;\nIndex: src/bin/psql/help.c\n*** src/bin/psql/help.c\t2000/06/30 21:15:46\t1.1\n--- src/bin/psql/help.c\t2000/07/01 18:20:40\t1.2\n***************\n*** 103,108 ****\n--- 103,118 ----\n \tputs(\")\");\n \n \tputs(\" -H HTML table output mode (-P format=html)\");\n+ \n+ \t/* Display default Unix-domain socket */\n+ \tenv = getenv(\"PGUNIXSOCKET\");\n+ \tprintf(\" -k <path> Specify Unix domain socket name (default: \");\n+ \tif (env)\n+ \t\tfputs(env, 
stdout);\n+ \telse\n+ \t\tfputs(\"computed from the port\", stdout);\n+ \tputs(\")\");\n+ \n \tputs(\" -l List available databases, then exit\");\n \tputs(\" -n Disable readline\");\n \tputs(\" -o <filename> Send query output to filename (or |pipe)\");\nIndex: src/bin/psql/prompt.c\n*** src/bin/psql/prompt.c\t2000/06/30 21:15:46\t1.1\n--- src/bin/psql/prompt.c\t2000/07/01 18:20:40\t1.2\n***************\n*** 189,194 ****\n--- 189,199 ----\n \t\t\t\t\tif (pset.db && PQport(pset.db))\n \t\t\t\t\t\tstrncpy(buf, PQport(pset.db), MAX_PROMPT_SIZE);\n \t\t\t\t\tbreak;\n+ \t\t\t\t\t/* DB server Unix-domain socket */\n+ \t\t\t\tcase '<':\n+ \t\t\t\t\tif (pset.db && PQunixsocket(pset.db))\n+ \t\t\t\t\t\tstrncpy(buf, PQunixsocket(pset.db), MAX_PROMPT_SIZE);\n+ \t\t\t\t\tbreak;\n \t\t\t\t\t/* DB server user name */\n \t\t\t\tcase 'n':\n \t\t\t\t\tif (pset.db)\nIndex: src/bin/psql/prompt.h\nIndex: src/bin/psql/settings.h\nIndex: src/bin/psql/startup.c\n*** src/bin/psql/startup.c\t2000/06/30 21:15:46\t1.1\n--- src/bin/psql/startup.c\t2000/07/01 18:20:40\t1.2\n***************\n*** 66,71 ****\n--- 66,72 ----\n \tchar\t *dbname;\n \tchar\t *host;\n \tchar\t *port;\n+ \tchar\t *unixsocket;\n \tchar\t *username;\n \tenum _actions action;\n \tchar\t *action_string;\n***************\n*** 158,163 ****\n--- 159,165 ----\n \tdo\n \t{\n \t\tneed_pass = false;\n+ \t\t/* FIXME use PQconnectdb to allow setting the unix socket */\n \t\tpset.db = PQsetdbLogin(options.host, options.port, NULL, NULL,\n \t\t\toptions.action == ACT_LIST_DB ? 
\"template1\" : options.dbname,\n \t\t\t\t\t\t\t username, password);\n***************\n*** 202,207 ****\n--- 204,210 ----\n \tSetVariable(pset.vars, \"USER\", PQuser(pset.db));\n \tSetVariable(pset.vars, \"HOST\", PQhost(pset.db));\n \tSetVariable(pset.vars, \"PORT\", PQport(pset.db));\n+ \tSetVariable(pset.vars, \"UNIXSOCKET\", PQunixsocket(pset.db));\n \tSetVariable(pset.vars, \"ENCODING\", pg_encoding_to_char(pset.encoding));\n \n #ifndef WIN32\n***************\n*** 313,318 ****\n--- 316,322 ----\n \t\t{\"field-separator\", required_argument, NULL, 'F'},\n \t\t{\"host\", required_argument, NULL, 'h'},\n \t\t{\"html\", no_argument, NULL, 'H'},\n+ \t\t{\"unixsocket\", required_argument, NULL, 'k'},\n \t\t{\"list\", no_argument, NULL, 'l'},\n \t\t{\"no-readline\", no_argument, NULL, 'n'},\n \t\t{\"output\", required_argument, NULL, 'o'},\n***************\n*** 346,359 ****\n \tmemset(options, 0, sizeof *options);\n \n #ifdef HAVE_GETOPT_LONG\n! \twhile ((c = getopt_long(argc, argv, \"aAc:d:eEf:F:lh:Hno:p:P:qRsStT:uU:v:VWxX?\", long_options, &optindex)) != -1)\n #else\t\t\t\t\t\t\t/* not HAVE_GETOPT_LONG */\n \n \t/*\n \t * Be sure to leave the '-' in here, so we can catch accidental long\n \t * options.\n \t */\n! \twhile ((c = getopt(argc, argv, \"aAc:d:eEf:F:lh:Hno:p:P:qRsStT:uU:v:VWxX?-\")) != -1)\n #endif\t /* not HAVE_GETOPT_LONG */\n \t{\n \t\tswitch (c)\n--- 350,363 ----\n \tmemset(options, 0, sizeof *options);\n \n #ifdef HAVE_GETOPT_LONG\n! \twhile ((c = getopt_long(argc, argv, \"aAc:d:eEf:F:lh:Hk:no:p:P:qRsStT:uU:v:VWxX?\", long_options, &optindex)) != -1)\n #else\t\t\t\t\t\t\t/* not HAVE_GETOPT_LONG */\n \n \t/*\n \t * Be sure to leave the '-' in here, so we can catch accidental long\n \t * options.\n \t */\n! 
\twhile ((c = getopt(argc, argv, \"aAc:d:eEf:F:lh:Hk:no:p:P:qRsStT:uU:v:VWxX?-\")) != -1)\n #endif\t /* not HAVE_GETOPT_LONG */\n \t{\n \t\tswitch (c)\n***************\n*** 398,403 ****\n--- 402,410 ----\n \t\t\t\tbreak;\n \t\t\tcase 'l':\n \t\t\t\toptions->action = ACT_LIST_DB;\n+ \t\t\t\tbreak;\n+ \t\t\tcase 'k':\n+ \t\t\t\toptions->unixsocket = optarg;\n \t\t\t\tbreak;\n \t\t\tcase 'n':\n \t\t\t\toptions->no_readline = true;\nIndex: src/bin/scripts/createdb\n*** src/bin/scripts/createdb\t2000/06/30 21:15:46\t1.1\n--- src/bin/scripts/createdb\t2000/07/04 04:46:45\t1.2\n***************\n*** 50,55 ****\n--- 50,64 ----\n --port=*)\n PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n ;;\n+ \t--unixsocket|-k)\n+ \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n+ \t\tshift;;\n+ -k*)\n+ PSQLOPT=\"$PSQLOPT $1\"\n+ ;;\n+ --unixsocket=*)\n+ PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n+ ;;\n \t--username|-U)\n \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n \t\tshift;;\n***************\n*** 114,119 ****\n--- 123,129 ----\n \techo \" -E, --encoding=ENCODING Multibyte encoding for the database\"\n \techo \" -h, --host=HOSTNAME Database server host\"\n \techo \" -p, --port=PORT Database server port\"\n+ \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n \techo \" -U, --username=USERNAME Username to connect as\"\n \techo \" -W, --password Prompt for password\"\n \techo \" -e, --echo Show the query being sent to the backend\"\nIndex: src/bin/scripts/createlang.sh\n*** src/bin/scripts/createlang.sh\t2000/06/30 21:15:46\t1.1\n--- src/bin/scripts/createlang.sh\t2000/07/04 04:46:45\t1.2\n***************\n*** 65,70 ****\n--- 65,79 ----\n --port=*)\n PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n ;;\n+ \t--unixsocket|-k)\n+ \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n+ \t\tshift;;\n+ -k*)\n+ PSQLOPT=\"$PSQLOPT $1\"\n+ ;;\n+ --unixsocket=*)\n+ PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n+ ;;\n \t--username|-U)\n \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n 
\t\tshift;;\n***************\n*** 126,131 ****\n--- 135,141 ----\n \techo \"Options:\"\n \techo \" -h, --host=HOSTNAME Database server host\"\n \techo \" -p, --port=PORT Database server port\"\n+ \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n \techo \" -U, --username=USERNAME Username to connect as\"\n \techo \" -W, --password Prompt for password\"\n \techo \" -d, --dbname=DBNAME Database to install language in\"\nIndex: src/bin/scripts/createuser\n*** src/bin/scripts/createuser\t2000/06/30 21:15:46\t1.1\n--- src/bin/scripts/createuser\t2000/07/04 04:46:45\t1.2\n***************\n*** 63,68 ****\n--- 63,77 ----\n --port=*)\n PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n ;;\n+ \t--unixsocket|-k)\n+ \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n+ \t\tshift;;\n+ -k*)\n+ PSQLOPT=\"$PSQLOPT $1\"\n+ ;;\n+ --unixsocket=*)\n+ PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n+ ;;\n # Note: These two specify the user to connect as (like in psql),\n # not the user you're creating.\n \t--username|-U)\n***************\n*** 135,140 ****\n--- 144,150 ----\n \techo \" -P, --pwprompt Assign a password to new user\"\n \techo \" -h, --host=HOSTNAME Database server host\"\n \techo \" -p, --port=PORT Database server port\"\n+ \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n \techo \" -U, --username=USERNAME Username to connect as (not the one to create)\"\n \techo \" -W, --password Prompt for password to connect\"\n \techo \" -e, --echo Show the query being sent to the backend\"\nIndex: src/bin/scripts/dropdb\n*** src/bin/scripts/dropdb\t2000/06/30 21:15:46\t1.1\n--- src/bin/scripts/dropdb\t2000/07/04 04:46:45\t1.2\n***************\n*** 59,64 ****\n--- 59,73 ----\n --port=*)\n PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n ;;\n+ \t--unixsocket|-k)\n+ \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n+ \t\tshift;;\n+ -k*)\n+ PSQLOPT=\"$PSQLOPT $1\"\n+ ;;\n+ --unixsocket=*)\n+ PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 
's/^--unixsocket=//'`\n+ ;;\n \t--username|-U)\n \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n \t\tshift;;\n***************\n*** 103,108 ****\n--- 112,118 ----\n \techo \"Options:\"\n \techo \" -h, --host=HOSTNAME Database server host\"\n \techo \" -p, --port=PORT Database server port\"\n+ \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n \techo \" -U, --username=USERNAME Username to connect as\"\n \techo \" -W, --password Prompt for password\"\n \techo \" -i, --interactive Prompt before deleting anything\"\nIndex: src/bin/scripts/droplang\n*** src/bin/scripts/droplang\t2000/06/30 21:15:46\t1.1\n--- src/bin/scripts/droplang\t2000/07/04 04:46:45\t1.2\n***************\n*** 65,70 ****\n--- 65,79 ----\n --port=*)\n PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n ;;\n+ \t--unixsocket|-k)\n+ \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n+ \t\tshift;;\n+ -k*)\n+ PSQLOPT=\"$PSQLOPT $1\"\n+ ;;\n+ --unixsocket=*)\n+ PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n+ ;;\n \t--username|-U)\n \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n \t\tshift;;\n***************\n*** 113,118 ****\n--- 122,128 ----\n \techo \"Options:\"\n \techo \" -h, --host=HOSTNAME Database server host\"\n \techo \" -p, --port=PORT Database server port\"\n+ \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n \techo \" -U, --username=USERNAME Username to connect as\"\n \techo \" -W, --password Prompt for password\"\n \techo \" -d, --dbname=DBNAME Database to remove language from\"\nIndex: src/bin/scripts/dropuser\n*** src/bin/scripts/dropuser\t2000/06/30 21:15:46\t1.1\n--- src/bin/scripts/dropuser\t2000/07/04 04:46:45\t1.2\n***************\n*** 59,64 ****\n--- 59,73 ----\n --port=*)\n PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n ;;\n+ \t--unixsocket|-k)\n+ \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n+ \t\tshift;;\n+ -k*)\n+ PSQLOPT=\"$PSQLOPT $1\"\n+ ;;\n+ --unixsocket=*)\n+ PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n+ ;;\n # Note: These two specify the 
user to connect as (like in psql),\n # not the user you're dropping.\n \t--username|-U)\n***************\n*** 105,110 ****\n--- 114,120 ----\n \techo \"Options:\"\n \techo \" -h, --host=HOSTNAME Database server host\"\n \techo \" -p, --port=PORT Database server port\"\n+ \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n \techo \" -U, --username=USERNAME Username to connect as (not the one to drop)\"\n \techo \" -W, --password Prompt for password to connect\"\n \techo \" -i, --interactive Prompt before deleting anything\"\nIndex: src/bin/scripts/vacuumdb\n*** src/bin/scripts/vacuumdb\t2000/06/30 21:15:46\t1.1\n--- src/bin/scripts/vacuumdb\t2000/07/04 04:46:45\t1.2\n***************\n*** 52,57 ****\n--- 52,66 ----\n --port=*)\n PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n ;;\n+ \t--unixsocket|-k)\n+ \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n+ \t\tshift;;\n+ -k*)\n+ PSQLOPT=\"$PSQLOPT $1\"\n+ ;;\n+ --unixsocket=*)\n+ PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n+ ;;\n \t--username|-U)\n \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n \t\tshift;;\n***************\n*** 121,126 ****\n--- 130,136 ----\n echo \"Options:\"\n \techo \" -h, --host=HOSTNAME Database server host\"\n \techo \" -p, --port=PORT Database server port\"\n+ \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n \techo \" -U, --username=USERNAME Username to connect as\"\n \techo \" -W, --password Prompt for password\"\n \techo \" -d, --dbname=DBNAME Database to vacuum\"\nIndex: src/include/libpq/libpq.h\n*** src/include/libpq/libpq.h\t2000/06/30 21:15:47\t1.1\n--- src/include/libpq/libpq.h\t2000/07/01 18:20:40\t1.2\n***************\n*** 236,246 ****\n /*\n * prototypes for functions in pqcomm.c\n */\n! 
extern int\tStreamServerPort(char *hostName, unsigned short portName, int *fdP);\n extern int\tStreamConnection(int server_fd, Port *port);\n extern void StreamClose(int sock);\n extern void pq_init(void);\n extern int\tpq_getport(void);\n extern void pq_close(void);\n extern int\tpq_getbytes(char *s, size_t len);\n extern int\tpq_getstring(StringInfo s);\n--- 236,247 ----\n /*\n * prototypes for functions in pqcomm.c\n */\n! extern int\tStreamServerPort(char *hostName, unsigned short portName, char *unixSocketName, int *fdP);\n extern int\tStreamConnection(int server_fd, Port *port);\n extern void StreamClose(int sock);\n extern void pq_init(void);\n extern int\tpq_getport(void);\n+ extern char\t*pq_getunixsocket(void);\n extern void pq_close(void);\n extern int\tpq_getbytes(char *s, size_t len);\n extern int\tpq_getstring(StringInfo s);\nIndex: src/include/libpq/password.h\nIndex: src/include/libpq/pqcomm.h\n*** src/include/libpq/pqcomm.h\t2000/06/30 21:15:47\t1.1\n--- src/include/libpq/pqcomm.h\t2000/07/01 18:59:33\t1.6\n***************\n*** 42,53 ****\n /* Configure the UNIX socket address for the well known port. */\n \n #if defined(SUN_LEN)\n! #define UNIXSOCK_PATH(sun,port) \\\n! \t(sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)), SUN_LEN(&(sun)))\n #else\n! #define UNIXSOCK_PATH(sun,port) \\\n! \t(sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)), \\\n! \t strlen((sun).sun_path)+ offsetof(struct sockaddr_un, sun_path))\n #endif\n \n /*\n--- 42,56 ----\n /* Configure the UNIX socket address for the well known port. */\n \n #if defined(SUN_LEN)\n! #define UNIXSOCK_PATH(sun,port,defpath) \\\n! (defpath ? (strncpy((sun).sun_path, defpath, sizeof((sun).sun_path)), (sun).sun_path[sizeof((sun).sun_path)-1] = '\\0') : sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)))\n! #define UNIXSOCK_LEN(sun) \\\n! (SUN_LEN(&(sun)))\n #else\n! #define UNIXSOCK_PATH(sun,port,defpath) \\\n! (defpath ? 
(strncpy((sun).sun_path, defpath, sizeof((sun).sun_path)), (sun).sun_path[sizeof((sun).sun_path)-1] = '\\0') : sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)))\n! #define UNIXSOCK_LEN(sun) \\\n! (strlen((sun).sun_path)+ offsetof(struct sockaddr_un, sun_path))\n #endif\n \n /*\nIndex: src/interfaces/libpq/fe-connect.c\n*** src/interfaces/libpq/fe-connect.c\t2000/06/30 21:15:51\t1.1\n--- src/interfaces/libpq/fe-connect.c\t2000/07/01 18:50:47\t1.3\n***************\n*** 125,130 ****\n--- 125,133 ----\n \t{\"port\", \"PGPORT\", DEF_PGPORT, NULL,\n \t\"Database-Port\", \"\", 6},\n \n+ \t{\"unixsocket\", \"PGUNIXSOCKET\", NULL, NULL,\n+ \t\"Unix-Socket\", \"\", 80},\n+ \n \t{\"tty\", \"PGTTY\", DefaultTty, NULL,\n \t\"Backend-Debug-TTY\", \"D\", 40},\n \n***************\n*** 293,298 ****\n--- 296,303 ----\n \tconn->pghost = tmp ? strdup(tmp) : NULL;\n \ttmp = conninfo_getval(connOptions, \"port\");\n \tconn->pgport = tmp ? strdup(tmp) : NULL;\n+ \ttmp = conninfo_getval(connOptions, \"unixsocket\");\n+ \tconn->pgunixsocket = tmp ? strdup(tmp) : NULL;\n \ttmp = conninfo_getval(connOptions, \"tty\");\n \tconn->pgtty = tmp ? 
strdup(tmp) : NULL;\n \ttmp = conninfo_getval(connOptions, \"options\");\n***************\n*** 369,374 ****\n--- 374,382 ----\n *\t PGPORT\t identifies TCP port to which to connect if <pgport> argument\n *\t\t\t\t is NULL or a null string.\n *\n+ *\t PGUNIXSOCKET\t identifies Unix-domain socket to which to connect; default\n+ *\t\t\t\t is computed from the TCP port.\n+ *\n *\t PGTTY\t\t identifies tty to which to send messages if <pgtty> argument\n *\t\t\t\t is NULL or a null string.\n *\n***************\n*** 422,427 ****\n--- 430,439 ----\n \telse\n \t\tconn->pgport = strdup(pgport);\n \n+ \tconn->pgunixsocket = getenv(\"PGUNIXSOCKET\");\n+ \tif (conn->pgunixsocket)\n+ \t\tconn->pgunixsocket = strdup(conn->pgunixsocket);\n+ \n \tif ((pgtty == NULL) || pgtty[0] == '\\0')\n \t{\n \t\tif ((tmp = getenv(\"PGTTY\")) == NULL)\n***************\n*** 489,501 ****\n \n /*\n * update_db_info -\n! * get all additional infos out of dbName\n *\n */\n static int\n update_db_info(PGconn *conn)\n {\n! \tchar\t *tmp,\n \t\t\t *old = conn->dbName;\n \n \tif (strchr(conn->dbName, '@') != NULL)\n--- 501,513 ----\n \n /*\n * update_db_info -\n! * get all additional info out of dbName\n *\n */\n static int\n update_db_info(PGconn *conn)\n {\n! \tchar\t *tmp, *tmp2,\n \t\t\t *old = conn->dbName;\n \n \tif (strchr(conn->dbName, '@') != NULL)\n***************\n*** 504,509 ****\n--- 516,523 ----\n \t\ttmp = strrchr(conn->dbName, ':');\n \t\tif (tmp != NULL)\t\t/* port number given */\n \t\t{\n+ \t\t\tif (conn->pgport)\n+ \t\t\t\tfree(conn->pgport);\n \t\t\tconn->pgport = strdup(tmp + 1);\n \t\t\t*tmp = '\\0';\n \t\t}\n***************\n*** 511,516 ****\n--- 525,532 ----\n \t\ttmp = strrchr(conn->dbName, '@');\n \t\tif (tmp != NULL)\t\t/* host name given */\n \t\t{\n+ \t\t\tif (conn->pghost)\n+ \t\t\t\tfree(conn->pghost);\n \t\t\tconn->pghost = strdup(tmp + 1);\n \t\t\t*tmp = '\\0';\n \t\t}\n***************\n*** 537,549 ****\n \n \t\t\t/*\n \t\t\t * new style:\n! 
\t\t\t * <tcp|unix>:postgresql://server[:port][/dbname][?options]\n \t\t\t */\n \t\t\toffset += strlen(\"postgresql://\");\n \n \t\t\ttmp = strrchr(conn->dbName + offset, '?');\n \t\t\tif (tmp != NULL)\t/* options given */\n \t\t\t{\n \t\t\t\tconn->pgoptions = strdup(tmp + 1);\n \t\t\t\t*tmp = '\\0';\n \t\t\t}\n--- 553,567 ----\n \n \t\t\t/*\n \t\t\t * new style:\n! \t\t\t * <tcp|unix>:postgresql://server[:port|:/unixsocket/path:][/dbname][?options]\n \t\t\t */\n \t\t\toffset += strlen(\"postgresql://\");\n \n \t\t\ttmp = strrchr(conn->dbName + offset, '?');\n \t\t\tif (tmp != NULL)\t/* options given */\n \t\t\t{\n+ \t\t\t\tif (conn->pgoptions)\n+ \t\t\t\t\tfree(conn->pgoptions);\n \t\t\t\tconn->pgoptions = strdup(tmp + 1);\n \t\t\t\t*tmp = '\\0';\n \t\t\t}\n***************\n*** 551,576 ****\n \t\t\ttmp = strrchr(conn->dbName + offset, '/');\n \t\t\tif (tmp != NULL)\t/* database name given */\n \t\t\t{\n \t\t\t\tconn->dbName = strdup(tmp + 1);\n \t\t\t\t*tmp = '\\0';\n \t\t\t}\n \t\t\telse\n \t\t\t{\n \t\t\t\tif ((tmp = getenv(\"PGDATABASE\")) != NULL)\n \t\t\t\t\tconn->dbName = strdup(tmp);\n \t\t\t\telse if (conn->pguser)\n \t\t\t\t\tconn->dbName = strdup(conn->pguser);\n \t\t\t}\n \n \t\t\ttmp = strrchr(old + offset, ':');\n! \t\t\tif (tmp != NULL)\t/* port number given */\n \t\t\t{\n- \t\t\t\tconn->pgport = strdup(tmp + 1);\n \t\t\t\t*tmp = '\\0';\n \t\t\t}\n \n \t\t\tif (strncmp(old, \"unix:\", 5) == 0)\n \t\t\t{\n \t\t\t\tconn->pghost = NULL;\n \t\t\t\tif (strcmp(old + offset, \"localhost\") != 0)\n \t\t\t\t{\n--- 569,630 ----\n \t\t\ttmp = strrchr(conn->dbName + offset, '/');\n \t\t\tif (tmp != NULL)\t/* database name given */\n \t\t\t{\n+ \t\t\t\tif (conn->dbName)\n+ \t\t\t\t\tfree(conn->dbName);\n \t\t\t\tconn->dbName = strdup(tmp + 1);\n \t\t\t\t*tmp = '\\0';\n \t\t\t}\n \t\t\telse\n \t\t\t{\n+ \t\t\t\t/* Why do we default only this value from the environment again? 
*/\n \t\t\t\tif ((tmp = getenv(\"PGDATABASE\")) != NULL)\n+ \t\t\t\t{\n+ \t\t\t\t\tif (conn->dbName)\n+ \t\t\t\t\t\tfree(conn->dbName);\n \t\t\t\t\tconn->dbName = strdup(tmp);\n+ \t\t\t\t}\n \t\t\t\telse if (conn->pguser)\n+ \t\t\t\t{\n+ \t\t\t\t\tif (conn->dbName)\n+ \t\t\t\t\t\tfree(conn->dbName);\n \t\t\t\t\tconn->dbName = strdup(conn->pguser);\n+ \t\t\t\t}\n \t\t\t}\n \n \t\t\ttmp = strrchr(old + offset, ':');\n! \t\t\tif (tmp != NULL)\t/* port number or Unix socket path given */\n \t\t\t{\n \t\t\t\t*tmp = '\\0';\n+ \t\t\t\tif ((tmp2 = strchr(tmp + 1, ':')) != NULL)\n+ \t\t\t\t{\n+ \t\t\t\t\tif (strncmp(old, \"unix:\", 5) != 0)\n+ \t\t\t\t\t{\n+ \t\t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n+ \t\t\t\t\t\t\t\t \"connectDBStart() -- \"\n+ \t\t\t\t\t\t\t\t \"socket name can only be specified with \"\n+ \t\t\t\t\t\t\t\t \"non-TCP\\n\");\n+ \t\t\t\t\t\treturn 1; \n+ \t\t\t\t\t}\n+ \t\t\t\t\t*tmp2 = '\\0';\n+ \t\t\t\t\tif (conn->pgunixsocket)\n+ \t\t\t\t\t\tfree(conn->pgunixsocket);\n+ \t\t\t\t\tconn->pgunixsocket = strdup(tmp + 1);\n+ \t\t\t\t}\n+ \t\t\t\telse\n+ \t\t\t\t{\n+ \t\t\t\t\tif (conn->pgport)\n+ \t\t\t\t\t\tfree(conn->pgport);\n+ \t\t\t\t\tconn->pgport = strdup(tmp + 1);\n+ \t\t\t\t\tif (conn->pgunixsocket)\n+ \t\t\t\t\t\tfree(conn->pgunixsocket);\n+ \t\t\t\t\tconn->pgunixsocket = NULL;\n+ \t\t\t\t}\n \t\t\t}\n \n \t\t\tif (strncmp(old, \"unix:\", 5) == 0)\n \t\t\t{\n+ \t\t\t\tif (conn->pghost)\n+ \t\t\t\t\tfree(conn->pghost);\n \t\t\t\tconn->pghost = NULL;\n \t\t\t\tif (strcmp(old + offset, \"localhost\") != 0)\n \t\t\t\t{\n***************\n*** 582,589 ****\n \t\t\t\t}\n \t\t\t}\n \t\t\telse\n \t\t\t\tconn->pghost = strdup(old + offset);\n! \n \t\t\tfree(old);\n \t\t}\n \t}\n--- 636,646 ----\n \t\t\t\t}\n \t\t\t}\n \t\t\telse\n+ \t\t\t{\n+ \t\t\t\tif (conn->pghost)\n+ \t\t\t\t\tfree(conn->pghost);\n \t\t\t\tconn->pghost = strdup(old + offset);\n! 
\t\t\t}\n \t\t\tfree(old);\n \t\t}\n \t}\n***************\n*** 743,749 ****\n \t}\n #if !defined(WIN32) && !defined(__CYGWIN32__)\n \telse\n! \t\tconn->raddr_len = UNIXSOCK_PATH(conn->raddr.un, portno);\n #endif\n \n \n--- 800,809 ----\n \t}\n #if !defined(WIN32) && !defined(__CYGWIN32__)\n \telse\n! \t{\n! \t\tUNIXSOCK_PATH(conn->raddr.un, portno, conn->pgunixsocket);\n! \t\tconn->raddr_len = UNIXSOCK_LEN(conn->raddr.un);\n! \t}\n #endif\n \n \n***************\n*** 892,898 ****\n \t\t\t\t\t\t\t conn->pghost ? conn->pghost : \"localhost\",\n \t\t\t\t\t\t\t (family == AF_INET) ?\n \t\t\t\t\t\t\t \"TCP/IP port\" : \"Unix socket\",\n! \t\t\t\t\t\t\t conn->pgport);\n \t\t\tgoto connect_errReturn;\n \t\t}\n \t}\n--- 952,959 ----\n \t\t\t\t\t\t\t conn->pghost ? conn->pghost : \"localhost\",\n \t\t\t\t\t\t\t (family == AF_INET) ?\n \t\t\t\t\t\t\t \"TCP/IP port\" : \"Unix socket\",\n! \t\t\t\t\t\t\t (family == AF_UNIX && conn->pgunixsocket) ?\n! \t\t\t\t\t\t\t conn->pgunixsocket : conn->pgport);\n \t\t\tgoto connect_errReturn;\n \t\t}\n \t}\n***************\n*** 1123,1129 ****\n \t\t\t\t\t\t\t conn->pghost ? conn->pghost : \"localhost\",\n \t\t\t\t\t\t\t\t (conn->raddr.sa.sa_family == AF_INET) ?\n \t\t\t\t\t\t\t\t\t \"TCP/IP port\" : \"Unix socket\",\n! \t\t\t\t\t\t\t\t\t conn->pgport);\n \t\t\t\t\tgoto error_return;\n \t\t\t\t}\n \n--- 1184,1191 ----\n \t\t\t\t\t\t\t conn->pghost ? conn->pghost : \"localhost\",\n \t\t\t\t\t\t\t\t (conn->raddr.sa.sa_family == AF_INET) ?\n \t\t\t\t\t\t\t\t\t \"TCP/IP port\" : \"Unix socket\",\n! \t\t\t\t\t\t\t (conn->raddr.sa.sa_family == AF_UNIX && conn->pgunixsocket) ?\n! 
\t\t\t\t\t\t\t\t\t conn->pgunixsocket : conn->pgport);\n \t\t\t\t\tgoto error_return;\n \t\t\t\t}\n \n***************\n*** 1799,1804 ****\n--- 1861,1868 ----\n \t\tfree(conn->pghostaddr);\n \tif (conn->pgport)\n \t\tfree(conn->pgport);\n+ \tif (conn->pgunixsocket)\n+ \t\tfree(conn->pgunixsocket);\n \tif (conn->pgtty)\n \t\tfree(conn->pgtty);\n \tif (conn->pgoptions)\n***************\n*** 2383,2388 ****\n--- 2447,2460 ----\n \tif (!conn)\n \t\treturn (char *) NULL;\n \treturn conn->pgport;\n+ }\n+ \n+ char *\n+ PQunixsocket(const PGconn *conn)\n+ {\n+ \tif (!conn)\n+ \t\treturn (char *) NULL;\n+ \treturn conn->pgunixsocket;\n }\n \n char *\nIndex: src/interfaces/libpq/libpq-fe.h\n*** src/interfaces/libpq/libpq-fe.h\t2000/06/30 21:15:51\t1.1\n--- src/interfaces/libpq/libpq-fe.h\t2000/07/01 18:20:40\t1.2\n***************\n*** 214,219 ****\n--- 214,220 ----\n \textern char *PQpass(const PGconn *conn);\n \textern char *PQhost(const PGconn *conn);\n \textern char *PQport(const PGconn *conn);\n+ \textern char *PQunixsocket(const PGconn *conn);\n \textern char *PQtty(const PGconn *conn);\n \textern char *PQoptions(const PGconn *conn);\n \textern ConnStatusType PQstatus(const PGconn *conn);\nIndex: src/interfaces/libpq/libpq-int.h\n*** src/interfaces/libpq/libpq-int.h\t2000/06/30 21:15:51\t1.1\n--- src/interfaces/libpq/libpq-int.h\t2000/07/01 18:20:40\t1.2\n***************\n*** 202,207 ****\n--- 202,209 ----\n \t\t\t\t\t\t\t\t * numbers-and-dots notation. Takes\n \t\t\t\t\t\t\t\t * precedence over above. */\n \tchar\t *pgport;\t\t\t/* the server's communication port */\n+ \tchar\t *pgunixsocket;\t\t/* the Unix-domain socket that the server is listening on;\n+ \t\t\t\t\t\t * if NULL, uses a default constructed from pgport */\n \tchar\t *pgtty;\t\t\t/* tty on which the backend messages is\n \t\t\t\t\t\t\t\t * displayed (NOT ACTUALLY USED???) 
*/
 	char	   *pgoptions;		/* options to start the backend with */
Index: src/interfaces/libpq/libpqdll.def
*** src/interfaces/libpq/libpqdll.def	2000/06/30 21:15:51	1.1
--- src/interfaces/libpq/libpqdll.def	2000/07/01 18:20:40	1.2
***************
*** 79,81 ****
--- 79,82 ----
 	destroyPQExpBuffer	@ 76
 	createPQExpBuffer	@ 77
 	PQconninfoFree		@ 78
+ 	PQunixsocket		@ 79
", "msg_date": "Fri, 7 Jul 2000 12:00:02 -0400 (EDT)", "msg_from": "\"David J. MacKenzie\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL virtual hosting support" }, { "msg_contents": "David J. MacKenzie writes:

Greetings,

These sound like interesting features, but you need to start with the
current sources. The options handling in particular has changed, look in
src/backend/utils/misc/guc.c.

> 4. The ability to, if run as root, open a pid file in /var/run as
> root, and then setuid to the desired user. (mysqld -u can almost
> do this; I had to patch it, too).

I've been wondering about this too. If we could set the user name in the
configuration file then this would certainly make the installation
procedures a lot simpler. Actually, initdb could use such an option as
well. Maybe this would work:

if test `pg_id -u` -eq 0 ; then
    su -l "$user" "$0 $*"
    exit
fi

where $user is the value of the --user/-u option.

> 1. Adds an environment variable PGUNIXSOCKET, analogous to MYSQL_UNIX_PORT,
> and command line options -k --unix-socket to the relevant programs.

Here's the trick question, what does this do: `psql -k foo -h bar'?
Perhaps we can integrate this into the -h option:

psql -h /tmp/foo	=> Unix socket
psql -h tmp.foo		=> TCP/IP
psql -h foo		=> TCP/IP
psql -h ./foo		=> Unix socket

That way we don't have to add a new option everywhere. Just an idea.

> 2. 
Adds a -h option to postmaster to set the hostname or IP address to
> listen on instead of the default INADDR_ANY.

That sounds like something that needs to be added into guc.c. You'll be
the first to add a string option, so it probably won't work. :)


-- 
Peter Eisentraut                  Sernanders väg 10:115
[email protected]              75262 Uppsala
http://yi.org/peter-e/            Sweden

", "msg_date": "Tue, 11 Jul 2000 22:39:56 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] PostgreSQL virtual hosting support" }, { "msg_contents": "I am tempted to apply this.  This is the second person who asked for
binding to a single port.  The patch looks quite complete, with doc
changes.  It appears to be a thorough job.

Any objections?

> Your name		:	David MacKenzie
> Your email address	:	[email protected]
> 
> 
> System Configuration
> ---------------------
>   Architecture (example: Intel Pentium)	: Intel x86
> 
>   Operating System (example: Linux 2.0.26 ELF)	: BSD/OS 4.0.1
> 
>   PostgreSQL version (example: PostgreSQL-7.0)	: PostgreSQL-7.0.2
> 
>   Compiler used (example: gcc 2.8.0)		: gcc version 2.7.2.1
> 
> 
> Please enter a FULL description of your problem:
> ------------------------------------------------
> 
> UUNET is looking into offering PostgreSQL as a part of a managed web
> hosting product, on both shared and dedicated machines.  We currently
> offer Oracle and MySQL, and it would be a nice middle-ground.
> However, as shipped, PostgreSQL lacks the following features we need
> that MySQL has:
> 
> 1. The ability to listen only on a particular IP address.  Each
>    hosting customer has their own IP address, on which all of their
>    servers (http, ftp, real media, etc.) run.
> 2. 
The ability to place the Unix-domain socket in a mode 700 directory.
>    This allows us to automatically create an empty database, with an
>    empty DBA password, for new or upgrading customers without having
>    to interactively set a DBA password and communicate it to (or from)
>    the customer.  This in turn cuts down our install and upgrade times.
> 3. The ability to connect to the Unix-domain socket from within a
>    change-rooted environment.  We run CGI programs chrooted to the
>    user's home directory, which is another reason why we need to be
>    able to specify where the Unix-domain socket is, instead of /tmp.
> 4. The ability to, if run as root, open a pid file in /var/run as
>    root, and then setuid to the desired user.  (mysqld -u can almost
>    do this; I had to patch it, too).
> 
> The patch below fixes problem 1-3.  I plan to address #4, also, but
> haven't done so yet.  These diffs are big enough that they should give
> the PG development team something to think about in the meantime :-)
> Also, I'm about to leave for 2 weeks' vacation, so I thought I'd get
> out what I have, which works (for the problems it tackles), now.
> 
> With these changes, we can set up and run PostgreSQL with scripts the
> same way we can with apache or proftpd or mysql.
> 
> In summary, this patch makes the following enhancements:
> 
> 1. Adds an environment variable PGUNIXSOCKET, analogous to MYSQL_UNIX_PORT,
>    and command line options -k --unix-socket to the relevant programs.
> 2. Adds a -h option to postmaster to set the hostname or IP address to
>    listen on instead of the default INADDR_ANY.
> 3. Extends some library interfaces to support the above.
> 4. 
Fixes a few memory leaks in PQconnectdb().\n> \n> The default behavior is unchanged from stock 7.0.2; if you don't use\n> any of these new features, they don't change the operation.\n> \n> Index: doc/src/sgml/layout.sgml\n> *** doc/src/sgml/layout.sgml\t2000/06/30 21:15:36\t1.1\n> --- doc/src/sgml/layout.sgml\t2000/07/02 03:56:05\t1.2\n> ***************\n> *** 55,61 ****\n> For example, if the database server machine is a remote machine, you\n> will need to set the <envar>PGHOST</envar> environment variable to the name\n> of the database server machine. The environment variable\n> ! <envar>PGPORT</envar> may also have to be set. The bottom line is this: if\n> you try to start an application program and it complains\n> that it cannot connect to the <Application>postmaster</Application>,\n> you must go back and make sure that your\n> --- 55,62 ----\n> For example, if the database server machine is a remote machine, you\n> will need to set the <envar>PGHOST</envar> environment variable to the name\n> of the database server machine. The environment variable\n> ! <envar>PGPORT</envar> or <envar>PGUNIXSOCKET</envar> may also have to be set.\n> ! 
The bottom line is this: if\n> you try to start an application program and it complains\n> that it cannot connect to the <Application>postmaster</Application>,\n> you must go back and make sure that your\n> Index: doc/src/sgml/libpq++.sgml\n> *** doc/src/sgml/libpq++.sgml\t2000/06/30 21:15:36\t1.1\n> --- doc/src/sgml/libpq++.sgml\t2000/07/02 03:56:05\t1.2\n> ***************\n> *** 93,98 ****\n> --- 93,105 ----\n> </listitem>\n> <listitem>\n> <para>\n> + \t<envar>PGUNIXSOCKET</envar> sets the full Unix domain socket\n> + \tfile name for communicating with the <productname>Postgres</productname>\n> + \tbackend.\n> + </para>\n> + </listitem>\n> + <listitem>\n> + <para>\n> \t<envar>PGDATABASE</envar> sets the default \n> \t<productname>Postgres</productname> database name.\n> </para>\n> Index: doc/src/sgml/libpq.sgml\n> *** doc/src/sgml/libpq.sgml\t2000/06/30 21:15:36\t1.1\n> --- doc/src/sgml/libpq.sgml\t2000/07/02 03:56:05\t1.2\n> ***************\n> *** 134,139 ****\n> --- 134,148 ----\n> </varlistentry>\n> \n> <varlistentry>\n> + <term><literal>unixsocket</literal></term>\n> + <listitem>\n> + <para>\n> + Full path to Unix-domain socket file to connect to at the server host.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> + <varlistentry>\n> <term><literal>dbname</literal></term>\n> <listitem>\n> <para>\n> ***************\n> *** 545,550 ****\n> --- 554,569 ----\n> \n> <listitem>\n> <para>\n> + <function>PQunixsocket</function>\n> + Returns the name of the Unix-domain socket of the connection.\n> + <synopsis>\n> + char *PQunixsocket(const PGconn *conn)\n> + </synopsis>\n> + </para>\n> + </listitem>\n> + \n> + <listitem>\n> + <para>\n> <function>PQtty</function>\n> Returns the debug tty of the connection.\n> <synopsis>\n> ***************\n> *** 1772,1777 ****\n> --- 1791,1803 ----\n> <envar>PGHOST</envar> sets the default server name.\n> If a non-zero-length string is specified, TCP/IP communication is used.\n> Without a host name, libpq will connect using a 
local Unix domain socket.\n> + </para>\n> + </listitem>\n> + <listitem>\n> + <para>\n> + <envar>PGPORT</envar> sets the default port or local Unix domain socket\n> + file extension for communicating with the <productname>Postgres</productname>\n> + backend.\n> </para>\n> </listitem>\n> <listitem>\n> Index: doc/src/sgml/start.sgml\n> *** doc/src/sgml/start.sgml\t2000/06/30 21:15:37\t1.1\n> --- doc/src/sgml/start.sgml\t2000/07/02 03:56:05\t1.2\n> ***************\n> *** 110,117 ****\n> will need to set the <acronym>PGHOST</acronym> environment\n> variable to the name\n> of the database server machine. The environment variable\n> ! <acronym>PGPORT</acronym> may also have to be set. The bottom\n> ! line is this: if\n> you try to start an application program and it complains\n> that it cannot connect to the <application>postmaster</application>,\n> you should immediately consult your site administrator to make\n> --- 110,117 ----\n> will need to set the <acronym>PGHOST</acronym> environment\n> variable to the name\n> of the database server machine. The environment variable\n> ! <acronym>PGPORT</acronym> or <acronym>PGUNIXSOCKET</acronym> may also have to be set.\n> ! 
The bottom line is this: if\n> you try to start an application program and it complains\n> that it cannot connect to the <application>postmaster</application>,\n> you should immediately consult your site administrator to make\n> Index: doc/src/sgml/ref/createdb.sgml\n> *** doc/src/sgml/ref/createdb.sgml\t2000/06/30 21:15:37\t1.1\n> --- doc/src/sgml/ref/createdb.sgml\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 58,63 ****\n> --- 58,75 ----\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> <varlistentry>\n> <term>-U, --username <replaceable class=\"parameter\">username</replaceable></term>\n> <listitem>\n> Index: doc/src/sgml/ref/createlang.sgml\n> *** doc/src/sgml/ref/createlang.sgml\t2000/06/30 21:15:37\t1.1\n> --- doc/src/sgml/ref/createlang.sgml\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 96,101 ****\n> --- 96,113 ----\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> <varlistentry>\n> <term>-U, --username <replaceable class=\"parameter\">username</replaceable></term>\n> <listitem>\n> Index: doc/src/sgml/ref/createuser.sgml\n> *** doc/src/sgml/ref/createuser.sgml\t2000/06/30 21:15:37\t1.1\n> --- doc/src/sgml/ref/createuser.sgml\t2000/07/04 04:46:45\t1.2\n> 
***************\n> *** 59,64 ****\n> --- 59,76 ----\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> <varlistentry>\n> <term>-e, --echo</term>\n> <listitem>\n> Index: doc/src/sgml/ref/dropdb.sgml\n> *** doc/src/sgml/ref/dropdb.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/dropdb.sgml\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 58,63 ****\n> --- 58,75 ----\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> <varlistentry>\n> <term>-U, --username <replaceable class=\"parameter\">username</replaceable></term>\n> <listitem>\n> Index: doc/src/sgml/ref/droplang.sgml\n> *** doc/src/sgml/ref/droplang.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/droplang.sgml\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 96,101 ****\n> --- 96,113 ----\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + 
</varlistentry>\n> + \n> <varlistentry>\n> <term>-U, --username <replaceable class=\"parameter\">username</replaceable></term>\n> <listitem>\n> Index: doc/src/sgml/ref/dropuser.sgml\n> *** doc/src/sgml/ref/dropuser.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/dropuser.sgml\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 58,63 ****\n> --- 58,75 ----\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> <varlistentry>\n> <term>-e, --echo</term>\n> <listitem>\n> Index: doc/src/sgml/ref/pg_dump.sgml\n> *** doc/src/sgml/ref/pg_dump.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/pg_dump.sgml\t2000/07/01 18:41:22\t1.2\n> ***************\n> *** 24,30 ****\n> </refsynopsisdivinfo>\n> <synopsis>\n> pg_dump [ <replaceable class=\"parameter\">dbname</replaceable> ]\n> ! pg_dump [ -h <replaceable class=\"parameter\">host</replaceable> ] [ -p <replaceable class=\"parameter\">port</replaceable> ]\n> [ -t <replaceable class=\"parameter\">table</replaceable> ]\n> [ -a ] [ -c ] [ -d ] [ -D ] [ -i ] [ -n ] [ -N ]\n> [ -o ] [ -s ] [ -u ] [ -v ] [ -x ]\n> --- 24,32 ----\n> </refsynopsisdivinfo>\n> <synopsis>\n> pg_dump [ <replaceable class=\"parameter\">dbname</replaceable> ]\n> ! pg_dump [ -h <replaceable class=\"parameter\">host</replaceable> ]\n> ! [ -k <replaceable class=\"parameter\">path</replaceable> ]\n> ! 
[ -p <replaceable class=\"parameter\">port</replaceable> ]\n> [ -t <replaceable class=\"parameter\">table</replaceable> ]\n> [ -a ] [ -c ] [ -d ] [ -D ] [ -i ] [ -n ] [ -N ]\n> [ -o ] [ -s ] [ -u ] [ -v ] [ -x ]\n> ***************\n> *** 200,205 ****\n> --- 202,222 ----\n> \t<application>postmaster</application>\n> \tis running. Defaults to using a local Unix domain socket\n> \trather than an IP connection..\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> + <varlistentry>\n> + <term>-k <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + \tSpecifies the local Unix domain socket file path\n> + \ton which the <application>postmaster</application>\n> + \tis listening for connections.\n> + Without this option, the socket path name defaults to\n> + the value of the <envar>PGUNIXSOCKET</envar> environment\n> + \tvariable (if set), otherwise it is constructed\n> + from the port number.\n> </para>\n> </listitem>\n> </varlistentry>\n> Index: doc/src/sgml/ref/pg_dumpall.sgml\n> *** doc/src/sgml/ref/pg_dumpall.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/pg_dumpall.sgml\t2000/07/01 18:41:22\t1.2\n> ***************\n> *** 24,30 ****\n> </refsynopsisdivinfo>\n> <synopsis>\n> pg_dumpall\n> ! pg_dumpall [ -h <replaceable class=\"parameter\">host</replaceable> ] [ -p <replaceable class=\"parameter\">port</replaceable> ] [ -a ] [ -d ] [ -D ] [ -O ] [ -s ] [ -u ] [ -v ] [ -x ]\n> </synopsis>\n> \n> <refsect2 id=\"R2-APP-PG-DUMPALL-1\">\n> --- 24,33 ----\n> </refsynopsisdivinfo>\n> <synopsis>\n> pg_dumpall\n> ! pg_dumpall [ -h <replaceable class=\"parameter\">host</replaceable> ]\n> ! [ -k <replaceable class=\"parameter\">path</replaceable> ]\n> ! [ -p <replaceable class=\"parameter\">port</replaceable> ]\n> ! 
[ -a ] [ -d ] [ -D ] [ -O ] [ -s ] [ -u ] [ -v ] [ -x ]\n> </synopsis>\n> \n> <refsect2 id=\"R2-APP-PG-DUMPALL-1\">\n> ***************\n> *** 137,142 ****\n> --- 140,160 ----\n> \t<application>postmaster</application>\n> \tis running. Defaults to using a local Unix domain socket\n> \trather than an IP connection..\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> + <varlistentry>\n> + <term>-k <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + \tSpecifies the local Unix domain socket file path\n> + \ton which the <application>postmaster</application>\n> + \tis listening for connections.\n> + Without this option, the socket path name defaults to\n> + the value of the <envar>PGUNIXSOCKET</envar> environment\n> + \tvariable (if set), otherwise it is constructed\n> + from the port number.\n> </para>\n> </listitem>\n> </varlistentry>\n> Index: doc/src/sgml/ref/postmaster.sgml\n> *** doc/src/sgml/ref/postmaster.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/postmaster.sgml\t2000/07/06 07:48:31\t1.7\n> ***************\n> *** 24,30 ****\n> </refsynopsisdivinfo>\n> <synopsis>\n> postmaster [ -B <replaceable class=\"parameter\">nBuffers</replaceable> ] [ -D <replaceable class=\"parameter\">DataDir</replaceable> ] [ -N <replaceable class=\"parameter\">maxBackends</replaceable> ] [ -S ]\n> ! [ -d <replaceable class=\"parameter\">DebugLevel</replaceable> ] [ -i ] [ -l ]\n> [ -o <replaceable class=\"parameter\">BackendOptions</replaceable> ] [ -p <replaceable class=\"parameter\">port</replaceable> ] [ -n | -s ]\n> </synopsis>\n> \n> --- 24,32 ----\n> </refsynopsisdivinfo>\n> <synopsis>\n> postmaster [ -B <replaceable class=\"parameter\">nBuffers</replaceable> ] [ -D <replaceable class=\"parameter\">DataDir</replaceable> ] [ -N <replaceable class=\"parameter\">maxBackends</replaceable> ] [ -S ]\n> ! [ -d <replaceable class=\"parameter\">DebugLevel</replaceable> ]\n> ! 
[ -h <replaceable class=\"parameter\">hostname</replaceable> ] [ -i ]\n> ! [ -k <replaceable class=\"parameter\">path</replaceable> ] [ -l ]\n> [ -o <replaceable class=\"parameter\">BackendOptions</replaceable> ] [ -p <replaceable class=\"parameter\">port</replaceable> ] [ -n | -s ]\n> </synopsis>\n> \n> ***************\n> *** 124,129 ****\n> --- 126,161 ----\n> </varlistentry>\n> \n> <varlistentry>\n> + <term>-h <replaceable class=\"parameter\">hostName</replaceable></term>\n> + <listitem>\n> + <para>\n> + \tSpecifies the TCP/IP hostname or address\n> + \ton which the <application>postmaster</application>\n> + \tis to listen for connections from frontend applications. Defaults to\n> + \tthe value of the \n> + \t<envar>PGHOST</envar> \n> + \tenvironment variable, or if <envar>PGHOST</envar>\n> + \tis not set, then defaults to \"all\", meaning listen on all configured addresses\n> + \t(including localhost).\n> + </para>\n> + <para>\n> + \tIf you use a hostname or address other than \"all\", do not try to run\n> + \tmultiple instances of <application>postmaster</application> on the\n> + \tsame IP address but different ports. Doing so will result in them\n> + \tattempting (incorrectly) to use the same shared memory segments.\n> + \tAlso, if you use a hostname other than \"all\", all of the host's IP addresses\n> + \ton which <application>postmaster</application> instances are\n> + \tlistening must be distinct in the two last octets.\n> + </para>\n> + <para>\n> + \tIf you do use \"all\" (the default), then each instance must listen on a\n> + \tdifferent port (via -p or <envar>PGPORT</envar>). 
And, of course, do\n> + \tnot try to use both approaches on one host.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> + <varlistentry>\n> <term>-i</term>\n> <listitem>\n> <para>\n> ***************\n> *** 135,140 ****\n> --- 167,201 ----\n> </varlistentry>\n> \n> <varlistentry>\n> + <term>-k <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + \tSpecifies the local Unix domain socket path name\n> + \ton which the <application>postmaster</application>\n> + \tis to listen for connections from frontend applications. Defaults to\n> + \tthe value of the \n> + \t<envar>PGUNIXSOCKET</envar> \n> + \tenvironment variable, or if <envar>PGUNIXSOCKET</envar>\n> + \tis not set, then defaults to a file in <filename>/tmp</filename>\n> + \tconstructed from the port number.\n> + </para>\n> + <para>\n> + You can use this option to put the Unix-domain socket in a\n> + directory that is private to one or more users using Unix\n> + \tdirectory permissions. This is necessary for securely\n> + \tcreating databases automatically on shared machines.\n> + In that situation, also disallow all TCP/IP connections\n> + \tinitially in <filename>pg_hba.conf</filename>.\n> + \tIf you specify a socket path other than the\n> + \tdefault then all frontend applications (including\n> + \t<application>psql</application>) must specify the same\n> + \tsocket path using either command-line options or\n> + \t<envar>PGUNIXSOCKET</envar>.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> + <varlistentry>\n> <term>-l</term>\n> <listitem>\n> <para>\n> Index: doc/src/sgml/ref/psql-ref.sgml\n> *** doc/src/sgml/ref/psql-ref.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/psql-ref.sgml\t2000/07/02 03:56:05\t1.3\n> ***************\n> *** 1329,1334 ****\n> --- 1329,1347 ----\n> \n> \n> <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket 
on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> + \n> + <varlistentry>\n> <term>-H, --html</term>\n> <listitem>\n> <para>\n> Index: doc/src/sgml/ref/vacuumdb.sgml\n> *** doc/src/sgml/ref/vacuumdb.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/vacuumdb.sgml\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 24,30 ****\n> </refsynopsisdivinfo>\n> <synopsis>\n> vacuumdb [ <replaceable class=\"parameter\">options</replaceable> ] [ --analyze | -z ]\n> ! [ --alldb | -a ] [ --verbose | -v ]\n> [ --table '<replaceable class=\"parameter\">table</replaceable> [ ( <replaceable class=\"parameter\">column</replaceable> [,...] ) ]' ] [ [-d] <replaceable class=\"parameter\">dbname</replaceable> ]\n> </synopsis>\n> \n> --- 24,30 ----\n> </refsynopsisdivinfo>\n> <synopsis>\n> vacuumdb [ <replaceable class=\"parameter\">options</replaceable> ] [ --analyze | -z ]\n> ! [ --all | -a ] [ --verbose | -v ]\n> [ --table '<replaceable class=\"parameter\">table</replaceable> [ ( <replaceable class=\"parameter\">column</replaceable> [,...] 
) ]' ] [ [-d] <replaceable class=\"parameter\">dbname</replaceable> ]\n> </synopsis>\n> \n> ***************\n> *** 128,133 ****\n> --- 128,145 ----\n> </para>\n> </listitem>\n> </varlistentry>\n> + \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> \n> <varlistentry>\n> <term>-U <replaceable class=\"parameter\">username</replaceable></term>\n> Index: src/backend/libpq/pqcomm.c\n> *** src/backend/libpq/pqcomm.c\t2000/06/30 21:15:40\t1.1\n> --- src/backend/libpq/pqcomm.c\t2000/07/01 18:50:46\t1.3\n> ***************\n> *** 42,47 ****\n> --- 42,48 ----\n> *\t\tStreamConnection\t- Create new connection with client\n> *\t\tStreamClose\t\t\t- Close a client/backend connection\n> *\t\tpq_getport\t\t- return the PGPORT setting\n> + *\t\tpq_getunixsocket\t- return the PGUNIXSOCKET setting\n> *\t\tpq_init\t\t\t- initialize libpq at backend startup\n> *\t\tpq_close\t\t- shutdown libpq at backend exit\n> *\n> ***************\n> *** 134,139 ****\n> --- 135,151 ----\n> }\n> \n> /* --------------------------------\n> + *\t\tpq_getunixsocket - return the PGUNIXSOCKET setting.\n> + *\t\tIf NULL, default to computing it based on the port.\n> + * --------------------------------\n> + */\n> + char *\n> + pq_getunixsocket(void)\n> + {\n> + \treturn getenv(\"PGUNIXSOCKET\");\n> + }\n> + \n> + /* --------------------------------\n> *\t\tpq_close - shutdown libpq at backend exit\n> *\n> * Note: in a standalone backend MyProcPort will be null,\n> ***************\n> *** 177,189 ****\n> /*\n> * StreamServerPort -- open a sock stream \"listening\" port.\n> *\n> ! 
* This initializes the Postmaster's connection-accepting port.\n> *\n> * RETURNS: STATUS_OK or STATUS_ERROR\n> */\n> \n> int\n> ! StreamServerPort(char *hostName, unsigned short portName, int *fdP)\n> {\n> \tSockAddr\tsaddr;\n> \tint\t\t\tfd,\n> --- 189,205 ----\n> /*\n> * StreamServerPort -- open a sock stream \"listening\" port.\n> *\n> ! * This initializes the Postmaster's connection-accepting port fdP.\n> ! * If hostName is \"any\", listen on all configured IP addresses.\n> ! * If hostName is NULL, listen on a Unix-domain socket instead of TCP;\n> ! * if unixSocketName is NULL, a default path (constructed in UNIX_SOCK_PATH\n> ! * in include/libpq/pqcomm.h) based on portName is used.\n> *\n> * RETURNS: STATUS_OK or STATUS_ERROR\n> */\n> \n> int\n> ! StreamServerPort(char *hostName, unsigned short portNumber, char *unixSocketName, int *fdP)\n> {\n> \tSockAddr\tsaddr;\n> \tint\t\t\tfd,\n> ***************\n> *** 227,233 ****\n> \tsaddr.sa.sa_family = family;\n> \tif (family == AF_UNIX)\n> \t{\n> ! \t\tlen = UNIXSOCK_PATH(saddr.un, portName);\n> \t\tstrcpy(sock_path, saddr.un.sun_path);\n> \n> \t\t/*\n> --- 243,250 ----\n> \tsaddr.sa.sa_family = family;\n> \tif (family == AF_UNIX)\n> \t{\n> ! \t\tUNIXSOCK_PATH(saddr.un, portNumber, unixSocketName);\n> ! \t\tlen = UNIXSOCK_LEN(saddr.un);\n> \t\tstrcpy(sock_path, saddr.un.sun_path);\n> \n> \t\t/*\n> ***************\n> *** 259,267 ****\n> \t}\n> \telse\n> \t{\n> ! \t\tsaddr.in.sin_addr.s_addr = htonl(INADDR_ANY);\n> ! \t\tsaddr.in.sin_port = htons(portName);\n> ! \t\tlen = sizeof(struct sockaddr_in);\n> \t}\n> \terr = bind(fd, &saddr.sa, len);\n> \tif (err < 0)\n> --- 276,305 ----\n> \t}\n> \telse\n> \t{\n> ! \t /* TCP/IP socket */\n> ! \t if (!strcmp(hostName, \"all\")) /* like for databases in pg_hba.conf. */\n> ! \t saddr.in.sin_addr.s_addr = htonl(INADDR_ANY);\n> ! \t else\n> ! \t {\n> ! \t struct hostent *hp;\n> ! \n> ! \t hp = gethostbyname(hostName);\n> ! \t if ((hp == NULL) || (hp->h_addrtype != AF_INET))\n> ! 
\t\t{\n> ! \t\t snprintf(PQerrormsg, PQERRORMSG_LENGTH,\n> ! \t\t\t \"FATAL: StreamServerPort: gethostbyname(%s) failed: %s\\n\",\n> ! \t\t\t hostName, hstrerror(h_errno));\n> ! \t\t fputs(PQerrormsg, stderr);\n> ! \t\t pqdebug(\"%s\", PQerrormsg);\n> ! \t\t return STATUS_ERROR;\n> ! \t\t}\n> ! \t memmove((char *) &(saddr.in.sin_addr),\n> ! \t\t (char *) hp->h_addr,\n> ! \t\t hp->h_length);\n> ! \t }\n> ! \n> ! \t saddr.in.sin_port = htons(portNumber);\n> ! \t len = sizeof(struct sockaddr_in);\n> \t}\n> \terr = bind(fd, &saddr.sa, len);\n> \tif (err < 0)\n> Index: src/backend/postmaster/postmaster.c\n> *** src/backend/postmaster/postmaster.c\t2000/06/30 21:15:42\t1.1\n> --- src/backend/postmaster/postmaster.c\t2000/07/06 07:38:21\t1.5\n> ***************\n> *** 136,143 ****\n> /* list of ports associated with still open, but incomplete connections */\n> static Dllist *PortList;\n> \n> ! static unsigned short PostPortName = 0;\n> \n> /*\n> * This is a boolean indicating that there is at least one backend that\n> * is accessing the current shared memory and semaphores. Between the\n> --- 136,150 ----\n> /* list of ports associated with still open, but incomplete connections */\n> static Dllist *PortList;\n> \n> ! /* Hostname of interface to listen on, or 'any'. */\n> ! static char *HostName = NULL;\n> \n> + /* TCP/IP port number to listen on. Also used to default the Unix-domain socket name. */\n> + static unsigned short PostPortNumber = 0;\n> + \n> + /* Override of the default Unix-domain socket name to listen on, if non-NULL. */\n> + static char *UnixSocketName = NULL;\n> + \n> /*\n> * This is a boolean indicating that there is at least one backend that\n> * is accessing the current shared memory and semaphores. Between the\n> ***************\n> *** 274,280 ****\n> static void SignalChildren(SIGNAL_ARGS);\n> static int\tCountChildren(void);\n> static int\n> ! 
SetOptsFile(char *progname, int port, char *datadir,\n> \t\t\tint assert, int nbuf, char *execfile,\n> \t\t\tint debuglvl, int netserver,\n> #ifdef USE_SSL\n> --- 281,287 ----\n> static void SignalChildren(SIGNAL_ARGS);\n> static int\tCountChildren(void);\n> static int\n> ! SetOptsFile(char *progname, char *hostname, int port, char *unixsocket, char *datadir,\n> \t\t\tint assert, int nbuf, char *execfile,\n> \t\t\tint debuglvl, int netserver,\n> #ifdef USE_SSL\n> ***************\n> *** 370,380 ****\n> {\n> \textern int\tNBuffers;\t\t/* from buffer/bufmgr.c */\n> \tint\t\t\topt;\n> - \tchar\t *hostName;\n> \tint\t\t\tstatus;\n> \tint\t\t\tsilentflag = 0;\n> \tbool\t\tDataDirOK;\t\t/* We have a usable PGDATA value */\n> - \tchar\t\thostbuf[MAXHOSTNAMELEN];\n> \tint\t\t\tnonblank_argc;\n> \tchar\t\toriginal_extraoptions[MAXPGPATH];\n> \n> --- 377,385 ----\n> ***************\n> *** 431,449 ****\n> \t */\n> \tumask((mode_t) 0077);\n> \n> - \tif (!(hostName = getenv(\"PGHOST\")))\n> - \t{\n> - \t\tif (gethostname(hostbuf, MAXHOSTNAMELEN) < 0)\n> - \t\t\tstrcpy(hostbuf, \"localhost\");\n> - \t\thostName = hostbuf;\n> - \t}\n> - \n> \tMyProcPid = getpid();\n> \tDataDir = getenv(\"PGDATA\"); /* default value */\n> \n> \topterr = 0;\n> \tIgnoreSystemIndexes(false);\n> ! \twhile ((opt = getopt(nonblank_argc, argv, \"A:a:B:b:D:d:ilm:MN:no:p:Ss\")) != EOF)\n> \t{\n> \t\tswitch (opt)\n> \t\t{\n> --- 436,447 ----\n> \t */\n> \tumask((mode_t) 0077);\n> \n> \tMyProcPid = getpid();\n> \tDataDir = getenv(\"PGDATA\"); /* default value */\n> \n> \topterr = 0;\n> \tIgnoreSystemIndexes(false);\n> ! 
\twhile ((opt = getopt(nonblank_argc, argv, \"A:a:B:b:D:d:h:ik:lm:MN:no:p:Ss\")) != EOF)\n> \t{\n> \t\tswitch (opt)\n> \t\t{\n> ***************\n> *** 498,506 ****\n> --- 496,511 ----\n> \t\t\t\tDebugLvl = atoi(optarg);\n> \t\t\t\tpg_options[TRACE_VERBOSE] = DebugLvl;\n> \t\t\t\tbreak;\n> + \t\t\tcase 'h':\n> + \t\t\t\tHostName = optarg;\n> + \t\t\t\tbreak;\n> \t\t\tcase 'i':\n> \t\t\t\tNetServer = true;\n> \t\t\t\tbreak;\n> + \t\t\tcase 'k':\n> + \t\t\t\t/* Set PGUNIXSOCKET by hand. */\n> + \t\t\t\tUnixSocketName = optarg;\n> + \t\t\t\tbreak;\n> #ifdef USE_SSL\n> \t\t\tcase 'l':\n> \t\t\t\tSecureNetServer = true;\n> ***************\n> *** 545,551 ****\n> \t\t\t\tbreak;\n> \t\t\tcase 'p':\n> \t\t\t\t/* Set PGPORT by hand. */\n> ! \t\t\t\tPostPortName = (unsigned short) atoi(optarg);\n> \t\t\t\tbreak;\n> \t\t\tcase 'S':\n> \n> --- 550,556 ----\n> \t\t\t\tbreak;\n> \t\t\tcase 'p':\n> \t\t\t\t/* Set PGPORT by hand. */\n> ! \t\t\t\tPostPortNumber = (unsigned short) atoi(optarg);\n> \t\t\t\tbreak;\n> \t\t\tcase 'S':\n> \n> ***************\n> *** 577,584 ****\n> \t/*\n> \t * Select default values for switches where needed\n> \t */\n> ! \tif (PostPortName == 0)\n> ! \t\tPostPortName = (unsigned short) pq_getport();\n> \n> \t/*\n> \t * Check for invalid combinations of switches\n> --- 582,603 ----\n> \t/*\n> \t * Select default values for switches where needed\n> \t */\n> ! \tif (HostName == NULL)\n> ! \t{\n> ! \t\tif (!(HostName = getenv(\"PGHOST\")))\n> ! \t\t{\n> ! \t\t\tHostName = \"any\";\n> ! \t\t}\n> ! \t}\n> ! \telse if (!NetServer)\n> ! \t{\n> ! \t\tfprintf(stderr, \"%s: -h requires -i.\\n\", progname);\n> ! \t\texit(1);\n> ! \t}\n> ! \tif (PostPortNumber == 0)\n> ! \t\tPostPortNumber = (unsigned short) pq_getport();\n> ! \tif (UnixSocketName == NULL)\n> ! \t\tUnixSocketName = pq_getunixsocket();\n> \n> \t/*\n> \t * Check for invalid combinations of switches\n> ***************\n> *** 622,628 ****\n> \n> \tif (NetServer)\n> \t{\n> ! 
\t\tstatus = StreamServerPort(hostName, PostPortName, &ServerSock_INET);\n> \t\tif (status != STATUS_OK)\n> \t\t{\n> \t\t\tfprintf(stderr, \"%s: cannot create INET stream port\\n\",\n> --- 641,647 ----\n> \n> \tif (NetServer)\n> \t{\n> ! \t\tstatus = StreamServerPort(HostName, PostPortNumber, NULL, &ServerSock_INET);\n> \t\tif (status != STATUS_OK)\n> \t\t{\n> \t\t\tfprintf(stderr, \"%s: cannot create INET stream port\\n\",\n> ***************\n> *** 632,638 ****\n> \t}\n> \n> #if !defined(__CYGWIN32__) && !defined(__QNX__)\n> ! \tstatus = StreamServerPort(NULL, PostPortName, &ServerSock_UNIX);\n> \tif (status != STATUS_OK)\n> \t{\n> \t\tfprintf(stderr, \"%s: cannot create UNIX stream port\\n\",\n> --- 651,657 ----\n> \t}\n> \n> #if !defined(__CYGWIN32__) && !defined(__QNX__)\n> ! \tstatus = StreamServerPort(NULL, PostPortNumber, UnixSocketName, &ServerSock_UNIX);\n> \tif (status != STATUS_OK)\n> \t{\n> \t\tfprintf(stderr, \"%s: cannot create UNIX stream port\\n\",\n> ***************\n> *** 642,648 ****\n> #endif\n> \t/* set up shared memory and semaphores */\n> \tEnableMemoryContext(TRUE);\n> ! \treset_shared(PostPortName);\n> \n> \t/*\n> \t * Initialize the list of active backends.\tThis list is only used for\n> --- 661,667 ----\n> #endif\n> \t/* set up shared memory and semaphores */\n> \tEnableMemoryContext(TRUE);\n> ! \treset_shared(PostPortNumber);\n> \n> \t/*\n> \t * Initialize the list of active backends.\tThis list is only used for\n> ***************\n> *** 664,670 ****\n> \t\t{\n> \t\t\tif (SetOptsFile(\n> \t\t\t\t\t\t\tprogname,\t/* postmaster executable file */\n> ! \t\t\t\t\t\t\tPostPortName,\t\t/* port number */\n> \t\t\t\t\t\t\tDataDir,\t/* PGDATA */\n> \t\t\t\t\t\t\tassert_enabled,\t\t/* whether -A is specified\n> \t\t\t\t\t\t\t\t\t\t\t\t * or not */\n> --- 683,691 ----\n> \t\t{\n> \t\t\tif (SetOptsFile(\n> \t\t\t\t\t\t\tprogname,\t/* postmaster executable file */\n> ! \t\t\t\t\t\t\tHostName, /* IP address to bind to */\n> ! 
\t\t\t\t\t\t\tPostPortNumber,\t\t/* port number */\n> ! \t\t\t\t\t\t\tUnixSocketName,\t/* PGUNIXSOCKET */\n> \t\t\t\t\t\t\tDataDir,\t/* PGDATA */\n> \t\t\t\t\t\t\tassert_enabled,\t\t/* whether -A is specified\n> \t\t\t\t\t\t\t\t\t\t\t\t * or not */\n> ***************\n> *** 753,759 ****\n> \t\t{\n> \t\t\tif (SetOptsFile(\n> \t\t\t\t\t\t\tprogname,\t/* postmaster executable file */\n> ! \t\t\t\t\t\t\tPostPortName,\t\t/* port number */\n> \t\t\t\t\t\t\tDataDir,\t/* PGDATA */\n> \t\t\t\t\t\t\tassert_enabled,\t\t/* whether -A is specified\n> \t\t\t\t\t\t\t\t\t\t\t\t * or not */\n> --- 774,782 ----\n> \t\t{\n> \t\t\tif (SetOptsFile(\n> \t\t\t\t\t\t\tprogname,\t/* postmaster executable file */\n> ! \t\t\t\t\t\t\tHostName, /* IP address to bind to */\n> ! \t\t\t\t\t\t\tPostPortNumber,\t\t/* port number */\n> ! \t\t\t\t\t\t\tUnixSocketName,\t/* PGUNIXSOCKET */\n> \t\t\t\t\t\t\tDataDir,\t/* PGDATA */\n> \t\t\t\t\t\t\tassert_enabled,\t\t/* whether -A is specified\n> \t\t\t\t\t\t\t\t\t\t\t\t * or not */\n> ***************\n> *** 837,843 ****\n> --- 860,868 ----\n> \tfprintf(stderr, \"\\t-a system\\tuse this authentication system\\n\");\n> \tfprintf(stderr, \"\\t-b backend\\tuse a specific backend server executable\\n\");\n> \tfprintf(stderr, \"\\t-d [1-5]\\tset debugging level\\n\");\n> + \tfprintf(stderr, \"\\t-h hostname\\tspecify hostname or IP address or 'any' for postmaster to listen on (also use -i)\\n\");\n> \tfprintf(stderr, \"\\t-i \\t\\tlisten on TCP/IP sockets as well as Unix domain socket\\n\");\n> + \tfprintf(stderr, \"\\t-k path\\tspecify Unix-domain socket name for postmaster to listen on\\n\");\n> #ifdef USE_SSL\n> \tfprintf(stderr, \" \\t-l \\t\\tfor TCP/IP sockets, listen only on SSL connections\\n\");\n> #endif\n> ***************\n> *** 1318,1328 ****\n> --- 1343,1417 ----\n> }\n> \n> /*\n> + * get_host_port -- return a pseudo port number (16 bits)\n> + * derived from the primary IP address of HostName.\n> + */\n> + static unsigned short\n> + 
get_host_port(void)\n> + {\n> + \tstatic unsigned short hostPort = 0;\n> + \n> + \tif (hostPort == 0)\n> + \t{\n> + \t\tSockAddr\tsaddr;\n> + \t\tstruct hostent *hp;\n> + \n> + \t\thp = gethostbyname(HostName);\n> + \t\tif ((hp == NULL) || (hp->h_addrtype != AF_INET))\n> + \t\t{\n> + \t\t\tchar msg[1024];\n> + \t\t\tsnprintf(msg, sizeof(msg),\n> + \t\t\t\t \"FATAL: get_host_port: gethostbyname(%s) failed: %s\\n\",\n> + \t\t\t\t HostName, hstrerror(h_errno));\n> + \t\t\tfputs(msg, stderr);\n> + \t\t\tpqdebug(\"%s\", msg);\n> + \t\t\texit(1);\n> + \t\t}\n> + \t\tmemmove((char *) &(saddr.in.sin_addr),\n> + \t\t\t(char *) hp->h_addr,\n> + \t\t\thp->h_length);\n> + \t\thostPort = ntohl(saddr.in.sin_addr.s_addr) & 0xFFFF;\n> + \t}\n> + \n> + \treturn hostPort;\n> + }\n> + \n> + /*\n> * reset_shared -- reset shared memory and semaphores\n> */\n> static void\n> reset_shared(unsigned short port)\n> {\n> + \t/*\n> + \t * A typical ipc_key is 5432001, which is port 5432, sequence\n> + \t * number 0, and 01 as the index in IPCKeyGetBufferMemoryKey().\n> + \t * The 32-bit INT_MAX is 2147483 6 47.\n> + \t *\n> + \t * The default algorithm for calculating the IPC keys assumes that all\n> + \t * instances of postmaster on a given host are listening on different\n> + \t * ports. In order to work (prevent shared memory collisions) if you\n> + \t * run multiple PostgreSQL instances on the same port and different IP\n> + \t * addresses on a host, we change the algorithm if you give postmaster\n> + \t * the -h option, or set PGHOST, to a value other than the internal\n> + \t * default of \"any\".\n> + \t *\n> + \t * If HostName is not \"any\", then we generate the IPC keys using the\n> + \t * last two octets of the IP address instead of the port number.\n> + \t * This algorithm assumes that no one will run multiple PostgreSQL\n> + \t * instances on one host using two IP addresses that have the same two\n> + \t * last octets in different class C networks. 
If anyone does, it\n> + \t * would be rare.\n> + \t *\n> + \t * So, if you use -h or PGHOST, don't try to run two instances of\n> + \t * PostgreSQL on the same IP address but different ports. If you\n> + \t * don't use them, then you must use different ports (via -p or\n> + \t * PGPORT). And, of course, don't try to use both approaches on one\n> + \t * host.\n> + \t */\n> + \n> + \tif (strcmp(HostName, \"any\"))\n> + \t\tport = get_host_port();\n> + \n> \tipc_key = port * 1000 + shmem_seq * 100;\n> \tCreateSharedMemoryAndSemaphores(ipc_key, MaxBackends);\n> \tshmem_seq += 1;\n> ***************\n> *** 1540,1546 ****\n> \t\t\t\tctime(&tnow));\n> \t\tfflush(stderr);\n> \t\tshmem_exit(0);\n> ! \t\treset_shared(PostPortName);\n> \t\tStartupPID = StartupDataBase();\n> \t\treturn;\n> \t}\n> --- 1629,1635 ----\n> \t\t\t\tctime(&tnow));\n> \t\tfflush(stderr);\n> \t\tshmem_exit(0);\n> ! \t\treset_shared(PostPortNumber);\n> \t\tStartupPID = StartupDataBase();\n> \t\treturn;\n> \t}\n> ***************\n> *** 1720,1726 ****\n> \t * Set up the necessary environment variables for the backend This\n> \t * should really be some sort of message....\n> \t */\n> ! \tsprintf(envEntry[0], \"POSTPORT=%d\", PostPortName);\n> \tputenv(envEntry[0]);\n> \tsprintf(envEntry[1], \"POSTID=%d\", NextBackendTag);\n> \tputenv(envEntry[1]);\n> --- 1809,1815 ----\n> \t * Set up the necessary environment variables for the backend This\n> \t * should really be some sort of message....\n> \t */\n> ! \tsprintf(envEntry[0], \"POSTPORT=%d\", PostPortNumber);\n> \tputenv(envEntry[0]);\n> \tsprintf(envEntry[1], \"POSTID=%d\", NextBackendTag);\n> \tputenv(envEntry[1]);\n> ***************\n> *** 2174,2180 ****\n> \tfor (i = 0; i < 4; ++i)\n> \t\tMemSet(ssEntry[i], 0, 2 * ARGV_SIZE);\n> \n> ! 
\tsprintf(ssEntry[0], \"POSTPORT=%d\", PostPortName);\n> \tputenv(ssEntry[0]);\n> \tsprintf(ssEntry[1], \"POSTID=%d\", NextBackendTag);\n> \tputenv(ssEntry[1]);\n> --- 2263,2269 ----\n> \tfor (i = 0; i < 4; ++i)\n> \t\tMemSet(ssEntry[i], 0, 2 * ARGV_SIZE);\n> \n> ! \tsprintf(ssEntry[0], \"POSTPORT=%d\", PostPortNumber);\n> \tputenv(ssEntry[0]);\n> \tsprintf(ssEntry[1], \"POSTID=%d\", NextBackendTag);\n> \tputenv(ssEntry[1]);\n> ***************\n> *** 2254,2260 ****\n> * Create the opts file\n> */\n> static int\n> ! SetOptsFile(char *progname, int port, char *datadir,\n> \t\t\tint assert, int nbuf, char *execfile,\n> \t\t\tint debuglvl, int netserver,\n> #ifdef USE_SSL\n> --- 2343,2349 ----\n> * Create the opts file\n> */\n> static int\n> ! SetOptsFile(char *progname, char *hostname, int port, char *unixsocket, char *datadir,\n> \t\t\tint assert, int nbuf, char *execfile,\n> \t\t\tint debuglvl, int netserver,\n> #ifdef USE_SSL\n> ***************\n> *** 2279,2284 ****\n> --- 2368,2383 ----\n> \t\treturn (-1);\n> \t}\n> \tsnprintf(opts, sizeof(opts), \"%s\\n-p %d\\n-D %s\\n\", progname, port, datadir);\n> + \tif (netserver)\n> + \t{\n> + \t\tsprintf(buf, \"-h %s\\n\", hostname);\n> + \t\tstrcat(opts, buf);\n> + \t}\n> + \tif (unixsocket)\n> + \t{\n> + \t\tsprintf(buf, \"-k %s\\n\", unixsocket);\n> + \t\tstrcat(opts, buf);\n> + \t}\n> \tif (assert)\n> \t{\n> \t\tsprintf(buf, \"-A %d\\n\", assert);\n> Index: src/bin/pg_dump/pg_dump.c\n> *** src/bin/pg_dump/pg_dump.c\t2000/06/30 21:15:44\t1.1\n> --- src/bin/pg_dump/pg_dump.c\t2000/07/01 18:41:22\t1.2\n> ***************\n> *** 140,145 ****\n> --- 140,146 ----\n> \t\t \" -D, --attribute-inserts dump data as INSERT commands with attribute names\\n\"\n> \t\t \" -h, --host <hostname> server host name\\n\"\n> \t\t \" -i, --ignore-version proceed when database version != pg_dump version\\n\"\n> + \t\t \" -k, --unixsocket <path> server Unix-domain socket name\\n\"\n> \t\" -n, --no-quotes suppress most quotes around 
identifiers\\n\"\n> \t \" -N, --quotes enable most quotes around identifiers\\n\"\n> \t\t \" -o, --oids dump object ids (oids)\\n\"\n> ***************\n> *** 158,163 ****\n> --- 159,165 ----\n> \t\t \" -D dump data as INSERT commands with attribute names\\n\"\n> \t\t \" -h <hostname> server host name\\n\"\n> \t\t \" -i proceed when database version != pg_dump version\\n\"\n> + \t\t \" -k <path> server Unix-domain socket name\\n\"\n> \t\" -n suppress most quotes around identifiers\\n\"\n> \t \" -N enable most quotes around identifiers\\n\"\n> \t\t \" -o dump object ids (oids)\\n\"\n> ***************\n> *** 579,584 ****\n> --- 581,587 ----\n> \tconst char *dbname = NULL;\n> \tconst char *pghost = NULL;\n> \tconst char *pgport = NULL;\n> + \tconst char *pgunixsocket = NULL;\n> \tchar\t *tablename = NULL;\n> \tbool\t\toids = false;\n> \tTableInfo *tblinfo;\n> ***************\n> *** 598,603 ****\n> --- 601,607 ----\n> \t\t{\"attribute-inserts\", no_argument, NULL, 'D'},\n> \t\t{\"host\", required_argument, NULL, 'h'},\n> \t\t{\"ignore-version\", no_argument, NULL, 'i'},\n> + \t\t{\"unixsocket\", required_argument, NULL, 'k'},\n> \t\t{\"no-quotes\", no_argument, NULL, 'n'},\n> \t\t{\"quotes\", no_argument, NULL, 'N'},\n> \t\t{\"oids\", no_argument, NULL, 'o'},\n> ***************\n> *** 662,667 ****\n> --- 666,674 ----\n> \t\t\tcase 'i':\t\t\t/* ignore database version mismatch */\n> \t\t\t\tignore_version = true;\n> \t\t\t\tbreak;\n> + \t\t\tcase 'k':\t\t\t/* server Unix-domain socket */\n> + \t\t\t\tpgunixsocket = optarg;\n> + \t\t\t\tbreak;\n> \t\t\tcase 'n':\t\t\t/* Do not force double-quotes on\n> \t\t\t\t\t\t\t\t * identifiers */\n> \t\t\t\tforce_quotes = false;\n> ***************\n> *** 782,788 ****\n> \t\texit(1);\n> \t}\n> \n> - \t/* g_conn = PQsetdb(pghost, pgport, NULL, NULL, dbname); */\n> \tif (pghost != NULL)\n> \t{\n> \t\tsprintf(tmp_string, \"host=%s \", pghost);\n> --- 789,794 ----\n> ***************\n> *** 791,796 ****\n> --- 797,807 ----\n> \tif (pgport 
!= NULL)\n> \t{\n> \t\tsprintf(tmp_string, \"port=%s \", pgport);\n> + \t\tstrcat(connect_string, tmp_string);\n> + \t}\n> + \tif (pgunixsocket != NULL)\n> + \t{\n> + \t\tsprintf(tmp_string, \"unixsocket=%s \", pgunixsocket);\n> \t\tstrcat(connect_string, tmp_string);\n> \t}\n> \tif (dbname != NULL)\n> Index: src/bin/psql/command.c\n> *** src/bin/psql/command.c\t2000/06/30 21:15:46\t1.1\n> --- src/bin/psql/command.c\t2000/07/01 18:20:40\t1.2\n> ***************\n> *** 1199,1204 ****\n> --- 1199,1205 ----\n> \tSetVariable(pset.vars, \"USER\", NULL);\n> \tSetVariable(pset.vars, \"HOST\", NULL);\n> \tSetVariable(pset.vars, \"PORT\", NULL);\n> + \tSetVariable(pset.vars, \"UNIXSOCKET\", NULL);\n> \tSetVariable(pset.vars, \"ENCODING\", NULL);\n> \n> \t/* If dbname is \"\" then use old name, else new one (even if NULL) */\n> ***************\n> *** 1228,1233 ****\n> --- 1229,1235 ----\n> \tdo\n> \t{\n> \t\tneed_pass = false;\n> + \t\t/* FIXME use PQconnectdb to support passing the Unix socket */\n> \t\tpset.db = PQsetdbLogin(PQhost(oldconn), PQport(oldconn),\n> \t\t\t\t\t\t\t NULL, NULL, dbparam, userparam, pwparam);\n> \n> ***************\n> *** 1303,1308 ****\n> --- 1305,1311 ----\n> \tSetVariable(pset.vars, \"USER\", PQuser(pset.db));\n> \tSetVariable(pset.vars, \"HOST\", PQhost(pset.db));\n> \tSetVariable(pset.vars, \"PORT\", PQport(pset.db));\n> + \tSetVariable(pset.vars, \"UNIXSOCKET\", PQunixsocket(pset.db));\n> \tSetVariable(pset.vars, \"ENCODING\", pg_encoding_to_char(pset.encoding));\n> \n> \tpset.issuper = test_superuser(PQuser(pset.db));\n> Index: src/bin/psql/command.h\n> Index: src/bin/psql/common.c\n> *** src/bin/psql/common.c\t2000/06/30 21:15:46\t1.1\n> --- src/bin/psql/common.c\t2000/07/01 18:20:40\t1.2\n> ***************\n> *** 330,335 ****\n> --- 330,336 ----\n> \t\t\tSetVariable(pset.vars, \"DBNAME\", NULL);\n> \t\t\tSetVariable(pset.vars, \"HOST\", NULL);\n> \t\t\tSetVariable(pset.vars, \"PORT\", NULL);\n> + \t\t\tSetVariable(pset.vars, \"UNIXSOCKET\", 
NULL);\n> \t\t\tSetVariable(pset.vars, \"USER\", NULL);\n> \t\t\tSetVariable(pset.vars, \"ENCODING\", NULL);\n> \t\t\treturn NULL;\n> ***************\n> *** 509,514 ****\n> --- 510,516 ----\n> \t\t\t\tSetVariable(pset.vars, \"DBNAME\", NULL);\n> \t\t\t\tSetVariable(pset.vars, \"HOST\", NULL);\n> \t\t\t\tSetVariable(pset.vars, \"PORT\", NULL);\n> + \t\t\t\tSetVariable(pset.vars, \"UNIXSOCKET\", NULL);\n> \t\t\t\tSetVariable(pset.vars, \"USER\", NULL);\n> \t\t\t\tSetVariable(pset.vars, \"ENCODING\", NULL);\n> \t\t\t\treturn false;\n> Index: src/bin/psql/help.c\n> *** src/bin/psql/help.c\t2000/06/30 21:15:46\t1.1\n> --- src/bin/psql/help.c\t2000/07/01 18:20:40\t1.2\n> ***************\n> *** 103,108 ****\n> --- 103,118 ----\n> \tputs(\")\");\n> \n> \tputs(\" -H HTML table output mode (-P format=html)\");\n> + \n> + \t/* Display default Unix-domain socket */\n> + \tenv = getenv(\"PGUNIXSOCKET\");\n> + \tprintf(\" -k <path> Specify Unix domain socket name (default: \");\n> + \tif (env)\n> + \t\tfputs(env, stdout);\n> + \telse\n> + \t\tfputs(\"computed from the port\", stdout);\n> + \tputs(\")\");\n> + \n> \tputs(\" -l List available databases, then exit\");\n> \tputs(\" -n Disable readline\");\n> \tputs(\" -o <filename> Send query output to filename (or |pipe)\");\n> Index: src/bin/psql/prompt.c\n> *** src/bin/psql/prompt.c\t2000/06/30 21:15:46\t1.1\n> --- src/bin/psql/prompt.c\t2000/07/01 18:20:40\t1.2\n> ***************\n> *** 189,194 ****\n> --- 189,199 ----\n> \t\t\t\t\tif (pset.db && PQport(pset.db))\n> \t\t\t\t\t\tstrncpy(buf, PQport(pset.db), MAX_PROMPT_SIZE);\n> \t\t\t\t\tbreak;\n> + \t\t\t\t\t/* DB server Unix-domain socket */\n> + \t\t\t\tcase '<':\n> + \t\t\t\t\tif (pset.db && PQunixsocket(pset.db))\n> + \t\t\t\t\t\tstrncpy(buf, PQunixsocket(pset.db), MAX_PROMPT_SIZE);\n> + \t\t\t\t\tbreak;\n> \t\t\t\t\t/* DB server user name */\n> \t\t\t\tcase 'n':\n> \t\t\t\t\tif (pset.db)\n> Index: src/bin/psql/prompt.h\n> Index: src/bin/psql/settings.h\n> Index: 
src/bin/psql/startup.c\n> *** src/bin/psql/startup.c\t2000/06/30 21:15:46\t1.1\n> --- src/bin/psql/startup.c\t2000/07/01 18:20:40\t1.2\n> ***************\n> *** 66,71 ****\n> --- 66,72 ----\n> \tchar\t *dbname;\n> \tchar\t *host;\n> \tchar\t *port;\n> + \tchar\t *unixsocket;\n> \tchar\t *username;\n> \tenum _actions action;\n> \tchar\t *action_string;\n> ***************\n> *** 158,163 ****\n> --- 159,165 ----\n> \tdo\n> \t{\n> \t\tneed_pass = false;\n> + \t\t/* FIXME use PQconnectdb to allow setting the unix socket */\n> \t\tpset.db = PQsetdbLogin(options.host, options.port, NULL, NULL,\n> \t\t\toptions.action == ACT_LIST_DB ? \"template1\" : options.dbname,\n> \t\t\t\t\t\t\t username, password);\n> ***************\n> *** 202,207 ****\n> --- 204,210 ----\n> \tSetVariable(pset.vars, \"USER\", PQuser(pset.db));\n> \tSetVariable(pset.vars, \"HOST\", PQhost(pset.db));\n> \tSetVariable(pset.vars, \"PORT\", PQport(pset.db));\n> + \tSetVariable(pset.vars, \"UNIXSOCKET\", PQunixsocket(pset.db));\n> \tSetVariable(pset.vars, \"ENCODING\", pg_encoding_to_char(pset.encoding));\n> \n> #ifndef WIN32\n> ***************\n> *** 313,318 ****\n> --- 316,322 ----\n> \t\t{\"field-separator\", required_argument, NULL, 'F'},\n> \t\t{\"host\", required_argument, NULL, 'h'},\n> \t\t{\"html\", no_argument, NULL, 'H'},\n> + \t\t{\"unixsocket\", required_argument, NULL, 'k'},\n> \t\t{\"list\", no_argument, NULL, 'l'},\n> \t\t{\"no-readline\", no_argument, NULL, 'n'},\n> \t\t{\"output\", required_argument, NULL, 'o'},\n> ***************\n> *** 346,359 ****\n> \tmemset(options, 0, sizeof *options);\n> \n> #ifdef HAVE_GETOPT_LONG\n> ! \twhile ((c = getopt_long(argc, argv, \"aAc:d:eEf:F:lh:Hno:p:P:qRsStT:uU:v:VWxX?\", long_options, &optindex)) != -1)\n> #else\t\t\t\t\t\t\t/* not HAVE_GETOPT_LONG */\n> \n> \t/*\n> \t * Be sure to leave the '-' in here, so we can catch accidental long\n> \t * options.\n> \t */\n> ! 
\twhile ((c = getopt(argc, argv, \"aAc:d:eEf:F:lh:Hno:p:P:qRsStT:uU:v:VWxX?-\")) != -1)\n> #endif\t /* not HAVE_GETOPT_LONG */\n> \t{\n> \t\tswitch (c)\n> --- 350,363 ----\n> \tmemset(options, 0, sizeof *options);\n> \n> #ifdef HAVE_GETOPT_LONG\n> ! \twhile ((c = getopt_long(argc, argv, \"aAc:d:eEf:F:lh:Hk:no:p:P:qRsStT:uU:v:VWxX?\", long_options, &optindex)) != -1)\n> #else\t\t\t\t\t\t\t/* not HAVE_GETOPT_LONG */\n> \n> \t/*\n> \t * Be sure to leave the '-' in here, so we can catch accidental long\n> \t * options.\n> \t */\n> ! \twhile ((c = getopt(argc, argv, \"aAc:d:eEf:F:lh:Hk:no:p:P:qRsStT:uU:v:VWxX?-\")) != -1)\n> #endif\t /* not HAVE_GETOPT_LONG */\n> \t{\n> \t\tswitch (c)\n> ***************\n> *** 398,403 ****\n> --- 402,410 ----\n> \t\t\t\tbreak;\n> \t\t\tcase 'l':\n> \t\t\t\toptions->action = ACT_LIST_DB;\n> + \t\t\t\tbreak;\n> + \t\t\tcase 'k':\n> + \t\t\t\toptions->unixsocket = optarg;\n> \t\t\t\tbreak;\n> \t\t\tcase 'n':\n> \t\t\t\toptions->no_readline = true;\n> Index: src/bin/scripts/createdb\n> *** src/bin/scripts/createdb\t2000/06/30 21:15:46\t1.1\n> --- src/bin/scripts/createdb\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 50,55 ****\n> --- 50,64 ----\n> --port=*)\n> PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> ;;\n> + \t--unixsocket|-k)\n> + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> + \t\tshift;;\n> + -k*)\n> + PSQLOPT=\"$PSQLOPT $1\"\n> + ;;\n> + --unixsocket=*)\n> + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> + ;;\n> \t--username|-U)\n> \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n> \t\tshift;;\n> ***************\n> *** 114,119 ****\n> --- 123,129 ----\n> \techo \" -E, --encoding=ENCODING Multibyte encoding for the database\"\n> \techo \" -h, --host=HOSTNAME Database server host\"\n> \techo \" -p, --port=PORT Database server port\"\n> + \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n> \techo \" -U, --username=USERNAME Username to connect as\"\n> \techo \" -W, --password Prompt for password\"\n> 
\techo \" -e, --echo Show the query being sent to the backend\"\n> Index: src/bin/scripts/createlang.sh\n> *** src/bin/scripts/createlang.sh\t2000/06/30 21:15:46\t1.1\n> --- src/bin/scripts/createlang.sh\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 65,70 ****\n> --- 65,79 ----\n> --port=*)\n> PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> ;;\n> + \t--unixsocket|-k)\n> + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> + \t\tshift;;\n> + -k*)\n> + PSQLOPT=\"$PSQLOPT $1\"\n> + ;;\n> + --unixsocket=*)\n> + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> + ;;\n> \t--username|-U)\n> \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n> \t\tshift;;\n> ***************\n> *** 126,131 ****\n> --- 135,141 ----\n> \techo \"Options:\"\n> \techo \" -h, --host=HOSTNAME Database server host\"\n> \techo \" -p, --port=PORT Database server port\"\n> + \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n> \techo \" -U, --username=USERNAME Username to connect as\"\n> \techo \" -W, --password Prompt for password\"\n> \techo \" -d, --dbname=DBNAME Database to install language in\"\n> Index: src/bin/scripts/createuser\n> *** src/bin/scripts/createuser\t2000/06/30 21:15:46\t1.1\n> --- src/bin/scripts/createuser\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 63,68 ****\n> --- 63,77 ----\n> --port=*)\n> PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> ;;\n> + \t--unixsocket|-k)\n> + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> + \t\tshift;;\n> + -k*)\n> + PSQLOPT=\"$PSQLOPT $1\"\n> + ;;\n> + --unixsocket=*)\n> + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> + ;;\n> # Note: These two specify the user to connect as (like in psql),\n> # not the user you're creating.\n> \t--username|-U)\n> ***************\n> *** 135,140 ****\n> --- 144,150 ----\n> \techo \" -P, --pwprompt Assign a password to new user\"\n> \techo \" -h, --host=HOSTNAME Database server host\"\n> \techo \" -p, --port=PORT Database server port\"\n> + \techo \" -k, --unixsocket=PATH 
Database server Unix-domain socket name\"\n> \techo \" -U, --username=USERNAME Username to connect as (not the one to create)\"\n> \techo \" -W, --password Prompt for password to connect\"\n> \techo \" -e, --echo Show the query being sent to the backend\"\n> Index: src/bin/scripts/dropdb\n> *** src/bin/scripts/dropdb\t2000/06/30 21:15:46\t1.1\n> --- src/bin/scripts/dropdb\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 59,64 ****\n> --- 59,73 ----\n> --port=*)\n> PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> ;;\n> + \t--unixsocket|-k)\n> + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> + \t\tshift;;\n> + -k*)\n> + PSQLOPT=\"$PSQLOPT $1\"\n> + ;;\n> + --unixsocket=*)\n> + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> + ;;\n> \t--username|-U)\n> \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n> \t\tshift;;\n> ***************\n> *** 103,108 ****\n> --- 112,118 ----\n> \techo \"Options:\"\n> \techo \" -h, --host=HOSTNAME Database server host\"\n> \techo \" -p, --port=PORT Database server port\"\n> + \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n> \techo \" -U, --username=USERNAME Username to connect as\"\n> \techo \" -W, --password Prompt for password\"\n> \techo \" -i, --interactive Prompt before deleting anything\"\n> Index: src/bin/scripts/droplang\n> *** src/bin/scripts/droplang\t2000/06/30 21:15:46\t1.1\n> --- src/bin/scripts/droplang\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 65,70 ****\n> --- 65,79 ----\n> --port=*)\n> PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> ;;\n> + \t--unixsocket|-k)\n> + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> + \t\tshift;;\n> + -k*)\n> + PSQLOPT=\"$PSQLOPT $1\"\n> + ;;\n> + --unixsocket=*)\n> + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> + ;;\n> \t--username|-U)\n> \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n> \t\tshift;;\n> ***************\n> *** 113,118 ****\n> --- 122,128 ----\n> \techo \"Options:\"\n> \techo \" -h, --host=HOSTNAME Database server host\"\n> \techo \" -p, 
--port=PORT Database server port\"\n> + \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n> \techo \" -U, --username=USERNAME Username to connect as\"\n> \techo \" -W, --password Prompt for password\"\n> \techo \" -d, --dbname=DBNAME Database to remove language from\"\n> Index: src/bin/scripts/dropuser\n> *** src/bin/scripts/dropuser\t2000/06/30 21:15:46\t1.1\n> --- src/bin/scripts/dropuser\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 59,64 ****\n> --- 59,73 ----\n> --port=*)\n> PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> ;;\n> + \t--unixsocket|-k)\n> + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> + \t\tshift;;\n> + -k*)\n> + PSQLOPT=\"$PSQLOPT $1\"\n> + ;;\n> + --unixsocket=*)\n> + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> + ;;\n> # Note: These two specify the user to connect as (like in psql),\n> # not the user you're dropping.\n> \t--username|-U)\n> ***************\n> *** 105,110 ****\n> --- 114,120 ----\n> \techo \"Options:\"\n> \techo \" -h, --host=HOSTNAME Database server host\"\n> \techo \" -p, --port=PORT Database server port\"\n> + \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n> \techo \" -U, --username=USERNAME Username to connect as (not the one to drop)\"\n> \techo \" -W, --password Prompt for password to connect\"\n> \techo \" -i, --interactive Prompt before deleting anything\"\n> Index: src/bin/scripts/vacuumdb\n> *** src/bin/scripts/vacuumdb\t2000/06/30 21:15:46\t1.1\n> --- src/bin/scripts/vacuumdb\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 52,57 ****\n> --- 52,66 ----\n> --port=*)\n> PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> ;;\n> + \t--unixsocket|-k)\n> + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> + \t\tshift;;\n> + -k*)\n> + PSQLOPT=\"$PSQLOPT $1\"\n> + ;;\n> + --unixsocket=*)\n> + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> + ;;\n> \t--username|-U)\n> \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n> \t\tshift;;\n> ***************\n> *** 
121,126 ****\n> --- 130,136 ----\n> echo \"Options:\"\n> \techo \" -h, --host=HOSTNAME Database server host\"\n> \techo \" -p, --port=PORT Database server port\"\n> + \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n> \techo \" -U, --username=USERNAME Username to connect as\"\n> \techo \" -W, --password Prompt for password\"\n> \techo \" -d, --dbname=DBNAME Database to vacuum\"\n> Index: src/include/libpq/libpq.h\n> *** src/include/libpq/libpq.h\t2000/06/30 21:15:47\t1.1\n> --- src/include/libpq/libpq.h\t2000/07/01 18:20:40\t1.2\n> ***************\n> *** 236,246 ****\n> /*\n> * prototypes for functions in pqcomm.c\n> */\n> ! extern int\tStreamServerPort(char *hostName, unsigned short portName, int *fdP);\n> extern int\tStreamConnection(int server_fd, Port *port);\n> extern void StreamClose(int sock);\n> extern void pq_init(void);\n> extern int\tpq_getport(void);\n> extern void pq_close(void);\n> extern int\tpq_getbytes(char *s, size_t len);\n> extern int\tpq_getstring(StringInfo s);\n> --- 236,247 ----\n> /*\n> * prototypes for functions in pqcomm.c\n> */\n> ! extern int\tStreamServerPort(char *hostName, unsigned short portName, char *unixSocketName, int *fdP);\n> extern int\tStreamConnection(int server_fd, Port *port);\n> extern void StreamClose(int sock);\n> extern void pq_init(void);\n> extern int\tpq_getport(void);\n> + extern char\t*pq_getunixsocket(void);\n> extern void pq_close(void);\n> extern int\tpq_getbytes(char *s, size_t len);\n> extern int\tpq_getstring(StringInfo s);\n> Index: src/include/libpq/password.h\n> Index: src/include/libpq/pqcomm.h\n> *** src/include/libpq/pqcomm.h\t2000/06/30 21:15:47\t1.1\n> --- src/include/libpq/pqcomm.h\t2000/07/01 18:59:33\t1.6\n> ***************\n> *** 42,53 ****\n> /* Configure the UNIX socket address for the well known port. */\n> \n> #if defined(SUN_LEN)\n> ! #define UNIXSOCK_PATH(sun,port) \\\n> ! \t(sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)), SUN_LEN(&(sun)))\n> #else\n> ! 
#define UNIXSOCK_PATH(sun,port) \\\n> ! \t(sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)), \\\n> ! \t strlen((sun).sun_path)+ offsetof(struct sockaddr_un, sun_path))\n> #endif\n> \n> /*\n> --- 42,56 ----\n> /* Configure the UNIX socket address for the well known port. */\n> \n> #if defined(SUN_LEN)\n> ! #define UNIXSOCK_PATH(sun,port,defpath) \\\n> ! (defpath ? (strncpy((sun).sun_path, defpath, sizeof((sun).sun_path)), (sun).sun_path[sizeof((sun).sun_path)-1] = '\\0') : sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)))\n> ! #define UNIXSOCK_LEN(sun) \\\n> ! (SUN_LEN(&(sun)))\n> #else\n> ! #define UNIXSOCK_PATH(sun,port,defpath) \\\n> ! (defpath ? (strncpy((sun).sun_path, defpath, sizeof((sun).sun_path)), (sun).sun_path[sizeof((sun).sun_path)-1] = '\\0') : sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)))\n> ! #define UNIXSOCK_LEN(sun) \\\n> ! (strlen((sun).sun_path)+ offsetof(struct sockaddr_un, sun_path))\n> #endif\n> \n> /*\n> Index: src/interfaces/libpq/fe-connect.c\n> *** src/interfaces/libpq/fe-connect.c\t2000/06/30 21:15:51\t1.1\n> --- src/interfaces/libpq/fe-connect.c\t2000/07/01 18:50:47\t1.3\n> ***************\n> *** 125,130 ****\n> --- 125,133 ----\n> \t{\"port\", \"PGPORT\", DEF_PGPORT, NULL,\n> \t\"Database-Port\", \"\", 6},\n> \n> + \t{\"unixsocket\", \"PGUNIXSOCKET\", NULL, NULL,\n> + \t\"Unix-Socket\", \"\", 80},\n> + \n> \t{\"tty\", \"PGTTY\", DefaultTty, NULL,\n> \t\"Backend-Debug-TTY\", \"D\", 40},\n> \n> ***************\n> *** 293,298 ****\n> --- 296,303 ----\n> \tconn->pghost = tmp ? strdup(tmp) : NULL;\n> \ttmp = conninfo_getval(connOptions, \"port\");\n> \tconn->pgport = tmp ? strdup(tmp) : NULL;\n> + \ttmp = conninfo_getval(connOptions, \"unixsocket\");\n> + \tconn->pgunixsocket = tmp ? strdup(tmp) : NULL;\n> \ttmp = conninfo_getval(connOptions, \"tty\");\n> \tconn->pgtty = tmp ? 
strdup(tmp) : NULL;\n> \ttmp = conninfo_getval(connOptions, \"options\");\n> ***************\n> *** 369,374 ****\n> --- 374,382 ----\n> *\t PGPORT\t identifies TCP port to which to connect if <pgport> argument\n> *\t\t\t\t is NULL or a null string.\n> *\n> + *\t PGUNIXSOCKET\t identifies Unix-domain socket to which to connect; default\n> + *\t\t\t\t is computed from the TCP port.\n> + *\n> *\t PGTTY\t\t identifies tty to which to send messages if <pgtty> argument\n> *\t\t\t\t is NULL or a null string.\n> *\n> ***************\n> *** 422,427 ****\n> --- 430,439 ----\n> \telse\n> \t\tconn->pgport = strdup(pgport);\n> \n> + \tconn->pgunixsocket = getenv(\"PGUNIXSOCKET\");\n> + \tif (conn->pgunixsocket)\n> + \t\tconn->pgunixsocket = strdup(conn->pgunixsocket);\n> + \n> \tif ((pgtty == NULL) || pgtty[0] == '\\0')\n> \t{\n> \t\tif ((tmp = getenv(\"PGTTY\")) == NULL)\n> ***************\n> *** 489,501 ****\n> \n> /*\n> * update_db_info -\n> ! * get all additional infos out of dbName\n> *\n> */\n> static int\n> update_db_info(PGconn *conn)\n> {\n> ! \tchar\t *tmp,\n> \t\t\t *old = conn->dbName;\n> \n> \tif (strchr(conn->dbName, '@') != NULL)\n> --- 501,513 ----\n> \n> /*\n> * update_db_info -\n> ! * get all additional info out of dbName\n> *\n> */\n> static int\n> update_db_info(PGconn *conn)\n> {\n> ! 
\tchar\t *tmp, *tmp2,\n> \t\t\t *old = conn->dbName;\n> \n> \tif (strchr(conn->dbName, '@') != NULL)\n> ***************\n> *** 504,509 ****\n> --- 516,523 ----\n> \t\ttmp = strrchr(conn->dbName, ':');\n> \t\tif (tmp != NULL)\t\t/* port number given */\n> \t\t{\n> + \t\t\tif (conn->pgport)\n> + \t\t\t\tfree(conn->pgport);\n> \t\t\tconn->pgport = strdup(tmp + 1);\n> \t\t\t*tmp = '\\0';\n> \t\t}\n> ***************\n> *** 511,516 ****\n> --- 525,532 ----\n> \t\ttmp = strrchr(conn->dbName, '@');\n> \t\tif (tmp != NULL)\t\t/* host name given */\n> \t\t{\n> + \t\t\tif (conn->pghost)\n> + \t\t\t\tfree(conn->pghost);\n> \t\t\tconn->pghost = strdup(tmp + 1);\n> \t\t\t*tmp = '\\0';\n> \t\t}\n> ***************\n> *** 537,549 ****\n> \n> \t\t\t/*\n> \t\t\t * new style:\n> ! \t\t\t * <tcp|unix>:postgresql://server[:port][/dbname][?options]\n> \t\t\t */\n> \t\t\toffset += strlen(\"postgresql://\");\n> \n> \t\t\ttmp = strrchr(conn->dbName + offset, '?');\n> \t\t\tif (tmp != NULL)\t/* options given */\n> \t\t\t{\n> \t\t\t\tconn->pgoptions = strdup(tmp + 1);\n> \t\t\t\t*tmp = '\\0';\n> \t\t\t}\n> --- 553,567 ----\n> \n> \t\t\t/*\n> \t\t\t * new style:\n> ! 
\t\t\t * <tcp|unix>:postgresql://server[:port|:/unixsocket/path:][/dbname][?options]\n> \t\t\t */\n> \t\t\toffset += strlen(\"postgresql://\");\n> \n> \t\t\ttmp = strrchr(conn->dbName + offset, '?');\n> \t\t\tif (tmp != NULL)\t/* options given */\n> \t\t\t{\n> + \t\t\t\tif (conn->pgoptions)\n> + \t\t\t\t\tfree(conn->pgoptions);\n> \t\t\t\tconn->pgoptions = strdup(tmp + 1);\n> \t\t\t\t*tmp = '\\0';\n> \t\t\t}\n> ***************\n> *** 551,576 ****\n> \t\t\ttmp = strrchr(conn->dbName + offset, '/');\n> \t\t\tif (tmp != NULL)\t/* database name given */\n> \t\t\t{\n> \t\t\t\tconn->dbName = strdup(tmp + 1);\n> \t\t\t\t*tmp = '\\0';\n> \t\t\t}\n> \t\t\telse\n> \t\t\t{\n> \t\t\t\tif ((tmp = getenv(\"PGDATABASE\")) != NULL)\n> \t\t\t\t\tconn->dbName = strdup(tmp);\n> \t\t\t\telse if (conn->pguser)\n> \t\t\t\t\tconn->dbName = strdup(conn->pguser);\n> \t\t\t}\n> \n> \t\t\ttmp = strrchr(old + offset, ':');\n> ! \t\t\tif (tmp != NULL)\t/* port number given */\n> \t\t\t{\n> - \t\t\t\tconn->pgport = strdup(tmp + 1);\n> \t\t\t\t*tmp = '\\0';\n> \t\t\t}\n> \n> \t\t\tif (strncmp(old, \"unix:\", 5) == 0)\n> \t\t\t{\n> \t\t\t\tconn->pghost = NULL;\n> \t\t\t\tif (strcmp(old + offset, \"localhost\") != 0)\n> \t\t\t\t{\n> --- 569,630 ----\n> \t\t\ttmp = strrchr(conn->dbName + offset, '/');\n> \t\t\tif (tmp != NULL)\t/* database name given */\n> \t\t\t{\n> + \t\t\t\tif (conn->dbName)\n> + \t\t\t\t\tfree(conn->dbName);\n> \t\t\t\tconn->dbName = strdup(tmp + 1);\n> \t\t\t\t*tmp = '\\0';\n> \t\t\t}\n> \t\t\telse\n> \t\t\t{\n> + \t\t\t\t/* Why do we default only this value from the environment again? 
*/\n> \t\t\t\tif ((tmp = getenv(\"PGDATABASE\")) != NULL)\n> + \t\t\t\t{\n> + \t\t\t\t\tif (conn->dbName)\n> + \t\t\t\t\t\tfree(conn->dbName);\n> \t\t\t\t\tconn->dbName = strdup(tmp);\n> + \t\t\t\t}\n> \t\t\t\telse if (conn->pguser)\n> + \t\t\t\t{\n> + \t\t\t\t\tif (conn->dbName)\n> + \t\t\t\t\t\tfree(conn->dbName);\n> \t\t\t\t\tconn->dbName = strdup(conn->pguser);\n> + \t\t\t\t}\n> \t\t\t}\n> \n> \t\t\ttmp = strrchr(old + offset, ':');\n> ! \t\t\tif (tmp != NULL)\t/* port number or Unix socket path given */\n> \t\t\t{\n> \t\t\t\t*tmp = '\\0';\n> + \t\t\t\tif ((tmp2 = strchr(tmp + 1, ':')) != NULL)\n> + \t\t\t\t{\n> + \t\t\t\t\tif (strncmp(old, \"unix:\", 5) != 0)\n> + \t\t\t\t\t{\n> + \t\t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> + \t\t\t\t\t\t\t\t \"connectDBStart() -- \"\n> + \t\t\t\t\t\t\t\t \"socket name can only be specified with \"\n> + \t\t\t\t\t\t\t\t \"non-TCP\\n\");\n> + \t\t\t\t\t\treturn 1; \n> + \t\t\t\t\t}\n> + \t\t\t\t\t*tmp2 = '\\0';\n> + \t\t\t\t\tif (conn->pgunixsocket)\n> + \t\t\t\t\t\tfree(conn->pgunixsocket);\n> + \t\t\t\t\tconn->pgunixsocket = strdup(tmp + 1);\n> + \t\t\t\t}\n> + \t\t\t\telse\n> + \t\t\t\t{\n> + \t\t\t\t\tif (conn->pgport)\n> + \t\t\t\t\t\tfree(conn->pgport);\n> + \t\t\t\t\tconn->pgport = strdup(tmp + 1);\n> + \t\t\t\t\tif (conn->pgunixsocket)\n> + \t\t\t\t\t\tfree(conn->pgunixsocket);\n> + \t\t\t\t\tconn->pgunixsocket = NULL;\n> + \t\t\t\t}\n> \t\t\t}\n> \n> \t\t\tif (strncmp(old, \"unix:\", 5) == 0)\n> \t\t\t{\n> + \t\t\t\tif (conn->pghost)\n> + \t\t\t\t\tfree(conn->pghost);\n> \t\t\t\tconn->pghost = NULL;\n> \t\t\t\tif (strcmp(old + offset, \"localhost\") != 0)\n> \t\t\t\t{\n> ***************\n> *** 582,589 ****\n> \t\t\t\t}\n> \t\t\t}\n> \t\t\telse\n> \t\t\t\tconn->pghost = strdup(old + offset);\n> ! 
> 			free(old);
> 		}
> 	}
> --- 636,646 ----
> 				}
> 			}
> 			else
> + 			{
> + 				if (conn->pghost)
> + 					free(conn->pghost);
> 				conn->pghost = strdup(old + offset);
> ! 			}
> 			free(old);
> 		}
> 	}
> ***************
> *** 743,749 ****
> 	}
> #if !defined(WIN32) && !defined(__CYGWIN32__)
> 	else
> ! 		conn->raddr_len = UNIXSOCK_PATH(conn->raddr.un, portno);
> #endif
> 
> 
> --- 800,809 ----
> 	}
> #if !defined(WIN32) && !defined(__CYGWIN32__)
> 	else
> ! 	{
> ! 		UNIXSOCK_PATH(conn->raddr.un, portno, conn->pgunixsocket);
> ! 		conn->raddr_len = UNIXSOCK_LEN(conn->raddr.un);
> ! 	}
> #endif
> 
> 
> ***************
> *** 892,898 ****
> 							 conn->pghost ? conn->pghost : "localhost",
> 							 (family == AF_INET) ?
> 							 "TCP/IP port" : "Unix socket",
> ! 							 conn->pgport);
> 			goto connect_errReturn;
> 		}
> 	}
> --- 952,959 ----
> 							 conn->pghost ? conn->pghost : "localhost",
> 							 (family == AF_INET) ?
> 							 "TCP/IP port" : "Unix socket",
> ! 							 (family == AF_UNIX && conn->pgunixsocket) ?
> ! 							 conn->pgunixsocket : conn->pgport);
> 			goto connect_errReturn;
> 		}
> 	}
> ***************
> *** 1123,1129 ****
> 							 conn->pghost ? conn->pghost : "localhost",
> 								 (conn->raddr.sa.sa_family == AF_INET) ?
> 									 "TCP/IP port" : "Unix socket",
> ! 									 conn->pgport);
> 					goto error_return;
> 				}
> 
> --- 1184,1191 ----
> 							 conn->pghost ? conn->pghost : "localhost",
> 								 (conn->raddr.sa.sa_family == AF_INET) ?
> 									 "TCP/IP port" : "Unix socket",
> ! 							 (conn->raddr.sa.sa_family == AF_UNIX && conn->pgunixsocket) ?
> ! 									 conn->pgunixsocket : conn->pgport);
> 					goto error_return;
> 				}
> 
> ***************
> *** 1799,1804 ****
> --- 1861,1868 ----
> 		free(conn->pghostaddr);
> 	if (conn->pgport)
> 		free(conn->pgport);
> + 	if (conn->pgunixsocket)
> + 		free(conn->pgunixsocket);
> 	if (conn->pgtty)
> 		free(conn->pgtty);
> 	if (conn->pgoptions)
> ***************
> *** 2383,2388 ****
> --- 2447,2460 ----
> 	if (!conn)
> 		return (char *) NULL;
> 	return conn->pgport;
> + }
> + 
> + char *
> + PQunixsocket(const PGconn *conn)
> + {
> + 	if (!conn)
> + 		return (char *) NULL;
> + 	return conn->pgunixsocket;
> }
> 
> char *
> Index: src/interfaces/libpq/libpq-fe.h
> *** src/interfaces/libpq/libpq-fe.h	2000/06/30 21:15:51	1.1
> --- src/interfaces/libpq/libpq-fe.h	2000/07/01 18:20:40	1.2
> ***************
> *** 214,219 ****
> --- 214,220 ----
> 	extern char *PQpass(const PGconn *conn);
> 	extern char *PQhost(const PGconn *conn);
> 	extern char *PQport(const PGconn *conn);
> + 	extern char *PQunixsocket(const PGconn *conn);
> 	extern char *PQtty(const PGconn *conn);
> 	extern char *PQoptions(const PGconn *conn);
> 	extern ConnStatusType PQstatus(const PGconn *conn);
> Index: src/interfaces/libpq/libpq-int.h
> *** src/interfaces/libpq/libpq-int.h	2000/06/30 21:15:51	1.1
> --- src/interfaces/libpq/libpq-int.h	2000/07/01 18:20:40	1.2
> ***************
> *** 202,207 ****
> --- 202,209 ----
> 							 * numbers-and-dots notation. Takes
> 							 * precedence over above. */
> 	char	 *pgport;			/* the server's communication port */
> + 	char	 *pgunixsocket;		/* the Unix-domain socket that the server is listening on;
> + 						 * if NULL, uses a default constructed from pgport */
> 	char	 *pgtty;			/* tty on which the backend messages is
> 								 * displayed (NOT ACTUALLY USED???) */
> 	char	 *pgoptions;		/* options to start the backend with */
> Index: src/interfaces/libpq/libpqdll.def
> *** src/interfaces/libpq/libpqdll.def	2000/06/30 21:15:51	1.1
> --- src/interfaces/libpq/libpqdll.def	2000/07/01 18:20:40	1.2
> ***************
> *** 79,81 ****
> --- 79,82 ----
> 	destroyPQExpBuffer	@ 76
> 	createPQExpBuffer	@ 77
> 	PQconninfoFree		@ 78
> + 	PQunixsocket		@ 79
> 


-- 
 Bruce Momjian | http://candle.pha.pa.us
 [email protected] | (610) 853-3000
 + If your life is a hard drive, | 830 Blythe Avenue
 + Christ can be your backup. | Drexel Hill, Pennsylvania 19026
", "msg_date": "Tue, 10 Oct 2000 21:21:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] PostgreSQL virtual hosting support" }, { "msg_contents": "On Tue, 10 Oct 2000, Bruce Momjian wrote:

> I am tempted to apply this. This is the second person who asked for
> binding to a single port. The patch looks quite complete, with doc
> changes. It appears to be a thorough job.
> 
> Any objections?

 From a quick read of his "description of problem", it sounds like he has
addressed the original patches problem where it bound only to 127.0.0.1,
and allows you to bind to any IP on that machine ...

looks like a go for inclusion from what I can see ...

 > 
> > Your name		:	David MacKenzie
> > Your email address	:	[email protected]
> > 
> > 
> > System Configuration
> > ---------------------
> > Architecture (example: Intel Pentium) 	: Intel x86
> > 
> > Operating System (example: Linux 2.0.26 ELF) 	: BSD/OS 4.0.1
> > 
> > PostgreSQL version (example: PostgreSQL-7.0): PostgreSQL-7.0.2
> > 
> > Compiler used (example: gcc 2.8.0)		: gcc version 2.7.2.1
> > 
> > 
> > Please enter a FULL description of your problem:
> > ------------------------------------------------
> > 
> > UUNET is looking into offering PostgreSQL as a part of a managed web
> > hosting product, on both shared and dedicated machines. We currently
> > offer Oracle and MySQL, and it would be a nice middle-ground.
> > However, as shipped, PostgreSQL lacks the following features we need
> > that MySQL has:
> > 
> > 1. The ability to listen only on a particular IP address. Each
> > hosting customer has their own IP address, on which all of their
> > servers (http, ftp, real media, etc.) run.
> > 2. The ability to place the Unix-domain socket in a mode 700 directory.
> > This allows us to automatically create an empty database, with an
> > empty DBA password, for new or upgrading customers without having
> > to interactively set a DBA password and communicate it to (or from)
> > the customer. This in turn cuts down our install and upgrade times.
> > 3. The ability to connect to the Unix-domain socket from within a
> > change-rooted environment. We run CGI programs chrooted to the
> > user's home directory, which is another reason why we need to be
> > able to specify where the Unix-domain socket is, instead of /tmp.
> > 4. The ability to, if run as root, open a pid file in /var/run as
> > root, and then setuid to the desired user. (mysqld -u can almost
> > do this; I had to patch it, too).
> > 
> > The patch below fixes problem 1-3. I plan to address #4, also, but
> > haven't done so yet. These diffs are big enough that they should give
> > the PG development team something to think about in the meantime :-)
> > Also, I'm about to leave for 2 weeks' vacation, so I thought I'd get
> > out what I have, which works (for the problems it tackles), now.
> > 
> > With these changes, we can set up and run PostgreSQL with scripts the
> > same way we can with apache or proftpd or mysql.
> > 
> > In summary, this patch makes the following enhancements:
> > 
> > 1. Adds an environment variable PGUNIXSOCKET, analogous to MYSQL_UNIX_PORT,
> > and command line options -k --unix-socket to the relevant programs.
> > 2. Adds a -h option to postmaster to set the hostname or IP address to
> > listen on instead of the default INADDR_ANY.
> > 3. Extends some library interfaces to support the above.
> > 4. Fixes a few memory leaks in PQconnectdb().
> > 
> > The default behavior is unchanged from stock 7.0.2; if you don't use
> > any of these new features, they don't change the operation.
> > 
> > Index: doc/src/sgml/layout.sgml
> > *** doc/src/sgml/layout.sgml	2000/06/30 21:15:36	1.1
> > --- doc/src/sgml/layout.sgml	2000/07/02 03:56:05	1.2
> > ***************
> > *** 55,61 ****
> > For example, if the database server machine is a remote machine, you
> > will need to set the <envar>PGHOST</envar> environment variable to the name
> > of the database server machine. The environment variable
> > ! <envar>PGPORT</envar> may also have to be set. The bottom line is this: if
> > you try to start an application program and it complains
> > that it cannot connect to the <Application>postmaster</Application>,
> > you must go back and make sure that your
> > --- 55,62 ----
> > For example, if the database server machine is a remote machine, you
> > will need to set the <envar>PGHOST</envar> environment variable to the name
> > of the database server machine. The environment variable
> > ! <envar>PGPORT</envar> or <envar>PGUNIXSOCKET</envar> may also have to be set.
> > ! The bottom line is this: if
> > you try to start an application program and it complains
> > that it cannot connect to the <Application>postmaster</Application>,
> > you must go back and make sure that your
> > Index: doc/src/sgml/libpq++.sgml
> > *** doc/src/sgml/libpq++.sgml	2000/06/30 21:15:36	1.1
> > --- doc/src/sgml/libpq++.sgml	2000/07/02 03:56:05	1.2
> > ***************
> > *** 93,98 ****
> > --- 93,105 ----
> > </listitem>
> > <listitem>
> > <para>
> > + 	<envar>PGUNIXSOCKET</envar> sets the full Unix domain socket
> > + 	file name for communicating with the <productname>Postgres</productname>
> > + 	backend.
> > + </para>
> > + </listitem>
> > + <listitem>
> > + <para>
> > 	<envar>PGDATABASE</envar> sets the default 
> > 	<productname>Postgres</productname> database name.
> > </para>
> > Index: doc/src/sgml/libpq.sgml
> > *** doc/src/sgml/libpq.sgml	2000/06/30 21:15:36	1.1
> > --- doc/src/sgml/libpq.sgml	2000/07/02 03:56:05	1.2
> > ***************
> > *** 134,139 ****
> > --- 134,148 ----
> > </varlistentry>
> > 
> > <varlistentry>
> > + <term><literal>unixsocket</literal></term>
> > + <listitem>
> > + <para>
> > + Full path to Unix-domain socket file to connect to at the server host.
> > + </para>
> > + </listitem>
> > + </varlistentry>
> > + 
> > + <varlistentry>
> > <term><literal>dbname</literal></term>
> > <listitem>
> > <para>
> > ***************
> > *** 545,550 ****
> > --- 554,569 ----
> > 
> > <listitem>
> > <para>
> > + <function>PQunixsocket</function>
> > + Returns the name of the Unix-domain socket of the connection.
> > + <synopsis>
> > + char *PQunixsocket(const PGconn *conn)
> > + </synopsis>
> > + </para>
> > + </listitem>
> > + 
> > + <listitem>
> > + <para>
> > <function>PQtty</function>
> > Returns the debug tty of the connection.
> > <synopsis>
> > ***************
> > *** 1772,1777 ****
> > --- 1791,1803 ----
> > <envar>PGHOST</envar> sets the default server name.
> > If a non-zero-length string is specified, TCP/IP communication is used.
> > Without a host name, libpq will connect using a local Unix domain socket.
> > + </para>
> > + </listitem>
> > + <listitem>
> > + <para>
> > + <envar>PGPORT</envar> sets the default port or local Unix domain socket
> > + file extension for communicating with the <productname>Postgres</productname>
> > + backend.
> > </para>
> > </listitem>
> > <listitem>
> > Index: doc/src/sgml/start.sgml
> > *** doc/src/sgml/start.sgml	2000/06/30 21:15:37	1.1
> > --- doc/src/sgml/start.sgml	2000/07/02 03:56:05	1.2
> > ***************
> > *** 110,117 ****
> > will need to set the <acronym>PGHOST</acronym> environment
> > variable to the name
> > of the database server machine. The environment variable
> > ! <acronym>PGPORT</acronym> may also have to be set. The bottom
> > ! line is this: if
> > you try to start an application program and it complains
> > that it cannot connect to the <application>postmaster</application>,
> > you should immediately consult your site administrator to make
> > --- 110,117 ----
> > will need to set the <acronym>PGHOST</acronym> environment
> > variable to the name
> > of the database server machine. The environment variable
> > ! <acronym>PGPORT</acronym> or <acronym>PGUNIXSOCKET</acronym> may also have to be set.
> > ! The bottom line is this: if
> > you try to start an application program and it complains
> > that it cannot connect to the <application>postmaster</application>,
> > you should immediately consult your site administrator to make
> > Index: doc/src/sgml/ref/createdb.sgml
> > *** doc/src/sgml/ref/createdb.sgml	2000/06/30 21:15:37	1.1
> > --- doc/src/sgml/ref/createdb.sgml	2000/07/04 04:46:45	1.2
> > ***************
> > *** 58,63 ****
> > --- 58,75 ----
> > </listitem>
> > </varlistentry>
> > 
> > + <varlistentry>
> > + <term>-k, --unixsocket <replaceable class="parameter">path</replaceable></term>
> > + <listitem>
> > + <para>
> > + Specifies the Unix-domain socket on which the
> > + <application>postmaster</application> is running.
> > + Without this option, the socket is created in <filename>/tmp</filename>
> > + based on the port number.
> > + </para>
> > + </listitem>
> > + </varlistentry>
> > + 
> > <varlistentry>
> > <term>-U, --username <replaceable class="parameter">username</replaceable></term>
> > <listitem>
> > Index: doc/src/sgml/ref/createlang.sgml
> > *** doc/src/sgml/ref/createlang.sgml	2000/06/30 21:15:37	1.1
> > --- doc/src/sgml/ref/createlang.sgml	2000/07/04 04:46:45	1.2
> > ***************
> > *** 96,101 ****
> > --- 96,113 ----
> > </listitem>
> > </varlistentry>
> > 
> > + <varlistentry>
> > + <term>-k, --unixsocket <replaceable class="parameter">path</replaceable></term>
> > + <listitem>
> > + <para>
> > + Specifies the Unix-domain socket on which the
> > + <application>postmaster</application> is running.
> > + Without this option, the socket is created in <filename>/tmp</filename>
> > + based on the port number.
> > + </para>
> > + </listitem>
> > + </varlistentry>
> > + 
> > <varlistentry>
> > <term>-U, --username <replaceable class="parameter">username</replaceable></term>
> > <listitem>
> > Index: doc/src/sgml/ref/createuser.sgml
> > *** doc/src/sgml/ref/createuser.sgml	2000/06/30 21:15:37	1.1
> > --- doc/src/sgml/ref/createuser.sgml	2000/07/04 04:46:45	1.2
> > ***************
> > *** 59,64 ****
> > --- 59,76 ----
> > </listitem>
> > </varlistentry>
> > 
> > + <varlistentry>
> > + <term>-k, --unixsocket <replaceable class="parameter">path</replaceable></term>
> > + <listitem>
> > + <para>
> > + Specifies the Unix-domain socket on which the
> > + <application>postmaster</application> is running.
> > + Without this option, the socket is created in <filename>/tmp</filename>
> > + based on the port number.
> > + </para>
> > + </listitem>
> > + </varlistentry>
> > + 
> > <varlistentry>
> > <term>-e, --echo</term>
> > <listitem>
> > Index: doc/src/sgml/ref/dropdb.sgml
> > *** doc/src/sgml/ref/dropdb.sgml	2000/06/30 21:15:38	1.1
> > --- doc/src/sgml/ref/dropdb.sgml	2000/07/04 04:46:45	1.2
> > ***************
> > *** 58,63 ****
> > --- 58,75 ----
> > </listitem>
> > </varlistentry>
> > 
> > + <varlistentry>
> > + <term>-k, --unixsocket <replaceable class="parameter">path</replaceable></term>
> > + <listitem>
> > + <para>
> > + Specifies the Unix-domain socket on which the
> > + <application>postmaster</application> is running.
> > + Without this option, the socket is created in <filename>/tmp</filename>
> > + based on the port number.
> > + </para>
> > + </listitem>
> > + </varlistentry>
> > + 
> > <varlistentry>
> > <term>-U, --username <replaceable class="parameter">username</replaceable></term>
> > <listitem>
> > Index: doc/src/sgml/ref/droplang.sgml
> > *** doc/src/sgml/ref/droplang.sgml	2000/06/30 21:15:38	1.1
> > --- doc/src/sgml/ref/droplang.sgml	2000/07/04 04:46:45	1.2
> > ***************
> > *** 96,101 ****
> > --- 96,113 ----
> > </listitem>
> > </varlistentry>
> > 
> > + <varlistentry>
> > + <term>-k, --unixsocket <replaceable class="parameter">path</replaceable></term>
> > + <listitem>
> > + <para>
> > + Specifies the Unix-domain socket on which the
> > + <application>postmaster</application> is running.
> > + Without this option, the socket is created in <filename>/tmp</filename>
> > + based on the port number.
> > + </para>
> > + </listitem>
> > + </varlistentry>
> > + 
> > <varlistentry>
> > <term>-U, --username <replaceable class="parameter">username</replaceable></term>
> > <listitem>
> > Index: doc/src/sgml/ref/dropuser.sgml
> > *** doc/src/sgml/ref/dropuser.sgml	2000/06/30 21:15:38	1.1
> > --- doc/src/sgml/ref/dropuser.sgml	2000/07/04 04:46:45	1.2
> > ***************
> > *** 58,63 ****
> > --- 58,75 ----
> > </listitem>
> > </varlistentry>
> > 
> > + <varlistentry>
> > + <term>-k, --unixsocket <replaceable class="parameter">path</replaceable></term>
> > + <listitem>
> > + <para>
> > + Specifies the Unix-domain socket on which the
> > + <application>postmaster</application> is running.
> > + Without this option, the socket is created in <filename>/tmp</filename>
> > + based on the port number.
> > + </para>
> > + </listitem>
> > + </varlistentry>
> > + 
> > <varlistentry>
> > <term>-e, --echo</term>
> > <listitem>
> > Index: doc/src/sgml/ref/pg_dump.sgml
> > *** doc/src/sgml/ref/pg_dump.sgml	2000/06/30 21:15:38	1.1
> > --- doc/src/sgml/ref/pg_dump.sgml	2000/07/01 18:41:22	1.2
> > ***************
> > *** 24,30 ****
> > </refsynopsisdivinfo>
> > <synopsis>
> > pg_dump [ <replaceable class="parameter">dbname</replaceable> ]
> > ! pg_dump [ -h <replaceable class="parameter">host</replaceable> ] [ -p <replaceable class="parameter">port</replaceable> ]
> > [ -t <replaceable class="parameter">table</replaceable> ]
> > [ -a ] [ -c ] [ -d ] [ -D ] [ -i ] [ -n ] [ -N ]
> > [ -o ] [ -s ] [ -u ] [ -v ] [ -x ]
> > --- 24,32 ----
> > </refsynopsisdivinfo>
> > <synopsis>
> > pg_dump [ <replaceable class="parameter">dbname</replaceable> ]
> > ! pg_dump [ -h <replaceable class="parameter">host</replaceable> ]
> > ! [ -k <replaceable class="parameter">path</replaceable> ]
> > ! [ -p <replaceable class="parameter">port</replaceable> ]
> > [ -t <replaceable class="parameter">table</replaceable> ]
> > [ -a ] [ -c ] [ -d ] [ -D ] [ -i ] [ -n ] [ -N ]
> > [ -o ] [ -s ] [ -u ] [ -v ] [ -x ]
> > ***************
> > *** 200,205 ****
> > --- 202,222 ----
> > 	<application>postmaster</application>
> > 	is running. Defaults to using a local Unix domain socket
> > 	rather than an IP connection..
> > + </para>
> > + </listitem>
> > + </varlistentry>
> > + 
> > + <varlistentry>
> > + <term>-k <replaceable class="parameter">path</replaceable></term>
> > + <listitem>
> > + <para>
> > + 	Specifies the local Unix domain socket file path
> > + 	on which the <application>postmaster</application>
> > + 	is listening for connections.
> > + Without this option, the socket path name defaults to
> > + the value of the <envar>PGUNIXSOCKET</envar> environment
> > + 	variable (if set), otherwise it is constructed
> > + from the port number.
> > </para>
> > </listitem>
> > </varlistentry>
> > Index: doc/src/sgml/ref/pg_dumpall.sgml
> > *** doc/src/sgml/ref/pg_dumpall.sgml	2000/06/30 21:15:38	1.1
> > --- doc/src/sgml/ref/pg_dumpall.sgml	2000/07/01 18:41:22	1.2
> > ***************
> > *** 24,30 ****
> > </refsynopsisdivinfo>
> > <synopsis>
> > pg_dumpall
> > ! pg_dumpall [ -h <replaceable class="parameter">host</replaceable> ] [ -p <replaceable class="parameter">port</replaceable> ] [ -a ] [ -d ] [ -D ] [ -O ] [ -s ] [ -u ] [ -v ] [ -x ]
> > </synopsis>
> > 
> > <refsect2 id="R2-APP-PG-DUMPALL-1">
> > --- 24,33 ----
> > </refsynopsisdivinfo>
> > <synopsis>
> > pg_dumpall
> > ! pg_dumpall [ -h <replaceable class="parameter">host</replaceable> ]
> > ! [ -k <replaceable class="parameter">path</replaceable> ]
> > ! [ -p <replaceable class="parameter">port</replaceable> ]
> > ! [ -a ] [ -d ] [ -D ] [ -O ] [ -s ] [ -u ] [ -v ] [ -x ]
> > </synopsis>
> > 
> > <refsect2 id="R2-APP-PG-DUMPALL-1">
> > ***************
> > *** 137,142 ****
> > --- 140,160 ----
> > 	<application>postmaster</application>
> > 	is running. Defaults to using a local Unix domain socket
> > 	rather than an IP connection..
> > + </para>
> > + </listitem>
> > + </varlistentry>
> > + 
> > + <varlistentry>
> > + <term>-k <replaceable class="parameter">path</replaceable></term>
> > + <listitem>
> > + <para>
> > + 	Specifies the local Unix domain socket file path
> > + 	on which the <application>postmaster</application>
> > + 	is listening for connections.
> > + Without this option, the socket path name defaults to
> > + the value of the <envar>PGUNIXSOCKET</envar> environment
> > + 	variable (if set), otherwise it is constructed
> > + from the port number.
> > </para>
> > </listitem>
> > </varlistentry>
> > Index: doc/src/sgml/ref/postmaster.sgml
> > *** doc/src/sgml/ref/postmaster.sgml	2000/06/30 21:15:38	1.1
> > --- doc/src/sgml/ref/postmaster.sgml	2000/07/06 07:48:31	1.7
> > ***************
> > *** 24,30 ****
> > </refsynopsisdivinfo>
> > <synopsis>
> > postmaster [ -B <replaceable class="parameter">nBuffers</replaceable> ] [ -D <replaceable class="parameter">DataDir</replaceable> ] [ -N <replaceable class="parameter">maxBackends</replaceable> ] [ -S ]
> > ! [ -d <replaceable class="parameter">DebugLevel</replaceable> ] [ -i ] [ -l ]
> > [ -o <replaceable class="parameter">BackendOptions</replaceable> ] [ -p <replaceable class="parameter">port</replaceable> ] [ -n | -s ]
> > </synopsis>
> > 
> > --- 24,32 ----
> > </refsynopsisdivinfo>
> > <synopsis>
> > postmaster [ -B <replaceable class="parameter">nBuffers</replaceable> ] [ -D <replaceable class="parameter">DataDir</replaceable> ] [ -N <replaceable class="parameter">maxBackends</replaceable> ] [ -S ]
> > ! [ -d <replaceable class="parameter">DebugLevel</replaceable> ]
> > ! [ -h <replaceable class="parameter">hostname</replaceable> ] [ -i ]
> > ! [ -k <replaceable class="parameter">path</replaceable> ] [ -l ]
> > [ -o <replaceable class="parameter">BackendOptions</replaceable> ] [ -p <replaceable class="parameter">port</replaceable> ] [ -n | -s ]
> > </synopsis>
> > 
> > ***************
> > *** 124,129 ****
> > --- 126,161 ----
> > </varlistentry>
> > 
> > <varlistentry>
> > + <term>-h <replaceable class="parameter">hostName</replaceable></term>
> > + <listitem>
> > + <para>
> > + 	Specifies the TCP/IP hostname or address
> > + 	on which the <application>postmaster</application>
> > + 	is to listen for connections from frontend applications. Defaults to
> > + 	the value of the 
> > + 	<envar>PGHOST</envar> 
> > + 	environment variable, or if <envar>PGHOST</envar>
> > + 	is not set, then defaults to "all", meaning listen on all configured addresses
> > + 	(including localhost).
> > + </para>
> > + <para>
> > + 	If you use a hostname or address other than "all", do not try to run
> > + 	multiple instances of <application>postmaster</application> on the
> > + 	same IP address but different ports. Doing so will result in them
> > + 	attempting (incorrectly) to use the same shared memory segments.
> > + 	Also, if you use a hostname other than "all", all of the host's IP addresses
> > + 	on which <application>postmaster</application> instances are
> > + 	listening must be distinct in the two last octets.
> > + </para>
> > + <para>
> > + 	If you do use "all" (the default), then each instance must listen on a
> > + 	different port (via -p or <envar>PGPORT</envar>). And, of course, do
> > + 	not try to use both approaches on one host.
> > + </para>
> > + </listitem>
> > + </varlistentry>
> > + 
> > <varlistentry>
> > <term>-i</term>
> > <listitem>
> > <para>
> > ***************
> > *** 135,140 ****
> > --- 167,201 ----
> > </varlistentry>
> > 
> > <varlistentry>
> > + <term>-k <replaceable class="parameter">path</replaceable></term>
> > + <listitem>
> > + <para>
> > + 	Specifies the local Unix domain socket path name
> > + 	on which the <application>postmaster</application>
> > + 	is to listen for connections from frontend applications. Defaults to
> > + 	the value of the 
> > + 	<envar>PGUNIXSOCKET</envar> 
> > + 	environment variable, or if <envar>PGUNIXSOCKET</envar>
> > + 	is not set, then defaults to a file in <filename>/tmp</filename>
> > + 	constructed from the port number.
> > + </para>
> > + <para>
> > + You can use this option to put the Unix-domain socket in a
> > + directory that is private to one or more users using Unix
> > + 	directory permissions. This is necessary for securely
> > + 	creating databases automatically on shared machines.
> > + In that situation, also disallow all TCP/IP connections
> > + 	initially in <filename>pg_hba.conf</filename>.
> > + 	If you specify a socket path other than the
> > + 	default then all frontend applications (including
> > + 	<application>psql</application>) must specify the same
> > + 	socket path using either command-line options or
> > + 	<envar>PGUNIXSOCKET</envar>.
> > + </para>
> > + </listitem>
> > + </varlistentry>
> > + 
> > <varlistentry>
> > <term>-l</term>
> > <listitem>
> > <para>
> > Index: doc/src/sgml/ref/psql-ref.sgml
> > *** doc/src/sgml/ref/psql-ref.sgml	2000/06/30 21:15:38	1.1
> > --- doc/src/sgml/ref/psql-ref.sgml	2000/07/02 03:56:05	1.3
> > ***************
> > *** 1329,1334 ****
> > --- 1329,1347 ----
> > 
> > 
> > <varlistentry>
> > + <term>-k, --unixsocket <replaceable class="parameter">path</replaceable></term>
> > + <listitem>
> > + <para>
> > + Specifies the Unix-domain socket on which the
> > + <application>postmaster</application> is running.
> > + Without this option, the socket is created in <filename>/tmp</filename>
> > + based on the port number.
> > + </para>
> > + </listitem>
> > + </varlistentry>
> > + 
> > + 
> > <varlistentry>
> > <term>-H, --html</term>
> > <listitem>
> > <para>
> > Index: doc/src/sgml/ref/vacuumdb.sgml
> > *** doc/src/sgml/ref/vacuumdb.sgml	2000/06/30 21:15:38	1.1
> > --- doc/src/sgml/ref/vacuumdb.sgml	2000/07/04 04:46:45	1.2
> > ***************
> > *** 24,30 ****
> > </refsynopsisdivinfo>
> > <synopsis>
> > vacuumdb [ <replaceable class="parameter">options</replaceable> ] [ --analyze | -z ]
> > ! [ --alldb | -a ] [ --verbose | -v ]
> > [ --table '<replaceable class="parameter">table</replaceable> [ ( <replaceable class="parameter">column</replaceable> [,...] ) ]' ] [ [-d] <replaceable class="parameter">dbname</replaceable> ]
> > </synopsis>
> > 
> > --- 24,30 ----
> > </refsynopsisdivinfo>
> > <synopsis>
> > vacuumdb [ <replaceable class="parameter">options</replaceable> ] [ --analyze | -z ]
> > ! [ --all | -a ] [ --verbose | -v ]
> > [ --table '<replaceable class="parameter">table</replaceable> [ ( <replaceable class="parameter">column</replaceable> [,...] ) ]' ] [ [-d] <replaceable class="parameter">dbname</replaceable> ]
> > </synopsis>
> > 
> > ***************
> > *** 128,133 ****
> > --- 128,145 ----
> > </para>
> > </listitem>
> > </varlistentry>
> > + 
> > + <varlistentry>
> > + <term>-k, --unixsocket <replaceable class="parameter">path</replaceable></term>
> > + <listitem>
> > + <para>
> > + Specifies the Unix-domain socket on which the
> > + <application>postmaster</application> is running.
> > + Without this option, the socket is created in <filename>/tmp</filename>
> > + based on the port number.
> > + </para>
> > + </listitem>
> > + </varlistentry>
> > 
> > <varlistentry>
> > <term>-U <replaceable class="parameter">username</replaceable></term>
> > Index: src/backend/libpq/pqcomm.c
> > *** src/backend/libpq/pqcomm.c	2000/06/30 21:15:40	1.1
> > --- src/backend/libpq/pqcomm.c	2000/07/01 18:50:46	1.3
> > ***************
> > *** 42,47 ****
> > --- 42,48 ----
> > *		StreamConnection	- Create new connection with client
> > *		StreamClose			- Close a client/backend connection
> > *		pq_getport		- return the PGPORT setting
> > + *		pq_getunixsocket	- return the PGUNIXSOCKET setting
> > *		pq_init			- initialize libpq at backend startup
> > *		pq_close		- shutdown libpq at backend exit
> > *
> > ***************
> > *** 134,139 ****
> > --- 135,151 ----
> > }
> > 
> > /* --------------------------------
> > + *		pq_getunixsocket - return the PGUNIXSOCKET setting.
> > + *		If NULL, default to computing it based on the port.
> > + * --------------------------------
> > + */
> > + char *
> > + pq_getunixsocket(void)
> > + {
> > + 	return getenv("PGUNIXSOCKET");
> > + }
> > + 
> > + /* --------------------------------
> > *		pq_close - shutdown libpq at backend exit
> > *
> > * Note: in a standalone backend MyProcPort will be null,
> > ***************
> > *** 177,189 ****
> > /*
> > * StreamServerPort -- open a sock stream "listening" port.
> > *
> > ! * This initializes the Postmaster's connection-accepting port.
> > *
> > * RETURNS: STATUS_OK or STATUS_ERROR
> > */
> > 
> > int
> > ! StreamServerPort(char *hostName, unsigned short portName, int *fdP)
> > {
> > 	SockAddr	saddr;
> > 	int			fd,
> > --- 189,205 ----
> > /*
> > * StreamServerPort -- open a sock stream "listening" port.
> > *
> > ! * This initializes the Postmaster's connection-accepting port fdP.
> > ! * If hostName is "any", listen on all configured IP addresses.
> > ! * If hostName is NULL, listen on a Unix-domain socket instead of TCP;
> > ! * if unixSocketName is NULL, a default path (constructed in UNIX_SOCK_PATH
> > ! * in include/libpq/pqcomm.h) based on portName is used.
> > *
> > * RETURNS: STATUS_OK or STATUS_ERROR
> > */
> > 
> > int
> > ! StreamServerPort(char *hostName, unsigned short portNumber, char *unixSocketName, int *fdP)
> > {
> > 	SockAddr	saddr;
> > 	int			fd,
> > ***************
> > *** 227,233 ****
> > 	saddr.sa.sa_family = family;
> > 	if (family == AF_UNIX)
> > 	{
> > ! 		len = UNIXSOCK_PATH(saddr.un, portName);
> > 		strcpy(sock_path, saddr.un.sun_path);
> > 
> > 		/*
> > --- 243,250 ----
> > 	saddr.sa.sa_family = family;
> > 	if (family == AF_UNIX)
> > 	{
> > ! 		UNIXSOCK_PATH(saddr.un, portNumber, unixSocketName);
> > ! 		len = UNIXSOCK_LEN(saddr.un);
> > 		strcpy(sock_path, saddr.un.sun_path);
> > 
> > 		/*
> > ***************
> > *** 259,267 ****
> > 	}
> > 	else
> > 	{
> > ! 		saddr.in.sin_addr.s_addr = htonl(INADDR_ANY);
> > ! 		saddr.in.sin_port = htons(portName);
> > ! 		len = sizeof(struct sockaddr_in);
> > 	}
> > 	err = bind(fd, &saddr.sa, len);
> > 	if (err < 0)
> > --- 276,305 ----
> > 	}
> > 	else
> > 	{
> > ! 	 /* TCP/IP socket */
> > ! 	 if (!strcmp(hostName, "all")) /* like for databases in pg_hba.conf. */
> > ! 	 saddr.in.sin_addr.s_addr = htonl(INADDR_ANY);
> > ! 	 else
> > ! 	 {
> > ! 	 struct hostent *hp;
> > ! 
> > ! 	 hp = gethostbyname(hostName);
> > ! 	 if ((hp == NULL) || (hp->h_addrtype != AF_INET))
> > ! 		{
> > ! 		 snprintf(PQerrormsg, PQERRORMSG_LENGTH,
> > ! 			 "FATAL: StreamServerPort: gethostbyname(%s) failed: %s\n",
> > ! 			 hostName, hstrerror(h_errno));
> > ! 		 fputs(PQerrormsg, stderr);
> > ! 		 pqdebug("%s", PQerrormsg);
> > ! 		 return STATUS_ERROR;
> > ! 		}
> > ! 	 memmove((char *) &(saddr.in.sin_addr),
> > ! 		 (char *) hp->h_addr,
> > ! 		 hp->h_length);
> > ! 	 }
> > ! 
> > ! 	 saddr.in.sin_port = htons(portNumber);
> > ! 	 len = sizeof(struct sockaddr_in);
> > 	}
> > 	err = bind(fd, &saddr.sa, len);
> > 	if (err < 0)
> > Index: src/backend/postmaster/postmaster.c
> > *** src/backend/postmaster/postmaster.c	2000/06/30 21:15:42	1.1
> > --- src/backend/postmaster/postmaster.c	2000/07/06 07:38:21	1.5
> > ***************
> > *** 136,143 ****
> > /* list of ports associated with still open, but incomplete connections */
> > static Dllist *PortList;
> > 
> > ! static unsigned short PostPortName = 0;
> > 
> > /*
> > * This is a boolean indicating that there is at least one backend that
> > * is accessing the current shared memory and semaphores. Between the
> > --- 136,150 ----
> > /* list of ports associated with still open, but incomplete connections */
> > static Dllist *PortList;
> > 
> > ! /* Hostname of interface to listen on, or 'any'. */
> > ! static char *HostName = NULL;
> > 
> > + /* TCP/IP port number to listen on. Also used to default the Unix-domain socket name. */
> > + static unsigned short PostPortNumber = 0;
> > + 
> > + /* Override of the default Unix-domain socket name to listen on, if non-NULL. */
> > + static char *UnixSocketName = NULL;
> > + 
> > /*
> > * This is a boolean indicating that there is at least one backend that
> > * is accessing the current shared memory and semaphores. Between the
> > ***************
> > *** 274,280 ****
> > static void SignalChildren(SIGNAL_ARGS);
> > static int	CountChildren(void);
> > static int
> > ! SetOptsFile(char *progname, int port, char *datadir,
> > 			int assert, int nbuf, char *execfile,
> > 			int debuglvl, int netserver,
> > #ifdef USE_SSL
> > --- 281,287 ----
> > static void SignalChildren(SIGNAL_ARGS);
> > static int	CountChildren(void);
> > static int
> > ! SetOptsFile(char *progname, char *hostname, int port, char *unixsocket, char *datadir,
> > 			int assert, int nbuf, char *execfile,
> > 			int debuglvl, int netserver,
> > #ifdef USE_SSL
> > ***************
> > *** 370,380 ****
> > {
> > 	extern int	NBuffers;		/* from buffer/bufmgr.c */
> > 	int			opt;
> > - 	char	 *hostName;
> > 	int			status;
> > 	int			silentflag = 0;
> > 	bool		DataDirOK;		/* We have a usable PGDATA value */
> > - 	char		hostbuf[MAXHOSTNAMELEN];
> > 	int			nonblank_argc;
> > 	char		original_extraoptions[MAXPGPATH];
> > 
> > --- 377,385 ----
> > ***************
> > *** 431,449 ****
> > 	 */
> > 	umask((mode_t) 0077);
> > 
> > - 	if (!(hostName = getenv("PGHOST")))
> > - 	{
> > - 		if (gethostname(hostbuf, MAXHOSTNAMELEN) < 0)
> > - 			strcpy(hostbuf, "localhost");
> > - 		hostName = hostbuf;
> > - 	}
> > - 
> > 	MyProcPid = getpid();
> > 	DataDir = getenv("PGDATA"); /* default value */
> > 
> > 	opterr = 0;
> > 	IgnoreSystemIndexes(false);
> > ! 	while ((opt = getopt(nonblank_argc, argv, "A:a:B:b:D:d:ilm:MN:no:p:Ss")) != EOF)
> > 	{
> > 		switch (opt)
> > 		{
> > --- 436,447 ----
> > 	 */
> > 	umask((mode_t) 0077);
> > 
> > 	MyProcPid = getpid();
> > 	DataDir = getenv("PGDATA"); /* default value */
> > 
> > 	opterr = 0;
> > 	IgnoreSystemIndexes(false);
> > ! 	while ((opt = getopt(nonblank_argc, argv, "A:a:B:b:D:d:h:ik:lm:MN:no:p:Ss")) != EOF)
> > 	{
> > 		switch (opt)
> > 		{
> > ***************
> > *** 498,506 ****
> > --- 496,511 ----
> > 				DebugLvl = atoi(optarg);
> > 				pg_options[TRACE_VERBOSE] = DebugLvl;
> > 				break;
> > + 			case 'h':
> > + 				HostName = optarg;
> > + 				break;
> > 			case 'i':
> > 				NetServer = true;
> > 				break;
> > + 			case 'k':
> > + 				/* Set PGUNIXSOCKET by hand. */
> > + 				UnixSocketName = optarg;
> > + 				break;
> > #ifdef USE_SSL
> > 			case 'l':
> > 				SecureNetServer = true;
> > ***************
> > *** 545,551 ****
> > 				break;
> > 			case 'p':
> > 				/* Set PGPORT by hand. */
> > ! 				PostPortName = (unsigned short) atoi(optarg);
> > 				break;
> > 			case 'S':
> > 
> > --- 550,556 ----
> > 				break;
> > 			case 'p':
> > 				/* Set PGPORT by hand. */
> > ! 				PostPortNumber = (unsigned short) atoi(optarg);
> > 				break;
> > 			case 'S':
> > 
> > ***************
> > *** 577,584 ****
> > 	/*
> > 	 * Select default values for switches where needed
> > 	 */
> > ! 	if (PostPortName == 0)
> > ! 		PostPortName = (unsigned short) pq_getport();
> > 
> > 	/*
> > 	 * Check for invalid combinations of switches
> > --- 582,603 ----
> > 	/*
> > 	 * Select default values for switches where needed
> > 	 */
> > ! 	if (HostName == NULL)
> > ! 	{
> > ! 		if (!(HostName = getenv("PGHOST")))
> > ! 		{
> > ! 			HostName = "any";
> > ! 		}
> > ! 	}
> > ! 	else if (!NetServer)
> > ! 	{
> > ! 		fprintf(stderr, "%s: -h requires -i.\n", progname);
> > ! 		exit(1);
> > ! 	}
> > ! 	if (PostPortNumber == 0)
> > ! 		PostPortNumber = (unsigned short) pq_getport();
> > ! 	if (UnixSocketName == NULL)
> > ! 		UnixSocketName = pq_getunixsocket();
> > 
> > 	/*
> > 	 * Check for invalid combinations of switches
> > ***************
> > *** 622,628 ****
> > 
> > 	if (NetServer)
> > 	{
> > ! 		status = StreamServerPort(hostName, PostPortName, &ServerSock_INET);
> > 		if (status != STATUS_OK)
> > 		{
> > 			fprintf(stderr, "%s: cannot create INET stream port\n",
> > --- 641,647 ----
> > 
> > 	if (NetServer)
> > 	{
> > ! 		status = StreamServerPort(HostName, PostPortNumber, NULL, &ServerSock_INET);
> > 		if (status != STATUS_OK)
> > 		{
> > 			fprintf(stderr, "%s: cannot create INET stream port\n",
> > ***************
> > *** 632,638 ****
> > 	}
> > 
> > #if !defined(__CYGWIN32__) && !defined(__QNX__)
> > ! 	status = StreamServerPort(NULL, PostPortName, &ServerSock_UNIX);
> > 	if (status != STATUS_OK)
> > 	{
> > 		fprintf(stderr, "%s: cannot create UNIX stream port\n",
> > --- 651,657 ----
> > 	}
> > 
> > #if !defined(__CYGWIN32__) && !defined(__QNX__)
> > ! 	status = StreamServerPort(NULL, PostPortNumber, UnixSocketName, &ServerSock_UNIX);
> > 	if (status != STATUS_OK)
> > 	{
> > 		fprintf(stderr, "%s: cannot create UNIX stream port\n",
> > ***************
> > *** 642,648 ****
> > #endif
> > 	/* set up shared memory and semaphores */
> > 	EnableMemoryContext(TRUE);
> > ! 	reset_shared(PostPortName);
> > 
> > 	/*
> > 	 * Initialize the list of active backends.	This list is only used for
> > --- 661,667 ----
> > #endif
> > 	/* set up shared memory and semaphores */
> > 	EnableMemoryContext(TRUE);
> > ! 	reset_shared(PostPortNumber);
> > 
> > 	/*
> > 	 * Initialize the list of active backends.	This list is only used for
> > ***************
> > *** 664,670 ****
> > 		{
> > 			if (SetOptsFile(
> > 							progname,	/* postmaster executable file */
> > ! 							PostPortName,		/* port number */
> > 							DataDir,	/* PGDATA */
> > 							assert_enabled,		/* whether -A is specified
> > 												 * or not */
> > --- 683,691 ----
> > 		{
> > 			if (SetOptsFile(
> > 							progname,	/* postmaster executable file */
> > ! 							HostName, /* IP address to bind to */
> > ! 							PostPortNumber,		/* port number */
> > ! 
\t\t\t\t\t\t\tUnixSocketName,\t/* PGUNIXSOCKET */\n> > \t\t\t\t\t\t\tDataDir,\t/* PGDATA */\n> > \t\t\t\t\t\t\tassert_enabled,\t\t/* whether -A is specified\n> > \t\t\t\t\t\t\t\t\t\t\t\t * or not */\n> > ***************\n> > *** 753,759 ****\n> > \t\t{\n> > \t\t\tif (SetOptsFile(\n> > \t\t\t\t\t\t\tprogname,\t/* postmaster executable file */\n> > ! \t\t\t\t\t\t\tPostPortName,\t\t/* port number */\n> > \t\t\t\t\t\t\tDataDir,\t/* PGDATA */\n> > \t\t\t\t\t\t\tassert_enabled,\t\t/* whether -A is specified\n> > \t\t\t\t\t\t\t\t\t\t\t\t * or not */\n> > --- 774,782 ----\n> > \t\t{\n> > \t\t\tif (SetOptsFile(\n> > \t\t\t\t\t\t\tprogname,\t/* postmaster executable file */\n> > ! \t\t\t\t\t\t\tHostName, /* IP address to bind to */\n> > ! \t\t\t\t\t\t\tPostPortNumber,\t\t/* port number */\n> > ! \t\t\t\t\t\t\tUnixSocketName,\t/* PGUNIXSOCKET */\n> > \t\t\t\t\t\t\tDataDir,\t/* PGDATA */\n> > \t\t\t\t\t\t\tassert_enabled,\t\t/* whether -A is specified\n> > \t\t\t\t\t\t\t\t\t\t\t\t * or not */\n> > ***************\n> > *** 837,843 ****\n> > --- 860,868 ----\n> > \tfprintf(stderr, \"\\t-a system\\tuse this authentication system\\n\");\n> > \tfprintf(stderr, \"\\t-b backend\\tuse a specific backend server executable\\n\");\n> > \tfprintf(stderr, \"\\t-d [1-5]\\tset debugging level\\n\");\n> > + \tfprintf(stderr, \"\\t-h hostname\\tspecify hostname or IP address or 'any' for postmaster to listen on (also use -i)\\n\");\n> > \tfprintf(stderr, \"\\t-i \\t\\tlisten on TCP/IP sockets as well as Unix domain socket\\n\");\n> > + \tfprintf(stderr, \"\\t-k path\\tspecify Unix-domain socket name for postmaster to listen on\\n\");\n> > #ifdef USE_SSL\n> > \tfprintf(stderr, \" \\t-l \\t\\tfor TCP/IP sockets, listen only on SSL connections\\n\");\n> > #endif\n> > ***************\n> > *** 1318,1328 ****\n> > --- 1343,1417 ----\n> > }\n> > \n> > /*\n> > + * get_host_port -- return a pseudo port number (16 bits)\n> > + * derived from the primary IP address of HostName.\n> > + */\n> > + static 
unsigned short\n> > + get_host_port(void)\n> > + {\n> > + \tstatic unsigned short hostPort = 0;\n> > + \n> > + \tif (hostPort == 0)\n> > + \t{\n> > + \t\tSockAddr\tsaddr;\n> > + \t\tstruct hostent *hp;\n> > + \n> > + \t\thp = gethostbyname(HostName);\n> > + \t\tif ((hp == NULL) || (hp->h_addrtype != AF_INET))\n> > + \t\t{\n> > + \t\t\tchar msg[1024];\n> > + \t\t\tsnprintf(msg, sizeof(msg),\n> > + \t\t\t\t \"FATAL: get_host_port: gethostbyname(%s) failed: %s\\n\",\n> > + \t\t\t\t HostName, hstrerror(h_errno));\n> > + \t\t\tfputs(msg, stderr);\n> > + \t\t\tpqdebug(\"%s\", msg);\n> > + \t\t\texit(1);\n> > + \t\t}\n> > + \t\tmemmove((char *) &(saddr.in.sin_addr),\n> > + \t\t\t(char *) hp->h_addr,\n> > + \t\t\thp->h_length);\n> > + \t\thostPort = ntohl(saddr.in.sin_addr.s_addr) & 0xFFFF;\n> > + \t}\n> > + \n> > + \treturn hostPort;\n> > + }\n> > + \n> > + /*\n> > * reset_shared -- reset shared memory and semaphores\n> > */\n> > static void\n> > reset_shared(unsigned short port)\n> > {\n> > + \t/*\n> > + \t * A typical ipc_key is 5432001, which is port 5432, sequence\n> > + \t * number 0, and 01 as the index in IPCKeyGetBufferMemoryKey().\n> > + \t * The 32-bit INT_MAX is 2147483 6 47.\n> > + \t *\n> > + \t * The default algorithm for calculating the IPC keys assumes that all\n> > + \t * instances of postmaster on a given host are listening on different\n> > + \t * ports. 
In order to work (prevent shared memory collisions) if you\n> > + \t * run multiple PostgreSQL instances on the same port and different IP\n> > + \t * addresses on a host, we change the algorithm if you give postmaster\n> > + \t * the -h option, or set PGHOST, to a value other than the internal\n> > + \t * default of \"any\".\n> > + \t *\n> > + \t * If HostName is not \"any\", then we generate the IPC keys using the\n> > + \t * last two octets of the IP address instead of the port number.\n> > + \t * This algorithm assumes that no one will run multiple PostgreSQL\n> > + \t * instances on one host using two IP addresses that have the same two\n> > + \t * last octets in different class C networks. If anyone does, it\n> > + \t * would be rare.\n> > + \t *\n> > + \t * So, if you use -h or PGHOST, don't try to run two instances of\n> > + \t * PostgreSQL on the same IP address but different ports. If you\n> > + \t * don't use them, then you must use different ports (via -p or\n> > + \t * PGPORT). And, of course, don't try to use both approaches on one\n> > + \t * host.\n> > + \t */\n> > + \n> > + \tif (strcmp(HostName, \"any\"))\n> > + \t\tport = get_host_port();\n> > + \n> > \tipc_key = port * 1000 + shmem_seq * 100;\n> > \tCreateSharedMemoryAndSemaphores(ipc_key, MaxBackends);\n> > \tshmem_seq += 1;\n> > ***************\n> > *** 1540,1546 ****\n> > \t\t\t\tctime(&tnow));\n> > \t\tfflush(stderr);\n> > \t\tshmem_exit(0);\n> > ! \t\treset_shared(PostPortName);\n> > \t\tStartupPID = StartupDataBase();\n> > \t\treturn;\n> > \t}\n> > --- 1629,1635 ----\n> > \t\t\t\tctime(&tnow));\n> > \t\tfflush(stderr);\n> > \t\tshmem_exit(0);\n> > ! \t\treset_shared(PostPortNumber);\n> > \t\tStartupPID = StartupDataBase();\n> > \t\treturn;\n> > \t}\n> > ***************\n> > *** 1720,1726 ****\n> > \t * Set up the necessary environment variables for the backend This\n> > \t * should really be some sort of message....\n> > \t */\n> > ! 
\tsprintf(envEntry[0], \"POSTPORT=%d\", PostPortName);\n> > \tputenv(envEntry[0]);\n> > \tsprintf(envEntry[1], \"POSTID=%d\", NextBackendTag);\n> > \tputenv(envEntry[1]);\n> > --- 1809,1815 ----\n> > \t * Set up the necessary environment variables for the backend This\n> > \t * should really be some sort of message....\n> > \t */\n> > ! \tsprintf(envEntry[0], \"POSTPORT=%d\", PostPortNumber);\n> > \tputenv(envEntry[0]);\n> > \tsprintf(envEntry[1], \"POSTID=%d\", NextBackendTag);\n> > \tputenv(envEntry[1]);\n> > ***************\n> > *** 2174,2180 ****\n> > \tfor (i = 0; i < 4; ++i)\n> > \t\tMemSet(ssEntry[i], 0, 2 * ARGV_SIZE);\n> > \n> > ! \tsprintf(ssEntry[0], \"POSTPORT=%d\", PostPortName);\n> > \tputenv(ssEntry[0]);\n> > \tsprintf(ssEntry[1], \"POSTID=%d\", NextBackendTag);\n> > \tputenv(ssEntry[1]);\n> > --- 2263,2269 ----\n> > \tfor (i = 0; i < 4; ++i)\n> > \t\tMemSet(ssEntry[i], 0, 2 * ARGV_SIZE);\n> > \n> > ! \tsprintf(ssEntry[0], \"POSTPORT=%d\", PostPortNumber);\n> > \tputenv(ssEntry[0]);\n> > \tsprintf(ssEntry[1], \"POSTID=%d\", NextBackendTag);\n> > \tputenv(ssEntry[1]);\n> > ***************\n> > *** 2254,2260 ****\n> > * Create the opts file\n> > */\n> > static int\n> > ! SetOptsFile(char *progname, int port, char *datadir,\n> > \t\t\tint assert, int nbuf, char *execfile,\n> > \t\t\tint debuglvl, int netserver,\n> > #ifdef USE_SSL\n> > --- 2343,2349 ----\n> > * Create the opts file\n> > */\n> > static int\n> > ! 
SetOptsFile(char *progname, char *hostname, int port, char *unixsocket, char *datadir,\n> > \t\t\tint assert, int nbuf, char *execfile,\n> > \t\t\tint debuglvl, int netserver,\n> > #ifdef USE_SSL\n> > ***************\n> > *** 2279,2284 ****\n> > --- 2368,2383 ----\n> > \t\treturn (-1);\n> > \t}\n> > \tsnprintf(opts, sizeof(opts), \"%s\\n-p %d\\n-D %s\\n\", progname, port, datadir);\n> > + \tif (netserver)\n> > + \t{\n> > + \t\tsprintf(buf, \"-h %s\\n\", hostname);\n> > + \t\tstrcat(opts, buf);\n> > + \t}\n> > + \tif (unixsocket)\n> > + \t{\n> > + \t\tsprintf(buf, \"-k %s\\n\", unixsocket);\n> > + \t\tstrcat(opts, buf);\n> > + \t}\n> > \tif (assert)\n> > \t{\n> > \t\tsprintf(buf, \"-A %d\\n\", assert);\n> > Index: src/bin/pg_dump/pg_dump.c\n> > *** src/bin/pg_dump/pg_dump.c\t2000/06/30 21:15:44\t1.1\n> > --- src/bin/pg_dump/pg_dump.c\t2000/07/01 18:41:22\t1.2\n> > ***************\n> > *** 140,145 ****\n> > --- 140,146 ----\n> > \t\t \" -D, --attribute-inserts dump data as INSERT commands with attribute names\\n\"\n> > \t\t \" -h, --host <hostname> server host name\\n\"\n> > \t\t \" -i, --ignore-version proceed when database version != pg_dump version\\n\"\n> > + \t\t \" -k, --unixsocket <path> server Unix-domain socket name\\n\"\n> > \t\" -n, --no-quotes suppress most quotes around identifiers\\n\"\n> > \t \" -N, --quotes enable most quotes around identifiers\\n\"\n> > \t\t \" -o, --oids dump object ids (oids)\\n\"\n> > ***************\n> > *** 158,163 ****\n> > --- 159,165 ----\n> > \t\t \" -D dump data as INSERT commands with attribute names\\n\"\n> > \t\t \" -h <hostname> server host name\\n\"\n> > \t\t \" -i proceed when database version != pg_dump version\\n\"\n> > + \t\t \" -k <path> server Unix-domain socket name\\n\"\n> > \t\" -n suppress most quotes around identifiers\\n\"\n> > \t \" -N enable most quotes around identifiers\\n\"\n> > \t\t \" -o dump object ids (oids)\\n\"\n> > ***************\n> > *** 579,584 ****\n> > --- 581,587 ----\n> > \tconst char 
*dbname = NULL;\n> > \tconst char *pghost = NULL;\n> > \tconst char *pgport = NULL;\n> > + \tconst char *pgunixsocket = NULL;\n> > \tchar\t *tablename = NULL;\n> > \tbool\t\toids = false;\n> > \tTableInfo *tblinfo;\n> > ***************\n> > *** 598,603 ****\n> > --- 601,607 ----\n> > \t\t{\"attribute-inserts\", no_argument, NULL, 'D'},\n> > \t\t{\"host\", required_argument, NULL, 'h'},\n> > \t\t{\"ignore-version\", no_argument, NULL, 'i'},\n> > + \t\t{\"unixsocket\", required_argument, NULL, 'k'},\n> > \t\t{\"no-quotes\", no_argument, NULL, 'n'},\n> > \t\t{\"quotes\", no_argument, NULL, 'N'},\n> > \t\t{\"oids\", no_argument, NULL, 'o'},\n> > ***************\n> > *** 662,667 ****\n> > --- 666,674 ----\n> > \t\t\tcase 'i':\t\t\t/* ignore database version mismatch */\n> > \t\t\t\tignore_version = true;\n> > \t\t\t\tbreak;\n> > + \t\t\tcase 'k':\t\t\t/* server Unix-domain socket */\n> > + \t\t\t\tpgunixsocket = optarg;\n> > + \t\t\t\tbreak;\n> > \t\t\tcase 'n':\t\t\t/* Do not force double-quotes on\n> > \t\t\t\t\t\t\t\t * identifiers */\n> > \t\t\t\tforce_quotes = false;\n> > ***************\n> > *** 782,788 ****\n> > \t\texit(1);\n> > \t}\n> > \n> > - \t/* g_conn = PQsetdb(pghost, pgport, NULL, NULL, dbname); */\n> > \tif (pghost != NULL)\n> > \t{\n> > \t\tsprintf(tmp_string, \"host=%s \", pghost);\n> > --- 789,794 ----\n> > ***************\n> > *** 791,796 ****\n> > --- 797,807 ----\n> > \tif (pgport != NULL)\n> > \t{\n> > \t\tsprintf(tmp_string, \"port=%s \", pgport);\n> > + \t\tstrcat(connect_string, tmp_string);\n> > + \t}\n> > + \tif (pgunixsocket != NULL)\n> > + \t{\n> > + \t\tsprintf(tmp_string, \"unixsocket=%s \", pgunixsocket);\n> > \t\tstrcat(connect_string, tmp_string);\n> > \t}\n> > \tif (dbname != NULL)\n> > Index: src/bin/psql/command.c\n> > *** src/bin/psql/command.c\t2000/06/30 21:15:46\t1.1\n> > --- src/bin/psql/command.c\t2000/07/01 18:20:40\t1.2\n> > ***************\n> > *** 1199,1204 ****\n> > --- 1199,1205 ----\n> > \tSetVariable(pset.vars, 
\"USER\", NULL);\n> > \tSetVariable(pset.vars, \"HOST\", NULL);\n> > \tSetVariable(pset.vars, \"PORT\", NULL);\n> > + \tSetVariable(pset.vars, \"UNIXSOCKET\", NULL);\n> > \tSetVariable(pset.vars, \"ENCODING\", NULL);\n> > \n> > \t/* If dbname is \"\" then use old name, else new one (even if NULL) */\n> > ***************\n> > *** 1228,1233 ****\n> > --- 1229,1235 ----\n> > \tdo\n> > \t{\n> > \t\tneed_pass = false;\n> > + \t\t/* FIXME use PQconnectdb to support passing the Unix socket */\n> > \t\tpset.db = PQsetdbLogin(PQhost(oldconn), PQport(oldconn),\n> > \t\t\t\t\t\t\t NULL, NULL, dbparam, userparam, pwparam);\n> > \n> > ***************\n> > *** 1303,1308 ****\n> > --- 1305,1311 ----\n> > \tSetVariable(pset.vars, \"USER\", PQuser(pset.db));\n> > \tSetVariable(pset.vars, \"HOST\", PQhost(pset.db));\n> > \tSetVariable(pset.vars, \"PORT\", PQport(pset.db));\n> > + \tSetVariable(pset.vars, \"UNIXSOCKET\", PQunixsocket(pset.db));\n> > \tSetVariable(pset.vars, \"ENCODING\", pg_encoding_to_char(pset.encoding));\n> > \n> > \tpset.issuper = test_superuser(PQuser(pset.db));\n> > Index: src/bin/psql/command.h\n> > Index: src/bin/psql/common.c\n> > *** src/bin/psql/common.c\t2000/06/30 21:15:46\t1.1\n> > --- src/bin/psql/common.c\t2000/07/01 18:20:40\t1.2\n> > ***************\n> > *** 330,335 ****\n> > --- 330,336 ----\n> > \t\t\tSetVariable(pset.vars, \"DBNAME\", NULL);\n> > \t\t\tSetVariable(pset.vars, \"HOST\", NULL);\n> > \t\t\tSetVariable(pset.vars, \"PORT\", NULL);\n> > + \t\t\tSetVariable(pset.vars, \"UNIXSOCKET\", NULL);\n> > \t\t\tSetVariable(pset.vars, \"USER\", NULL);\n> > \t\t\tSetVariable(pset.vars, \"ENCODING\", NULL);\n> > \t\t\treturn NULL;\n> > ***************\n> > *** 509,514 ****\n> > --- 510,516 ----\n> > \t\t\t\tSetVariable(pset.vars, \"DBNAME\", NULL);\n> > \t\t\t\tSetVariable(pset.vars, \"HOST\", NULL);\n> > \t\t\t\tSetVariable(pset.vars, \"PORT\", NULL);\n> > + \t\t\t\tSetVariable(pset.vars, \"UNIXSOCKET\", NULL);\n> > \t\t\t\tSetVariable(pset.vars, 
\"USER\", NULL);\n> > \t\t\t\tSetVariable(pset.vars, \"ENCODING\", NULL);\n> > \t\t\t\treturn false;\n> > Index: src/bin/psql/help.c\n> > *** src/bin/psql/help.c\t2000/06/30 21:15:46\t1.1\n> > --- src/bin/psql/help.c\t2000/07/01 18:20:40\t1.2\n> > ***************\n> > *** 103,108 ****\n> > --- 103,118 ----\n> > \tputs(\")\");\n> > \n> > \tputs(\" -H HTML table output mode (-P format=html)\");\n> > + \n> > + \t/* Display default Unix-domain socket */\n> > + \tenv = getenv(\"PGUNIXSOCKET\");\n> > + \tprintf(\" -k <path> Specify Unix domain socket name (default: \");\n> > + \tif (env)\n> > + \t\tfputs(env, stdout);\n> > + \telse\n> > + \t\tfputs(\"computed from the port\", stdout);\n> > + \tputs(\")\");\n> > + \n> > \tputs(\" -l List available databases, then exit\");\n> > \tputs(\" -n Disable readline\");\n> > \tputs(\" -o <filename> Send query output to filename (or |pipe)\");\n> > Index: src/bin/psql/prompt.c\n> > *** src/bin/psql/prompt.c\t2000/06/30 21:15:46\t1.1\n> > --- src/bin/psql/prompt.c\t2000/07/01 18:20:40\t1.2\n> > ***************\n> > *** 189,194 ****\n> > --- 189,199 ----\n> > \t\t\t\t\tif (pset.db && PQport(pset.db))\n> > \t\t\t\t\t\tstrncpy(buf, PQport(pset.db), MAX_PROMPT_SIZE);\n> > \t\t\t\t\tbreak;\n> > + \t\t\t\t\t/* DB server Unix-domain socket */\n> > + \t\t\t\tcase '<':\n> > + \t\t\t\t\tif (pset.db && PQunixsocket(pset.db))\n> > + \t\t\t\t\t\tstrncpy(buf, PQunixsocket(pset.db), MAX_PROMPT_SIZE);\n> > + \t\t\t\t\tbreak;\n> > \t\t\t\t\t/* DB server user name */\n> > \t\t\t\tcase 'n':\n> > \t\t\t\t\tif (pset.db)\n> > Index: src/bin/psql/prompt.h\n> > Index: src/bin/psql/settings.h\n> > Index: src/bin/psql/startup.c\n> > *** src/bin/psql/startup.c\t2000/06/30 21:15:46\t1.1\n> > --- src/bin/psql/startup.c\t2000/07/01 18:20:40\t1.2\n> > ***************\n> > *** 66,71 ****\n> > --- 66,72 ----\n> > \tchar\t *dbname;\n> > \tchar\t *host;\n> > \tchar\t *port;\n> > + \tchar\t *unixsocket;\n> > \tchar\t *username;\n> > \tenum _actions action;\n> > 
\tchar\t *action_string;\n> > ***************\n> > *** 158,163 ****\n> > --- 159,165 ----\n> > \tdo\n> > \t{\n> > \t\tneed_pass = false;\n> > + \t\t/* FIXME use PQconnectdb to allow setting the unix socket */\n> > \t\tpset.db = PQsetdbLogin(options.host, options.port, NULL, NULL,\n> > \t\t\toptions.action == ACT_LIST_DB ? \"template1\" : options.dbname,\n> > \t\t\t\t\t\t\t username, password);\n> > ***************\n> > *** 202,207 ****\n> > --- 204,210 ----\n> > \tSetVariable(pset.vars, \"USER\", PQuser(pset.db));\n> > \tSetVariable(pset.vars, \"HOST\", PQhost(pset.db));\n> > \tSetVariable(pset.vars, \"PORT\", PQport(pset.db));\n> > + \tSetVariable(pset.vars, \"UNIXSOCKET\", PQunixsocket(pset.db));\n> > \tSetVariable(pset.vars, \"ENCODING\", pg_encoding_to_char(pset.encoding));\n> > \n> > #ifndef WIN32\n> > ***************\n> > *** 313,318 ****\n> > --- 316,322 ----\n> > \t\t{\"field-separator\", required_argument, NULL, 'F'},\n> > \t\t{\"host\", required_argument, NULL, 'h'},\n> > \t\t{\"html\", no_argument, NULL, 'H'},\n> > + \t\t{\"unixsocket\", required_argument, NULL, 'k'},\n> > \t\t{\"list\", no_argument, NULL, 'l'},\n> > \t\t{\"no-readline\", no_argument, NULL, 'n'},\n> > \t\t{\"output\", required_argument, NULL, 'o'},\n> > ***************\n> > *** 346,359 ****\n> > \tmemset(options, 0, sizeof *options);\n> > \n> > #ifdef HAVE_GETOPT_LONG\n> > ! \twhile ((c = getopt_long(argc, argv, \"aAc:d:eEf:F:lh:Hno:p:P:qRsStT:uU:v:VWxX?\", long_options, &optindex)) != -1)\n> > #else\t\t\t\t\t\t\t/* not HAVE_GETOPT_LONG */\n> > \n> > \t/*\n> > \t * Be sure to leave the '-' in here, so we can catch accidental long\n> > \t * options.\n> > \t */\n> > ! \twhile ((c = getopt(argc, argv, \"aAc:d:eEf:F:lh:Hno:p:P:qRsStT:uU:v:VWxX?-\")) != -1)\n> > #endif\t /* not HAVE_GETOPT_LONG */\n> > \t{\n> > \t\tswitch (c)\n> > --- 350,363 ----\n> > \tmemset(options, 0, sizeof *options);\n> > \n> > #ifdef HAVE_GETOPT_LONG\n> > ! 
\twhile ((c = getopt_long(argc, argv, \"aAc:d:eEf:F:lh:Hk:no:p:P:qRsStT:uU:v:VWxX?\", long_options, &optindex)) != -1)\n> > #else\t\t\t\t\t\t\t/* not HAVE_GETOPT_LONG */\n> > \n> > \t/*\n> > \t * Be sure to leave the '-' in here, so we can catch accidental long\n> > \t * options.\n> > \t */\n> > ! \twhile ((c = getopt(argc, argv, \"aAc:d:eEf:F:lh:Hk:no:p:P:qRsStT:uU:v:VWxX?-\")) != -1)\n> > #endif\t /* not HAVE_GETOPT_LONG */\n> > \t{\n> > \t\tswitch (c)\n> > ***************\n> > *** 398,403 ****\n> > --- 402,410 ----\n> > \t\t\t\tbreak;\n> > \t\t\tcase 'l':\n> > \t\t\t\toptions->action = ACT_LIST_DB;\n> > + \t\t\t\tbreak;\n> > + \t\t\tcase 'k':\n> > + \t\t\t\toptions->unixsocket = optarg;\n> > \t\t\t\tbreak;\n> > \t\t\tcase 'n':\n> > \t\t\t\toptions->no_readline = true;\n> > Index: src/bin/scripts/createdb\n> > *** src/bin/scripts/createdb\t2000/06/30 21:15:46\t1.1\n> > --- src/bin/scripts/createdb\t2000/07/04 04:46:45\t1.2\n> > ***************\n> > *** 50,55 ****\n> > --- 50,64 ----\n> > --port=*)\n> > PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> > ;;\n> > + \t--unixsocket|-k)\n> > + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> > + \t\tshift;;\n> > + -k*)\n> > + PSQLOPT=\"$PSQLOPT $1\"\n> > + ;;\n> > + --unixsocket=*)\n> > + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> > + ;;\n> > \t--username|-U)\n> > \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n> > \t\tshift;;\n> > ***************\n> > *** 114,119 ****\n> > --- 123,129 ----\n> > \techo \" -E, --encoding=ENCODING Multibyte encoding for the database\"\n> > \techo \" -h, --host=HOSTNAME Database server host\"\n> > \techo \" -p, --port=PORT Database server port\"\n> > + \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n> > \techo \" -U, --username=USERNAME Username to connect as\"\n> > \techo \" -W, --password Prompt for password\"\n> > \techo \" -e, --echo Show the query being sent to the backend\"\n> > Index: src/bin/scripts/createlang.sh\n> > *** 
src/bin/scripts/createlang.sh\t2000/06/30 21:15:46\t1.1\n> > --- src/bin/scripts/createlang.sh\t2000/07/04 04:46:45\t1.2\n> > ***************\n> > *** 65,70 ****\n> > --- 65,79 ----\n> > --port=*)\n> > PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> > ;;\n> > + \t--unixsocket|-k)\n> > + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> > + \t\tshift;;\n> > + -k*)\n> > + PSQLOPT=\"$PSQLOPT $1\"\n> > + ;;\n> > + --unixsocket=*)\n> > + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> > + ;;\n> > \t--username|-U)\n> > \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n> > \t\tshift;;\n> > ***************\n> > *** 126,131 ****\n> > --- 135,141 ----\n> > \techo \"Options:\"\n> > \techo \" -h, --host=HOSTNAME Database server host\"\n> > \techo \" -p, --port=PORT Database server port\"\n> > + \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n> > \techo \" -U, --username=USERNAME Username to connect as\"\n> > \techo \" -W, --password Prompt for password\"\n> > \techo \" -d, --dbname=DBNAME Database to install language in\"\n> > Index: src/bin/scripts/createuser\n> > *** src/bin/scripts/createuser\t2000/06/30 21:15:46\t1.1\n> > --- src/bin/scripts/createuser\t2000/07/04 04:46:45\t1.2\n> > ***************\n> > *** 63,68 ****\n> > --- 63,77 ----\n> > --port=*)\n> > PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> > ;;\n> > + \t--unixsocket|-k)\n> > + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> > + \t\tshift;;\n> > + -k*)\n> > + PSQLOPT=\"$PSQLOPT $1\"\n> > + ;;\n> > + --unixsocket=*)\n> > + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> > + ;;\n> > # Note: These two specify the user to connect as (like in psql),\n> > # not the user you're creating.\n> > \t--username|-U)\n> > ***************\n> > *** 135,140 ****\n> > --- 144,150 ----\n> > \techo \" -P, --pwprompt Assign a password to new user\"\n> > \techo \" -h, --host=HOSTNAME Database server host\"\n> > \techo \" -p, --port=PORT Database server port\"\n> > + \techo \" -k, --unixsocket=PATH 
Database server Unix-domain socket name\"\n> > \techo \" -U, --username=USERNAME Username to connect as (not the one to create)\"\n> > \techo \" -W, --password Prompt for password to connect\"\n> > \techo \" -e, --echo Show the query being sent to the backend\"\n> > Index: src/bin/scripts/dropdb\n> > *** src/bin/scripts/dropdb\t2000/06/30 21:15:46\t1.1\n> > --- src/bin/scripts/dropdb\t2000/07/04 04:46:45\t1.2\n> > ***************\n> > *** 59,64 ****\n> > --- 59,73 ----\n> > --port=*)\n> > PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> > ;;\n> > + \t--unixsocket|-k)\n> > + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> > + \t\tshift;;\n> > + -k*)\n> > + PSQLOPT=\"$PSQLOPT $1\"\n> > + ;;\n> > + --unixsocket=*)\n> > + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> > + ;;\n> > \t--username|-U)\n> > \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n> > \t\tshift;;\n> > ***************\n> > *** 103,108 ****\n> > --- 112,118 ----\n> > \techo \"Options:\"\n> > \techo \" -h, --host=HOSTNAME Database server host\"\n> > \techo \" -p, --port=PORT Database server port\"\n> > + \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n> > \techo \" -U, --username=USERNAME Username to connect as\"\n> > \techo \" -W, --password Prompt for password\"\n> > \techo \" -i, --interactive Prompt before deleting anything\"\n> > Index: src/bin/scripts/droplang\n> > *** src/bin/scripts/droplang\t2000/06/30 21:15:46\t1.1\n> > --- src/bin/scripts/droplang\t2000/07/04 04:46:45\t1.2\n> > ***************\n> > *** 65,70 ****\n> > --- 65,79 ----\n> > --port=*)\n> > PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> > ;;\n> > + \t--unixsocket|-k)\n> > + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> > + \t\tshift;;\n> > + -k*)\n> > + PSQLOPT=\"$PSQLOPT $1\"\n> > + ;;\n> > + --unixsocket=*)\n> > + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> > + ;;\n> > \t--username|-U)\n> > \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n> > \t\tshift;;\n> > ***************\n> > *** 113,118 ****\n> > --- 
122,128 ----\n> > \techo \"Options:\"\n> > \techo \" -h, --host=HOSTNAME Database server host\"\n> > \techo \" -p, --port=PORT Database server port\"\n> > + \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n> > \techo \" -U, --username=USERNAME Username to connect as\"\n> > \techo \" -W, --password Prompt for password\"\n> > \techo \" -d, --dbname=DBNAME Database to remove language from\"\n> > Index: src/bin/scripts/dropuser\n> > *** src/bin/scripts/dropuser\t2000/06/30 21:15:46\t1.1\n> > --- src/bin/scripts/dropuser\t2000/07/04 04:46:45\t1.2\n> > ***************\n> > *** 59,64 ****\n> > --- 59,73 ----\n> > --port=*)\n> > PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> > ;;\n> > + \t--unixsocket|-k)\n> > + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> > + \t\tshift;;\n> > + -k*)\n> > + PSQLOPT=\"$PSQLOPT $1\"\n> > + ;;\n> > + --unixsocket=*)\n> > + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> > + ;;\n> > # Note: These two specify the user to connect as (like in psql),\n> > # not the user you're dropping.\n> > \t--username|-U)\n> > ***************\n> > *** 105,110 ****\n> > --- 114,120 ----\n> > \techo \"Options:\"\n> > \techo \" -h, --host=HOSTNAME Database server host\"\n> > \techo \" -p, --port=PORT Database server port\"\n> > + \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n> > \techo \" -U, --username=USERNAME Username to connect as (not the one to drop)\"\n> > \techo \" -W, --password Prompt for password to connect\"\n> > \techo \" -i, --interactive Prompt before deleting anything\"\n> > Index: src/bin/scripts/vacuumdb\n> > *** src/bin/scripts/vacuumdb\t2000/06/30 21:15:46\t1.1\n> > --- src/bin/scripts/vacuumdb\t2000/07/04 04:46:45\t1.2\n> > ***************\n> > *** 52,57 ****\n> > --- 52,66 ----\n> > --port=*)\n> > PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> > ;;\n> > + \t--unixsocket|-k)\n> > + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> > + \t\tshift;;\n> > + -k*)\n> > + 
PSQLOPT=\"$PSQLOPT $1\"\n> > + ;;\n> > + --unixsocket=*)\n> > + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> > + ;;\n> > \t--username|-U)\n> > \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n> > \t\tshift;;\n> > ***************\n> > *** 121,126 ****\n> > --- 130,136 ----\n> > echo \"Options:\"\n> > \techo \" -h, --host=HOSTNAME Database server host\"\n> > \techo \" -p, --port=PORT Database server port\"\n> > + \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n> > \techo \" -U, --username=USERNAME Username to connect as\"\n> > \techo \" -W, --password Prompt for password\"\n> > \techo \" -d, --dbname=DBNAME Database to vacuum\"\n> > Index: src/include/libpq/libpq.h\n> > *** src/include/libpq/libpq.h\t2000/06/30 21:15:47\t1.1\n> > --- src/include/libpq/libpq.h\t2000/07/01 18:20:40\t1.2\n> > ***************\n> > *** 236,246 ****\n> > /*\n> > * prototypes for functions in pqcomm.c\n> > */\n> > ! extern int\tStreamServerPort(char *hostName, unsigned short portName, int *fdP);\n> > extern int\tStreamConnection(int server_fd, Port *port);\n> > extern void StreamClose(int sock);\n> > extern void pq_init(void);\n> > extern int\tpq_getport(void);\n> > extern void pq_close(void);\n> > extern int\tpq_getbytes(char *s, size_t len);\n> > extern int\tpq_getstring(StringInfo s);\n> > --- 236,247 ----\n> > /*\n> > * prototypes for functions in pqcomm.c\n> > */\n> > ! 
extern int\tStreamServerPort(char *hostName, unsigned short portName, char *unixSocketName, int *fdP);\n> > extern int\tStreamConnection(int server_fd, Port *port);\n> > extern void StreamClose(int sock);\n> > extern void pq_init(void);\n> > extern int\tpq_getport(void);\n> > + extern char\t*pq_getunixsocket(void);\n> > extern void pq_close(void);\n> > extern int\tpq_getbytes(char *s, size_t len);\n> > extern int\tpq_getstring(StringInfo s);\n> > Index: src/include/libpq/password.h\n> > Index: src/include/libpq/pqcomm.h\n> > *** src/include/libpq/pqcomm.h\t2000/06/30 21:15:47\t1.1\n> > --- src/include/libpq/pqcomm.h\t2000/07/01 18:59:33\t1.6\n> > ***************\n> > *** 42,53 ****\n> > /* Configure the UNIX socket address for the well known port. */\n> > \n> > #if defined(SUN_LEN)\n> > ! #define UNIXSOCK_PATH(sun,port) \\\n> > ! \t(sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)), SUN_LEN(&(sun)))\n> > #else\n> > ! #define UNIXSOCK_PATH(sun,port) \\\n> > ! \t(sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)), \\\n> > ! \t strlen((sun).sun_path)+ offsetof(struct sockaddr_un, sun_path))\n> > #endif\n> > \n> > /*\n> > --- 42,56 ----\n> > /* Configure the UNIX socket address for the well known port. */\n> > \n> > #if defined(SUN_LEN)\n> > ! #define UNIXSOCK_PATH(sun,port,defpath) \\\n> > ! (defpath ? (strncpy((sun).sun_path, defpath, sizeof((sun).sun_path)), (sun).sun_path[sizeof((sun).sun_path)-1] = '\\0') : sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)))\n> > ! #define UNIXSOCK_LEN(sun) \\\n> > ! (SUN_LEN(&(sun)))\n> > #else\n> > ! #define UNIXSOCK_PATH(sun,port,defpath) \\\n> > ! (defpath ? (strncpy((sun).sun_path, defpath, sizeof((sun).sun_path)), (sun).sun_path[sizeof((sun).sun_path)-1] = '\\0') : sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)))\n> > ! #define UNIXSOCK_LEN(sun) \\\n> > ! 
(strlen((sun).sun_path)+ offsetof(struct sockaddr_un, sun_path))\n> > #endif\n> > \n> > /*\n> > Index: src/interfaces/libpq/fe-connect.c\n> > *** src/interfaces/libpq/fe-connect.c\t2000/06/30 21:15:51\t1.1\n> > --- src/interfaces/libpq/fe-connect.c\t2000/07/01 18:50:47\t1.3\n> > ***************\n> > *** 125,130 ****\n> > --- 125,133 ----\n> > \t{\"port\", \"PGPORT\", DEF_PGPORT, NULL,\n> > \t\"Database-Port\", \"\", 6},\n> > \n> > + \t{\"unixsocket\", \"PGUNIXSOCKET\", NULL, NULL,\n> > + \t\"Unix-Socket\", \"\", 80},\n> > + \n> > \t{\"tty\", \"PGTTY\", DefaultTty, NULL,\n> > \t\"Backend-Debug-TTY\", \"D\", 40},\n> > \n> > ***************\n> > *** 293,298 ****\n> > --- 296,303 ----\n> > \tconn->pghost = tmp ? strdup(tmp) : NULL;\n> > \ttmp = conninfo_getval(connOptions, \"port\");\n> > \tconn->pgport = tmp ? strdup(tmp) : NULL;\n> > + \ttmp = conninfo_getval(connOptions, \"unixsocket\");\n> > + \tconn->pgunixsocket = tmp ? strdup(tmp) : NULL;\n> > \ttmp = conninfo_getval(connOptions, \"tty\");\n> > \tconn->pgtty = tmp ? 
strdup(tmp) : NULL;\n> > \ttmp = conninfo_getval(connOptions, \"options\");\n> > ***************\n> > *** 369,374 ****\n> > --- 374,382 ----\n> > *\t PGPORT\t identifies TCP port to which to connect if <pgport> argument\n> > *\t\t\t\t is NULL or a null string.\n> > *\n> > + *\t PGUNIXSOCKET\t identifies Unix-domain socket to which to connect; default\n> > + *\t\t\t\t is computed from the TCP port.\n> > + *\n> > *\t PGTTY\t\t identifies tty to which to send messages if <pgtty> argument\n> > *\t\t\t\t is NULL or a null string.\n> > *\n> > ***************\n> > *** 422,427 ****\n> > --- 430,439 ----\n> > \telse\n> > \t\tconn->pgport = strdup(pgport);\n> > \n> > + \tconn->pgunixsocket = getenv(\"PGUNIXSOCKET\");\n> > + \tif (conn->pgunixsocket)\n> > + \t\tconn->pgunixsocket = strdup(conn->pgunixsocket);\n> > + \n> > \tif ((pgtty == NULL) || pgtty[0] == '\\0')\n> > \t{\n> > \t\tif ((tmp = getenv(\"PGTTY\")) == NULL)\n> > ***************\n> > *** 489,501 ****\n> > \n> > /*\n> > * update_db_info -\n> > ! * get all additional infos out of dbName\n> > *\n> > */\n> > static int\n> > update_db_info(PGconn *conn)\n> > {\n> > ! \tchar\t *tmp,\n> > \t\t\t *old = conn->dbName;\n> > \n> > \tif (strchr(conn->dbName, '@') != NULL)\n> > --- 501,513 ----\n> > \n> > /*\n> > * update_db_info -\n> > ! * get all additional info out of dbName\n> > *\n> > */\n> > static int\n> > update_db_info(PGconn *conn)\n> > {\n> > ! 
\tchar\t *tmp, *tmp2,\n> > \t\t\t *old = conn->dbName;\n> > \n> > \tif (strchr(conn->dbName, '@') != NULL)\n> > ***************\n> > *** 504,509 ****\n> > --- 516,523 ----\n> > \t\ttmp = strrchr(conn->dbName, ':');\n> > \t\tif (tmp != NULL)\t\t/* port number given */\n> > \t\t{\n> > + \t\t\tif (conn->pgport)\n> > + \t\t\t\tfree(conn->pgport);\n> > \t\t\tconn->pgport = strdup(tmp + 1);\n> > \t\t\t*tmp = '\\0';\n> > \t\t}\n> > ***************\n> > *** 511,516 ****\n> > --- 525,532 ----\n> > \t\ttmp = strrchr(conn->dbName, '@');\n> > \t\tif (tmp != NULL)\t\t/* host name given */\n> > \t\t{\n> > + \t\t\tif (conn->pghost)\n> > + \t\t\t\tfree(conn->pghost);\n> > \t\t\tconn->pghost = strdup(tmp + 1);\n> > \t\t\t*tmp = '\\0';\n> > \t\t}\n> > ***************\n> > *** 537,549 ****\n> > \n> > \t\t\t/*\n> > \t\t\t * new style:\n> > ! \t\t\t * <tcp|unix>:postgresql://server[:port][/dbname][?options]\n> > \t\t\t */\n> > \t\t\toffset += strlen(\"postgresql://\");\n> > \n> > \t\t\ttmp = strrchr(conn->dbName + offset, '?');\n> > \t\t\tif (tmp != NULL)\t/* options given */\n> > \t\t\t{\n> > \t\t\t\tconn->pgoptions = strdup(tmp + 1);\n> > \t\t\t\t*tmp = '\\0';\n> > \t\t\t}\n> > --- 553,567 ----\n> > \n> > \t\t\t/*\n> > \t\t\t * new style:\n> > ! 
\t\t\t * <tcp|unix>:postgresql://server[:port|:/unixsocket/path:][/dbname][?options]\n> > \t\t\t */\n> > \t\t\toffset += strlen(\"postgresql://\");\n> > \n> > \t\t\ttmp = strrchr(conn->dbName + offset, '?');\n> > \t\t\tif (tmp != NULL)\t/* options given */\n> > \t\t\t{\n> > + \t\t\t\tif (conn->pgoptions)\n> > + \t\t\t\t\tfree(conn->pgoptions);\n> > \t\t\t\tconn->pgoptions = strdup(tmp + 1);\n> > \t\t\t\t*tmp = '\\0';\n> > \t\t\t}\n> > ***************\n> > *** 551,576 ****\n> > \t\t\ttmp = strrchr(conn->dbName + offset, '/');\n> > \t\t\tif (tmp != NULL)\t/* database name given */\n> > \t\t\t{\n> > \t\t\t\tconn->dbName = strdup(tmp + 1);\n> > \t\t\t\t*tmp = '\\0';\n> > \t\t\t}\n> > \t\t\telse\n> > \t\t\t{\n> > \t\t\t\tif ((tmp = getenv(\"PGDATABASE\")) != NULL)\n> > \t\t\t\t\tconn->dbName = strdup(tmp);\n> > \t\t\t\telse if (conn->pguser)\n> > \t\t\t\t\tconn->dbName = strdup(conn->pguser);\n> > \t\t\t}\n> > \n> > \t\t\ttmp = strrchr(old + offset, ':');\n> > ! \t\t\tif (tmp != NULL)\t/* port number given */\n> > \t\t\t{\n> > - \t\t\t\tconn->pgport = strdup(tmp + 1);\n> > \t\t\t\t*tmp = '\\0';\n> > \t\t\t}\n> > \n> > \t\t\tif (strncmp(old, \"unix:\", 5) == 0)\n> > \t\t\t{\n> > \t\t\t\tconn->pghost = NULL;\n> > \t\t\t\tif (strcmp(old + offset, \"localhost\") != 0)\n> > \t\t\t\t{\n> > --- 569,630 ----\n> > \t\t\ttmp = strrchr(conn->dbName + offset, '/');\n> > \t\t\tif (tmp != NULL)\t/* database name given */\n> > \t\t\t{\n> > + \t\t\t\tif (conn->dbName)\n> > + \t\t\t\t\tfree(conn->dbName);\n> > \t\t\t\tconn->dbName = strdup(tmp + 1);\n> > \t\t\t\t*tmp = '\\0';\n> > \t\t\t}\n> > \t\t\telse\n> > \t\t\t{\n> > + \t\t\t\t/* Why do we default only this value from the environment again? 
*/\n> > \t\t\t\tif ((tmp = getenv(\"PGDATABASE\")) != NULL)\n> > + \t\t\t\t{\n> > + \t\t\t\t\tif (conn->dbName)\n> > + \t\t\t\t\t\tfree(conn->dbName);\n> > \t\t\t\t\tconn->dbName = strdup(tmp);\n> > + \t\t\t\t}\n> > \t\t\t\telse if (conn->pguser)\n> > + \t\t\t\t{\n> > + \t\t\t\t\tif (conn->dbName)\n> > + \t\t\t\t\t\tfree(conn->dbName);\n> > \t\t\t\t\tconn->dbName = strdup(conn->pguser);\n> > + \t\t\t\t}\n> > \t\t\t}\n> > \n> > \t\t\ttmp = strrchr(old + offset, ':');\n> > ! \t\t\tif (tmp != NULL)\t/* port number or Unix socket path given */\n> > \t\t\t{\n> > \t\t\t\t*tmp = '\\0';\n> > + \t\t\t\tif ((tmp2 = strchr(tmp + 1, ':')) != NULL)\n> > + \t\t\t\t{\n> > + \t\t\t\t\tif (strncmp(old, \"unix:\", 5) != 0)\n> > + \t\t\t\t\t{\n> > + \t\t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> > + \t\t\t\t\t\t\t\t \"connectDBStart() -- \"\n> > + \t\t\t\t\t\t\t\t \"socket name can only be specified with \"\n> > + \t\t\t\t\t\t\t\t \"non-TCP\\n\");\n> > + \t\t\t\t\t\treturn 1; \n> > + \t\t\t\t\t}\n> > + \t\t\t\t\t*tmp2 = '\\0';\n> > + \t\t\t\t\tif (conn->pgunixsocket)\n> > + \t\t\t\t\t\tfree(conn->pgunixsocket);\n> > + \t\t\t\t\tconn->pgunixsocket = strdup(tmp + 1);\n> > + \t\t\t\t}\n> > + \t\t\t\telse\n> > + \t\t\t\t{\n> > + \t\t\t\t\tif (conn->pgport)\n> > + \t\t\t\t\t\tfree(conn->pgport);\n> > + \t\t\t\t\tconn->pgport = strdup(tmp + 1);\n> > + \t\t\t\t\tif (conn->pgunixsocket)\n> > + \t\t\t\t\t\tfree(conn->pgunixsocket);\n> > + \t\t\t\t\tconn->pgunixsocket = NULL;\n> > + \t\t\t\t}\n> > \t\t\t}\n> > \n> > \t\t\tif (strncmp(old, \"unix:\", 5) == 0)\n> > \t\t\t{\n> > + \t\t\t\tif (conn->pghost)\n> > + \t\t\t\t\tfree(conn->pghost);\n> > \t\t\t\tconn->pghost = NULL;\n> > \t\t\t\tif (strcmp(old + offset, \"localhost\") != 0)\n> > \t\t\t\t{\n> > ***************\n> > *** 582,589 ****\n> > \t\t\t\t}\n> > \t\t\t}\n> > \t\t\telse\n> > \t\t\t\tconn->pghost = strdup(old + offset);\n> > ! 
\n> > \t\t\tfree(old);\n> > \t\t}\n> > \t}\n> > --- 636,646 ----\n> > \t\t\t\t}\n> > \t\t\t}\n> > \t\t\telse\n> > + \t\t\t{\n> > + \t\t\t\tif (conn->pghost)\n> > + \t\t\t\t\tfree(conn->pghost);\n> > \t\t\t\tconn->pghost = strdup(old + offset);\n> > ! \t\t\t}\n> > \t\t\tfree(old);\n> > \t\t}\n> > \t}\n> > ***************\n> > *** 743,749 ****\n> > \t}\n> > #if !defined(WIN32) && !defined(__CYGWIN32__)\n> > \telse\n> > ! \t\tconn->raddr_len = UNIXSOCK_PATH(conn->raddr.un, portno);\n> > #endif\n> > \n> > \n> > --- 800,809 ----\n> > \t}\n> > #if !defined(WIN32) && !defined(__CYGWIN32__)\n> > \telse\n> > ! \t{\n> > ! \t\tUNIXSOCK_PATH(conn->raddr.un, portno, conn->pgunixsocket);\n> > ! \t\tconn->raddr_len = UNIXSOCK_LEN(conn->raddr.un);\n> > ! \t}\n> > #endif\n> > \n> > \n> > ***************\n> > *** 892,898 ****\n> > \t\t\t\t\t\t\t conn->pghost ? conn->pghost : \"localhost\",\n> > \t\t\t\t\t\t\t (family == AF_INET) ?\n> > \t\t\t\t\t\t\t \"TCP/IP port\" : \"Unix socket\",\n> > ! \t\t\t\t\t\t\t conn->pgport);\n> > \t\t\tgoto connect_errReturn;\n> > \t\t}\n> > \t}\n> > --- 952,959 ----\n> > \t\t\t\t\t\t\t conn->pghost ? conn->pghost : \"localhost\",\n> > \t\t\t\t\t\t\t (family == AF_INET) ?\n> > \t\t\t\t\t\t\t \"TCP/IP port\" : \"Unix socket\",\n> > ! \t\t\t\t\t\t\t (family == AF_UNIX && conn->pgunixsocket) ?\n> > ! \t\t\t\t\t\t\t conn->pgunixsocket : conn->pgport);\n> > \t\t\tgoto connect_errReturn;\n> > \t\t}\n> > \t}\n> > ***************\n> > *** 1123,1129 ****\n> > \t\t\t\t\t\t\t conn->pghost ? conn->pghost : \"localhost\",\n> > \t\t\t\t\t\t\t\t (conn->raddr.sa.sa_family == AF_INET) ?\n> > \t\t\t\t\t\t\t\t\t \"TCP/IP port\" : \"Unix socket\",\n> > ! \t\t\t\t\t\t\t\t\t conn->pgport);\n> > \t\t\t\t\tgoto error_return;\n> > \t\t\t\t}\n> > \n> > --- 1184,1191 ----\n> > \t\t\t\t\t\t\t conn->pghost ? conn->pghost : \"localhost\",\n> > \t\t\t\t\t\t\t\t (conn->raddr.sa.sa_family == AF_INET) ?\n> > \t\t\t\t\t\t\t\t\t \"TCP/IP port\" : \"Unix socket\",\n> > ! 
\t\t\t\t\t\t\t (conn->raddr.sa.sa_family == AF_UNIX && conn->pgunixsocket) ?\n> > ! \t\t\t\t\t\t\t\t\t conn->pgunixsocket : conn->pgport);\n> > \t\t\t\t\tgoto error_return;\n> > \t\t\t\t}\n> > \n> > ***************\n> > *** 1799,1804 ****\n> > --- 1861,1868 ----\n> > \t\tfree(conn->pghostaddr);\n> > \tif (conn->pgport)\n> > \t\tfree(conn->pgport);\n> > + \tif (conn->pgunixsocket)\n> > + \t\tfree(conn->pgunixsocket);\n> > \tif (conn->pgtty)\n> > \t\tfree(conn->pgtty);\n> > \tif (conn->pgoptions)\n> > ***************\n> > *** 2383,2388 ****\n> > --- 2447,2460 ----\n> > \tif (!conn)\n> > \t\treturn (char *) NULL;\n> > \treturn conn->pgport;\n> > + }\n> > + \n> > + char *\n> > + PQunixsocket(const PGconn *conn)\n> > + {\n> > + \tif (!conn)\n> > + \t\treturn (char *) NULL;\n> > + \treturn conn->pgunixsocket;\n> > }\n> > \n> > char *\n> > Index: src/interfaces/libpq/libpq-fe.h\n> > *** src/interfaces/libpq/libpq-fe.h\t2000/06/30 21:15:51\t1.1\n> > --- src/interfaces/libpq/libpq-fe.h\t2000/07/01 18:20:40\t1.2\n> > ***************\n> > *** 214,219 ****\n> > --- 214,220 ----\n> > \textern char *PQpass(const PGconn *conn);\n> > \textern char *PQhost(const PGconn *conn);\n> > \textern char *PQport(const PGconn *conn);\n> > + \textern char *PQunixsocket(const PGconn *conn);\n> > \textern char *PQtty(const PGconn *conn);\n> > \textern char *PQoptions(const PGconn *conn);\n> > \textern ConnStatusType PQstatus(const PGconn *conn);\n> > Index: src/interfaces/libpq/libpq-int.h\n> > *** src/interfaces/libpq/libpq-int.h\t2000/06/30 21:15:51\t1.1\n> > --- src/interfaces/libpq/libpq-int.h\t2000/07/01 18:20:40\t1.2\n> > ***************\n> > *** 202,207 ****\n> > --- 202,209 ----\n> > \t\t\t\t\t\t\t\t * numbers-and-dots notation. Takes\n> > \t\t\t\t\t\t\t\t * precedence over above. 
*/\n> > \tchar\t *pgport;\t\t\t/* the server's communication port */\n> > + \tchar\t *pgunixsocket;\t\t/* the Unix-domain socket that the server is listening on;\n> > + \t\t\t\t\t\t * if NULL, uses a default constructed from pgport */\n> > \tchar\t *pgtty;\t\t\t/* tty on which the backend messages is\n> > \t\t\t\t\t\t\t\t * displayed (NOT ACTUALLY USED???) */\n> > \tchar\t *pgoptions;\t\t/* options to start the backend with */\n> > Index: src/interfaces/libpq/libpqdll.def\n> > *** src/interfaces/libpq/libpqdll.def\t2000/06/30 21:15:51\t1.1\n> > --- src/interfaces/libpq/libpqdll.def\t2000/07/01 18:20:40\t1.2\n> > ***************\n> > *** 79,81 ****\n> > --- 79,82 ----\n> > \tdestroyPQExpBuffer\t@ 76\n> > \tcreatePQExpBuffer\t@ 77\n> > \tPQconninfoFree\t\t@ 78\n> > + \tPQunixsocket\t\t@ 79\n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 10 Oct 2000 22:37:07 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] PostgreSQL virtual hosting support" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I am tempted to apply this. This is the second person who asked for\n> binding to a single port. The patch looks quite complete, with doc\n> changes. It appears to be a thorough job.\n\nA cursory inspection makes it look like the socket file can be placed\n_anywhere_ -- even under a chroot jail. This would make more than one\nperson's day, Bruce. 
This would allow a chrooted webserver to connect\nto a postmaster outside the jail -- while not terribly appealing from a\nsecurity standpoint (as any pipe out of a jail could be exploited), it\nIS appealing when you need more than one chrooted webserver (doing\nvirtual hosting) to connect to a common database (for hosting-site-wide\nservices).\n\nIf this patch passes muster (must pass Tom Lane's eyes), and you feel\ncomfortable with it, I don't see why not -- this might not be a major\nfeature, but it IS a nice one, IMHO.\n\nNow, if I'm wrong about the placement of the socket, well, I'm just\nwrong -- but the vhosting feature is still nice.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 10 Oct 2000 21:46:44 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] PostgreSQL virtual hosting support" }, { "msg_contents": "* Bruce Momjian <[email protected]> [001010 18:36] wrote:\n> I am tempted to apply this. This is the second person who asked for\n> binding to a single port. The patch looks quite complete, with doc\n> changes. It appears to be a thorough job.\n> \n> Any objections?\n\nI know several other people were struggling with having multiple instances\nof postgresql running on a box, especially keeping the unix domain pipe\nhidden; this looks like a great thing to add.\n\n-Alfred\n", "msg_date": "Tue, 10 Oct 2000 18:48:54 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] PostgreSQL virtual hosting support" }, { "msg_contents": "Bruce Momjian writes:\n\n> I am tempted to apply this. This is the second person who asked for\n> binding to a single port. The patch looks quite complete, with doc\n> changes. It appears to be a thorough job.\n\nPostmaster options are evil; please put something in backend/utils/guc.c. 
\n(This is not the fault of the patch submitter, since this interface is new\nfor 7.1, but that still doesn't mean we should subvert it.)\n\n> > 2. The ability to place the Unix-domain socket in a mode 700 directory.\n\nThis would be a rather sharp instrument to offer to the world at large,\nbecause the socket file is also a lock file, so you can't just get rid of\nit.\n\nIf we were to offer that anyway, I'd opine that we reuse the -h option\n(e.g., leading slash means Unix socket) rather than adding a -k option\neverywhere.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 11 Oct 2000 18:40:54 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] PostgreSQL virtual hosting support" }, { "msg_contents": "I have finally merged this patch into our current tree, and have created\na diff file that I have attached to this email. If no one objects, I\nwill apply this change to the current tree tomorrow. It allows the\npostmaster to listen to only certain hosts and to place the socket file\nin a certain directory.\n\nThe only item not done is that PQsetdbLogin() does not have a parameter\nto accept the unix socket path. Not sure we want to change that API\njust to allow unix socket path specification.\n\n> Your name\t\t:\tDavid MacKenzie\n> Your email address\t:\[email protected]\n> \n> \n> System Configuration\n> ---------------------\n> Architecture (example: Intel Pentium) \t: Intel x86\n> \n> Operating System (example: Linux 2.0.26 ELF) \t: BSD/OS 4.0.1\n> \n> PostgreSQL version (example: PostgreSQL-7.0): PostgreSQL-7.0.2\n> \n> Compiler used (example: gcc 2.8.0)\t\t: gcc version 2.7.2.1\n> \n> \n> Please enter a FULL description of your problem:\n> ------------------------------------------------\n> \n> UUNET is looking into offering PostgreSQL as a part of a managed web\n> hosting product, on both shared and dedicated machines. 
We currently\n> offer Oracle and MySQL, and it would be a nice middle-ground.\n> However, as shipped, PostgreSQL lacks the following features we need\n> that MySQL has:\n> \n> 1. The ability to listen only on a particular IP address. Each\n> hosting customer has their own IP address, on which all of their\n> servers (http, ftp, real media, etc.) run.\n> 2. The ability to place the Unix-domain socket in a mode 700 directory.\n> This allows us to automatically create an empty database, with an\n> empty DBA password, for new or upgrading customers without having\n> to interactively set a DBA password and communicate it to (or from)\n> the customer. This in turn cuts down our install and upgrade times.\n> 3. The ability to connect to the Unix-domain socket from within a\n> change-rooted environment. We run CGI programs chrooted to the\n> user's home directory, which is another reason why we need to be\n> able to specify where the Unix-domain socket is, instead of /tmp.\n> 4. The ability to, if run as root, open a pid file in /var/run as\n> root, and then setuid to the desired user. (mysqld -u can almost\n> do this; I had to patch it, too).\n> \n> The patch below fixes problem 1-3. I plan to address #4, also, but\n> haven't done so yet. These diffs are big enough that they should give\n> the PG development team something to think about in the meantime :-)\n> Also, I'm about to leave for 2 weeks' vacation, so I thought I'd get\n> out what I have, which works (for the problems it tackles), now.\n> \n> With these changes, we can set up and run PostgreSQL with scripts the\n> same way we can with apache or proftpd or mysql.\n> \n> In summary, this patch makes the following enhancements:\n> \n> 1. Adds an environment variable PGUNIXSOCKET, analogous to MYSQL_UNIX_PORT,\n> and command line options -k --unix-socket to the relevant programs.\n> 2. Adds a -h option to postmaster to set the hostname or IP address to\n> listen on instead of the default INADDR_ANY.\n> 3. 
Extends some library interfaces to support the above.\n> 4. Fixes a few memory leaks in PQconnectdb().\n> \n> The default behavior is unchanged from stock 7.0.2; if you don't use\n> any of these new features, they don't change the operation.\n> \n> Index: doc/src/sgml/layout.sgml\n> *** doc/src/sgml/layout.sgml\t2000/06/30 21:15:36\t1.1\n> --- doc/src/sgml/layout.sgml\t2000/07/02 03:56:05\t1.2\n> ***************\n> *** 55,61 ****\n> For example, if the database server machine is a remote machine, you\n> will need to set the <envar>PGHOST</envar> environment variable to the name\n> of the database server machine. The environment variable\n> ! <envar>PGPORT</envar> may also have to be set. The bottom line is this: if\n> you try to start an application program and it complains\n> that it cannot connect to the <Application>postmaster</Application>,\n> you must go back and make sure that your\n> --- 55,62 ----\n> For example, if the database server machine is a remote machine, you\n> will need to set the <envar>PGHOST</envar> environment variable to the name\n> of the database server machine. The environment variable\n> ! <envar>PGPORT</envar> or <envar>PGUNIXSOCKET</envar> may also have to be set.\n> ! 
The bottom line is this: if\n> you try to start an application program and it complains\n> that it cannot connect to the <Application>postmaster</Application>,\n> you must go back and make sure that your\n> Index: doc/src/sgml/libpq++.sgml\n> *** doc/src/sgml/libpq++.sgml\t2000/06/30 21:15:36\t1.1\n> --- doc/src/sgml/libpq++.sgml\t2000/07/02 03:56:05\t1.2\n> ***************\n> *** 93,98 ****\n> --- 93,105 ----\n> </listitem>\n> <listitem>\n> <para>\n> + \t<envar>PGUNIXSOCKET</envar> sets the full Unix domain socket\n> + \tfile name for communicating with the <productname>Postgres</productname>\n> + \tbackend.\n> + </para>\n> + </listitem>\n> + <listitem>\n> + <para>\n> \t<envar>PGDATABASE</envar> sets the default \n> \t<productname>Postgres</productname> database name.\n> </para>\n> Index: doc/src/sgml/libpq.sgml\n> *** doc/src/sgml/libpq.sgml\t2000/06/30 21:15:36\t1.1\n> --- doc/src/sgml/libpq.sgml\t2000/07/02 03:56:05\t1.2\n> ***************\n> *** 134,139 ****\n> --- 134,148 ----\n> </varlistentry>\n> \n> <varlistentry>\n> + <term><literal>unixsocket</literal></term>\n> + <listitem>\n> + <para>\n> + Full path to Unix-domain socket file to connect to at the server host.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> + <varlistentry>\n> <term><literal>dbname</literal></term>\n> <listitem>\n> <para>\n> ***************\n> *** 545,550 ****\n> --- 554,569 ----\n> \n> <listitem>\n> <para>\n> + <function>PQunixsocket</function>\n> + Returns the name of the Unix-domain socket of the connection.\n> + <synopsis>\n> + char *PQunixsocket(const PGconn *conn)\n> + </synopsis>\n> + </para>\n> + </listitem>\n> + \n> + <listitem>\n> + <para>\n> <function>PQtty</function>\n> Returns the debug tty of the connection.\n> <synopsis>\n> ***************\n> *** 1772,1777 ****\n> --- 1791,1803 ----\n> <envar>PGHOST</envar> sets the default server name.\n> If a non-zero-length string is specified, TCP/IP communication is used.\n> Without a host name, libpq will connect using a 
local Unix domain socket.\n> + </para>\n> + </listitem>\n> + <listitem>\n> + <para>\n> + <envar>PGPORT</envar> sets the default port or local Unix domain socket\n> + file extension for communicating with the <productname>Postgres</productname>\n> + backend.\n> </para>\n> </listitem>\n> <listitem>\n> Index: doc/src/sgml/start.sgml\n> *** doc/src/sgml/start.sgml\t2000/06/30 21:15:37\t1.1\n> --- doc/src/sgml/start.sgml\t2000/07/02 03:56:05\t1.2\n> ***************\n> *** 110,117 ****\n> will need to set the <acronym>PGHOST</acronym> environment\n> variable to the name\n> of the database server machine. The environment variable\n> ! <acronym>PGPORT</acronym> may also have to be set. The bottom\n> ! line is this: if\n> you try to start an application program and it complains\n> that it cannot connect to the <application>postmaster</application>,\n> you should immediately consult your site administrator to make\n> --- 110,117 ----\n> will need to set the <acronym>PGHOST</acronym> environment\n> variable to the name\n> of the database server machine. The environment variable\n> ! <acronym>PGPORT</acronym> or <acronym>PGUNIXSOCKET</acronym> may also have to be set.\n> ! 
The bottom line is this: if\n> you try to start an application program and it complains\n> that it cannot connect to the <application>postmaster</application>,\n> you should immediately consult your site administrator to make\n> Index: doc/src/sgml/ref/createdb.sgml\n> *** doc/src/sgml/ref/createdb.sgml\t2000/06/30 21:15:37\t1.1\n> --- doc/src/sgml/ref/createdb.sgml\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 58,63 ****\n> --- 58,75 ----\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> <varlistentry>\n> <term>-U, --username <replaceable class=\"parameter\">username</replaceable></term>\n> <listitem>\n> Index: doc/src/sgml/ref/createlang.sgml\n> *** doc/src/sgml/ref/createlang.sgml\t2000/06/30 21:15:37\t1.1\n> --- doc/src/sgml/ref/createlang.sgml\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 96,101 ****\n> --- 96,113 ----\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> <varlistentry>\n> <term>-U, --username <replaceable class=\"parameter\">username</replaceable></term>\n> <listitem>\n> Index: doc/src/sgml/ref/createuser.sgml\n> *** doc/src/sgml/ref/createuser.sgml\t2000/06/30 21:15:37\t1.1\n> --- doc/src/sgml/ref/createuser.sgml\t2000/07/04 04:46:45\t1.2\n> 
***************\n> *** 59,64 ****\n> --- 59,76 ----\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> <varlistentry>\n> <term>-e, --echo</term>\n> <listitem>\n> Index: doc/src/sgml/ref/dropdb.sgml\n> *** doc/src/sgml/ref/dropdb.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/dropdb.sgml\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 58,63 ****\n> --- 58,75 ----\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> <varlistentry>\n> <term>-U, --username <replaceable class=\"parameter\">username</replaceable></term>\n> <listitem>\n> Index: doc/src/sgml/ref/droplang.sgml\n> *** doc/src/sgml/ref/droplang.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/droplang.sgml\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 96,101 ****\n> --- 96,113 ----\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + 
</varlistentry>\n> + \n> <varlistentry>\n> <term>-U, --username <replaceable class=\"parameter\">username</replaceable></term>\n> <listitem>\n> Index: doc/src/sgml/ref/dropuser.sgml\n> *** doc/src/sgml/ref/dropuser.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/dropuser.sgml\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 58,63 ****\n> --- 58,75 ----\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> <varlistentry>\n> <term>-e, --echo</term>\n> <listitem>\n> Index: doc/src/sgml/ref/pg_dump.sgml\n> *** doc/src/sgml/ref/pg_dump.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/pg_dump.sgml\t2000/07/01 18:41:22\t1.2\n> ***************\n> *** 24,30 ****\n> </refsynopsisdivinfo>\n> <synopsis>\n> pg_dump [ <replaceable class=\"parameter\">dbname</replaceable> ]\n> ! pg_dump [ -h <replaceable class=\"parameter\">host</replaceable> ] [ -p <replaceable class=\"parameter\">port</replaceable> ]\n> [ -t <replaceable class=\"parameter\">table</replaceable> ]\n> [ -a ] [ -c ] [ -d ] [ -D ] [ -i ] [ -n ] [ -N ]\n> [ -o ] [ -s ] [ -u ] [ -v ] [ -x ]\n> --- 24,32 ----\n> </refsynopsisdivinfo>\n> <synopsis>\n> pg_dump [ <replaceable class=\"parameter\">dbname</replaceable> ]\n> ! pg_dump [ -h <replaceable class=\"parameter\">host</replaceable> ]\n> ! [ -k <replaceable class=\"parameter\">path</replaceable> ]\n> ! 
[ -p <replaceable class=\"parameter\">port</replaceable> ]\n> [ -t <replaceable class=\"parameter\">table</replaceable> ]\n> [ -a ] [ -c ] [ -d ] [ -D ] [ -i ] [ -n ] [ -N ]\n> [ -o ] [ -s ] [ -u ] [ -v ] [ -x ]\n> ***************\n> *** 200,205 ****\n> --- 202,222 ----\n> \t<application>postmaster</application>\n> \tis running. Defaults to using a local Unix domain socket\n> \trather than an IP connection..\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> + <varlistentry>\n> + <term>-k <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + \tSpecifies the local Unix domain socket file path\n> + \ton which the <application>postmaster</application>\n> + \tis listening for connections.\n> + Without this option, the socket path name defaults to\n> + the value of the <envar>PGUNIXSOCKET</envar> environment\n> + \tvariable (if set), otherwise it is constructed\n> + from the port number.\n> </para>\n> </listitem>\n> </varlistentry>\n> Index: doc/src/sgml/ref/pg_dumpall.sgml\n> *** doc/src/sgml/ref/pg_dumpall.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/pg_dumpall.sgml\t2000/07/01 18:41:22\t1.2\n> ***************\n> *** 24,30 ****\n> </refsynopsisdivinfo>\n> <synopsis>\n> pg_dumpall\n> ! pg_dumpall [ -h <replaceable class=\"parameter\">host</replaceable> ] [ -p <replaceable class=\"parameter\">port</replaceable> ] [ -a ] [ -d ] [ -D ] [ -O ] [ -s ] [ -u ] [ -v ] [ -x ]\n> </synopsis>\n> \n> <refsect2 id=\"R2-APP-PG-DUMPALL-1\">\n> --- 24,33 ----\n> </refsynopsisdivinfo>\n> <synopsis>\n> pg_dumpall\n> ! pg_dumpall [ -h <replaceable class=\"parameter\">host</replaceable> ]\n> ! [ -k <replaceable class=\"parameter\">path</replaceable> ]\n> ! [ -p <replaceable class=\"parameter\">port</replaceable> ]\n> ! 
[ -a ] [ -d ] [ -D ] [ -O ] [ -s ] [ -u ] [ -v ] [ -x ]\n> </synopsis>\n> \n> <refsect2 id=\"R2-APP-PG-DUMPALL-1\">\n> ***************\n> *** 137,142 ****\n> --- 140,160 ----\n> \t<application>postmaster</application>\n> \tis running. Defaults to using a local Unix domain socket\n> \trather than an IP connection..\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> + <varlistentry>\n> + <term>-k <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + \tSpecifies the local Unix domain socket file path\n> + \ton which the <application>postmaster</application>\n> + \tis listening for connections.\n> + Without this option, the socket path name defaults to\n> + the value of the <envar>PGUNIXSOCKET</envar> environment\n> + \tvariable (if set), otherwise it is constructed\n> + from the port number.\n> </para>\n> </listitem>\n> </varlistentry>\n> Index: doc/src/sgml/ref/postmaster.sgml\n> *** doc/src/sgml/ref/postmaster.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/postmaster.sgml\t2000/07/06 07:48:31\t1.7\n> ***************\n> *** 24,30 ****\n> </refsynopsisdivinfo>\n> <synopsis>\n> postmaster [ -B <replaceable class=\"parameter\">nBuffers</replaceable> ] [ -D <replaceable class=\"parameter\">DataDir</replaceable> ] [ -N <replaceable class=\"parameter\">maxBackends</replaceable> ] [ -S ]\n> ! [ -d <replaceable class=\"parameter\">DebugLevel</replaceable> ] [ -i ] [ -l ]\n> [ -o <replaceable class=\"parameter\">BackendOptions</replaceable> ] [ -p <replaceable class=\"parameter\">port</replaceable> ] [ -n | -s ]\n> </synopsis>\n> \n> --- 24,32 ----\n> </refsynopsisdivinfo>\n> <synopsis>\n> postmaster [ -B <replaceable class=\"parameter\">nBuffers</replaceable> ] [ -D <replaceable class=\"parameter\">DataDir</replaceable> ] [ -N <replaceable class=\"parameter\">maxBackends</replaceable> ] [ -S ]\n> ! [ -d <replaceable class=\"parameter\">DebugLevel</replaceable> ]\n> ! 
[ -h <replaceable class=\"parameter\">hostname</replaceable> ] [ -i ]\n> ! [ -k <replaceable class=\"parameter\">path</replaceable> ] [ -l ]\n> [ -o <replaceable class=\"parameter\">BackendOptions</replaceable> ] [ -p <replaceable class=\"parameter\">port</replaceable> ] [ -n | -s ]\n> </synopsis>\n> \n> ***************\n> *** 124,129 ****\n> --- 126,161 ----\n> </varlistentry>\n> \n> <varlistentry>\n> + <term>-h <replaceable class=\"parameter\">hostName</replaceable></term>\n> + <listitem>\n> + <para>\n> + \tSpecifies the TCP/IP hostname or address\n> + \ton which the <application>postmaster</application>\n> + \tis to listen for connections from frontend applications. Defaults to\n> + \tthe value of the \n> + \t<envar>PGHOST</envar> \n> + \tenvironment variable, or if <envar>PGHOST</envar>\n> + \tis not set, then defaults to \"all\", meaning listen on all configured addresses\n> + \t(including localhost).\n> + </para>\n> + <para>\n> + \tIf you use a hostname or address other than \"all\", do not try to run\n> + \tmultiple instances of <application>postmaster</application> on the\n> + \tsame IP address but different ports. Doing so will result in them\n> + \tattempting (incorrectly) to use the same shared memory segments.\n> + \tAlso, if you use a hostname other than \"all\", all of the host's IP addresses\n> + \ton which <application>postmaster</application> instances are\n> + \tlistening must be distinct in the two last octets.\n> + </para>\n> + <para>\n> + \tIf you do use \"all\" (the default), then each instance must listen on a\n> + \tdifferent port (via -p or <envar>PGPORT</envar>). 
And, of course, do\n> + \tnot try to use both approaches on one host.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> + <varlistentry>\n> <term>-i</term>\n> <listitem>\n> <para>\n> ***************\n> *** 135,140 ****\n> --- 167,201 ----\n> </varlistentry>\n> \n> <varlistentry>\n> + <term>-k <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + \tSpecifies the local Unix domain socket path name\n> + \ton which the <application>postmaster</application>\n> + \tis to listen for connections from frontend applications. Defaults to\n> + \tthe value of the \n> + \t<envar>PGUNIXSOCKET</envar> \n> + \tenvironment variable, or if <envar>PGUNIXSOCKET</envar>\n> + \tis not set, then defaults to a file in <filename>/tmp</filename>\n> + \tconstructed from the port number.\n> + </para>\n> + <para>\n> + You can use this option to put the Unix-domain socket in a\n> + directory that is private to one or more users using Unix\n> + \tdirectory permissions. This is necessary for securely\n> + \tcreating databases automatically on shared machines.\n> + In that situation, also disallow all TCP/IP connections\n> + \tinitially in <filename>pg_hba.conf</filename>.\n> + \tIf you specify a socket path other than the\n> + \tdefault then all frontend applications (including\n> + \t<application>psql</application>) must specify the same\n> + \tsocket path using either command-line options or\n> + \t<envar>PGUNIXSOCKET</envar>.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> + <varlistentry>\n> <term>-l</term>\n> <listitem>\n> <para>\n> Index: doc/src/sgml/ref/psql-ref.sgml\n> *** doc/src/sgml/ref/psql-ref.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/psql-ref.sgml\t2000/07/02 03:56:05\t1.3\n> ***************\n> *** 1329,1334 ****\n> --- 1329,1347 ----\n> \n> \n> <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket 
on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> + \n> + <varlistentry>\n> <term>-H, --html</term>\n> <listitem>\n> <para>\n> Index: doc/src/sgml/ref/vacuumdb.sgml\n> *** doc/src/sgml/ref/vacuumdb.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/vacuumdb.sgml\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 24,30 ****\n> </refsynopsisdivinfo>\n> <synopsis>\n> vacuumdb [ <replaceable class=\"parameter\">options</replaceable> ] [ --analyze | -z ]\n> ! [ --alldb | -a ] [ --verbose | -v ]\n> [ --table '<replaceable class=\"parameter\">table</replaceable> [ ( <replaceable class=\"parameter\">column</replaceable> [,...] ) ]' ] [ [-d] <replaceable class=\"parameter\">dbname</replaceable> ]\n> </synopsis>\n> \n> --- 24,30 ----\n> </refsynopsisdivinfo>\n> <synopsis>\n> vacuumdb [ <replaceable class=\"parameter\">options</replaceable> ] [ --analyze | -z ]\n> ! [ --all | -a ] [ --verbose | -v ]\n> [ --table '<replaceable class=\"parameter\">table</replaceable> [ ( <replaceable class=\"parameter\">column</replaceable> [,...] 
) ]' ] [ [-d] <replaceable class=\"parameter\">dbname</replaceable> ]\n> </synopsis>\n> \n> ***************\n> *** 128,133 ****\n> --- 128,145 ----\n> </para>\n> </listitem>\n> </varlistentry>\n> + \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> \n> <varlistentry>\n> <term>-U <replaceable class=\"parameter\">username</replaceable></term>\n> Index: src/backend/libpq/pqcomm.c\n> *** src/backend/libpq/pqcomm.c\t2000/06/30 21:15:40\t1.1\n> --- src/backend/libpq/pqcomm.c\t2000/07/01 18:50:46\t1.3\n> ***************\n> *** 42,47 ****\n> --- 42,48 ----\n> *\t\tStreamConnection\t- Create new connection with client\n> *\t\tStreamClose\t\t\t- Close a client/backend connection\n> *\t\tpq_getport\t\t- return the PGPORT setting\n> + *\t\tpq_getunixsocket\t- return the PGUNIXSOCKET setting\n> *\t\tpq_init\t\t\t- initialize libpq at backend startup\n> *\t\tpq_close\t\t- shutdown libpq at backend exit\n> *\n> ***************\n> *** 134,139 ****\n> --- 135,151 ----\n> }\n> \n> /* --------------------------------\n> + *\t\tpq_getunixsocket - return the PGUNIXSOCKET setting.\n> + *\t\tIf NULL, default to computing it based on the port.\n> + * --------------------------------\n> + */\n> + char *\n> + pq_getunixsocket(void)\n> + {\n> + \treturn getenv(\"PGUNIXSOCKET\");\n> + }\n> + \n> + /* --------------------------------\n> *\t\tpq_close - shutdown libpq at backend exit\n> *\n> * Note: in a standalone backend MyProcPort will be null,\n> ***************\n> *** 177,189 ****\n> /*\n> * StreamServerPort -- open a sock stream \"listening\" port.\n> *\n> ! 
* This initializes the Postmaster's connection-accepting port.\n *\n> * RETURNS: STATUS_OK or STATUS_ERROR\n> */\n> \n> int\n> ! StreamServerPort(char *hostName, unsigned short portName, int *fdP)\n> {\n> \tSockAddr\tsaddr;\n> \tint\t\t\tfd,\n> --- 189,205 ----\n> /*\n> * StreamServerPort -- open a sock stream \"listening\" port.\n> *\n> ! * This initializes the Postmaster's connection-accepting port fdP.\n> ! * If hostName is \"all\", listen on all configured IP addresses.\n> ! * If hostName is NULL, listen on a Unix-domain socket instead of TCP;\n> ! * if unixSocketName is NULL, a default path (constructed in UNIX_SOCK_PATH\n> ! * in include/libpq/pqcomm.h) based on portName is used.\n> *\n> * RETURNS: STATUS_OK or STATUS_ERROR\n> */\n> \n> int\n> ! StreamServerPort(char *hostName, unsigned short portNumber, char *unixSocketName, int *fdP)\n> {\n> \tSockAddr\tsaddr;\n> \tint\t\t\tfd,\n> ***************\n> *** 227,233 ****\n> \tsaddr.sa.sa_family = family;\n> \tif (family == AF_UNIX)\n> \t{\n> ! \t\tlen = UNIXSOCK_PATH(saddr.un, portName);\n> \t\tstrcpy(sock_path, saddr.un.sun_path);\n> \n> \t\t/*\n> --- 243,250 ----\n> \tsaddr.sa.sa_family = family;\n> \tif (family == AF_UNIX)\n> \t{\n> ! \t\tUNIXSOCK_PATH(saddr.un, portNumber, unixSocketName);\n> ! \t\tlen = UNIXSOCK_LEN(saddr.un);\n> \t\tstrcpy(sock_path, saddr.un.sun_path);\n> \n> \t\t/*\n> ***************\n> *** 259,267 ****\n> \t}\n> \telse\n> \t{\n> ! \t\tsaddr.in.sin_addr.s_addr = htonl(INADDR_ANY);\n> ! \t\tsaddr.in.sin_port = htons(portName);\n> ! \t\tlen = sizeof(struct sockaddr_in);\n> \t}\n> \terr = bind(fd, &saddr.sa, len);\n> \tif (err < 0)\n> --- 276,305 ----\n> \t}\n> \telse\n> \t{\n> ! \t /* TCP/IP socket */\n> ! \t if (!strcmp(hostName, \"all\")) /* like for databases in pg_hba.conf. */\n> ! \t saddr.in.sin_addr.s_addr = htonl(INADDR_ANY);\n> ! \t else\n> ! \t {\n> ! \t struct hostent *hp;\n> ! \n> ! \t hp = gethostbyname(hostName);\n> ! \t if ((hp == NULL) || (hp->h_addrtype != AF_INET))\n> ! 
\t\t{\n> ! \t\t snprintf(PQerrormsg, PQERRORMSG_LENGTH,\n> ! \t\t\t \"FATAL: StreamServerPort: gethostbyname(%s) failed: %s\\n\",\n> ! \t\t\t hostName, hstrerror(h_errno));\n> ! \t\t fputs(PQerrormsg, stderr);\n> ! \t\t pqdebug(\"%s\", PQerrormsg);\n> ! \t\t return STATUS_ERROR;\n> ! \t\t}\n> ! \t memmove((char *) &(saddr.in.sin_addr),\n> ! \t\t (char *) hp->h_addr,\n> ! \t\t hp->h_length);\n> ! \t }\n> ! \n> ! \t saddr.in.sin_port = htons(portNumber);\n> ! \t len = sizeof(struct sockaddr_in);\n> \t}\n> \terr = bind(fd, &saddr.sa, len);\n> \tif (err < 0)\n> Index: src/backend/postmaster/postmaster.c\n> *** src/backend/postmaster/postmaster.c\t2000/06/30 21:15:42\t1.1\n> --- src/backend/postmaster/postmaster.c\t2000/07/06 07:38:21\t1.5\n> ***************\n> *** 136,143 ****\n> /* list of ports associated with still open, but incomplete connections */\n> static Dllist *PortList;\n> \n> ! static unsigned short PostPortName = 0;\n> \n> /*\n> * This is a boolean indicating that there is at least one backend that\n> * is accessing the current shared memory and semaphores. Between the\n> --- 136,150 ----\n> /* list of ports associated with still open, but incomplete connections */\n> static Dllist *PortList;\n> \n> ! /* Hostname of interface to listen on, or 'all'. */\n> ! static char *HostName = NULL;\n> \n> + /* TCP/IP port number to listen on. Also used to default the Unix-domain socket name. */\n> + static unsigned short PostPortNumber = 0;\n> + \n> + /* Override of the default Unix-domain socket name to listen on, if non-NULL. */\n> + static char *UnixSocketName = NULL;\n> + \n> /*\n> * This is a boolean indicating that there is at least one backend that\n> * is accessing the current shared memory and semaphores. Between the\n> ***************\n> *** 274,280 ****\n> static void SignalChildren(SIGNAL_ARGS);\n> static int\tCountChildren(void);\n> static int\n> ! 
SetOptsFile(char *progname, int port, char *datadir,\n> \t\t\tint assert, int nbuf, char *execfile,\n> \t\t\tint debuglvl, int netserver,\n> #ifdef USE_SSL\n> --- 281,287 ----\n> static void SignalChildren(SIGNAL_ARGS);\n> static int\tCountChildren(void);\n> static int\n> ! SetOptsFile(char *progname, char *hostname, int port, char *unixsocket, char *datadir,\n> \t\t\tint assert, int nbuf, char *execfile,\n> \t\t\tint debuglvl, int netserver,\n> #ifdef USE_SSL\n> ***************\n> *** 370,380 ****\n> {\n> \textern int\tNBuffers;\t\t/* from buffer/bufmgr.c */\n> \tint\t\t\topt;\n> - \tchar\t *hostName;\n> \tint\t\t\tstatus;\n> \tint\t\t\tsilentflag = 0;\n> \tbool\t\tDataDirOK;\t\t/* We have a usable PGDATA value */\n> - \tchar\t\thostbuf[MAXHOSTNAMELEN];\n> \tint\t\t\tnonblank_argc;\n> \tchar\t\toriginal_extraoptions[MAXPGPATH];\n> \n> --- 377,385 ----\n> ***************\n> *** 431,449 ****\n> \t */\n> \tumask((mode_t) 0077);\n> \n> - \tif (!(hostName = getenv(\"PGHOST\")))\n> - \t{\n> - \t\tif (gethostname(hostbuf, MAXHOSTNAMELEN) < 0)\n> - \t\t\tstrcpy(hostbuf, \"localhost\");\n> - \t\thostName = hostbuf;\n> - \t}\n> - \n> \tMyProcPid = getpid();\n> \tDataDir = getenv(\"PGDATA\"); /* default value */\n> \n> \topterr = 0;\n> \tIgnoreSystemIndexes(false);\n> ! \twhile ((opt = getopt(nonblank_argc, argv, \"A:a:B:b:D:d:ilm:MN:no:p:Ss\")) != EOF)\n> \t{\n> \t\tswitch (opt)\n> \t\t{\n> --- 436,447 ----\n> \t */\n> \tumask((mode_t) 0077);\n> \n> \tMyProcPid = getpid();\n> \tDataDir = getenv(\"PGDATA\"); /* default value */\n> \n> \topterr = 0;\n> \tIgnoreSystemIndexes(false);\n> ! 
\twhile ((opt = getopt(nonblank_argc, argv, \"A:a:B:b:D:d:h:ik:lm:MN:no:p:Ss\")) != EOF)\n> \t{\n> \t\tswitch (opt)\n> \t\t{\n> ***************\n> *** 498,506 ****\n> --- 496,511 ----\n> \t\t\t\tDebugLvl = atoi(optarg);\n> \t\t\t\tpg_options[TRACE_VERBOSE] = DebugLvl;\n> \t\t\t\tbreak;\n> + \t\t\tcase 'h':\n> + \t\t\t\tHostName = optarg;\n> + \t\t\t\tbreak;\n> \t\t\tcase 'i':\n> \t\t\t\tNetServer = true;\n> \t\t\t\tbreak;\n> + \t\t\tcase 'k':\n> + \t\t\t\t/* Set PGUNIXSOCKET by hand. */\n> + \t\t\t\tUnixSocketName = optarg;\n> + \t\t\t\tbreak;\n> #ifdef USE_SSL\n> \t\t\tcase 'l':\n> \t\t\t\tSecureNetServer = true;\n> ***************\n> *** 545,551 ****\n> \t\t\t\tbreak;\n> \t\t\tcase 'p':\n> \t\t\t\t/* Set PGPORT by hand. */\n> ! \t\t\t\tPostPortName = (unsigned short) atoi(optarg);\n> \t\t\t\tbreak;\n> \t\t\tcase 'S':\n> \n> --- 550,556 ----\n> \t\t\t\tbreak;\n> \t\t\tcase 'p':\n> \t\t\t\t/* Set PGPORT by hand. */\n> ! \t\t\t\tPostPortNumber = (unsigned short) atoi(optarg);\n> \t\t\t\tbreak;\n> \t\t\tcase 'S':\n> \n> ***************\n> *** 577,584 ****\n> \t/*\n> \t * Select default values for switches where needed\n> \t */\n> ! \tif (PostPortName == 0)\n> ! \t\tPostPortName = (unsigned short) pq_getport();\n> \n> \t/*\n> \t * Check for invalid combinations of switches\n> --- 582,603 ----\n> \t/*\n> \t * Select default values for switches where needed\n> \t */\n> ! \tif (HostName == NULL)\n> ! \t{\n> ! \t\tif (!(HostName = getenv(\"PGHOST\")))\n> ! \t\t{\n> ! \t\t\tHostName = \"all\";\n> ! \t\t}\n> ! \t}\n> ! \telse if (!NetServer)\n> ! \t{\n> ! \t\tfprintf(stderr, \"%s: -h requires -i.\\n\", progname);\n> ! \t\texit(1);\n> ! \t}\n> ! \tif (PostPortNumber == 0)\n> ! \t\tPostPortNumber = (unsigned short) pq_getport();\n> ! \tif (UnixSocketName == NULL)\n> ! \t\tUnixSocketName = pq_getunixsocket();\n> \n> \t/*\n> \t * Check for invalid combinations of switches\n> ***************\n> *** 622,628 ****\n> \n> \tif (NetServer)\n> \t{\n> ! 
\t\tstatus = StreamServerPort(hostName, PostPortName, &ServerSock_INET);\n> \t\tif (status != STATUS_OK)\n> \t\t{\n> \t\t\tfprintf(stderr, \"%s: cannot create INET stream port\\n\",\n> --- 641,647 ----\n> \n> \tif (NetServer)\n> \t{\n> ! \t\tstatus = StreamServerPort(HostName, PostPortNumber, NULL, &ServerSock_INET);\n> \t\tif (status != STATUS_OK)\n> \t\t{\n> \t\t\tfprintf(stderr, \"%s: cannot create INET stream port\\n\",\n> ***************\n> *** 632,638 ****\n> \t}\n> \n> #if !defined(__CYGWIN32__) && !defined(__QNX__)\n> ! \tstatus = StreamServerPort(NULL, PostPortName, &ServerSock_UNIX);\n> \tif (status != STATUS_OK)\n> \t{\n> \t\tfprintf(stderr, \"%s: cannot create UNIX stream port\\n\",\n> --- 651,657 ----\n> \t}\n> \n> #if !defined(__CYGWIN32__) && !defined(__QNX__)\n> ! \tstatus = StreamServerPort(NULL, PostPortNumber, UnixSocketName, &ServerSock_UNIX);\n> \tif (status != STATUS_OK)\n> \t{\n> \t\tfprintf(stderr, \"%s: cannot create UNIX stream port\\n\",\n> ***************\n> *** 642,648 ****\n> #endif\n> \t/* set up shared memory and semaphores */\n> \tEnableMemoryContext(TRUE);\n> ! \treset_shared(PostPortName);\n> \n> \t/*\n> \t * Initialize the list of active backends.\tThis list is only used for\n> --- 661,667 ----\n> #endif\n> \t/* set up shared memory and semaphores */\n> \tEnableMemoryContext(TRUE);\n> ! \treset_shared(PostPortNumber);\n> \n> \t/*\n> \t * Initialize the list of active backends.\tThis list is only used for\n> ***************\n> *** 664,670 ****\n> \t\t{\n> \t\t\tif (SetOptsFile(\n> \t\t\t\t\t\t\tprogname,\t/* postmaster executable file */\n> ! \t\t\t\t\t\t\tPostPortName,\t\t/* port number */\n> \t\t\t\t\t\t\tDataDir,\t/* PGDATA */\n> \t\t\t\t\t\t\tassert_enabled,\t\t/* whether -A is specified\n> \t\t\t\t\t\t\t\t\t\t\t\t * or not */\n> --- 683,691 ----\n> \t\t{\n> \t\t\tif (SetOptsFile(\n> \t\t\t\t\t\t\tprogname,\t/* postmaster executable file */\n> ! \t\t\t\t\t\t\tHostName, /* IP address to bind to */\n> ! 
\t\t\t\t\t\t\tPostPortNumber,\t\t/* port number */\n> ! \t\t\t\t\t\t\tUnixSocketName,\t/* PGUNIXSOCKET */\n> \t\t\t\t\t\t\tDataDir,\t/* PGDATA */\n> \t\t\t\t\t\t\tassert_enabled,\t\t/* whether -A is specified\n> \t\t\t\t\t\t\t\t\t\t\t\t * or not */\n> ***************\n> *** 753,759 ****\n> \t\t{\n> \t\t\tif (SetOptsFile(\n> \t\t\t\t\t\t\tprogname,\t/* postmaster executable file */\n> ! \t\t\t\t\t\t\tPostPortName,\t\t/* port number */\n> \t\t\t\t\t\t\tDataDir,\t/* PGDATA */\n> \t\t\t\t\t\t\tassert_enabled,\t\t/* whether -A is specified\n> \t\t\t\t\t\t\t\t\t\t\t\t * or not */\n> --- 774,782 ----\n> \t\t{\n> \t\t\tif (SetOptsFile(\n> \t\t\t\t\t\t\tprogname,\t/* postmaster executable file */\n> ! \t\t\t\t\t\t\tHostName, /* IP address to bind to */\n> ! \t\t\t\t\t\t\tPostPortNumber,\t\t/* port number */\n> ! \t\t\t\t\t\t\tUnixSocketName,\t/* PGUNIXSOCKET */\n> \t\t\t\t\t\t\tDataDir,\t/* PGDATA */\n> \t\t\t\t\t\t\tassert_enabled,\t\t/* whether -A is specified\n> \t\t\t\t\t\t\t\t\t\t\t\t * or not */\n> ***************\n> *** 837,843 ****\n> --- 860,868 ----\n> \tfprintf(stderr, \"\\t-a system\\tuse this authentication system\\n\");\n> \tfprintf(stderr, \"\\t-b backend\\tuse a specific backend server executable\\n\");\n> \tfprintf(stderr, \"\\t-d [1-5]\\tset debugging level\\n\");\n> + \tfprintf(stderr, \"\\t-h hostname\\tspecify hostname or IP address or 'all' for postmaster to listen on (also use -i)\\n\");\n> \tfprintf(stderr, \"\\t-i \\t\\tlisten on TCP/IP sockets as well as Unix domain socket\\n\");\n> + \tfprintf(stderr, \"\\t-k path\\tspecify Unix-domain socket name for postmaster to listen on\\n\");\n> #ifdef USE_SSL\n> \tfprintf(stderr, \" \\t-l \\t\\tfor TCP/IP sockets, listen only on SSL connections\\n\");\n> #endif\n> ***************\n> *** 1318,1328 ****\n> --- 1343,1417 ----\n> }\n> \n> /*\n> + * get_host_port -- return a pseudo port number (16 bits)\n> + * derived from the primary IP address of HostName.\n> + */\n> + static unsigned short\n> + 
get_host_port(void)\n> + {\n> + \tstatic unsigned short hostPort = 0;\n> + \n> + \tif (hostPort == 0)\n> + \t{\n> + \t\tSockAddr\tsaddr;\n> + \t\tstruct hostent *hp;\n> + \n> + \t\thp = gethostbyname(HostName);\n> + \t\tif ((hp == NULL) || (hp->h_addrtype != AF_INET))\n> + \t\t{\n> + \t\t\tchar msg[1024];\n> + \t\t\tsnprintf(msg, sizeof(msg),\n> + \t\t\t\t \"FATAL: get_host_port: gethostbyname(%s) failed: %s\\n\",\n> + \t\t\t\t HostName, hstrerror(h_errno));\n> + \t\t\tfputs(msg, stderr);\n> + \t\t\tpqdebug(\"%s\", msg);\n> + \t\t\texit(1);\n> + \t\t}\n> + \t\tmemmove((char *) &(saddr.in.sin_addr),\n> + \t\t\t(char *) hp->h_addr,\n> + \t\t\thp->h_length);\n> + \t\thostPort = ntohl(saddr.in.sin_addr.s_addr) & 0xFFFF;\n> + \t}\n> + \n> + \treturn hostPort;\n> + }\n> + \n> + /*\n> * reset_shared -- reset shared memory and semaphores\n> */\n> static void\n> reset_shared(unsigned short port)\n> {\n> + \t/*\n> + \t * A typical ipc_key is 5432001, which is port 5432, sequence\n> + \t * number 0, and 01 as the index in IPCKeyGetBufferMemoryKey().\n> + \t * The 32-bit INT_MAX is 2147483 6 47.\n> + \t *\n> + \t * The default algorithm for calculating the IPC keys assumes that all\n> + \t * instances of postmaster on a given host are listening on different\n> + \t * ports. In order to work (prevent shared memory collisions) if you\n> + \t * run multiple PostgreSQL instances on the same port and different IP\n> + \t * addresses on a host, we change the algorithm if you give postmaster\n> + \t * the -h option, or set PGHOST, to a value other than the internal\n> + \t * default of \"all\".\n> + \t *\n> + \t * If HostName is not \"all\", then we generate the IPC keys using the\n> + \t * last two octets of the IP address instead of the port number.\n> + \t * This algorithm assumes that no one will run multiple PostgreSQL\n> + \t * instances on one host using two IP addresses that have the same two\n> + \t * last octets in different class C networks. 
If anyone does, it\n> + \t * would be rare.\n> + \t *\n> + \t * So, if you use -h or PGHOST, don't try to run two instances of\n> + \t * PostgreSQL on the same IP address but different ports. If you\n> + \t * don't use them, then you must use different ports (via -p or\n> + \t * PGPORT). And, of course, don't try to use both approaches on one\n> + \t * host.\n> + \t */\n> + \n> + \tif (strcmp(HostName, \"all\"))\n> + \t\tport = get_host_port();\n> + \n> \tipc_key = port * 1000 + shmem_seq * 100;\n> \tCreateSharedMemoryAndSemaphores(ipc_key, MaxBackends);\n> \tshmem_seq += 1;\n> ***************\n> *** 1540,1546 ****\n> \t\t\t\tctime(&tnow));\n> \t\tfflush(stderr);\n> \t\tshmem_exit(0);\n> ! \t\treset_shared(PostPortName);\n> \t\tStartupPID = StartupDataBase();\n> \t\treturn;\n> \t}\n> --- 1629,1635 ----\n> \t\t\t\tctime(&tnow));\n> \t\tfflush(stderr);\n> \t\tshmem_exit(0);\n> ! \t\treset_shared(PostPortNumber);\n> \t\tStartupPID = StartupDataBase();\n> \t\treturn;\n> \t}\n> ***************\n> *** 1720,1726 ****\n> \t * Set up the necessary environment variables for the backend This\n> \t * should really be some sort of message....\n> \t */\n> ! \tsprintf(envEntry[0], \"POSTPORT=%d\", PostPortName);\n> \tputenv(envEntry[0]);\n> \tsprintf(envEntry[1], \"POSTID=%d\", NextBackendTag);\n> \tputenv(envEntry[1]);\n> --- 1809,1815 ----\n> \t * Set up the necessary environment variables for the backend This\n> \t * should really be some sort of message....\n> \t */\n> ! \tsprintf(envEntry[0], \"POSTPORT=%d\", PostPortNumber);\n> \tputenv(envEntry[0]);\n> \tsprintf(envEntry[1], \"POSTID=%d\", NextBackendTag);\n> \tputenv(envEntry[1]);\n> ***************\n> *** 2174,2180 ****\n> \tfor (i = 0; i < 4; ++i)\n> \t\tMemSet(ssEntry[i], 0, 2 * ARGV_SIZE);\n> \n> ! 
\tsprintf(ssEntry[0], \"POSTPORT=%d\", PostPortName);\n> \tputenv(ssEntry[0]);\n> \tsprintf(ssEntry[1], \"POSTID=%d\", NextBackendTag);\n> \tputenv(ssEntry[1]);\n> --- 2263,2269 ----\n> \tfor (i = 0; i < 4; ++i)\n> \t\tMemSet(ssEntry[i], 0, 2 * ARGV_SIZE);\n> \n> ! \tsprintf(ssEntry[0], \"POSTPORT=%d\", PostPortNumber);\n> \tputenv(ssEntry[0]);\n> \tsprintf(ssEntry[1], \"POSTID=%d\", NextBackendTag);\n> \tputenv(ssEntry[1]);\n> ***************\n> *** 2254,2260 ****\n> * Create the opts file\n> */\n> static int\n> ! SetOptsFile(char *progname, int port, char *datadir,\n> \t\t\tint assert, int nbuf, char *execfile,\n> \t\t\tint debuglvl, int netserver,\n> #ifdef USE_SSL\n> --- 2343,2349 ----\n> * Create the opts file\n> */\n> static int\n> ! SetOptsFile(char *progname, char *hostname, int port, char *unixsocket, char *datadir,\n> \t\t\tint assert, int nbuf, char *execfile,\n> \t\t\tint debuglvl, int netserver,\n> #ifdef USE_SSL\n> ***************\n> *** 2279,2284 ****\n> --- 2368,2383 ----\n> \t\treturn (-1);\n> \t}\n> \tsnprintf(opts, sizeof(opts), \"%s\\n-p %d\\n-D %s\\n\", progname, port, datadir);\n> + \tif (netserver)\n> + \t{\n> + \t\tsprintf(buf, \"-h %s\\n\", hostname);\n> + \t\tstrcat(opts, buf);\n> + \t}\n> + \tif (unixsocket)\n> + \t{\n> + \t\tsprintf(buf, \"-k %s\\n\", unixsocket);\n> + \t\tstrcat(opts, buf);\n> + \t}\n> \tif (assert)\n> \t{\n> \t\tsprintf(buf, \"-A %d\\n\", assert);\n> Index: src/bin/pg_dump/pg_dump.c\n> *** src/bin/pg_dump/pg_dump.c\t2000/06/30 21:15:44\t1.1\n> --- src/bin/pg_dump/pg_dump.c\t2000/07/01 18:41:22\t1.2\n> ***************\n> *** 140,145 ****\n> --- 140,146 ----\n> \t\t \" -D, --attribute-inserts dump data as INSERT commands with attribute names\\n\"\n> \t\t \" -h, --host <hostname> server host name\\n\"\n> \t\t \" -i, --ignore-version proceed when database version != pg_dump version\\n\"\n> + \t\t \" -k, --unixsocket <path> server Unix-domain socket name\\n\"\n> \t\" -n, --no-quotes suppress most quotes around 
identifiers\\n\"\n> \t \" -N, --quotes enable most quotes around identifiers\\n\"\n> \t\t \" -o, --oids dump object ids (oids)\\n\"\n> ***************\n> *** 158,163 ****\n> --- 159,165 ----\n> \t\t \" -D dump data as INSERT commands with attribute names\\n\"\n> \t\t \" -h <hostname> server host name\\n\"\n> \t\t \" -i proceed when database version != pg_dump version\\n\"\n> + \t\t \" -k <path> server Unix-domain socket name\\n\"\n> \t\" -n suppress most quotes around identifiers\\n\"\n> \t \" -N enable most quotes around identifiers\\n\"\n> \t\t \" -o dump object ids (oids)\\n\"\n> ***************\n> *** 579,584 ****\n> --- 581,587 ----\n> \tconst char *dbname = NULL;\n> \tconst char *pghost = NULL;\n> \tconst char *pgport = NULL;\n> + \tconst char *pgunixsocket = NULL;\n> \tchar\t *tablename = NULL;\n> \tbool\t\toids = false;\n> \tTableInfo *tblinfo;\n> ***************\n> *** 598,603 ****\n> --- 601,607 ----\n> \t\t{\"attribute-inserts\", no_argument, NULL, 'D'},\n> \t\t{\"host\", required_argument, NULL, 'h'},\n> \t\t{\"ignore-version\", no_argument, NULL, 'i'},\n> + \t\t{\"unixsocket\", required_argument, NULL, 'k'},\n> \t\t{\"no-quotes\", no_argument, NULL, 'n'},\n> \t\t{\"quotes\", no_argument, NULL, 'N'},\n> \t\t{\"oids\", no_argument, NULL, 'o'},\n> ***************\n> *** 662,667 ****\n> --- 666,674 ----\n> \t\t\tcase 'i':\t\t\t/* ignore database version mismatch */\n> \t\t\t\tignore_version = true;\n> \t\t\t\tbreak;\n> + \t\t\tcase 'k':\t\t\t/* server Unix-domain socket */\n> + \t\t\t\tpgunixsocket = optarg;\n> + \t\t\t\tbreak;\n> \t\t\tcase 'n':\t\t\t/* Do not force double-quotes on\n> \t\t\t\t\t\t\t\t * identifiers */\n> \t\t\t\tforce_quotes = false;\n> ***************\n> *** 782,788 ****\n> \t\texit(1);\n> \t}\n> \n> - \t/* g_conn = PQsetdb(pghost, pgport, NULL, NULL, dbname); */\n> \tif (pghost != NULL)\n> \t{\n> \t\tsprintf(tmp_string, \"host=%s \", pghost);\n> --- 789,794 ----\n> ***************\n> *** 791,796 ****\n> --- 797,807 ----\n> \tif (pgport 
!= NULL)\n> \t{\n> \t\tsprintf(tmp_string, \"port=%s \", pgport);\n> + \t\tstrcat(connect_string, tmp_string);\n> + \t}\n> + \tif (pgunixsocket != NULL)\n> + \t{\n> + \t\tsprintf(tmp_string, \"unixsocket=%s \", pgunixsocket);\n> \t\tstrcat(connect_string, tmp_string);\n> \t}\n> \tif (dbname != NULL)\n> Index: src/bin/psql/command.c\n> *** src/bin/psql/command.c\t2000/06/30 21:15:46\t1.1\n> --- src/bin/psql/command.c\t2000/07/01 18:20:40\t1.2\n> ***************\n> *** 1199,1204 ****\n> --- 1199,1205 ----\n> \tSetVariable(pset.vars, \"USER\", NULL);\n> \tSetVariable(pset.vars, \"HOST\", NULL);\n> \tSetVariable(pset.vars, \"PORT\", NULL);\n> + \tSetVariable(pset.vars, \"UNIXSOCKET\", NULL);\n> \tSetVariable(pset.vars, \"ENCODING\", NULL);\n> \n> \t/* If dbname is \"\" then use old name, else new one (even if NULL) */\n> ***************\n> *** 1228,1233 ****\n> --- 1229,1235 ----\n> \tdo\n> \t{\n> \t\tneed_pass = false;\n> + \t\t/* FIXME use PQconnectdb to support passing the Unix socket */\n> \t\tpset.db = PQsetdbLogin(PQhost(oldconn), PQport(oldconn),\n> \t\t\t\t\t\t\t NULL, NULL, dbparam, userparam, pwparam);\n> \n> ***************\n> *** 1303,1308 ****\n> --- 1305,1311 ----\n> \tSetVariable(pset.vars, \"USER\", PQuser(pset.db));\n> \tSetVariable(pset.vars, \"HOST\", PQhost(pset.db));\n> \tSetVariable(pset.vars, \"PORT\", PQport(pset.db));\n> + \tSetVariable(pset.vars, \"UNIXSOCKET\", PQunixsocket(pset.db));\n> \tSetVariable(pset.vars, \"ENCODING\", pg_encoding_to_char(pset.encoding));\n> \n> \tpset.issuper = test_superuser(PQuser(pset.db));\n> Index: src/bin/psql/command.h\n> Index: src/bin/psql/common.c\n> *** src/bin/psql/common.c\t2000/06/30 21:15:46\t1.1\n> --- src/bin/psql/common.c\t2000/07/01 18:20:40\t1.2\n> ***************\n> *** 330,335 ****\n> --- 330,336 ----\n> \t\t\tSetVariable(pset.vars, \"DBNAME\", NULL);\n> \t\t\tSetVariable(pset.vars, \"HOST\", NULL);\n> \t\t\tSetVariable(pset.vars, \"PORT\", NULL);\n> + \t\t\tSetVariable(pset.vars, \"UNIXSOCKET\", 
NULL);\n> \t\t\tSetVariable(pset.vars, \"USER\", NULL);\n> \t\t\tSetVariable(pset.vars, \"ENCODING\", NULL);\n> \t\t\treturn NULL;\n> ***************\n> *** 509,514 ****\n> --- 510,516 ----\n> \t\t\t\tSetVariable(pset.vars, \"DBNAME\", NULL);\n> \t\t\t\tSetVariable(pset.vars, \"HOST\", NULL);\n> \t\t\t\tSetVariable(pset.vars, \"PORT\", NULL);\n> + \t\t\t\tSetVariable(pset.vars, \"UNIXSOCKET\", NULL);\n> \t\t\t\tSetVariable(pset.vars, \"USER\", NULL);\n> \t\t\t\tSetVariable(pset.vars, \"ENCODING\", NULL);\n> \t\t\t\treturn false;\n> Index: src/bin/psql/help.c\n> *** src/bin/psql/help.c\t2000/06/30 21:15:46\t1.1\n> --- src/bin/psql/help.c\t2000/07/01 18:20:40\t1.2\n> ***************\n> *** 103,108 ****\n> --- 103,118 ----\n> \tputs(\")\");\n> \n> \tputs(\" -H HTML table output mode (-P format=html)\");\n> + \n> + \t/* Display default Unix-domain socket */\n> + \tenv = getenv(\"PGUNIXSOCKET\");\n> + \tprintf(\" -k <path> Specify Unix domain socket name (default: \");\n> + \tif (env)\n> + \t\tfputs(env, stdout);\n> + \telse\n> + \t\tfputs(\"computed from the port\", stdout);\n> + \tputs(\")\");\n> + \n> \tputs(\" -l List available databases, then exit\");\n> \tputs(\" -n Disable readline\");\n> \tputs(\" -o <filename> Send query output to filename (or |pipe)\");\n> Index: src/bin/psql/prompt.c\n> *** src/bin/psql/prompt.c\t2000/06/30 21:15:46\t1.1\n> --- src/bin/psql/prompt.c\t2000/07/01 18:20:40\t1.2\n> ***************\n> *** 189,194 ****\n> --- 189,199 ----\n> \t\t\t\t\tif (pset.db && PQport(pset.db))\n> \t\t\t\t\t\tstrncpy(buf, PQport(pset.db), MAX_PROMPT_SIZE);\n> \t\t\t\t\tbreak;\n> + \t\t\t\t\t/* DB server Unix-domain socket */\n> + \t\t\t\tcase '<':\n> + \t\t\t\t\tif (pset.db && PQunixsocket(pset.db))\n> + \t\t\t\t\t\tstrncpy(buf, PQunixsocket(pset.db), MAX_PROMPT_SIZE);\n> + \t\t\t\t\tbreak;\n> \t\t\t\t\t/* DB server user name */\n> \t\t\t\tcase 'n':\n> \t\t\t\t\tif (pset.db)\n> Index: src/bin/psql/prompt.h\n> Index: src/bin/psql/settings.h\n> Index: 
src/bin/psql/startup.c
> *** src/bin/psql/startup.c	2000/06/30 21:15:46	1.1
> --- src/bin/psql/startup.c	2000/07/01 18:20:40	1.2
> ***************
> *** 66,71 ****
> --- 66,72 ----
> 	char	 *dbname;
> 	char	 *host;
> 	char	 *port;
> + 	char	 *unixsocket;
> 	char	 *username;
> 	enum _actions action;
> 	char	 *action_string;
> ***************
> *** 158,163 ****
> --- 159,165 ----
> 	do
> 	{
> 		need_pass = false;
> + 		/* FIXME use PQconnectdb to allow setting the unix socket */
> 		pset.db = PQsetdbLogin(options.host, options.port, NULL, NULL,
> 			options.action == ACT_LIST_DB ? "template1" : options.dbname,
> 							 username, password);
> ***************
> *** 202,207 ****
> --- 204,210 ----
> 	SetVariable(pset.vars, "USER", PQuser(pset.db));
> 	SetVariable(pset.vars, "HOST", PQhost(pset.db));
> 	SetVariable(pset.vars, "PORT", PQport(pset.db));
> + 	SetVariable(pset.vars, "UNIXSOCKET", PQunixsocket(pset.db));
> 	SetVariable(pset.vars, "ENCODING", pg_encoding_to_char(pset.encoding));
> 
> #ifndef WIN32
> ***************
> *** 313,318 ****
> --- 316,322 ----
> 		{"field-separator", required_argument, NULL, 'F'},
> 		{"host", required_argument, NULL, 'h'},
> 		{"html", no_argument, NULL, 'H'},
> + 		{"unixsocket", required_argument, NULL, 'k'},
> 		{"list", no_argument, NULL, 'l'},
> 		{"no-readline", no_argument, NULL, 'n'},
> 		{"output", required_argument, NULL, 'o'},
> ***************
> *** 346,359 ****
> 	memset(options, 0, sizeof *options);
> 
> #ifdef HAVE_GETOPT_LONG
> ! 	while ((c = getopt_long(argc, argv, "aAc:d:eEf:F:lh:Hno:p:P:qRsStT:uU:v:VWxX?", long_options, &optindex)) != -1)
> #else							/* not HAVE_GETOPT_LONG */
> 
> 	/*
> 	 * Be sure to leave the '-' in here, so we can catch accidental long
> 	 * options.
> 	 */
> ! 	while ((c = getopt(argc, argv, "aAc:d:eEf:F:lh:Hno:p:P:qRsStT:uU:v:VWxX?-")) != -1)
> #endif	 /* not HAVE_GETOPT_LONG */
> 	{
> 		switch (c)
> --- 350,363 ----
> 	memset(options, 0, sizeof *options);
> 
> #ifdef HAVE_GETOPT_LONG
> ! 	while ((c = getopt_long(argc, argv, "aAc:d:eEf:F:lh:Hk:no:p:P:qRsStT:uU:v:VWxX?", long_options, &optindex)) != -1)
> #else							/* not HAVE_GETOPT_LONG */
> 
> 	/*
> 	 * Be sure to leave the '-' in here, so we can catch accidental long
> 	 * options.
> 	 */
> ! 	while ((c = getopt(argc, argv, "aAc:d:eEf:F:lh:Hk:no:p:P:qRsStT:uU:v:VWxX?-")) != -1)
> #endif	 /* not HAVE_GETOPT_LONG */
> 	{
> 		switch (c)
> ***************
> *** 398,403 ****
> --- 402,410 ----
> 				break;
> 			case 'l':
> 				options->action = ACT_LIST_DB;
> + 				break;
> + 			case 'k':
> + 				options->unixsocket = optarg;
> 				break;
> 			case 'n':
> 				options->no_readline = true;
> Index: src/bin/scripts/createdb
> *** src/bin/scripts/createdb	2000/06/30 21:15:46	1.1
> --- src/bin/scripts/createdb	2000/07/04 04:46:45	1.2
> ***************
> *** 50,55 ****
> --- 50,64 ----
>          --port=*)
>                  PSQLOPT="$PSQLOPT -p "`echo $1 | sed 's/^--port=//'`
>                  ;;
> + 	--unixsocket|-k)
> + 		PSQLOPT="$PSQLOPT -k $2"
> + 		shift;;
> +        -k*)
> +                PSQLOPT="$PSQLOPT $1"
> +                ;;
> +        --unixsocket=*)
> +                PSQLOPT="$PSQLOPT -k "`echo $1 | sed 's/^--unixsocket=//'`
> +                ;;
> 	--username|-U)
> 		PSQLOPT="$PSQLOPT -U $2"
> 		shift;;
> ***************
> *** 114,119 ****
> --- 123,129 ----
> 	echo "  -E, --encoding=ENCODING        Multibyte encoding for the database"
> 	echo "  -h, --host=HOSTNAME            Database server host"
> 	echo "  -p, --port=PORT                Database server port"
> + 	echo "  -k, --unixsocket=PATH          Database server Unix-domain socket name"
> 	echo "  -U, --username=USERNAME        Username to connect as"
> 	echo "  -W, --password                 Prompt for password"
> 	echo "  -e, --echo                     Show the query being sent to the backend"
> Index: src/bin/scripts/createlang.sh
> *** src/bin/scripts/createlang.sh	2000/06/30 21:15:46	1.1
> --- src/bin/scripts/createlang.sh	2000/07/04 04:46:45	1.2
> ***************
> *** 65,70 ****
> --- 65,79 ----
>          --port=*)
>                  PSQLOPT="$PSQLOPT -p "`echo $1 | sed 's/^--port=//'`
>                  ;;
> + 	--unixsocket|-k)
> + 		PSQLOPT="$PSQLOPT -k $2"
> + 		shift;;
> +        -k*)
> +                PSQLOPT="$PSQLOPT $1"
> +                ;;
> +        --unixsocket=*)
> +                PSQLOPT="$PSQLOPT -k "`echo $1 | sed 's/^--unixsocket=//'`
> +                ;;
> 	--username|-U)
> 		PSQLOPT="$PSQLOPT -U $2"
> 		shift;;
> ***************
> *** 126,131 ****
> --- 135,141 ----
> 	echo "Options:"
> 	echo "  -h, --host=HOSTNAME            Database server host"
> 	echo "  -p, --port=PORT                Database server port"
> + 	echo "  -k, --unixsocket=PATH          Database server Unix-domain socket name"
> 	echo "  -U, --username=USERNAME        Username to connect as"
> 	echo "  -W, --password                 Prompt for password"
> 	echo "  -d, --dbname=DBNAME            Database to install language in"
> Index: src/bin/scripts/createuser
> *** src/bin/scripts/createuser	2000/06/30 21:15:46	1.1
> --- src/bin/scripts/createuser	2000/07/04 04:46:45	1.2
> ***************
> *** 63,68 ****
> --- 63,77 ----
>          --port=*)
>                  PSQLOPT="$PSQLOPT -p "`echo $1 | sed 's/^--port=//'`
>                  ;;
> + 	--unixsocket|-k)
> + 		PSQLOPT="$PSQLOPT -k $2"
> + 		shift;;
> +        -k*)
> +                PSQLOPT="$PSQLOPT $1"
> +                ;;
> +        --unixsocket=*)
> +                PSQLOPT="$PSQLOPT -k "`echo $1 | sed 's/^--unixsocket=//'`
> +                ;;
>          # Note: These two specify the user to connect as (like in psql),
>          # not the user you're creating.
> 	--username|-U)
> ***************
> *** 135,140 ****
> --- 144,150 ----
> 	echo "  -P, --pwprompt                 Assign a password to new user"
> 	echo "  -h, --host=HOSTNAME            Database server host"
> 	echo "  -p, --port=PORT                Database server port"
> + 	echo "  -k, --unixsocket=PATH          Database server Unix-domain socket name"
> 	echo "  -U, --username=USERNAME        Username to connect as (not the one to create)"
> 	echo "  -W, --password                 Prompt for password to connect"
> 	echo "  -e, --echo                     Show the query being sent to the backend"
> Index: src/bin/scripts/dropdb
> *** src/bin/scripts/dropdb	2000/06/30 21:15:46	1.1
> --- src/bin/scripts/dropdb	2000/07/04 04:46:45	1.2
> ***************
> *** 59,64 ****
> --- 59,73 ----
>          --port=*)
>                  PSQLOPT="$PSQLOPT -p "`echo $1 | sed 's/^--port=//'`
>                  ;;
> + 	--unixsocket|-k)
> + 		PSQLOPT="$PSQLOPT -k $2"
> + 		shift;;
> +        -k*)
> +                PSQLOPT="$PSQLOPT $1"
> +                ;;
> +        --unixsocket=*)
> +                PSQLOPT="$PSQLOPT -k "`echo $1 | sed 's/^--unixsocket=//'`
> +                ;;
> 	--username|-U)
> 		PSQLOPT="$PSQLOPT -U $2"
> 		shift;;
> ***************
> *** 103,108 ****
> --- 112,118 ----
> 	echo "Options:"
> 	echo "  -h, --host=HOSTNAME            Database server host"
> 	echo "  -p, --port=PORT                Database server port"
> + 	echo "  -k, --unixsocket=PATH          Database server Unix-domain socket name"
> 	echo "  -U, --username=USERNAME        Username to connect as"
> 	echo "  -W, --password                 Prompt for password"
> 	echo "  -i, --interactive              Prompt before deleting anything"
> Index: src/bin/scripts/droplang
> *** src/bin/scripts/droplang	2000/06/30 21:15:46	1.1
> --- src/bin/scripts/droplang	2000/07/04 04:46:45	1.2
> ***************
> *** 65,70 ****
> --- 65,79 ----
>          --port=*)
>                  PSQLOPT="$PSQLOPT -p "`echo $1 | sed 's/^--port=//'`
>                  ;;
> + 	--unixsocket|-k)
> + 		PSQLOPT="$PSQLOPT -k $2"
> + 		shift;;
> +        -k*)
> +                PSQLOPT="$PSQLOPT $1"
> +                ;;
> +        --unixsocket=*)
> +                PSQLOPT="$PSQLOPT -k "`echo $1 | sed 's/^--unixsocket=//'`
> +                ;;
> 	--username|-U)
> 		PSQLOPT="$PSQLOPT -U $2"
> 		shift;;
> ***************
> *** 113,118 ****
> --- 122,128 ----
> 	echo "Options:"
> 	echo "  -h, --host=HOSTNAME            Database server host"
> 	echo "  -p, --port=PORT                Database server port"
> + 	echo "  -k, --unixsocket=PATH          Database server Unix-domain socket name"
> 	echo "  -U, --username=USERNAME        Username to connect as"
> 	echo "  -W, --password                 Prompt for password"
> 	echo "  -d, --dbname=DBNAME            Database to remove language from"
> Index: src/bin/scripts/dropuser
> *** src/bin/scripts/dropuser	2000/06/30 21:15:46	1.1
> --- src/bin/scripts/dropuser	2000/07/04 04:46:45	1.2
> ***************
> *** 59,64 ****
> --- 59,73 ----
>          --port=*)
>                  PSQLOPT="$PSQLOPT -p "`echo $1 | sed 's/^--port=//'`
>                  ;;
> + 	--unixsocket|-k)
> + 		PSQLOPT="$PSQLOPT -k $2"
> + 		shift;;
> +        -k*)
> +                PSQLOPT="$PSQLOPT $1"
> +                ;;
> +        --unixsocket=*)
> +                PSQLOPT="$PSQLOPT -k "`echo $1 | sed 's/^--unixsocket=//'`
> +                ;;
>          # Note: These two specify the user to connect as (like in psql),
>          # not the user you're dropping.
> 	--username|-U)
> ***************
> *** 105,110 ****
> --- 114,120 ----
> 	echo "Options:"
> 	echo "  -h, --host=HOSTNAME            Database server host"
> 	echo "  -p, --port=PORT                Database server port"
> + 	echo "  -k, --unixsocket=PATH          Database server Unix-domain socket name"
> 	echo "  -U, --username=USERNAME        Username to connect as (not the one to drop)"
> 	echo "  -W, --password                 Prompt for password to connect"
> 	echo "  -i, --interactive              Prompt before deleting anything"
> Index: src/bin/scripts/vacuumdb
> *** src/bin/scripts/vacuumdb	2000/06/30 21:15:46	1.1
> --- src/bin/scripts/vacuumdb	2000/07/04 04:46:45	1.2
> ***************
> *** 52,57 ****
> --- 52,66 ----
>          --port=*)
>                  PSQLOPT="$PSQLOPT -p "`echo $1 | sed 's/^--port=//'`
>                  ;;
> + 	--unixsocket|-k)
> + 		PSQLOPT="$PSQLOPT -k $2"
> + 		shift;;
> +        -k*)
> +                PSQLOPT="$PSQLOPT $1"
> +                ;;
> +        --unixsocket=*)
> +                PSQLOPT="$PSQLOPT -k "`echo $1 | sed 's/^--unixsocket=//'`
> +                ;;
> 	--username|-U)
> 		PSQLOPT="$PSQLOPT -U $2"
> 		shift;;
> ***************
> *** 121,126 ****
> --- 130,136 ----
>          echo "Options:"
> 	echo "  -h, --host=HOSTNAME            Database server host"
> 	echo "  -p, --port=PORT                Database server port"
> + 	echo "  -k, --unixsocket=PATH          Database server Unix-domain socket name"
> 	echo "  -U, --username=USERNAME        Username to connect as"
> 	echo "  -W, --password                 Prompt for password"
> 	echo "  -d, --dbname=DBNAME            Database to vacuum"
> Index: src/include/libpq/libpq.h
> *** src/include/libpq/libpq.h	2000/06/30 21:15:47	1.1
> --- src/include/libpq/libpq.h	2000/07/01 18:20:40	1.2
> ***************
> *** 236,246 ****
>   /*
>    * prototypes for functions in pqcomm.c
>    */
> ! extern int	StreamServerPort(char *hostName, unsigned short portName, int *fdP);
>   extern int	StreamConnection(int server_fd, Port *port);
>   extern void StreamClose(int sock);
>   extern void pq_init(void);
>   extern int	pq_getport(void);
>   extern void pq_close(void);
>   extern int	pq_getbytes(char *s, size_t len);
>   extern int	pq_getstring(StringInfo s);
> --- 236,247 ----
>   /*
>    * prototypes for functions in pqcomm.c
>    */
> ! extern int	StreamServerPort(char *hostName, unsigned short portName, char *unixSocketName, int *fdP);
>   extern int	StreamConnection(int server_fd, Port *port);
>   extern void StreamClose(int sock);
>   extern void pq_init(void);
>   extern int	pq_getport(void);
> + extern char	*pq_getunixsocket(void);
>   extern void pq_close(void);
>   extern int	pq_getbytes(char *s, size_t len);
>   extern int	pq_getstring(StringInfo s);
> Index: src/include/libpq/password.h
> Index: src/include/libpq/pqcomm.h
> *** src/include/libpq/pqcomm.h	2000/06/30 21:15:47	1.1
> --- src/include/libpq/pqcomm.h	2000/07/01 18:59:33	1.6
> ***************
> *** 42,53 ****
>   /* Configure the UNIX socket address for the well known port. */
> 
>   #if defined(SUN_LEN)
> ! #define UNIXSOCK_PATH(sun,port) \
> ! 	(sprintf((sun).sun_path, "/tmp/.s.PGSQL.%d", (port)), SUN_LEN(&(sun)))
>   #else
> ! #define UNIXSOCK_PATH(sun,port) \
> ! 	(sprintf((sun).sun_path, "/tmp/.s.PGSQL.%d", (port)), \
> ! 	 strlen((sun).sun_path)+ offsetof(struct sockaddr_un, sun_path))
>   #endif
> 
>   /*
> --- 42,56 ----
>   /* Configure the UNIX socket address for the well known port. */
> 
>   #if defined(SUN_LEN)
> ! #define UNIXSOCK_PATH(sun,port,defpath) \
> ! (defpath ? (strncpy((sun).sun_path, defpath, sizeof((sun).sun_path)), (sun).sun_path[sizeof((sun).sun_path)-1] = '\0') : sprintf((sun).sun_path, "/tmp/.s.PGSQL.%d", (port)))
> ! #define UNIXSOCK_LEN(sun) \
> ! (SUN_LEN(&(sun)))
>   #else
> ! #define UNIXSOCK_PATH(sun,port,defpath) \
> ! (defpath ? (strncpy((sun).sun_path, defpath, sizeof((sun).sun_path)), (sun).sun_path[sizeof((sun).sun_path)-1] = '\0') : sprintf((sun).sun_path, "/tmp/.s.PGSQL.%d", (port)))
> ! #define UNIXSOCK_LEN(sun) \
> ! (strlen((sun).sun_path)+ offsetof(struct sockaddr_un, sun_path))
>   #endif
> 
>   /*
> Index: src/interfaces/libpq/fe-connect.c
> *** src/interfaces/libpq/fe-connect.c	2000/06/30 21:15:51	1.1
> --- src/interfaces/libpq/fe-connect.c	2000/07/01 18:50:47	1.3
> ***************
> *** 125,130 ****
> --- 125,133 ----
> 	{"port", "PGPORT", DEF_PGPORT, NULL,
> 	"Database-Port", "", 6},
> 
> + 	{"unixsocket", "PGUNIXSOCKET", NULL, NULL,
> + 	"Unix-Socket", "", 80},
> + 
> 	{"tty", "PGTTY", DefaultTty, NULL,
> 	"Backend-Debug-TTY", "D", 40},
> 
> ***************
> *** 293,298 ****
> --- 296,303 ----
> 	conn->pghost = tmp ? strdup(tmp) : NULL;
> 	tmp = conninfo_getval(connOptions, "port");
> 	conn->pgport = tmp ? strdup(tmp) : NULL;
> + 	tmp = conninfo_getval(connOptions, "unixsocket");
> + 	conn->pgunixsocket = tmp ? strdup(tmp) : NULL;
> 	tmp = conninfo_getval(connOptions, "tty");
> 	conn->pgtty = tmp ? strdup(tmp) : NULL;
> 	tmp = conninfo_getval(connOptions, "options");
> ***************
> *** 369,374 ****
> --- 374,382 ----
>   *	 PGPORT	   identifies TCP port to which to connect if <pgport> argument
>   *				 is NULL or a null string.
>   *
> + *	 PGUNIXSOCKET	 identifies Unix-domain socket to which to connect; default
> + *				 is computed from the TCP port.
> + *
>   *	 PGTTY		 identifies tty to which to send messages if <pgtty> argument
>   *				 is NULL or a null string.
>   *
> ***************
> *** 422,427 ****
> --- 430,439 ----
> 	else
> 		conn->pgport = strdup(pgport);
> 
> + 	conn->pgunixsocket = getenv("PGUNIXSOCKET");
> + 	if (conn->pgunixsocket)
> + 		conn->pgunixsocket = strdup(conn->pgunixsocket);
> + 
> 	if ((pgtty == NULL) || pgtty[0] == '\0')
> 	{
> 		if ((tmp = getenv("PGTTY")) == NULL)
> ***************
> *** 489,501 ****
> 
>   /*
>    * update_db_info -
> !  * get all additional infos out of dbName
>    *
>    */
>   static int
>   update_db_info(PGconn *conn)
>   {
> ! 	char	 *tmp,
> 			 *old = conn->dbName;
> 
> 	if (strchr(conn->dbName, '@') != NULL)
> --- 501,513 ----
> 
>   /*
>    * update_db_info -
> !  * get all additional info out of dbName
>    *
>    */
>   static int
>   update_db_info(PGconn *conn)
>   {
> ! 	char	 *tmp, *tmp2,
> 			 *old = conn->dbName;
> 
> 	if (strchr(conn->dbName, '@') != NULL)
> ***************
> *** 504,509 ****
> --- 516,523 ----
> 		tmp = strrchr(conn->dbName, ':');
> 		if (tmp != NULL)		/* port number given */
> 		{
> + 			if (conn->pgport)
> + 				free(conn->pgport);
> 			conn->pgport = strdup(tmp + 1);
> 			*tmp = '\0';
> 		}
> ***************
> *** 511,516 ****
> --- 525,532 ----
> 		tmp = strrchr(conn->dbName, '@');
> 		if (tmp != NULL)		/* host name given */
> 		{
> + 			if (conn->pghost)
> + 				free(conn->pghost);
> 			conn->pghost = strdup(tmp + 1);
> 			*tmp = '\0';
> 		}
> ***************
> *** 537,549 ****
> 
> 			/*
> 			 * new style:
> ! 			 * <tcp|unix>:postgresql://server[:port][/dbname][?options]
> 			 */
> 			offset += strlen("postgresql://");
> 
> 			tmp = strrchr(conn->dbName + offset, '?');
> 			if (tmp != NULL)	/* options given */
> 			{
> 				conn->pgoptions = strdup(tmp + 1);
> 				*tmp = '\0';
> 			}
> --- 553,567 ----
> 
> 			/*
> 			 * new style:
> ! 			 * <tcp|unix>:postgresql://server[:port|:/unixsocket/path:][/dbname][?options]
> 			 */
> 			offset += strlen("postgresql://");
> 
> 			tmp = strrchr(conn->dbName + offset, '?');
> 			if (tmp != NULL)	/* options given */
> 			{
> + 				if (conn->pgoptions)
> + 					free(conn->pgoptions);
> 				conn->pgoptions = strdup(tmp + 1);
> 				*tmp = '\0';
> 			}
> ***************
> *** 551,576 ****
> 			tmp = strrchr(conn->dbName + offset, '/');
> 			if (tmp != NULL)	/* database name given */
> 			{
> 				conn->dbName = strdup(tmp + 1);
> 				*tmp = '\0';
> 			}
> 			else
> 			{
> 				if ((tmp = getenv("PGDATABASE")) != NULL)
> 					conn->dbName = strdup(tmp);
> 				else if (conn->pguser)
> 					conn->dbName = strdup(conn->pguser);
> 			}
> 
> 			tmp = strrchr(old + offset, ':');
> ! 			if (tmp != NULL)	/* port number given */
> 			{
> - 				conn->pgport = strdup(tmp + 1);
> 				*tmp = '\0';
> 			}
> 
> 			if (strncmp(old, "unix:", 5) == 0)
> 			{
> 				conn->pghost = NULL;
> 				if (strcmp(old + offset, "localhost") != 0)
> 				{
> --- 569,630 ----
> 			tmp = strrchr(conn->dbName + offset, '/');
> 			if (tmp != NULL)	/* database name given */
> 			{
> + 				if (conn->dbName)
> + 					free(conn->dbName);
> 				conn->dbName = strdup(tmp + 1);
> 				*tmp = '\0';
> 			}
> 			else
> 			{
> + 				/* Why do we default only this value from the environment again? */
> 				if ((tmp = getenv("PGDATABASE")) != NULL)
> + 				{
> + 					if (conn->dbName)
> + 						free(conn->dbName);
> 					conn->dbName = strdup(tmp);
> + 				}
> 				else if (conn->pguser)
> + 				{
> + 					if (conn->dbName)
> + 						free(conn->dbName);
> 					conn->dbName = strdup(conn->pguser);
> + 				}
> 			}
> 
> 			tmp = strrchr(old + offset, ':');
> ! 			if (tmp != NULL)	/* port number or Unix socket path given */
> 			{
> 				*tmp = '\0';
> + 				if ((tmp2 = strchr(tmp + 1, ':')) != NULL)
> + 				{
> + 					if (strncmp(old, "unix:", 5) != 0)
> + 					{
> + 						printfPQExpBuffer(&conn->errorMessage,
> + 								 "connectDBStart() -- "
> + 								 "socket name can only be specified with "
> + 								 "non-TCP\n");
> + 						return 1; 
> + 					}
> + 					*tmp2 = '\0';
> + 					if (conn->pgunixsocket)
> + 						free(conn->pgunixsocket);
> + 					conn->pgunixsocket = strdup(tmp + 1);
> + 				}
> + 				else
> + 				{
> + 					if (conn->pgport)
> + 						free(conn->pgport);
> + 					conn->pgport = strdup(tmp + 1);
> + 					if (conn->pgunixsocket)
> + 						free(conn->pgunixsocket);
> + 					conn->pgunixsocket = NULL;
> + 				}
> 			}
> 
> 			if (strncmp(old, "unix:", 5) == 0)
> 			{
> + 				if (conn->pghost)
> + 					free(conn->pghost);
> 				conn->pghost = NULL;
> 				if (strcmp(old + offset, "localhost") != 0)
> 				{
> ***************
> *** 582,589 ****
> 				}
> 			}
> 			else
> 				conn->pghost = strdup(old + offset);
> ! 
> 			free(old);
> 		}
> 	}
> --- 636,646 ----
> 				}
> 			}
> 			else
> + 			{
> + 				if (conn->pghost)
> + 					free(conn->pghost);
> 				conn->pghost = strdup(old + offset);
> ! 			}
> 			free(old);
> 		}
> 	}
> ***************
> *** 743,749 ****
> 	}
>   #if !defined(WIN32) && !defined(__CYGWIN32__)
> 	else
> ! 		conn->raddr_len = UNIXSOCK_PATH(conn->raddr.un, portno);
>   #endif
> 
> 
> --- 800,809 ----
> 	}
>   #if !defined(WIN32) && !defined(__CYGWIN32__)
> 	else
> ! 	{
> ! 		UNIXSOCK_PATH(conn->raddr.un, portno, conn->pgunixsocket);
> ! 		conn->raddr_len = UNIXSOCK_LEN(conn->raddr.un);
> ! 	}
>   #endif
> 
> 
> ***************
> *** 892,898 ****
> 							 conn->pghost ? conn->pghost : "localhost",
> 							 (family == AF_INET) ?
> 							 "TCP/IP port" : "Unix socket",
> ! 							 conn->pgport);
> 			goto connect_errReturn;
> 		}
> 	}
> --- 952,959 ----
> 							 conn->pghost ? conn->pghost : "localhost",
> 							 (family == AF_INET) ?
> 							 "TCP/IP port" : "Unix socket",
> ! 							 (family == AF_UNIX && conn->pgunixsocket) ?
> ! 							 conn->pgunixsocket : conn->pgport);
> 			goto connect_errReturn;
> 		}
> 	}
> ***************
> *** 1123,1129 ****
> 							 conn->pghost ? conn->pghost : "localhost",
> 								 (conn->raddr.sa.sa_family == AF_INET) ?
> 									 "TCP/IP port" : "Unix socket",
> ! 									 conn->pgport);
> 					goto error_return;
> 				}
> 
> --- 1184,1191 ----
> 							 conn->pghost ? conn->pghost : "localhost",
> 								 (conn->raddr.sa.sa_family == AF_INET) ?
> 									 "TCP/IP port" : "Unix socket",
> ! 							 (conn->raddr.sa.sa_family == AF_UNIX && conn->pgunixsocket) ?
> ! 									 conn->pgunixsocket : conn->pgport);
> 					goto error_return;
> 				}
> 
> ***************
> *** 1799,1804 ****
> --- 1861,1868 ----
> 		free(conn->pghostaddr);
> 	if (conn->pgport)
> 		free(conn->pgport);
> + 	if (conn->pgunixsocket)
> + 		free(conn->pgunixsocket);
> 	if (conn->pgtty)
> 		free(conn->pgtty);
> 	if (conn->pgoptions)
> ***************
> *** 2383,2388 ****
> --- 2447,2460 ----
> 	if (!conn)
> 		return (char *) NULL;
> 	return conn->pgport;
> + }
> + 
> + char *
> + PQunixsocket(const PGconn *conn)
> + {
> + 	if (!conn)
> + 		return (char *) NULL;
> + 	return conn->pgunixsocket;
>   }
> 
>   char *
> Index: src/interfaces/libpq/libpq-fe.h
> *** src/interfaces/libpq/libpq-fe.h	2000/06/30 21:15:51	1.1
> --- src/interfaces/libpq/libpq-fe.h	2000/07/01 18:20:40	1.2
> ***************
> *** 214,219 ****
> --- 214,220 ----
> 	extern char *PQpass(const PGconn *conn);
> 	extern char *PQhost(const PGconn *conn);
> 	extern char *PQport(const PGconn *conn);
> + 	extern char *PQunixsocket(const PGconn *conn);
> 	extern char *PQtty(const PGconn *conn);
> 	extern char *PQoptions(const PGconn *conn);
> 	extern ConnStatusType PQstatus(const PGconn *conn);
> Index: src/interfaces/libpq/libpq-int.h
> *** src/interfaces/libpq/libpq-int.h	2000/06/30 21:15:51	1.1
> --- src/interfaces/libpq/libpq-int.h	2000/07/01 18:20:40	1.2
> ***************
> *** 202,207 ****
> --- 202,209 ----
> 								 * numbers-and-dots notation. Takes
> 								 * precedence over above. */
> 	char	 *pgport;			/* the server's communication port */
> + 	char	 *pgunixsocket;		/* the Unix-domain socket that the server is listening on;
> + 						 * if NULL, uses a default constructed from pgport */
> 	char	 *pgtty;			/* tty on which the backend messages is
> 								 * displayed (NOT ACTUALLY USED???) */
> 	char	 *pgoptions;		/* options to start the backend with */
> Index: src/interfaces/libpq/libpqdll.def
> *** src/interfaces/libpq/libpqdll.def	2000/06/30 21:15:51	1.1
> --- src/interfaces/libpq/libpqdll.def	2000/07/01 18:20:40	1.2
> ***************
> *** 79,81 ****
> --- 79,82 ----
> 	destroyPQExpBuffer	@ 76
> 	createPQExpBuffer	@ 77
> 	PQconninfoFree		@ 78
> + 	PQunixsocket		@ 79
> 


-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  [email protected]               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026

? config.log
? config.cache
? config.status
? GNUmakefile
? src/Makefile.custom
? src/GNUmakefile
? src/Makefile.global
? src/log
? src/crtags
? src/backend/postgres
? src/backend/catalog/global.description
? src/backend/catalog/global.bki
? src/backend/catalog/template1.bki
? src/backend/catalog/template1.description
? src/backend/port/Makefile
? src/bin/initdb/initdb
? src/bin/initlocation/initlocation
? src/bin/ipcclean/ipcclean
? src/bin/pg_config/pg_config
? src/bin/pg_ctl/pg_ctl
? src/bin/pg_dump/pg_dump
? src/bin/pg_dump/pg_restore
? src/bin/pg_dump/pg_dumpall
? src/bin/pg_id/pg_id
? src/bin/pg_passwd/pg_passwd
? src/bin/pgaccess/pgaccess
? src/bin/pgtclsh/Makefile.tkdefs
? src/bin/pgtclsh/Makefile.tcldefs
? src/bin/pgtclsh/pgtclsh
? src/bin/pgtclsh/pgtksh
? src/bin/psql/psql
? src/bin/scripts/createlang
? src/include/config.h
? src/include/stamp-h
? src/interfaces/ecpg/lib/libecpg.so.3.2.0
? src/interfaces/ecpg/preproc/ecpg
? src/interfaces/libpgeasy/libpgeasy.so.2.1
? src/interfaces/libpgtcl/libpgtcl.so.2.1
? src/interfaces/libpq/libpq.so.2.1
? src/interfaces/perl5/blib
? src/interfaces/perl5/Makefile
? src/interfaces/perl5/pm_to_blib
? src/interfaces/perl5/Pg.c
? src/interfaces/perl5/Pg.bs
? src/pl/plperl/blib
? src/pl/plperl/Makefile
? src/pl/plperl/pm_to_blib
? src/pl/plperl/SPI.c
? src/pl/plperl/plperl.bs
? src/pl/plpgsql/src/libplpgsql.so.1.0
? src/pl/tcl/Makefile.tcldefs
? src/test/regress/pg_regress
? src/test/regress/regress.out
? src/test/regress/results
? src/test/regress/regression.diffs
? src/test/regress/expected/copy.out
? src/test/regress/expected/create_function_1.out
? src/test/regress/expected/create_function_2.out
? src/test/regress/expected/misc.out
? src/test/regress/expected/constraints.out
? src/test/regress/sql/copy.sql
? src/test/regress/sql/misc.sql
? src/test/regress/sql/create_function_1.sql
? src/test/regress/sql/create_function_2.sql
? src/test/regress/sql/constraints.sql
Index: doc/src/sgml/environ.sgml
===================================================================
RCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/environ.sgml,v
retrieving revision 1.5
diff -c -r1.5 environ.sgml
*** doc/src/sgml/environ.sgml	2000/05/02 20:01:51	1.5
--- doc/src/sgml/environ.sgml	2000/11/13 05:26:14
***************
*** 47,62 ****
  </Para>
  
  <Para>
! If your site administrator has not set things up in the
! default way, you may have some more work to do. For example, if the database
! server machine is a remote machine, you
! will need to set the <Acronym>PGHOST</Acronym> environment variable to the name
! of the database server machine. The environment variable
! <Acronym>PGPORT</Acronym> may also have to be set. The bottom line is this: if
! you try to start an application program and it complains
! that it cannot connect to the <Application>postmaster</Application>,
! you should immediately consult your site administrator to make sure that your
! environment is properly set up.
! </Para>
  
  </Chapter>
--- 47,63 ----
  </Para>
  
  <Para>
! 
! If your site administrator has not set things up in the default way, 
! you may have some more work to do.  For example, if the database server
! machine is a remote machine, you will need to set the
! <Acronym>PGHOST</Acronym> environment variable to the name of the
! database server machine.   The environment variable
! <Acronym>PGPORT</Acronym> or <envar>PGUNIXSOCKET</envar> may also have
! to be set.  The bottom line is this: if you try to start an application 
! program and it complains that it cannot connect to the
! <Application>postmaster</Application>, you should immediately consult
! your site administrator to make sure that your environment is properly
! set up. </Para>
  
  </Chapter>
Index: doc/src/sgml/libpq++.sgml
===================================================================
RCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/libpq++.sgml,v
retrieving revision 1.17
diff -c -r1.17 libpq++.sgml
*** doc/src/sgml/libpq++.sgml	2000/09/29 20:21:34	1.17
--- doc/src/sgml/libpq++.sgml	2000/11/13 05:26:14
***************
*** 93,98 ****
--- 93,105 ----
  </listitem>
  <listitem>
  <para>
+ 	<envar>PGUNIXSOCKET</envar> sets the full Unix domain socket
+ 	file name for communicating with the <productname>Postgres</productname>
+ 	backend.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
  	<envar>PGDATABASE</envar>  sets the default 
  	<productname>Postgres</productname> database name.
  </para>
Index: doc/src/sgml/libpq.sgml
===================================================================
RCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/libpq.sgml,v
retrieving revision 1.44
diff -c -r1.44 libpq.sgml
*** doc/src/sgml/libpq.sgml	2000/10/17 14:27:50	1.44
--- doc/src/sgml/libpq.sgml	2000/11/13 05:26:16
***************
*** 134,139 ****
--- 134,148 ----
  </varlistentry>
  
  <varlistentry>
+ <term><literal>unixsocket</literal></term>
+ <listitem>
+ <para>
+ Full path to Unix-domain socket file to connect to at the server host.
+ </para>
+ </listitem>
+ </varlistentry>
+ 
+ <varlistentry>
  <term><literal>dbname</literal></term>
  <listitem>
  <para>
***************
*** 556,561 ****
--- 565,580 ----
  
  <listitem>
  <para>
+ <function>PQunixsocket</function>
+ Returns the name of the Unix-domain socket of the connection.
+ <synopsis>
+ char *PQunixsocket(const PGconn *conn)
+ </synopsis>
+ </para>
+ </listitem>
+ 
+ <listitem>
+ <para>
  <function>PQtty</function>
  Returns the debug tty of the connection.
  <synopsis>
***************
*** 1821,1826 ****
--- 1840,1852 ----
  <envar>PGHOST</envar> sets the default server name.
  If a non-zero-length string is specified, TCP/IP communication is used.
  Without a host name, libpq will connect using a local Unix domain socket.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <envar>PGPORT</envar> sets the default port or local Unix domain socket
+ file extension for communicating with the <productname>Postgres</productname>
+ backend.
  </para>
  </listitem>
  <listitem>
Index: doc/src/sgml/start.sgml
===================================================================
RCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/start.sgml,v
retrieving revision 1.13
diff -c -r1.13 start.sgml
*** doc/src/sgml/start.sgml	2000/09/29 20:21:34	1.13
--- doc/src/sgml/start.sgml	2000/11/13 05:26:16
***************
*** 110,117 ****
      will need to set the <acronym>PGHOST</acronym> environment
      variable to the name
      of the database server machine.  The environment variable
!     <acronym>PGPORT</acronym> may also have to be set.  The bottom
!     line is this:  if
      you try to start an application program and it complains
      that it cannot connect to the <application>postmaster</application>,
      you should immediately consult your site administrator to make
--- 110,117 ----
      will need to set the <acronym>PGHOST</acronym> environment
      variable to the name
      of the database server machine.  The environment variable
!     <acronym>PGPORT</acronym> or <acronym>PGUNIXSOCKET</acronym> may also have to be set.
!     The bottom line is this:  if
      you try to start an application program and it complains
      that it cannot connect to the <application>postmaster</application>,
      you should immediately consult your site administrator to make
Index: doc/src/sgml/ref/createdb.sgml
===================================================================
RCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/createdb.sgml,v
retrieving revision 1.11
diff -c -r1.11 createdb.sgml
*** doc/src/sgml/ref/createdb.sgml	2000/11/11 23:01:38	1.11
--- doc/src/sgml/ref/createdb.sgml	2000/11/13 05:26:16
***************
*** 56,61 ****
--- 56,73 ----
      </listitem>
     </varlistentry>
  
+    <varlistentry>
+     <term>-k, --unixsocket <replaceable class="parameter">path</replaceable></term>
+     <listitem>
+      <para>
+       Specifies the Unix-domain socket on which the
+       <application>postmaster</application> is running.
+       Without this option, the socket is created in <filename>/tmp</filename>
+       based on the port number.
+      </para>
+     </listitem>
+    </varlistentry>
+ 
     <varlistentry>
      <term>-U, --username <replaceable class="parameter">username</replaceable></term>
      <listitem>
Index: doc/src/sgml/ref/createlang.sgml
===================================================================
RCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/createlang.sgml,v
retrieving revision 1.10
diff -c -r1.10 createlang.sgml
*** doc/src/sgml/ref/createlang.sgml	2000/11/11 23:01:38	1.10
--- doc/src/sgml/ref/createlang.sgml	2000/11/13 05:26:16
***************
*** 101,106 ****
--- 101,118 ----
      </listitem>
     </varlistentry>
  
+    <varlistentry>
+     <term>-k, --unixsocket <replaceable class="parameter">path</replaceable></term>
+     <listitem>
+      <para>
+       Specifies the Unix-domain socket on which the
+       <application>postmaster</application> is running.
+       Without this option, the socket is created in <filename>/tmp</filename>
+       based on the port number.
+      </para>
+     </listitem>
+    </varlistentry>
+ 
     <varlistentry>
      <term>-U, --username <replaceable class="parameter">username</replaceable></term>
      <listitem>
Index: doc/src/sgml/ref/createuser.sgml
===================================================================
RCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/createuser.sgml,v
retrieving revision 1.10
diff -c -r1.10 createuser.sgml
*** doc/src/sgml/ref/createuser.sgml	2000/11/11 23:01:40	1.10
--- doc/src/sgml/ref/createuser.sgml	2000/11/13 05:26:16
***************
*** 55,60 ****
--- 55,72 ----
      </listitem>
     </varlistentry>
  
+    <varlistentry>
+     <term>-k, --unixsocket <replaceable class="parameter">path</replaceable></term>
+     <listitem>
+      <para>
+       Specifies the Unix-domain socket on which the
+       <application>postmaster</application> is running.
+       Without this option, the socket is created in <filename>/tmp</filename>
+       based on the port number.
+      </para>
+     </listitem>
+    </varlistentry>
+ 
     <varlistentry>
      <term>-e, --echo</term>
      <listitem>
Index: doc/src/sgml/ref/dropdb.sgml
===================================================================
RCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/dropdb.sgml,v
retrieving revision 1.4
diff -c -r1.4 dropdb.sgml
*** doc/src/sgml/ref/dropdb.sgml	2000/11/11 23:01:45	1.4
--- doc/src/sgml/ref/dropdb.sgml	2000/11/13 05:26:16
***************
*** 55,60 ****
--- 55,72 ----
      </listitem>
     </varlistentry>
  
+    <varlistentry>
+     <term>-k, --unixsocket <replaceable class="parameter">path</replaceable></term>
+     <listitem>
+      <para>
+       Specifies the Unix-domain socket on which the
+       <application>postmaster</application> is running.
+       Without this option, the socket is created in <filename>/tmp</filename>
+       based on the port number.
+      </para>
+     </listitem>
+    </varlistentry>
+ 
     <varlistentry>
      <term>-U, --username <replaceable class="parameter">username</replaceable></term>
      <listitem>
Index: doc/src/sgml/ref/droplang.sgml
===================================================================
RCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/droplang.sgml,v
retrieving revision 1.4
diff -c -r1.4 droplang.sgml
*** doc/src/sgml/ref/droplang.sgml	2000/11/11 23:01:45	1.4
--- doc/src/sgml/ref/droplang.sgml	2000/11/13 05:26:16
***************
*** 101,106 ****
--- 101,118 ----
      </listitem>
     </varlistentry>
  
+    <varlistentry>
+     <term>-k, --unixsocket <replaceable class="parameter">path</replaceable></term>
+     <listitem>
+      <para>
+       Specifies the Unix-domain socket on which the
+       <application>postmaster</application> is running.
+       Without this option, the socket is created in <filename>/tmp</filename>
+       based on the port number.
+      </para>
+     </listitem>
+    </varlistentry>
+ 
     <varlistentry>
      <term>-U, --username <replaceable class="parameter">username</replaceable></term>
      <listitem>
Index: doc/src/sgml/ref/dropuser.sgml
===================================================================
RCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/dropuser.sgml,v
retrieving revision 1.5
diff -c -r1.5 dropuser.sgml
*** doc/src/sgml/ref/dropuser.sgml	2000/11/11 23:01:45	1.5
--- doc/src/sgml/ref/dropuser.sgml	2000/11/13 05:26:16
***************
*** 55,60 ****
--- 55,72 ----
      </listitem>
     </varlistentry>
  
+    <varlistentry>
+     <term>-k, --unixsocket <replaceable class="parameter">path</replaceable></term>
+     <listitem>
+      <para>
+       Specifies the Unix-domain socket on which the
+       <application>postmaster</application> is running.
+       Without this option, the socket is created in <filename>/tmp</filename>
+       based on the port number.
+      </para>
+     </listitem>
+    </varlistentry>
+ 
     <varlistentry>
      <term>-e, --echo</term>
      <listitem>
Index: doc/src/sgml/ref/pg_dump.sgml
===================================================================
RCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/pg_dump.sgml,v
retrieving revision 1.20
diff -c -r1.20 pg_dump.sgml
*** doc/src/sgml/ref/pg_dump.sgml	2000/10/05 19:48:18	1.20
--- doc/src/sgml/ref/pg_dump.sgml	2000/11/13 05:26:17
***************
*** 24,30 ****
  </refsynopsisdivinfo>
  <synopsis>
  pg_dump [ <replaceable class="parameter">dbname</replaceable> ]
! pg_dump [ -h <replaceable class="parameter">host</replaceable> ] [ -p <replaceable class="parameter">port</replaceable> ]
      [ -t <replaceable class="parameter">table</replaceable> ]
      [ -a ] [ -c ] [ -d ] [ -D ] [ -i ] [ -n ] [ -N ]
      [ -o ] [ -s ] [ -u ] [ -v ] [ -x ]
--- 24,32 ----
  </refsynopsisdivinfo>
  <synopsis>
  pg_dump [ <replaceable class="parameter">dbname</replaceable> ]
! pg_dump [ -h <replaceable class="parameter">host</replaceable> ]
!     [ -k <replaceable class="parameter">path</replaceable> ]
!     [ -p <replaceable class="parameter">port</replaceable> ]
      [ -t <replaceable class="parameter">table</replaceable> ]
      [ -a ] [ -c ] [ -d ] [ -D ] [ -i ] [ -n ] [ -N ]
      [ -o ] [ -s ] [ -u ] [ -v ] [ -x ]
***************
*** 200,205 ****
--- 202,222 ----
  	<application>postmaster</application>
  	is running.  Defaults to using a local Unix domain socket
  	rather than an IP connection.
+       </para>
+      </listitem>
+     </varlistentry>
+ 
+     <varlistentry>
+      <term>-k <replaceable class="parameter">path</replaceable></term>
+      <listitem>
+       <para>
+ 	Specifies the local Unix domain socket file path
+ 	on which the <application>postmaster</application>
+ 	is listening for connections.
+       Without this option, the socket path name defaults to
+       the value of the <envar>PGUNIXSOCKET</envar> environment
+ 	variable (if set), otherwise it is constructed
+       from the port number.
       </para>
      </listitem>
     </varlistentry>
Index: doc/src/sgml/ref/pg_dumpall.sgml
===================================================================
RCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/pg_dumpall.sgml,v
retrieving revision 1.11
diff -c -r1.11 pg_dumpall.sgml
*** doc/src/sgml/ref/pg_dumpall.sgml	2000/11/02 21:13:31	1.11
--- doc/src/sgml/ref/pg_dumpall.sgml	2000/11/13 05:26:17
***************
*** 23,29 ****
  <date>1999-07-20</date>
  </refsynopsisdivinfo>
  <synopsis>
! pg_dumpall [ -h <replaceable class="parameter">host</replaceable> ] [ -p <replaceable class="parameter">port</replaceable> ] [ -a ] [ -d ] [ -D ] [ -O ] [ -s ] [ -u ] [ -v ] [ -x ] [ --accounts-only ]
  </synopsis>
  
  <refsect2 id="R2-APP-PG-DUMPALL-1">
--- 23,29 ----
  <date>1999-07-20</date>
  </refsynopsisdivinfo>
  <synopsis>
! pg_dumpall [ -h <replaceable class="parameter">host</replaceable> ] [ -k <replaceable class="parameter">path</replaceable> ] [ -p <replaceable class="parameter">port</replaceable> ] [ -a ] [ -d ] [ -D ] [ -O ] [ -s ] [ -u ] [ -v ] [ -x ] [ --accounts-only ]
  </synopsis>
  
  <refsect2 id="R2-APP-PG-DUMPALL-1">
***************
*** 145,150 ****
--- 145,165 ----
  	<application>postmaster</application>
  	is running. 
Defaults to using a local Unix domain socket\n \trather than an IP connection..\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n+ <varlistentry>\n+ <term>-k <replaceable class=\"parameter\">path</replaceable></term>\n+ <listitem>\n+ <para>\n+ \tSpecifies the local Unix domain socket file path\n+ \ton which the <application>postmaster</application>\n+ \tis listening for connections.\n+ Without this option, the socket path name defaults to\n+ the value of the <envar>PGUNIXSOCKET</envar> environment\n+ \tvariable (if set), otherwise it is constructed\n+ from the port number.\n </para>\n </listitem>\n </varlistentry>\nIndex: doc/src/sgml/ref/postmaster.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/postmaster.sgml,v\nretrieving revision 1.12\ndiff -c -r1.12 postmaster.sgml\n*** doc/src/sgml/ref/postmaster.sgml\t2000/10/05 19:48:18\t1.12\n--- doc/src/sgml/ref/postmaster.sgml\t2000/11/13 05:26:17\n***************\n*** 24,30 ****\n </refsynopsisdivinfo>\n <synopsis>\n postmaster [ -B <replaceable class=\"parameter\">nBuffers</replaceable> ] [ -D <replaceable class=\"parameter\">DataDir</replaceable> ] [ -N <replaceable class=\"parameter\">maxBackends</replaceable> ] [ -S ]\n! [ -d <replaceable class=\"parameter\">DebugLevel</replaceable> ] [ -i ] [ -l ]\n [ -o <replaceable class=\"parameter\">BackendOptions</replaceable> ] [ -p <replaceable class=\"parameter\">port</replaceable> ] [ -n | -s ]\n </synopsis>\n \n--- 24,32 ----\n </refsynopsisdivinfo>\n <synopsis>\n postmaster [ -B <replaceable class=\"parameter\">nBuffers</replaceable> ] [ -D <replaceable class=\"parameter\">DataDir</replaceable> ] [ -N <replaceable class=\"parameter\">maxBackends</replaceable> ] [ -S ]\n! [ -d <replaceable class=\"parameter\">DebugLevel</replaceable> ]\n! [ -h <replaceable class=\"parameter\">hostname</replaceable> ] [ -i ]\n! 
[ -k <replaceable class=\"parameter\">path</replaceable> ] [ -l ]\n [ -o <replaceable class=\"parameter\">BackendOptions</replaceable> ] [ -p <replaceable class=\"parameter\">port</replaceable> ] [ -n | -s ]\n </synopsis>\n \n***************\n*** 124,135 ****\n--- 126,196 ----\n </varlistentry>\n \n <varlistentry>\n+ <term>-h <replaceable class=\"parameter\">hostname</replaceable></term>\n+ <listitem>\n+ <para>\n+ \tSpecifies the TCP/IP hostname or address\n+ \ton which the <application>postmaster</application>\n+ \tis to listen for connections from frontend applications. Defaults to\n+ \tthe value of the \n+ \t<envar>PGHOST</envar> \n+ \tenvironment variable, or if <envar>PGHOST</envar>\n+ \tis not set, then defaults to \"all\", meaning listen on all configured addresses\n+ \t(including localhost).\n+ </para>\n+ <para>\n+ \tIf you use a hostname or address other than \"all\", do not try to run\n+ \tmultiple instances of <application>postmaster</application> on the\n+ \tsame IP address but different ports. Doing so will result in them\n+ \tattempting (incorrectly) to use the same shared memory segments.\n+ \tAlso, if you use a hostname other than \"all\", all of the host's IP addresses\n+ \ton which <application>postmaster</application> instances are\n+ \tlistening must be distinct in the last two octets.\n+ </para>\n+ <para>\n+ \tIf you do use \"all\" (the default), then each instance must listen on a\n+ \tdifferent port (via -p or <envar>PGPORT</envar>). 
And, of course, do\n+ \tnot try to use both approaches on one host.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n+ <varlistentry>\n <term>-i</term>\n <listitem>\n <para>\n Allows clients to connect via TCP/IP (Internet domain) connections.\n \tWithout this option, only local Unix domain socket connections are\n \taccepted.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n+ <varlistentry>\n+ <term>-k <replaceable class=\"parameter\">path</replaceable></term>\n+ <listitem>\n+ <para>\n+ \tSpecifies the local Unix domain socket path name\n+ \ton which the <application>postmaster</application>\n+ \tis to listen for connections from frontend applications. Defaults to\n+ \tthe value of the \n+ \t<envar>PGUNIXSOCKET</envar> \n+ \tenvironment variable, or if <envar>PGUNIXSOCKET</envar>\n+ \tis not set, then defaults to a file in <filename>/tmp</filename>\n+ \tconstructed from the port number.\n+ </para>\n+ <para>\n+ You can use this option to put the Unix-domain socket in a\n+ directory that is private to one or more users using Unix\n+ \tdirectory permissions. 
This is necessary for securely\n+ \tcreating databases automatically on shared machines.\n+ In that situation, also disallow all TCP/IP connections\n+ \tinitially in <filename>pg_hba.conf</filename>.\n+ \tIf you specify a socket path other than the\n+ \tdefault then all frontend applications (including\n+ \t<application>psql</application>) must specify the same\n+ \tsocket path using either command-line options or\n+ \t<envar>PGUNIXSOCKET</envar>.\n </para>\n </listitem>\n </varlistentry>\nIndex: doc/src/sgml/ref/psql-ref.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/psql-ref.sgml,v\nretrieving revision 1.40\ndiff -c -r1.40 psql-ref.sgml\n*** doc/src/sgml/ref/psql-ref.sgml\t2000/10/24 01:38:21\t1.40\n--- doc/src/sgml/ref/psql-ref.sgml\t2000/11/13 05:26:22\n***************\n*** 1330,1335 ****\n--- 1330,1348 ----\n \n \n <varlistentry>\n+ <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n+ <listitem>\n+ <para>\n+ Specifies the Unix-domain socket on which the\n+ <application>postmaster</application> is running.\n+ Without this option, the socket is created in <filename>/tmp</filename>\n+ based on the port number.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n+ \n+ <varlistentry>\n <term>-H, --html</term>\n <listitem>\n <para>\nIndex: doc/src/sgml/ref/vacuumdb.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/vacuumdb.sgml,v\nretrieving revision 1.10\ndiff -c -r1.10 vacuumdb.sgml\n*** doc/src/sgml/ref/vacuumdb.sgml\t2000/11/11 23:01:45\t1.10\n--- doc/src/sgml/ref/vacuumdb.sgml\t2000/11/13 05:26:22\n***************\n*** 136,141 ****\n--- 136,153 ----\n </listitem>\n </varlistentry>\n \n+ <varlistentry>\n+ <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n+ <listitem>\n+ <para>\n+ Specifies the Unix-domain socket on which 
the\n+ <application>postmaster</application> is running.\n+ Without this option, the socket is created in <filename>/tmp</filename>\n+ based on the port number.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n <varlistentry>\n <term>-U <replaceable class=\"parameter\">username</replaceable></term>\n <term>--username <replaceable class=\"parameter\">username</replaceable></term>\nIndex: src/backend/libpq/pqcomm.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/libpq/pqcomm.c,v\nretrieving revision 1.109\ndiff -c -r1.109 pqcomm.c\n*** src/backend/libpq/pqcomm.c\t2000/11/01 21:14:01\t1.109\n--- src/backend/libpq/pqcomm.c\t2000/11/13 05:26:22\n***************\n*** 169,181 ****\n /*\n * StreamServerPort -- open a sock stream \"listening\" port.\n *\n! * This initializes the Postmaster's connection-accepting port.\n *\n * RETURNS: STATUS_OK or STATUS_ERROR\n */\n \n int\n! StreamServerPort(int family, unsigned short portName, int *fdP)\n {\n \tSockAddr\tsaddr;\n \tint\t\t\tfd,\n--- 169,182 ----\n /*\n * StreamServerPort -- open a sock stream \"listening\" port.\n *\n! * This initializes the Postmaster's connection-accepting port fdP.\n *\n * RETURNS: STATUS_OK or STATUS_ERROR\n */\n \n int\n! StreamServerPort(int family, char *hostName, unsigned short portName,\n! \t\t\t\t char *unixSocketName, int *fdP)\n {\n \tSockAddr\tsaddr;\n \tint\t\t\tfd,\n***************\n*** 218,224 ****\n #ifdef HAVE_UNIX_SOCKETS\n \tif (family == AF_UNIX)\n \t{\n! \t\tlen = UNIXSOCK_PATH(saddr.un, portName);\n \t\tstrcpy(sock_path, saddr.un.sun_path);\n \t\t/*\n \t\t * If the socket exists but nobody has an advisory lock on it we\n--- 219,226 ----\n #ifdef HAVE_UNIX_SOCKETS\n \tif (family == AF_UNIX)\n \t{\n! \t\tUNIXSOCK_PATH(saddr.un, portName, unixSocketName);\n! 
\t\tlen = UNIXSOCK_LEN(saddr.un);\n \t\tstrcpy(sock_path, saddr.un.sun_path);\n \t\t/*\n \t\t * If the socket exists but nobody has an advisory lock on it we\n***************\n*** 242,248 ****\n \n if (family == AF_INET) \n {\n! \t\tsaddr.in.sin_addr.s_addr = htonl(INADDR_ANY);\n \t\tsaddr.in.sin_port = htons(portName);\n \t\tlen = sizeof(struct sockaddr_in);\n \t}\n--- 244,270 ----\n \n if (family == AF_INET) \n {\n! \t\t/* TCP/IP socket */\n! \t\tif (hostName[0] == '\\0')\n! \t \t\tsaddr.in.sin_addr.s_addr = htonl(INADDR_ANY);\n! \t\telse\n! \t {\n! \t\t\tstruct hostent *hp;\n! \t\n! \t\t\thp = gethostbyname(hostName);\n! \t\t\tif ((hp == NULL) || (hp->h_addrtype != AF_INET))\n! \t\t\t{\n! \t\t\t\tsnprintf(PQerrormsg, PQERRORMSG_LENGTH,\n! \t\t\t\t\t \"FATAL: StreamServerPort: gethostbyname(%s) failed: %s\\n\",\n! \t\t\t\t\t hostName, hstrerror(h_errno));\n! \t\t\t\t\t fputs(PQerrormsg, stderr);\n! \t\t\t\t\t pqdebug(\"%s\", PQerrormsg);\n! \t\t\t\treturn STATUS_ERROR;\n! \t\t\t}\n! \t\t\tmemmove((char *) &(saddr.in.sin_addr), (char *) hp->h_addr,\n! \t\t\t\t\thp->h_length);\n! \t\t}\n! \t\n \t\tsaddr.in.sin_port = htons(portName);\n \t\tlen = sizeof(struct sockaddr_in);\n \t}\nIndex: src/backend/postmaster/postmaster.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/postmaster/postmaster.c,v\nretrieving revision 1.181\ndiff -c -r1.181 postmaster.c\n*** src/backend/postmaster/postmaster.c\t2000/11/09 11:25:59\t1.181\n--- src/backend/postmaster/postmaster.c\t2000/11/13 05:26:25\n***************\n*** 114,119 ****\n--- 114,121 ----\n static Dllist *PortList;\n \n int PostPortName;\n+ char * UnixSocketName;\n+ char * HostName;\n \n /*\n * This is a boolean indicating that there is at least one backend that\n***************\n*** 234,240 ****\n static void pmdaemonize(int argc, char *argv[]);\n static Port *ConnCreate(int serverFd);\n static void ConnFree(Port *port);\n! 
static void reset_shared(int port);\n static void SIGHUP_handler(SIGNAL_ARGS);\n static void pmdie(SIGNAL_ARGS);\n static void reaper(SIGNAL_ARGS);\n--- 236,242 ----\n static void pmdaemonize(int argc, char *argv[]);\n static Port *ConnCreate(int serverFd);\n static void ConnFree(Port *port);\n! static void reset_shared(unsigned short port);\n static void SIGHUP_handler(SIGNAL_ARGS);\n static void pmdie(SIGNAL_ARGS);\n static void reaper(SIGNAL_ARGS);\n***************\n*** 376,382 ****\n \t * will occur.\n \t */\n \topterr = 1;\n! \twhile ((opt = getopt(argc, argv, \"A:a:B:b:c:D:d:Film:MN:no:p:SsV-:?\")) != EOF)\n \t{\n \t\tswitch(opt)\n \t\t{\n--- 378,384 ----\n \t * will occur.\n \t */\n \topterr = 1;\n! \twhile ((opt = getopt(argc, argv, \"A:a:B:b:c:D:d:Fh:ik:lm:MN:no:p:SsV-:?\")) != EOF)\n \t{\n \t\tswitch(opt)\n \t\t{\n***************\n*** 432,438 ****\n #ifdef HAVE_INT_OPTRESET\n \toptreset = 1;\n #endif\n! \twhile ((opt = getopt(argc, argv, \"A:a:B:b:c:D:d:Film:MN:no:p:SsV-:?\")) != EOF)\n \t{\n \t\tswitch (opt)\n \t\t{\n--- 434,440 ----\n #ifdef HAVE_INT_OPTRESET\n \toptreset = 1;\n #endif\n! \twhile ((opt = getopt(argc, argv, \"A:a:B:b:c:D:d:Fh:ik:lm:MN:no:p:SsV-:?\")) != EOF)\n \t{\n \t\tswitch (opt)\n \t\t{\n***************\n*** 466,474 ****\n--- 468,483 ----\n \t\t\tcase 'F':\n \t\t\t\tenableFsync = false;\n \t\t\t\tbreak;\n+ \t\t\tcase 'h':\n+ \t\t\t\tHostName = optarg;\n+ \t\t\t\tbreak;\n \t\t\tcase 'i':\n \t\t\t\tNetServer = true;\n \t\t\t\tbreak;\n+ \t\t\tcase 'k':\n+ \t\t\t\t/* Set PGUNIXSOCKET by hand. */\n+ \t\t\t\tUnixSocketName = optarg;\n+ \t\t\t\tbreak;\n #ifdef USE_SSL\n \t\t\tcase 'l':\n \t\t\t EnableSSL = true;\n***************\n*** 600,606 ****\n \n \tif (NetServer)\n \t{\n! \t\tstatus = StreamServerPort(AF_INET, (unsigned short)PostPortName, &ServerSock_INET);\n \t\tif (status != STATUS_OK)\n \t\t{\n \t\t\tfprintf(stderr, \"%s: cannot create INET stream port\\n\",\n--- 609,616 ----\n \n \tif (NetServer)\n \t{\n! 
\t\tstatus = StreamServerPort(AF_INET, HostName,\n! \t\t\t\t(unsigned short)PostPortName, UnixSocketName, &ServerSock_INET);\n \t\tif (status != STATUS_OK)\n \t\t{\n \t\t\tfprintf(stderr, \"%s: cannot create INET stream port\\n\",\n***************\n*** 610,616 ****\n \t}\n \n #ifdef HAVE_UNIX_SOCKETS\n! \tstatus = StreamServerPort(AF_UNIX, (unsigned short)PostPortName, &ServerSock_UNIX);\n \tif (status != STATUS_OK)\n \t{\n \t\tfprintf(stderr, \"%s: cannot create UNIX stream port\\n\",\n--- 620,627 ----\n \t}\n \n #ifdef HAVE_UNIX_SOCKETS\n! \tstatus = StreamServerPort(AF_UNIX, HostName,\n! \t\t\t(unsigned short)PostPortName, UnixSocketName, &ServerSock_UNIX);\n \tif (status != STATUS_OK)\n \t{\n \t\tfprintf(stderr, \"%s: cannot create UNIX stream port\\n\",\n***************\n*** 780,786 ****\n--- 791,799 ----\n \tprintf(\" -d 1-5 debugging level\\n\");\n \tprintf(\" -D <directory> database directory\\n\");\n \tprintf(\" -F turn fsync off\\n\");\n+ \tprintf(\" -h hostname specify hostname or IP address\\n\");\n \tprintf(\" -i enable TCP/IP connections\\n\");\n+ \tprintf(\" -k path specify Unix-domain socket name\\n\");\n #ifdef USE_SSL\n \tprintf(\" -l enable SSL connections\\n\");\n #endif\n***************\n*** 1294,1304 ****\n }\n \n /*\n * reset_shared -- reset shared memory and semaphores\n */\n static void\n! 
reset_shared(int port)\n {\n \tipc_key = port * 1000 + shmem_seq * 100;\n \tCreateSharedMemoryAndSemaphores(ipc_key, MaxBackends);\n \tshmem_seq += 1;\n--- 1307,1381 ----\n }\n \n /*\n+ * get_host_port -- return a pseudo port number (16 bits)\n+ * derived from the primary IP address of HostName.\n+ */\n+ static unsigned short\n+ get_host_port(void)\n+ {\n+ \tstatic unsigned short hostPort = 0;\n+ \n+ \tif (hostPort == 0)\n+ \t{\n+ \t\tSockAddr\tsaddr;\n+ \t\tstruct hostent *hp;\n+ \n+ \t\thp = gethostbyname(HostName);\n+ \t\tif ((hp == NULL) || (hp->h_addrtype != AF_INET))\n+ \t\t{\n+ \t\t\tchar msg[1024];\n+ \t\t\tsnprintf(msg, sizeof(msg),\n+ \t\t\t\t \"FATAL: get_host_port: gethostbyname(%s) failed: %s\\n\",\n+ \t\t\t\t HostName, hstrerror(h_errno));\n+ \t\t\tfputs(msg, stderr);\n+ \t\t\tpqdebug(\"%s\", msg);\n+ \t\t\texit(1);\n+ \t\t}\n+ \t\tmemmove((char *) &(saddr.in.sin_addr),\n+ \t\t\t(char *) hp->h_addr,\n+ \t\t\thp->h_length);\n+ \t\thostPort = ntohl(saddr.in.sin_addr.s_addr) & 0xFFFF;\n+ \t}\n+ \n+ \treturn hostPort;\n+ }\n+ \n+ /*\n * reset_shared -- reset shared memory and semaphores\n */\n static void\n! reset_shared(unsigned short port)\n {\n+ \t/*\n+ \t * A typical ipc_key is 5432001, which is port 5432, sequence\n+ \t * number 0, and 01 as the index in IPCKeyGetBufferMemoryKey().\n+ \t * The 32-bit INT_MAX is 2147483 6 47.\n+ \t *\n+ \t * The default algorithm for calculating the IPC keys assumes that all\n+ \t * instances of postmaster on a given host are listening on different\n+ \t * ports. 
In order to work (prevent shared memory collisions) if you\n+ \t * run multiple PostgreSQL instances on the same port and different IP\n+ \t * addresses on a host, we change the algorithm if you give postmaster\n+ \t * the -h option, or set PGHOST, to a value other than the internal\n+ \t * default.\n+ \t *\n+ \t * If HostName is set, then we generate the IPC keys using the\n+ \t * last two octets of the IP address instead of the port number.\n+ \t * This algorithm assumes that no one will run multiple PostgreSQL\n+ \t * instances on one host using two IP addresses that have the same\n+ \t * last two octets in different class C networks. If anyone does, it\n+ \t * would be rare.\n+ \t *\n+ \t * So, if you use -h or PGHOST, don't try to run two instances of\n+ \t * PostgreSQL on the same IP address but different ports. If you\n+ \t * don't use them, then you must use different ports (via -p or\n+ \t * PGPORT). And, of course, don't try to use both approaches on one\n+ \t * host.\n+ \t */\n+ \n+ \tif (HostName[0] != '\\0')\n+ \t\tport = get_host_port();\n+ \n \tipc_key = port * 1000 + shmem_seq * 100;\n \tCreateSharedMemoryAndSemaphores(ipc_key, MaxBackends);\n \tshmem_seq += 1;\n***************\n*** 2205,2210 ****\n--- 2282,2289 ----\n \t\tchar\t\tnbbuf[ARGV_SIZE];\n \t\tchar\t\tdbbuf[ARGV_SIZE];\n \t\tchar\t\txlbuf[ARGV_SIZE];\n+ \t\tchar\t\thsbuf[ARGV_SIZE];\n+ \t\tchar\t\tskbuf[ARGV_SIZE];\n \n \t\t/* Lose the postmaster's on-exit routines and port connections */\n \t\ton_exit_reset();\nIndex: src/backend/utils/misc/guc.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/misc/guc.c,v\nretrieving revision 1.16\ndiff -c -r1.16 guc.c\n*** src/backend/utils/misc/guc.c\t2000/11/09 11:25:59\t1.16\n--- src/backend/utils/misc/guc.c\t2000/11/13 05:26:26\n***************\n*** 304,309 ****\n--- 304,315 ----\n \t{\"unix_socket_group\", PGC_POSTMASTER, &Unix_socket_group,\n \t \"\", NULL},\n 
\n+ \t{\"unixsocket\", \t\t PGC_POSTMASTER, &UnixSocketName,\n+ \t \"\", NULL},\n+ \n+ \t{\"hostname\", \t\t PGC_POSTMASTER, &HostName,\n+ \t \"\", NULL},\n+ \n \t{NULL, 0, NULL, NULL, NULL}\n };\n \nIndex: src/bin/pg_dump/pg_backup.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/pg_dump/pg_backup.h,v\nretrieving revision 1.4\ndiff -c -r1.4 pg_backup.h\n*** src/bin/pg_dump/pg_backup.h\t2000/08/01 15:51:44\t1.4\n--- src/bin/pg_dump/pg_backup.h\t2000/11/13 05:26:26\n***************\n*** 99,106 ****\n \n \tint\t\t\tuseDB;\n \tchar\t\t*dbname;\n- \tchar\t\t*pgport;\n \tchar\t\t*pghost;\n \tint\t\t\tignoreVersion;\n \tint\t\t\trequirePassword;\n \n--- 99,107 ----\n \n \tint\t\t\tuseDB;\n \tchar\t\t*dbname;\n \tchar\t\t*pghost;\n+ \tchar\t\t*pgport;\n+ \tchar\t\t*pgunixsocket;\n \tint\t\t\tignoreVersion;\n \tint\t\t\trequirePassword;\n \n***************\n*** 122,127 ****\n--- 123,129 ----\n \t\tconst char* \tdbname,\n \t\tconst char*\tpghost,\n \t\tconst char*\tpgport,\n+ \t\tconst char*\tpgunixsocket,\n \t\tconst int\treqPwd,\n \t\tconst int\tignoreVersion);\n \nIndex: src/bin/pg_dump/pg_backup_archiver.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/pg_dump/pg_backup_archiver.c,v\nretrieving revision 1.10\ndiff -c -r1.10 pg_backup_archiver.c\n*** src/bin/pg_dump/pg_backup_archiver.c\t2000/10/31 14:20:30\t1.10\n--- src/bin/pg_dump/pg_backup_archiver.c\t2000/11/13 05:26:27\n***************\n*** 131,138 ****\n \t\tif (AH->version < K_VERS_1_3)\n \t\t\tdie_horribly(AH, \"Direct database connections are not supported in pre-1.3 archives\");\n \n! \t\tConnectDatabase(AHX, ropt->dbname, ropt->pghost, ropt->pgport, \n! 
\t\t\t\t\t\t\tropt->requirePassword, ropt->ignoreVersion);\n \n \t\t/*\n \t\t * If no superuser was specified then see if the current user will do...\n--- 131,139 ----\n \t\tif (AH->version < K_VERS_1_3)\n \t\t\tdie_horribly(AH, \"Direct database connections are not supported in pre-1.3 archives\");\n \n! \t\tConnectDatabase(AHX, ropt->dbname, ropt->pghost, ropt->pgport,\n! \t\t\t\t\t\t\tropt->pgunixsocket, ropt->requirePassword,\n! \t\t\t\t\t\t\tropt->ignoreVersion);\n \n \t\t/*\n \t\t * If no superuser was specified then see if the current user will do...\nIndex: src/bin/pg_dump/pg_backup_archiver.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/pg_dump/pg_backup_archiver.h,v\nretrieving revision 1.16\ndiff -c -r1.16 pg_backup_archiver.h\n*** src/bin/pg_dump/pg_backup_archiver.h\t2000/10/31 14:20:30\t1.16\n--- src/bin/pg_dump/pg_backup_archiver.h\t2000/11/13 05:26:27\n***************\n*** 187,192 ****\n--- 187,193 ----\n \tchar\t\t\t\t*archdbname;\t\t/* DB name *read* from archive */\n \tchar\t\t\t\t*pghost;\n \tchar\t\t\t\t*pgport;\n+ \tchar\t\t\t\t*pgunixsocket;\n \tPGconn\t\t\t\t*connection;\n \tPGconn\t\t\t\t*blobConnection;\t/* Connection for BLOB xref */\n \tint\t\t\t\t\ttxActive;\t\t\t/* Flag set if TX active on connection */\nIndex: src/bin/pg_dump/pg_backup_db.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/pg_dump/pg_backup_db.c,v\nretrieving revision 1.8\ndiff -c -r1.8 pg_backup_db.c\n*** src/bin/pg_dump/pg_backup_db.c\t2000/10/31 14:20:30\t1.8\n--- src/bin/pg_dump/pg_backup_db.c\t2000/11/13 05:26:27\n***************\n*** 1,7 ****\n /*-------------------------------------------------------------------------\n *\n *\n! 
*-------------------------------------------------------------------------\n */\n \n #include <unistd.h>\t\t\t\t/* for getopt() */\n--- 1,7 ----\n /*-------------------------------------------------------------------------\n *\n *\n! *-------------------------------------------------------------------------\n */\n \n #include <unistd.h>\t\t\t\t/* for getopt() */\n***************\n*** 273,278 ****\n--- 273,279 ----\n \t\tconst char* \tdbname,\n \t\tconst char* \tpghost,\n \t\tconst char* \tpgport,\n+ \t\tconst char* \tpgunixsocket,\n \t\tconst int\t\treqPwd,\n \t\tconst int\t\tignoreVersion)\n {\n***************\n*** 306,311 ****\n--- 307,321 ----\n \t}\n \telse\n \t AH->pgport = NULL;\n+ \n+ \tif (pgunixsocket != NULL)\n+ \t{\n+ \t\tAH->pgunixsocket = strdup(pgunixsocket);\n+ \t\tsprintf(tmp_string, \"unixsocket=%s \", AH->pgunixsocket);\n+ \t\tstrcat(connect_string, tmp_string);\n+ \t}\n+ \telse\n+ \t AH->pgunixsocket = NULL;\n \n \tsprintf(tmp_string, \"dbname=%s \", AH->dbname);\n \tstrcat(connect_string, tmp_string);\nIndex: src/bin/pg_dump/pg_dump.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/pg_dump/pg_dump.c,v\nretrieving revision 1.177\ndiff -c -r1.177 pg_dump.c\n*** src/bin/pg_dump/pg_dump.c\t2000/10/31 14:20:30\t1.177\n--- src/bin/pg_dump/pg_dump.c\t2000/11/13 05:26:30\n***************\n*** 200,205 ****\n--- 200,206 ----\n \t\t\" -F, --format {c|f|p} output file format (custom, files, plain text)\\n\"\n \t\t\" -h, --host <hostname> server host name\\n\"\n \t\t\" -i, --ignore-version proceed when database version != pg_dump version\\n\"\n+ \t\t\" -k, --unixsocket <path> server Unix-domain socket name\\n\"\n \t\t\" -n, --no-quotes suppress most quotes around identifiers\\n\"\n \t\t\" -N, --quotes enable most quotes around identifiers\\n\"\n \t\t\" -o, --oids dump object ids (oids)\\n\"\n***************\n*** 226,231 ****\n--- 227,233 ----\n \t\t\" -F {c|f|p} output file format (custom, 
files, plain text)\\n\"\n \t\t\" -h <hostname> server host name\\n\"\n \t\t\" -i proceed when database version != pg_dump version\\n\"\n+ \t\t\" -k <path> server Unix-domain socket name\\n\"\n \t\t\" -n suppress most quotes around identifiers\\n\"\n \t\t\" -N enable most quotes around identifiers\\n\"\n \t\t\" -o dump object ids (oids)\\n\"\n***************\n*** 629,634 ****\n--- 631,637 ----\n \tconst char *dbname = NULL;\n \tconst char *pghost = NULL;\n \tconst char *pgport = NULL;\n+ \tconst char *pgunixsocket = NULL;\n \tchar\t *tablename = NULL;\n \tbool\t\toids = false;\n \tTableInfo *tblinfo;\n***************\n*** 658,663 ****\n--- 661,667 ----\n \t\t{\"attribute-inserts\", no_argument, NULL, 'D'},\n \t\t{\"host\", required_argument, NULL, 'h'},\n \t\t{\"ignore-version\", no_argument, NULL, 'i'},\n+ \t\t{\"unixsocket\", required_argument, NULL, 'k'},\n \t\t{\"no-reconnect\", no_argument, NULL, 'R'},\n \t\t{\"no-quotes\", no_argument, NULL, 'n'},\n \t\t{\"quotes\", no_argument, NULL, 'N'},\n***************\n*** 752,757 ****\n--- 756,765 ----\n \t\t\t\tignore_version = true;\n \t\t\t\tbreak;\n \n+ \t\t\tcase 'k':\t\t\t/* server Unix-domain socket */\n+ \t\t\t\tpgunixsocket = optarg;\n+ \t\t\t\tbreak;\n+ \n \t\t\tcase 'n':\t\t\t/* Do not force double-quotes on\n \t\t\t\t\t\t\t\t * identifiers */\n \t\t\t\tforce_quotes = false;\n***************\n*** 948,954 ****\n \tdbname = argv[optind];\n \n \t/* Open the database using the Archiver, so it knows about it. Errors mean death */\n! \tg_conn = ConnectDatabase(g_fout, dbname, pghost, pgport, use_password, ignore_version);\n \n \t/*\n \t * Start serializable transaction to dump consistent data\n--- 956,963 ----\n \tdbname = argv[optind];\n \n \t/* Open the database using the Archiver, so it knows about it. Errors mean death */\n! \tg_conn = ConnectDatabase(g_fout, dbname, pghost, pgport, pgunixsocket,\n! 
\t\t\t\t\t\t\t use_password, ignore_version);\n \n \t/*\n \t * Start serializable transaction to dump consistent data\nIndex: src/bin/pg_dump/pg_restore.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/pg_dump/pg_restore.c,v\nretrieving revision 1.8\ndiff -c -r1.8 pg_restore.c\n*** src/bin/pg_dump/pg_restore.c\t2000/10/16 14:34:08\t1.8\n--- src/bin/pg_dump/pg_restore.c\t2000/11/13 05:26:30\n***************\n*** 101,106 ****\n--- 101,107 ----\n \t\t\t\t{ \"ignore-version\", 0, NULL, 'i'},\n \t\t\t\t{ \"index\", 2, NULL, 'I'},\n \t\t\t\t{ \"list\", 0, NULL, 'l'},\n+ \t\t\t\t{ \"unixsocket\", 1, NULL, 'k' },\n \t\t\t\t{ \"no-acl\", 0, NULL, 'x' },\n \t\t\t\t{ \"no-owner\", 0, NULL, 'O'},\n \t\t\t\t{ \"no-reconnect\", 0, NULL, 'R' },\n***************\n*** 132,140 ****\n \tprogname = *argv;\n \n #ifdef HAVE_GETOPT_LONG\n! \twhile ((c = getopt_long(argc, argv, \"acCd:f:F:h:i:lNoOp:P:rRsS:t:T:uU:vx\", cmdopts, NULL)) != EOF)\n #else\n! \twhile ((c = getopt(argc, argv, \"acCd:f:F:h:i:lNoOp:P:rRsS:t:T:uU:vx\")) != -1)\n #endif\n \t{\n \t\tswitch (c)\n--- 133,141 ----\n \tprogname = *argv;\n \n #ifdef HAVE_GETOPT_LONG\n! \twhile ((c = getopt_long(argc, argv, \"acCd:f:F:h:i:k:lNoOp:P:rRsS:t:T:uU:vx\", cmdopts, NULL)) != EOF)\n #else\n! 
\twhile ((c = getopt(argc, argv, \"acCd:f:F:h:i:k:lNoOp:P:rRsS:t:T:uU:vx\")) != -1)\n #endif\n \t{\n \t\tswitch (c)\n***************\n*** 169,174 ****\n--- 170,179 ----\n \t\t\t\tbreak;\n \t\t\tcase 'i':\n \t\t\t\topts->ignoreVersion = 1;\n+ \t\t\t\tbreak;\n+ \t\t\tcase 'k':\n+ \t\t\t\tif (strlen(optarg) != 0)\n+ \t\t\t\t\topts->pgunixsocket = strdup(optarg);\n \t\t\t\tbreak;\n \t\t\tcase 'N':\n \t\t\t\topts->origOrder = 1;\nIndex: src/bin/psql/command.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/psql/command.c,v\nretrieving revision 1.36\ndiff -c -r1.36 command.c\n*** src/bin/psql/command.c\t2000/09/17 20:33:45\t1.36\n--- src/bin/psql/command.c\t2000/11/13 05:26:31\n***************\n*** 1202,1207 ****\n--- 1202,1208 ----\n \tSetVariable(pset.vars, \"USER\", NULL);\n \tSetVariable(pset.vars, \"HOST\", NULL);\n \tSetVariable(pset.vars, \"PORT\", NULL);\n+ \tSetVariable(pset.vars, \"UNIXSOCKET\", NULL);\n \tSetVariable(pset.vars, \"ENCODING\", NULL);\n \n \t/* If dbname is \"\" then use old name, else new one (even if NULL) */\n***************\n*** 1231,1236 ****\n--- 1232,1238 ----\n \tdo\n \t{\n \t\tneed_pass = false;\n+ \t\t/* FIXME use PQconnectdb to support passing the Unix socket */\n \t\tpset.db = PQsetdbLogin(PQhost(oldconn), PQport(oldconn),\n \t\t\t\t\t\t\t NULL, NULL, dbparam, userparam, pwparam);\n \n***************\n*** 1307,1312 ****\n--- 1309,1315 ----\n \tSetVariable(pset.vars, \"USER\", PQuser(pset.db));\n \tSetVariable(pset.vars, \"HOST\", PQhost(pset.db));\n \tSetVariable(pset.vars, \"PORT\", PQport(pset.db));\n+ \tSetVariable(pset.vars, \"UNIXSOCKET\", PQunixsocket(pset.db));\n \tSetVariable(pset.vars, \"ENCODING\", pg_encoding_to_char(pset.encoding));\n \n \tpset.issuper = test_superuser(PQuser(pset.db));\nIndex: src/bin/psql/common.c\n===================================================================\nRCS file: 
/home/projects/pgsql/cvsroot/pgsql/src/bin/psql/common.c,v\nretrieving revision 1.23\ndiff -c -r1.23 common.c\n*** src/bin/psql/common.c\t2000/08/29 09:36:48\t1.23\n--- src/bin/psql/common.c\t2000/11/13 05:26:31\n***************\n*** 329,334 ****\n--- 329,335 ----\n \t\t\tSetVariable(pset.vars, \"DBNAME\", NULL);\n \t\t\tSetVariable(pset.vars, \"HOST\", NULL);\n \t\t\tSetVariable(pset.vars, \"PORT\", NULL);\n+ \t\t\tSetVariable(pset.vars, \"UNIXSOCKET\", NULL);\n \t\t\tSetVariable(pset.vars, \"USER\", NULL);\n \t\t\tSetVariable(pset.vars, \"ENCODING\", NULL);\n \t\t\treturn NULL;\n***************\n*** 508,513 ****\n--- 509,515 ----\n \t\t\t\tSetVariable(pset.vars, \"DBNAME\", NULL);\n \t\t\t\tSetVariable(pset.vars, \"HOST\", NULL);\n \t\t\t\tSetVariable(pset.vars, \"PORT\", NULL);\n+ \t\t\t\tSetVariable(pset.vars, \"UNIXSOCKET\", NULL);\n \t\t\t\tSetVariable(pset.vars, \"USER\", NULL);\n \t\t\t\tSetVariable(pset.vars, \"ENCODING\", NULL);\n \t\t\t\treturn false;\nIndex: src/bin/psql/help.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/psql/help.c,v\nretrieving revision 1.32\ndiff -c -r1.32 help.c\n*** src/bin/psql/help.c\t2000/09/22 23:02:00\t1.32\n--- src/bin/psql/help.c\t2000/11/13 05:26:33\n***************\n*** 103,108 ****\n--- 103,118 ----\n \tputs(\")\");\n \n \tputs(\" -H HTML table output mode (-P format=html)\");\n+ \n+ \t/* Display default Unix-domain socket */\n+ \tenv = getenv(\"PGUNIXSOCKET\");\n+ \tprintf(\" -k <path> Specify Unix domain socket name (default: \");\n+ \tif (env)\n+ \t\tfputs(env, stdout);\n+ \telse\n+ \t\tfputs(\"computed from the port\", stdout);\n+ \tputs(\")\");\n+ \n \tputs(\" -l List available databases, then exit\");\n \tputs(\" -n Disable readline\");\n \tputs(\" -o <filename> Send query output to filename (or |pipe)\");\nIndex: src/bin/psql/prompt.c\n===================================================================\nRCS file: 
/home/projects/pgsql/cvsroot/pgsql/src/bin/psql/prompt.c,v\nretrieving revision 1.13\ndiff -c -r1.13 prompt.c\n*** src/bin/psql/prompt.c\t2000/08/20 10:55:34\t1.13\n--- src/bin/psql/prompt.c\t2000/11/13 05:26:33\n***************\n*** 190,195 ****\n--- 190,200 ----\n \t\t\t\t\tif (pset.db && PQport(pset.db))\n \t\t\t\t\t\tstrncpy(buf, PQport(pset.db), MAX_PROMPT_SIZE);\n \t\t\t\t\tbreak;\n+ \t\t\t\t\t/* DB server Unix-domain socket */\n+ \t\t\t\tcase '<':\n+ \t\t\t\t\tif (pset.db && PQunixsocket(pset.db))\n+ \t\t\t\t\t\tstrncpy(buf, PQunixsocket(pset.db), MAX_PROMPT_SIZE);\n+ \t\t\t\t\tbreak;\n \t\t\t\t\t/* DB server user name */\n \t\t\t\tcase 'n':\n \t\t\t\t\tif (pset.db)\nIndex: src/bin/psql/startup.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/psql/startup.c,v\nretrieving revision 1.37\ndiff -c -r1.37 startup.c\n*** src/bin/psql/startup.c\t2000/09/17 20:33:45\t1.37\n--- src/bin/psql/startup.c\t2000/11/13 05:26:34\n***************\n*** 65,70 ****\n--- 65,71 ----\n \tchar\t *dbname;\n \tchar\t *host;\n \tchar\t *port;\n+ \tchar\t *unixsocket;\n \tchar\t *username;\n \tenum _actions action;\n \tchar\t *action_string;\n***************\n*** 161,166 ****\n--- 162,168 ----\n \tdo\n \t{\n \t\tneed_pass = false;\n+ \t\t/* FIXME use PQconnectdb to allow setting the unix socket */\n \t\tpset.db = PQsetdbLogin(options.host, options.port, NULL, NULL,\n \t\t\toptions.action == ACT_LIST_DB ? 
\"template1\" : options.dbname,\n \t\t\t\t\t\t\t username, password);\n***************\n*** 206,211 ****\n--- 208,214 ----\n \tSetVariable(pset.vars, \"USER\", PQuser(pset.db));\n \tSetVariable(pset.vars, \"HOST\", PQhost(pset.db));\n \tSetVariable(pset.vars, \"PORT\", PQport(pset.db));\n+ \tSetVariable(pset.vars, \"UNIXSOCKET\", PQunixsocket(pset.db));\n \tSetVariable(pset.vars, \"ENCODING\", pg_encoding_to_char(pset.encoding));\n \n #ifndef WIN32\n***************\n*** 320,325 ****\n--- 323,329 ----\n \t\t{\"field-separator\", required_argument, NULL, 'F'},\n \t\t{\"host\", required_argument, NULL, 'h'},\n \t\t{\"html\", no_argument, NULL, 'H'},\n+ \t\t{\"unixsocket\", required_argument, NULL, 'k'},\n \t\t{\"list\", no_argument, NULL, 'l'},\n \t\t{\"no-readline\", no_argument, NULL, 'n'},\n \t\t{\"output\", required_argument, NULL, 'o'},\n***************\n*** 353,366 ****\n \tmemset(options, 0, sizeof *options);\n \n #ifdef HAVE_GETOPT_LONG\n! \twhile ((c = getopt_long(argc, argv, \"aAc:d:eEf:F:lh:Hno:p:P:qRsStT:uU:v:VWxX?\", long_options, &optindex)) != -1)\n #else\t\t\t\t\t\t\t/* not HAVE_GETOPT_LONG */\n \n \t/*\n \t * Be sure to leave the '-' in here, so we can catch accidental long\n \t * options.\n \t */\n! \twhile ((c = getopt(argc, argv, \"aAc:d:eEf:F:lh:Hno:p:P:qRsStT:uU:v:VWxX?-\")) != -1)\n #endif\t /* not HAVE_GETOPT_LONG */\n \t{\n \t\tswitch (c)\n--- 357,370 ----\n \tmemset(options, 0, sizeof *options);\n \n #ifdef HAVE_GETOPT_LONG\n! \twhile ((c = getopt_long(argc, argv, \"aAc:d:eEf:F:lh:Hk:no:p:P:qRsStT:uU:v:VWxX?\", long_options, &optindex)) != -1)\n #else\t\t\t\t\t\t\t/* not HAVE_GETOPT_LONG */\n \n \t/*\n \t * Be sure to leave the '-' in here, so we can catch accidental long\n \t * options.\n \t */\n! 
\twhile ((c = getopt(argc, argv, \"aAc:d:eEf:F:lh:Hk:no:p:P:qRsStT:uU:v:VWxX?-\")) != -1)\n #endif\t /* not HAVE_GETOPT_LONG */\n \t{\n \t\tswitch (c)\n***************\n*** 405,410 ****\n--- 409,417 ----\n \t\t\t\tbreak;\n \t\t\tcase 'l':\n \t\t\t\toptions->action = ACT_LIST_DB;\n+ \t\t\t\tbreak;\n+ \t\t\tcase 'k':\n+ \t\t\t\toptions->unixsocket = optarg;\n \t\t\t\tbreak;\n \t\t\tcase 'n':\n \t\t\t\toptions->no_readline = true;\nIndex: src/bin/scripts/createdb\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/scripts/createdb,v\nretrieving revision 1.9\ndiff -c -r1.9 createdb\n*** src/bin/scripts/createdb\t2000/11/11 22:59:48\t1.9\n--- src/bin/scripts/createdb\t2000/11/13 05:26:34\n***************\n*** 50,55 ****\n--- 50,64 ----\n --port=*)\n PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n ;;\n+ \t--unixsocket|-k)\n+ \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n+ \t\tshift;;\n+ -k*)\n+ PSQLOPT=\"$PSQLOPT $1\"\n+ ;;\n+ --unixsocket=*)\n+ PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n+ ;;\n \t--username|-U)\n \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n \t\tshift;;\n***************\n*** 114,119 ****\n--- 123,129 ----\n \techo \" -E, --encoding=ENCODING Multibyte encoding for the database\"\n \techo \" -h, --host=HOSTNAME Database server host\"\n \techo \" -p, --port=PORT Database server port\"\n+ \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n \techo \" -U, --username=USERNAME Username to connect as\"\n \techo \" -W, --password Prompt for password\"\n \techo \" -e, --echo Show the query being sent to the backend\"\nIndex: src/bin/scripts/createlang.sh\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/scripts/createlang.sh,v\nretrieving revision 1.17\ndiff -c -r1.17 createlang.sh\n*** src/bin/scripts/createlang.sh\t2000/11/11 22:59:48\t1.17\n--- src/bin/scripts/createlang.sh\t2000/11/13 
05:26:34\n***************\n*** 65,70 ****\n--- 65,79 ----\n --port=*)\n PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n ;;\n+ \t--unixsocket|-k)\n+ \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n+ \t\tshift;;\n+ -k*)\n+ PSQLOPT=\"$PSQLOPT $1\"\n+ ;;\n+ --unixsocket=*)\n+ PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n+ ;;\n \t--username|-U)\n \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n \t\tshift;;\n***************\n*** 126,131 ****\n--- 135,141 ----\n \techo \"Options:\"\n \techo \" -h, --host=HOSTNAME Database server host\"\n \techo \" -p, --port=PORT Database server port\"\n+ \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n \techo \" -U, --username=USERNAME Username to connect as\"\n \techo \" -W, --password Prompt for password\"\n \techo \" -d, --dbname=DBNAME Database to install language in\"\nIndex: src/bin/scripts/createuser\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/scripts/createuser,v\nretrieving revision 1.12\ndiff -c -r1.12 createuser\n*** src/bin/scripts/createuser\t2000/11/11 22:59:48\t1.12\n--- src/bin/scripts/createuser\t2000/11/13 05:26:34\n***************\n*** 63,68 ****\n--- 63,77 ----\n --port=*)\n PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n ;;\n+ \t--unixsocket|-k)\n+ \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n+ \t\tshift;;\n+ -k*)\n+ PSQLOPT=\"$PSQLOPT $1\"\n+ ;;\n+ --unixsocket=*)\n+ PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n+ ;;\n # Note: These two specify the user to connect as (like in psql),\n # not the user you're creating.\n \t--username|-U)\n***************\n*** 135,140 ****\n--- 144,150 ----\n \techo \" -P, --pwprompt Assign a password to new user\"\n \techo \" -h, --host=HOSTNAME Database server host\"\n \techo \" -p, --port=PORT Database server port\"\n+ \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n \techo \" -U, --username=USERNAME Username to connect as (not the one to 
create)\"\n \techo \" -W, --password Prompt for password to connect\"\n \techo \" -e, --echo Show the query being sent to the backend\"\nIndex: src/bin/scripts/dropdb\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/scripts/dropdb,v\nretrieving revision 1.7\ndiff -c -r1.7 dropdb\n*** src/bin/scripts/dropdb\t2000/11/11 22:59:48\t1.7\n--- src/bin/scripts/dropdb\t2000/11/13 05:26:34\n***************\n*** 59,64 ****\n--- 59,73 ----\n --port=*)\n PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n ;;\n+ \t--unixsocket|-k)\n+ \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n+ \t\tshift;;\n+ -k*)\n+ PSQLOPT=\"$PSQLOPT $1\"\n+ ;;\n+ --unixsocket=*)\n+ PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n+ ;;\n \t--username|-U)\n \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n \t\tshift;;\n***************\n*** 103,108 ****\n--- 112,118 ----\n \techo \"Options:\"\n \techo \" -h, --host=HOSTNAME Database server host\"\n \techo \" -p, --port=PORT Database server port\"\n+ \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n \techo \" -U, --username=USERNAME Username to connect as\"\n \techo \" -W, --password Prompt for password\"\n \techo \" -i, --interactive Prompt before deleting anything\"\nIndex: src/bin/scripts/droplang\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/scripts/droplang,v\nretrieving revision 1.8\ndiff -c -r1.8 droplang\n*** src/bin/scripts/droplang\t2000/11/11 22:59:48\t1.8\n--- src/bin/scripts/droplang\t2000/11/13 05:26:34\n***************\n*** 65,70 ****\n--- 65,79 ----\n --port=*)\n PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n ;;\n+ \t--unixsocket|-k)\n+ \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n+ \t\tshift;;\n+ -k*)\n+ PSQLOPT=\"$PSQLOPT $1\"\n+ ;;\n+ --unixsocket=*)\n+ PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n+ ;;\n \t--username|-U)\n \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n 
\t\tshift;;\n***************\n*** 113,118 ****\n--- 122,128 ----\n \techo \"Options:\"\n \techo \" -h, --host=HOSTNAME Database server host\"\n \techo \" -p, --port=PORT Database server port\"\n+ \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n \techo \" -U, --username=USERNAME Username to connect as\"\n \techo \" -W, --password Prompt for password\"\n \techo \" -d, --dbname=DBNAME Database to remove language from\"\nIndex: src/bin/scripts/dropuser\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/scripts/dropuser,v\nretrieving revision 1.7\ndiff -c -r1.7 dropuser\n*** src/bin/scripts/dropuser\t2000/11/11 22:59:48\t1.7\n--- src/bin/scripts/dropuser\t2000/11/13 05:26:34\n***************\n*** 59,64 ****\n--- 59,73 ----\n --port=*)\n PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n ;;\n+ \t--unixsocket|-k)\n+ \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n+ \t\tshift;;\n+ -k*)\n+ PSQLOPT=\"$PSQLOPT $1\"\n+ ;;\n+ --unixsocket=*)\n+ PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n+ ;;\n # Note: These two specify the user to connect as (like in psql),\n # not the user you're dropping.\n \t--username|-U)\n***************\n*** 105,110 ****\n--- 114,120 ----\n \techo \"Options:\"\n \techo \" -h, --host=HOSTNAME Database server host\"\n \techo \" -p, --port=PORT Database server port\"\n+ \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n \techo \" -U, --username=USERNAME Username to connect as (not the one to drop)\"\n \techo \" -W, --password Prompt for password to connect\"\n \techo \" -i, --interactive Prompt before deleting anything\"\nIndex: src/bin/scripts/vacuumdb\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/scripts/vacuumdb,v\nretrieving revision 1.10\ndiff -c -r1.10 vacuumdb\n*** src/bin/scripts/vacuumdb\t2000/11/11 22:59:48\t1.10\n--- 
src/bin/scripts/vacuumdb\t2000/11/13 05:26:34\n***************\n*** 52,57 ****\n--- 52,66 ----\n --port=*)\n PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n ;;\n+ \t--unixsocket|-k)\n+ \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n+ \t\tshift;;\n+ -k*)\n+ PSQLOPT=\"$PSQLOPT $1\"\n+ ;;\n+ --unixsocket=*)\n+ PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n+ ;;\n \t--username|-U)\n \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n \t\tshift;;\n***************\n*** 121,126 ****\n--- 130,136 ----\n echo \"Options:\"\n \techo \" -h, --host=HOSTNAME Database server host\"\n \techo \" -p, --port=PORT Database server port\"\n+ \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n \techo \" -U, --username=USERNAME Username to connect as\"\n \techo \" -W, --password Prompt for password\"\n \techo \" -d, --dbname=DBNAME Database to vacuum\"\nIndex: src/include/libpq/libpq.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/libpq/libpq.h,v\nretrieving revision 1.39\ndiff -c -r1.39 libpq.h\n*** src/include/libpq/libpq.h\t2000/07/08 03:04:30\t1.39\n--- src/include/libpq/libpq.h\t2000/11/13 05:26:34\n***************\n*** 55,61 ****\n /*\n * prototypes for functions in pqcomm.c\n */\n! extern int\tStreamServerPort(int family, unsigned short portName, int *fdP);\n extern int\tStreamConnection(int server_fd, Port *port);\n extern void StreamClose(int sock);\n extern void pq_init(void);\n--- 55,62 ----\n /*\n * prototypes for functions in pqcomm.c\n */\n! extern int\tStreamServerPort(int family, char *hostName,\n! 
\t\t\tunsigned short portName, char *unixSocketName, int *fdP);\n extern int\tStreamConnection(int server_fd, Port *port);\n extern void StreamClose(int sock);\n extern void pq_init(void);\nIndex: src/include/libpq/pqcomm.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/libpq/pqcomm.h,v\nretrieving revision 1.43\ndiff -c -r1.43 pqcomm.h\n*** src/include/libpq/pqcomm.h\t2000/11/01 21:14:03\t1.43\n--- src/include/libpq/pqcomm.h\t2000/11/13 05:26:35\n***************\n*** 51,62 ****\n /* Configure the UNIX socket address for the well known port. */\n \n #if defined(SUN_LEN)\n! #define UNIXSOCK_PATH(sun,port) \\\n! \t(sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)), SUN_LEN(&(sun)))\n #else\n! #define UNIXSOCK_PATH(sun,port) \\\n! \t(sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)), \\\n! \t strlen((sun).sun_path)+ offsetof(struct sockaddr_un, sun_path))\n #endif\n \n /*\n--- 51,65 ----\n /* Configure the UNIX socket address for the well known port. */\n \n #if defined(SUN_LEN)\n! #define UNIXSOCK_PATH(sun,port,defpath) \\\n! ((defpath && defpath[0] != '\\0') ? (strncpy((sun).sun_path, defpath, sizeof((sun).sun_path)), (sun).sun_path[sizeof((sun).sun_path)-1] = '\\0') : sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)))\n! #define UNIXSOCK_LEN(sun) \\\n! (SUN_LEN(&(sun)))\n #else\n! #define UNIXSOCK_PATH(sun,port,defpath) \\\n! ((defpath && defpath[0] != '\\0') ? (strncpy((sun).sun_path, defpath, sizeof((sun).sun_path)), (sun).sun_path[sizeof((sun).sun_path)-1] = '\\0') : sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)))\n! #define UNIXSOCK_LEN(sun) \\\n! 
(strlen((sun).sun_path)+ offsetof(struct sockaddr_un, sun_path))\n #endif\n \n /*\n***************\n*** 176,180 ****\n--- 179,185 ----\n extern int Unix_socket_permissions;\n \n extern char * Unix_socket_group;\n+ extern char * UnixSocketName;\n+ extern char * HostName;\n \n #endif\t /* PQCOMM_H */\nIndex: src/interfaces/libpq/fe-connect.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/fe-connect.c,v\nretrieving revision 1.144\ndiff -c -r1.144 fe-connect.c\n*** src/interfaces/libpq/fe-connect.c\t2000/11/04 02:27:56\t1.144\n--- src/interfaces/libpq/fe-connect.c\t2000/11/13 05:26:39\n***************\n*** 130,135 ****\n--- 130,138 ----\n \t{\"port\", \"PGPORT\", DEF_PGPORT_STR, NULL,\n \t\"Database-Port\", \"\", 6},\n \n+ \t{\"unixsocket\", \"PGUNIXSOCKET\", NULL, NULL,\n+ \t\"Unix-Socket\", \"\", 80},\n+ \n \t{\"tty\", \"PGTTY\", DefaultTty, NULL,\n \t\"Backend-Debug-TTY\", \"D\", 40},\n \n***************\n*** 305,310 ****\n--- 308,315 ----\n \tconn->pghost = tmp ? strdup(tmp) : NULL;\n \ttmp = conninfo_getval(connOptions, \"port\");\n \tconn->pgport = tmp ? strdup(tmp) : NULL;\n+ \ttmp = conninfo_getval(connOptions, \"unixsocket\");\n+ \tconn->pgunixsocket = tmp ? strdup(tmp) : NULL;\n \ttmp = conninfo_getval(connOptions, \"tty\");\n \tconn->pgtty = tmp ? 
strdup(tmp) : NULL;\n \ttmp = conninfo_getval(connOptions, \"options\");\n***************\n*** 385,390 ****\n--- 390,398 ----\n *\t PGPORT\t identifies TCP port to which to connect if <pgport> argument\n *\t\t\t\t is NULL or a null string.\n *\n+ *\t PGUNIXSOCKET\t identifies Unix-domain socket to which to connect; default\n+ *\t\t\t\t is computed from the TCP port.\n+ *\n *\t PGTTY\t\t identifies tty to which to send messages if <pgtty> argument\n *\t\t\t\t is NULL or a null string.\n *\n***************\n*** 435,440 ****\n--- 443,456 ----\n \telse\n \t\tconn->pgport = strdup(pgport);\n \n+ #if FIX_ME\n+ \t/* we need to modify the function to accept a unix socket path */\n+ \tif (pgunixsocket)\n+ \t\tconn->pgunixsocket = strdup(pgunixsocket);\n+ \telse if ((tmp = getenv(\"PGUNIXSOCKET\")) != NULL)\n+ \t\tconn->pgunixsocket = strdup(tmp);\n+ #endif\n+ \n \tif (pgtty == NULL)\n \t{\n \t\tif ((tmp = getenv(\"PGTTY\")) == NULL)\n***************\n*** 510,522 ****\n \n /*\n * update_db_info -\n! * get all additional infos out of dbName\n *\n */\n static int\n update_db_info(PGconn *conn)\n {\n! \tchar\t *tmp,\n \t\t\t *old = conn->dbName;\n \n \tif (strchr(conn->dbName, '@') != NULL)\n--- 526,538 ----\n \n /*\n * update_db_info -\n! * get all additional info out of dbName\n *\n */\n static int\n update_db_info(PGconn *conn)\n {\n! 
\tchar\t *tmp, *tmp2,\n \t\t\t *old = conn->dbName;\n \n \tif (strchr(conn->dbName, '@') != NULL)\n***************\n*** 525,530 ****\n--- 541,548 ----\n \t\ttmp = strrchr(conn->dbName, ':');\n \t\tif (tmp != NULL)\t\t/* port number given */\n \t\t{\n+ \t\t\tif (conn->pgport)\n+ \t\t\t\tfree(conn->pgport);\n \t\t\tconn->pgport = strdup(tmp + 1);\n \t\t\t*tmp = '\\0';\n \t\t}\n***************\n*** 532,537 ****\n--- 550,557 ----\n \t\ttmp = strrchr(conn->dbName, '@');\n \t\tif (tmp != NULL)\t\t/* host name given */\n \t\t{\n+ \t\t\tif (conn->pghost)\n+ \t\t\t\tfree(conn->pghost);\n \t\t\tconn->pghost = strdup(tmp + 1);\n \t\t\t*tmp = '\\0';\n \t\t}\n***************\n*** 558,570 ****\n \n \t\t\t/*\n \t\t\t * new style:\n! \t\t\t * <tcp|unix>:postgresql://server[:port][/dbname][?options]\n \t\t\t */\n \t\t\toffset += strlen(\"postgresql://\");\n \n \t\t\ttmp = strrchr(conn->dbName + offset, '?');\n \t\t\tif (tmp != NULL)\t/* options given */\n \t\t\t{\n \t\t\t\tconn->pgoptions = strdup(tmp + 1);\n \t\t\t\t*tmp = '\\0';\n \t\t\t}\n--- 578,592 ----\n \n \t\t\t/*\n \t\t\t * new style:\n! \t\t\t * <tcp|unix>:postgresql://server[:port|:/unixsocket/path:][/dbname][?options]\n \t\t\t */\n \t\t\toffset += strlen(\"postgresql://\");\n \n \t\t\ttmp = strrchr(conn->dbName + offset, '?');\n \t\t\tif (tmp != NULL)\t/* options given */\n \t\t\t{\n+ \t\t\t\tif (conn->pgoptions)\n+ \t\t\t\t\tfree(conn->pgoptions);\n \t\t\t\tconn->pgoptions = strdup(tmp + 1);\n \t\t\t\t*tmp = '\\0';\n \t\t\t}\n***************\n*** 572,597 ****\n \t\t\ttmp = strrchr(conn->dbName + offset, '/');\n \t\t\tif (tmp != NULL)\t/* database name given */\n \t\t\t{\n \t\t\t\tconn->dbName = strdup(tmp + 1);\n \t\t\t\t*tmp = '\\0';\n \t\t\t}\n \t\t\telse\n \t\t\t{\n \t\t\t\tif ((tmp = getenv(\"PGDATABASE\")) != NULL)\n \t\t\t\t\tconn->dbName = strdup(tmp);\n \t\t\t\telse if (conn->pguser)\n \t\t\t\t\tconn->dbName = strdup(conn->pguser);\n \t\t\t}\n \n \t\t\ttmp = strrchr(old + offset, ':');\n! 
\t\t\tif (tmp != NULL)\t/* port number given */\n \t\t\t{\n- \t\t\t\tconn->pgport = strdup(tmp + 1);\n \t\t\t\t*tmp = '\\0';\n \t\t\t}\n \n \t\t\tif (strncmp(old, \"unix:\", 5) == 0)\n \t\t\t{\n \t\t\t\tconn->pghost = NULL;\n \t\t\t\tif (strcmp(old + offset, \"localhost\") != 0)\n \t\t\t\t{\n--- 594,655 ----\n \t\t\ttmp = strrchr(conn->dbName + offset, '/');\n \t\t\tif (tmp != NULL)\t/* database name given */\n \t\t\t{\n+ \t\t\t\tif (conn->dbName)\n+ \t\t\t\t\tfree(conn->dbName);\n \t\t\t\tconn->dbName = strdup(tmp + 1);\n \t\t\t\t*tmp = '\\0';\n \t\t\t}\n \t\t\telse\n \t\t\t{\n+ \t\t\t\t/* Why do we default only this value from the environment again? */\n \t\t\t\tif ((tmp = getenv(\"PGDATABASE\")) != NULL)\n+ \t\t\t\t{\n+ \t\t\t\t\tif (conn->dbName)\n+ \t\t\t\t\t\tfree(conn->dbName);\n \t\t\t\t\tconn->dbName = strdup(tmp);\n+ \t\t\t\t}\n \t\t\t\telse if (conn->pguser)\n+ \t\t\t\t{\n+ \t\t\t\t\tif (conn->dbName)\n+ \t\t\t\t\t\tfree(conn->dbName);\n \t\t\t\t\tconn->dbName = strdup(conn->pguser);\n+ \t\t\t\t}\n \t\t\t}\n \n \t\t\ttmp = strrchr(old + offset, ':');\n! 
\t\t\tif (tmp != NULL)\t/* port number or Unix socket path given */\n \t\t\t{\n \t\t\t\t*tmp = '\\0';\n+ \t\t\t\tif ((tmp2 = strchr(tmp + 1, ':')) != NULL)\n+ \t\t\t\t{\n+ \t\t\t\t\tif (strncmp(old, \"unix:\", 5) != 0)\n+ \t\t\t\t\t{\n+ \t\t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n+ \t\t\t\t\t\t\t\t \"connectDBStart() -- \"\n+ \t\t\t\t\t\t\t\t \"socket name can only be specified with \"\n+ \t\t\t\t\t\t\t\t \"non-TCP\\n\");\n+ \t\t\t\t\t\treturn 1; \n+ \t\t\t\t\t}\n+ \t\t\t\t\t*tmp2 = '\\0';\n+ \t\t\t\t\tif (conn->pgunixsocket)\n+ \t\t\t\t\t\tfree(conn->pgunixsocket);\n+ \t\t\t\t\tconn->pgunixsocket = strdup(tmp + 1);\n+ \t\t\t\t}\n+ \t\t\t\telse\n+ \t\t\t\t{\n+ \t\t\t\t\tif (conn->pgport)\n+ \t\t\t\t\t\tfree(conn->pgport);\n+ \t\t\t\t\tconn->pgport = strdup(tmp + 1);\n+ \t\t\t\t\tif (conn->pgunixsocket)\n+ \t\t\t\t\t\tfree(conn->pgunixsocket);\n+ \t\t\t\t\tconn->pgunixsocket = NULL;\n+ \t\t\t\t}\n \t\t\t}\n \n \t\t\tif (strncmp(old, \"unix:\", 5) == 0)\n \t\t\t{\n+ \t\t\t\tif (conn->pghost)\n+ \t\t\t\t\tfree(conn->pghost);\n \t\t\t\tconn->pghost = NULL;\n \t\t\t\tif (strcmp(old + offset, \"localhost\") != 0)\n \t\t\t\t{\n***************\n*** 603,610 ****\n \t\t\t\t}\n \t\t\t}\n \t\t\telse\n \t\t\t\tconn->pghost = strdup(old + offset);\n! \n \t\t\tfree(old);\n \t\t}\n \t}\n--- 661,671 ----\n \t\t\t\t}\n \t\t\t}\n \t\t\telse\n+ \t\t\t{\n+ \t\t\t\tif (conn->pghost)\n+ \t\t\t\t\tfree(conn->pghost);\n \t\t\t\tconn->pghost = strdup(old + offset);\n! \t\t\t}\n \t\t\tfree(old);\n \t\t}\n \t}\n***************\n*** 763,769 ****\n \t}\n #ifdef HAVE_UNIX_SOCKETS\n \telse\n! \t\tconn->raddr_len = UNIXSOCK_PATH(conn->raddr.un, portno);\n #endif\n \n \n--- 824,833 ----\n \t}\n #ifdef HAVE_UNIX_SOCKETS\n \telse\n! \t{\n! \t\tUNIXSOCK_PATH(conn->raddr.un, portno, conn->pgunixsocket);\n! \t\tconn->raddr_len = UNIXSOCK_LEN(conn->raddr.un);\n! \t}\n #endif\n \n \n***************\n*** 842,848 ****\n \t\t\t\t\t\t\t conn->pghost ? 
conn->pghost : \"localhost\",\n \t\t\t\t\t\t\t (family == AF_INET) ?\n \t\t\t\t\t\t\t \"TCP/IP port\" : \"Unix socket\",\n! \t\t\t\t\t\t\t conn->pgport);\n \t\t\tgoto connect_errReturn;\n \t\t}\n \t}\n--- 906,913 ----\n \t\t\t\t\t\t\t conn->pghost ? conn->pghost : \"localhost\",\n \t\t\t\t\t\t\t (family == AF_INET) ?\n \t\t\t\t\t\t\t \"TCP/IP port\" : \"Unix socket\",\n! \t\t\t\t\t\t\t (family == AF_UNIX && conn->pgunixsocket) ?\n! \t\t\t\t\t\t\t conn->pgunixsocket : conn->pgport);\n \t\t\tgoto connect_errReturn;\n \t\t}\n \t}\n***************\n*** 1143,1149 ****\n \t\t\t\t\t\t\t conn->pghost ? conn->pghost : \"localhost\",\n \t\t\t\t\t\t\t\t (conn->raddr.sa.sa_family == AF_INET) ?\n \t\t\t\t\t\t\t\t\t \"TCP/IP port\" : \"Unix socket\",\n! \t\t\t\t\t\t\t\t\t conn->pgport);\n \t\t\t\t\tgoto error_return;\n \t\t\t\t}\n \n--- 1208,1215 ----\n \t\t\t\t\t\t\t conn->pghost ? conn->pghost : \"localhost\",\n \t\t\t\t\t\t\t\t (conn->raddr.sa.sa_family == AF_INET) ?\n \t\t\t\t\t\t\t\t\t \"TCP/IP port\" : \"Unix socket\",\n! \t\t\t\t\t\t\t (conn->raddr.sa.sa_family == AF_UNIX && conn->pgunixsocket) ?\n! 
\t\t\t\t\t\t\t\t\t conn->pgunixsocket : conn->pgport);\n \t\t\t\t\tgoto error_return;\n \t\t\t\t}\n \n***************\n*** 1819,1824 ****\n--- 1885,1892 ----\n \t\tfree(conn->pghostaddr);\n \tif (conn->pgport)\n \t\tfree(conn->pgport);\n+ \tif (conn->pgunixsocket)\n+ \t\tfree(conn->pgunixsocket);\n \tif (conn->pgtty)\n \t\tfree(conn->pgtty);\n \tif (conn->pgoptions)\n***************\n*** 2526,2531 ****\n--- 2594,2607 ----\n \tif (!conn)\n \t\treturn (char *) NULL;\n \treturn conn->pgport;\n+ }\n+ \n+ char *\n+ PQunixsocket(const PGconn *conn)\n+ {\n+ \tif (!conn)\n+ \t\treturn (char *) NULL;\n+ \treturn conn->pgunixsocket;\n }\n \n char *\nIndex: src/interfaces/libpq/libpq-fe.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/libpq-fe.h,v\nretrieving revision 1.67\ndiff -c -r1.67 libpq-fe.h\n*** src/interfaces/libpq/libpq-fe.h\t2000/08/30 14:54:23\t1.67\n--- src/interfaces/libpq/libpq-fe.h\t2000/11/13 05:26:39\n***************\n*** 217,222 ****\n--- 217,223 ----\n \textern char *PQpass(const PGconn *conn);\n \textern char *PQhost(const PGconn *conn);\n \textern char *PQport(const PGconn *conn);\n+ \textern char *PQunixsocket(const PGconn *conn);\n \textern char *PQtty(const PGconn *conn);\n \textern char *PQoptions(const PGconn *conn);\n \textern ConnStatusType PQstatus(const PGconn *conn);\nIndex: src/interfaces/libpq/libpq-int.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/libpq-int.h,v\nretrieving revision 1.27\ndiff -c -r1.27 libpq-int.h\n*** src/interfaces/libpq/libpq-int.h\t2000/08/30 14:54:24\t1.27\n--- src/interfaces/libpq/libpq-int.h\t2000/11/13 05:26:39\n***************\n*** 203,208 ****\n--- 203,210 ----\n \t\t\t\t\t\t\t\t * numbers-and-dots notation. Takes\n \t\t\t\t\t\t\t\t * precedence over above. 
*/\n \tchar\t *pgport;\t\t\t/* the server's communication port */\n+ \tchar\t *pgunixsocket;\t\t/* the Unix-domain socket that the server is listening on;\n+ \t\t\t\t\t\t * if NULL, uses a default constructed from pgport */\n \tchar\t *pgtty;\t\t\t/* tty on which the backend messages is\n \t\t\t\t\t\t\t\t * displayed (NOT ACTUALLY USED???) */\n \tchar\t *pgoptions;\t\t/* options to start the backend with */\nIndex: src/interfaces/libpq/libpqdll.def\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/libpqdll.def,v\nretrieving revision 1.10\ndiff -c -r1.10 libpqdll.def\n*** src/interfaces/libpq/libpqdll.def\t2000/03/11 03:08:37\t1.10\n--- src/interfaces/libpq/libpqdll.def\t2000/11/13 05:26:39\n***************\n*** 79,81 ****\n--- 79,82 ----\n \tdestroyPQExpBuffer\t@ 76\n \tcreatePQExpBuffer\t@ 77\n \tPQconninfoFree\t\t@ 78\n+ \tPQunixsocket\t\t@ 79", "msg_date": "Mon, 13 Nov 2000 00:33:46 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL virtual hosting support" }, { "msg_contents": "OK, I have applied my version of patch. The only change is that '-h\nany' is not supported. If you use -h, you must use an IP address. Not\nusing -h is the same as -h any.\n\n\n;> Your name\t\t:\tDavid MacKenzie\n> Your email address\t:\[email protected]\n> \n> \n> System Configuration\n> ---------------------\n> Architecture (example: Intel Pentium) \t: Intel x86\n> \n> Operating System (example: Linux 2.0.26 ELF) \t: BSD/OS 4.0.1\n> \n> PostgreSQL version (example: PostgreSQL-7.0): PostgreSQL-7.0.2\n> \n> Compiler used (example: gcc 2.8.0)\t\t: gcc version 2.7.2.1\n> \n> \n> Please enter a FULL description of your problem:\n> ------------------------------------------------\n> \n> UUNET is looking into offering PostgreSQL as a part of a managed web\n> hosting product, on both shared and dedicated machines. 
We currently\n> offer Oracle and MySQL, and it would be a nice middle-ground.\n> However, as shipped, PostgreSQL lacks the following features we need\n> that MySQL has:\n> \n> 1. The ability to listen only on a particular IP address. Each\n> hosting customer has their own IP address, on which all of their\n> servers (http, ftp, real media, etc.) run.\n> 2. The ability to place the Unix-domain socket in a mode 700 directory.\n> This allows us to automatically create an empty database, with an\n> empty DBA password, for new or upgrading customers without having\n> to interactively set a DBA password and communicate it to (or from)\n> the customer. This in turn cuts down our install and upgrade times.\n> 3. The ability to connect to the Unix-domain socket from within a\n> change-rooted environment. We run CGI programs chrooted to the\n> user's home directory, which is another reason why we need to be\n> able to specify where the Unix-domain socket is, instead of /tmp.\n> 4. The ability to, if run as root, open a pid file in /var/run as\n> root, and then setuid to the desired user. (mysqld -u can almost\n> do this; I had to patch it, too).\n> \n> The patch below fixes problem 1-3. I plan to address #4, also, but\n> haven't done so yet. These diffs are big enough that they should give\n> the PG development team something to think about in the meantime :-)\n> Also, I'm about to leave for 2 weeks' vacation, so I thought I'd get\n> out what I have, which works (for the problems it tackles), now.\n> \n> With these changes, we can set up and run PostgreSQL with scripts the\n> same way we can with apache or proftpd or mysql.\n> \n> In summary, this patch makes the following enhancements:\n> \n> 1. Adds an environment variable PGUNIXSOCKET, analogous to MYSQL_UNIX_PORT,\n> and command line options -k --unix-socket to the relevant programs.\n> 2. Adds a -h option to postmaster to set the hostname or IP address to\n> listen on instead of the default INADDR_ANY.\n> 3. 
Extends some library interfaces to support the above.
> 4. Fixes a few memory leaks in PQconnectdb().
> 
> The default behavior is unchanged from stock 7.0.2; if you don't use
> any of these new features, they don't change the operation.
> 
> Index: doc/src/sgml/layout.sgml
> *** doc/src/sgml/layout.sgml	2000/06/30 21:15:36	1.1
> --- doc/src/sgml/layout.sgml	2000/07/02 03:56:05	1.2
> ***************
> *** 55,61 ****
> For example, if the database server machine is a remote machine, you
> will need to set the <envar>PGHOST</envar> environment variable to the name
> of the database server machine. The environment variable
> ! <envar>PGPORT</envar> may also have to be set. The bottom line is this: if
> you try to start an application program and it complains
> that it cannot connect to the <Application>postmaster</Application>,
> you must go back and make sure that your
> --- 55,62 ----
> For example, if the database server machine is a remote machine, you
> will need to set the <envar>PGHOST</envar> environment variable to the name
> of the database server machine. The environment variable
> ! <envar>PGPORT</envar> or <envar>PGUNIXSOCKET</envar> may also have to be set.
> ! 
The bottom line is this: if
> you try to start an application program and it complains
> that it cannot connect to the <Application>postmaster</Application>,
> you must go back and make sure that your
> Index: doc/src/sgml/libpq++.sgml
> *** doc/src/sgml/libpq++.sgml	2000/06/30 21:15:36	1.1
> --- doc/src/sgml/libpq++.sgml	2000/07/02 03:56:05	1.2
> ***************
> *** 93,98 ****
> --- 93,105 ----
> </listitem>
> <listitem>
> <para>
> + 	<envar>PGUNIXSOCKET</envar> sets the full Unix domain socket
> + 	file name for communicating with the <productname>Postgres</productname>
> + 	backend.
> + </para>
> + </listitem>
> + <listitem>
> + <para>
> 	<envar>PGDATABASE</envar> sets the default 
> 	<productname>Postgres</productname> database name.
> </para>
> Index: doc/src/sgml/libpq.sgml
> *** doc/src/sgml/libpq.sgml	2000/06/30 21:15:36	1.1
> --- doc/src/sgml/libpq.sgml	2000/07/02 03:56:05	1.2
> ***************
> *** 134,139 ****
> --- 134,148 ----
> </varlistentry>
> 
> <varlistentry>
> + <term><literal>unixsocket</literal></term>
> + <listitem>
> + <para>
> + Full path to Unix-domain socket file to connect to at the server host.
> + </para>
> + </listitem>
> + </varlistentry>
> + 
> + <varlistentry>
> <term><literal>dbname</literal></term>
> <listitem>
> <para>
> ***************
> *** 545,550 ****
> --- 554,569 ----
> 
> <listitem>
> <para>
> + <function>PQunixsocket</function>
> + Returns the name of the Unix-domain socket of the connection.
> + <synopsis>
> + char *PQunixsocket(const PGconn *conn)
> + </synopsis>
> + </para>
> + </listitem>
> + 
> + <listitem>
> + <para>
> <function>PQtty</function>
> Returns the debug tty of the connection.
> <synopsis>
> ***************
> *** 1772,1777 ****
> --- 1791,1803 ----
> <envar>PGHOST</envar> sets the default server name.
> If a non-zero-length string is specified, TCP/IP communication is used.
> Without a host name, libpq will connect using a 
local Unix domain socket.\n> + </para>\n> + </listitem>\n> + <listitem>\n> + <para>\n> + <envar>PGPORT</envar> sets the default port or local Unix domain socket\n> + file extension for communicating with the <productname>Postgres</productname>\n> + backend.\n> </para>\n> </listitem>\n> <listitem>\n> Index: doc/src/sgml/start.sgml\n> *** doc/src/sgml/start.sgml\t2000/06/30 21:15:37\t1.1\n> --- doc/src/sgml/start.sgml\t2000/07/02 03:56:05\t1.2\n> ***************\n> *** 110,117 ****\n> will need to set the <acronym>PGHOST</acronym> environment\n> variable to the name\n> of the database server machine. The environment variable\n> ! <acronym>PGPORT</acronym> may also have to be set. The bottom\n> ! line is this: if\n> you try to start an application program and it complains\n> that it cannot connect to the <application>postmaster</application>,\n> you should immediately consult your site administrator to make\n> --- 110,117 ----\n> will need to set the <acronym>PGHOST</acronym> environment\n> variable to the name\n> of the database server machine. The environment variable\n> ! <acronym>PGPORT</acronym> or <acronym>PGUNIXSOCKET</acronym> may also have to be set.\n> ! 
The bottom line is this: if\n> you try to start an application program and it complains\n> that it cannot connect to the <application>postmaster</application>,\n> you should immediately consult your site administrator to make\n> Index: doc/src/sgml/ref/createdb.sgml\n> *** doc/src/sgml/ref/createdb.sgml\t2000/06/30 21:15:37\t1.1\n> --- doc/src/sgml/ref/createdb.sgml\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 58,63 ****\n> --- 58,75 ----\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> <varlistentry>\n> <term>-U, --username <replaceable class=\"parameter\">username</replaceable></term>\n> <listitem>\n> Index: doc/src/sgml/ref/createlang.sgml\n> *** doc/src/sgml/ref/createlang.sgml\t2000/06/30 21:15:37\t1.1\n> --- doc/src/sgml/ref/createlang.sgml\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 96,101 ****\n> --- 96,113 ----\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> <varlistentry>\n> <term>-U, --username <replaceable class=\"parameter\">username</replaceable></term>\n> <listitem>\n> Index: doc/src/sgml/ref/createuser.sgml\n> *** doc/src/sgml/ref/createuser.sgml\t2000/06/30 21:15:37\t1.1\n> --- doc/src/sgml/ref/createuser.sgml\t2000/07/04 04:46:45\t1.2\n> 
***************\n> *** 59,64 ****\n> --- 59,76 ----\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> <varlistentry>\n> <term>-e, --echo</term>\n> <listitem>\n> Index: doc/src/sgml/ref/dropdb.sgml\n> *** doc/src/sgml/ref/dropdb.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/dropdb.sgml\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 58,63 ****\n> --- 58,75 ----\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> <varlistentry>\n> <term>-U, --username <replaceable class=\"parameter\">username</replaceable></term>\n> <listitem>\n> Index: doc/src/sgml/ref/droplang.sgml\n> *** doc/src/sgml/ref/droplang.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/droplang.sgml\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 96,101 ****\n> --- 96,113 ----\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + 
</varlistentry>\n> + \n> <varlistentry>\n> <term>-U, --username <replaceable class=\"parameter\">username</replaceable></term>\n> <listitem>\n> Index: doc/src/sgml/ref/dropuser.sgml\n> *** doc/src/sgml/ref/dropuser.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/dropuser.sgml\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 58,63 ****\n> --- 58,75 ----\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> <varlistentry>\n> <term>-e, --echo</term>\n> <listitem>\n> Index: doc/src/sgml/ref/pg_dump.sgml\n> *** doc/src/sgml/ref/pg_dump.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/pg_dump.sgml\t2000/07/01 18:41:22\t1.2\n> ***************\n> *** 24,30 ****\n> </refsynopsisdivinfo>\n> <synopsis>\n> pg_dump [ <replaceable class=\"parameter\">dbname</replaceable> ]\n> ! pg_dump [ -h <replaceable class=\"parameter\">host</replaceable> ] [ -p <replaceable class=\"parameter\">port</replaceable> ]\n> [ -t <replaceable class=\"parameter\">table</replaceable> ]\n> [ -a ] [ -c ] [ -d ] [ -D ] [ -i ] [ -n ] [ -N ]\n> [ -o ] [ -s ] [ -u ] [ -v ] [ -x ]\n> --- 24,32 ----\n> </refsynopsisdivinfo>\n> <synopsis>\n> pg_dump [ <replaceable class=\"parameter\">dbname</replaceable> ]\n> ! pg_dump [ -h <replaceable class=\"parameter\">host</replaceable> ]\n> ! [ -k <replaceable class=\"parameter\">path</replaceable> ]\n> ! 
[ -p <replaceable class=\"parameter\">port</replaceable> ]\n> [ -t <replaceable class=\"parameter\">table</replaceable> ]\n> [ -a ] [ -c ] [ -d ] [ -D ] [ -i ] [ -n ] [ -N ]\n> [ -o ] [ -s ] [ -u ] [ -v ] [ -x ]\n> ***************\n> *** 200,205 ****\n> --- 202,222 ----\n> \t<application>postmaster</application>\n> \tis running. Defaults to using a local Unix domain socket\n> \trather than an IP connection..\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> + <varlistentry>\n> + <term>-k <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + \tSpecifies the local Unix domain socket file path\n> + \ton which the <application>postmaster</application>\n> + \tis listening for connections.\n> + Without this option, the socket path name defaults to\n> + the value of the <envar>PGUNIXSOCKET</envar> environment\n> + \tvariable (if set), otherwise it is constructed\n> + from the port number.\n> </para>\n> </listitem>\n> </varlistentry>\n> Index: doc/src/sgml/ref/pg_dumpall.sgml\n> *** doc/src/sgml/ref/pg_dumpall.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/pg_dumpall.sgml\t2000/07/01 18:41:22\t1.2\n> ***************\n> *** 24,30 ****\n> </refsynopsisdivinfo>\n> <synopsis>\n> pg_dumpall\n> ! pg_dumpall [ -h <replaceable class=\"parameter\">host</replaceable> ] [ -p <replaceable class=\"parameter\">port</replaceable> ] [ -a ] [ -d ] [ -D ] [ -O ] [ -s ] [ -u ] [ -v ] [ -x ]\n> </synopsis>\n> \n> <refsect2 id=\"R2-APP-PG-DUMPALL-1\">\n> --- 24,33 ----\n> </refsynopsisdivinfo>\n> <synopsis>\n> pg_dumpall\n> ! pg_dumpall [ -h <replaceable class=\"parameter\">host</replaceable> ]\n> ! [ -k <replaceable class=\"parameter\">path</replaceable> ]\n> ! [ -p <replaceable class=\"parameter\">port</replaceable> ]\n> ! 
[ -a ] [ -d ] [ -D ] [ -O ] [ -s ] [ -u ] [ -v ] [ -x ]\n> </synopsis>\n> \n> <refsect2 id=\"R2-APP-PG-DUMPALL-1\">\n> ***************\n> *** 137,142 ****\n> --- 140,160 ----\n> \t<application>postmaster</application>\n> \tis running. Defaults to using a local Unix domain socket\n> \trather than an IP connection..\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> + <varlistentry>\n> + <term>-k <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + \tSpecifies the local Unix domain socket file path\n> + \ton which the <application>postmaster</application>\n> + \tis listening for connections.\n> + Without this option, the socket path name defaults to\n> + the value of the <envar>PGUNIXSOCKET</envar> environment\n> + \tvariable (if set), otherwise it is constructed\n> + from the port number.\n> </para>\n> </listitem>\n> </varlistentry>\n> Index: doc/src/sgml/ref/postmaster.sgml\n> *** doc/src/sgml/ref/postmaster.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/postmaster.sgml\t2000/07/06 07:48:31\t1.7\n> ***************\n> *** 24,30 ****\n> </refsynopsisdivinfo>\n> <synopsis>\n> postmaster [ -B <replaceable class=\"parameter\">nBuffers</replaceable> ] [ -D <replaceable class=\"parameter\">DataDir</replaceable> ] [ -N <replaceable class=\"parameter\">maxBackends</replaceable> ] [ -S ]\n> ! [ -d <replaceable class=\"parameter\">DebugLevel</replaceable> ] [ -i ] [ -l ]\n> [ -o <replaceable class=\"parameter\">BackendOptions</replaceable> ] [ -p <replaceable class=\"parameter\">port</replaceable> ] [ -n | -s ]\n> </synopsis>\n> \n> --- 24,32 ----\n> </refsynopsisdivinfo>\n> <synopsis>\n> postmaster [ -B <replaceable class=\"parameter\">nBuffers</replaceable> ] [ -D <replaceable class=\"parameter\">DataDir</replaceable> ] [ -N <replaceable class=\"parameter\">maxBackends</replaceable> ] [ -S ]\n> ! [ -d <replaceable class=\"parameter\">DebugLevel</replaceable> ]\n> ! 
[ -h <replaceable class=\"parameter\">hostname</replaceable> ] [ -i ]\n> ! [ -k <replaceable class=\"parameter\">path</replaceable> ] [ -l ]\n> [ -o <replaceable class=\"parameter\">BackendOptions</replaceable> ] [ -p <replaceable class=\"parameter\">port</replaceable> ] [ -n | -s ]\n> </synopsis>\n> \n> ***************\n> *** 124,129 ****\n> --- 126,161 ----\n> </varlistentry>\n> \n> <varlistentry>\n> + <term>-h <replaceable class=\"parameter\">hostName</replaceable></term>\n> + <listitem>\n> + <para>\n> + \tSpecifies the TCP/IP hostname or address\n> + \ton which the <application>postmaster</application>\n> + \tis to listen for connections from frontend applications. Defaults to\n> + \tthe value of the \n> + \t<envar>PGHOST</envar> \n> + \tenvironment variable, or if <envar>PGHOST</envar>\n> + \tis not set, then defaults to \"all\", meaning listen on all configured addresses\n> + \t(including localhost).\n> + </para>\n> + <para>\n> + \tIf you use a hostname or address other than \"all\", do not try to run\n> + \tmultiple instances of <application>postmaster</application> on the\n> + \tsame IP address but different ports. Doing so will result in them\n> + \tattempting (incorrectly) to use the same shared memory segments.\n> + \tAlso, if you use a hostname other than \"all\", all of the host's IP addresses\n> + \ton which <application>postmaster</application> instances are\n> + \tlistening must be distinct in the two last octets.\n> + </para>\n> + <para>\n> + \tIf you do use \"all\" (the default), then each instance must listen on a\n> + \tdifferent port (via -p or <envar>PGPORT</envar>). 
And, of course, do\n> + \tnot try to use both approaches on one host.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> + <varlistentry>\n> <term>-i</term>\n> <listitem>\n> <para>\n> ***************\n> *** 135,140 ****\n> --- 167,201 ----\n> </varlistentry>\n> \n> <varlistentry>\n> + <term>-k <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + \tSpecifies the local Unix domain socket path name\n> + \ton which the <application>postmaster</application>\n> + \tis to listen for connections from frontend applications. Defaults to\n> + \tthe value of the \n> + \t<envar>PGUNIXSOCKET</envar> \n> + \tenvironment variable, or if <envar>PGUNIXSOCKET</envar>\n> + \tis not set, then defaults to a file in <filename>/tmp</filename>\n> + \tconstructed from the port number.\n> + </para>\n> + <para>\n> + You can use this option to put the Unix-domain socket in a\n> + directory that is private to one or more users using Unix\n> + \tdirectory permissions. This is necessary for securely\n> + \tcreating databases automatically on shared machines.\n> + In that situation, also disallow all TCP/IP connections\n> + \tinitially in <filename>pg_hba.conf</filename>.\n> + \tIf you specify a socket path other than the\n> + \tdefault then all frontend applications (including\n> + \t<application>psql</application>) must specify the same\n> + \tsocket path using either command-line options or\n> + \t<envar>PGUNIXSOCKET</envar>.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> + <varlistentry>\n> <term>-l</term>\n> <listitem>\n> <para>\n> Index: doc/src/sgml/ref/psql-ref.sgml\n> *** doc/src/sgml/ref/psql-ref.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/psql-ref.sgml\t2000/07/02 03:56:05\t1.3\n> ***************\n> *** 1329,1334 ****\n> --- 1329,1347 ----\n> \n> \n> <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket 
on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> + \n> + <varlistentry>\n> <term>-H, --html</term>\n> <listitem>\n> <para>\n> Index: doc/src/sgml/ref/vacuumdb.sgml\n> *** doc/src/sgml/ref/vacuumdb.sgml\t2000/06/30 21:15:38\t1.1\n> --- doc/src/sgml/ref/vacuumdb.sgml\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 24,30 ****\n> </refsynopsisdivinfo>\n> <synopsis>\n> vacuumdb [ <replaceable class=\"parameter\">options</replaceable> ] [ --analyze | -z ]\n> ! [ --alldb | -a ] [ --verbose | -v ]\n> [ --table '<replaceable class=\"parameter\">table</replaceable> [ ( <replaceable class=\"parameter\">column</replaceable> [,...] ) ]' ] [ [-d] <replaceable class=\"parameter\">dbname</replaceable> ]\n> </synopsis>\n> \n> --- 24,30 ----\n> </refsynopsisdivinfo>\n> <synopsis>\n> vacuumdb [ <replaceable class=\"parameter\">options</replaceable> ] [ --analyze | -z ]\n> ! [ --all | -a ] [ --verbose | -v ]\n> [ --table '<replaceable class=\"parameter\">table</replaceable> [ ( <replaceable class=\"parameter\">column</replaceable> [,...] 
) ]' ] [ [-d] <replaceable class=\"parameter\">dbname</replaceable> ]\n> </synopsis>\n> \n> ***************\n> *** 128,133 ****\n> --- 128,145 ----\n> </para>\n> </listitem>\n> </varlistentry>\n> + \n> + <varlistentry>\n> + <term>-k, --unixsocket <replaceable class=\"parameter\">path</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specifies the Unix-domain socket on which the\n> + <application>postmaster</application> is running.\n> + Without this option, the socket is created in <filename>/tmp</filename>\n> + based on the port number.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> \n> <varlistentry>\n> <term>-U <replaceable class=\"parameter\">username</replaceable></term>\n> Index: src/backend/libpq/pqcomm.c\n> *** src/backend/libpq/pqcomm.c\t2000/06/30 21:15:40\t1.1\n> --- src/backend/libpq/pqcomm.c\t2000/07/01 18:50:46\t1.3\n> ***************\n> *** 42,47 ****\n> --- 42,48 ----\n> *\t\tStreamConnection\t- Create new connection with client\n> *\t\tStreamClose\t\t\t- Close a client/backend connection\n> *\t\tpq_getport\t\t- return the PGPORT setting\n> + *\t\tpq_getunixsocket\t- return the PGUNIXSOCKET setting\n> *\t\tpq_init\t\t\t- initialize libpq at backend startup\n> *\t\tpq_close\t\t- shutdown libpq at backend exit\n> *\n> ***************\n> *** 134,139 ****\n> --- 135,151 ----\n> }\n> \n> /* --------------------------------\n> + *\t\tpq_getunixsocket - return the PGUNIXSOCKET setting.\n> + *\t\tIf NULL, default to computing it based on the port.\n> + * --------------------------------\n> + */\n> + char *\n> + pq_getunixsocket(void)\n> + {\n> + \treturn getenv(\"PGUNIXSOCKET\");\n> + }\n> + \n> + /* --------------------------------\n> *\t\tpq_close - shutdown libpq at backend exit\n> *\n> * Note: in a standalone backend MyProcPort will be null,\n> ***************\n> *** 177,189 ****\n> /*\n> * StreamServerPort -- open a sock stream \"listening\" port.\n> *\n> ! 
* This initializes the Postmaster's connection-accepting port.\n> *\n> * RETURNS: STATUS_OK or STATUS_ERROR\n> */\n> \n> int\n> ! StreamServerPort(char *hostName, unsigned short portName, int *fdP)\n> {\n> \tSockAddr\tsaddr;\n> \tint\t\t\tfd,\n> --- 189,205 ----\n> /*\n> * StreamServerPort -- open a sock stream \"listening\" port.\n> *\n> ! * This initializes the Postmaster's connection-accepting port fdP.\n> ! * If hostName is \"any\", listen on all configured IP addresses.\n> ! * If hostName is NULL, listen on a Unix-domain socket instead of TCP;\n> ! * if unixSocketName is NULL, a default path (constructed in UNIX_SOCK_PATH\n> ! * in include/libpq/pqcomm.h) based on portName is used.\n> *\n> * RETURNS: STATUS_OK or STATUS_ERROR\n> */\n> \n> int\n> ! StreamServerPort(char *hostName, unsigned short portNumber, char *unixSocketName, int *fdP)\n> {\n> \tSockAddr\tsaddr;\n> \tint\t\t\tfd,\n> ***************\n> *** 227,233 ****\n> \tsaddr.sa.sa_family = family;\n> \tif (family == AF_UNIX)\n> \t{\n> ! \t\tlen = UNIXSOCK_PATH(saddr.un, portName);\n> \t\tstrcpy(sock_path, saddr.un.sun_path);\n> \n> \t\t/*\n> --- 243,250 ----\n> \tsaddr.sa.sa_family = family;\n> \tif (family == AF_UNIX)\n> \t{\n> ! \t\tUNIXSOCK_PATH(saddr.un, portNumber, unixSocketName);\n> ! \t\tlen = UNIXSOCK_LEN(saddr.un);\n> \t\tstrcpy(sock_path, saddr.un.sun_path);\n> \n> \t\t/*\n> ***************\n> *** 259,267 ****\n> \t}\n> \telse\n> \t{\n> ! \t\tsaddr.in.sin_addr.s_addr = htonl(INADDR_ANY);\n> ! \t\tsaddr.in.sin_port = htons(portName);\n> ! \t\tlen = sizeof(struct sockaddr_in);\n> \t}\n> \terr = bind(fd, &saddr.sa, len);\n> \tif (err < 0)\n> --- 276,305 ----\n> \t}\n> \telse\n> \t{\n> ! \t /* TCP/IP socket */\n> ! \t if (!strcmp(hostName, \"all\")) /* like for databases in pg_hba.conf. */\n> ! \t saddr.in.sin_addr.s_addr = htonl(INADDR_ANY);\n> ! \t else\n> ! \t {\n> ! \t struct hostent *hp;\n> ! \n> ! \t hp = gethostbyname(hostName);\n> ! \t if ((hp == NULL) || (hp->h_addrtype != AF_INET))\n> ! 
\t\t{\n> ! \t\t snprintf(PQerrormsg, PQERRORMSG_LENGTH,\n> ! \t\t\t \"FATAL: StreamServerPort: gethostbyname(%s) failed: %s\\n\",\n> ! \t\t\t hostName, hstrerror(h_errno));\n> ! \t\t fputs(PQerrormsg, stderr);\n> ! \t\t pqdebug(\"%s\", PQerrormsg);\n> ! \t\t return STATUS_ERROR;\n> ! \t\t}\n> ! \t memmove((char *) &(saddr.in.sin_addr),\n> ! \t\t (char *) hp->h_addr,\n> ! \t\t hp->h_length);\n> ! \t }\n> ! \n> ! \t saddr.in.sin_port = htons(portNumber);\n> ! \t len = sizeof(struct sockaddr_in);\n> \t}\n> \terr = bind(fd, &saddr.sa, len);\n> \tif (err < 0)\n> Index: src/backend/postmaster/postmaster.c\n> *** src/backend/postmaster/postmaster.c\t2000/06/30 21:15:42\t1.1\n> --- src/backend/postmaster/postmaster.c\t2000/07/06 07:38:21\t1.5\n> ***************\n> *** 136,143 ****\n> /* list of ports associated with still open, but incomplete connections */\n> static Dllist *PortList;\n> \n> ! static unsigned short PostPortName = 0;\n> \n> /*\n> * This is a boolean indicating that there is at least one backend that\n> * is accessing the current shared memory and semaphores. Between the\n> --- 136,150 ----\n> /* list of ports associated with still open, but incomplete connections */\n> static Dllist *PortList;\n> \n> ! /* Hostname of interface to listen on, or 'any'. */\n> ! static char *HostName = NULL;\n> \n> + /* TCP/IP port number to listen on. Also used to default the Unix-domain socket name. */\n> + static unsigned short PostPortNumber = 0;\n> + \n> + /* Override of the default Unix-domain socket name to listen on, if non-NULL. */\n> + static char *UnixSocketName = NULL;\n> + \n> /*\n> * This is a boolean indicating that there is at least one backend that\n> * is accessing the current shared memory and semaphores. Between the\n> ***************\n> *** 274,280 ****\n> static void SignalChildren(SIGNAL_ARGS);\n> static int\tCountChildren(void);\n> static int\n> ! 
SetOptsFile(char *progname, int port, char *datadir,\n> \t\t\tint assert, int nbuf, char *execfile,\n> \t\t\tint debuglvl, int netserver,\n> #ifdef USE_SSL\n> --- 281,287 ----\n> static void SignalChildren(SIGNAL_ARGS);\n> static int\tCountChildren(void);\n> static int\n> ! SetOptsFile(char *progname, char *hostname, int port, char *unixsocket, char *datadir,\n> \t\t\tint assert, int nbuf, char *execfile,\n> \t\t\tint debuglvl, int netserver,\n> #ifdef USE_SSL\n> ***************\n> *** 370,380 ****\n> {\n> \textern int\tNBuffers;\t\t/* from buffer/bufmgr.c */\n> \tint\t\t\topt;\n> - \tchar\t *hostName;\n> \tint\t\t\tstatus;\n> \tint\t\t\tsilentflag = 0;\n> \tbool\t\tDataDirOK;\t\t/* We have a usable PGDATA value */\n> - \tchar\t\thostbuf[MAXHOSTNAMELEN];\n> \tint\t\t\tnonblank_argc;\n> \tchar\t\toriginal_extraoptions[MAXPGPATH];\n> \n> --- 377,385 ----\n> ***************\n> *** 431,449 ****\n> \t */\n> \tumask((mode_t) 0077);\n> \n> - \tif (!(hostName = getenv(\"PGHOST\")))\n> - \t{\n> - \t\tif (gethostname(hostbuf, MAXHOSTNAMELEN) < 0)\n> - \t\t\tstrcpy(hostbuf, \"localhost\");\n> - \t\thostName = hostbuf;\n> - \t}\n> - \n> \tMyProcPid = getpid();\n> \tDataDir = getenv(\"PGDATA\"); /* default value */\n> \n> \topterr = 0;\n> \tIgnoreSystemIndexes(false);\n> ! \twhile ((opt = getopt(nonblank_argc, argv, \"A:a:B:b:D:d:ilm:MN:no:p:Ss\")) != EOF)\n> \t{\n> \t\tswitch (opt)\n> \t\t{\n> --- 436,447 ----\n> \t */\n> \tumask((mode_t) 0077);\n> \n> \tMyProcPid = getpid();\n> \tDataDir = getenv(\"PGDATA\"); /* default value */\n> \n> \topterr = 0;\n> \tIgnoreSystemIndexes(false);\n> ! 
\twhile ((opt = getopt(nonblank_argc, argv, \"A:a:B:b:D:d:h:ik:lm:MN:no:p:Ss\")) != EOF)\n> \t{\n> \t\tswitch (opt)\n> \t\t{\n> ***************\n> *** 498,506 ****\n> --- 496,511 ----\n> \t\t\t\tDebugLvl = atoi(optarg);\n> \t\t\t\tpg_options[TRACE_VERBOSE] = DebugLvl;\n> \t\t\t\tbreak;\n> + \t\t\tcase 'h':\n> + \t\t\t\tHostName = optarg;\n> + \t\t\t\tbreak;\n> \t\t\tcase 'i':\n> \t\t\t\tNetServer = true;\n> \t\t\t\tbreak;\n> + \t\t\tcase 'k':\n> + \t\t\t\t/* Set PGUNIXSOCKET by hand. */\n> + \t\t\t\tUnixSocketName = optarg;\n> + \t\t\t\tbreak;\n> #ifdef USE_SSL\n> \t\t\tcase 'l':\n> \t\t\t\tSecureNetServer = true;\n> ***************\n> *** 545,551 ****\n> \t\t\t\tbreak;\n> \t\t\tcase 'p':\n> \t\t\t\t/* Set PGPORT by hand. */\n> ! \t\t\t\tPostPortName = (unsigned short) atoi(optarg);\n> \t\t\t\tbreak;\n> \t\t\tcase 'S':\n> \n> --- 550,556 ----\n> \t\t\t\tbreak;\n> \t\t\tcase 'p':\n> \t\t\t\t/* Set PGPORT by hand. */\n> ! \t\t\t\tPostPortNumber = (unsigned short) atoi(optarg);\n> \t\t\t\tbreak;\n> \t\t\tcase 'S':\n> \n> ***************\n> *** 577,584 ****\n> \t/*\n> \t * Select default values for switches where needed\n> \t */\n> ! \tif (PostPortName == 0)\n> ! \t\tPostPortName = (unsigned short) pq_getport();\n> \n> \t/*\n> \t * Check for invalid combinations of switches\n> --- 582,603 ----\n> \t/*\n> \t * Select default values for switches where needed\n> \t */\n> ! \tif (HostName == NULL)\n> ! \t{\n> ! \t\tif (!(HostName = getenv(\"PGHOST\")))\n> ! \t\t{\n> ! \t\t\tHostName = \"any\";\n> ! \t\t}\n> ! \t}\n> ! \telse if (!NetServer)\n> ! \t{\n> ! \t\tfprintf(stderr, \"%s: -h requires -i.\\n\", progname);\n> ! \t\texit(1);\n> ! \t}\n> ! \tif (PostPortNumber == 0)\n> ! \t\tPostPortNumber = (unsigned short) pq_getport();\n> ! \tif (UnixSocketName == NULL)\n> ! \t\tUnixSocketName = pq_getunixsocket();\n> \n> \t/*\n> \t * Check for invalid combinations of switches\n> ***************\n> *** 622,628 ****\n> \n> \tif (NetServer)\n> \t{\n> ! 
\t\tstatus = StreamServerPort(hostName, PostPortName, &ServerSock_INET);\n> \t\tif (status != STATUS_OK)\n> \t\t{\n> \t\t\tfprintf(stderr, \"%s: cannot create INET stream port\\n\",\n> --- 641,647 ----\n> \n> \tif (NetServer)\n> \t{\n> ! \t\tstatus = StreamServerPort(HostName, PostPortNumber, NULL, &ServerSock_INET);\n> \t\tif (status != STATUS_OK)\n> \t\t{\n> \t\t\tfprintf(stderr, \"%s: cannot create INET stream port\\n\",\n> ***************\n> *** 632,638 ****\n> \t}\n> \n> #if !defined(__CYGWIN32__) && !defined(__QNX__)\n> ! \tstatus = StreamServerPort(NULL, PostPortName, &ServerSock_UNIX);\n> \tif (status != STATUS_OK)\n> \t{\n> \t\tfprintf(stderr, \"%s: cannot create UNIX stream port\\n\",\n> --- 651,657 ----\n> \t}\n> \n> #if !defined(__CYGWIN32__) && !defined(__QNX__)\n> ! \tstatus = StreamServerPort(NULL, PostPortNumber, UnixSocketName, &ServerSock_UNIX);\n> \tif (status != STATUS_OK)\n> \t{\n> \t\tfprintf(stderr, \"%s: cannot create UNIX stream port\\n\",\n> ***************\n> *** 642,648 ****\n> #endif\n> \t/* set up shared memory and semaphores */\n> \tEnableMemoryContext(TRUE);\n> ! \treset_shared(PostPortName);\n> \n> \t/*\n> \t * Initialize the list of active backends.\tThis list is only used for\n> --- 661,667 ----\n> #endif\n> \t/* set up shared memory and semaphores */\n> \tEnableMemoryContext(TRUE);\n> ! \treset_shared(PostPortNumber);\n> \n> \t/*\n> \t * Initialize the list of active backends.\tThis list is only used for\n> ***************\n> *** 664,670 ****\n> \t\t{\n> \t\t\tif (SetOptsFile(\n> \t\t\t\t\t\t\tprogname,\t/* postmaster executable file */\n> ! \t\t\t\t\t\t\tPostPortName,\t\t/* port number */\n> \t\t\t\t\t\t\tDataDir,\t/* PGDATA */\n> \t\t\t\t\t\t\tassert_enabled,\t\t/* whether -A is specified\n> \t\t\t\t\t\t\t\t\t\t\t\t * or not */\n> --- 683,691 ----\n> \t\t{\n> \t\t\tif (SetOptsFile(\n> \t\t\t\t\t\t\tprogname,\t/* postmaster executable file */\n> ! \t\t\t\t\t\t\tHostName, /* IP address to bind to */\n> ! 
\t\t\t\t\t\t\tPostPortNumber,\t\t/* port number */\n> ! \t\t\t\t\t\t\tUnixSocketName,\t/* PGUNIXSOCKET */\n> \t\t\t\t\t\t\tDataDir,\t/* PGDATA */\n> \t\t\t\t\t\t\tassert_enabled,\t\t/* whether -A is specified\n> \t\t\t\t\t\t\t\t\t\t\t\t * or not */\n> ***************\n> *** 753,759 ****\n> \t\t{\n> \t\t\tif (SetOptsFile(\n> \t\t\t\t\t\t\tprogname,\t/* postmaster executable file */\n> ! \t\t\t\t\t\t\tPostPortName,\t\t/* port number */\n> \t\t\t\t\t\t\tDataDir,\t/* PGDATA */\n> \t\t\t\t\t\t\tassert_enabled,\t\t/* whether -A is specified\n> \t\t\t\t\t\t\t\t\t\t\t\t * or not */\n> --- 774,782 ----\n> \t\t{\n> \t\t\tif (SetOptsFile(\n> \t\t\t\t\t\t\tprogname,\t/* postmaster executable file */\n> ! \t\t\t\t\t\t\tHostName, /* IP address to bind to */\n> ! \t\t\t\t\t\t\tPostPortNumber,\t\t/* port number */\n> ! \t\t\t\t\t\t\tUnixSocketName,\t/* PGUNIXSOCKET */\n> \t\t\t\t\t\t\tDataDir,\t/* PGDATA */\n> \t\t\t\t\t\t\tassert_enabled,\t\t/* whether -A is specified\n> \t\t\t\t\t\t\t\t\t\t\t\t * or not */\n> ***************\n> *** 837,843 ****\n> --- 860,868 ----\n> \tfprintf(stderr, \"\\t-a system\\tuse this authentication system\\n\");\n> \tfprintf(stderr, \"\\t-b backend\\tuse a specific backend server executable\\n\");\n> \tfprintf(stderr, \"\\t-d [1-5]\\tset debugging level\\n\");\n> + \tfprintf(stderr, \"\\t-h hostname\\tspecify hostname or IP address or 'any' for postmaster to listen on (also use -i)\\n\");\n> \tfprintf(stderr, \"\\t-i \\t\\tlisten on TCP/IP sockets as well as Unix domain socket\\n\");\n> + \tfprintf(stderr, \"\\t-k path\\tspecify Unix-domain socket name for postmaster to listen on\\n\");\n> #ifdef USE_SSL\n> \tfprintf(stderr, \" \\t-l \\t\\tfor TCP/IP sockets, listen only on SSL connections\\n\");\n> #endif\n> ***************\n> *** 1318,1328 ****\n> --- 1343,1417 ----\n> }\n> \n> /*\n> + * get_host_port -- return a pseudo port number (16 bits)\n> + * derived from the primary IP address of HostName.\n> + */\n> + static unsigned short\n> + 
get_host_port(void)\n> + {\n> + \tstatic unsigned short hostPort = 0;\n> + \n> + \tif (hostPort == 0)\n> + \t{\n> + \t\tSockAddr\tsaddr;\n> + \t\tstruct hostent *hp;\n> + \n> + \t\thp = gethostbyname(HostName);\n> + \t\tif ((hp == NULL) || (hp->h_addrtype != AF_INET))\n> + \t\t{\n> + \t\t\tchar msg[1024];\n> + \t\t\tsnprintf(msg, sizeof(msg),\n> + \t\t\t\t \"FATAL: get_host_port: gethostbyname(%s) failed: %s\\n\",\n> + \t\t\t\t HostName, hstrerror(h_errno));\n> + \t\t\tfputs(msg, stderr);\n> + \t\t\tpqdebug(\"%s\", msg);\n> + \t\t\texit(1);\n> + \t\t}\n> + \t\tmemmove((char *) &(saddr.in.sin_addr),\n> + \t\t\t(char *) hp->h_addr,\n> + \t\t\thp->h_length);\n> + \t\thostPort = ntohl(saddr.in.sin_addr.s_addr) & 0xFFFF;\n> + \t}\n> + \n> + \treturn hostPort;\n> + }\n> + \n> + /*\n> * reset_shared -- reset shared memory and semaphores\n> */\n> static void\n> reset_shared(unsigned short port)\n> {\n> + \t/*\n> + \t * A typical ipc_key is 5432001, which is port 5432, sequence\n> + \t * number 0, and 01 as the index in IPCKeyGetBufferMemoryKey().\n> + \t * The 32-bit INT_MAX is 2147483 6 47.\n> + \t *\n> + \t * The default algorithm for calculating the IPC keys assumes that all\n> + \t * instances of postmaster on a given host are listening on different\n> + \t * ports. In order to work (prevent shared memory collisions) if you\n> + \t * run multiple PostgreSQL instances on the same port and different IP\n> + \t * addresses on a host, we change the algorithm if you give postmaster\n> + \t * the -h option, or set PGHOST, to a value other than the internal\n> + \t * default of \"any\".\n> + \t *\n> + \t * If HostName is not \"any\", then we generate the IPC keys using the\n> + \t * last two octets of the IP address instead of the port number.\n> + \t * This algorithm assumes that no one will run multiple PostgreSQL\n> + \t * instances on one host using two IP addresses that have the same two\n> + \t * last octets in different class C networks. 
If anyone does, it\n> + \t * would be rare.\n> + \t *\n> + \t * So, if you use -h or PGHOST, don't try to run two instances of\n> + \t * PostgreSQL on the same IP address but different ports. If you\n> + \t * don't use them, then you must use different ports (via -p or\n> + \t * PGPORT). And, of course, don't try to use both approaches on one\n> + \t * host.\n> + \t */\n> + \n> + \tif (strcmp(HostName, \"any\"))\n> + \t\tport = get_host_port();\n> + \n> \tipc_key = port * 1000 + shmem_seq * 100;\n> \tCreateSharedMemoryAndSemaphores(ipc_key, MaxBackends);\n> \tshmem_seq += 1;\n> ***************\n> *** 1540,1546 ****\n> \t\t\t\tctime(&tnow));\n> \t\tfflush(stderr);\n> \t\tshmem_exit(0);\n> ! \t\treset_shared(PostPortName);\n> \t\tStartupPID = StartupDataBase();\n> \t\treturn;\n> \t}\n> --- 1629,1635 ----\n> \t\t\t\tctime(&tnow));\n> \t\tfflush(stderr);\n> \t\tshmem_exit(0);\n> ! \t\treset_shared(PostPortNumber);\n> \t\tStartupPID = StartupDataBase();\n> \t\treturn;\n> \t}\n> ***************\n> *** 1720,1726 ****\n> \t * Set up the necessary environment variables for the backend This\n> \t * should really be some sort of message....\n> \t */\n> ! \tsprintf(envEntry[0], \"POSTPORT=%d\", PostPortName);\n> \tputenv(envEntry[0]);\n> \tsprintf(envEntry[1], \"POSTID=%d\", NextBackendTag);\n> \tputenv(envEntry[1]);\n> --- 1809,1815 ----\n> \t * Set up the necessary environment variables for the backend This\n> \t * should really be some sort of message....\n> \t */\n> ! \tsprintf(envEntry[0], \"POSTPORT=%d\", PostPortNumber);\n> \tputenv(envEntry[0]);\n> \tsprintf(envEntry[1], \"POSTID=%d\", NextBackendTag);\n> \tputenv(envEntry[1]);\n> ***************\n> *** 2174,2180 ****\n> \tfor (i = 0; i < 4; ++i)\n> \t\tMemSet(ssEntry[i], 0, 2 * ARGV_SIZE);\n> \n> ! 
\tsprintf(ssEntry[0], \"POSTPORT=%d\", PostPortName);\n> \tputenv(ssEntry[0]);\n> \tsprintf(ssEntry[1], \"POSTID=%d\", NextBackendTag);\n> \tputenv(ssEntry[1]);\n> --- 2263,2269 ----\n> \tfor (i = 0; i < 4; ++i)\n> \t\tMemSet(ssEntry[i], 0, 2 * ARGV_SIZE);\n> \n> ! \tsprintf(ssEntry[0], \"POSTPORT=%d\", PostPortNumber);\n> \tputenv(ssEntry[0]);\n> \tsprintf(ssEntry[1], \"POSTID=%d\", NextBackendTag);\n> \tputenv(ssEntry[1]);\n> ***************\n> *** 2254,2260 ****\n> * Create the opts file\n> */\n> static int\n> ! SetOptsFile(char *progname, int port, char *datadir,\n> \t\t\tint assert, int nbuf, char *execfile,\n> \t\t\tint debuglvl, int netserver,\n> #ifdef USE_SSL\n> --- 2343,2349 ----\n> * Create the opts file\n> */\n> static int\n> ! SetOptsFile(char *progname, char *hostname, int port, char *unixsocket, char *datadir,\n> \t\t\tint assert, int nbuf, char *execfile,\n> \t\t\tint debuglvl, int netserver,\n> #ifdef USE_SSL\n> ***************\n> *** 2279,2284 ****\n> --- 2368,2383 ----\n> \t\treturn (-1);\n> \t}\n> \tsnprintf(opts, sizeof(opts), \"%s\\n-p %d\\n-D %s\\n\", progname, port, datadir);\n> + \tif (netserver)\n> + \t{\n> + \t\tsprintf(buf, \"-h %s\\n\", hostname);\n> + \t\tstrcat(opts, buf);\n> + \t}\n> + \tif (unixsocket)\n> + \t{\n> + \t\tsprintf(buf, \"-k %s\\n\", unixsocket);\n> + \t\tstrcat(opts, buf);\n> + \t}\n> \tif (assert)\n> \t{\n> \t\tsprintf(buf, \"-A %d\\n\", assert);\n> Index: src/bin/pg_dump/pg_dump.c\n> *** src/bin/pg_dump/pg_dump.c\t2000/06/30 21:15:44\t1.1\n> --- src/bin/pg_dump/pg_dump.c\t2000/07/01 18:41:22\t1.2\n> ***************\n> *** 140,145 ****\n> --- 140,146 ----\n> \t\t \" -D, --attribute-inserts dump data as INSERT commands with attribute names\\n\"\n> \t\t \" -h, --host <hostname> server host name\\n\"\n> \t\t \" -i, --ignore-version proceed when database version != pg_dump version\\n\"\n> + \t\t \" -k, --unixsocket <path> server Unix-domain socket name\\n\"\n> \t\" -n, --no-quotes suppress most quotes around 
identifiers\\n\"\n> \t \" -N, --quotes enable most quotes around identifiers\\n\"\n> \t\t \" -o, --oids dump object ids (oids)\\n\"\n> ***************\n> *** 158,163 ****\n> --- 159,165 ----\n> \t\t \" -D dump data as INSERT commands with attribute names\\n\"\n> \t\t \" -h <hostname> server host name\\n\"\n> \t\t \" -i proceed when database version != pg_dump version\\n\"\n> + \t\t \" -k <path> server Unix-domain socket name\\n\"\n> \t\" -n suppress most quotes around identifiers\\n\"\n> \t \" -N enable most quotes around identifiers\\n\"\n> \t\t \" -o dump object ids (oids)\\n\"\n> ***************\n> *** 579,584 ****\n> --- 581,587 ----\n> \tconst char *dbname = NULL;\n> \tconst char *pghost = NULL;\n> \tconst char *pgport = NULL;\n> + \tconst char *pgunixsocket = NULL;\n> \tchar\t *tablename = NULL;\n> \tbool\t\toids = false;\n> \tTableInfo *tblinfo;\n> ***************\n> *** 598,603 ****\n> --- 601,607 ----\n> \t\t{\"attribute-inserts\", no_argument, NULL, 'D'},\n> \t\t{\"host\", required_argument, NULL, 'h'},\n> \t\t{\"ignore-version\", no_argument, NULL, 'i'},\n> + \t\t{\"unixsocket\", required_argument, NULL, 'k'},\n> \t\t{\"no-quotes\", no_argument, NULL, 'n'},\n> \t\t{\"quotes\", no_argument, NULL, 'N'},\n> \t\t{\"oids\", no_argument, NULL, 'o'},\n> ***************\n> *** 662,667 ****\n> --- 666,674 ----\n> \t\t\tcase 'i':\t\t\t/* ignore database version mismatch */\n> \t\t\t\tignore_version = true;\n> \t\t\t\tbreak;\n> + \t\t\tcase 'k':\t\t\t/* server Unix-domain socket */\n> + \t\t\t\tpgunixsocket = optarg;\n> + \t\t\t\tbreak;\n> \t\t\tcase 'n':\t\t\t/* Do not force double-quotes on\n> \t\t\t\t\t\t\t\t * identifiers */\n> \t\t\t\tforce_quotes = false;\n> ***************\n> *** 782,788 ****\n> \t\texit(1);\n> \t}\n> \n> - \t/* g_conn = PQsetdb(pghost, pgport, NULL, NULL, dbname); */\n> \tif (pghost != NULL)\n> \t{\n> \t\tsprintf(tmp_string, \"host=%s \", pghost);\n> --- 789,794 ----\n> ***************\n> *** 791,796 ****\n> --- 797,807 ----\n> \tif (pgport 
!= NULL)\n> \t{\n> \t\tsprintf(tmp_string, \"port=%s \", pgport);\n> + \t\tstrcat(connect_string, tmp_string);\n> + \t}\n> + \tif (pgunixsocket != NULL)\n> + \t{\n> + \t\tsprintf(tmp_string, \"unixsocket=%s \", pgunixsocket);\n> \t\tstrcat(connect_string, tmp_string);\n> \t}\n> \tif (dbname != NULL)\n> Index: src/bin/psql/command.c\n> *** src/bin/psql/command.c\t2000/06/30 21:15:46\t1.1\n> --- src/bin/psql/command.c\t2000/07/01 18:20:40\t1.2\n> ***************\n> *** 1199,1204 ****\n> --- 1199,1205 ----\n> \tSetVariable(pset.vars, \"USER\", NULL);\n> \tSetVariable(pset.vars, \"HOST\", NULL);\n> \tSetVariable(pset.vars, \"PORT\", NULL);\n> + \tSetVariable(pset.vars, \"UNIXSOCKET\", NULL);\n> \tSetVariable(pset.vars, \"ENCODING\", NULL);\n> \n> \t/* If dbname is \"\" then use old name, else new one (even if NULL) */\n> ***************\n> *** 1228,1233 ****\n> --- 1229,1235 ----\n> \tdo\n> \t{\n> \t\tneed_pass = false;\n> + \t\t/* FIXME use PQconnectdb to support passing the Unix socket */\n> \t\tpset.db = PQsetdbLogin(PQhost(oldconn), PQport(oldconn),\n> \t\t\t\t\t\t\t NULL, NULL, dbparam, userparam, pwparam);\n> \n> ***************\n> *** 1303,1308 ****\n> --- 1305,1311 ----\n> \tSetVariable(pset.vars, \"USER\", PQuser(pset.db));\n> \tSetVariable(pset.vars, \"HOST\", PQhost(pset.db));\n> \tSetVariable(pset.vars, \"PORT\", PQport(pset.db));\n> + \tSetVariable(pset.vars, \"UNIXSOCKET\", PQunixsocket(pset.db));\n> \tSetVariable(pset.vars, \"ENCODING\", pg_encoding_to_char(pset.encoding));\n> \n> \tpset.issuper = test_superuser(PQuser(pset.db));\n> Index: src/bin/psql/command.h\n> Index: src/bin/psql/common.c\n> *** src/bin/psql/common.c\t2000/06/30 21:15:46\t1.1\n> --- src/bin/psql/common.c\t2000/07/01 18:20:40\t1.2\n> ***************\n> *** 330,335 ****\n> --- 330,336 ----\n> \t\t\tSetVariable(pset.vars, \"DBNAME\", NULL);\n> \t\t\tSetVariable(pset.vars, \"HOST\", NULL);\n> \t\t\tSetVariable(pset.vars, \"PORT\", NULL);\n> + \t\t\tSetVariable(pset.vars, \"UNIXSOCKET\", 
NULL);\n> \t\t\tSetVariable(pset.vars, \"USER\", NULL);\n> \t\t\tSetVariable(pset.vars, \"ENCODING\", NULL);\n> \t\t\treturn NULL;\n> ***************\n> *** 509,514 ****\n> --- 510,516 ----\n> \t\t\t\tSetVariable(pset.vars, \"DBNAME\", NULL);\n> \t\t\t\tSetVariable(pset.vars, \"HOST\", NULL);\n> \t\t\t\tSetVariable(pset.vars, \"PORT\", NULL);\n> + \t\t\t\tSetVariable(pset.vars, \"UNIXSOCKET\", NULL);\n> \t\t\t\tSetVariable(pset.vars, \"USER\", NULL);\n> \t\t\t\tSetVariable(pset.vars, \"ENCODING\", NULL);\n> \t\t\t\treturn false;\n> Index: src/bin/psql/help.c\n> *** src/bin/psql/help.c\t2000/06/30 21:15:46\t1.1\n> --- src/bin/psql/help.c\t2000/07/01 18:20:40\t1.2\n> ***************\n> *** 103,108 ****\n> --- 103,118 ----\n> \tputs(\")\");\n> \n> \tputs(\" -H HTML table output mode (-P format=html)\");\n> + \n> + \t/* Display default Unix-domain socket */\n> + \tenv = getenv(\"PGUNIXSOCKET\");\n> + \tprintf(\" -k <path> Specify Unix domain socket name (default: \");\n> + \tif (env)\n> + \t\tfputs(env, stdout);\n> + \telse\n> + \t\tfputs(\"computed from the port\", stdout);\n> + \tputs(\")\");\n> + \n> \tputs(\" -l List available databases, then exit\");\n> \tputs(\" -n Disable readline\");\n> \tputs(\" -o <filename> Send query output to filename (or |pipe)\");\n> Index: src/bin/psql/prompt.c\n> *** src/bin/psql/prompt.c\t2000/06/30 21:15:46\t1.1\n> --- src/bin/psql/prompt.c\t2000/07/01 18:20:40\t1.2\n> ***************\n> *** 189,194 ****\n> --- 189,199 ----\n> \t\t\t\t\tif (pset.db && PQport(pset.db))\n> \t\t\t\t\t\tstrncpy(buf, PQport(pset.db), MAX_PROMPT_SIZE);\n> \t\t\t\t\tbreak;\n> + \t\t\t\t\t/* DB server Unix-domain socket */\n> + \t\t\t\tcase '<':\n> + \t\t\t\t\tif (pset.db && PQunixsocket(pset.db))\n> + \t\t\t\t\t\tstrncpy(buf, PQunixsocket(pset.db), MAX_PROMPT_SIZE);\n> + \t\t\t\t\tbreak;\n> \t\t\t\t\t/* DB server user name */\n> \t\t\t\tcase 'n':\n> \t\t\t\t\tif (pset.db)\n> Index: src/bin/psql/prompt.h\n> Index: src/bin/psql/settings.h\n> Index: 
src/bin/psql/startup.c\n> *** src/bin/psql/startup.c\t2000/06/30 21:15:46\t1.1\n> --- src/bin/psql/startup.c\t2000/07/01 18:20:40\t1.2\n> ***************\n> *** 66,71 ****\n> --- 66,72 ----\n> \tchar\t *dbname;\n> \tchar\t *host;\n> \tchar\t *port;\n> + \tchar\t *unixsocket;\n> \tchar\t *username;\n> \tenum _actions action;\n> \tchar\t *action_string;\n> ***************\n> *** 158,163 ****\n> --- 159,165 ----\n> \tdo\n> \t{\n> \t\tneed_pass = false;\n> + \t\t/* FIXME use PQconnectdb to allow setting the unix socket */\n> \t\tpset.db = PQsetdbLogin(options.host, options.port, NULL, NULL,\n> \t\t\toptions.action == ACT_LIST_DB ? \"template1\" : options.dbname,\n> \t\t\t\t\t\t\t username, password);\n> ***************\n> *** 202,207 ****\n> --- 204,210 ----\n> \tSetVariable(pset.vars, \"USER\", PQuser(pset.db));\n> \tSetVariable(pset.vars, \"HOST\", PQhost(pset.db));\n> \tSetVariable(pset.vars, \"PORT\", PQport(pset.db));\n> + \tSetVariable(pset.vars, \"UNIXSOCKET\", PQunixsocket(pset.db));\n> \tSetVariable(pset.vars, \"ENCODING\", pg_encoding_to_char(pset.encoding));\n> \n> #ifndef WIN32\n> ***************\n> *** 313,318 ****\n> --- 316,322 ----\n> \t\t{\"field-separator\", required_argument, NULL, 'F'},\n> \t\t{\"host\", required_argument, NULL, 'h'},\n> \t\t{\"html\", no_argument, NULL, 'H'},\n> + \t\t{\"unixsocket\", required_argument, NULL, 'k'},\n> \t\t{\"list\", no_argument, NULL, 'l'},\n> \t\t{\"no-readline\", no_argument, NULL, 'n'},\n> \t\t{\"output\", required_argument, NULL, 'o'},\n> ***************\n> *** 346,359 ****\n> \tmemset(options, 0, sizeof *options);\n> \n> #ifdef HAVE_GETOPT_LONG\n> ! \twhile ((c = getopt_long(argc, argv, \"aAc:d:eEf:F:lh:Hno:p:P:qRsStT:uU:v:VWxX?\", long_options, &optindex)) != -1)\n> #else\t\t\t\t\t\t\t/* not HAVE_GETOPT_LONG */\n> \n> \t/*\n> \t * Be sure to leave the '-' in here, so we can catch accidental long\n> \t * options.\n> \t */\n> ! 
\twhile ((c = getopt(argc, argv, \"aAc:d:eEf:F:lh:Hno:p:P:qRsStT:uU:v:VWxX?-\")) != -1)\n> #endif\t /* not HAVE_GETOPT_LONG */\n> \t{\n> \t\tswitch (c)\n> --- 350,363 ----\n> \tmemset(options, 0, sizeof *options);\n> \n> #ifdef HAVE_GETOPT_LONG\n> ! \twhile ((c = getopt_long(argc, argv, \"aAc:d:eEf:F:lh:Hk:no:p:P:qRsStT:uU:v:VWxX?\", long_options, &optindex)) != -1)\n> #else\t\t\t\t\t\t\t/* not HAVE_GETOPT_LONG */\n> \n> \t/*\n> \t * Be sure to leave the '-' in here, so we can catch accidental long\n> \t * options.\n> \t */\n> ! \twhile ((c = getopt(argc, argv, \"aAc:d:eEf:F:lh:Hk:no:p:P:qRsStT:uU:v:VWxX?-\")) != -1)\n> #endif\t /* not HAVE_GETOPT_LONG */\n> \t{\n> \t\tswitch (c)\n> ***************\n> *** 398,403 ****\n> --- 402,410 ----\n> \t\t\t\tbreak;\n> \t\t\tcase 'l':\n> \t\t\t\toptions->action = ACT_LIST_DB;\n> + \t\t\t\tbreak;\n> + \t\t\tcase 'k':\n> + \t\t\t\toptions->unixsocket = optarg;\n> \t\t\t\tbreak;\n> \t\t\tcase 'n':\n> \t\t\t\toptions->no_readline = true;\n> Index: src/bin/scripts/createdb\n> *** src/bin/scripts/createdb\t2000/06/30 21:15:46\t1.1\n> --- src/bin/scripts/createdb\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 50,55 ****\n> --- 50,64 ----\n> --port=*)\n> PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> ;;\n> + \t--unixsocket|-k)\n> + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> + \t\tshift;;\n> + -k*)\n> + PSQLOPT=\"$PSQLOPT $1\"\n> + ;;\n> + --unixsocket=*)\n> + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> + ;;\n> \t--username|-U)\n> \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n> \t\tshift;;\n> ***************\n> *** 114,119 ****\n> --- 123,129 ----\n> \techo \" -E, --encoding=ENCODING Multibyte encoding for the database\"\n> \techo \" -h, --host=HOSTNAME Database server host\"\n> \techo \" -p, --port=PORT Database server port\"\n> + \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n> \techo \" -U, --username=USERNAME Username to connect as\"\n> \techo \" -W, --password Prompt for password\"\n> 
\techo \" -e, --echo Show the query being sent to the backend\"\n> Index: src/bin/scripts/createlang.sh\n> *** src/bin/scripts/createlang.sh\t2000/06/30 21:15:46\t1.1\n> --- src/bin/scripts/createlang.sh\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 65,70 ****\n> --- 65,79 ----\n> --port=*)\n> PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> ;;\n> + \t--unixsocket|-k)\n> + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> + \t\tshift;;\n> + -k*)\n> + PSQLOPT=\"$PSQLOPT $1\"\n> + ;;\n> + --unixsocket=*)\n> + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> + ;;\n> \t--username|-U)\n> \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n> \t\tshift;;\n> ***************\n> *** 126,131 ****\n> --- 135,141 ----\n> \techo \"Options:\"\n> \techo \" -h, --host=HOSTNAME Database server host\"\n> \techo \" -p, --port=PORT Database server port\"\n> + \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n> \techo \" -U, --username=USERNAME Username to connect as\"\n> \techo \" -W, --password Prompt for password\"\n> \techo \" -d, --dbname=DBNAME Database to install language in\"\n> Index: src/bin/scripts/createuser\n> *** src/bin/scripts/createuser\t2000/06/30 21:15:46\t1.1\n> --- src/bin/scripts/createuser\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 63,68 ****\n> --- 63,77 ----\n> --port=*)\n> PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> ;;\n> + \t--unixsocket|-k)\n> + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> + \t\tshift;;\n> + -k*)\n> + PSQLOPT=\"$PSQLOPT $1\"\n> + ;;\n> + --unixsocket=*)\n> + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> + ;;\n> # Note: These two specify the user to connect as (like in psql),\n> # not the user you're creating.\n> \t--username|-U)\n> ***************\n> *** 135,140 ****\n> --- 144,150 ----\n> \techo \" -P, --pwprompt Assign a password to new user\"\n> \techo \" -h, --host=HOSTNAME Database server host\"\n> \techo \" -p, --port=PORT Database server port\"\n> + \techo \" -k, --unixsocket=PATH 
Database server Unix-domain socket name\"\n> \techo \" -U, --username=USERNAME Username to connect as (not the one to create)\"\n> \techo \" -W, --password Prompt for password to connect\"\n> \techo \" -e, --echo Show the query being sent to the backend\"\n> Index: src/bin/scripts/dropdb\n> *** src/bin/scripts/dropdb\t2000/06/30 21:15:46\t1.1\n> --- src/bin/scripts/dropdb\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 59,64 ****\n> --- 59,73 ----\n> --port=*)\n> PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> ;;\n> + \t--unixsocket|-k)\n> + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> + \t\tshift;;\n> + -k*)\n> + PSQLOPT=\"$PSQLOPT $1\"\n> + ;;\n> + --unixsocket=*)\n> + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> + ;;\n> \t--username|-U)\n> \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n> \t\tshift;;\n> ***************\n> *** 103,108 ****\n> --- 112,118 ----\n> \techo \"Options:\"\n> \techo \" -h, --host=HOSTNAME Database server host\"\n> \techo \" -p, --port=PORT Database server port\"\n> + \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n> \techo \" -U, --username=USERNAME Username to connect as\"\n> \techo \" -W, --password Prompt for password\"\n> \techo \" -i, --interactive Prompt before deleting anything\"\n> Index: src/bin/scripts/droplang\n> *** src/bin/scripts/droplang\t2000/06/30 21:15:46\t1.1\n> --- src/bin/scripts/droplang\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 65,70 ****\n> --- 65,79 ----\n> --port=*)\n> PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> ;;\n> + \t--unixsocket|-k)\n> + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> + \t\tshift;;\n> + -k*)\n> + PSQLOPT=\"$PSQLOPT $1\"\n> + ;;\n> + --unixsocket=*)\n> + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> + ;;\n> \t--username|-U)\n> \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n> \t\tshift;;\n> ***************\n> *** 113,118 ****\n> --- 122,128 ----\n> \techo \"Options:\"\n> \techo \" -h, --host=HOSTNAME Database server host\"\n> \techo \" -p, 
--port=PORT Database server port\"\n> + \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n> \techo \" -U, --username=USERNAME Username to connect as\"\n> \techo \" -W, --password Prompt for password\"\n> \techo \" -d, --dbname=DBNAME Database to remove language from\"\n> Index: src/bin/scripts/dropuser\n> *** src/bin/scripts/dropuser\t2000/06/30 21:15:46\t1.1\n> --- src/bin/scripts/dropuser\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 59,64 ****\n> --- 59,73 ----\n> --port=*)\n> PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> ;;\n> + \t--unixsocket|-k)\n> + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> + \t\tshift;;\n> + -k*)\n> + PSQLOPT=\"$PSQLOPT $1\"\n> + ;;\n> + --unixsocket=*)\n> + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> + ;;\n> # Note: These two specify the user to connect as (like in psql),\n> # not the user you're dropping.\n> \t--username|-U)\n> ***************\n> *** 105,110 ****\n> --- 114,120 ----\n> \techo \"Options:\"\n> \techo \" -h, --host=HOSTNAME Database server host\"\n> \techo \" -p, --port=PORT Database server port\"\n> + \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n> \techo \" -U, --username=USERNAME Username to connect as (not the one to drop)\"\n> \techo \" -W, --password Prompt for password to connect\"\n> \techo \" -i, --interactive Prompt before deleting anything\"\n> Index: src/bin/scripts/vacuumdb\n> *** src/bin/scripts/vacuumdb\t2000/06/30 21:15:46\t1.1\n> --- src/bin/scripts/vacuumdb\t2000/07/04 04:46:45\t1.2\n> ***************\n> *** 52,57 ****\n> --- 52,66 ----\n> --port=*)\n> PSQLOPT=\"$PSQLOPT -p \"`echo $1 | sed 's/^--port=//'`\n> ;;\n> + \t--unixsocket|-k)\n> + \t\tPSQLOPT=\"$PSQLOPT -k $2\"\n> + \t\tshift;;\n> + -k*)\n> + PSQLOPT=\"$PSQLOPT $1\"\n> + ;;\n> + --unixsocket=*)\n> + PSQLOPT=\"$PSQLOPT -k \"`echo $1 | sed 's/^--unixsocket=//'`\n> + ;;\n> \t--username|-U)\n> \t\tPSQLOPT=\"$PSQLOPT -U $2\"\n> \t\tshift;;\n> ***************\n> *** 
121,126 ****\n> --- 130,136 ----\n> echo \"Options:\"\n> \techo \" -h, --host=HOSTNAME Database server host\"\n> \techo \" -p, --port=PORT Database server port\"\n> + \techo \" -k, --unixsocket=PATH Database server Unix-domain socket name\"\n> \techo \" -U, --username=USERNAME Username to connect as\"\n> \techo \" -W, --password Prompt for password\"\n> \techo \" -d, --dbname=DBNAME Database to vacuum\"\n> Index: src/include/libpq/libpq.h\n> *** src/include/libpq/libpq.h\t2000/06/30 21:15:47\t1.1\n> --- src/include/libpq/libpq.h\t2000/07/01 18:20:40\t1.2\n> ***************\n> *** 236,246 ****\n> /*\n> * prototypes for functions in pqcomm.c\n> */\n> ! extern int\tStreamServerPort(char *hostName, unsigned short portName, int *fdP);\n> extern int\tStreamConnection(int server_fd, Port *port);\n> extern void StreamClose(int sock);\n> extern void pq_init(void);\n> extern int\tpq_getport(void);\n> extern void pq_close(void);\n> extern int\tpq_getbytes(char *s, size_t len);\n> extern int\tpq_getstring(StringInfo s);\n> --- 236,247 ----\n> /*\n> * prototypes for functions in pqcomm.c\n> */\n> ! extern int\tStreamServerPort(char *hostName, unsigned short portName, char *unixSocketName, int *fdP);\n> extern int\tStreamConnection(int server_fd, Port *port);\n> extern void StreamClose(int sock);\n> extern void pq_init(void);\n> extern int\tpq_getport(void);\n> + extern char\t*pq_getunixsocket(void);\n> extern void pq_close(void);\n> extern int\tpq_getbytes(char *s, size_t len);\n> extern int\tpq_getstring(StringInfo s);\n> Index: src/include/libpq/password.h\n> Index: src/include/libpq/pqcomm.h\n> *** src/include/libpq/pqcomm.h\t2000/06/30 21:15:47\t1.1\n> --- src/include/libpq/pqcomm.h\t2000/07/01 18:59:33\t1.6\n> ***************\n> *** 42,53 ****\n> /* Configure the UNIX socket address for the well known port. */\n> \n> #if defined(SUN_LEN)\n> ! #define UNIXSOCK_PATH(sun,port) \\\n> ! \t(sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)), SUN_LEN(&(sun)))\n> #else\n> ! 
#define UNIXSOCK_PATH(sun,port) \\\n> ! \t(sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)), \\\n> ! \t strlen((sun).sun_path)+ offsetof(struct sockaddr_un, sun_path))\n> #endif\n> \n> /*\n> --- 42,56 ----\n> /* Configure the UNIX socket address for the well known port. */\n> \n> #if defined(SUN_LEN)\n> ! #define UNIXSOCK_PATH(sun,port,defpath) \\\n> ! (defpath ? (strncpy((sun).sun_path, defpath, sizeof((sun).sun_path)), (sun).sun_path[sizeof((sun).sun_path)-1] = '\\0') : sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)))\n> ! #define UNIXSOCK_LEN(sun) \\\n> ! (SUN_LEN(&(sun)))\n> #else\n> ! #define UNIXSOCK_PATH(sun,port,defpath) \\\n> ! (defpath ? (strncpy((sun).sun_path, defpath, sizeof((sun).sun_path)), (sun).sun_path[sizeof((sun).sun_path)-1] = '\\0') : sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)))\n> ! #define UNIXSOCK_LEN(sun) \\\n> ! (strlen((sun).sun_path)+ offsetof(struct sockaddr_un, sun_path))\n> #endif\n> \n> /*\n> Index: src/interfaces/libpq/fe-connect.c\n> *** src/interfaces/libpq/fe-connect.c\t2000/06/30 21:15:51\t1.1\n> --- src/interfaces/libpq/fe-connect.c\t2000/07/01 18:50:47\t1.3\n> ***************\n> *** 125,130 ****\n> --- 125,133 ----\n> \t{\"port\", \"PGPORT\", DEF_PGPORT, NULL,\n> \t\"Database-Port\", \"\", 6},\n> \n> + \t{\"unixsocket\", \"PGUNIXSOCKET\", NULL, NULL,\n> + \t\"Unix-Socket\", \"\", 80},\n> + \n> \t{\"tty\", \"PGTTY\", DefaultTty, NULL,\n> \t\"Backend-Debug-TTY\", \"D\", 40},\n> \n> ***************\n> *** 293,298 ****\n> --- 296,303 ----\n> \tconn->pghost = tmp ? strdup(tmp) : NULL;\n> \ttmp = conninfo_getval(connOptions, \"port\");\n> \tconn->pgport = tmp ? strdup(tmp) : NULL;\n> + \ttmp = conninfo_getval(connOptions, \"unixsocket\");\n> + \tconn->pgunixsocket = tmp ? strdup(tmp) : NULL;\n> \ttmp = conninfo_getval(connOptions, \"tty\");\n> \tconn->pgtty = tmp ? 
strdup(tmp) : NULL;\n> \ttmp = conninfo_getval(connOptions, \"options\");\n> ***************\n> *** 369,374 ****\n> --- 374,382 ----\n> *\t PGPORT\t identifies TCP port to which to connect if <pgport> argument\n> *\t\t\t\t is NULL or a null string.\n> *\n> + *\t PGUNIXSOCKET\t identifies Unix-domain socket to which to connect; default\n> + *\t\t\t\t is computed from the TCP port.\n> + *\n> *\t PGTTY\t\t identifies tty to which to send messages if <pgtty> argument\n> *\t\t\t\t is NULL or a null string.\n> *\n> ***************\n> *** 422,427 ****\n> --- 430,439 ----\n> \telse\n> \t\tconn->pgport = strdup(pgport);\n> \n> + \tconn->pgunixsocket = getenv(\"PGUNIXSOCKET\");\n> + \tif (conn->pgunixsocket)\n> + \t\tconn->pgunixsocket = strdup(conn->pgunixsocket);\n> + \n> \tif ((pgtty == NULL) || pgtty[0] == '\\0')\n> \t{\n> \t\tif ((tmp = getenv(\"PGTTY\")) == NULL)\n> ***************\n> *** 489,501 ****\n> \n> /*\n> * update_db_info -\n> ! * get all additional infos out of dbName\n> *\n> */\n> static int\n> update_db_info(PGconn *conn)\n> {\n> ! \tchar\t *tmp,\n> \t\t\t *old = conn->dbName;\n> \n> \tif (strchr(conn->dbName, '@') != NULL)\n> --- 501,513 ----\n> \n> /*\n> * update_db_info -\n> ! * get all additional info out of dbName\n> *\n> */\n> static int\n> update_db_info(PGconn *conn)\n> {\n> ! 
\tchar\t *tmp, *tmp2,\n> \t\t\t *old = conn->dbName;\n> \n> \tif (strchr(conn->dbName, '@') != NULL)\n> ***************\n> *** 504,509 ****\n> --- 516,523 ----\n> \t\ttmp = strrchr(conn->dbName, ':');\n> \t\tif (tmp != NULL)\t\t/* port number given */\n> \t\t{\n> + \t\t\tif (conn->pgport)\n> + \t\t\t\tfree(conn->pgport);\n> \t\t\tconn->pgport = strdup(tmp + 1);\n> \t\t\t*tmp = '\\0';\n> \t\t}\n> ***************\n> *** 511,516 ****\n> --- 525,532 ----\n> \t\ttmp = strrchr(conn->dbName, '@');\n> \t\tif (tmp != NULL)\t\t/* host name given */\n> \t\t{\n> + \t\t\tif (conn->pghost)\n> + \t\t\t\tfree(conn->pghost);\n> \t\t\tconn->pghost = strdup(tmp + 1);\n> \t\t\t*tmp = '\\0';\n> \t\t}\n> ***************\n> *** 537,549 ****\n> \n> \t\t\t/*\n> \t\t\t * new style:\n> ! \t\t\t * <tcp|unix>:postgresql://server[:port][/dbname][?options]\n> \t\t\t */\n> \t\t\toffset += strlen(\"postgresql://\");\n> \n> \t\t\ttmp = strrchr(conn->dbName + offset, '?');\n> \t\t\tif (tmp != NULL)\t/* options given */\n> \t\t\t{\n> \t\t\t\tconn->pgoptions = strdup(tmp + 1);\n> \t\t\t\t*tmp = '\\0';\n> \t\t\t}\n> --- 553,567 ----\n> \n> \t\t\t/*\n> \t\t\t * new style:\n> ! 
\t\t\t * <tcp|unix>:postgresql://server[:port|:/unixsocket/path:][/dbname][?options]\n> \t\t\t */\n> \t\t\toffset += strlen(\"postgresql://\");\n> \n> \t\t\ttmp = strrchr(conn->dbName + offset, '?');\n> \t\t\tif (tmp != NULL)\t/* options given */\n> \t\t\t{\n> + \t\t\t\tif (conn->pgoptions)\n> + \t\t\t\t\tfree(conn->pgoptions);\n> \t\t\t\tconn->pgoptions = strdup(tmp + 1);\n> \t\t\t\t*tmp = '\\0';\n> \t\t\t}\n> ***************\n> *** 551,576 ****\n> \t\t\ttmp = strrchr(conn->dbName + offset, '/');\n> \t\t\tif (tmp != NULL)\t/* database name given */\n> \t\t\t{\n> \t\t\t\tconn->dbName = strdup(tmp + 1);\n> \t\t\t\t*tmp = '\\0';\n> \t\t\t}\n> \t\t\telse\n> \t\t\t{\n> \t\t\t\tif ((tmp = getenv(\"PGDATABASE\")) != NULL)\n> \t\t\t\t\tconn->dbName = strdup(tmp);\n> \t\t\t\telse if (conn->pguser)\n> \t\t\t\t\tconn->dbName = strdup(conn->pguser);\n> \t\t\t}\n> \n> \t\t\ttmp = strrchr(old + offset, ':');\n> ! \t\t\tif (tmp != NULL)\t/* port number given */\n> \t\t\t{\n> - \t\t\t\tconn->pgport = strdup(tmp + 1);\n> \t\t\t\t*tmp = '\\0';\n> \t\t\t}\n> \n> \t\t\tif (strncmp(old, \"unix:\", 5) == 0)\n> \t\t\t{\n> \t\t\t\tconn->pghost = NULL;\n> \t\t\t\tif (strcmp(old + offset, \"localhost\") != 0)\n> \t\t\t\t{\n> --- 569,630 ----\n> \t\t\ttmp = strrchr(conn->dbName + offset, '/');\n> \t\t\tif (tmp != NULL)\t/* database name given */\n> \t\t\t{\n> + \t\t\t\tif (conn->dbName)\n> + \t\t\t\t\tfree(conn->dbName);\n> \t\t\t\tconn->dbName = strdup(tmp + 1);\n> \t\t\t\t*tmp = '\\0';\n> \t\t\t}\n> \t\t\telse\n> \t\t\t{\n> + \t\t\t\t/* Why do we default only this value from the environment again? 
*/\n> \t\t\t\tif ((tmp = getenv(\"PGDATABASE\")) != NULL)\n> + \t\t\t\t{\n> + \t\t\t\t\tif (conn->dbName)\n> + \t\t\t\t\t\tfree(conn->dbName);\n> \t\t\t\t\tconn->dbName = strdup(tmp);\n> + \t\t\t\t}\n> \t\t\t\telse if (conn->pguser)\n> + \t\t\t\t{\n> + \t\t\t\t\tif (conn->dbName)\n> + \t\t\t\t\t\tfree(conn->dbName);\n> \t\t\t\t\tconn->dbName = strdup(conn->pguser);\n> + \t\t\t\t}\n> \t\t\t}\n> \n> \t\t\ttmp = strrchr(old + offset, ':');\n> ! \t\t\tif (tmp != NULL)\t/* port number or Unix socket path given */\n> \t\t\t{\n> \t\t\t\t*tmp = '\\0';\n> + \t\t\t\tif ((tmp2 = strchr(tmp + 1, ':')) != NULL)\n> + \t\t\t\t{\n> + \t\t\t\t\tif (strncmp(old, \"unix:\", 5) != 0)\n> + \t\t\t\t\t{\n> + \t\t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> + \t\t\t\t\t\t\t\t \"connectDBStart() -- \"\n> + \t\t\t\t\t\t\t\t \"socket name can only be specified with \"\n> + \t\t\t\t\t\t\t\t \"non-TCP\\n\");\n> + \t\t\t\t\t\treturn 1; \n> + \t\t\t\t\t}\n> + \t\t\t\t\t*tmp2 = '\\0';\n> + \t\t\t\t\tif (conn->pgunixsocket)\n> + \t\t\t\t\t\tfree(conn->pgunixsocket);\n> + \t\t\t\t\tconn->pgunixsocket = strdup(tmp + 1);\n> + \t\t\t\t}\n> + \t\t\t\telse\n> + \t\t\t\t{\n> + \t\t\t\t\tif (conn->pgport)\n> + \t\t\t\t\t\tfree(conn->pgport);\n> + \t\t\t\t\tconn->pgport = strdup(tmp + 1);\n> + \t\t\t\t\tif (conn->pgunixsocket)\n> + \t\t\t\t\t\tfree(conn->pgunixsocket);\n> + \t\t\t\t\tconn->pgunixsocket = NULL;\n> + \t\t\t\t}\n> \t\t\t}\n> \n> \t\t\tif (strncmp(old, \"unix:\", 5) == 0)\n> \t\t\t{\n> + \t\t\t\tif (conn->pghost)\n> + \t\t\t\t\tfree(conn->pghost);\n> \t\t\t\tconn->pghost = NULL;\n> \t\t\t\tif (strcmp(old + offset, \"localhost\") != 0)\n> \t\t\t\t{\n> ***************\n> *** 582,589 ****\n> \t\t\t\t}\n> \t\t\t}\n> \t\t\telse\n> \t\t\t\tconn->pghost = strdup(old + offset);\n> ! 
\n> \t\t\tfree(old);\n> \t\t}\n> \t}\n> --- 636,646 ----\n> \t\t\t\t}\n> \t\t\t}\n> \t\t\telse\n> + \t\t\t{\n> + \t\t\t\tif (conn->pghost)\n> + \t\t\t\t\tfree(conn->pghost);\n> \t\t\t\tconn->pghost = strdup(old + offset);\n> ! \t\t\t}\n> \t\t\tfree(old);\n> \t\t}\n> \t}\n> ***************\n> *** 743,749 ****\n> \t}\n> #if !defined(WIN32) && !defined(__CYGWIN32__)\n> \telse\n> ! \t\tconn->raddr_len = UNIXSOCK_PATH(conn->raddr.un, portno);\n> #endif\n> \n> \n> --- 800,809 ----\n> \t}\n> #if !defined(WIN32) && !defined(__CYGWIN32__)\n> \telse\n> ! \t{\n> ! \t\tUNIXSOCK_PATH(conn->raddr.un, portno, conn->pgunixsocket);\n> ! \t\tconn->raddr_len = UNIXSOCK_LEN(conn->raddr.un);\n> ! \t}\n> #endif\n> \n> \n> ***************\n> *** 892,898 ****\n> \t\t\t\t\t\t\t conn->pghost ? conn->pghost : \"localhost\",\n> \t\t\t\t\t\t\t (family == AF_INET) ?\n> \t\t\t\t\t\t\t \"TCP/IP port\" : \"Unix socket\",\n> ! \t\t\t\t\t\t\t conn->pgport);\n> \t\t\tgoto connect_errReturn;\n> \t\t}\n> \t}\n> --- 952,959 ----\n> \t\t\t\t\t\t\t conn->pghost ? conn->pghost : \"localhost\",\n> \t\t\t\t\t\t\t (family == AF_INET) ?\n> \t\t\t\t\t\t\t \"TCP/IP port\" : \"Unix socket\",\n> ! \t\t\t\t\t\t\t (family == AF_UNIX && conn->pgunixsocket) ?\n> ! \t\t\t\t\t\t\t conn->pgunixsocket : conn->pgport);\n> \t\t\tgoto connect_errReturn;\n> \t\t}\n> \t}\n> ***************\n> *** 1123,1129 ****\n> \t\t\t\t\t\t\t conn->pghost ? conn->pghost : \"localhost\",\n> \t\t\t\t\t\t\t\t (conn->raddr.sa.sa_family == AF_INET) ?\n> \t\t\t\t\t\t\t\t\t \"TCP/IP port\" : \"Unix socket\",\n> ! \t\t\t\t\t\t\t\t\t conn->pgport);\n> \t\t\t\t\tgoto error_return;\n> \t\t\t\t}\n> \n> --- 1184,1191 ----\n> \t\t\t\t\t\t\t conn->pghost ? conn->pghost : \"localhost\",\n> \t\t\t\t\t\t\t\t (conn->raddr.sa.sa_family == AF_INET) ?\n> \t\t\t\t\t\t\t\t\t \"TCP/IP port\" : \"Unix socket\",\n> ! \t\t\t\t\t\t\t (conn->raddr.sa.sa_family == AF_UNIX && conn->pgunixsocket) ?\n> ! 
\t\t\t\t\t\t\t\t\t conn->pgunixsocket : conn->pgport);\n> \t\t\t\t\tgoto error_return;\n> \t\t\t\t}\n> \n> ***************\n> *** 1799,1804 ****\n> --- 1861,1868 ----\n> \t\tfree(conn->pghostaddr);\n> \tif (conn->pgport)\n> \t\tfree(conn->pgport);\n> + \tif (conn->pgunixsocket)\n> + \t\tfree(conn->pgunixsocket);\n> \tif (conn->pgtty)\n> \t\tfree(conn->pgtty);\n> \tif (conn->pgoptions)\n> ***************\n> *** 2383,2388 ****\n> --- 2447,2460 ----\n> \tif (!conn)\n> \t\treturn (char *) NULL;\n> \treturn conn->pgport;\n> + }\n> + \n> + char *\n> + PQunixsocket(const PGconn *conn)\n> + {\n> + \tif (!conn)\n> + \t\treturn (char *) NULL;\n> + \treturn conn->pgunixsocket;\n> }\n> \n> char *\n> Index: src/interfaces/libpq/libpq-fe.h\n> *** src/interfaces/libpq/libpq-fe.h\t2000/06/30 21:15:51\t1.1\n> --- src/interfaces/libpq/libpq-fe.h\t2000/07/01 18:20:40\t1.2\n> ***************\n> *** 214,219 ****\n> --- 214,220 ----\n> \textern char *PQpass(const PGconn *conn);\n> \textern char *PQhost(const PGconn *conn);\n> \textern char *PQport(const PGconn *conn);\n> + \textern char *PQunixsocket(const PGconn *conn);\n> \textern char *PQtty(const PGconn *conn);\n> \textern char *PQoptions(const PGconn *conn);\n> \textern ConnStatusType PQstatus(const PGconn *conn);\n> Index: src/interfaces/libpq/libpq-int.h\n> *** src/interfaces/libpq/libpq-int.h\t2000/06/30 21:15:51\t1.1\n> --- src/interfaces/libpq/libpq-int.h\t2000/07/01 18:20:40\t1.2\n> ***************\n> *** 202,207 ****\n> --- 202,209 ----\n> \t\t\t\t\t\t\t\t * numbers-and-dots notation. Takes\n> \t\t\t\t\t\t\t\t * precedence over above. */\n> \tchar\t *pgport;\t\t\t/* the server's communication port */\n> + \tchar\t *pgunixsocket;\t\t/* the Unix-domain socket that the server is listening on;\n> + \t\t\t\t\t\t * if NULL, uses a default constructed from pgport */\n> \tchar\t *pgtty;\t\t\t/* tty on which the backend messages is\n> \t\t\t\t\t\t\t\t * displayed (NOT ACTUALLY USED???) 
*/\n> \tchar\t *pgoptions;\t\t/* options to start the backend with */\n> Index: src/interfaces/libpq/libpqdll.def\n> *** src/interfaces/libpq/libpqdll.def\t2000/06/30 21:15:51\t1.1\n> --- src/interfaces/libpq/libpqdll.def\t2000/07/01 18:20:40\t1.2\n> ***************\n> *** 79,81 ****\n> --- 79,82 ----\n> \tdestroyPQExpBuffer\t@ 76\n> \tcreatePQExpBuffer\t@ 77\n> \tPQconninfoFree\t\t@ 78\n> + \tPQunixsocket\t\t@ 79\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 13 Nov 2000 10:03:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL virtual hosting support" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > I am tempted to apply this. This is the second person who asked for\n> > binding to a single port. The patch looks quite complete, with doc\n> > changes. It appears to be a thorough job.\n> \n> Postmaster options are evil, please put something in backend/utils/guc.c. \n> (This is not the fault of the patch submitter, since this interface is new\n> for 7.1, but that still doesn't mean we should subvert it.)\n\nI have put code in guc.c to handle this, but there still are postmaster\noptions for it too.\n\n\n> \n> > > 2. 
The ability to place the Unix-domain socket in a mode 700 directory.\n> \n> This would be a rather sharp instrument to offer to the world at large,\n> because the socket file is also a lock file, so you can't just get rid of\n> it.\n> \n> If we were to offer that anyway, I'd opine that we reuse the -h option\n> (e.g., leading slash means Unix socket) rather than adding a -k option\n> everywhere.\n\nInteresting idea, though kind of cryptic.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 13 Nov 2000 10:04:31 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] PostgreSQL virtual hosting support" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > > Bruce Momjian writes:\n> > > \n> > > > I am tempted to apply this. This is the second person who asked for\n> > > > binding to a single port. The patch looks quite complete, with doc\n> > > > changes. It appears to be a thorough job.\n> > > \n> > > Postmaster options are evil, please put something in backend/utils/guc.c. \n> > > (This is not the fault of the patch submitter, since this interface is new\n> > > for 7.1, but that still doesn't mean we should subvert it.)\n> > \n> > I have put code in guc.c to handle this, but there still are postmaster\n> > options for it too.\n> \n> What happened to the concerns that were raised? The socket file is a lock\n> file, you cannot just move it around.\n> \n\nUhh, not sure. The administrator can control it. If they bypass our\nchecks by specifying a non-standard location, aren't they on their own?\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 13 Nov 2000 18:30:18 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] PostgreSQL virtual hosting support" }, { "msg_contents": "Bruce Momjian writes:\n\n> > Bruce Momjian writes:\n> > \n> > > I am tempted to apply this. This is the second person who asked for\n> > > binding to a single port. The patch looks quite complete, with doc\n> > > changes. It appears to be a thorough job.\n> > \n> > Postmaster options are evil, please put something in backend/utils/guc.c. \n> > (This is not the fault of the patch submitter, since this interface is new\n> > for 7.1, but that still doesn't mean we should subvert it.)\n> \n> I have put code in guc.c to handle this, but there still are postmaster\n> options for it too.\n\nWhat happened to the concerns that were raised? The socket file is a lock\nfile, you cannot just move it around.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 14 Nov 2000 00:33:13 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] PostgreSQL virtual hosting support" }, { "msg_contents": "OK, I have removed the -k unix socketpath option from the client side of\nthis patch, and modified libpq so if they specify a host with a leading\nslash, it will be considered a unix socket path. Attached is the\nrelevant patch to libpq.\n\n\n> > Bruce Momjian writes:\n> > \n> > > > Bruce Momjian writes:\n> > > > \n> > > > > I am tempted to apply this. This is the second person who asked for\n> > > > > binding to a single port. The patch looks quite complete, with doc\n> > > > > changes. It appears to be a thorough job.\n> > > > \n> > > > Postmaster options are evil, please put something in backend/utils/guc.c. 
\n> > > > (This is not the fault of the patch submitter, since this interface is new\n> > > > for 7.1, but that still doesn't mean we should subvert it.)\n> > > \n> > > I have put code in guc.c to handle this, but there still are postmaster\n> > > options for it too.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: fe-connect.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/fe-connect.c,v\nretrieving revision 1.145\ndiff -c -r1.145 fe-connect.c\n*** fe-connect.c\t2000/11/13 15:18:15\t1.145\n--- fe-connect.c\t2000/11/13 23:36:38\n***************\n*** 332,337 ****\n--- 332,356 ----\n \tPQconninfoFree(connOptions);\n \n \t/* ----------\n+ \t * Allow unix socket specification in the host name\n+ \t * ----------\n+ \t */\n+ \tif (conn->pghost && conn->pghost[0] == '/')\n+ \t{\n+ \t\tif (conn->pgunixsocket)\n+ \t\t\tfree(conn->pgunixsocket);\n+ \t\tconn->pgunixsocket = conn->pghost;\n+ \t\tconn->pghost = NULL;\n+ \t}\n+ \tif (conn->pghostaddr && conn->pghostaddr[0] == '/')\n+ \t{\n+ \t\tif (conn->pgunixsocket)\n+ \t\t\tfree(conn->pgunixsocket);\n+ \t\tconn->pgunixsocket = conn->pghostaddr;\n+ \t\tconn->pghostaddr = NULL;\n+ \t}\n+ \n+ \t/* ----------\n \t * Connect to the database\n \t * ----------\n \t */\n***************\n*** 443,455 ****\n \telse\n \t\tconn->pgport = strdup(pgport);\n \n! #if FIX_ME\n! \t/* we need to modify the function to accept a unix socket path */\n! \tif (pgunixsocket)\n! \t\tconn->pgunixsocket = strdup(pgunixsocket);\n! \telse if ((tmp = getenv(\"PGUNIXSOCKET\")) != NULL)\n! \t\tconn->pgunixsocket = strdup(tmp);\n! #endif\n \n \tif (pgtty == NULL)\n \t{\n--- 462,486 ----\n \telse\n \t\tconn->pgport = strdup(pgport);\n \n! \t/* ----------\n! 
\t * We don't allow unix socket path as a function parameter.\n! \t * This allows unix socket specification in the host name.\n! \t * ----------\n! \t */\n! \tif (conn->pghost && conn->pghost[0] == '/')\n! \t{\n! \t\tif (conn->pgunixsocket)\n! \t\t\tfree(conn->pgunixsocket);\n! \t\tconn->pgunixsocket = conn->pghost;\n! \t\tconn->pghost = NULL;\n! \t}\n! \tif (conn->pghostaddr && conn->pghostaddr[0] == '/')\n! \t{\n! \t\tif (conn->pgunixsocket)\n! \t\t\tfree(conn->pgunixsocket);\n! \t\tconn->pgunixsocket = conn->pghostaddr;\n! \t\tconn->pghostaddr = NULL;\n! \t}\n \n \tif (pgtty == NULL)\n \t{\n***************\n*** 778,784 ****\n \t\t{\n \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t\t \"connectDBStart() -- \"\n! \t\t\t\t\t\t \"invalid host address: %s\\n\", conn->pghostaddr);\n \t\t\tgoto connect_errReturn;\n \t\t}\n \n--- 809,815 ----\n \t\t{\n \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t\t \"connectDBStart() -- \"\n! \t\t\t\t\t\t \t \"invalid host address: %s\\n\", conn->pghostaddr);\n \t\t\tgoto connect_errReturn;\n \t\t}", "msg_date": "Mon, 13 Nov 2000 18:40:07 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] PostgreSQL virtual hosting support" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> + \tif (conn->pghostaddr && conn->pghostaddr[0] == '/')\n> + \t{\n> + \t\tif (conn->pgunixsocket)\n> + \t\t\tfree(conn->pgunixsocket);\n> + \t\tconn->pgunixsocket = conn->pghostaddr;\n> + \t\tconn->pghostaddr = NULL;\n> + \t}\n\nI would be inclined to think you should NOT look for a path in\npghostaddr, since my understanding is that that's supposed to be a\nnumeric IP address and nothing but. 
Otherwise this looks pretty\nreasonable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Nov 2000 19:01:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] PostgreSQL virtual hosting support " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> What happened to the concerns that were raised? The socket file is a lock\n> file, you cannot just move it around.\n\nGood point. IIRC, we rely on the socket file lock to ensure that you\ncan't start two postmasters with the same port number. (If both are\nstarted with -i, then you'll get a conflict on the IP port address,\nbut if one or both is started without, then the socket-file lock is\nthe only line of defense.) This is important because shared memory\nkeys are derived from the port number. I'm not sure that the code\nwill behave in a pleasant manner when two postmasters try to use the\nsame shared memory block --- most likely, death and destruction will\nensue.\n\nI think we had some discussions about changing the way that shared\nmemory keys are generated, which might make this a less critical issue.\nBut until something's done about that, this patch looks awfully\ndangerous.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Nov 2000 19:11:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] PostgreSQL virtual hosting support " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > + \tif (conn->pghostaddr && conn->pghostaddr[0] == '/')\n> > + \t{\n> > + \t\tif (conn->pgunixsocket)\n> > + \t\t\tfree(conn->pgunixsocket);\n> > + \t\tconn->pgunixsocket = conn->pghostaddr;\n> > + \t\tconn->pghostaddr = NULL;\n> > + \t}\n> \n> I would be inclined to think you should NOT look for a path in\n> pghostaddr, since my understanding is that that's supposed to be a\n> numeric IP address and nothing but. Otherwise this looks pretty\n> reasonable.\n\nFixed. 
Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 13 Nov 2000 20:01:35 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] PostgreSQL virtual hosting support" }, { "msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > What happened to the concerns that were raised? The socket file is a lock\n> > file, you cannot just move it around.\n> \n> Good point. IIRC, we rely on the socket file lock to ensure that you\n> can't start two postmasters with the same port number. (If both are\n> started with -i, then you'll get a conflict on the IP port address,\n> but if one or both is started without, then the socket-file lock is\n> the only line of defense.) This is important because shared memory\n> keys are derived from the port number. I'm not sure that the code\n> will behave in a pleasant manner when two postmasters try to use the\n> same shared memory block --- most likely, death and destruction will\n> ensue.\n> \n> I think we had some discussions about changing the way that shared\n> memory keys are generated, which might make this a less critical issue.\n> But until something's done about that, this patch looks awfully\n> dangerous.\n\nBut do we yank it out for that reason? I don't think so.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 13 Nov 2000 20:02:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] PostgreSQL virtual hosting support" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I think we had some discussions about changing the way that shared\n>> memory keys are generated, which might make this a less critical issue.\n>> But until something's done about that, this patch looks awfully\n>> dangerous.\n\n> But do we yank it out for that reason? I don't think so.\n\nDo you want to put a bright red \"THIS FEATURE MAY BE HAZARDOUS TO YOUR\nDATA\" warning in the manual? I think it'd be rather irresponsible of\nus to ship the patch without such a warning, unless someone builds a\nreplacement interlock capability (or gets rid of the need for the\ninterlock).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Nov 2000 20:14:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] PostgreSQL virtual hosting support " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> I think we had some discussions about changing the way that shared\n> >> memory keys are generated, which might make this a less critical issue.\n> >> But until something's done about that, this patch looks awfully\n> >> dangerous.\n> \n> > But do we yank it out for that reason? I don't think so.\n> \n> Do you want to put a bright red \"THIS FEATURE MAY BE HAZARDOUS TO YOUR\n> DATA\" warning in the manual? 
I think it'd be rather irresponsible of\n> us to ship the patch without such a warning, unless someone builds a\n> replacement interlock capability (or gets rid of the need for the\n> interlock).\n> \n\nSeeing that we went many releases with no lock, and people really have\nto try to have the problem by specifying a non-standard socket file, I\ndon't feel terribly concerned.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 13 Nov 2000 20:16:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] PostgreSQL virtual hosting support" }, { "msg_contents": "BTW, I thought you were backing out the unnecessary changes to the\nclient applications? pg_dump seems not to be reverted yet, for one...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Nov 2000 20:45:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] PostgreSQL virtual hosting support " }, { "msg_contents": "> BTW, I thought you were backing out the unnecessary changes to the\n> client applications? pg_dump seems not to be reverted yet, for one...\n\nI thought I had. I don't see them here. Can you tell me what you see.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 13 Nov 2000 20:47:02 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] PostgreSQL virtual hosting support" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> BTW, I thought you were backing out the unnecessary changes to the\n>> client applications? 
pg_dump seems not to be reverted yet, for one...\n\n> I thought I had. I don't see them here. Can you tell me what you see.\n\nMy apologies. I must have been looking at the wrong file versions ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Nov 2000 22:16:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] PostgreSQL virtual hosting support " }, { "msg_contents": "Bruce Momjian writes:\n\n> > I think we had some discussions about changing the way that shared\n> > memory keys are generated, which might make this a less critical issue.\n> > But until something's done about that, this patch looks awfully\n> > dangerous.\n> \n> But do we yank it out for that reason? I don't think so.\n\nNow that I read the author's description of this feature, I'm no longer\nsure what it's good for:\n\n You can use this option to put the Unix domain socket in a\n directory that is private to one or more users using Unix\n directory permissions. This is necessary for securely\n creating databases automatically on shared machines. In that\n situation, also disallow all TCP/IP connections initially in\n <filename>pg_hba.conf</filename>.\n\nYou can do that in a more stylish and safer manner by using the\nunix_socket_permissions and unix_socket_group options.\n\nI won't argue for removing it, but let's not spread the word too widely\nbefore we fix the issues. :-)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 14 Nov 2000 19:03:42 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] PostgreSQL virtual hosting support" } ]
[ { "msg_contents": "The parser currently maps the type REAL to float8. I contend that it\nshould be float4.\n\nfloat4\t=>\treal\nfloat8\t=>\tdouble precision\n\nCan we make this change?\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 7 Jul 2000 18:16:54 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "SQL float types" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> The parser currently maps the type REAL to float8. I contend that it\n> should be float4.\n> \n> float4\t=>\treal\n> float8\t=>\tdouble precision\n> \n\nIs this based on the standard documents?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jul 2000 12:19:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL float types" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> The parser currently maps the type REAL to float8. I contend that it\n> should be float4.\n\n> float4\t=>\treal\n> float8\t=>\tdouble precision\n\n> Can we make this change?\n\nNot just on your say-so. Arguments please?\n\nIn the absence of pretty compelling reasons to change, I think\n\"backwards compatibility\" will have to win the day on something\nlike this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jul 2000 13:42:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL float types " }, { "msg_contents": "Tom Lane writes:\n\n> > float4\t=>\treal\n> > float8\t=>\tdouble precision\n\n> Not just on your say-so. 
Arguments please?\n\nSQL:\n\n 22)REAL specifies the data type approximate numeric, with implementation-\n defined precision.\n\n 23)DOUBLE PRECISION specifies the data type approximate numeric,\n with implementation-defined precision that is greater than the\n implementation-defined precision of REAL.\n\nNotice that there is no \"at least\" here anywhere.\n\n \n> In the absence of pretty compelling reasons to change, I think\n> \"backwards compatibility\" will have to win the day on something like\n> this.\n\nThe REAL data type is not even documented. I'm evidently trying to get\npeople to think in terms of standard types rather than the internal,\nlow-level sounding names, but that won't work if the standard types aren't\nreally standard and the internal types don't have a standard equivalent.\n\nActually, if you read into the release history it says:\n\nSQL standard-compliance (the following details changes that makes\npostgres95 more compliant to the SQL-92 standard):\n * the following SQL types are now built-in: smallint, int(eger), float, real,\n char(N), varchar(N), date and time.\n \n The following are aliases to existing postgres types:\n smallint -> int2\n integer, int -> int4\n float, real -> float4\n char(N) and varchar(N) are implemented as truncated text types. In\n addition, char(N) does blank-padding.\n\nSo if you take that as documentation then my suggestion counts as a bug\nfix. :-)\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 8 Jul 2000 02:09:46 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL float types " }, { "msg_contents": "> Tom Lane writes:\n> \n> > > float4\t=>\treal\n> > > float8\t=>\tdouble precision\n> \n> > Not just on your say-so. 
Arguments please?\n> \n> SQL:\n> \n> 22)REAL specifies the data type approximate numeric, with implementation-\n> defined precision.\n> \n> 23)DOUBLE PRECISION specifies the data type approximate numeric,\n> with implementation-defined precision that is greater than the\n> implementation-defined precision of REAL.\n> \n> Notice that there is no \"at least\" here anywhere.\n\nYou have convinced me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jul 2000 20:29:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL float types" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>>>> float4\t=>\treal\n>>>> float8\t=>\tdouble precision\n>> Not just on your say-so. Arguments please?\n\n> SQL:\n> 22)REAL specifies the data type approximate numeric, with implementation-\n> defined precision.\n> 23)DOUBLE PRECISION specifies the data type approximate numeric,\n> with implementation-defined precision that is greater than the\n> implementation-defined precision of REAL.\n> Notice that there is no \"at least\" here anywhere.\n\nGood point.\n\n> The REAL data type is not even documented.\n\nIt isn't? In that case the compatibility argument isn't as pressing\nas I thought. OK, I'm convinced.\n\n> Actually, if you read into the release history it says:\n> The following are aliases to existing postgres types:\n> float, real -> float4\n\nActually, \"float\" without any precision spec defaults to float8 at the\nmoment, so both parts of this item in the history are wrong.\n\nBTW, are you arguing to change the float->float8 default? I think that\nwould be a bad idea. 
Offhand I don't see anything in SQL that mandates\na particular default precision for FLOAT.\n\n\t\t\tregards, tom lane\n\nPS: I seem to recall some unhappiness about the ODBC driver's mappings\nbetween Postgres float types and the ODBC type codes. You might want\nto check that while you are at it.\n", "msg_date": "Fri, 07 Jul 2000 20:40:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL float types " } ]
[ { "msg_contents": "\nI'm looking at implementing the SQL99 C interface, which looks more or\nless reasonable. There are some annoyances however. The API to describe\nthe return result SQLDescribeCol takes a pointer to a SQLSMALLINT to\nreturn the type of the returned column. There are a whole lot of #define\nvalues in the standards document with specified values for each of the\nstandard types. This is annoying because an Oid is bigger than a\nSQLSMALLINT, and the postgres oid values are not the same as the\nstandards #define values.\n\nNow what it is tempting to do is to change the API to instead take a\npointer to a Oid, and redefine the #define values to the standard oid\nvalues for postgres. However this would obviously be a change to the\nAPI. Or I could define a new API, which kinda defeats the purpose of\nusing a standard API since the standard would become largely useless for\npostgres users.\n\nAny thoughts? I'm tempted to define a new datatype\ntypedef Oid SQLDATATYPE;\nThis is what the standard should have done IMHO. It would be one of\nthose minor incompatibilities that people trying to write portable code\ncould easily fix to be portable between implementations, simply by\ndefining this variable as a SQLDATATYPE instead of SQLSMALLINT.\n\nOr I could go for a custom API. I guess it's probably all a bit of a\nwasted argument since only postgres will have implemented the API. Maybe\nwe can set the standard?\n", "msg_date": "Sat, 08 Jul 2000 02:50:53 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "libpq / SQL3" }, { "msg_contents": "Chris Bitmead writes:\n\n> I'm looking at implementing the SQL99 C interface, which looks more or\n> less reasonable. There are some annoyances however.\n\n> The API to describe the return result SQLDescribeCol takes a pointer\n> to a SQLSMALLINT to return the type of the returned column. 
There are\n> a whole lot of #define values in the standards document with specified\n> values for each of the standard types. This is annoying because an Oid\n> is bigger than a SQLSMALLINT, and the postgres oid values are not the\n> same as the standards #define values.\n\nThen it seems we need to add a column to pg_type to keep track the\n\"sqltypeid\" as an int2. It would be annoying but doable. The alternative\nfor the moment would be to hard-code the translation at the client side,\ni.e., have SQLDescribeCol translate the oid it received to some standard\nnumber, but that would have obvious problems with user-defined types.\n\n> I'm tempted to define a new datatype\n> typedef Oid SQLDATATYPE;\n> This is what the standard should have done IMHO.\n\nThe standard doesn't require that system catalogs are implemented as\nuser-space tables, but equating types to oids would have effectively\nimposed that requirement.\n\n> I guess it's probably all a bit of a wasted argument since only\n> postgres will have implemented the API. Maybe we can set the standard?\n\nI wonder. I doubt that they invented this API out of the blue. (Although\nquite honestly it sometimes looks like it. Note how they religiously avoid\npointers everywhere.) It looks like a great goal to achieve though. Having\na standard as reference is always good. (\"It is so because SQL says so.\"\n\"Your API might be nice, but ours is standard.\")\n\nBtw., I've been considering implementing this as a rather thin layer on\ntop of libpq, the idea being that those who want to write portable\napplications can use SQL/CLI, and those who want to use Postgres-specific\nfeatures use libpq. I guess you'd rather completely replace libpq? 
I'd be\nafraid of effectively abandoning libpq, with everything that's build upon\nit.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 8 Jul 2000 02:09:20 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq / SQL3" }, { "msg_contents": "> Then it seems we need to add a column to pg_type to keep track the\n> \"sqltypeid\" as an int2. It would be annoying but doable. \n\nThat occured to me, but it doesn't seem worth cluttering up the back-end\nto achieve.\n\n> The alternative\n> for the moment would be to hard-code the translation at the client side,\n> i.e., have SQLDescribeCol translate the oid it received to some standard\n> number, but that would have obvious problems with user-defined types.\n\nFor user-defined types, you're out of luck anyway as far as strictly\nfollowing the standard.\n\nI think what you're saying is strictly follow the standard anyway. I\nguess you're right, it's just annoying when the standard is slightly\nlame but could be fixed with some ever-so subtle changes.\n\n> > I'm tempted to define a new datatype\n> > typedef Oid SQLDATATYPE;\n> > This is what the standard should have done IMHO.\n> \n> The standard doesn't require that system catalogs are implemented as\n> user-space tables, but equating types to oids would have effectively\n> imposed that requirement.\n\nWhat I'm saying is that if the standard allowed for an SQLDATATYPE type,\nwhose exact type is implementation-defined, then implementations could\nchoose without affecting portablility.\n\n> > I guess it's probably all a bit of a wasted argument since only\n> > postgres will have implemented the API. Maybe we can set the standard?\n> \n> I wonder. I doubt that they invented this API out of the blue. (Although\n> quite honestly it sometimes looks like it. Note how they religiously avoid\n> pointers everywhere.) 
\n\nYes that is strange. I started off by defining various types as\nstructures thinking that all those SQLSHORTINT's everywhere were either\nsuggestions, or put there because they didn't have an idea of what the\nimplementation might do. Later I realised that maybe they really mean\nthem to be SHORTINTS, and I wonder whether I should change them back or\nwhether the client doesn't need to know. Looks like I'll change them\nback I guess to be really strict about it.\n\n> It looks like a great goal to achieve though. Having\n> a standard as reference is always good. (\"It is so because SQL says so.\"\n> \"Your API might be nice, but ours is standard.\")\n> \n> Btw., I've been considering implementing this as a rather thin layer on\n> top of libpq, \n\nThat would be worth considering, except that part of the idea of me\ngoing to the new API is to avoid some of the annoyances of libpq such as\nthe non-streaming nature of it. Maybe when everyone is comfortable with\nthe stability of the new library then libpq can be redone in terms of\nSQL3? It would be pretty easy I think.\n\n> the idea being that those who want to write portable\n> applications can use SQL/CLI, and those who want to use Postgres-specific\n> features use libpq. I guess you'd rather completely replace libpq? I'd be\n> afraid of effectively abandoning libpq, with everything that's build upon\n> it.\n", "msg_date": "Sat, 08 Jul 2000 10:55:46 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "Re: libpq / SQL3" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Then it seems we need to add a column to pg_type to keep track the\n> \"sqltypeid\" as an int2. It would be annoying but doable. 
The alternative\n> for the moment would be to hard-code the translation at the client side,\n> i.e., have SQLDescribeCol translate the oid it received to some standard\n> number, but that would have obvious problems with user-defined types.\n\nBut there are no standard numbers for user-defined types, now are there?\nMight as well use the type OID for them.\n\nAdding another column to pg_type inside the backend is not too hard,\nbut to transmit that data to the frontend in every query would mean\nan incompatible protocol change, which is a much greater amount of\npain. I doubt it's worth it. Putting the translation table into\nSQLDescribeCol is no worse than having the ODBC driver do a similar\ntranslation, which no one has complained about in my recollection.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jul 2000 22:56:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq / SQL3 " }, { "msg_contents": "Tom Lane wrote:\n> \n> Peter Eisentraut <[email protected]> writes:\n> > Then it seems we need to add a column to pg_type to keep track the\n> > \"sqltypeid\" as an int2. It would be annoying but doable. The alternative\n> > for the moment would be to hard-code the translation at the client side,\n> > i.e., have SQLDescribeCol translate the oid it received to some standard\n> > number, but that would have obvious problems with user-defined types.\n> \n> But there are no standard numbers for user-defined types, now are there?\n\nWell the standard lists numbers for each type, with the comment in the\nheader \n\"sqlcli.h Header File for SQL CLI.\n * The actual header file must contain at least the information\n * specified here, except that the comments may vary.\"\n\nSo if you are pedantic I guess you have to use their numbers?? The other\nproblem as I said is that their type is a short, whereas Oid is a long,\nso there is no guarantee it will fit. 
I guess the core types will fit\nbecause they happen to be smaller than this.\n\n> Might as well use the type OID for them.\n> \n> Adding another column to pg_type inside the backend is not too hard,\n> but to transmit that data to the frontend in every query would mean\n> an incompatible protocol change, which is a much greater amount of\n> pain. I doubt it's worth it. Putting the translation table into\n> SQLDescribeCol is no worse than having the ODBC driver do a similar\n> translation, which no one has complained about in my recollection.\n\nI agree.\n", "msg_date": "Sat, 08 Jul 2000 13:02:39 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "Re: libpq / SQL3" }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> Tom Lane wrote:\n>> But there are no standard numbers for user-defined types, now are there?\n\n> Well the standard lists numbers for each type, ...\n> So if you are pedantic I guess you have to use their numbers?? The other\n> problem as I said is that their type is a short, whereas Oid is a long,\n> so there is no guarantee it will fit.\n\nI'd read that as saying that you have to use their numbers for the types\nthat are called out in the standard. But user-defined types cannot be\ncalled out in the standard (or have they standardized prescience as well?)\nso we're on our own about how to represent those.\n\n>> Might as well use the type OID for them.\n\nI had second thoughts about this, because one of the things I think will\nbe happening in the not-too-distant future is that we'll be offering a\nconfigure-time choice about whether OID is 4 or 8 bytes (that is, long\nor long long). 
I suspect it'd be a bad idea to have core aspects of\nlibpq's API change in a binary-incompatible fashion depending on a\nserver configuration choice.\n\nWhat might be the best bet is for this translation function to return\n\"short\" as in the spec, with the spec-defined values for the datatypes\nknown to the spec, and a single \"UNKNOWN\" value for everything else.\nApps that need to tell the difference among user-defined types could\nlook at either the type OID or the type name, taking a binary-\ncompatibility risk if they insist on using the OID in binary form\n(as long as they treat it as an ASCII string they probably aren't\naffected by 4 vs 8 bytes...) But a bog-standard app would never look\nat either, because it's only using bog-standard datatypes, no?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jul 2000 14:13:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq / SQL3 " }, { "msg_contents": "> What might be the best bet is for this translation function to return\n> \"short\" as in the spec, with the spec-defined values for the datatypes\n> known to the spec, and a single \"UNKNOWN\" value for everything else.\n> Apps that need to tell the difference among user-defined types could\n> look at either the type OID or the type name, taking a binary-\n> compatibility risk if they insist on using the OID in binary form\n> (as long as they treat it as an ASCII string they probably aren't\n> affected by 4 vs 8 bytes...) But a bog-standard app would never look\n> at either, because it's only using bog-standard datatypes, no?\n\nSo you are saying map to the standard-defined values. Good idea.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Jul 2000 14:18:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq / SQL3" }, { "msg_contents": "Tom Lane wrote:\n\n> What might be the best bet is for this translation function to return\n> \"short\" as in the spec, with the spec-defined values for the datatypes\n> known to the spec, and a single \"UNKNOWN\" value for everything else.\n> Apps that need to tell the difference among user-defined types could\n> look at either the type OID or the type name, taking a binary-\n> compatibility risk if they insist on using the OID in binary form\n> (as long as they treat it as an ASCII string they probably aren't\n> affected by 4 vs 8 bytes...) But a bog-standard app would never look\n> at either, because it's only using bog-standard datatypes, no?\n\nI agree, but perhaps for different reasons. I don't see any other\nchoice.\n\nI'm making good progress on implementing the SQL99, but it is a lot\ntrickier than I thought. 
libpq is cruftier than meets the eye.\n\nCan anybody (i.e Peter :) provide any insight on how the SQL99 API\nhandles variable length datatypes where you don't know the length in a\nparticular tuple in advance?\n", "msg_date": "Sun, 09 Jul 2000 14:51:21 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "Re: libpq / SQL3" }, { "msg_contents": "Chris Bitmead writes:\n\n> Can anybody (i.e Peter :) provide any insight on how the SQL99 API\n> handles variable length datatypes where you don't know the length in a\n> particular tuple in advance?\n\nClause 5.9 \"Character string retrieval\" might provide some insight,\nalthough it's probably not what you had hoped for.\n\nT = target (where you want to store it)\nL = length of value\nV = the value\n\n b) Otherwise, let NB be the length in octets of a null\n terminator in the character set of T.\n \n Case:\n \n i) If L is not greater than (TL-NB), then the first (L+NB)\n octets of T are set to V concatenated with a single\n implementation-defined null character that terminates a\n C character string. 
The values of the remaining characters\n of T are implementation-dependent.\n \n ii) Otherwise, T is set to the first (TL-NB) octets of V\n concatenated with a single implementation-defined null\n character that terminates a C character string and a\n-=> completion condition is raised: warning - string data,\n-=> right truncation.\n\n\nSo highly robust applications would have to call DescribeCol before any\nGetData or similar call in order to allocate a sufficiently sized buffer.\nWhich is a problem if DescribeCol doesn't know about user-defined data\ntypes.\n\nBut remember that SQL does not provide any variable-without-limit length\ntypes, so there is theoretically never any uncertainty about what kind of\nbuffer to allocate if you know the query.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 9 Jul 2000 23:29:35 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq / SQL3" }, { "msg_contents": "Peter Eisentraut wrote:\n\n> So highly robust applications would have to call DescribeCol before any\n> GetData or similar call in order to allocate a sufficiently sized buffer.\n> Which is a problem if DescribeCol doesn't know about user-defined data\n> types.\n\nDescribeCol can be made to know about all data types. The problem is\nthat DescribeCol I don't think is designed to be called after every\nfetch, so it doesn't know how big each entry is.\n\n> But remember that SQL does not provide any variable-without-limit length\n> types, so there is theoretically never any uncertainty about what kind of\n> buffer to allocate if you know the query.\n\nPretty lame. But I saw somewhere in the document that GetData is able to\nretrieve big fields piece by piece. 
But I could never figure out how\nthat is supposed to happen.\n\nThen there is the stuff about handling blobs, which I get the feeling\nfrom some of the wording that this interface is supposed to handle any\nbig field, but it's also a bit obscure.\n", "msg_date": "Mon, 10 Jul 2000 10:21:00 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq / SQL3" } ]
[ { "msg_contents": "\n\nAbout a year ago, I asked whether MD5 (free code from Debian) and DES crypt()\ncould be included in PG. \n\n Does anyone know if it is possible to include them in PG now?\n\nFor example, GNU Debian distributes MD5 and crypt() in its free software section \nwithout any restriction.\n\nBTW --- if I have kept track of the situation correctly, the USA relaxed some \nof its export restrictions on this matter a while ago. Or not?\n\n\t\t\t\t\t\tKarel\n\n", "msg_date": "Fri, 7 Jul 2000 19:41:14 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "crypt and MD5 - still not wanted" }, { "msg_contents": "If I recall the prior discussion, MD5 is OK, crypt is still risky,\nbecause MD5 is not an encryption algorithm so it doesn't fall under\nthe US export laws.\n\nI believe Vince V. is working on improving the password challenge\ncode to use MD5, btw.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jul 2000 14:23:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: crypt and MD5 - still not wanted " }, { "msg_contents": "On Fri, 7 Jul 2000, Tom Lane wrote:\n\n> If I recall the prior discussion, MD5 is OK, crypt is still risky,\n> because MD5 is not an encryption algorithm so it doesn't fall under\n> the US export laws.\n> \n> I believe Vince V. 
is working on improving the password challenge\n> code to use MD5, btw.\n\nyep.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 7 Jul 2000 14:26:41 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: crypt and MD5 - still not wanted " }, { "msg_contents": "On Fri, 7 Jul 2000, Tom Lane wrote:\n\n> If I recall the prior discussion, MD5 is OK, crypt is still risky,\n> because MD5 is not an encryption algorithm so it doesn't fall under\n> the US export laws.\n> \n> I believe Vince V. is working on improving the password challenge\n> code to use MD5, btw.\n\n Not only passwords, but standard SQL functions a my drean is aggregate\nfunction md5count() too. Cool --- that is MD5 OK.\n\n\n\t\t\t\tKarel\n\n", "msg_date": "Sat, 8 Jul 2000 10:19:41 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: crypt and MD5 - still not wanted " }, { "msg_contents": "Vince Vielhaber writes:\n\n> On Fri, 7 Jul 2000, Tom Lane wrote:\n> \n> > If I recall the prior discussion, MD5 is OK, crypt is still risky,\n> > because MD5 is not an encryption algorithm so it doesn't fall under\n> > the US export laws.\n> > \n> > I believe Vince V. is working on improving the password challenge\n> > code to use MD5, btw.\n> \n> yep.\n\nIf you do that, maybe also look at the secondary password files. 
We\nprobably don't want those using a different encryption method.\n\n\n(backward compatibility alarm goes off in the distance...)\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 8 Jul 2000 16:25:49 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: crypt and MD5 - still not wanted " }, { "msg_contents": "On Sat, 8 Jul 2000, Peter Eisentraut wrote:\n\n> Vince Vielhaber writes:\n> \n> > On Fri, 7 Jul 2000, Tom Lane wrote:\n> > \n> > > If I recall the prior discussion, MD5 is OK, crypt is still risky,\n> > > because MD5 is not an encryption algorithm so it doesn't fall under\n> > > the US export laws.\n> > > \n> > > I believe Vince V. is working on improving the password challenge\n> > > code to use MD5, btw.\n> > \n> > yep.\n> \n> If you do that, maybe also look at the secondary password files. We\n> probably don't want those using a different encryption method.\n> \n> \n> (backward compatibility alarm goes off in the distance...)\n\nAlready thinking about that.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sat, 8 Jul 2000 10:50:23 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: crypt and MD5 - still not wanted " }, { "msg_contents": "Karel Zak <[email protected]> writes:\n> Not only passwords, but standard SQL functions a my drean is aggregate\n> function md5count() too. Cool --- that is MD5 OK.\n\nEr ... what? 
What would an \"aggregate function md5count()\" do?\n\nBear in mind that an aggregate function is useless if its result\ndepends on the order of its inputs ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jul 2000 11:44:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: crypt and MD5 - still not wanted " }, { "msg_contents": "\n> Karel Zak <[email protected]> writes:\n> > Not only passwords, but standard SQL functions a my drean is aggregate\n> > function md5count() too. Cool --- that is MD5 OK.\n> \n> Er ... what? What would an \"aggregate function md5count()\" do?\n\n Compute an MD5 sum over a defined set of rows.\n\n> Bear in mind that an aggregate function is useless if its result\n> depends on the order of its inputs ...\n\n Hmm, order is a problem with this idea (I did not think of that) :-(\n\n But I mean the idea is not totally idiotic --- making it work is a different \nthing...\n\n Well, I take it back. \n\t\t\t\t\t\tKarel\n\n", "msg_date": "Sun, 9 Jul 2000 11:56:21 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: crypt and MD5 - still not wanted " } ]
[ { "msg_contents": "The article on MySQL compared with PostgreSQL is up on\nwww.phpbuilder.com for all to read.... Not a bad article, pretty\nbalanced.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 07 Jul 2000 13:53:40 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Tim Perdue's article on PHPbuilder" } ]
[ { "msg_contents": "I found this in my mailbox. Is it any good?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/executor/execMain.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/executor/execMain.c,v\nretrieving revision 1.48\ndiff -c -r1.48 execMain.c\n*** execMain.c\t1998/06/15 19:28:19\t1.48\n--- execMain.c\t1998/07/19 03:35:49\n***************\n*** 522,539 ****\n \t * SELECT added by [email protected] 5/20/98 to allow \n \t * ORDER/GROUP BY have an identifier missing from the target.\n \t */\n- \tif (operation == CMD_UPDATE || operation == CMD_DELETE ||\n- \t\toperation == CMD_INSERT || operation == CMD_SELECT)\n \t{\n! \t\tJunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);\n! \t\testate->es_junkFilter = j;\n! \n \t\tif (operation == CMD_SELECT)\n! \t\t\ttupType = j->jf_cleanTupType;\n! \t}\n! \telse\n! \t\testate->es_junkFilter = NULL;\n \n \t/* ----------------\n \t *\tinitialize the \"into\" relation\n \t * ----------------\n--- 522,559 ----\n \t * SELECT added by [email protected] 5/20/98 to allow \n \t * ORDER/GROUP BY have an identifier missing from the target.\n \t */\n \t{\n! \t\tbool\tjunk_filter_needed = false;\n! \t\tList\t*tlist;\n! \t\t\n \t\tif (operation == CMD_SELECT)\n! \t\t{\n! \t\t\tforeach(tlist, targetList)\n! \t\t\t{\n! \t\t\t\tTargetEntry\t*tle = lfirst(tlist);\n! \t\n! \t\t\t\tif (tle->resdom->resjunk)\n! \t\t\t\t{\n! \t\t\t\t\tjunk_filter_needed = true;\n! \t\t\t\t\tbreak;\n! \t\t\t\t}\n! \t\t\t}\n! 
\t\t}\n \n+ \t\tif (operation == CMD_UPDATE || operation == CMD_DELETE ||\n+ \t\t\toperation == CMD_INSERT ||\n+ \t\t\t(operation == CMD_SELECT && junk_filter_needed))\n+ \t\t{\n+ \t\t\tJunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);\n+ \t\t\testate->es_junkFilter = j;\n+ \n+ \t\t\tif (operation == CMD_SELECT)\n+ \t\t\t\ttupType = j->jf_cleanTupType;\n+ \t\t}\n+ \t\telse\n+ \t\t\testate->es_junkFilter = NULL;\n+ \t}\n+ \t\n \t/* ----------------\n \t *\tinitialize the \"into\" relation\n \t * ----------------", "msg_date": "Fri, 7 Jul 2000 14:51:26 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Is this useful" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I found this in my mailbox. Is it any good?\n\nSeems to have been done long since --- see execMain.c, lines 809 ff.\n\nLooking at that code, I wonder if it's still doing unnecessary work.\nWhy would we need a junkfilter for a DELETE? We aren't going to be\nconstructing output tuples, are we?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jul 2000 14:58:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this useful " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I found this in my mailbox. Is it any good?\n> \n> Seems to have been done long since --- see execMain.c, lines 809 ff.\n> \n> Looking at that code, I wonder if it's still doing unnecessary work.\n> Why would we need a junkfilter for a DELETE? We aren't going to be\n> constructing output tuples, are we?\n\nGood question.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jul 2000 15:01:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Is this useful" } ]
[ { "msg_contents": "The oidvectortypes function, mostly used by psql \\df, formats types\nseparated by spaces:\n\n\tvarchar float8 timetz int4\n\nwhich would look a little odd when canonical type names are used:\n\n\tcharacter varying double precision time with time zone integer\n\nI suggest that we separate the types by <comma><space>, which also looks a\nlittle more like how they are invoked in a function call.\n\nConcerns?\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 7 Jul 2000 21:26:56 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Type formatting and oidvectortypes" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I suggest that we separate the types by <comma><space>, which also looks a\n> little more like how they are invoked in a function call.\n\nNo strong objections here. I'm not very concerned about backwards\ncompatibility because there probably aren't many apps besides psql\nthat use oidvectortypes at all. (Heck, we renamed the function in\n7.0, and I don't recall many complaints...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jul 2000 19:22:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Type formatting and oidvectortypes " } ]
[ { "msg_contents": "Hello there,\n\nDoes anyone know of a decent psql-mode for emacs? I am looking at\nsomething like one by Peter D. Pezaris.\n\nThanks.\n\nRichard.\n", "msg_date": "Fri, 07 Jul 2000 21:19:52 +0000", "msg_from": "Richard Nfor <[email protected]>", "msg_from_op": true, "msg_subject": "psql mode for emacs" } ]
[ { "msg_contents": "============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\n\nYour name : Ryan Rawson\nYour email address : [email protected]\n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) :\nIntel Pentium II\n\n Operating System (example: Linux 2.0.26 ELF) :\nLinux 2.2.16 ELF Debian/2.2\n\n PostgreSQL version (example: PostgreSQL-7.1): \n7.0 release 1\n7.0.2\n\n\n\n Compiler used (example: gcc 2.8.0) :\nunknown\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\nI have a database which reliable crashes the database system. When you\ntry to do insert/update on the table 'machines' the backend crashes with\nwhat I think is signal 11.\n\n\n\n\nPlease describe a way to repeat the problem. Please try to provide a\nconcise reproducible example, if at all possible: \n----------------------------------------------------------------------\n\nI'm attaching a file which builds the database which crashes the\nbackend.\n\n\n\nIf you know how this problem might be fixed, list the solution below:\n---------------------------------------------------------------------\nunknown.\n\n\n-ryan\n\n--\nRyan Rawson\nSystem Administrator\nBinary Environments Ltd.\[email protected]", "msg_date": "Fri, 07 Jul 2000 14:30:03 -0700", "msg_from": "ryan <[email protected]>", "msg_from_op": true, "msg_subject": "\"New\" bug?? Serious - crashes backend." }, { "msg_contents": "ryan <[email protected]> writes:\n> I have a database which reliable crashes the database system. 
When you\n> try to do insert/update on the table 'machines' the backend crashes with\n> what I think is signal 11.\n\nThe problem seems to be that you have a foreign-key trigger on\n'machines' which refers to a non-existent primary-key relation:\n\n> CREATE CONSTRAINT TRIGGER \"siteid\" AFTER INSERT OR UPDATE ON\n> \"machines\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE\n> PROCEDURE \"RI_FKey_check_ins\" ('siteid', 'machines', 'site',\n> 'UNSPECIFIED', 'sites', 'siteid');\n\nThe 'site' argument is the name of the referenced table, and you\nhave no table named site.\n\nThere are at least two bugs here: the immediate cause of the crash\nis lack of a check for heap_openr() failure in the RI trigger code,\nbut a larger question is why the system let you drop a table that\nis the target of a referential integrity check (which I assume is\nwhat you did to get into this state).\n\nAnyway, dropping the siteid trigger, as well as any others that\nrefer to gone tables, ought to get you out of trouble for now.\nMeanwhile the foreign-key boys have some work to do ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 23:26:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Foreign key bugs (Re: \"New\" bug?? 
Serious - crashes backend.)" }, { "msg_contents": "Tom Lane wrote:\n>\n> There are at least two bugs here: the immediate cause of the crash\n> is lack of a check for heap_openr() failure in the RI trigger code,\n\n Exactly where is that check missing (if it still is)?\n\n> but a larger question is why the system let you drop a table that\n> is the target of a referential integrity check (which I assume is\n> what you did to get into this state).\n\n For me too.\n\n> Anyway, dropping the siteid trigger, as well as any others that\n> refer to gone tables, ought to get you out of trouble for now.\n> Meanwhile the foreign-key boys have some work to do ...\n\n That's exactly the purpose of pg_trigger.tgconstrrelid, which\n is filled with the opposite relations Oid for constraint\n triggers. In RelationRemoveTriggers(), which is called\n during DROP TABLE, theres a scan for it. That's where the\n\n DROP TABLE implicitly drops referential ...\n\n NOTICE message comes from. So I wonder how he got into that\n state?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Tue, 11 Jul 2000 11:20:19 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Foreign key bugs (Re: \"New\" bug?? 
Serious - crashes\n\tbackend.)" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> Tom Lane wrote:\n>> There are at least two bugs here: the immediate cause of the crash\n>> is lack of a check for heap_openr() failure in the RI trigger code,\n\n> Exactly where is that check missing (if it still is)?\n\nThe heap_openr calls with NoLock --- the way heap_open[r] are set up\nis that there's an elog on open failure iff you request a lock, but\nif you don't then you have to check for a NULL return explicitly.\nPerhaps this coding convention is too error-prone and ought to be\nchanged to have two different routine names, say \"heap_open[r]\"\nand \"heap_open[r]_noerr\". Opinions anyone?\n\nI had a note to myself that ri_triggers' use of NoLock was probably\na bug anyway. Shouldn't it be acquiring *some* kind of lock on the\nreferenced relation? Else someone might be deleting it out from\nunder you.\n\n>> but a larger question is why the system let you drop a table that\n>> is the target of a referential integrity check (which I assume is\n>> what you did to get into this state).\n\n> For me too.\n\nWhat about renaming as opposed to dropping? Since the triggers are set\nup to use names rather than OIDs, seems like they are vulnerable to a\nrename. Maybe they should be using table OIDs in their parameter lists.\n(That'd make pg_dump's life harder however...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jul 2000 11:26:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Foreign key bugs (Re: \"New\" bug?? 
Serious - crashes\n\tbackend.)" }, { "msg_contents": "> [email protected] (Jan Wieck) writes:\n> > Tom Lane wrote:\n> >> There are at least two bugs here: the immediate cause of the crash\n> >> is lack of a check for heap_openr() failure in the RI trigger code,\n> \n> > Exactly where is that check missing (if it still is)?\n> \n> The heap_openr calls with NoLock --- the way heap_open[r] are set up\n> is that there's an elog on open failure iff you request a lock, but\n> if you don't then you have to check for a NULL return explicitly.\n> Perhaps this coding convention is too error-prone and ought to be\n> changed to have two different routine names, say \"heap_open[r]\"\n> and \"heap_open[r]_noerr\". Opinions anyone?\n\nWe already have heap_open and heap_openr. Seems another is too hard. \nBetter to give them a parameter to control it. The API is confusing enough.\n\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jul 2000 11:34:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Foreign key bugs (Re: \"New\" bug?? Serious - crashes\n\tbackend.)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Perhaps this coding convention is too error-prone and ought to be\n>> changed to have two different routine names, say \"heap_open[r]\"\n>> and \"heap_open[r]_noerr\". Opinions anyone?\n\n> We already have heap_open and heap_openr. Seems another is too hard.\n> Better to give them a parameter to control it. The API is confusing\n> enough.\n\nIt is confusing, but we have here graphic evidence that the way it's\ncurrently done is confusing. In a quick check, I found several other\ncases of the same error that must have crept in over the past year or\nso. 
So I'm now convinced that we'd better change the API of these\nroutines to make it crystal-clear whether you are getting a check for\nopen failure or not.\n\nI like a different routine name better than a check-or-no-check\nparameter. If you invoke the no-check case then you *MUST* have a check\nfor failure return --- forgetting to do this is exactly the problem.\nSo I think it should be harder to get at the no-check case, and you\nshould have to write something that reminds you that the routine is not\nchecking for you. Thus \"heap_open_noerr\" (I'm not particularly wedded\nto that suffix, though, if anyone has a better idea for what to call\nit). A parameter would only be useful if the same calling code might\nreasonably do different things at different times --- but either there's\na check following the call, or there's not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jul 2000 12:03:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Foreign key bugs (Re: \"New\" bug?? Serious - crashes\n\tbackend.)" }, { "msg_contents": "> I like a different routine name better than a check-or-no-check\n> parameter. If you invoke the no-check case then you *MUST* have a check\n> for failure return --- forgetting to do this is exactly the problem.\n> So I think it should be harder to get at the no-check case, and you\n> should have to write something that reminds you that the routine is not\n> checking for you. Thus \"heap_open_noerr\" (I'm not particularly wedded\n> to that suffix, though, if anyone has a better idea for what to call\n> it). A parameter would only be useful if the same calling code might\n> reasonably do different things at different times --- but either there's\n> a check following the call, or there's not.\n\nOK.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jul 2000 12:03:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Foreign key bugs (Re: \"New\" bug?? Serious - crashes\n\tbackend.)" }, { "msg_contents": "Tom Lane wrote:\n>\n> >> but a larger question is why the system let you drop a table that\n> >> is the target of a referential integrity check (which I assume is\n> >> what you did to get into this state).\n>\n> > For me too.\n>\n> What about renaming as opposed to dropping? Since the triggers are set\n> up to use names rather than OIDs, seems like they are vulnerable to a\n> rename. Maybe they should be using table OIDs in their parameter lists.\n> (That'd make pg_dump's life harder however...)\n\n That at least shows how he might have gotten there. And yes,\n they need to either keep track of renamings or use OID's.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Tue, 11 Jul 2000 20:47:52 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Foreign key bugs (Re: \"New\" bug?? Serious - crashes\n\tbackend.)" }, { "msg_contents": "> > but a larger question is why the system let you drop a table that\n> > is the target of a referential integrity check (which I assume is\n> > what you did to get into this state).\n>\n> For me too.\n>\n> > Anyway, dropping the siteid trigger, as well as any others that\n> > refer to gone tables, ought to get you out of trouble for now.\n> > Meanwhile the foreign-key boys have some work to do ...\n>\n> That's exactly the purpose of pg_trigger.tgconstrrelid, which\n> is filled with the opposite relations Oid for constraint\n> triggers. 
In RelationRemoveTriggers(), which is called\n> during DROP TABLE, theres a scan for it. That's where the\n>\n> DROP TABLE implicitly drops referential ...\n>\n> NOTICE message comes from. So I wonder how he got into that\n> state?\n\nI don't know in his case, but I think you could get into this state\nfrom a partial restore from pg_dump. If you restore one of the\ntwo tables, and create the constraint trigger for the RI_FKey_check_ins\nbut the other table doesn't really exist, it will crash. I just tried it on\na 7.0.2 system by making a table with an int and then defining the\ncheck_ins trigger manually with create constraint trigger with a bad\nreferenced table.\n\nAlso, I realized something else that is a little wierd. When a temporary\ntable shadows a permanent table that you've made a foreign key reference\nto, which table should it be going to check the constraint?\n\n", "msg_date": "Tue, 11 Jul 2000 12:24:13 -0700", "msg_from": "\"Stephan Szabo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign key bugs (Re: [BUGS] \"New\" bug?? Serious -\n\tcrashesbackend.)" }, { "msg_contents": "Stephan Szabo wrote:\n>\n> Also, I realized something else that is a little wierd. When a temporary\n> table shadows a permanent table that you've made a foreign key reference\n> to, which table should it be going to check the constraint?\n>\n\n Outch - that hurts. Haven't checked it yet, but from what I\n have in memory it should be a possibility to violate\n constraints.\n\n Damned.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Tue, 11 Jul 2000 21:42:26 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Foreign key bugs (Re: [BUGS] \"New\" bug?? 
Serious -\n\tcrashesbackend.)" }, { "msg_contents": "> Stephan Szabo wrote:\n> >\n> > Also, I realized something else that is a little wierd. When a\ntemporary\n> > table shadows a permanent table that you've made a foreign key reference\n> > to, which table should it be going to check the constraint?\n\n> Outch - that hurts. Haven't checked it yet, but from what I\n> have in memory it should be a possibility to violate\n> constraints.\n\nYeah, I realized it when I was going in on AlterTableAddConstraint that I\nneed to prevent constraints referencing temporary tables on permanent\ntables, and then I realized that the shadowing is a problem. Also, this is\na problem for other users too, what about people who log things to\nother tables via rules and triggers? At least you can't shadow the\nsystem catalogs :-)\n\nI think that schemas might help, if you assume that at creation time of a\nrule or constraint you must qualify any tables being used in a way that\nprevents misunderstandings, since temporary tables live in a different\nsystem defined schema assuming that schema.tablename is not\nshadowed, only the unadorned tablename. In the FK case, I think\nthat the idea of moving to keeping oids would probably be the way\nto go (that way the table is very explicitly defined as a particular one).\nNot that this will help right now since I don't think we can make an\nSPI request that will handle it.\n\nOr, you might be able to make a case that you CANNOT shadow a table\nthat is referenced by a constraint (due to the permanent table constraints\ncannot reference a temporary table restriction). Since the creation of\nthe temp table would break the restriction, it is illegal.\n\n-----\n\nIn an unrelated problem. Well, I was thinking, well, maybe we could for\nthis transaction during the execution of the trigger, rename the temp table\nand then rename it back. Noone else should see the change (right?) 
because\nwe're not comitted and that user isn't doing anything else while we're\nchecking.\nHowever, this tickles another problem. It seems that you can't rename a\ntemp\ntable. In fact it does something bad:\n\ncreate table z (a int);\ncreate temp table z (a int);\nalter table z rename to zz;\nselect * from z;\nERROR: relation 'z' does not exist\nselect * from zz;\n- 0 rows\n\\q\n<enter again>\nselect * from z;\nNOTICE: mdopen: couldn't open z: No such file or directory\nNOTICE: RelationIdBuildRelation: smgropen(z): Input/output error\nNOTICE: mdopen: couldn't open z: No such file or directory\nNOTICE: mdopen: couldn't open z: No such file or directory\nNOTICE: mdopen: couldn't open z: No such file or directory\nNOTICE: mdopen: couldn't open z: No such file or directory\nERROR: cannot open relation z\nselect * from zz;\n- 0 rows\n\n\n\n\n", "msg_date": "Tue, 11 Jul 2000 13:44:05 -0700", "msg_from": "\"Stephan Szabo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign key bugs + other problems" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> Tom Lane wrote:\n>> What about renaming as opposed to dropping? Since the triggers are set\n>> up to use names rather than OIDs, seems like they are vulnerable to a\n>> rename. Maybe they should be using table OIDs in their parameter lists.\n>> (That'd make pg_dump's life harder however...)\n\n> That at least shows how he might have gotten there. And yes,\n> they need to either keep track of renamings or use OID's.\n\nI got mail from Ryan earlier admitting that he'd hand-edited a dump file\nand reloaded it, so that was how his triggers got out of sync with the\ntable names. But still, it sounds like we need to fix the RENAME case.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jul 2000 17:22:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Foreign key bugs (Re: \"New\" bug?? 
Serious - crashes\n\tbackend.)" }, { "msg_contents": "Stephan Szabo wrote:\n> > Stephan Szabo wrote:\n> > >\n> > > Also, I realized something else that is a little wierd. When a\n> temporary\n> > > table shadows a permanent table that you've made a foreign key reference\n> > > to, which table should it be going to check the constraint?\n>\n> > Outch - that hurts. Haven't checked it yet, but from what I\n> > have in memory it should be a possibility to violate\n> > constraints.\n>\n> Yeah, I realized it when I was going in on AlterTableAddConstraint that I\n> need to prevent constraints referencing temporary tables on permanent\n> tables, and then I realized that the shadowing is a problem. Also, this is\n> a problem for other users too, what about people who log things to\n> other tables via rules and triggers? At least you can't shadow the\n> system catalogs :-)\n\n I think triggers are in general problematic in this context.\n They usually use SPI and name the tables in the querystring.\n If a trigger uses saved plans (as RI does as much as\n possible), the problem is gone if the trigger has been\n invoked once and prepared all the plans. But if it's invoked\n the first time while a temp table exists, it'll do the wrong\n things and save the wrong plan for future invocations.\n\n Rules aren't affected, because they refer to tables by OID\n allways.\n\n> [...]\n>\n> Or, you might be able to make a case that you CANNOT shadow a table\n> that is referenced by a constraint (due to the permanent table constraints\n> cannot reference a temporary table restriction). Since the creation of\n> the temp table would break the restriction, it is illegal.\n\n Good point. What does the standard say about it? Can a table,\n referred to by foreign key constraints or referential\n actions, be shadowed by a temp table?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. 
#\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Tue, 11 Jul 2000 23:30:06 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Foreign key bugs + other problems" }, { "msg_contents": "\"Stephan Szabo\" <[email protected]> writes:\n> Also, I realized something else that is a little wierd. When a temporary\n> table shadows a permanent table that you've made a foreign key reference\n> to, which table should it be going to check the constraint?\n\nSeems to me it should certainly be going to the permanent table, which\nis another argument in favor of making the link via OID not table name.\nThe existing code will get this wrong.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jul 2000 17:39:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign key bugs (Re: [BUGS] \"New\" bug?? Serious -\n\tcrashesbackend.)" }, { "msg_contents": "> > Or, you might be able to make a case that you CANNOT shadow a table\n> > that is referenced by a constraint (due to the permanent table constraints\n> > cannot reference a temporary table restriction). Since the creation of\n> > the temp table would break the restriction, it is illegal.\n> \n> Good point. What does the standard say about it? Can a table,\n> referred to by foreign key constraints or referential\n> actions, be shadowed by a temp table?\n\nMaybe we should prevent shadowing of a table that is used as a foreign\nkey.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jul 2000 17:47:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign key bugs + other problems" }, { "msg_contents": "> Rules aren't affected, because they refer to tables by OID\n> always.\nAh, that's good. I hadn't tried that variation because I don't really\nuse rules that much other than select rules for views.\n\n> > Or, you might be able to make a case that you CANNOT shadow a table\n> > that is referenced by a constraint (due to the permanent table\nconstraints\n> > cannot reference a temporary table restriction). Since the creation of\n> > the temp table would break the restriction, it is illegal.\n>\n> Good point. What does the standard say about it? Can a table,\n> referred to by foreign key constraints or referential\n> actions, be shadowed by a temp table?\nWell, that's the question. I don't see anything in the spec saying that it\ncan't precisely. It's a question of wording on the rules about the\nconstraint.\n\n11.8 Syntax rule 4 Case a)\nIf the referencing table is a permanent base table, then the referenced\ntable\nshall be a persistant base table.\n\nSo, if you shadowed the table by a temp table, would this no longer be true\n(and therefore the create is illegal) or are you supposed to imply that you\nstill reference the persistant base table despite the fact it is shadowed?\nI'd\nguess the latter because it's a syntax rule (I had thought it was a general\nrule until I went to look it up).\n\n\n", "msg_date": "Tue, 11 Jul 2000 15:00:10 -0700", "msg_from": "\"Stephan Szabo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign key bugs + other problems" }, { "msg_contents": "\"Stephan Szabo\" <[email protected]> writes:\n> However, this tickles another problem. It seems that you can't rename\n> a temp table. In fact it does something bad:\n\nAre you using current CVS? 
I thought I'd fixed that recently.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jul 2000 18:09:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign key bugs + other problems " }, { "msg_contents": "Tom Lane wrote:\n> \"Stephan Szabo\" <[email protected]> writes:\n> > Also, I realized something else that is a little wierd. When a temporary\n> > table shadows a permanent table that you've made a foreign key reference\n> > to, which table should it be going to check the constraint?\n>\n> Seems to me it should certainly be going to the permanent table, which\n> is another argument in favor of making the link via OID not table name.\n> The existing code will get this wrong.\n\n But even if the trigger knows the OID of the table to query,\n can it prepare a plan to do so via SPI? I think no.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Wed, 12 Jul 2000 00:18:20 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Foreign key bugs (Re: [BUGS] \"New\" bug?? Serious -\n\tcrashesbackend.)" }, { "msg_contents": "> \"Stephan Szabo\" <[email protected]> writes:\n> > Also, I realized something else that is a little wierd. 
When a\ntemporary\n> > table shadows a permanent table that you've made a foreign key reference\n> > to, which table should it be going to check the constraint?\n>\n> Seems to me it should certainly be going to the permanent table, which\n> is another argument in favor of making the link via OID not table name.\n> The existing code will get this wrong.\n\nCan I force the SPI query that's being generated to use the permanent\ntable rather than the shadowed table when they have the same name?\nIf not, then storing the oid isn't sufficient without moving away from\nSPI. I do agree that storing the oids is a good idea (and am planning to\nchange it unless someone comes up with a compelling reason not to)\nsince the only way via something like SPI that I can think of is once\nwe have schemas, using schemaname.tablename which may not be\nshadowed by the temp table and it'll just be easier for everyone\ninvolved if we store the oid.\n\n", "msg_date": "Tue, 11 Jul 2000 15:20:56 -0700", "msg_from": "\"Stephan Szabo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign key bugs (Re: [BUGS] \"New\" bug?? Serious -\n\tcrashesbackend.)" }, { "msg_contents": "The machine I tried it on is only a 7.0.2 system. My CVS system\nis at home on a unconnected dialup, so I can't try it from work. \nI'll try it when I get back tonight.\n\n\n> \"Stephan Szabo\" <[email protected]> writes:\n> > However, this tickles another problem. It seems that you can't rename\n> > a temp table. In fact it does something bad:\n> \n> Are you using current CVS? 
I thought I'd fixed that recently.\n\n\n", "msg_date": "Tue, 11 Jul 2000 15:24:50 -0700", "msg_from": "\"Stephan Szabo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign key bugs + other problems " }, { "msg_contents": "Bruce Momjian wrote:\n> > > Or, you might be able to make a case that you CANNOT shadow a table\n> > > that is referenced by a constraint (due to the permanent table constraints\n> > > cannot reference a temporary table restriction). Since the creation of\n> > > the temp table would break the restriction, it is illegal.\n> >\n> > Good point. What does the standard say about it? Can a table,\n> > referred to by foreign key constraints or referential\n> > actions, be shadowed by a temp table?\n>\n> Maybe we should prevent shadowing of a table that is used as a foreign\n> key.\n\n And open another DOS attack possibility? Each user could\n potentially create a permanent table of a name used as a temp\n table somewhere inside of an application. Then create another\n referencing it and boom - the application goes down. Not that\n good IMHO.\n\n I think we need to hand OID's to the RI triggers and have\n some special syntax to tell the parser that a table or\n attribute is named by OID in the actual query.\n\n Will cause alot of headaches for the next pg_dump/restore\n cycle.\n\n (Bruce: still waiting for your call)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n\n\n", "msg_date": "Wed, 12 Jul 2000 00:30:28 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Foreign key bugs + other problems" }, { "msg_contents": "Jan Wieck wrote:\n> > Maybe we should prevent shadowing of a table that is used as a foreign\n> > key.\n> \n> And open another DOS attack possibility? Each user could\n> potentially create a permanent table of a name used as a temp\n> table somewhere inside of an application. Then create another\n> referencing it and boom - the application goes down. Not that\n> good IMHO.\n> \n> I think we need to hand OID's to the RI triggers and have\n> some special syntax to tell the parser that a table or\n> attribute is named by OID in the actual query.\n> \n> Will cause alot of headaches for the next pg_dump/restore\n> cycle.\n> \n> (Bruce: still waiting for your call)\n> \n> Jan\n\nJust curious, but couldn't this also be done in the TRUNCATE\nTABLE case? Where the consensus was to disallow TRUNCATE TABLE on\na table whose attribute was the foreign key of another relation?\nIt seems that proper implementation of SCHEMA and a finer\ngranularity of GRANT/REVOKE is need to address these issues.\n\nMike Mascari\n", "msg_date": "Wed, 12 Jul 2000 03:36:19 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign key bugs + other problems" } ]
[ { "msg_contents": "When using psql's \\dd command, the backend crashes with a segfault.\n\n\n#0 pglz_decompress (source=0x4034aeb0, dest=0x4039e024 \"\") at\npg_lzcompress.c:724\n724 *bp++ = *dp++;\n#1 0x8074526 in heap_tuple_untoast_attr (attr=0x4034aeb0) at\ntuptoaster.c:123\n#2 0x8139a32 in pg_detoast_datum (datum=0x4034aeb0) at fmgr.c:1297\n#3 0x8126d31 in text_lt (fcinfo=0xbfffec04) at varlena.c:553\n#4 0x81390b0 in FunctionCall2 (flinfo=0x402bde30, arg1=1077194416,\narg2=1077171544)\n at fmgr.c:722\n#5 0x81413e6 in comparetup_heap (state=0x403463e0, a=0x4034ae78,\nb=0x40345520)\n at tuplesort.c:1719\n#6 0x8140f27 in qsort_comparetup (a=0x403465e8, b=0x403465ec) at\ntuplesort.c:1673\n#7 0x400f2fb0 in msort_with_tmp (b=0x403465e8, n=2, s=4, cmp=0x8140f0c\n<qsort_comparetup>,\n t=0xbfffedcc \"@'\\034\\b\") at msort.c:58\n#8 0x400f2f64 in msort_with_tmp (b=0x403465e8, n=5, s=4, cmp=0x8140f0c\n<qsort_comparetup>,\n t=0xbfffedcc \"@'\\034\\b\") at msort.c:49\n#9 0x400f2f64 in msort_with_tmp (b=0x403465e8, n=11, s=4, cmp=0x8140f0c\n<qsort_comparetup>,\n t=0xbfffedcc \"@'\\034\\b\") at msort.c:49\n#10 0x400f2f64 in msort_with_tmp (b=0x403465e8, n=23, s=4, cmp=0x8140f0c\n<qsort_comparetup>,\n t=0xbfffedcc \"@'\\034\\b\") at msort.c:49\n#11 0x400f2f64 in msort_with_tmp (b=0x403465e8, n=46, s=4, cmp=0x8140f0c\n<qsort_comparetup>,\n t=0xbfffedcc \"@'\\034\\b\") at msort.c:49\n#12 0x400f30d0 in qsort (b=0x403465e8, n=46, s=4, cmp=0x8140f0c\n<qsort_comparetup>)\n at msort.c:102\n#13 0x813fc70 in tuplesort_performsort (state=0x403463e0) at tuplesort.c:699\n#14 0x80b4385 in ExecSort (node=0x8264c10) at nodeSort.c:182\n#15 0x80ad816 in ExecProcNode (node=0x8264c10, parent=0x82646c8) at execProcnode.c:296\n#16 0x80b45be in ExecUnique (node=0x82646c8) at nodeUnique.c:71\n#17 0x80ad81e in ExecProcNode (node=0x82646c8, parent=0x8261248) at execProcnode.c:300\n#18 0x80b0aee in ExecProcAppend (node=0x8261248) at nodeAppend.c:422\n#19 0x80ad7d9 in ExecProcNode (node=0x8261248, 
parent=0x8260f80) at execProcnode.c:260\n#20 0x80b4370 in ExecSort (node=0x8260f80) at nodeSort.c:169\n#21 0x80ad816 in ExecProcNode (node=0x8260f80, parent=0x8260f80) at execProcnode.c:296\n#22 0x80ac7f4 in ExecutePlan (estate=0x826be70, plan=0x8260f80,\noperation=CMD_SELECT,\n offsetTuples=0, numberTuples=0, direction=ForwardScanDirection,\ndestfunc=0x402c0060)\n at execMain.c:1044\n#23 0x80abd88 in ExecutorRun (queryDesc=0x826be58, estate=0x826be70,\nfeature=3, limoffset=0x0,\n limcount=0x0) at execMain.c:321\n#24 0x81012f1 in ProcessQuery (parsetree=0x823a190, plan=0x8260f80,\ndest=Remote)\n at pquery.c:292\n#25 0x8100044 in pg_exec_query_dest (\n query_string=0x821a0f8 \"SELECT DISTINCT a.aggname as \\\"Name\\\",\n'aggregate'::text as \\\"Object\\\", d.description as \\\"Description\\\"\\nFROM\npg_aggregate a, pg_description d\\nWHERE a.oid = d.objoid\\n\\nUNION\nALL\\n\\nSELECT DISTINCT p.proname as\"..., dest=Remote,\nparse_context=0x81c28d8)\n at postgres.c:665\n#26 0x8100dbe in PostgresMain (argc=4, argv=0xbffff274, real_argc=3,\nreal_argv=0xbffffb34)\n at postgres.c:1594\n#27 0x80e7f62 in DoBackend (port=0x81cf8b0) at postmaster.c:1962\n#28 0x80e7af1 in BackendStartup (port=0x81cf8b0) at postmaster.c:1735\n#29 0x80e6e1a in ServerLoop () at postmaster.c:980\n#30 0x80e68b4 in PostmasterMain (argc=3, argv=0xbffffb34) at postmaster.c:673\n#31 0x80bcaf8 in main (argc=3, argv=0xbffffb34) at main.c:97\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 8 Jul 2000 02:09:57 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Something's not (de)compressing right..." }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> When using psql's \\dd command, the backend crashes with a segfault.\n\nI get an Assert failure because heap_tuple_untoast_attr() tries to\nallocate a ridiculous amount of memory. 
It looks like it's being\nhanded a datum pointer that's pointing at plain text, not a Datum.\nI don't think it's directly the untoaster's fault ... something's\ndropping the ball somewhere else...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jul 2000 22:48:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Something's not (de)compressing right... " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> When using psql's \\dd command, the backend crashes with a segfault.\n\nThe new query \\dd is issuing is tickling a longstanding UNION bug.\nFor now I'd suggest that you work around the problem by explicitly\ncoercing the result of format_type() to NAME:\n\n...\nUNION ALL\nSELECT DISTINCT format_type(t.oid, NULL)::name as \"Name\", 'type'::text as \"Object\", d.description as \"Description\"\n...\n\nThe crash occurs because text_lt is used to sort a column of NAME values\n--- the result of format_type is coerced to NAME so that it can be\nunion'd with the NAME results of the other sub-selects, but by the time\nthat happens we've already chosen the sort operator for the DISTINCT,\nand what we chose was text_lt :-(. If you do the coercion explicitly\nthen name_lt gets chosen for DISTINCT and everything works.\n\nActually, given that the result of format_type might well exceed 32\ncharacters, you might think it better to coerce the results of all the\nsub-selects to \"text\". But the point is you can't rely on UNION's auto-\ncoercion to do the right thing when a sub-select requires its own sort.\n\nI have a list of about two dozen UNION/INTERSECT/EXCEPT bugs (including\nthis one) that I don't think can be fixed without a querytree redesign.\nSo hold your nose and coerce explicitly until 7.2 ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jul 2000 00:30:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Something's not (de)compressing right... " } ]
[ { "msg_contents": "> hi,\n> \n> threre are a postgresql/mysql comparative.\n> You can get something for the TODO:\n> \n> http://www.phpbuilder.com/columns/tim20000705.php3?page=1\n> \n> regards,\n> \n\nThanks. Yes, I have added to the TODO list:\n\n\t* Add function to return primary key value on INSERT\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Jul 2000 09:14:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres TODO" }, { "msg_contents": "At 09:14 8/07/00 -0400, Bruce Momjian wrote:\n>> hi,\n>> \n>> threre are a postgresql/mysql comparative.\n>> You can get something for the TODO:\n>> \n>> http://www.phpbuilder.com/columns/tim20000705.php3?page=1\n>> \n>> regards,\n>> \n>\n>Thanks. Yes, I have added to the TODO list:\n>\n>\t* Add function to return primary key value on INSERT\n\nI had a look at the page and could not see the reference, so this\nsuggestion may be inappropriate, but...\n\nHow about something more general - an incredibly useful feature of Dec/Rdb is:\n\n insert into t1(...) values(...) returning attr-list\n\nwhich is like performing a select directly after the insert. The same kind\nof syntax applies to updates as well, eg.\n\n update t1 set f1 = 2 where <stuff> returning f1, f2, f3;\n\nPerhaps your original suggestion is a lot easier, but this is a convenient\nfeature...\n\n \n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 08 Jul 2000 23:41:30 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: postgres TODO" }, { "msg_contents": "Bruce Momjian writes:\n\n> \t* Add function to return primary key value on INSERT\n\nI don't get the point of this. Don't you know what you inserted? For\nsequences there's curval(), for oids there's PQoidValue().\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 8 Jul 2000 16:31:27 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: postgres TODO" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Bruce Momjian writes:\n> \n> > \t* Add function to return primary key value on INSERT\n> \n> I don't get the point of this. Don't you know what you inserted? For\n> sequences there's curval(), for oids there's PQoidValue().\n\nYes, item removed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Jul 2000 11:04:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: postgres TODO" }, { "msg_contents": "Peter Eisentraut wrote:\n\n> Bruce Momjian writes:\n> \n> > * Add function to return primary key value on INSERT\n> \n> I don't get the point of this. Don't you know what you inserted? For\n> sequences there's curval()\n\nMmmhhh... it means that we can assume no update to the sequence value\nbetween the insert and the curval selection?\n\n-- \nAlessio F. 
Bragadini\t\[email protected]\nAPL Financial Services\t\thttp://www.sevenseas.org/~alessio\nNicosia, Cyprus\t\t \tphone: +357-2-750652\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n", "msg_date": "Mon, 10 Jul 2000 16:00:46 +0300", "msg_from": "Alessio Bragadini <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: postgres TODO" }, { "msg_contents": "On Mon, 10 Jul 2000, Alessio Bragadini wrote:\n\n> > I don't get the point of this. Don't you know what you inserted? For\n> > sequences there's curval()\n> \n> Mmmhhh... it means that we can assume no update to the sequence value\n> between the insert and the curval selection?\n\nSequences are transaction safe.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 10 Jul 2000 09:14:21 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Re: postgres TODO" }, { "msg_contents": "> Peter Eisentraut wrote:\n> \n> > Bruce Momjian writes:\n> > \n> > > * Add function to return primary key value on INSERT\n> > \n> > I don't get the point of this. Don't you know what you inserted? For\n> > sequences there's curval()\n> \n> Mmmhhh... it means that we can assume no update to the sequence value\n> between the insert and the curval selection?\n\nNo curval() is per-backend value that is not affected by other users. \nMy book has a mention of that issue, and so does the FAQ.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 09:25:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: postgres TODO" }, { "msg_contents": "At 09:14 10/07/00 -0400, [email protected] wrote:\n>On Mon, 10 Jul 2000, Alessio Bragadini wrote:\n>\n>> > I don't get the point of this. Don't you know what you inserted? For\n>> > sequences there's curval()\n>> \n>> Mmmhhh... it means that we can assume no update to the sequence value\n>> between the insert and the curval selection?\n>\n>Sequences are transaction safe.\n>\n\nReally? I thought I read somewhere that they did not rollback so that\nlocking could be avoided, hence they would not be a major source of\ncontention. If that is true, it does seem to imply that they can be updated\nby other processes (Otherwise they would present a locking problem). Or do\nyou mean that they maintain a 'curval' that was the last value use in the\ncurrent TX?\n\nEither way it's still not a help, consider:\n\ncreate table t1(f1 int4, f2 text);\n\ncreate trigger t1_ir_tg1 after insert on t1\n(\n insert into t1_audit(t1.id, nextval('id'), \"Row created\");\n) for each row;\n\ninsert into t1(nextval('id'), \"my main row\");\n\nNot necessarily a real case, and fixed by using two sequences. But with a\nmore complex set of triggers or rules, there is a real chance of stepping\non curval().\n\nHow hard would it be to implement:\n\n insert into t1(nextval('id'), \"my main row\") returning f1, f2;\n\nor similar?\n\n[in the above case, the insert statement should be identical to:\n\n insert into t1(nextval('id'), \"my main row\") returning f1, f2;\n select f1, f2 from t1 where oid=<new row oid>\n]\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 10 Jul 2000 23:32:17 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: postgres TODO" }, { "msg_contents": "Thus spake Alessio Bragadini\n> > > * Add function to return primary key value on INSERT\n> > \n> > I don't get the point of this. Don't you know what you inserted? For\n> > sequences there's curval()\n> \n> Mmmhhh... it means that we can assume no update to the sequence value\n> between the insert and the curval selection?\n\nWe can within one connection so this is safe but there are other problems\nwhich I am not sure would be solved by this anyway. With rules, triggers\nand defaults there are often changes to the row between the insert and the\nvalues that hit the backing store. This is a general problem of which\nthe primary key is only one example.\n\nIn fact, the OID of the new row is returned so what stops one from just\nusing it to get any information required. This is exactly what PyGreSQL\ndoes in its insert method. After returning, the dictionary used to store\nthe fields for the row have been updated with the actual contents of the\nrow in the database. It simply does a \"SELECT *\" using the new OID to\nget the row back.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Mon, 10 Jul 2000 09:52:21 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. 
Cain)", "msg_from_op": false, "msg_subject": "Re: Re: postgres TODO" }, { "msg_contents": "Alessio Bragadini <[email protected]> writes:\n> Peter Eisentraut wrote:\n>>>> * Add function to return primary key value on INSERT\n>> \n>> I don't get the point of this. Don't you know what you inserted? For\n>> sequences there's curval()\n\n> Mmmhhh... it means that we can assume no update to the sequence value\n> between the insert and the curval selection?\n\nYes, we can --- currval is defined to tell you the last sequence value\nallocated *in this backend*.\n\nActually you could still get burnt if you had a sufficiently complicated\nset of rules and triggers ... there could be another update of the\nsequence induced by one of your own triggers, and if you forget to allow\nfor that you'd have a problem. But you don't have to worry about other\nbackends.\n\nHowever, I still prefer the SELECT nextval() followed by INSERT approach\nover INSERT followed by SELECT currval(). It just feels cleaner.\n\n\nTo get back to Peter's original question, you don't necessarily \"know\nwhat you inserted\" if you allow columns to be filled with default values\nthat are calculated by complicated functions. A serial column is just\nthe simplest example of that. Whether this situation is common enough\nto justify a special hack in INSERT is another question. I kinda doubt\nit. We already return the OID which is sufficient info to select the\nrow again if you need it. Returning the primary key would be\nconsiderably more work for no visible gain in functionality...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 11:14:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: postgres TODO " }, { "msg_contents": "Tom Lane wrote:\n> \n> Alessio Bragadini <[email protected]> writes:\n> > Peter Eisentraut wrote:\n> >>>> * Add function to return primary key value on INSERT\n> >>\n> >> I don't get the point of this. 
Don't you know what you inserted? For\n> >> sequences there's curval()\n> \n> To get back to Peter's original question, you don't necessarily \"know\n> what you inserted\" if you allow columns to be filled with default values\n> that are calculated by complicated functions. A serial column is just\n> the simplest example of that. Whether this situation is common enough\n> to justify a special hack in INSERT is another question. I kinda doubt\n> it. We already return the OID which is sufficient info to select the\n> row again if you need it. Returning the primary key would be\n> considerably more work for no visible gain in functionality...\n\nIt's definitely not a crucial functionality gain, IMO, but it is\nnonetheless a gain when you consider that *every* pgsql developer on \nthe planet could then do something in one query that currently takes \ntwo (plus the requisite error-handling code). A few other counter-\narguments for returning the autoincrement/serial/pkey:\n\n\t1) it earns bad press w/r/t usability;\n\t2) it is an FAQ on the lists;\n\t3) it is an extremely common operation;\n\t4) other DBs provide it;\n\nRegards,\nEd Loehr\n", "msg_date": "Mon, 10 Jul 2000 10:35:50 -0500", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: postgres TODO" }, { "msg_contents": "Tom Lane wrote:\n> it. We already return the OID which is sufficient info to select the\n> row again if you need it. Returning the primary key would be\n> considerably more work for no visible gain in functionality...\n\nBut OID is not available for views. I have already run into\nthis situation. I have a view which is a join across 3 tables.\ntwo of the underlying tables have serial fields as primary keys.\n\nINSERT ... RETURNING ... 
would be very nice indeed.\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n", "msg_date": "Mon, 10 Jul 2000 13:17:29 -0400", "msg_from": "\"Mark Hollomon\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: postgres TODO" }, { "msg_contents": "On Mon, 10 Jul 2000, Tom Lane wrote:\n\n> However, I still prefer the SELECT nextval() followed by INSERT approach\n> over INSERT followed by SELECT currval(). It just feels cleaner.\n\nJust an aside. We use a system similar to MySQL's \"auto_increment\" system to\nget the value. What we do is have a function that will return CURRVAL of the\nfirst defaulted int4 column of the table in question. This query gets the\ndefault clause:\n\nSELECT d.adsrc, a.attnum, a.attname\nFROM pg_class c, pg_attribute a, pg_attrdef d, pg_type t\nWHERE c.relname = ?\n AND a.attnum > 0\n AND a.attrelid = c.oid\n AND d.adrelid = c.oid\n AND a.atthasdef = true\n AND d.adnum = a.attnum\n AND a.atttypid = t.oid\n AND t.typname = 'int4'\nORDER BY a.attnum\nLIMIT 1\n\nThen we just pull out the part in the nextval('.....') and return the currval\nof that string. Works like a charm. This is done in Perl, so when we need the\nlast insert id, we just call:\n\n$id = get_insert_id($dbh, $table);\n\nAnyways, it's easy enough to get at the information this way without making your\napplication depend on OID values. Yes, you might still get burnt by triggers.\nI am not sure if there is an easy solution to that.\n\nMike\n\n", "msg_date": "Tue, 11 Jul 2000 09:07:20 -0500 (CDT)", "msg_from": "Michael J Schout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: postgres TODO " }, { "msg_contents": "At 09:07 11/07/00 -0500, Michael J Schout wrote:\n>\n>Anyways, it's easy enough to get at the information this way without making\nyour\n>application depend on OID values. 
Yes, you might still get bunt by triggers.\n>I am not sure if there is an easy solution to that.\n>\n\nWell, not wanting to sound too much like a broken record, \n\n insert...returning... \n\nwould seem to fix the problem.\n\nIs there some obvious (to anyone who knows something about pg internals)\nreason why this is *not* a good idea?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 12 Jul 2000 00:21:22 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: postgres TODO " }, { "msg_contents": "Tom Lane wrote:\n> \n> However, I still prefer the SELECT nextval() followed by INSERT approach\n> over INSERT followed by SELECT currval(). It just feels cleaner.\n\nThis is the way I have been doing it, so I'm pleased to see you\nendorsing it :-)\n\nWhat I don't like about this way though is that I have to (A) do two\nstatements and (B) set up the permissions on my sequence as well as on\nmy table. If I could just get the inserted tuple back somehow it would\ndefinitely simplify my application.\n\n\n> To get back to Peter's original question, you don't necessarily \"know\n> what you inserted\" if you allow columns to be filled with default values\n> that are calculated by complicated functions. A serial column is just\n> the simplest example of that. Whether this situation is common enough\n> to justify a special hack in INSERT is another question. I kinda doubt\n> it. We already return the OID which is sufficient info to select the\n> row again if you need it. 
Returning the primary key would be\n> considerably more work for no visible gain in functionality...\n\nFor some reason I find almost every situation in which I INSERT with a\nSERIAL I want to provide user feedback that includes that allocated\nSERIAL. The use of primary keys is not restricted purely to in-database\nstorage - they can get transferred into people's brains and e-mailed\naround the place and so on.\n\nGetting that back from an INSERT would definitely be useful to me.\n\nThanks,\n\t\t\t\t\tAndrew.\n\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: [email protected]\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n", "msg_date": "Wed, 12 Jul 2000 09:10:51 +1200", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: postgres TODO" }, { "msg_contents": "At 00:21 12/07/00 +1000, Philip Warner wrote:\n>\n>Well, not wanting to sound too much like a broken record, \n>\n> insert...returning... \n>\n>would seem to fix the problem.\n>\n>Is there some obvious (to anyone who knows something about pg internals)\n>reason why this is *not* a good idea?\n>\n\nPutting this another way, does anyone object to this being implemented, *at\nleast* in the case of single row updates?\n\nSecondly, can anyone suggest likely problems that would occur in a naieve\n'do a select after an insert' or 'keep a list of affected oids' approach?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 12 Jul 2000 09:54:52 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Insert..returning (was Re: Re: postgres TODO)" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n>> Is there some obvious (to anyone who knows something about pg internals)\n>> reason why this is *not* a good idea?\n\n> Putting this another way, does anyone object to this being implemented, *at\n> least* in the case of single row updates?\n\nProvide a specification *first*. What exactly do you expect to do,\nand how will the code behave in the case of multiple affected rows,\nzero affected rows, same row affected multiple times (possible with\na joined UPDATE), inherited UPDATE that affects rows in multiple tables,\ninserts/updates that are suppressed or redirected or turned into\nmultiple operations (possibly on multiple tables) by rules or triggers,\netc etc? Not to mention the juicy topics of access permissions and\npossible errors. Also, how will this affect the frontend/backend\nprotocol and what are the risks of breaking existing frontend code?\nFinally, how does your spec compare to similar features in other DBMSs?\n\nI don't have any fundamental objection to it given a well-thought-out\nspecification and implementation ... but I don't want to find us stuck\nwith supporting a half-baked nonstandard feature. 
We have quite enough\nof those already ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jul 2000 21:28:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert..returning (was Re: Re: postgres TODO) " }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Tom Lane\n> \n> it. We already return the OID which is sufficient info to select the\n> row again if you need it. Returning the primary key would be\n> considerably more work for no visible gain in functionality...\n>\n\nIs OID really sufficient?\nI've wondered why people love OID so much.\nPostgreSQL provides no specific access method using OID.\nWe couldn't assume that every table has its OID index,\nwhen we need to handle general resultsets.\nIn fact, I've never created OID indexes on user tables.\n\nI've forgotten to propose that INSERT returns TID together\nwith OID before 7.0. This has been in my mind since\nI planned to implement Tid scan. Different from OID,\nTID has its specific (fast) access method now.\n\nComments?\n\nRegards.\n\nHiroshi Inoue\n", "msg_date": "Wed, 12 Jul 2000 10:55:42 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Re: postgres TODO " }, { "msg_contents": "At 21:28 11/07/00 -0400, Tom Lane wrote:\n>\n>> Putting this another way, does anyone object to this being implemented, *at\n>> least* in the case of single row updates?\n>\n>Provide a specification *first*.\n\nPicky, picky, picky...\n\nThe basic philosophy would be:\n\n insert into x ... returning f1, f2\n\nshould produce the same results as:\n\n insert into x ...\n select f1,f2 from x where x.oid in (oid's of affected rows).\n\nSo the returned fields must be in the target table, and multiple rows could\nbe returned. \n\nThe only commercial DB that implements this kind of behaviour does it on\nupdate only, and restricts it to updates that only affect one row. 
As a\nfirst pass, it would satisfy 99.9% of users' needs to only allow this\nfeature on inserts & updates that affected one row.\n\n\n> What exactly do you expect to do,\n>and how will the code behave in the case of multiple affected rows,\n\nIdeally, return the rowset as per a select. But as above, it might be a lot\nsimpler to raise an error if more than one row is affected.\n\n\n>zero affected rows,\n\nDo whatever 'update' does with zero rows affected.\n\n\n> same row affected multiple times (possible with\n>a joined UPDATE),\n\nReturn the most recent version of the row.\n\n\n> inherited UPDATE that affects rows in multiple tables,\n\nI don't know much about inherited stuff, which is why I posted the original\nquestion about non-trivial problems with the implementation.\n\nIn this case I would say it should fall back on trying to reproduce the\nbehaviour of an 'insert into x*' followed by a 'select ... from x*'\n\n\n>inserts/updates that are suppressed or redirected or turned into\n>multiple operations (possibly on multiple tables) by rules or triggers,\n>etc etc?\n\nThis is why I mentioned the 'maintain a list of affected oids' option; it\nshould only return rows of the target table that were affected by the\nstatement. When I do an 'insert' statement in psql, it reports the number\nof rows inserted: whatever is used to show this number should be used to\ndetermine the rows returned.\n\n\n> Not to mention the juicy topics of access permissions and\n>possible errors.\n\nCan't one fall back here on the 'insert followed by select' analogy? Or is\nthere a specific example that you have in mind?\n\n\n>Also, how will this affect the frontend/backend\n>protocol and what are the risks of breaking existing frontend code?\n\nI have absolutely no idea - hence why I asked what people who knew PG\nthought of the suggestion. \n\nI had naively assumed that the fe would pass a query off to the be, and\nhandle the result based on what the be tells it to do. i.e. 
I assumed that\nthe fe would not know that it was passing an 'insert' statement, and\ntherefor would not die when it got a set of tuples returned.\n\n\n>Finally, how does your spec compare to similar features in other DBMSs?\n\nSee above.\n\n\n>I don't have any fundamental objection to it given a well-thought-out\n>specification and implementation ... but I don't want to find us stuck\n>with supporting a half-baked nonstandard feature. We have quite enough\n>of those already ;-)\n\nAnd I'm happy to do the leg work, if we can come to a design that people\nwho understand pg internals think will (a) not involve rewriting half of\npg, and (b) be clear, concise and easily supportable.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 12 Jul 2000 12:15:09 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert..returning (was Re: Re: postgres TODO) " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> I've forgotten to propose that INSERT returns TID together\n> with OID before 7.0. This has been in my mind since\n> I planned to implement Tid scan. Different from OID\n> ,TID has its specific (fast) access method now.\n\nCouple of thoughts here ---\n\n* OID is (nominally) unique across tables. TID is not. This is a\nserious point in the presence of inheritance. I'd like to see the\nreturn be table OID plus TID if we are going to rely on TID.\n\n* TID identification of a row does not survive VACUUM, does it?\nSo you'd have to assume a vacuum didn't happen in between. Seems a\nlittle risky. Vadim's overwriting smgr would make this issue a lot\nworse. 
Might be OK in certain application contexts, but I wouldn't\nwant to encourage people to use it without thinking.\n\n* I don't see any way to add TID (or table OID) to the default return\ndata without changing the fe/be protocol and breaking a lot of existing\nclient code.\n\nPhilip's INSERT ... RETURNING idea could support returning TID and\ntable OID as a special case, and it has the saving grace that it\nwon't affect apps that don't use it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jul 2000 23:05:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: postgres TODO " }, { "msg_contents": "At 12:15 12/07/00 +1000, Philip Warner wrote:\n>\n>The only commercial DB that implements this kind of behaviour does it on\n>update only, and restricts it to updates that only affect one row. As a\n>first pass, it would satisfy 99.9% of users needs to only allow this\n>feature on inserts & updates that affected one row.\n>\n\nThe more I think about this, the more I think they probably had a good\nreason for doing it. The cleanest solution seems to be that updates &\ninserts affecting more than one row should produce an error.\n\nI'd be very interested in how people think rules and triggers should be\nhandled.\n\nMy initial inclination is that if a trigger prevents the insert, then it is\nthe responsibility of the programmer to check the number of rows affected\nafter the update (the returned fields would either not exist, or be null).\n\nIf a rule rewrites the insert as an insert into another table, then I am\nnot sure what is best: either raise an error, or return the fields from the\n*real* target table. I *think* I prefer raising an error, since any other\nbehaviour could be very confusing.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 12 Jul 2000 13:22:40 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert..returning (was Re: Re: postgres TODO) " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > I've forgotten to propose that INSERT returns TID together\n> > with OID before 7.0. This has been in my mind since\n> > I planned to implement Tid scan. Different from OID\n> > ,TID has its specific (fast) access method now.\n> \n> Couple of thoughts here ---\n> \n> * OID is (nominally) unique across tables. TID is not. This is a\n> serious point in the presence of inheritance. I'd like to see the\n> return be table OID plus TID if we are going to rely on TID.\n> \n> * TID identification of a row does not survive VACUUM, does it?\n> So you'd have to assume a vacuum didn't happen in between. Seems a\n> little risky. Vadim's overwriting smgr would make this issue a lot\n> worse. Might be OK in certain application contexts, but I wouldn't\n> want to encourage people to use it without thinking.\n>\n\nVACUUM would invalidate keeped TIDs. Even OIDs couldn't\nsurvive 'drop and create table'.\nSo I would keep [relid],oid,tid-s for fetched rows and reload\nthe rows using tids (and [relid]). 
If the OID != kept OID, then\nI would refresh the resultset entirely.\n\nBTW, wouldn't TIDs be more stable under overwriting smgr?\nUnfortunately TIDs are transient under current no overwrite\nsmgr and need to follow update chain of tuples.\n \n> * I don't see any way to add TID (or table OID) to the default return\n> data without changing the fe/be protocol and breaking a lot of existing\n> client code.\n>\n\nI've thought backends could return info\n 'INSERT oid count tid' \nto their frontends but is it impossible?\nShould (tuples)count be the 3rd and the last item to return on INSERT?\n \n> Philip's INSERT ... RETURNING idea could support returning TID and\n> table OID as a special case, and it has the saving grace that it\n> won't affect apps that don't use it...\n>\n\nIf commandInfo(cmdStatus) is unavailable, this seems to be\nneeded though I don't know how to implement it.\n\nRegards.\n\nHiroshi Inoue\n", "msg_date": "Wed, 12 Jul 2000 15:13:38 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Re: postgres TODO " }, { "msg_contents": "Tom Lane wrote:\n>\n> Philip's INSERT ... RETURNING idea could support returning TID and\n> table OID as a special case, and it has the saving grace that it\n> won't affect apps that don't use it...\n\n I like that one a lot more too. It should be relatively easy\n to add a list of attributes (specified after RETURNING) to\n the querytree. Then send out a regular result set of tuples\n built from the requested attributes of the new tuple (at\n INSERT/UPDATE) or the old one (at DELETE) during the executor\n run. Or maybe both and specified as NEW.attname vs.\n OLD.attname? Then it needs AS too, making the attribute list\n look like a targetlist restricted to Var nodes.\n\n This doesn't require any changes in the FE/BE protocol. 
And a\n client using this new feature just expects TUPLES_OK instead\n of COMMAND_OK when using the new functionality.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Wed, 12 Jul 2000 11:05:21 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Re: postgres TODO" }, { "msg_contents": "Thus spake Philip Warner\n> > Not to mention the juicy topics of access permissions and\n> >possible errors.\n> \n> Can't one fall back here on the 'insert followed by select' analogy? Or is\n> there a specific example that you have in mind?\n\nI think the thing he has in mind is the situation where one has insert\nperms but not select. The decision is whether to have the insert fail\nif the select fails. Or, do you allow the (virtual) select in this\ncase since it is your own inserted row you are trying to read?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 12 Jul 2000 05:14:16 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: Insert..returning (was Re: Re: postgres TODO)" }, { "msg_contents": "At 05:14 12/07/00 -0400, D'Arcy J.M. Cain wrote:\n>Thus spake Philip Warner\n>> > Not to mention the juicy topics of access permissions and\n>> >possible errors.\n>> \n>> Can't one fall back here on the 'insert followed by select' analogy? Or is\n>> there a specific example that you have in mind?\n>\n>I think the thing he has in mind is the situation where one has insert\n>perms but not select. The decision is whether to have the insert fail\n>if the select fails. 
Or, do you allow the (virtual) select in this\n>case since it is your own inserted row you are trying to read?\n\nI would be inclined to follow the perms; is there a problem with that? You\nshould not let them read the row they inserted since it *may* contain\nsensitive (automatically generated) data - the DBA must have had a reason\nfor preventing SELECT.\n\nThe next question is whether they should be allowed to do the insert, and\nagain I would be inclined to say 'no'. Can we check perms easily at the start?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 12 Jul 2000 21:27:14 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert..returning (was Re: Re: postgres TODO)" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n>> I think the thing he has in mind is the situation where one has insert\n>> perms but not select.\n\nExactly --- and that's a perfectly reasonable setup in some cases (think\nblind mailbox). INSERT ... RETURNING should require both insert and\nselect privileges IMHO.\n\n> I would be inclined to follow the perms; is there a problem with that? You\n> should not let them read the row they inserted since it *may* contain\n> sensitive (automatically generated) data - the DBA must have had a reason\n> for preventing SELECT.\n\nIt would be a pretty stupid app that would be using INSERT ... RETURNING\nto obtain the data that it itself is supplying. The only reason I can\nsee for the feature is to get hold of automatically-generated column\nvalues. 
Thus, obeying select permissions is relevant.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jul 2000 12:47:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert..returning (was Re: Re: postgres TODO) " }, { "msg_contents": "At 12:47 12/07/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>>> I think the thing he has in mind is the situation where one has insert\n>>> perms but not select.\n>\n>Exactly --- and that's a perfectly reasonable setup in some cases (think\n>blind mailbox). INSERT ... RETURNING should require both insert and\n>select privileges IMHO.\n\nYou won't get any argument from me.\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 13 Jul 2000 02:51:04 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert..returning (was Re: Re: postgres TODO) " }, { "msg_contents": "At 11:05 12/07/00 +0200, Jan Wieck wrote:\n>Tom Lane wrote:\n>>\n>> Philip's INSERT ... RETURNING idea could support returning TID and\n>> table OID as a special case, and it has the saving grace that it\n>> won't affect apps that don't use it...\n\nWhat sort of syntax would you use to request TID?\n\n\n> I like that one alot more too. It should be relatively easy\n> to add a list of attributes (specified after RETURNING) to\n> the querytree.\n\nFor you, maybe! If you feel like giving me a list of sources that will get\nme into this, that would be great. 
I've looked through various executor\nmodules and the parser, but would appreciate any advice you have to offer...\n\nNote: I am not planning on *making* changes, just yet. I'm mainly\ninterested in understanding the suggestions people are making!\n\n\n> Then send out a regular result set of tuples\n> built from the requested attributes of the new tuple (at\n> INSERT/UPDATE) or the old one (at DELETE) during the executor\n> run.\n\nThis sounds like what I want to do.\n\n\n> Or maybe both and specified as NEW.attname vs.\n> OLD.attname? Then it needs AS too, making the attribute list\n> look like a targetlist restricted to Var nodes.\n\nThis also sounds like a cute feature, so long as it fits naturally into the\nchanges.\n\n\n> This doesn't require any changes in the FE/BE protocol. And a\n> client using this new feature just expects TUPLES_OK instead\n> of COMMAND_OK when using the new functionality.\n\nSounds good.\n\n\nThanks everybody for the feedback, I'll try to understand it and then get\nback with a revised plan...\n\n \n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 13 Jul 2000 16:57:27 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: postgres TODO" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> At 11:05 12/07/00 +0200, Jan Wieck wrote:\n>> Tom Lane wrote:\n>>> \n>>> Philip's INSERT ... RETURNING idea could support returning TID and\n>>> table OID as a special case, and it has the saving grace that it\n>>> won't affect apps that don't use it...\n\n> What sort of syntax would you use to request TID?\n\n... 
RETURNING ctid\n\nThis might be a little tricky; you'd have to be sure the RETURNING\ncode executes late enough that a TID has been assigned to the tuple.\nNot sure if post-insert trigger time is late enough or not (Jan?)\nbut in principle it's not a special case at all, just a system\nattribute the same as OID.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Jul 2000 03:13:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: postgres TODO " }, { "msg_contents": "Restated in TODO:\n\n* Allow [INSERT/UPDATE] ... RETURNING new.col or old.col (Philip) \n \n\n> At 09:14 8/07/00 -0400, Bruce Momjian wrote:\n> >> hi,\n> >> \n> >> threre are a postgresql/mysql comparative.\n> >> You can get something for the TODO:\n> >> \n> >> http://www.phpbuilder.com/columns/tim20000705.php3?page=1\n> >> \n> >> regards,\n> >> \n> >\n> >Thanks. Yes, I have added to the TODO list:\n> >\n> >\t* Add function to return primary key value on INSERT\n> \n> I had a look at the page and could not see the reference, so this\n> suggestion may be inappropriate, but...\n> \n> How about something more general - an incredibly useful feature of Dec/Rdb is:\n> \n> insert into t1(...) values(...) returning attr-list\n> \n> which is like performing a select directly after the insert. The same kind\n> of syntax applies to updates as well, eg.\n> \n> update t1 set f1 = 2 where <stuff> returning f1, f2, f3;\n> \n> Perhaps your original suggestion is a lot easier, but this is a convenient\n> feature...\n> \n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.C.N. 
008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Jan 2001 23:07:04 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: postgres TODO" } ]
[ { "msg_contents": "First, I apologize if this is not the correct list for this :).\n\nI've had a crash occur in postgres 7.0.2. When the db crashed, there was quite a\nbit going on, so I am not really sure what caused it. As such, I can't really\nreproduce this :(. The only thing that I have that might help you guys is the\ncore file. Below is a backtrace from the core file.\n\nAnyways, I thought I should at least post what I have here in case this is\nuseful to the backend hackers :).\n\nThe system this is running on is:\n\nLinux 2.2.14\n\nDual P-III 500 MHz.\n768 MB RAM.\n\nI ran the core file through gdb and got a backtrace that looks like this:\n\n[root@testbed mschout]# gdb postmaster core\nGNU gdb 19991004\nCopyright 1998 Free Software Foundation, Inc.\nGDB is free software, covered by the GNU General Public License, and you are\nwelcome to change it and/or distribute copies of it under certain conditions.\nType \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB. Type \"show warranty\" for details.\nThis GDB was configured as \"i386-redhat-linux\"...\n(no debugging symbols found)...\nCore was generated by `/usr/bin/postgres localhost postgres'.\nProgram terminated with signal 11, Segmentation fault.\nReading symbols from /lib/libcrypt.so.1...done.\nReading symbols from /lib/libnsl.so.1...done.\nReading symbols from /lib/libdl.so.2...done.\nReading symbols from /lib/libm.so.6...done.\nReading symbols from /lib/libutil.so.1...done.\nReading symbols from /usr/lib/libreadline.so.3...done.\nReading symbols from /lib/libtermcap.so.2...done.\nReading symbols from /usr/lib/libncurses.so.4...done.\nReading symbols from /lib/libc.so.6...done.\nReading symbols from /lib/ld-linux.so.2...done.\nReading symbols from /lib/libnss_compat.so.2...done.\nReading symbols from /lib/libnss_files.so.2...done.\nReading symbols from /usr/lib/gconv/ISO8859-1.so...done.\n#0 0x0 in ?? ()\n(gdb) bt\n#0 0x0 in ?? 
()\n#1 0x401029e6 in getenv (name=0x1 <Address 0x1 out of bounds>)\n at ../sysdeps/generic/getenv.c:77\n#2 0x80f2074 in quickdie ()\n#3 0x40100408 in __ldexpf (value=2.4801743, exp=1074648272)\n at ../sysdeps/libm-ieee754/s_ldexpf.c:30\n#4 0x4000a120 in _dl_map_object_deps (map=0x4000a120, preloads=0x40012024, \n npreloads=3221220516, trace_mode=1073815588, global_scope=-1073746744)\n at dl-deps.c:257\n#5 0x400ceaf5 in _nc_parse_entry () from /usr/lib/libncurses.so.4\n#6 0x4000a6a6 in _dl_map_object_deps (map=0xbfffecb0, preloads=0x20657661, \n npreloads=1074647000, trace_mode=10, global_scope=1) at dl-deps.c:542\n#7 0x401029e6 in getenv (name=0x1 <Address 0x1 out of bounds>)\n at ../sysdeps/generic/getenv.c:77\n#8 0x80f2074 in quickdie ()\n#9 0x40100408 in __ldexpf (value=0, exp=135961688)\n at ../sysdeps/libm-ieee754/s_ldexpf.c:30\n#10 0x80aea87 in pq_getbytes ()\n#11 0x80f1b6c in HandleFunctionRequest ()\n#12 0x80f1c07 in HandleFunctionRequest ()\n#13 0x80f2e2d in PostgresMain ()\n#14 0x80dbea2 in PostmasterMain ()\n#15 0x80db9ea in PostmasterMain ()\n#16 0x80dad36 in PostmasterMain ()\n#17 0x80da7ac in PostmasterMain ()\n#18 0x80af655 in main ()\n#19 0x400fa1eb in ?? () from /lib/libc.so.6\n(gdb) \n\n\n", "msg_date": "Sat, 8 Jul 2000 10:49:47 -0500 (CDT)", "msg_from": "Michael J Schout <[email protected]>", "msg_from_op": true, "msg_subject": "crash in 7.0.2..." }, { "msg_contents": "Michael J Schout <[email protected]> writes:\n> Ive had a crash occur in postgres 7.0.2. When the db crashed, there\n> was quite a bit going on, so I am not really sure what caused it. As\n> such, I cant really reproduce this :(. The only thing that I have\n> that might help you guys is the core file. Below is a backtrace from\n> the core file.\n\nUnfortunately I don't think I believe the backtrace at all :-(.\nquickdie does not call getenv, for example. 
I think gdb has probably\ngotten confused and printed garbage information.\n\nYou might want to recompile the backend with -g, in hopes of getting\na more useful backtrace if it happens again.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jul 2000 13:00:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: crash in 7.0.2... " } ]
[ { "msg_contents": "I'm trying to get the SSL stuff to at least build out of the box. It seems\nthere's a flaw here: Even when you only want to build with SSL support \"to\ntry later\" the postmaster refuses to start unless you set up appropriate\ncertificate and key files. There's no way to disable SSL at run time.\n\nAt first I thought the -l option was supposed to do that. But the\nresponsibility of the -l option is to refuse any non-SSL connections. But\ndeciding that should rather be the responsibility of the pg_hba.conf file,\nas indeed it is, with its hostssl directive. (At least that is my\nunderstanding.)\n\nDoes anyone have any suggestions how to handle this? This was never an\nadvertised feature so we have a little room to play with, I suppose.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 8 Jul 2000 21:22:31 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "SSL" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Does anyone have any suggestions how to handle this? This was never an\n> advertised feature so we have a little room to play with, I suppose.\n\nI think the SSL code is actually broken --- leastwise, the libpq side\nof it looks mighty bogus to me. It can't possibly work to negotiate\nthe SSL setup before we've done the connect, can it? (I believe whoever\nadded the nonblocking-connect logic to libpq fouled this up.)\n\nI've been griping about that since January but no one's responded, not\neven to say \"yes it's busted\" or \"it works for me\". So the level of\ninterest seems awfully low, and I have no particular interest in fixing\nit myself.\n\nBottom line: if you think it needs changing then change it. 
There\nsure aren't going to be very many complainers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jul 2000 19:02:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL " } ]
[ { "msg_contents": " Hi,\n This is a reply to a mail on 18 May 2000. \nSorry, I am always late...\n\nts<[email protected]> writes:\n>B> What function languages do triggers support? C, plpgsql, pltcl, SQL?\n>\n> You can add plruby.\n\n Were you able to add plruby?\n Can you add plruby to the PostgreSQL documentation?\n\n I think that Ruby is more popular than Python in Japan.\n The following Web pages give a good overview of Ruby:\nhttp://www-4.ibm.com/software/developer/library/ruby.html\n or\nhttp://www.informit.com/matter/art0000016/\n\n And I introduced a Ruby interface for PostgreSQL on pgsql-interfaces.\nI would like to add the 'Ruby interface' to your list, too.\n\nRuby:\nhttp://www.ruby-lang.org/en/\n\nplruby:\nhttp://www.ruby-lang.org/en/raa-list.rhtml?name=PL%2FRuby\n\nRuby interface for PostgreSQL:\nhttp://www.ruby-lang.org/en/raa-list.rhtml?name=postgres\n\n\nNoboru Saitou.\n", "msg_date": "Sun, 09 Jul 2000 06:37:09 +0900", "msg_from": "Noboru Saitou <[email protected]>", "msg_from_op": true, "msg_subject": "plruby(Re:Trigger function languages)" } ]
[ { "msg_contents": "\n\nIs this supposed to happen? I discovered this when I was experimenting with\nconverting a string to a number.\n\n# SELECT to_number('12,454.8-', '99G999D9S');\n to_number\n-----------\n -12454.8\n(1 row)\n\n# SELECT to_number('12,454.8-', '');\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!#\n\n\n\n\nI am running PostgreSQL 7.0.2 on FreeBSD 3.4-STABLE (x86). Thanks,\n\n\n- Andrew.\n\n\n", "msg_date": "Sun, 9 Jul 2000 15:37:13 +1000", "msg_from": "\"Andrew Snow\" <[email protected]>", "msg_from_op": true, "msg_subject": "Unnexpected results using to_number()" }, { "msg_contents": "\"Andrew Snow\" <[email protected]> writes:\n> # SELECT to_number('12,454.8-', '');\n> pqReadData() -- backend closed the channel unexpectedly.\n\nIn current sources I get a NULL result, which seems to be what the\ncode author intended originally. However this seems a little bit\ninconsistent --- shouldn't it raise a bad-format error instead?\nFor example,\n\nregression=# SELECT to_number('12,454.8-', ' ');\nERROR: Bad numeric input format ' '\n\nSeems odd that no spaces means \"return NULL\" but 1 or more spaces\ndoesn't.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jul 2000 13:38:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unnexpected results using to_number() " }, { "msg_contents": "On Sun, 9 Jul 2000, Tom Lane wrote:\n\n> \"Andrew Snow\" <[email protected]> writes:\n> > # SELECT to_number('12,454.8-', '');\n> > pqReadData() -- backend closed the channel unexpectedly.\n> \n> In current sources I get a NULL result, which seems to be what the\n> code author intended originally. 
However this seems a little bit\n\n my original code does not return NULL, but returns numeric_in(NULL, 0, 0) in\nthis situation.\n\n> inconsistent --- shouldn't it raise a bad-format error instead?\n> For example,\n> \n> regression=# SELECT to_number('12,454.8-', ' ');\n> ERROR: Bad numeric input format ' '\n\n Thanks for the fix, Tom.\n\n\t\t\t\t\tKarel\n\n", "msg_date": "Mon, 10 Jul 2000 08:42:31 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Unnexpected results using to_number() " }, { "msg_contents": "Karel Zak <[email protected]> writes:\n> On Sun, 9 Jul 2000, Tom Lane wrote:\n>> \"Andrew Snow\" <[email protected]> writes:\n>>>> # SELECT to_number('12,454.8-', '');\n>>>> pqReadData() -- backend closed the channel unexpectedly.\n>> \n>> In current sources I get a NULL result, which seems to be what the\n>> code author intended originally. 
However this seems a little bit\n> \n> > my original code not return NULL, but return numeric_in(NULL, 0, 0) for\n> > this situation.\n> \n> Yeah, I know. What did you expect that to produce, if not a NULL?\n\n It is a numeric_in() problem :-), but yes, it is still NULL.\n\n> \n> >> inconsistent --- shouldn't it raise a bad-format error instead?\n> \n> What do you think about raising an error instead of returning NULL?\n\nOracle:\nSVRMGR> select to_number('12,454.8-', '') from dual;\nTO_NUMBER(\n----------\nORA-01722: invalid number\n\n\nI mean that we can use ERROR here too. My original idea was same form for \nto_char and for to_number --- for to_char() Oracle say:\n\nSVRMGR> select to_char(SYSDATE, '') from dual;\nTO_CHAR(S\n---------\n\n1 row selected.\n\n\nI not sure here what is better. If you mean that ERROR is better I will \nchange it in some next patch fot formattin.c.\n\n Comments?\n \n\t\t\t\t\t\tKarel\n\n\n\n", "msg_date": "Mon, 10 Jul 2000 09:32:37 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [BUGS] Unnexpected results using to_number() " }, { "msg_contents": "Karel Zak <[email protected]> writes:\n>> What do you think about raising an error instead of returning NULL?\n\n> Oracle:\n> SVRMGR> select to_number('12,454.8-', '') from dual;\n> TO_NUMBER(\n> ----------\n> ORA-01722: invalid number\n\n> I mean that we can use ERROR here too. My original idea was same form for \n> to_char and for to_number --- for to_char() Oracle say:\n\n> SVRMGR> select to_char(SYSDATE, '') from dual;\n> TO_CHAR(S\n> ---------\n>\n> 1 row selected.\n\n> I not sure here what is better.\n\nWell, I think there is a good reason for the difference in Oracle's\nbehavior. The second case is presumably returning a zero-length string,\nnot a NULL, and that is a perfectly valid string. 
to_number() has no\ncomparable option, so I think it makes sense for it to raise an error.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 10:12:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [BUGS] Unnexpected results using to_number() " } ]
[ { "msg_contents": "zzz=# create table zzz(f1 int4);\nzzz=# create temporary table zzz(f1 int4);\n\n...works\n\nzzz=# create temporary table zzz(f1 int4);\nzzz=# create table zzz(f1 int4);\nERROR: Relation 'zzz' already exists\n\nIs this a problem?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 09 Jul 2000 18:00:33 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Interesting featurette?" }, { "msg_contents": "> zzz=# create table zzz(f1 int4);\n> zzz=# create temporary table zzz(f1 int4);\n> \n> ...works\n> \n> zzz=# create temporary table zzz(f1 int4);\n> zzz=# create table zzz(f1 int4);\n> ERROR: Relation 'zzz' already exists\n> \n> Is this a problem?\n\nTemporary tables mask real tables. Real tables can't mask temporary\ntables. This is the way it should be, I think.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 Jul 2000 12:27:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Interesting featurette?" } ]
[ { "msg_contents": "\nI've got a version of pg_dump that supports BLOB output (currently using\ncustom format only), and would like some feedback. If anybody out there has\na 'real-world' db using blobs, I'd love them to test it...\n\nThanks,\n\nPhilip Warner.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 09 Jul 2000 18:09:42 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Anyone willing to test pg_dump with BLOB support?" } ]
[ { "msg_contents": "\nIt seems some code disappeared from pg_dump.c between 7.0.2 and current, I\nthink in the following revisions:\n\nrevision 1.148\ndate: 2000/05/28 20:34:52; author: tgl; state: Exp; lines: +21 -42\nMiscellaneous cleanups of places that needed to account for new\npg_language entries.\n----------------------------\nrevision 1.147\ndate: 2000/04/14 01:34:24; author: tgl; state: Exp; lines: +2 -2\nAnother static-vs-not-static error.\n\nThe code is:\n\n...in dumpOneFunc....\n\n if (finfo[i].dumped)\n return;\n else\n finfo[i].dumped = 1;\n\n /* becomeUser(fout, finfo[i].usename); */\n\n if (finfo[i].lang == INTERNALlanguageId)\n {\n func_def = finfo[i].prosrc;\n strcpy(func_lang, \"INTERNAL\");\n }\n else if (finfo[i].lang == ClanguageId)\n {\n func_def = finfo[i].probin;\n strcpy(func_lang, \"C\");\n }\n else if (finfo[i].lang == SQLlanguageId)\n {\n func_def = finfo[i].prosrc;\n strcpy(func_lang, \"SQL\");\n }\n else\n {\n\nand without this code, the dumps for plpgsql call handlers do not work (at\nleast for me). It may be that I have messed something up in the code, but I\ndon't think so.\n\nAny light you can shed on this would be great.\n\nP.S. The specific problem is that it now uses plsrc as the definition for\nall functions, whereas the (C language) plpgsql call handler requires plbin\nto be used.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 09 Jul 2000 20:50:07 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "C language function dump problem" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> It seems some code disappeared from pg_dump.c between 7.0.2 and current,\n> ...\n> P.S. The specific problem is that it now uses plsrc as the definition for\n> all functions, whereas the (C language) plpgsql call handler requires plbin\n> to be used.\n\nLooks like I broke it :-(. Didn't read the code carefully enough,\nI guess, and thought that the selection of language name was the only\nuseful thing it was accomplishing. So I figured the default path would\nhandle all cases just as easily.\n\nNow that I think about it, the code was actually broken before that,\nbecause for a C-language function it needs to produce two AS items\nspecifying the link symbol and the library path. Looks like we\nneglected to update pg_dump when that feature was added.\n\nBasically you need to make pg_dump do the inverse of\ninterpret_AS_clause() in src/backend/commands/define.c. Note that\nthere are now two OIDs that need to be handled this way, ClanguageId\nand NEWClanguageId.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jul 2000 13:25:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: C language function dump problem " }, { "msg_contents": "At 13:25 9/07/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> It seems some code disappeared from pg_dump.c between 7.0.2 and current,\n>> ...\n>> P.S. 
The specific problem is that it now uses plsrc as the definition for\n>> all functions, whereas the (C language) plpgsql call handler requires plbin\n>> to be used.\n>\n>Now that I think about it, the code was actually broken before that,\n>because for a C-language function it needs to produce two AS items\n>specifying the link symbol and the library path. Looks like we\n>neglected to update pg_dump when that feature was added.\n>\n\nLooking at the code, it *seems* that I should be able to (in pseudo-code):\n\nif ( finfo[i].probin != \"-\")\n defn = defn || \"AS \" || finfo[i].probin;\n\nif ( finfo[i].prosrc != \"-\")\n defn = defn || \"AS \" || finfo[i].prosrc;\n\ni.e. use probin if it is not \"-\", and use prosrc if it is not \"-\".\n\nThis gets around hard coding for C & newC, so reduces the chance of\nproblems in the future...I think. \n\nDoes that sound reasonable to everyone?\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 10 Jul 2000 10:38:30 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: C language function dump problem " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> Looking at the code, it *seems* that I should be able to (in pseudo-code):\n\n> if ( finfo[i].probin != \"-\")\n> defn = defn || \"AS \" || finfo[i].probin;\n\n> if ( finfo[i].prosrc != \"-\")\n> defn = defn || \"AS \" || finfo[i].prosrc;\n\nNot quite; I think the correct syntax for C functions is\n\n\tAS 'probin', 'prosrc'\n\nAlso I'm not real sure that the unused field will be \"-\" for all the\nother languages --- but if that's true you could make it work. 
Not\nhardwiring the language OIDs would definitely be a Good Thing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jul 2000 21:13:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: C language function dump problem " }, { "msg_contents": "At 21:13 9/07/00 -0400, Tom Lane wrote:\n>\n>Also I'm not real sure that the unused field will be \"-\" for all the\n>other languages --- but if that's true you could make it work. Not\n>hardwiring the language OIDs would definitely be a Good Thing.\n>\n\nDone. In backend/commands/define.c unused field is set to '-' for the moment.\n\nA patch for CVS is attached, and I have amended my BLOB dumping version\nappropriately.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/", "msg_date": "Mon, 10 Jul 2000 12:05:23 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: C language function dump problem " }, { "msg_contents": "I take it back. I did have this. I was looking for pg_dump in the\nsubject. Searching for Warner, I found it. Sorry for the confusion. \nApplied.\n\n> At 21:13 9/07/00 -0400, Tom Lane wrote:\n> >\n> >Also I'm not real sure that the unused field will be \"-\" for all the\n> >other languages --- but if that's true you could make it work. Not\n> >hardwiring the language OIDs would definitely be a Good Thing.\n> >\n> \n> Done. In backend/commands/define.c unused field is set to '-' for the moment.\n> \n> A patch for CVS is attached, and I have amended my BLOB dumping version\n> appropriately.\n> \n\n[ Attachment, skipping... 
]\n\n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.C.N. 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jul 2000 10:45:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: C language function dump problem" } ]
[ { "msg_contents": "\n\nIs there an official bug list?\n\nThe reason I ask is that there are a couple of issues that probably need to\nbe filed somewhere (soft peat?), at least for my peace of mind...\n\n1. Temp tables preventing permanent table creation:\n---------------------------------------------------\n\nzzz=# create table zzz(f1 int4);\nzzz=# create temporary table zzz(f1 int4);\n\n...works\n\nzzz=# create temporary table zzz(f1 int4);\nzzz=# create table zzz(f1 int4);\nERROR: Relation 'zzz' already exists\n\nI would have thought that the order should not be important.\n\n\n2. Unpleasant error (& behaviour) from legal statement\n------------------------------------------------------\n\ncreate table t1(f1 int4, f2 int4);\ncreate table t2(f1 int4, f2 int4);\n\ninsert into t1 values(1, 0);\ninsert into t1 values(2, 0);\n\ninsert into t2 values(1, 0);\n\nupdate t1 set f2=count(*) from t2 where t1.f1=1 and t2.f1=t1.f1 ;\nUPDATE 1\n\nupdate t1 set f2=count(*) from t2 where t1.f1=2 and t2.f1=t1.f1 ;\nERROR: ExecutePlan: (junk) `ctid' is NULL!\n\nI would have expected f2 to be either 0 when no rows matched, or to be\nunchanged.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 09 Jul 2000 22:26:38 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Bug list?" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> Is there an official bug list?\n\nThere's the TODO list, but things usually only get on there if they're\nnot going to be fixed quickly. Active discussion threads in pghackers\ndon't normally get reflected into TODO ...\n\n> 1. 
Temp tables preventing permanent table creation:\n\nNot a bug IMHO, since temp tables mask permanent tables. Drop\nor rename the temp table if you want to make a permanent table.\n\n> update t1 set f2=count(*) from t2 where t1.f1=2 and t2.f1=t1.f1 ;\n> ERROR: ExecutePlan: (junk) `ctid' is NULL!\n\nThis is a bug, but it's not clear what the behavior should be; maybe\nthe bug is accepting an ill-defined command in the first place. See\n\"MAX() of 0 records\" thread nearby.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jul 2000 13:57:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug list? " }, { "msg_contents": "At 13:57 9/07/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> Is there an official bug list?\n>\n>There's the TODO list, but things usually only get on there if they're\n>not going to be fixed quickly. Active discussion threads in pghackers\n>don't normally get reflected into TODO ...\n\nThe only reason I brought this up was because it'd be good to know it was\non a 'known issues' list, possibly with your \"it's a feature\"\nresponse. Then when/if people actually work on the code, they could\n*consider* seeing if any of these items could be addressed. Then again,\nmaybe I should have spotted the 'Max() of no records' discussion sooner...\n\n\n>> 1. Temp tables preventing permanent table creation:\n>\n>Not a bug IMHO, since temp tables mask permanent tables. Drop\n>or rename the temp table if you want to make a permanent table.\n\nDon't mind them masking them on select, update, drop etc, but why mask it\non create?\n\n\n>> update t1 set f2=count(*) from t2 where t1.f1=2 and t2.f1=t1.f1 ;\n>> ERROR: ExecutePlan: (junk) `ctid' is NULL!\n>\n>This is a bug, but it's not clear what the behavior should be; maybe\n>the bug is accepting an ill-defined command in the first place. 
See\n>\"MAX() of 0 records\" thread nearby.\n\nIn the case of the above query, I expected it to be the same as:\n\n update t1 set f2=(Select Count(*) from t2 where t2.f1=t1.f1) where\nt1.f1 = 2\n\nand I would have expected Count(*) to return 0 with no matches, and Max(*)\nto return NULL with no matches; then in the case of Max(*) one can use\nCoalesce if one wants a non-null value.\n\nThe big advantage I see about the 'update ... from...' syntax is it helps\nthe planner (at least in theory) in the case where multiple attrs are being\nupdated:\n\n update t1 set \n f1=(Select Count(t2.f2) from t2 where t2.f1=t1.f1), \n f2=(Select Max(t2.f2) from t2 where t2.f1=t1.f1), \n f3=(Select Min(t2.f2) from t2 where t2.f1=t1.f1) \n where t1.f1=2;\n\nseems more clumsy and perhaps harder to optimize than:\n\n update t1 set\n f1=Count(t2.f2),\n f2=Max(t2.f2),\n f3=Min(t2.f2)\n From\n t2 \n where t1.f1=2 and t2.f1=t1.f1\n\nBut I agree it's a little unclear as to how it should be interpreted!\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 10 Jul 2000 10:21:56 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug list? " } ]
[ { "msg_contents": "Hi,\n\nHere are 2 patches against current CVS to fix a couple of\nproblems I found building and testing on Solaris.\n\nFirstly genbki.sh had a couple of calls to sed using the syntax\n\"sed <command>\" which results in sed reporting a garbled command.\n\nI have changed this to \"sed -e '<command>'\" which keeps sed happy.\n\nSecondly run_check.sh needed a couple of changes.\n\nOne to fix its handling of a pre-existing LD_LIBRARY_PATH. (Which\nI needed to use to point the executables at the location of libz.so)\n\nOne to allow the call to \"initdb\" to find the correct template files\netc.\n\nKeith.", "msg_date": "Sun, 9 Jul 2000 13:40:25 +0100 (BST)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "initdb and runcheck problems (Latest CVS)" }, { "msg_contents": "Keith Parks writes:\n\n> Here are 2 patches against current CVS to fix a couple of\n> problems I found building and testing on Solaris.\n\nDone.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 9 Jul 2000 15:20:19 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb and runcheck problems (Latest CVS)" }, { "msg_contents": "I have done the sed -e change, and Peter has done the LD_PATH change.\n\n> Hi,\n> \n> Here are 2 patches against current CVS to fix a couple of\n> problems I found building and testing on Solaris.\n> \n> Firstly genbki.sh had a couple of call to sed using the syntax\n> \"sed <command>\" which results in sed reporting a garbled command.\n> \n> I have changed this to \"sed -e '<command>'\" which keeps sed happy.\n> \n> Secondly run_check.sh needed a couple of changes.\n> \n> One to fix it's handling of a pre-existing LD_LIBRARY_PATH. 
(Which\n> I needed to use to point the executables at the location of libz.so)\n> \n> One to allow the call to \"initdb\" to find the correct template files\n> etc.\n> \n> Keith.\nContent-Description: genbki.patch\n\n> *** src/backend/catalog/genbki.sh.orig\tThu Jul 6 22:33:22 2000\n> --- src/backend/catalog/genbki.sh\tSun Jul 9 09:00:01 2000\n> ***************\n> *** 45,57 ****\n> INCLUDE_DIR=\"$2\"\n> shift;;\n> -I*)\n> ! INCLUDE_DIR=`echo $1 | sed s/^-I//`\n> ;;\n> -o)\n> OUTPUT_PREFIX=\"$2\"\n> shift;;\n> -o*)\n> ! OUTPUT_PREFIX=`echo $1 | sed s/^-o//`\n> ;;\n> --help)\n> echo \"$CMDNAME generates system catalog bootstrapping files.\"\n> --- 45,57 ----\n> INCLUDE_DIR=\"$2\"\n> shift;;\n> -I*)\n> ! INCLUDE_DIR=`echo $1 | sed -e 's/^-I//'`\n> ;;\n> -o)\n> OUTPUT_PREFIX=\"$2\"\n> shift;;\n> -o*)\n> ! OUTPUT_PREFIX=`echo $1 | sed -e 's/^-o//'`\n> ;;\n> --help)\n> echo \"$CMDNAME generates system catalog bootstrapping files.\"\nContent-Description: run_check.patch\n\n> *** src/test/regress/run_check.sh.orig\tSun Jul 9 10:58:16 2000\n> --- src/test/regress/run_check.sh\tSun Jul 9 12:58:47 2000\n> ***************\n> *** 24,29 ****\n> --- 24,30 ----\n> PGDATA=\"$CHKDIR/data\"\n> LIBDIR=\"$CHKDIR/lib\"\n> BINDIR=\"$CHKDIR/bin\"\n> + SHAREDIR=\"$CHKDIR/share\"\n> LOGDIR=\"$CHKDIR/log\"\n> TIMDIR=\"$CHKDIR/timestamp\"\n> PGPORT=\"65432\"\n> ***************\n> *** 43,49 ****\n> # otherwise feel free to cover your platform here as well.\n> if [ \"$LD_LIBRARY_PATH\" ]; then\n> \told_LD_LIBRARY_PATH=\"$LD_LIBRARY_PATH\"\n> ! \tLD_LIBRARY_PATH=\"$LIBDIR:$LD_LIBARY_PATH\"\n> else\n> \tLD_LIBRARY_PATH=\"$LIBDIR\"\n> fi\n> --- 44,50 ----\n> # otherwise feel free to cover your platform here as well.\n> if [ \"$LD_LIBRARY_PATH\" ]; then\n> \told_LD_LIBRARY_PATH=\"$LD_LIBRARY_PATH\"\n> ! 
\tLD_LIBRARY_PATH=\"$LIBDIR:$old_LD_LIBRARY_PATH\"\n> else\n> \tLD_LIBRARY_PATH=\"$LIBDIR\"\n> fi\n> ***************\n> *** 187,193 ****\n> # Run initdb to initialize a database system in ./tmp_check\n> # ----------\n> echo \"=============== Initializing check database instance ================\"\n> ! initdb -D $PGDATA --noclean >$LOGDIR/initdb.log 2>&1\n> \n> if [ $? -ne 0 ]\n> then\n> --- 188,194 ----\n> # Run initdb to initialize a database system in ./tmp_check\n> # ----------\n> echo \"=============== Initializing check database instance ================\"\n> ! initdb -D $PGDATA -L $SHAREDIR --noclean >$LOGDIR/initdb.log 2>&1\n> \n> if [ $? -ne 0 ]\n> then\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 Jul 2000 12:38:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb and runcheck problems (Latest CVS)" }, { "msg_contents": "I take that back. Peter applied both.\n\n> Hi,\n> \n> Here are 2 patches against current CVS to fix a couple of\n> problems I found building and testing on Solaris.\n> \n> Firstly genbki.sh had a couple of call to sed using the syntax\n> \"sed <command>\" which results in sed reporting a garbled command.\n> \n> I have changed this to \"sed -e '<command>'\" which keeps sed happy.\n> \n> Secondly run_check.sh needed a couple of changes.\n> \n> One to fix it's handling of a pre-existing LD_LIBRARY_PATH. 
(Which\n> I needed to use to point the executables at the location of libz.so)\n> \n> One to allow the call to \"initdb\" to find the correct template files\n> etc.\n> \n> Keith.\nContent-Description: genbki.patch\n\n> *** src/backend/catalog/genbki.sh.orig\tThu Jul 6 22:33:22 2000\n> --- src/backend/catalog/genbki.sh\tSun Jul 9 09:00:01 2000\n> ***************\n> *** 45,57 ****\n> INCLUDE_DIR=\"$2\"\n> shift;;\n> -I*)\n> ! INCLUDE_DIR=`echo $1 | sed s/^-I//`\n> ;;\n> -o)\n> OUTPUT_PREFIX=\"$2\"\n> shift;;\n> -o*)\n> ! OUTPUT_PREFIX=`echo $1 | sed s/^-o//`\n> ;;\n> --help)\n> echo \"$CMDNAME generates system catalog bootstrapping files.\"\n> --- 45,57 ----\n> INCLUDE_DIR=\"$2\"\n> shift;;\n> -I*)\n> ! INCLUDE_DIR=`echo $1 | sed -e 's/^-I//'`\n> ;;\n> -o)\n> OUTPUT_PREFIX=\"$2\"\n> shift;;\n> -o*)\n> ! OUTPUT_PREFIX=`echo $1 | sed -e 's/^-o//'`\n> ;;\n> --help)\n> echo \"$CMDNAME generates system catalog bootstrapping files.\"\nContent-Description: run_check.patch\n\n> *** src/test/regress/run_check.sh.orig\tSun Jul 9 10:58:16 2000\n> --- src/test/regress/run_check.sh\tSun Jul 9 12:58:47 2000\n> ***************\n> *** 24,29 ****\n> --- 24,30 ----\n> PGDATA=\"$CHKDIR/data\"\n> LIBDIR=\"$CHKDIR/lib\"\n> BINDIR=\"$CHKDIR/bin\"\n> + SHAREDIR=\"$CHKDIR/share\"\n> LOGDIR=\"$CHKDIR/log\"\n> TIMDIR=\"$CHKDIR/timestamp\"\n> PGPORT=\"65432\"\n> ***************\n> *** 43,49 ****\n> # otherwise feel free to cover your platform here as well.\n> if [ \"$LD_LIBRARY_PATH\" ]; then\n> \told_LD_LIBRARY_PATH=\"$LD_LIBRARY_PATH\"\n> ! \tLD_LIBRARY_PATH=\"$LIBDIR:$LD_LIBARY_PATH\"\n> else\n> \tLD_LIBRARY_PATH=\"$LIBDIR\"\n> fi\n> --- 44,50 ----\n> # otherwise feel free to cover your platform here as well.\n> if [ \"$LD_LIBRARY_PATH\" ]; then\n> \told_LD_LIBRARY_PATH=\"$LD_LIBRARY_PATH\"\n> ! 
\tLD_LIBRARY_PATH=\"$LIBDIR:$old_LD_LIBRARY_PATH\"\n> else\n> \tLD_LIBRARY_PATH=\"$LIBDIR\"\n> fi\n> ***************\n> *** 187,193 ****\n> # Run initdb to initialize a database system in ./tmp_check\n> # ----------\n> echo \"=============== Initializing check database instance ================\"\n> ! initdb -D $PGDATA --noclean >$LOGDIR/initdb.log 2>&1\n> \n> if [ $? -ne 0 ]\n> then\n> --- 188,194 ----\n> # Run initdb to initialize a database system in ./tmp_check\n> # ----------\n> echo \"=============== Initializing check database instance ================\"\n> ! initdb -D $PGDATA -L $SHAREDIR --noclean >$LOGDIR/initdb.log 2>&1\n> \n> if [ $? -ne 0 ]\n> then\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 Jul 2000 12:38:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb and runcheck problems (Latest CVS)" } ]
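The two shell fixes quoted in the thread above can be illustrated in isolation. This is a minimal sketch, not the actual scripts: the paths and option values are hypothetical. It shows (1) the genbki.sh change, passing the sed substitution via `-e` inside single quotes so it reaches sed intact, and (2) the run_check.sh change, saving the old `LD_LIBRARY_PATH` under a correctly spelled variable before prepending (the original referenced a misspelled `LD_LIBARY_PATH`, silently dropping the user's path).

```shell
# (1) Option stripping as in the genbki.sh patch: quote the script, pass via -e.
arg="-I/usr/local/pgsql/include"            # hypothetical -I<dir> argument
INCLUDE_DIR=`echo $arg | sed -e 's/^-I//'`
echo "$INCLUDE_DIR"                         # prints /usr/local/pgsql/include

# (2) LD_LIBRARY_PATH handling as in the run_check.sh patch: save the old
# value first, then prepend the new library directory in front of it.
LIBDIR="/usr/local/pgsql/lib"               # hypothetical library directory
if [ "$LD_LIBRARY_PATH" ]; then
	old_LD_LIBRARY_PATH="$LD_LIBRARY_PATH"
	LD_LIBRARY_PATH="$LIBDIR:$old_LD_LIBRARY_PATH"
else
	LD_LIBRARY_PATH="$LIBDIR"
fi
export LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"
```

Run with an existing `LD_LIBRARY_PATH`, the second part prints the library directory prefixed onto the old value; run with it unset, it prints just the library directory.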
[ { "msg_contents": "AFAICT, the EState field es_BaseId and the CommonState field cs_base_id\nare not used for anything, and so the plan-initialization code that\nsets them is a complete waste of time.\n\nPresumably these were once used for something, but I can't think of\nany good reason not to remove them. Does anyone see a reason to\nkeep them?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jul 2000 15:40:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "es_BaseId/cs_base_id seem to be dead code?" } ]
[ { "msg_contents": "There's a benchmark/review comparing PostgreSQL and MySQL on PHP Builder:\n http://www.phpbuilder.com/columns/tim20000705.php3\n\nCiao\n --Louis <[email protected]> \n\n\n", "msg_date": "Sun, 9 Jul 2000 16:29:52 -0400 (EDT)", "msg_from": "Louis Bertrand <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL vs. MySQL" }, { "msg_contents": "Louis Bertrand wrote:\n> \n> There's a benchmark/review comparing PostgreSQL and MySQL on PHP Builder:\n> http://www.phpbuilder.com/columns/tim20000705.php3\n\nI'm wondering about the comments that postgres is slower in connection\ntime, could this be related to that libpq always uses asynchronous\nsockets to connect? It always turns off blocking and then goes through a\nstate machine to go through the various stages of connect, instead of\njust calling connect() and waiting for the kernel to do its thing. Of\ncourse asynchronous connecting is a benefit when you want it. Or is the\noverhead elsewhere, and I'm just being paranoid?\n", "msg_date": "Mon, 10 Jul 2000 10:28:04 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL vs. MySQL" }, { "msg_contents": "> Louis Bertrand wrote:\n> > \n> > There's a benchmark/review comparing PostgreSQL and MySQL on PHP Builder:\n> > http://www.phpbuilder.com/columns/tim20000705.php3\n> \n> I'm wondering about the comments that postgres is slower in connection\n> time, could this be related to that libpq always uses asynchronous\n> sockets to connect? It always turns off blocking and then goes through a\n> state machine to go through the various stages of connect, instead of\n> just calling connect() and waiting for the kernel to do its thing. Of\n> course asynchronous connecting is a benefit when you want it. 
Or is the\n> overhead elsewhere, and I'm just being paranoid?\n\nThe truth is, we really don't know what it is.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 Jul 2000 20:37:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] PostgreSQL vs. MySQL" }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> I'm wondering about the comments that postgres is slower in connection\n> time, could this be related to that libpq always uses asynchronous\n> sockets to connect? It always turns off blocking and then goes through a\n> state machine to go through the various stages of connect, instead of\n> just calling connect() and waiting for the kernel to do its thing.\n\nI think you'd be wasting your time to \"improve\" that. A couple of\nkernel calls are not enough to explain the problem. Moreover, we\nhad complaints about slow startup even back when libpq had never heard\nof async anything.\n\nI believe that the problem is on the backend side: there's an awful lot\nof cache-initialization and so forth that happens each time a backend\nis started. It's quick enough to be hard to profile accurately,\nhowever, so getting the info needed to speed it up is not so easy.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jul 2000 22:59:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] PostgreSQL vs. MySQL " }, { "msg_contents": "At 22:59 9/07/00 -0400, Tom Lane wrote:\n>Chris Bitmead <[email protected]> writes:\n>> I'm wondering about the comments that postgres is slower in connection\n>> time, could this be related to that libpq always uses asynchronous\n>> sockets to connect? 
It always turns off blocking and then goes through a\n>> state machine to go through the various stages of connect, instead of\n>> just calling connect() and waiting for the kernel to do its thing.\n>\n>I believe that the problem is on the backend side: there's an awful lot\n>of cache-initialization and so forth that happens each time a backend\n>is started. It's quick enough to be hard to profile accurately,\n>however, so getting the info needed to speed it up is not so easy.\n>\n\nYou could pre-start servers (ala Apache), then when a connection request\ncomes in, the connection should be pretty fast. This would involve\ndefining, for each database, the number of servers to prestart (default 0),\nand perhaps the minimum number of free servers to maintain (ie. when all\nfree servers are used up, automatically create some new ones). You would\ndefinitely need to make this dynamic to allow for clean database shutdowns.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 10 Jul 2000 13:10:42 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] PostgreSQL vs. 
MySQL " }, { "msg_contents": "\nDoes anyone have a philosophical objection to a symlink from pg_dump to\n(new) pg_backup?\n\nThe reason I ask is that with the new BLOB support, to do a proper backup\nof the database one has to type:\n\n pg_dump --blob -Fc ...etc\n\nwhere --blob tells it to dump BLOBs and -Fc tells it to use the custom file\nformat, which at the moment is the only one that supports BLOB storage.\n\nThe idea would be for pg_dump to look at its name, and make --blob and -Fc\ndefaults if it is called as pg_backup. These can of course be overridden\nwhen binary blob load direct into psql is supported (maybe 'LO_COPY from\nstdin Length {len}'?)\n\nI know someone (Tom?) objected to symlinked files drastically changing\ncommand behaviour, but this is not a drastic change, so I live in hope.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 10 Jul 2000 13:42:36 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "pg_backup symlink?" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> At 22:59 9/07/00 -0400, Tom Lane wrote:\n>> I believe that the problem is on the backend side: there's an awful lot\n>> of cache-initialization and so forth that happens each time a backend\n>> is started. It's quick enough to be hard to profile accurately,\n>> however, so getting the info needed to speed it up is not so easy.\n\n> You could pre-start servers (ala Apache), then when a connection request\n> comes in, the connection should be pretty fast. 
This would involve\n> defining, for each database, the number of servers to prestart (default 0),\n\nYeah, that's been discussed before. It seems possible if not exactly\nsimple --- one of the implications is that the postmaster no longer\nlistens for client connections, but is reduced to being a factory for\nnew backends. The prestarted backends themselves have to be listening\nfor client connections, since there's no portable way for the postmaster\nto pass off a client socket to an already-existing backend.\n\nAnd of course the major problem with *that* is how do you get the\nconnection request to arrive at a backend that's been prestarted in\nthe right database? If you don't commit to a database then there's\nnot a whole lot of prestarting that can be done.\n\nIt occurs to me that this'd get a whole lot more feasible if one\npostmaster == one database, which is something we *could* do if we\nimplemented schemas. Hiroshi's been arguing that the current hard\nseparation between databases in an installation should be done away\nwith in favor of schemas, and I'm starting to see his point...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 01:02:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] PostgreSQL vs. MySQL " }, { "msg_contents": "At 01:02 10/07/00 -0400, Tom Lane wrote:\n>\n>> You could pre-start servers (ala Apache), then when a connection request\n>> comes in, the connection should be pretty fast. This would involve\n>> defining, for each database, the number of servers to prestart (default 0),\n>\n>since there's no portable way for the postmaster\n>to pass off a client socket to an already-existing backend.\n\nThat's a pain, because you probably don't want to vary the postmaster\nbehaviour that much. 
\n\nCouldn't you modify the connection protocol to request the port of a free\ndb server, then redo the connect invisibly inside the front end?\n\nThe postmaster would have to manage free servers, and mark the db server as\nused etc etc.\n\n\n>It occurs to me that this'd get a whole lot more feasible if one\n>postmaster == one database, which is something we *could* do if we\n>implemented schemas. Hiroshi's been arguing that the current hard\n>separation between databases in an installation should be done away\n>with in favor of schemas, and I'm starting to see his point...\n\nThis has other advantages too - I'd like to be able to shutdown *one*\ndatabase, and possibly restart it in 'administrator mode' (eg. for a\nrestore operation). It also means one misbehaving DB doesn't mess up other\nDBs. Sounds very good to me.\n\nDoes this mean there would be a postmaster-master that told you the\npostmaster port to connect to for the desired DB? Or is there a nicer way\nof doing this...\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 10 Jul 2000 15:12:27 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] PostgreSQL vs. MySQL " }, { "msg_contents": "\nThis \"pre-starting\" is already being done by any web application that uses\nconnection pooling. (I suspect this speed of connection startup is only\nimportant for something like a web system, correct?). Even if you did\n\"pre-start\" these back-ends, you'd end up with one of two possibilities:\n\n- you reuse the back-end processes from one connection to the other. 
I\nsuspect this is very hard, and you'd just be recreating connection pooling\nat a lower level, which I don't think is that worthwhile an investment...\n\n- you don't reuse the back-end processes, in which case you're still\nspawning one process per connection, which remains a bad idea for web\nsystems, so you're back to the application-layer connection pooling idea.\n\nI admire the entire Postgres's team efforts to fix any and all issues that\ncome in. You guys show true humility and a real desire to make this product\nthe best it can be.\n\nIt seems to me, though, that this particular issue is better resolved at the\napplication layer.\n\n-Ben\n\non 7/10/00 1:02 AM, Tom Lane at [email protected] wrote:\n\n>> You could pre-start servers (ala Apache), then when a connection request\n>> comes in, the connection should be pretty fast. This would involve\n>> defining, for each database, the number of servers to prestart (default 0),\n> \n> Yeah, that's been discussed before. It seems possible if not exactly\n> simple --- one of the implications is that the postmaster no longer\n> listens for client connections, but is reduced to being a factory for\n> new backends. The prestarted backends themselves have to be listening\n> for client connections, since there's no portable way for the postmaster\n> to pass off a client socket to an already-existing backend.\n> \n> And of course the major problem with *that* is how do you get the\n> connection request to arrive at a backend that's been prestarted in\n> the right database? If you don't commit to a database then there's\n> not a whole lot of prestarting that can be done.\n> \n> It occurs to me that this'd get a whole lot more feasible if one\n> postmaster == one database, which is something we *could* do if we\n> implemented schemas. 
Hiroshi's been arguing that the current hard\n> separation between databases in an installation should be done away\n> with in favor of schemas, and I'm starting to see his point...\n> \n> regards, tom lane\n> \n\n", "msg_date": "Mon, 10 Jul 2000 08:56:04 -0400", "msg_from": "Benjamin Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] PostgreSQL vs. MySQL " }, { "msg_contents": "> And of course the major problem with *that* is how do you get the\n> connection request to arrive at a backend that's been prestarted in\n> the right database? If you don't commit to a database then there's\n> not a whole lot of prestarting that can be done.\n> \n> It occurs to me that this'd get a whole lot more feasible if one\n> postmaster == one database, which is something we *could* do if we\n> implemented schemas. Hiroshi's been arguing that the current hard\n> separation between databases in an installation should be done away\n> with in favor of schemas, and I'm starting to see his point...\n\nThis is interesting. You believe schema's would allow a pool of\nbackends to connect to any database? That would clearly be a win.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 09:10:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] PostgreSQL vs. MySQL" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > And of course the major problem with *that* is how do you get the\n> > connection request to arrive at a backend that's been prestarted in\n> > the right database? 
If you don't commit to a database then there's\n> > not a whole lot of prestarting that can be done.\n> >\n> > It occurs to me that this'd get a whole lot more feasible if one\n> > postmaster == one database, which is something we *could* do if we\n> > implemented schemas. Hiroshi's been arguing that the current hard\n> > separation between databases in an installation should be done away\n> > with in favor of schemas, and I'm starting to see his point...\n> \n> This is interesting. You believe schema's would allow a pool of\n> backends to connect to any database? That would clearly be a win.\n\nI'm just curious, but did a consensus ever develop on schemas? It\nseemed that the schemas/tablespace thread just ran out of steam.\nFor what it's worth, I like the idea of:\n\n1. PostgreSQL installation -> SQL cluster of catalogs\n2. PostgreSQL database -> SQL catalog\n3. PostgreSQL schema -> SQL schema\n\nThis correlates nicely with the current representation of\nDATABASE. People can run multiple SQL clusters by running\nmultiple postmasters on different ports. Today, most people\nachieve a logical separation of data by issuing multiple CREATE\nDATABASE commands. But under the above, most sites would run with\na single PostgreSQL database (SQL catalog), since:\n\n\"Catalogs are named collections of schemas in an SQL-environment\"\n\nThis would mirror the behavior of Oracle, where most people run\nwith a single Oracle SID. The logical separation would be\nachieved with SCHEMA's a level under the current DATABASE (a.k.a.\ncatalog). This eliminates the problem of using softlinks and\ncreating various subdirectories to mirror *logical* partitioning\nof data. It also alleviates the problem people currently\nencounter when they've built their data model around multiple\nDATABASE's but learn later that they need access to more than one\nsimultaneously. Instead, they'll model their design around\nmultiple SCHEMA's which exist within a single DATABASE instance. 
\n\nIt seems that the discussion of tablespaces shouldn't be mixed\nwith SCHEMA's except to note that a DATABASE (catalog) should\nhave a default TABLESPACE whose path matches the current one:\n\n../pgsql/data/base/<mydatabase>\n\nLater, users might be able to create a hierarchy of default\nTABLESPACE's where the location of the object is found with logic\nlike:\n\n1. Is there a object-specified tablespace?\n (ex: CREATE TABLE payroll IN TABLESPACE...)\n2. Is there a user-specified default tablespace?\n (ex: CREATE USER mike DEFAULT TABLESPACE...)\n2. Is there a schema-specified default tablespace?\n (ex: CREATE SCHEMA accounting DEFAULT TABLESPACE..)\n3. Use the catalog-default tablespace\n (ex: CREATE DATABASE postgres DEFAULT LOCATION '/home/pgsql')\n\nwith the last example creating the system tablespace,\n'system_tablespace', with '/home/pgsql' as the location.\n\nAnyways, it seems a consensus should be developed on the whole\nCluster/Catalog/Schema scenario.\n\nMike Mascari\n", "msg_date": "Mon, 10 Jul 2000 10:03:54 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] PostgreSQL vs. MySQL" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> It occurs to me that this'd get a whole lot more feasible if one\n>> postmaster == one database, which is something we *could* do if we\n>> implemented schemas. Hiroshi's been arguing that the current hard\n>> separation between databases in an installation should be done away\n>> with in favor of schemas, and I'm starting to see his point...\n\n> This is interesting. You believe schema's would allow a pool of\n> backends to connect to any database? That would clearly be a win.\n\nNo, I meant that we wouldn't have physically separate databases anymore\nwithin an installation, but would provide the illusion of it via\nschemas. So, only one pg_class (for example) per installation.\nThis would simplify life in a number of areas... 
but there are downsides\nto it as well, of course.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 10:29:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] PostgreSQL vs. MySQL " }, { "msg_contents": "> No, I meant that we wouldn't have physically separate databases anymore\n> within an installation, but would provide the illusion of it via\n> schemas. So, only one pg_class (for example) per installation.\n> This would simplify life in a number of areas... but there are downsides\n> to it as well, of course.\n\nOops. This seems the wrong way to go. Increasing coupling between\ndatabases to support schemas really means that we've traded one feature\nfor another, not increased our feature set. \n\nSchemas are intended to help logically partition a work area/database.\nWe will need to implement the SQL99 path lookup scheme for finding\nresources within a schema-divided database. But imho most installations\nwill still want resource- and permissions-partitioning between different\ndatabases, and schemas should figure out how to fit within a single\ndatabase.\n\nI didn't participate in the tablespace discussion because there seems to\nbe several PoV's well represented, but I'm interested in the schema\nissue ;)\n\n - Thomas\n", "msg_date": "Mon, 10 Jul 2000 15:39:54 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] PostgreSQL vs. MySQL" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> No, I meant that we wouldn't have physically separate databases anymore\n>> within an installation, but would provide the illusion of it via\n>> schemas. So, only one pg_class (for example) per installation.\n>> This would simplify life in a number of areas... but there are downsides\n>> to it as well, of course.\n\n> Oops. This seems the wrong way to go. 
Increasing coupling between\n> databases to support schemas really means that we've traded one feature\n> for another, not increased our feature set. \n\nYou could argue it that way, or you could say that we're replacing a\ncrufty old single-purpose feature with a nice new multi-purpose feature.\n\nI'm not by any means sold on removing the physical separation between\ndatabases --- I can see lots of reasons not to. But I think we ought\nto think hard about the choice, not have a knee-jerk reaction that we\ndon't want to \"eliminate a feature\". Physically separate databases\nare an implementation choice, not a user feature.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 12:08:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] PostgreSQL vs. MySQL " }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > > And of course the major problem with *that* is how do you get the\n> > > connection request to arrive at a backend that's been prestarted in\n> > > the right database? If you don't commit to a database then there's\n> > > not a whole lot of prestarting that can be done.\n> > >\n> > > It occurs to me that this'd get a whole lot more feasible if one\n> > > postmaster == one database, which is something we *could* do if we\n> > > implemented schemas. Hiroshi's been arguing that the current hard\n> > > separation between databases in an installation should be done away\n> > > with in favor of schemas, and I'm starting to see his point...\n> > \n> > This is interesting. You believe schema's would allow a pool of\n> > backends to connect to any database? That would clearly be a win.\n> \n> I'm just curious, but did a consensus ever develop on schemas? 
It\n> seemed that the schemas/tablespace thread just ran out of steam.\n> For what it's worth, I like the idea of:\n\nYou can find the entire thread in the current development tree in\ndoc/TODO.detail/tablespaces.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 14:52:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] PostgreSQL vs. MySQL" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> It occurs to me that this'd get a whole lot more feasible if one\n> >> postmaster == one database, which is something we *could* do if we\n> >> implemented schemas. Hiroshi's been arguing that the current hard\n> >> separation between databases in an installation should be done away\n> >> with in favor of schemas, and I'm starting to see his point...\n> \n> > This is interesting. You believe schema's would allow a pool of\n> > backends to connect to any database? That would clearly be a win.\n> \n> No, I meant that we wouldn't have physically separate databases anymore\n> within an installation, but would provide the illusion of it via\n> schemas. So, only one pg_class (for example) per installation.\n> This would simplify life in a number of areas... but there are downsides\n> to it as well, of course.\n\nWow, I can imagine the complications.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 14:56:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] PostgreSQL vs. 
MySQL" }, { "msg_contents": "> I'm not by any means sold on removing the physical separation between\n> databases --- I can see lots of reasons not to. But I think we ought\n> to think hard about the choice, not have a knee-jerk reaction that we\n> don't want to \"eliminate a feature\". Physically separate databases\n> are an implementation choice, not a user feature.\n\nIf we put tables from different database in the same tablespace\ndirectory, and a database gets hosed, there is no way to delete the\nfiles associated with the hosed database, unless we go around and find\nall the table files used by all databases, then remove the ones not\nreferenced.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 17:38:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] PostgreSQL vs. MySQL" }, { "msg_contents": "Philip Warner writes:\n\n> Does anyone have a philosophical objection to a symlink from pg_dump to\n> (new) pg_backup?\n\nYes. The behaviour of a program should not depend on the name used to\ninvoke it. You can use shell aliases or scripts for that.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 11 Jul 2000 00:24:36 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_backup symlink?" }, { "msg_contents": "At 00:24 11/07/00 +0200, Peter Eisentraut wrote:\n>Philip Warner writes:\n>\n>> Does anyone have a philosophical objection to a symlink from pg_dump to\n>> (new) pg_backup?\n>\n>Yes. The behaviour of a program should not depend on the name used to\n>invoke it. 
You can use shell aliases or scripts for that.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 11 Jul 2000 00:24:36 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_backup symlink?" }, { "msg_contents": "At 00:24 11/07/00 +0200, Peter Eisentraut wrote:\n>Philip Warner writes:\n>\n>> Does anyone have a philosophical objection to a symlink from pg_dump to\n>> (new) pg_backup?\n>\n>Yes. The behaviour of a program should not depend on the name used to\n>invoke it. 
You can use shell aliases or scripts for that.\n> \n> OK, I suppose I was thinking of the pg_dump symlink as a tool for\n> compatibility. \n> \n> Is there a good solution? It dumps to text for compatibility with the old\n> pg_dump, but I will most often use 'pg_dump -Fc --blob'. Is there a\n> recommended 'correct' approach?\n\nThe BSD way is to define an environment variable that is used to supply\nadditional arguments to the command. For example, BLOCKSIZE controls if\nblocks are reported in 512 or 1k sizes by commands like du.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 18:41:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_backup symlink?" }, { "msg_contents": "On Tue, 11 Jul 2000, Peter Eisentraut wrote:\n\n> Philip Warner writes:\n> \n> > Does anyone have a philosophical objection to a symlink from pg_dump to\n> > (new) pg_backup?\n> \n> Yes. The behaviour of a program should not depend on the name used to\n> invoke it. You can use shell aliases or scripts for that.\n\ntell that to *how many* Unix programs? :) sendmail, of course, being the\nfirst to jump to mind ...\n\n", "msg_date": "Mon, 10 Jul 2000 19:52:58 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_backup symlink?" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n>>>> Yes. The behaviour of a program should not depend on the name used to\n>>>> invoke it. You can use shell aliases or scripts for that.\n\n> There is already precedent -- postmaster is a symlink to postgres, but\n> operates differently due to its invocation name.\n\nThere are dozens of other examples in any standard Unix system. 
Just\nto take one example, 'ls' has six different links to it on my Unix box,\nand they all act differently (ie, supply different default switches to\nthe basic 'ls' behavior).\n\nPeter is definitely swimming upstream if he hopes to get anyone to adopt\nthe above as received wisdom.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 18:58:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_backup symlink? " }, { "msg_contents": "At 18:58 10/07/00 -0400, Tom Lane wrote:\n>\n>There are dozens of other examples in any standard Unix system. Just\n>to take one example, 'ls' has six different links to it on my Unix box,\n>and they all act differently (ie, supply different default switches to\n>the basic 'ls' behavior).\n>\n>Peter is definitely swimming upstream if he hopes to get anyone to adopt\n>the above as received wisdom.\n>\n\nDoes this mean that using a pg_backup symlink would be deemed acceptable?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 11 Jul 2000 09:16:31 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_backup symlink? " }, { "msg_contents": "On Tue, 11 Jul 2000, Philip Warner wrote:\n\n> At 18:58 10/07/00 -0400, Tom Lane wrote:\n> >\n> >There are dozens of other examples in any standard Unix system. 
Just\n> >to take one example, 'ls' has six different links to it on my Unix box,\n> >and they all act differently (ie, supply different default switches to\n> >the basic 'ls' behavior).\n> >\n> >Peter is definitely swimming upstream if he hopes to get anyone to adopt\n> >the above as received wisdom.\n> >\n> \n> Does this mean that using a pg_backup symlink would be deemed acceptable?\n\nyes :)\n\nboth 'commands' should be documented in the man pages too ... right? :)\n\n\n", "msg_date": "Mon, 10 Jul 2000 20:39:28 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_backup symlink? " }, { "msg_contents": "At 20:39 10/07/00 -0300, The Hermit Hacker wrote:\n>\n>both 'commands' should be documented in the man pages too ... right? :)\n>\n\nBelieve it or not, I have actually started on this. The SGML sources are a\nbit hard on the eyes, even for someone who used to use TeX. Is there a\nsimpler way than manually editing pg_dump.sgml?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 11 Jul 2000 17:32:53 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_backup symlink? " }, { "msg_contents": "> At 20:39 10/07/00 -0300, The Hermit Hacker wrote:\n> >\n> >both 'commands' should be documented in the man pages too ... right? :)\n> >\n> \n> Believe it or not, I have actually started on this. The SGML sources are a\n> bit hard on the eyes, even for someone who used to use TeX. Is there a\n> simpler way than manually editing pg_dump.sgml?\n\nYes, hard on the eyes. No, no better way. 
The only suggestion I have\nis to use an editor in HTML colorizer mode so the tags are colored.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jul 2000 09:09:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_backup symlink?" }, { "msg_contents": "The Hermit Hacker writes:\n\n> tell that to *how many* Unix programs? :) sendmail, of course, being the\n> first to jump to mind ...\n\nThat doesn't mean it's a good idea. For one, it would prevent anyone to\ninstall them as pg_dump71, etc., which I had hoped to offer sometime. But\nI'm just one voice... If you make pg_dump a one-line shell script on the\nother hand you don't hurt anyone.\n\nDoes Windows 98 have (sym)links? That's a supported client platform.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 11 Jul 2000 22:38:33 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_backup symlink?" }, { "msg_contents": "Philip Warner writes:\n\n> Is there a good solution? It dumps to text for compatibility with the old\n> pg_dump, but I will most often use 'pg_dump -Fc --blob'. Is there a\n> recommended 'correct' approach?\n\nIMHO, it's a bad strategy to add symlinks as shortcuts to certain\noptions. Where would that ever lead? There are tons of options settings I\nuse \"most often\" in various programs, but for that you can use shells\naliases or scripts, or the program provides an environment variable for\ndefault options.\n\nThe default behaviour of pg_dump (or pg_backup or whatever) should be to\nwrite plain text to stdout. If you want to write format \"foo\", use the\n-Ffoo option. If you want to dump blobs, use the --blob option. 
That makes\nsense.\n\nYou're really trying to force certain usage patterns by labeling one\ninvocation \"backup\" and another \"dump\". I can foresee the user problems:\n\"No, you have to use pg_dump for that, not pg_backup!\" -- \"Don't they do\nthe same thing?\" -- \"Why aren't they the same program then?\" We're still\nbattling that symptom in the createdb vs CREATE DATABASE case.\n\nWhat's wrong with just having pg_dump, period? After all pg_dump isn't\nsomething you use like `ls' or `cat' where every extra keystroke is a\npain.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 12 Jul 2000 02:23:23 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_backup symlink?" }, { "msg_contents": "At 02:23 12/07/00 +0200, Peter Eisentraut wrote:\n>\n>IMHO, it's a bad strategy to add symlinks as shortcuts to certain\n>options. Where would that ever lead? \n\nI suppose the glib answer is \"to a more convenient and easy to use tool\" 8-}.\n\n>There are tons of options settings I\n>use \"most often\" in various programs, but for that you can use shells\n>aliases or scripts, or the program provides an environment variable for\n>default options.\n\nIn this case I view pg_dump's default behaviour as an anachronism caused by\ncompatibility issues, not a feature. Dumping to text without blobs is like\nasking ls to only list files whose names are in lower case.\n\n\n>The default behaviour of pg_dump (or pg_backup or whatever) should be to\n>write plain text to stdout. If you want to write format \"foo\", use the\n>-Ffoo option. If you want to dump blobs, use the --blob option. That makes\n>sense.\n\nWith a symlink, that's what you get. You will still be able to add '-Ffoo'\nto pg_dump (or -Fp to pg_backup)\n\n\n>You're really trying to force certain usage patterns by labeling one\n>invocation \"backup\" and another \"dump\". 
I can foresee the user problems:\n>\"No, you have to use pg_dump for that, not pg_backup!\"\n\nThe actual answer to the question is: \"either use 'pg_dump -Fc --blob', or\njust use pg_backup, whichever you find easiest to remember\".\n\nThis works both ways: \"I used pg_dump to backup my db, but it doesn't\ncontain the blobs\" - I've certainly seen that message a few times. Both\nissues are solved by documentation. \n\nUntil a script file can import blob data directly from stdin, a text file\ncan not be used to backup blobs, so the default behaviour of pg_dump is\nunsuitable for backups.\n\n\n>We're still\n>battling that symptom in the createdb vs CREATE DATABASE case.\n\nMy guess is these issues were also created by legacy code.\n\n\n>What's wrong with just having pg_dump, period? After all pg_dump isn't\n>something you use like `ls' or `cat' where every extra keystroke is a\n>pain.\n\nNo, but for less commonly used utilities, it's probably more important to\nhave a simple way to invoke a basic, important, function.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 12 Jul 2000 11:50:51 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_backup symlink?" } ]
[ { "msg_contents": "hi\n\nthe list archive is dead again ...\n\nyours, oliver teuber\n-- \n // Oliver Teuber Softwareentwicklung email [email protected]\n // Schulweg 8, 37534 Badenhausen, Germany phone +49 5522 951066\n // http://www.reitsport.de/ fax +49 5522 951068\n/+============================================== I N U X rulez the World\n", "msg_date": "Mon, 10 Jul 2000 01:02:52 +0200", "msg_from": "Oliver Teuber <[email protected]>", "msg_from_op": true, "msg_subject": "mailing list archive dead again!" }, { "msg_contents": "\nfixed, permissions problem on two of the directories was preventing the\nupdate script from running ...\n\n\nOn Mon, 10 Jul 2000, Oliver Teuber wrote:\n\n> hi\n> \n> the list archive is dead again ...\n> \n> yours, oliver teuber\n> -- \n> // Oliver Teuber Softwareentwicklung email [email protected]\n> // Schulweg 8, 37534 Badenhausen, Germany phone +49 5522 951066\n> // http://www.reitsport.de/ fax +49 5522 951068\n> /+============================================== I N U X rulez the World\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 11 Jul 2000 10:49:08 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: mailing list archive dead again!" } ]
[ { "msg_contents": "\nA new version of pg_dump (for V7.*) is available for testing at:\n\n ftp://ftp.rhyme.com.au/pub/postgresql/pg_dump/blobs/\n\nThis version supports BLOBS in pg_dump using the --blob qualifier (but only\nin the custom output format).\n\nAnybody who is willing to build and test this, please do, and let me know\nhow you go.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 10 Jul 2000 12:24:59 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Announce: Testers needed for pg_dump with BLOB support." } ]
[ { "msg_contents": "Now I know that you all believe that postgres only has problems due to\nbad programming, but I'm getting another problem that I can't figure out\nin 6.5.3\n\n[PostgreSQL 6.5.3 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: db_geocrawler\n\ndb_geocrawler=> vacuum analyze;\nERROR: cannot find attribute 1 of relation pg_attrdef\n\n\nThis is causing geocrawler.com to be totally fubar at this point.\n\nAny ideas? Do I have to recover from the last backup?\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n", "msg_date": "Sun, 09 Jul 2000 19:26:57 -0700", "msg_from": "Tim Perdue <[email protected]>", "msg_from_op": true, "msg_subject": "more corruption" }, { "msg_contents": "On Sun, 9 Jul 2000, Tim Perdue wrote:\n\n> Now I know that you all believe that postgres only has problems due to\n> bad programming, but I'm getting another problem that I can't figure out\n> in 6.5.3\n> \n> [PostgreSQL 6.5.3 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n> \n> type \\? for help on slash commands\n> type \\q to quit\n> type \\g or terminate with semicolon to execute query\n> You are currently connected to the database: db_geocrawler\n> \n> db_geocrawler=> vacuum analyze;\n> ERROR: cannot find attribute 1 of relation pg_attrdef\n> \n> \n> This is causing geocrawler.com to be totally fubar at this point.\n> \n> Any ideas? Do I have to recover from the last backup?\n\njust a quick thought ... have you tried shutting down and restarting the\npostmaster? basically, \"reset\" the shared memory? 
v7.x handles\ncorruptions like that a lot cleaner, but previous versions caused odd\nresults if shared memory got corrupted ...\n\n\n", "msg_date": "Mon, 10 Jul 2000 00:13:13 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more corruption" }, { "msg_contents": "The Hermit Hacker wrote:\n> just a quick thought ... have you tried shutting down and restarting the\n> postmaster? basically, \"reset\" the shared memory? v7.x handles\n> corruptions like that a lot cleaner, but previous versions caused odd\n> results if shared memory got corrupted ...\n\nWell, I've rebooted twice. In fact, it was a hard lock that caused the\nproblems. When the machine was brought back up, the db was foobar.\n\nI'm doing something really really evil to avoid losing the last days'\ndata:\n\n-I created a new db\n-used the old db schema to create all new blank tables\n-copied the physical table files from the old data directory into the\nnew database directory\n-currently vacuuming the new db - nothing is barfing yet\n-now hopefully I can create my indexes and be back in business\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n", "msg_date": "Sun, 09 Jul 2000 20:26:33 -0700", "msg_from": "Tim Perdue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: more corruption" }, { "msg_contents": "You have recreated what pg_upgrade does. It is for upgrading system\ntables. If only your system tables were hosed, you are fine now.\n\n\n> The Hermit Hacker wrote:\n> > just a quick thought ... have you tried shutting down and restarting the\n> > postmaster? basically, \"reset\" the shared memory? v7.x handles\n> > corruptions like that a lot cleaner, but previous versions caused odd\n> > results if shared memory got corrupted ...\n> \n> Well, I've rebooted twice. In fact, it was a hard lock that caused the\n> problems. 
When the machine was brought back up, the db was foobar.\n> \n> I'm doing something really really evil to avoid losing the last days'\n> data:\n> \n> -I created a new db\n> -used the old db schema to create all new blank tables\n> -copied the physical table files from the old data directory into the\n> new database directory\n> -currently vacuuming the new db - nothing is barfing yet\n> -now hopefully I can create my indexes and be back in business\n> \n> Tim\n> \n> -- \n> Founder - PHPBuilder.com / Geocrawler.com\n> Lead Developer - SourceForge\n> VA Linux Systems\n> 408-542-5723\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 Jul 2000 23:36:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more corruption" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> You have recreated what pg_upgrade does. It is for upgrading system\n> tables. If only your system tables were hosed, you are fine now.\n\nEr, not unless he did exactly the right fancy footwork with pg_log and\nvacuum. Or have you forgotten how tricky it was to get pg_upgrade to\nwork reliably?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 01:14:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more corruption " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > You have recreated what pg_upgrade does. It is for upgrading system\n> > tables. If only your system tables were hosed, you are fine now.\n> \n> Er, not unless he did exactly the right fancy footwork with pg_log and\n> vacuum. Or have you forgotten how tricky it was to get pg_upgrade to\n> work reliably?\n\nYes, I had forgotten. 
The new table file names will make pg_upgrade\nuseless in the future.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 09:11:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more corruption" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Yes, I had forgotten. The new table file names will make pg_upgrade\n> useless in the future.\n\nHmm ... that's an implication I hadn't thought about. I wonder how much\nwork it would be to get pg_upgrade to rename table files. Be a shame to\nthrow pg_upgrade away after all the sweat we put into making it work ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 10:32:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more corruption " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Yes, I had forgotten. The new table file names will make pg_upgrade\n> > useless in the future.\n> \n> Hmm ... that's an implication I hadn't thought about. I wonder how much\n> work it would be to get pg_upgrade to rename table files. Be a shame to\n> throw pg_upgrade away after all the sweat we put into making it work ;-)\n\nSeems impossible. The physical file names are not dumped by pg_dump, so\nthere is really no way to re-associate the files with the table names. \nLooks like a lost cause.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 14:57:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more corruption" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Yes, I had forgotten. The new table file names will make pg_upgrade\n> > useless in the future.\n> \n> Hmm ... that's an implication I hadn't thought about. I wonder how much\n> work it would be to get pg_upgrade to rename table files. Be a shame to\n> throw pg_upgrade away after all the sweat we put into making it work ;-)\n\nI guess we could throw the physical file into a comment, and somehow\nread that in pg_upgrade, but it seems too error-prone. I am sure Vadim\nwill come up with something to break pg_upgrade soon anyway. It is a\nnifty feature while we have it.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 14:58:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more corruption" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>>>> Yes, I had forgotten. The new table file names will make pg_upgrade\n>>>> useless in the future.\n>> \n>> Hmm ... that's an implication I hadn't thought about. I wonder how much\n>> work it would be to get pg_upgrade to rename table files. Be a shame to\n>> throw pg_upgrade away after all the sweat we put into making it work ;-)\n\n> Seems impossible. The physical file names are not dumped by pg_dump, so\n> there is really no way to re-assocate the files with the table names. 
\n> Looks like a lost cause.\n\nWell, we'd need to modify the pg_dump format so that the OIDs of the\ntables are recorded, but given that it doesn't seem impossible.\n\nI suppose tablespaces might complicate the situation to the point where\nit wasn't worth the trouble, though.\n\nGiven Vadim's plans for WAL and smgr changes, at least the next two\nversion updates likely won't be updatable with pg_upgrade anyway.\nHowever, we've seen a couple of times recently when pg_upgrade was\nuseful as a recovery tool for system-table corruption, and that's why\nI'm unhappy about the prospect of just discarding it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 18:08:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more corruption " }, { "msg_contents": "At 18:08 10/07/00 -0400, Tom Lane wrote:\n>\n>Well, we'd need to modify the pg_dump format so that the OIDs of the\n>tables are recorded, but given that it doesn't seem impossible.\n\nAlready are (in text it's in the comments, and in the other formats it's\npart of the data in the TOC.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 11 Jul 2000 08:28:18 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more corruption " }, { "msg_contents": "> Given Vadim's plans for WAL and smgr changes, at least the next two\n> version updates likely won't be updatable with pg_upgrade anyway.\n> However, we've seen a couple of times recently when pg_upgrade was\n> useful as a recovery tool for system-table corruption, and that's why\n> I'm unhappy about the prospect of just discarding it...\n\nAgreed. I can see some cases where several types of recovery will be\nharder in the new system.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 18:38:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more corruption" } ]
[ { "msg_contents": "[postgres@geocrawler postgres]$ pg_dump -s db_geocrawler\ngetTables(): SELECT failed. Explanation from backend: 'ERROR: No such\nattribute or function 'oid'\n'.\n\n\nThat one looks pretty scary to me... No such attribute: oid??\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n", "msg_date": "Sun, 09 Jul 2000 19:44:06 -0700", "msg_from": "Tim Perdue <[email protected]>", "msg_from_op": true, "msg_subject": "Corruption Pt II" } ]
[ { "msg_contents": "\nSELECT geo_distance(location::point,'(-79.412636,43.720768)'::point)\n FROM location_table;\n\nwhere location is defined as point already?\n\nI'm getting:\n\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\neach time ... it's producing a core that I've yet to analyze, as I have to\ncompile a new debugging server to do so. figured I'd check first that\nwhat I'm trying to do is possible, before I try and 'bark up the wrong\ntree' ...\n\nthanks ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 10 Jul 2000 00:03:52 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "[7.0.2] should this work? geo_distance() ..." }, { "msg_contents": "\naddendum:\n\nIf I add a WHERE to it, it works fine ... it's only when I try to do\n*everyone* that it screws up:\n\nSELECT geo_distance(location::point,'(-79.412636,43.720768)'::point) \n FROM personal_data \n WHERE gid = 14215;\n\n geo_distance \n------------------\n 337.832062731434\n(1 row)\n\n\n\nOn Mon, 10 Jul 2000, The Hermit Hacker wrote:\n\n> \n> SELECT geo_distance(location::point,'(-79.412636,43.720768)'::point)\n> FROM location_table;\n> \n> where location is defined as point already?\n> \n> I'm getting:\n> \n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> \n> each time ... it's producing a core that I've yet to analyze, as I have to\n> compile a new debugging server to do so. 
figured I'd check first that\n> what I'm trying to do is possible, before I try and 'bark up the wrong\n> tree' ...\n> \n> thanks ...\n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n", "msg_date": "Mon, 10 Jul 2000 00:07:02 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [7.0.2] should this work? geo_distance() ..." } ]
[ { "msg_contents": "Any idea why 6.5.3 would have created tens of thousands of files like\nthese in the /data/base/db_geocrawler/ directory?\n\nThere may have even been a million of these files there - so many that\nrm -f idx_arch_date_list_year_mo.* didn't work.\n\nTim\n\n\nidx_arch_date_list_year_mo.10548 idx_arch_date_list_year_mo.1981 \nidx_arch_date_list_year_mo.6827 pg_class_relname_index.2653\nidx_arch_date_list_year_mo.10549 idx_arch_date_list_year_mo.1982 \nidx_arch_date_list_year_mo.6828 pg_class_relname_index.2654\nidx_arch_date_list_year_mo.1055 idx_arch_date_list_year_mo.1983 \nidx_arch_date_list_year_mo.6829 pg_class_relname_index.2655\nidx_arch_date_list_year_mo.10550 idx_arch_date_list_year_mo.1984 \nidx_arch_date_list_year_mo.683 pg_class_relname_index.2656\nidx_arch_date_list_year_mo.10551 idx_arch_date_list_year_mo.1985 \nidx_arch_date_list_year_mo.6830 pg_class_relname_index.2657\nidx_arch_date_list_year_mo.10552 idx_arch_date_list_year_mo.1986 \nidx_arch_date_list_year_mo.6831 pg_class_relname_index.2658\nidx_arch_date_list_year_mo.10553 idx_arch_date_list_year_mo.1987 \nidx_arch_date_list_year_mo.6832 pg_class_relname_index.2659\nidx_arch_date_list_year_mo.10554 idx_arch_date_list_year_mo.1988 \nidx_arch_date_list_year_mo.6833 pg_class_relname_index.266\nidx_arch_date_list_year_mo.10555 idx_arch_date_list_year_mo.1989 \nidx_arch_date_list_year_mo.6834 pg_class_relname_index.2660\nidx_arch_date_list_year_mo.10556 idx_arch_date_list_year_mo.199 \nidx_arch_date_list_year_mo.6835 pg_class_relname_index.2661\nidx_arch_date_list_year_mo.10557 idx_arch_date_list_year_mo.1990 \nidx_arch_date_list_year_mo.6836 pg_class_relname_index.2662\nidx_arch_date_list_year_mo.10558 idx_arch_date_list_year_mo.1991 \nidx_arch_date_list_year_mo.6837 pg_class_relname_index.2663\nidx_arch_date_list_year_mo.10559 idx_arch_date_list_year_mo.1992 \nidx_arch_date_list_year_mo.6838 pg_class_relname_index.2664\nidx_arch_date_list_year_mo.1056 
idx_arch_date_list_year_mo.1993 \nidx_arch_date_list_year_mo.6839 pg_class_relname_index.2665\nidx_arch_date_list_year_mo.10560 idx_arch_date_list_year_mo.1994 \nidx_arch_date_list_year_mo.684 pg_class_relname_index.2666\nidx_arch_date_list_year_mo.10561 idx_arch_date_list_year_mo.1995 \n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n", "msg_date": "Sun, 09 Jul 2000 20:52:47 -0700", "msg_from": "Tim Perdue <[email protected]>", "msg_from_op": true, "msg_subject": "More info" }, { "msg_contents": "Tim Perdue <[email protected]> writes:\n> Any idea why 6.5.3 would have created tens of thousands of files like\n> these in the /data/base/db_geocrawler/ directory?\n\nFunny you should mention that, because I was just in process of testing\na fix when your mail came in. The low-level routine that accesses a\nparticular segment of a multi-segment relation develops a serious case\nof Sorcerer's Apprentice syndrome if higher levels hand it a silly block\nnumber. If you tell it to access, say, block# 2 billion, it will\nmerrily start creating empty segment files till it gets to the segment\nnumber that corresponds to that block number.\n\nThe routine does need to be able to create *one* new segment, in case it\nis asked to access the block just past the current EOF (when EOF is at a\nsegment boundary) ... but not more than one. As of current sources, it\nknows not to do more.\n\nThis bug has been known for a while. 
It doesn't directly answer your\nproblem though, since the real issue is \"what generated the silly block\nnumber, and why\"?\n\nI can't quite resist the temptation to suggest that you should be\nrunning 7.0.2 ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 00:39:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More info " }, { "msg_contents": "Tom Lane wrote:\n> I can't quite resist the temptation to suggest that you should be\n> running 7.0.2 ...\n\nI'll tell you what I *really* want... How about 7.1.2 (I'm afraid of\n7.1.0)\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n", "msg_date": "Mon, 10 Jul 2000 06:05:16 -0700", "msg_from": "Tim Perdue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: More info" }, { "msg_contents": "Tim Perdue <[email protected]> writes:\n> Tom Lane wrote:\n>> I can't quite resist the temptation to suggest that you should be\n>> running 7.0.2 ...\n\n> I'll tell you what I *really* want... How about 7.1.2 (I'm afraid of\n> 7.1.0)\n\nAnd when that's out, you'll be waiting for 7.2.2?\n\nI can understand reluctance to install whatever.0 as a production\nserver on its first day of release. But we have enough field experience\nnow with 7.0.* to say confidently that it is more stable than 6.5.*,\nand we know for a fact that we have fixed hundreds of bugs in it\ncompared to 6.5.*. Frankly, if I had to bet today, I'd bet on 7.1.*\nbeing less stable than 7.0.*, at least till we shake out all the\nimplications of TOAST, WAL, etc.\n\nIf you're being bitten by 6.5.* bugs then an update seems in order.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 10:26:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More info " }, { "msg_contents": "\n> I can understand reluctance to install whatever.0 as a production\n> server on its first day of release. 
But we have enough field experience\n> now with 7.0.* to say confidently that it is more stable than 6.5.*,\n> and we know for a fact that we have fixed hundreds of bugs in it\n> compared to 6.5.*. Frankly, if I had to bet today, I'd bet on 7.1.*\n> being less stable than 7.0.*, at least till we shake out all the\n> implications of TOAST, WAL, etc.\n \nIs WAL planned for 7.1? What is the story with WAL? I'm a bit concerned\nthat the current storage manager is going to be thrown in the bit bucket\nwithout any thought for its benefits. There's some stuff I want to do\nwith it like resurrecting time travel, some database replication stuff\nwhich can make use of the non-destructive storage method etc. There's a\nwhole lot of interesting stuff that can be done with the current storage\nmanager.\n", "msg_date": "Tue, 11 Jul 2000 10:38:39 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "postgres 7.2 features." }, { "msg_contents": "WAL is 7.1. It doesn't affect the storage manager very much. A new\nstorage manager is scheduled for 7.2.\n\n> \n> > I can understand reluctance to install whatever.0 as a production\n> > server on its first day of release. But we have enough field experience\n> > now with 7.0.* to say confidently that it is more stable than 6.5.*,\n> > and we know for a fact that we have fixed hundreds of bugs in it\n> > compared to 6.5.*. Frankly, if I had to bet today, I'd bet on 7.1.*\n> > being less stable than 7.0.*, at least till we shake out all the\n> > implications of TOAST, WAL, etc.\n> \n> Is WAL planned for 7.1? What is the story with WAL? I'm a bit concerned\n> that the current storage manager is going to be thrown in the bit bucket\n> without any thought for its benefits. There's some stuff I want to do\n> with it like resurrecting time travel, some database replication stuff\n> which can make use of the non-destructive storage method etc. 
There's a\n> whole lot of interesting stuff that can be done with the current storage\n> manager.\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 20:58:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 7.2 features." }, { "msg_contents": "Bruce Momjian wrote:\n> \n> WAL is 7.1. It doesn't affect the storage manager very much. A new\n> storage manager is scheduled for 7.2.\n\nIs it going to hurt my ability to resurrect time travel using the\noriginal\nmethodology?\n", "msg_date": "Tue, 11 Jul 2000 11:29:01 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 7.2 features." } ]
[ { "msg_contents": "OK, thanks to the www.phpbuilder.com PostgreSQL/MySQL comparison, there\nis another PostgreSQL/MySQL thread on slashdot.org. Looks interesting,\nand of course, we are looking good too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 00:14:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Slashdot discussion" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> OK, thanks to the www.phpbuilder.com PostgreSQL/MySQL comparison, there\n> is another PostgreSQL/MySQL thread on slashdot.org. Looks interesting,\n> and of course, we are looking good too.\nI stuck my two cents in of course. :) \nI think most people are of the opinion that each tool is good to fit a\ncertain niche. Without wanting to start a thread war I think postgres is\ngreat, but for many people the learning curve is too great and thus\nMySQL is a good introduction.\nI think a lot of people get started in MySQL and move up to Postgres.\n\nCheers,\n Graeme\n", "msg_date": "Mon, 10 Jul 2000 14:31:52 +1000", "msg_from": "Graeme Merrall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "Graeme Merrall wrote:\n> Without wanting to start a thread war I think postgres is\n> great, but for many people the learning curve is too great and thus\n> MySQL is a good introduction.\n\nIn what way is mysql easier to learn?\n", "msg_date": "Mon, 10 Jul 2000 16:19:12 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "Chris Bitmead wrote:\n> \n> Graeme Merrall wrote:\n> > Without wanting to start a thread war I think postgres is\n> > great, but for many people the learning curve is too great and thus\n> > MySQL is a good 
introduction.\nWell speaking personally I found the documentation for MySQL better, the\ninstall process simpler and getting stuff done just generally easier.\nThings like phpMyAdmin just aren't out there for postgres although the\npgsql port is a pretty damn fine effort and I've never seen anything\nlike pgaccess for mysql. It may have been that I got my start in mSQL so\nthe transition was a little easier. Having said that, having now seen\nOracle and various other larger RDBMS's I understand the niches that\npostgres and mysql fill. Postgres is now my DB of choice so I'm not\nanti-mysql or anti-pgsql and if I don't like the docs that's fine, I\nshould just do something about it. \nOne thing that really cheeses me off are ppl going \"Your documentation\nsucks - do something about it\" in an open source situation. There's the\nCVS big fella, get writing. And if I was in a situation to do that, I\nwould.\n\nCheers,\n Graeme\n", "msg_date": "Mon, 10 Jul 2000 16:37:31 +1000", "msg_from": "Graeme Merrall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "Bruce Momjian writes:\n\n> OK, thanks to the www.phpbuilder.com PostgreSQL/MySQL comparison, there\n> is another PostgreSQL/MySQL thread on slashdot.org. Looks interesting,\n> and of course, we are looking good too.\n\nIs anyone else noticing this: Everytime this sort of thing comes up a\nnumber of people invariably tell that they are using MySQL because it's\neasier to install, and that PostgreSQL is difficult (\"a pain\") to install.\n\nI've studied the MySQL installation instructions, and they don't strike me\nas inherently simpler. 
Is it only perception, or what can we do better?\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 11 Jul 2000 00:24:20 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Bruce Momjian writes:\n> \n> > OK, thanks to the www.phpbuilder.com PostgreSQL/MySQL comparison, there\n> > is another PostgreSQL/MySQL thread on slashdot.org. Looks interesting,\n> > and of course, we are looking good too.\n> \n> Is anyone else noticing this: Everytime this sort of thing comes up a\n> number of people invariably tell that they are using MySQL because it's\n> easier to install, and that PostgreSQL is difficult (\"a pain\") to install.\n> \n> I've studied the MySQL installation instructions, and they don't strike me\n> as inherently simpler. Is it only perception, or what can we do better?\n\nI am confused by this also.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 18:39:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> I've studied the MySQL installation instructions, and they don't strike me\n> as inherently simpler. Is it only perception, or what can we do better?\n\nGood question. My 2 cents...\n\n1) The RPM-installed binaries that come with RH 6.0/6.1 can easily and\nstealthily interfere with a src.tar.gz installation due to $PATH settings\n(accidentally drawing on /bin/p* instead of /opt/pgsql/bin/p*) ... 
Adding\ndetection to setup/install scripts might mitigate that.\n\n2) Write an install wizard script that figures everything out for the\nuser based on questions/prompts.\n\nRegards,\nEd Loehr\n", "msg_date": "Mon, 10 Jul 2000 17:44:02 -0500", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I've studied the MySQL installation instructions, and they don't strike me\n> as inherently simpler. Is it only perception, or what can we do better?\n\nIMHO it's partly a documentation problem, and partly a matter of people\nnot having looked at recent versions. A few years back it did take some\nknow-how to get Postgres installed.\n\nI think the install-procedure docs in 7.0 are markedly better than they\nwere before, but they could still use further improvement.\n\nYour work on configure/build/install scripts will help too of course ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 18:54:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion " }, { "msg_contents": "\n> Is anyone else noticing this: Everytime this sort of thing comes up a\n> number of people invariably tell that they are using MySQL because it's\n> easier to install, and that PostgreSQL is difficult (\"a pain\") to install.\n> \n> I've studied the MySQL installation instructions, and they don't strike me\n> as inherently simpler. Is it only perception, or what can we do better?\nPossibly because for most people the process is a simple './configure;\nmake; make install'\n\nPgsql doesn't do this. Not that the install process is any worse, but\nmore because pgsql is a different beast and it's designed to work\ndifferently. Just as (say) you can't install Oracle the same way as\nMySQL, you can't install pgsql the same way either. 
The price of freedom\nis eternal vigilance or in our case, the price of a more powerful DB is\na harder install :)\nI had the ermm.. joy of installing Oracle in a dev situation and that\nwas much trickier than a pgsql install. env vars, directory set up -\nsheesh :)\n\nCheers,\n Graeme\n", "msg_date": "Tue, 11 Jul 2000 09:19:47 +1000", "msg_from": "Graeme Merrall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "Chris Bitmead wrote:\n\n> Graeme Merrall wrote:\n> > Without wanting to start a thread war I think postgres is\n> > great, but for many people the learning curve is too great and thus\n> > MySQL is a good introduction.\n>\n> In what way is mysql easier to learn?\n\nFrom my point of view postgres is easier to install and to start from.\nI\nbegan to learn SQL and databases with postgres95. It's still the\neasiest\ndatabase to install and to manage. The only thing is to read the\ndocumentation,\nand Postgres is now very well documented. I have tried to run mysql\nmany\ntimes but was never patient enough to see it run. Beginners' knowledge is\nperhaps\nmore advanced than on the Postgres mailing list because everyone has a\nfriend\nwho has run mysql ... Not me :-)\n\n", "msg_date": "Tue, 11 Jul 2000 01:21:11 +0200", "msg_from": "DAROLD Gilles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "On Tue, 11 Jul 2000, Graeme Merrall wrote:\n\n> \n> > Is anyone else noticing this: Everytime this sort of thing comes up a\n> > number of people invariably tell that they are using MySQL because it's\n> > easier to install, and that PostgreSQL is difficult (\"a pain\") to install.\n> > \n> > I've studied the MySQL installation instructions, and they don't strike me\n> > as inherently simpler. 
Is it only perception, or what can we do better?\n> Possibly because for most people the process is a simple './configure;\n> make; make install'\n> \n> Pgsql doesn't do this. Not the install process is any less better but\n\nhuh? all i do is './configure;make;make install' ...\n\n\n", "msg_date": "Mon, 10 Jul 2000 20:40:27 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Tue, 11 Jul 2000, Graeme Merrall wrote:\n> \n> >\n> > > Is anyone else noticing this: Everytime this sort of thing comes up a\n> > > number of people invariably tell that they are using MySQL because it's\n> > > easier to install, and that PostgreSQL is difficult (\"a pain\") to install.\n> > >\n> > > I've studied the MySQL installation instructions, and they don't strike me\n> > > as inherently simpler. Is it only perception, or what can we do better?\n> > Possibly because for most people the process is a simple './configure;\n> > make; make install'\n> >\n> > Pgsql doesn't do this. Not the install process is any less better but\n> \n> huh? all i do is './configure;make;make install' ...\n\nI was referring to creating a new user etc which although mysql says it\nwould be a good idea to do, doesn't recommend from the start.\nHmm.. mind you, I've yet to do a v7.x install so I should just keep my\ntrap shut :)\n", "msg_date": "Tue, 11 Jul 2000 09:47:47 +1000", "msg_from": "Graeme Merrall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "\n> > Bruce Momjian writes:\n> > \n> > > OK, thanks to the www.phpbuilder.com PostgreSQL/MySQL comparison, there\n> > > is another PostgreSQL/MySQL thread on slashdot.org. 
Looks interesting,\n> > > and of course, we are looking good too.\n> > \n> > Is anyone else noticing this: Everytime this sort of thing comes up a\n> > number of people invariably tell that they are using MySQL because it's\n> > easier to install, and that PostgreSQL is difficult (\"a pain\") to install.\n> > \n> > I've studied the MySQL installation instructions, and they don't strike me\n> > as inherently simpler. Is it only perception, or what can we do better?\n> \n> I am confused by this also.\n\nMost of us tend to think of the development of the human species as if\nthe natural evolution was still a factor. It isn't anymore -- not so\nmuch as it used to be. Back in the 19th century, what were the\nchances of survival for a child born with a three-chamber heart? What\nare these now?\n\nBecause of the ever diminishing evolutionary pressure, we become ever\nmore different and the concept of \"bad\" becomes murky. What once was\ndeadly is just abnormal today, and may even be OK tomorrow. How could\nsuch an increasing variety pass unnoticed in the world of software,\nwhich, like other tools in general, is arguably an extension of one's\norganism?\n\nI recall the days just about 20 years back, when Bill first emerged\nwith his BASIC. Who in their sane mind would then bet on its\nsurvival, let alone see any commercial value in it? Even today, I know\nlots of people who believe that Bill's BASIC was and is the best\nsoftware available to them. Who cares whether it works or not? It's\ngood. 
Period.\n\nBottom line -- we will eventually come to peace with the following\nugly facts:\n\n* Bad things survive\n* Useless things flourish\n* The perception of the difficulty and simplicity is random\n* The presence of features may repel users as much as the lack thereof\n* A fairly large population *prefers* to do things in the hard way\n* Free market is not automatically a smart one (look at the destiny\n of the Dvorak keyboard or how they harass the GM food manufacturers).\n\nSad as all this is, we are going to live with it. But you folks are\ndoing a great job!\n\n--Gene\n", "msg_date": "Mon, 10 Jul 2000 18:49:25 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion " }, { "msg_contents": "On Mon, Jul 10, 2000 at 08:40:27PM -0300, The Hermit Hacker wrote:\n> On Tue, 11 Jul 2000, Graeme Merrall wrote:\n> \n> > \n> > > Is anyone else noticing this: Everytime this sort of thing comes up a\n> > > number of people invariably tell that they are using MySQL because it's\n> > > easier to install, and that PostgreSQL is difficult (\"a pain\") to install.\n> > > \n> > > I've studied the MySQL installation instructions, and they don't strike me\n> > > as inherently simpler. Is it only perception, or what can we do better?\n> > Possibly because for most people the process is a simple './configure;\n> > make; make install'\n> > \n> > Pgsql doesn't do this. Not the install process is any less better but\n> \n> huh? 
all i do is './configure;make;make install' ...\n\nAnd what about CVS?\n\nbash-2.01$ cd ../pgsql\nbash-2.01$ cvs -z9 update -dP\ncvs [update aborted]: authorization failed: server postgresql.org rejected\naccess\nbash-2.01$ \n\n-Egon\n", "msg_date": "Tue, 11 Jul 2000 01:55:49 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "On Tue, 11 Jul 2000, Graeme Merrall wrote:\n\n> The Hermit Hacker wrote:\n> > \n> > On Tue, 11 Jul 2000, Graeme Merrall wrote:\n> > \n> > >\n> > > > Is anyone else noticing this: Everytime this sort of thing comes up a\n> > > > number of people invariably tell that they are using MySQL because it's\n> > > > easier to install, and that PostgreSQL is difficult (\"a pain\") to install.\n> > > >\n> > > > I've studied the MySQL installation instructions, and they don't strike me\n> > > > as inherently simpler. Is it only perception, or what can we do better?\n> > > Possibly because for most people the process is a simple './configure;\n> > > make; make install'\n> > >\n> > > Pgsql doesn't do this. Not the install process is any less better but\n> > \n> > huh? all i do is './configure;make;make install' ...\n> \n> I was referring to creating a new user etc which although mysql says it\n> would be a good idea to do, doesn't recommend from the start.\n> Hmm.. mind you, I've yet to do a v7.x install so I should just keep my\n> trap shut :)\n\nwhat? mysql lets you install as root?? :)\n\n\n", "msg_date": "Mon, 10 Jul 2000 21:03:53 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "Just a comment on the Slashdot thread in general. I see us really\ngaining on MySQL. Every month we get farther. 
Our rate of improvement\nmeans we should leave them in the dust in 1-2 years.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 20:16:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Bruce Momjian writes:\n> \n> > OK, thanks to the www.phpbuilder.com PostgreSQL/MySQL comparison, there\n> > is another PostgreSQL/MySQL thread on shashdot.org. Looks interesting,\n> > and of course, we are looking good too.\n> \n> Is anyone else noticing this: Everytime this sort of thing comes up a\n> number of people invariably tell that they are using MySQL because it's\n> easier to install, and that PostgreSQL is difficult (\"a pain\") to install.\n> \n> I've studied the MySQL installation instructions, and they don't strike me\n> as inherently simpler. Is it only perception, or what can we do better?\n\nI think postgres is pretty easy to install. But for newbies, I think\nmention should be made of \"createuser\" within the top level \"INSTALL\"\ndocument. I think figuring out getting permission to create a database\nis \nsomething new users struggle a bit with.\n", "msg_date": "Tue, 11 Jul 2000 11:16:00 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "Ed Loehr wrote:\n\n> Good question. My 2 cents...\n> \n> 1) The RPM-installed binaries that come with RH 6.0/6.1 can easily and\n> stealthly interfere with a src.tar.gz installation due to $PATH settings\n> (accidentally drawing on /bin/p* instead of /opt/pgsql/bin/p* ... Adding\n> detection to setup/install scripts might mitigate that.\n\nThat's a good point too. 
The INSTALL instructions should probably\ncontain\ninfo on how to remove the default redhat or debian postgres\ninstallation.\n", "msg_date": "Tue, 11 Jul 2000 11:17:53 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "PostgreSQL:\n\n\nEarly on, it was quite a bit easier to find resources on the Internet\npertaining to MySQL. PostgreSQL info is out there, but I had to look a\nbit further to find it. #mysql has much more activity than does\n#postgresql, and since I prefer this method of communication over\nmailing-lists, I found this to be very much in MySQL's favor.\n\nThe MySQL installation was somewhat more straightforward, but I didn't\nfind the PostgreSQL installation to be terribly intimidating.\n\nI found MySQL to be significantly easier to use, however, once I started\nexperimenting with basic functionality. Most tasks in MySQL were\nstraightforward, so I was surprised to find that the same tasks in\nPostgreSQL required much more effort (for example dropping a column, or\nchanging a column's data type). Further, I ran across a web-based\nadministrative program called WebMin that has a MySQL module. For a\nnovice user like myself, this kind of GUI simplifies things tremendously\nand has really made working with MySQL much more pleasant in comparison.\n\nAs I learned more about the advanced features PostgreSQL offered, I\nbecame concerned that MySQL might not be desirable for my application. \nBut I shortly realized that while PostgreSQL includes support for\nadvanced functionality such as Transactions, Subselects, Views, etc.,\nit's not likely that I'll have the skills to take advantage of these\nfeatures for quite some time. And since it seems reasonable to expect\nthat MySQL will add many of these features in the near future, it makes\nsense for me to go with MySQL for my application.\n\nOf course, I reserve the right to change my mind. 
:-)\n\n\n\nPeter Eisentraut wrote:\n> \n> Bruce Momjian writes:\n> \n> > OK, thanks to the www.phpbuilder.com PostgreSQL/MySQL comparison, there\n> > is another PostgreSQL/MySQL thread on slashdot.org. Looks interesting,\n> > and of course, we are looking good too.\n> \n> Is anyone else noticing this: Everytime this sort of thing comes up a\n> number of people invariably tell that they are using MySQL because it's\n> easier to install, and that PostgreSQL is difficult (\"a pain\") to install.\n> \n> I've studied the MySQL installation instructions, and they don't strike me\n> as inherently simpler. Is it only perception, or what can we do better?\n> \n> --\n> Peter Eisentraut Sernanders väg 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 10 Jul 2000 18:48:59 -0700", "msg_from": "\"J.R. Belding\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "> J.R. Belding wrote:\n> \n> #mysql has much more activity than does #postgresql, and since I prefer\n> this method of communication over mailing-lists, I found this to be very\n> much in MySQL's favor.\n\n\nHmm. So which server do most postgresql people hang out on?\n\n- Jeff\n\n\n-- [email protected] --------------------------------- http://linux.conf.au/ --\n\n linux.conf.au - coming to Sydney in January 2001\n\n\tInstalling Linux Around Australia - http://linux.org.au/installfest/\n\n", "msg_date": "Tue, 11 Jul 2000 12:14:47 +1000", "msg_from": "Jeff Waugh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "Bruce Momjian wrote:\n\n> Just a comment on the Slashdot thread in general. I see us really\n> gaining on MySQL. Every month we get farther. 
Our rate of improvement\n> means we should leave them in the dust in 1-2 years.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nDo you think it's a good idea to turn fsync off by default? I think most\nfirst-time users will not know about turning fsync off when comparing\nPostgreSQL and MySQL, and mistakenly judge PostgreSQL's slowness.\n\nRegards,\nThomas.\n\n", "msg_date": "Tue, 11 Jul 2000 13:09:05 +0800", "msg_from": "Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "Chris Bitmead wrote:\n >That's a good point too. The INSTALL instructions should probably\n >contain\n >info on how to remove the default redhat or debian postgres\n >installation.\n \nFor Debian, at least, this is a standard operation for any package; i.e.,\n`dpkg --remove package' or `dpkg --purge package'. (--purge removes the\nconfiguration files as well.) dpkg will complain if this would cause any\ndependency to be violated.\n\nI'm not sure that there's much use in putting this in the INSTALL; any\nDebian user ought to know it in any case, and Debian users don't generally\nexpect to read the upstream install files because they expect the\npackage maintainer to have handled anything necessary. \nPostgreSQL is in the main Debian distribution, so there ought to be no\nspecial issues about its installation or removal.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"I waited patiently for the LORD; and he inclined unto \n me, and heard my cry. 
He brought me up also out of an \n horrible pit, out of the miry clay, and set my feet \n upon a rock, and established my goings. And he hath \n put a new song in my mouth, even praise unto our God.\n Many shall see it, and fear, and shall trust in the \n LORD.\" Psalms 40:1-3 \n\n\n", "msg_date": "Tue, 11 Jul 2000 11:29:54 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion " }, { "msg_contents": "On Mon, 10 Jul 2000, J.R. Belding wrote:\n\n> PostgreSQL:\n> \n> \n> Early on, it was quite a bit easier to find resources on the Internet\n> pertaining to MySQL. PostgreSQL info is out there, but I had to look a\n> bit further to find it. #mysql has much more activity than does\n> #postgresql, and since I prefer this method of communication over\n> mailing-lists, I found this to be very much in MySQL's favor.\n> \n> The MySQL installation was somewhat more straightforward, but I didn't\n> find the PostgreSQL installation to be terribly intimidating.\n> \n> I found MySQL to be significantly easier to use, however, once I started\n> experimenting with basic functionality. Most tasks in MySQL were\n> straightforward, so I was surprised to find that the same tasks in\n> PostgreSQL required much more effort (for example dropping a column, or\n> changing a column's data type). Further, I ran across a web-based\n> administrative program called WebMin that has a MySQL module. For a\n> novice user like myself, this kind of GUI simplifies things tremendously\n> and has really made working with MySQL much more pleasant in comparison.\n> \n> As I learned more about the advanced features PostgreSQL offered, I\n> became concerned that MySQL might not be desirable for my application. 
\n> But I shortly realized that while PostgreSQL includes support for\n> advanced functionality such as Transactions, Subselects, Views, etc.,\n\ntransactions:\n\n begin;\n select <value> from table;\n update table set field = <value>;\n end;\n\nsubselect:\n\nSELECT a.field\n FROM atable a, btable b\n WHERE a.key = b.key\n AND a.field2 IN ( SELECT field2 \n FROM ctable\n WHERE field1 = value );\n\nview:\n\nCREATE VIEW a_field AS\nSELECT a.field \n FROM atable a, btable b\n WHERE a.key = b.key\n AND a.field2 IN ( SELECT field2\n FROM ctable \n WHERE field1 = value );\n\nnext? :)\n\nit makes even more sense if you can put it into context of something, but\nyou get the idea, I hope :)\n\n\n\n\n\n\n", "msg_date": "Tue, 11 Jul 2000 09:27:30 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "On Tue, 11 Jul 2000, Jeff Waugh wrote:\n\n> > J.R. Belding wrote:\n> > \n> > #mysql has much more activity than does #postgresql, and since I prefer\n> > this method of communication over mailing-lists, I found this to be very\n> > much in MySQL's favor.\n> \n> \n> Hmm. So which server do most postgresql people hang out on?\n\nEFNet, channel #PostgreSQL ... always someone there, but activity on it\ntends to be sporadic ...\n\n\n", "msg_date": "Tue, 11 Jul 2000 09:28:03 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "The Hermit Hacker wrote:\n> On Mon, 10 Jul 2000, J.R. 
Belding wrote:\n> > As I learned more about the advanced features PostgreSQL offered, I\n> > became concerned that MySQL might not be desirable for my application.\n>\n> transactions:\n>\n> begin;\n> select <value> from table;\n> update <value> in table;\n> end;\n\n I never considered transactions an advanced feature. It's in\n the basics of every relational database, coming in MySQL too\n now.\n\n The very first DB application I ever built for myself (using\n PG 4.2 with a self-made embedded PostQUEL preprocessor for C)\n needed them already.\n\n Might have an impact that I used Siemens ISAM files with\n LEASY (a transactional layer with before/after image logging\n capability on top of ISAM) in the BS2000 mainframe\n environment for years before starting that project.\n\n Anyway, I think the first thing anybody starting with\n databases MUST learn is the concept of transactions! How many\n people must be out in the world, considering themselves SQL\n DB programmers, who learned the \"basics\" using past MySQL\n releases? Father forgive them, because they don't know what\n they do!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n\n\n", "msg_date": "Tue, 11 Jul 2000 14:58:38 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Slashdot discussion" }, { "msg_contents": "On Tue, Jul 11, 2000 at 12:24:20AM +0200, Peter Eisentraut wrote:\n\n> Is anyone else noticing this: Everytime this sort of thing comes up a\n> number of people invariably tell that they are using MySQL because it's\n> easier to install, and that PostgreSQL is difficult (\"a pain\") to install.\n\nI've noticed that, too, but having installed them both (from source and from\npackages), I have to say they're both \"a pain\" to install, at least as much\nas anything is. Of course they are. They're complicated, and they're set\nup to be flexible in installation on many machines.\n\nIt occurs to me, though, that many people may not install from source. \nMaybe the RPMs are better for MySQL? I don't use 'em, so I don't know.\n\nA\n\n-- \nAndrew Sullivan Computer Services\n<[email protected]> Burlington Public Library\n+1 905 639 3611 x158 2331 New Street\n Burlington, Ontario, Canada L7R 1J4\n", "msg_date": "Tue, 11 Jul 2000 09:04:01 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "Once I ran into a guy who said that the postgres rpm was broken in Red Hat\n5.2. This was when I was first getting into postgres. I spent some time\nwith it and realized that there were a number of things that had to be\ndone before it would work: creating the postgres users, initializing the\ndatabase, getting something into rc.d so it would boot up\nautomatically. The RPM was not broken, but it was a pain to get postgres\nrunning unless you spent some time reading about it. 
My experience with\nMySQL was less painful, although dealing with user permissions was more\ncomplex.\n\n----------------------------------------------------------------\nTravis Bauer | CS Grad Student | IU |www.cs.indiana.edu/~trbauer\n----------------------------------------------------------------\n\nOn Tue, 11 Jul 2000, Peter Eisentraut wrote:\n\n> \n> Is anyone else noticing this: Everytime this sort of thing comes up a\n> number of people invariably tell that they are using MySQL because it's\n> easier to install, and that PostgreSQL is difficult (\"a pain\") to install.\n> \n> I've studied the MySQL installation instructions, and they don't strike me\n> as inherently simpler. Is it only perception, or what can we do better?\n> \n\n", "msg_date": "Tue, 11 Jul 2000 12:06:45 -0500 (EST)", "msg_from": "Travis Bauer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "Travis Bauer <[email protected]> writes:\n\n> Once I ran into a guy who said that the postgres rpm was broken in Red Hat\n> 5.2. This was when I was first getting into postgres. I spent some time\n> with it and realized that there were a number of things that had to be\n> done before it would work: creating the postgres users, initializing the\n> database, getting something into rc.d so it would boot up\n> automatically. The RPM was not broken, but it was a pain to get postgres\n> running unless you spent some time reading about it. My experience with\n> MySQL was less painful, although dealing with user permissions was more\n> complex.\n<snip>\n\nThe current Red Hat RPMS do create the postgres user and initialize\nthe database but don't define any of the environment variables. One\nminor comment about the RPMS at the postgres website... The current\nstandard in Red Hat RPMS is to gzip all man pages because the man\nprogram will automatically decompress them. 
If you run a RPM through\nthe program rpmlint, it will provide some useful warning about other\npotential packing problems also...\n\n-- \nPrasanth Kumar\[email protected]\n", "msg_date": "11 Jul 2000 10:31:40 -0700", "msg_from": "[email protected] (Prasanth A. Kumar)", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "Travis Bauer wrote:\n> \n> Once I ran into a guy who said that the postgres rpm was broken in Red Hat\n> 5.2. This was when I was first getting into postgres. I spent some time\n> with it and realized that there were a number of things that had to be\n> done before it would work: creating the postgres users, initializing the\n> database, getting something into rc.d so it would boot up\n> automatically. The RPM was not broken, but it was a pain to get postgres\n\nAnd, if most people's experience with the RedHat 5.2 RPM's is what\nthey're going on, they need to get with the program -- RH 5.2 shipped\nPostgreSQL *6.3.2* which is absolutely ancient. Although, at the time,\n6.3.2 was better than nothing.\n\nThe newer RPM's, hopefully, have corrected many of the problems that\nexisted with the _horrid_ 6.3.2 RPMset RedHat shipped with 5.1/5.2 (5.0\nshipped *6.2.1*, which we won't even talk about -- although it was a\nbetter RPM set than *6.1.1*, which is where I first experienced the 'Joy\nof PostgreSQL') And, yes, the 6.3.2 RPMset was _horrid_ -- only there\nwere you treated to the joy of an upgrade from one release of 6.3.2 to\nanother release of 6.3.2 totally breaking your database without warning\n(thanks to the misconceived postgresql-data subpackage).\n\nThe PostgreSQL group has come light years from the days of 6.1.1 -- I\ncannot overemphasize that! Although, I won't go as far as the 6.5\nrelease statement of \"This represents the team's Final Mastery...\" :-).\n\nThe documentation is several orders of magnitude better in 7.x than\n6.1.1 or even as late as 6.3.2. 
The web site is also much much better\n-- I still remember the logo breaking through the brick wall.\n\nSo, if most people's experience with PostgreSQL is that old.....\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 11 Jul 2000 13:50:54 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "\"Prasanth A. Kumar\" wrote:\n> \n> Travis Bauer <[email protected]> writes:\n> \n> > Once I ran into a guy who said that the postgres rpm was broken in Red Hat\n> > 5.2. This was when I was first getting into postgres. I spent some time\n> > with it and realized that there were a number of things that had to be\n> > done before it would work: creating the postgres users, initializing the\n> > database, getting something into rc.d so it would boot up\n> > automatically. The RPM was not broken, but it was a pain to get postgres\n> > running unless you spent some time reading about it. My experience with\n> > MySQL was less painful, although dealing with user permissions was more\n> > complex.\n> <snip>\n> \n> The current Red Hat RPMS do create the postgres user and initialize\n> the database but doesn't define any of the environment variables.\n\nAnd where should they be defined? /etc/profile, perhaps? Do I really\nwant to go there with ENVVARS? (maybe I do -- maybe I don't :-))\n\n> One\n> minor comment about the RPMS at the postgress website... The current\n> standard in Red Hat RPMS is to gzip all man pages because the man\n> program will automatically decompress them. 
If you run a RPM through\n> the program rpmlint, it will provide some useful warning about other\n> potential packing problems also...\n\nThe only other rpmlint-able problem with the 7.0.2-2 set is the dangling\nsymlink of os.h in -devel (which will be fixed in the next release).\n(and rpmlint's broken idea of file and directory permissions, which are\nset the way they are for a reason...., and its broken idea of Vendor and\nDistribution......).\n\nThe man pages are compressed in the latest RawHide release -- however,\ndue to my desire for cross-distribution capability with these RPM's,\nsince each distribution seems to have a different idea of where things\nought to go, and what format they ought to be in... --buildpolicy in the\nlatest RPM version fixes this sort of thing. The spec file itself is\nbuilt with this in mind, allowing for the manpages to be compressed in\nany format or not compressed at all (due to the use of the appended * in\nthe %files listing). Of course, that is a build-time thing -- my goal is\nnot binary RPM compatibility, but SOURCE RPM compatibility.\n\nBut, thanks for the critique anyway! :-)\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 11 Jul 2000 14:05:34 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "On Tue, 11 Jul 2000, Lamar Owen wrote:\n\n> And, if most people's experience with the RedHat 5.2 RPM's is what\n> they're going on, they need to get with the program -- RH 5.2 shipped\n> PostgreSQL *6.3.2* which is absolutely ancient. Although, at the time,\n> 6.3.2 was better than nothing.\n\nHello Lamar,\n\n'Better than nothing' - hmm...\n\nPerhaps better than MySQL? Definitely better than PROGRESS which is\nwhat it replaced in my shop. I have one linux box running Pg up over\n190 days - and it gets hammered on daily. 
Mind you I don't run RedHat \non production machines - it's a little too cute and a little too unstable. \nI use slackware.\n\nAnyway, I have some development boxes using newer versions of Pg\n(both FBSD and Linux - even a RedHat workstation) but there is nothing\nwrong with 6.3.2. Sure 7+ boasts more features and better performance\nbut there is nothing fatally flawed in 6.3.2. Trust me, 190 days for\na linux box running linux is pretty good. Especially when the users\nare social workers - afraid of technology and overly fond of abusing the\ntext data type.\n\n6.3.2 is certainly 'better than nothing' and, aside from slow vacuums,\nI have no complaints. Of course, I have the old logo taped to the cover\nof my notebook: a printout of the various pg manuals and Bruce's book. \nBeing a bit of a blockhead I kind of fancy the exploding bricks. ;-)\n\nBTW, re the slashdot business...\n\nMaybe MySQL is 'perceived' as easier to use than Pg - like Access (Abcess?)\nis perceived as being friendlier than a real database. But the reality\nis that MySQL always struck me as being more of a toy than an industrial\nstrength db - and installation isn't really that much easier.\n\nI still recall my first build with Pg. The docs were very good and I\nhad it up and running on my first attempt. The only difficulty I had\nwas determining what IF to use. I started with ecpg then switched to\nDBI. I think now I'd like to have a crack at the new and improved ecpg.\nI see that Michael's done a lot of work - of course it was always 'better\nthan nothing'! 
;-)\n\nCheers,\nTom\n--------------------------------------------------------------------\n SVCMC - Center for Behavioral Health \n--------------------------------------------------------------------\nThomas Good tomg@ { admin | q8 } .nrnet.org\nIS Coordinator / DBA Phone: 718-354-5528 \n Fax: 718-354-5056 \n--------------------------------------------------------------------\nPowered by: PostgreSQL s l a c k w a r e FreeBSD:\n RDBMS |---------- linux The Power To Serve\n--------------------------------------------------------------------\n\n", "msg_date": "Tue, 11 Jul 2000 14:18:15 -0400 (EDT)", "msg_from": "Thomas Good <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "<snip>\n> > The current Red Hat RPMS do create the postgres user and initialize\n> > the database but doesn't define any of the environment variables.\n> \n> And where should they be defined? /etc/profile, perhaps? Do I really\n> want ot go there with ENVVARS? (maybe I do -- maybe I don't :-))\n\nIf it were done, you would put it into a separate file in\n/etc/profile.d thus making it easy to implement in RPMS. \n\n> \n> > One\n> > minor comment about the RPMS at the postgress website... The current\n> > standard in Red Hat RPMS is to gzip all man pages because the man\n> > program will automatically decompress them. 
If you run a RPM through\n> > the program rpmlint, it will provide some useful warning about other\n> > potential packing problems also...\n> \n> The only other rpmlint-able problem with the 7.0.2-2 set is the dangling\n> symlink of os.h in -devel (which will be fixed in the next release).\n> (and rpmlint's broken idea of file and directory permissions, which are\n> set the way they are for a reason...., and its broken idea of Vendor and\n> Distribution......).\n\nI didn't specifically mean to imply there were huge problems with the\nRPMS nor do I necessarily agree with all the warnings in rpmlint but I\nthink it is a good practice to run it in general once before releasing\nRPMS because it can catch common mistakes.\n\nActually, I kind of hate the fact that rpmlint exists... I'm the kind\nof person who gets obsessed over compiler warnings and such in my own\ncode or annoyed by all the changed files when you do 'rpm -Va' so\nrpmlint is just another thing for me to 'worry' about...\n\n> \n> The man pages are compressed in the latest RawHide release -- however,\n> due to my desire for cross-distribution capability with these RPM's,\n> since each distribution seems to have a different idea of where things\n> ought to go, and what format they ought to be in... --buildpolicy in the\n> latest RPM version fixes this sort of thing. The spec file itself is\n> built with this in mind, allowing for the manpages to be compressed in\n> any format or not compressed at all (due to the use of the appended * in\n> the %files listing). Of course, that is a build-time thing -- my goal is\n> not binary RPM compatibility, but SOURCE RPM compatibility.\n<snip>\n\nAre these new features in RPM 4.0? I was trying to install them from\nRawHide and couldn't because my RPM is too old.\n\n-- \nPrasanth Kumar\[email protected]\n", "msg_date": "11 Jul 2000 11:46:23 -0700", "msg_from": "[email protected] (Prasanth A. 
Kumar)", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "Thomas Good wrote:\n> On Tue, 11 Jul 2000, Lamar Owen wrote:\n> > And, if most people's experience with the RedHat 5.2 RPM's is what\n> > they're going on, they need to get with the program -- RH 5.2 shipped\n> > PostgreSQL *6.3.2* which is absolutely ancient. Although, at the time,\n> > 6.3.2 was better than nothing.\n \n> Hello Lamar,\n \n> 'Better than nothing' - hmm...\n\n> 6.3.2 is certainly 'better than nothing' and, aside from slow vacuums,\n> I have no complaints. Of course, I have the old logo taped to the cover\n> of my notebook: a printout of the various pg manuals and Bruce's book.\n> Being a bit of a blockhead I kind of fancy the exploding bricks. ;-)\n\nAt the time of 6.1.1, there really was 'nothing' else that would work\nfor me, Free Software-wise. MySQL/mSQL wouldn't work, as they weren't\nsupported by AOLserver, nor did they do transactions (both of those\nshortcomings have been/are being fixed). Sybase wasn't yet gratis, nor\nwas Interbase -- there was _nothing_ else. PostgreSQL was the only game\nin town if you wanted a reasonably complete RDBMS (although, 6.1.1 wasn't\nreally up to the standards of being an RDBMS).\n\nAt the time of 6.3.2, MySQL/mSQL/Sybase/Interbase were still not\ncontenders, as they either weren't 'Free' or weren't supported by\nAOLserver. PostgreSQL (since Postgres95 1.01) was and is supported,\nalthough as of AOLserver 2.2.1, you had to have at least PostgreSQL\n6.2.1.\n\n6.3.2 was a quantum leap forward, as subselects were finally (and\nfunctionally!) implemented. However, the documentation was not really\npolished -- certainly not what it is now.\n\nBut, my problem was never with PostgreSQL itself -- it was with the\nbraindead RPM's that had oddball dependencies and oddball behavior. Not\nto mention the fact that until 6.3.2 RedHat Linux and PostgreSQL weren't\nthe closest of friends. 
The 6.3.1 version was by far the worst version\nof the RPMs ever -- but that was as much the fault of RedHat as of\nPostgreSQL. The 6.4.2 RPMs that shipped with RedHat 6.0 were also not\nthought of very highly....\n\nIn fact, I upgraded from 6.3.2 to 6.5.2, skipping 6.4.x altogether. \nMVCC made the difference, and the difference was GOOD. 6.5 was the real\nstandout release, in my book, that made the world of difference -- and I\nwas glad I had persevered until then.\n\nNow you have OpenACS on AOLserver, which _requires_ PostgreSQL 7.0.x or\nabove.....\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 11 Jul 2000 19:10:15 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "Oliver Elphick wrote:\n\n> I'm not sure that there's much use in putting this in the INSTALL; any\n> Debian user ought to know it any case,\n\nThe first rule of documentation: Don't assume what the user knows. It's\na one-liner to tell them, so just do it.\n", "msg_date": "Wed, 12 Jul 2000 10:07:13 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "Chris Bitmead wrote:\n> \n> Oliver Elphick wrote:\n> \n> > I'm not sure that there's much use in putting this in the INSTALL; any\n> > Debian user ought to know it any case,\n> \n> The first rule of documentation: Don't assume what the user knows. It's\n> a one-liner to tell them, so just do it.\n\nAs a Debian user I'm afraid I'd have to agree with Oliver on this - the\ntools for installing / removing / purging Debian packages are there. \nThe documentation for them is there. 
If a person is using Debian and\nhas managed to install PostgreSQL then they know how to remove it or\npurge it - it doesn't (in this case) belong in the PostgreSQL\ndocumentation.\n\nCheers,\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: [email protected]\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n", "msg_date": "Wed, 12 Jul 2000 12:48:49 +1200", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Slashdot discussion" }, { "msg_contents": "[email protected] wrote:\n> > huh? all i do is './configure;make;make install' ...\n> \n> And what about CVS?\n\nHave you changed the CVSROOT since it was changed between 7.0.0 and\n7.0.2?\n\nMy nightly CVS mirrors have worked perfectly for a long time -- that is,\nonce I got the CVSROOT change properly done.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 12 Jul 2000 13:48:57 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "On Tue, 11 Jul 2000, The Hermit Hacker wrote:\n\n> > > #mysql has much more activity than does #postgresql, and since I prefer\n> > > this method of communication over mailing-lists, I found this to be very\n> > > much in MySQL's favor.\n> > \n> > Hmm. So which server do most postgresql people hang out on?\n> \n> EFNet, channel #PostgreSQL ... always someone there, but activity on it\n> tends to be sporatic ...\n\nWe also have quite a few PostgreSQL people in EFnet's #Linux, along with a\nnice little bot called 'helper' that consults a PostgreSQL database for\nknowledgebase (and THEN some) stuff. 
Just make sure you join #linux with\nidentd on; we've had a lot of problems with people abusing open proxies.\n\n---\nHowie <[email protected]> URL: http://www.toodarkpark.org\n\"Programming today is a race between software engineers striving to \n build bigger and better idiot-proof programs, and the Universe trying \n to produce bigger and better idiots. So far, the Universe is winning.\"\n\n", "msg_date": "Wed, 12 Jul 2000 19:09:18 +0000 (GMT)", "msg_from": "Howie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "On Mon, 10 Jul 2000, Bruce Momjian wrote:\n\n> Just a comment on the Slashdot thread in general. I see us really\n> gaining on MySQL. Every month we get farther. Our rate of improvement\n> means we should leave them in the dust in 1-2 years.\n\nEspecially with:\n\n- Tablespace support (oh god would I love this)\n- not having to pgdump in order to do a major version upgrade (tricky, i\nknow)\n\nWho would I talk to about (partially) funding these, btw? or is that no\nlonger a concern with Great Bridge ?\n\n---\nHowie <[email protected]> URL: http://www.toodarkpark.org\n\"Programming today is a race between software engineers striving to \n build bigger and better idiot-proof programs, and the Universe trying \n to produce bigger and better idiots. So far, the Universe is winning.\"\n\n", "msg_date": "Wed, 12 Jul 2000 19:14:48 +0000 (GMT)", "msg_from": "Howie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "Howie <[email protected]> writes:\n> Who would I talk to about (partially) funding these, btw? or is that no\n> longer a concern with Great Bridge ?\n\nGreat Bridge isn't actually up-and-running yet, AFAICT, so PostgreSQL\nInc would be the only likely place to funnel cash into a near-term\ndevelopment project. 
(Or you could maybe make a personal agreement\nwith some key developer, but in the current state of affairs that's\ndifficult because we all have other full-time jobs...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jul 2000 15:28:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion " }, { "msg_contents": "[email protected] wrote:\n> On Mon, Jul 10, 2000 at 08:40:27PM -0300, The Hermit Hacker wrote:\n> > On Tue, 11 Jul 2000, Graeme Merrall wrote:\n> >\n> > >\n> > > > Is anyone else noticing this: Everytime this sort of thing comes up a\n> > > > number of people invariably tell that they are using MySQL because it's\n> > > > easier to install, and that PostgreSQL is difficult (\"a pain\") to install.\n> > > >\n> > > > I've studied the MySQL installation instructions, and they don't strike me\n> > > > as inherently simpler. Is it only perception, or what can we do better?\n> > > Possibly because for most people the process is a simple './configure;\n> > > make; make install'\n> > >\n> > > Pgsql doesn't do this. Not the install process is any less better but\n> >\n> > huh? all i do is './configure;make;make install' ...\n>\n> And what about CVS?\n>\n> bash-2.01$ cd ../pgsql\n> bash-2.01$ cvs -z9 update -dP\n> cvs [update aborted]: authorization failed: server postgresql.org rejected\n> access\n> bash-2.01$\n\n What does \"echo $CVS_RSH\" report? Marc is an Admin, not a\n Wannabe. So access is restricted to ssh connections and cvs\n uses rsh by default. If you tell me that MySQL's CVS is\n accessible with rsh, let's think of a totally different way\n to get rid of this entire discussion ...\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n\n\n", "msg_date": "Wed, 12 Jul 2000 21:42:24 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" }, { "msg_contents": "Tom's right that Great Bridge doesn't have a shipping product yet, or a\nwebsite that has a whole lot to say for that matter. But there's a lot going\non behind the curtain over the summer that we'll be sharing with everyone over\nthe next few months.\n\nIn the meantime, if anyone would like to talk off-list about specific\ndevelopment priorities, either in PostgreSQL itself or related tools,\ninterfaces, etc., please feel free to contact me directly.\n\nAnd I'll reiterate once again: Any code we write will go straight into the\npatch bucket like everyone else. We'll work with the folks on the -hackers\nlist to make sure that we don't go off and do anything stupid too :)\n\nThanks,\n\nNed Lilly\nVP, Hacker Relations\nGreat Bridge, LLC\n\n\nTom Lane wrote:\n\n> Howie <[email protected]> writes:\n> > Who would I talk to about (partially) funding these, btw? or is that no\n> > longer a concern with Great Bridge ?\n>\n> Great Bridge isn't actually up-and-running yet, AFAICT, so PostgreSQL\n> Inc would be the only likely place to funnel cash into a near-term\n> development project. (Or you could maybe make a personal agreement\n> with some key developer, but in the current state of affairs that's\n> difficult because we all have other full-time jobs...)\n>\n> regards, tom lane\n\n", "msg_date": "Wed, 12 Jul 2000 16:02:24 -0400", "msg_from": "Ned Lilly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slashdot discussion" } ]
[ { "msg_contents": "\nWell after hacking around on libpq for a few days I'm starting to\nunderstand some of the subtlety of what's going on. I'm also\nunderstanding that it's complex enough that I'm not going to be sure\nanymore about what subtleties I might break. It's a pity there aren't\nregression tests for this so I know for sure what is expected to happen,\nand what subtle features are in there.\n\nAnyway for this reason, I think I'm going to go back and simply add the\nfeatures I want to libpq. I've learnt enough that I think I know now how\nto do it, with a minimum of patches.\n\nLonger term, I'd like to implement SQL3 on top of libpq, then get some\ncomprehensive test harnesses in place, and then slowly dissolve libpq.\nBut without those regression tests, it's a bit scary. I assume that the\npsql tests wouldn't fully exercise libpq what with the asynchronous stuff\netc?\n", "msg_date": "Mon, 10 Jul 2000 15:45:47 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "libpq work" }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> But without those regression tests, it's a bit scary. I assume that the\n> psql tests wouldn't fully exercise libpq what with the asynchronous stuff\n> etc?\n\nThe regression tests don't come anywhere *near* full coverage of psql,\nlet alone libpq (which has many features psql never heard of).\n\nI learned this the hard way awhile ago :-(. 
It's difficult to see how\nto do regression testing of async behavior though, at least with the\nsimplistic test methods we have now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 01:58:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq work " }, { "msg_contents": "Tom Lane writes:\n\n> The regression tests don't come anywhere *near* full coverage of psql,\n\nI have a psql test suite, in case someone needs it.\n\n> let alone libpq (which has many features psql never heard of).\n\nIt doesn't use the fast-track thing or the non-blocking interface, but\nother than that it seems that it at least calls most functions.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 10 Jul 2000 20:26:21 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq work " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> The regression tests don't come anywhere *near* full coverage of psql,\n\n> I have a psql test suite, in case someone needs it.\n\nCool. You should commit it as a new subdirectory of src/test.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 18:03:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq work " } ]
[ { "msg_contents": "\nIs this a known problem?\n\npjw=# create temporary table tt(f int4);\nCREATE\npjw=# create index tt_ix1 on tt(f);\nCREATE\npjw=# \\q\n\nThe postmaster says:\n\nNOTICE: mdopen: couldn't open pg_temp.31633.1: No such file or directory\npq_flush: send() failed: Bad file descriptor\nNOTICE: RelationIdBuildRelation: smgropen(pg_temp.31633.1): Bad file\ndescriptor\npq_flush: send() failed: Bad file descriptor\n\nIt only happens if you create an index on a temporary table...\n\n\nP.S. Searching in the mail archives is still timing out...so if I've missed\na related thread, sorry.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 10 Jul 2000 15:57:09 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "postmaster errors with index on temp table?" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> Is this a known problem?\n\n> pjw=# create temporary table tt(f int4);\n> CREATE\n> pjw=# create index tt_ix1 on tt(f);\n> CREATE\n> pjw=# \\q\n\n> The postmaster says:\n\n> NOTICE: mdopen: couldn't open pg_temp.31633.1: No such file or directory\n> pq_flush: send() failed: Bad file descriptor\n> NOTICE: RelationIdBuildRelation: smgropen(pg_temp.31633.1): Bad file\n> descriptor\n> pq_flush: send() failed: Bad file descriptor\n\nI see the same. 
\"DROP INDEX tt_ix1\" seems to do the right things, but\nmaybe temp-file cleanup fails to delink the index from its table.\nOr, could temp-file cleanup be trying to delete these in the wrong\norder?\n\nThe notices look pretty harmless, and AFAICT the tables do get cleaned\nup, but it's ugly nonetheless...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 03:00:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postmaster errors with index on temp table? " }, { "msg_contents": "> \n> Is this a known problem?\n> \n> pjw=# create temporary table tt(f int4);\n> CREATE\n> pjw=# create index tt_ix1 on tt(f);\n> CREATE\n> pjw=# \\q\n> \n> The postmaster says:\n> \n> NOTICE: mdopen: couldn't open pg_temp.31633.1: No such file or directory\n> pq_flush: send() failed: Bad file descriptor\n> NOTICE: RelationIdBuildRelation: smgropen(pg_temp.31633.1): Bad file\n> descriptor\n> pq_flush: send() failed: Bad file descriptor\n> \n> It only happens if you create an index on a temporary table...\n\nYikes, I never looked in the postmaster log to see this.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 09:15:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postmaster errors with index on temp table?" 
}, { "msg_contents": "At 09:15 10/07/00 -0400, Bruce Momjian wrote:\n>> \n>> Is this a known problem?\n>> \n>> pjw=# create temporary table tt(f int4);\n>> CREATE\n>> pjw=# create index tt_ix1 on tt(f);\n>> CREATE\n>> pjw=# \\q\n>> \n>> The postmaster says:\n>> \n>> NOTICE: mdopen: couldn't open pg_temp.31633.1: No such file or directory\n>> pq_flush: send() failed: Bad file descriptor\n>> NOTICE: RelationIdBuildRelation: smgropen(pg_temp.31633.1): Bad file\n>> descriptor\n>> pq_flush: send() failed: Bad file descriptor\n>> \n>> It only happens if you create an index on a temporary table...\n>\n>Yikes, I never looked in the postmaster log to see this.\n\nAs Tom Lane said, it's non-fatal, and probably not important, but I worried\nme. I'll see if I can see the cause while I wait for people to test my new\npg_dump with BLOB support.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 10 Jul 2000 23:21:46 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postmaster errors with index on temp table?" }, { "msg_contents": "> >Yikes, I never looked in the postmaster log to see this.\n> \n> As Tom Lane said, it's non-fatal, and probably not important, but I worried\n> me. I'll see if I can see the cause while I wait for people to test my new\n> pg_dump with BLOB support.\n\nI will certainly find the cause and fix it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 09:27:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postmaster errors with index on temp table?" }, { "msg_contents": "At 09:27 10/07/00 -0400, Bruce Momjian wrote:\n>> >Yikes, I never looked in the postmaster log to see this.\n>> \n>> As Tom Lane said, it's non-fatal, and probably not important, but I worried\n>> me. I'll see if I can see the cause while I wait for people to test my new\n>> pg_dump with BLOB support.\n>\n\nI've had a look at temprel.c, and remove_all_temp_relations has the following:\n\n if (temp_rel->relkind != RELKIND_INDEX)\n {\n char relname[NAMEDATALEN];\n\n /* safe from deallocation */\n strcpy(relname, temp_rel->user_relname);\n heap_drop_with_catalog(relname, allowSystemTableMods);\n }\n else\n index_drop(temp_rel->relid);\n\nBut, when a temp rel is dropped it seems that heap_drop_with_catalog also\ndrops the indexes, so the error occurs when index_drop is called (at least\nI think this is the case). Certainly commenting out the last two lines\n*seems* to work.\n\nThese lines were changed in rev 1.18 from 'heap_destroy*' calls to\n'heap_drop' calls...did 'heap_destroy*' also delete related objects as\nheap_drop* now does?\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 11 Jul 2000 23:08:41 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postmaster errors with index on temp table?" 
}, { "msg_contents": "> At 09:27 10/07/00 -0400, Bruce Momjian wrote:\n> >> >Yikes, I never looked in the postmaster log to see this.\n> >> \n> >> As Tom Lane said, it's non-fatal, and probably not important, but I worried\n> >> me. I'll see if I can see the cause while I wait for people to test my new\n> >> pg_dump with BLOB support.\n> >\n> \n> I've had a look at temprel.c, and remove_all_temp_relations has the following:\n> \n> if (temp_rel->relkind != RELKIND_INDEX)\n> {\n> char relname[NAMEDATALEN];\n> \n> /* safe from deallocation */\n> strcpy(relname, temp_rel->user_relname);\n> heap_drop_with_catalog(relname, allowSystemTableMods);\n> }\n> else\n> index_drop(temp_rel->relid);\n> \n> But, when a temp rel is dropped it seems that heap_drop_with_catalog also\n> drops the indexes, so the error occurs when index_drop is called (at least\n> I think this is the case). Certainly commenting out the last two lines\n> *seems* to work.\n> \n> These lines were changed in rev 1.18 from 'heap_destroy*' calls to\n> 'heap_drop' calls...did 'heap_destroy*' also delete related objects as\n> heap_drop* now does?\n\nNot sure why I introduced that bug in 1.18. Your suggestion was 100%\ncorrect. I have applied the following patch.\n\nThanks for finding that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n? config.log\n? config.cache\n? config.status\n? GNUmakefile\n? src/GNUmakefile\n? src/Makefile.custom\n? src/Makefile.global\n? src/log\n? src/crtags\n? src/backend/postgres\n? src/backend/catalog/global.bki\n? src/backend/catalog/global.description\n? src/backend/catalog/template1.bki\n? src/backend/catalog/template1.description\n? src/backend/port/Makefile\n? src/bin/initdb/initdb\n? src/bin/initlocation/initlocation\n? src/bin/ipcclean/ipcclean\n? src/bin/pg_ctl/pg_ctl\n? src/bin/pg_dump/pg_dump\n? 
src/bin/pg_dump/pg_restore\n? src/bin/pg_dump/pg_dumpall\n? src/bin/pg_id/pg_id\n? src/bin/pg_passwd/pg_passwd\n? src/bin/pgaccess/pgaccess\n? src/bin/pgtclsh/Makefile.tkdefs\n? src/bin/pgtclsh/Makefile.tcldefs\n? src/bin/pgtclsh/pgtclsh\n? src/bin/pgtclsh/pgtksh\n? src/bin/psql/psql\n? src/bin/scripts/createlang\n? src/include/config.h\n? src/interfaces/ecpg/lib/libecpg.so.3.1.1\n? src/interfaces/ecpg/preproc/ecpg\n? src/interfaces/libpgeasy/libpgeasy.so.2.1\n? src/interfaces/libpgtcl/libpgtcl.so.2.1\n? src/interfaces/libpq/libpq.so.2.1\n? src/pl/plpgsql/src/libplpgsql.so.1.0\n? src/pl/tcl/Makefile.tcldefs\n? src/test/regress/GNUmakefile\nIndex: src/backend/utils/cache/temprel.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/cache/temprel.c,v\nretrieving revision 1.26\ndiff -c -r1.26 temprel.c\n*** src/backend/utils/cache/temprel.c\t2000/07/04 06:11:47\t1.26\n--- src/backend/utils/cache/temprel.c\t2000/07/11 15:02:48\n***************\n*** 107,112 ****\n--- 107,113 ----\n \n \t\tnext = lnext(l);\t\t/* do this first, l is deallocated */\n \n+ \t\t/* Indexes are dropped during heap drop */\n \t\tif (temp_rel->relkind != RELKIND_INDEX)\n \t\t{\n \t\t\tchar\t\trelname[NAMEDATALEN];\n***************\n*** 115,122 ****\n \t\t\tstrcpy(relname, temp_rel->user_relname);\n \t\t\theap_drop_with_catalog(relname, allowSystemTableMods);\n \t\t}\n- \t\telse\n- \t\t\tindex_drop(temp_rel->relid);\n \n \t\tl = next;\n \t}\n--- 116,121 ----\n***************\n*** 235,241 ****\n *\n * We also reject an attempt to rename a normal table to a name in use\n * as a temp table name. That would fail later on anyway when rename.c\n! 
* looks for a rename conflict, but we can give a more specific error \n * message for the problem here.\n *\n * It might seem that we need to check for attempts to rename the physical\n--- 234,240 ----\n *\n * We also reject an attempt to rename a normal table to a name in use\n * as a temp table name. That would fail later on anyway when rename.c\n! * looks for a rename conflict, but we can give a more specific error\n * message for the problem here.\n *\n * It might seem that we need to check for attempts to rename the physical", "msg_date": "Tue, 11 Jul 2000 11:03:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postmaster errors with index on temp table?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> But, when a temp rel is dropped it seems that heap_drop_with_catalog also\n>> drops the indexes, so the error occurs when index_drop is called (at least\n>> I think this is the case). Certainly commenting out the last two lines\n>> *seems* to work.\n\n> Not sure why I introduced that bug in 1.18. Your suggestion was 100%\n> correct. I have applied the following patch.\n\nActually, I don't think this is the true explanation. index_drop'ing\nthe temp indexes may not be *necessary*, but it shouldn't *hurt* either.\n\nNow that I think about it, the reason for the failure is probably that\nthere's no CommandCounterIncrement in this loop. Therefore, when we\nindex_drop an index, the resulting system-table updates are *not seen*\nby heap_drop_with_catalog when it comes time to drop the owning table,\nand so it tries to drop the index again.\n\nYour solution of not doing index_drop at all is sufficient for the\ntable-and-index case, but I bet it is not sufficient for more complex\ncases like RI checks between temp relations. 
I'd recommend doing\nCommandCounterIncrement after each temp item is dropped, instead.\n\nThere is another potential bug here: remove_all_temp_relations() is\nfailing to consider the possibility that removing one list entry may\ncause other ones (eg, indexes) to go away. It's holding onto a \"next\"\npointer to a list entry that may not be there by the time control comes\nback from heap_drop_with_catalog. This is probably OK for tables and\nindexes because the indexes will always be added after their table and\nthus appear earlier in the list, but again it seems like trouble just\nwaiting to happen. I would recommend logic along the lines of\n\n\twhile (temp_rels != NIL)\n\t{\n\t\tget first entry in temp_rels,\n\t\tand either heap_drop or index_drop it as appropriate;\n\t\tCommandCounterIncrement();\n\t}\n\nThis relies on the drop to come back and remove the temp_rels entry\n(and, possibly, other entries); so it's not an infinite loop even\nthough it looks like one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jul 2000 12:52:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postmaster errors with index on temp table? " }, { "msg_contents": "Attached is a patch for temp tables that implements Tom's requests.\n\n> Bruce Momjian <[email protected]> writes:\n> >> But, when a temp rel is dropped it seems that heap_drop_with_catalog also\n> >> drops the indexes, so the error occurs when index_drop is called (at least\n> >> I think this is the case). Certainly commenting out the last two lines\n> >> *seems* to work.\n> \n> > Not sure why I introduced that bug in 1.18. Your suggestion was 100%\n> > correct. I have applied the following patch.\n> \n> Actually, I don't think this is the true explanation. index_drop'ing\n> the temp indexes may not be *necessary*, but it shouldn't *hurt* either.\n> \n> Now that I think about it, the reason for the failure is probably that\n> there's no CommandCounterIncrement in this loop. 
Therefore, when we\n> index_drop an index, the resulting system-table updates are *not seen*\n> by heap_drop_with_catalog when it comes time to drop the owning table,\n> and so it tries to drop the index again.\n> \n> Your solution of not doing index_drop at all is sufficient for the\n> table-and-index case, but I bet it is not sufficient for more complex\n> cases like RI checks between temp relations. I'd recommend doing\n> CommandCounterIncrement after each temp item is dropped, instead.\n> \n> There is another potential bug here: remove_all_temp_relations() is\n> failing to consider the possibility that removing one list entry may\n> cause other ones (eg, indexes) to go away. It's holding onto a \"next\"\n> pointer to a list entry that may not be there by the time control comes\n> back from heap_drop_with_catalog. This is probably OK for tables and\n> indexes because the indexes will always be added after their table and\n> thus appear earlier in the list, but again it seems like trouble just\n> waiting to happen. I would recommend logic along the lines of\n> \n> \twhile (temp_rels != NIL)\n> \t{\n> \t\tget first entry in temp_rels,\n> \t\tand either heap_drop or index_drop it as appropriate;\n> \t\tCommandCounterIncrement();\n> \t}\n> \n> This relies on the drop to come back and remove the temp_rels entry\n> (and, possibly, other entries); so it's not an infinite loop even\n> though it looks like one.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n? config.log\n? config.cache\n? config.status\n? GNUmakefile\n? src/Makefile.custom\n? src/GNUmakefile\n? src/Makefile.global\n? src/log\n? src/crtags\n? src/backend/postgres\n? src/backend/catalog/global.bki\n? src/backend/catalog/global.description\n? src/backend/catalog/template1.bki\n? 
src/backend/catalog/template1.description\n? src/backend/port/Makefile\n? src/bin/x\n? src/bin/initdb/initdb\n? src/bin/initlocation/initlocation\n? src/bin/ipcclean/ipcclean\n? src/bin/pg_config/pg_config\n? src/bin/pg_ctl/pg_ctl\n? src/bin/pg_dump/pg_dump\n? src/bin/pg_dump/pg_restore\n? src/bin/pg_dump/pg_dumpall\n? src/bin/pg_id/pg_id\n? src/bin/pg_passwd/pg_passwd\n? src/bin/pgaccess/pgaccess\n? src/bin/pgtclsh/Makefile.tkdefs\n? src/bin/pgtclsh/Makefile.tcldefs\n? src/bin/pgtclsh/pgtclsh\n? src/bin/pgtclsh/pgtksh\n? src/bin/psql/psql\n? src/bin/scripts/createlang\n? src/include/config.h\n? src/include/stamp-h\n? src/interfaces/ecpg/lib/libecpg.so.3.2.0\n? src/interfaces/ecpg/preproc/ecpg\n? src/interfaces/libpgeasy/libpgeasy.so.2.1\n? src/interfaces/libpgtcl/libpgtcl.so.2.1\n? src/interfaces/libpq/libpq.so.2.1\n? src/interfaces/perl5/blib\n? src/interfaces/perl5/Makefile\n? src/interfaces/perl5/pm_to_blib\n? src/interfaces/perl5/Pg.c\n? src/interfaces/perl5/Pg.bs\n? src/pl/plpgsql/src/libplpgsql.so.1.0\n? src/pl/tcl/Makefile.tcldefs\n? src/test/regress/pg_regress\n? src/test/regress/regress.out\n? src/test/regress/results\n? src/test/regress/regression.diffs\n? src/test/regress/expected/copy.out\n? src/test/regress/expected/create_function_1.out\n? src/test/regress/expected/create_function_2.out\n? src/test/regress/expected/misc.out\n? src/test/regress/expected/constraints.out\n? src/test/regress/sql/copy.sql\n? src/test/regress/sql/misc.sql\n? src/test/regress/sql/create_function_1.sql\n? src/test/regress/sql/create_function_2.sql\n? 
src/test/regress/sql/constraints.sql\nIndex: src/backend/access/transam/xact.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/access/transam/xact.c,v\nretrieving revision 1.71\ndiff -c -r1.71 xact.c\n*** src/backend/access/transam/xact.c\t2000/09/27 10:41:55\t1.71\n--- src/backend/access/transam/xact.c\t2000/10/11 21:22:55\n***************\n*** 1119,1125 ****\n \tAtEOXact_portals();\n \tRecordTransactionAbort();\n \tRelationPurgeLocalRelation(false);\n! \tinvalidate_temp_relations();\n \tAtEOXact_SPI();\n \tAtEOXact_nbtree();\n \tAtAbort_Cache();\n--- 1119,1125 ----\n \tAtEOXact_portals();\n \tRecordTransactionAbort();\n \tRelationPurgeLocalRelation(false);\n! \tremove_temp_rel_in_myxid();\n \tAtEOXact_SPI();\n \tAtEOXact_nbtree();\n \tAtAbort_Cache();\nIndex: src/backend/catalog/heap.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/catalog/heap.c,v\nretrieving revision 1.147\ndiff -c -r1.147 heap.c\n*** src/backend/catalog/heap.c\t2000/10/05 19:48:21\t1.147\n--- src/backend/catalog/heap.c\t2000/10/11 21:22:59\n***************\n*** 131,141 ****\n \tMaxCommandIdAttributeNumber, 0, -1, -1, '\\001', 'p', '\\0', 'i', '\\0', '\\0'\n };\n \n! /* \n We decide to call this attribute \"tableoid\" rather than say\n \"classoid\" on the basis that in the future there may be more than one\n table of a particular class/type. In any case table is still the word\n! used in SQL. \n */\n static FormData_pg_attribute a7 = {\n \t0xffffffff, {\"tableoid\"}, OIDOID, 0, sizeof(Oid),\n--- 131,141 ----\n \tMaxCommandIdAttributeNumber, 0, -1, -1, '\\001', 'p', '\\0', 'i', '\\0', '\\0'\n };\n \n! /*\n We decide to call this attribute \"tableoid\" rather than say\n \"classoid\" on the basis that in the future there may be more than one\n table of a particular class/type. In any case table is still the word\n! 
used in SQL.\n */\n static FormData_pg_attribute a7 = {\n \t0xffffffff, {\"tableoid\"}, OIDOID, 0, sizeof(Oid),\n***************\n*** 1489,1495 ****\n \tRelationForgetRelation(rid);\n \n \tif (istemp)\n! \t\tremove_temp_relation(rid);\n \n \tif (has_toasttable)\n \t{\n--- 1489,1495 ----\n \tRelationForgetRelation(rid);\n \n \tif (istemp)\n! \t\tremove_temp_rel_by_relid(rid);\n \n \tif (has_toasttable)\n \t{\nIndex: src/backend/catalog/index.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/catalog/index.c,v\nretrieving revision 1.127\ndiff -c -r1.127 index.c\n*** src/backend/catalog/index.c\t2000/10/05 19:48:21\t1.127\n--- src/backend/catalog/index.c\t2000/10/11 21:23:00\n***************\n*** 1145,1151 ****\n \tRelationForgetRelation(indexId);\n \n \t/* does something only if it is a temp index */\n! \tremove_temp_relation(indexId);\n }\n \n /* ----------------------------------------------------------------\n--- 1145,1151 ----\n \tRelationForgetRelation(indexId);\n \n \t/* does something only if it is a temp index */\n! \tremove_temp_rel_by_relid(indexId);\n }\n \n /* ----------------------------------------------------------------\n***************\n*** 1374,1380 ****\n \tif (!LockClassinfoForUpdate(relid, &tuple, &buffer, confirmCommitted))\n \t\telog(ERROR, \"IndexesAreActive couldn't lock %u\", relid);\n \tif (((Form_pg_class) GETSTRUCT(&tuple))->relkind != RELKIND_RELATION &&\n! \t ((Form_pg_class) GETSTRUCT(&tuple))->relkind != RELKIND_TOASTVALUE)\n \t\telog(ERROR, \"relation %u isn't an indexable relation\", relid);\n \tisactive = ((Form_pg_class) GETSTRUCT(&tuple))->relhasindex;\n \tReleaseBuffer(buffer);\n--- 1374,1380 ----\n \tif (!LockClassinfoForUpdate(relid, &tuple, &buffer, confirmCommitted))\n \t\telog(ERROR, \"IndexesAreActive couldn't lock %u\", relid);\n \tif (((Form_pg_class) GETSTRUCT(&tuple))->relkind != RELKIND_RELATION &&\n! 
\t\t((Form_pg_class) GETSTRUCT(&tuple))->relkind != RELKIND_TOASTVALUE)\n \t\telog(ERROR, \"relation %u isn't an indexable relation\", relid);\n \tisactive = ((Form_pg_class) GETSTRUCT(&tuple))->relhasindex;\n \tReleaseBuffer(buffer);\nIndex: src/backend/utils/cache/temprel.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/cache/temprel.c,v\nretrieving revision 1.27\ndiff -c -r1.27 temprel.c\n*** src/backend/utils/cache/temprel.c\t2000/07/12 18:04:45\t1.27\n--- src/backend/utils/cache/temprel.c\t2000/10/11 21:23:01\n***************\n*** 91,125 ****\n void\n remove_all_temp_relations(void)\n {\n- \tList\t *l,\n- \t\t\t *next;\n- \n- \tif (temp_rels == NIL)\n- \t\treturn;\n- \n \tAbortOutOfAnyTransaction();\n \tStartTransactionCommand();\n \n! \tl = temp_rels;\n! \twhile (l != NIL)\n \t{\n! \t\tTempTable *temp_rel = (TempTable *) lfirst(l);\n \n! \t\tnext = lnext(l);\t\t/* do this first, l is deallocated */\n! \n! \t\t/* Indexes are dropped during heap drop */\n \t\tif (temp_rel->relkind != RELKIND_INDEX)\n- \t\t{\n- \t\t\tchar\t\trelname[NAMEDATALEN];\n- \n- \t\t\t/* safe from deallocation */\n- \t\t\tstrcpy(relname, temp_rel->user_relname);\n \t\t\theap_drop_with_catalog(relname, allowSystemTableMods);\n! \t\t}\n! \n! \t\tl = next;\n \t}\n- \ttemp_rels = NIL;\n \tCommitTransactionCommand();\n }\n \n--- 91,112 ----\n void\n remove_all_temp_relations(void)\n {\n \tAbortOutOfAnyTransaction();\n \tStartTransactionCommand();\n \n! \twhile (temp_rels != NIL)\n \t{\n! \t\tchar\t\trelname[NAMEDATALEN];\n! \t\tTempTable *temp_rel = (TempTable *) lfirst(temp_rels);\n \n! \t\t/* safe from deallocation */\n! \t\tstrcpy(relname, temp_rel->user_relname);\n \t\tif (temp_rel->relkind != RELKIND_INDEX)\n \t\t\theap_drop_with_catalog(relname, allowSystemTableMods);\n! \t\telse\n! \t\t\tindex_drop(temp_rel->relid);\n! 
\t\tCommandCounterIncrement();\n \t}\n \tCommitTransactionCommand();\n }\n \n***************\n*** 129,135 ****\n * we don't have the relname for indexes, so we just pass the oid\n */\n void\n! remove_temp_relation(Oid relid)\n {\n \tMemoryContext oldcxt;\n \tList\t *l,\n--- 116,122 ----\n * we don't have the relname for indexes, so we just pass the oid\n */\n void\n! remove_temp_rel_by_relid(Oid relid)\n {\n \tMemoryContext oldcxt;\n \tList\t *l,\n***************\n*** 179,185 ****\n * We just have to delete the map entry.\n */\n void\n! invalidate_temp_relations(void)\n {\n \tMemoryContext oldcxt;\n \tList\t *l,\n--- 166,172 ----\n * We just have to delete the map entry.\n */\n void\n! remove_temp_rel_in_myxid(void)\n {\n \tMemoryContext oldcxt;\n \tList\t *l,\nIndex: src/include/utils/temprel.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/utils/temprel.h,v\nretrieving revision 1.10\ndiff -c -r1.10 temprel.h\n*** src/include/utils/temprel.h\t2000/06/20 06:41:11\t1.10\n--- src/include/utils/temprel.h\t2000/10/11 21:23:14\n***************\n*** 18,29 ****\n \n extern void create_temp_relation(const char *relname,\n \t\t\t\t\t\t\t\t HeapTuple pg_class_tuple);\n! extern void remove_temp_relation(Oid relid);\n extern bool rename_temp_relation(const char *oldname,\n \t\t\t\t\t\t\t\t const char *newname);\n \n extern void remove_all_temp_relations(void);\n! extern void invalidate_temp_relations(void);\n \n extern char *get_temp_rel_by_username(const char *user_relname);\n extern char *get_temp_rel_by_physicalname(const char *relname);\n--- 18,29 ----\n \n extern void create_temp_relation(const char *relname,\n \t\t\t\t\t\t\t\t HeapTuple pg_class_tuple);\n! extern void remove_temp_rel_by_relid(Oid relid);\n extern bool rename_temp_relation(const char *oldname,\n \t\t\t\t\t\t\t\t const char *newname);\n \n extern void remove_all_temp_relations(void);\n! 
extern void remove_temp_rel_in_myxid(void);\n \n extern char *get_temp_rel_by_username(const char *user_relname);\n extern char *get_temp_rel_by_physicalname(const char *relname);", "msg_date": "Wed, 11 Oct 2000 17:25:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postmaster errors with index on temp table?" } ]
[ { "msg_contents": "My original idea for using the new \"memory context\" mechanisms for\nrecovering memory in the executor went like this: in each Plan node,\ncreate a \"per tuple\" context that would be reset at the start of each\nExecProcNode call, thereby recovering memory allocated in the previous\ntuple cycle. I envisioned resetting and switching into this context at\nthe start of each call of the node's ExecProcNode routine.\n\nThis idea has pretty much crashed and burned on takeoff :-(. It turns\nout there are way too many plan-level routines that assume they can\ndo palloc() to allocate memory that will still be there the next time\nthey are called. An example is that rtree index scans use a stack of\npalloc'd nodes to keep track of where they are ... and that stack had\nbetter still be there when you ask for the next tuple.\n\nWe could possibly teach all these places to use something other than\nCurrentMemoryContext for their allocations, but it doesn't look like an\nappetizing prospect. It looks tedious and highly error-prone, both of\nwhich are adjectives I'd hoped to avoid for this project.\n\nWhat I'm currently considering instead is to still create a per-tuple\ncontext for each plan node, but use it only for expression evaluation,\nie, we switch into it on entry to ExecQual(), ExecTargetList(),\nExecProject(), maybe a few other places. The majority of our leakage\nproblems are associated with expression evaluation, so this should allow\nfixing the leakage problems. It will mean that routines associated with\nplan nodes (basically, executor/node*.c) will still need to be careful\nto avoid leaks. 
For the most part they are already, but I had hoped to\nmake that care less necessary.\n\nComments, better ideas?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 02:31:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Progress report: intraquery memory recovery in executor" }, { "msg_contents": "At 02:31 10/07/00 -0400, Tom Lane wrote:\n>\n>What I'm currently considering instead is to still create a per-tuple\n>context for each plan node, but use it only for expression evaluation,\n>ie, we switch into it on entry to ExecQual(), ExecTargetList(),\n>ExecProject(), maybe a few other places. The majority of our leakage\n>problems are associated with expression evaluation, so this should allow\n>fixing the leakage problems. It will mean that routines associated with\n>plan nodes (basically, executor/node*.c) will still need to be careful\n>to avoid leaks. For the most part they are already, but I had hoped to\n>make that care less necessary.\n>\n\nIs it simple for the person writing the low level routines to choose\n(easily) to allocate 'temporary' memory vs. 'permanent' memory? If some\nmechanism were in place for this, then the code could slowly be\nmigrated...at least reducing the tedium.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 10 Jul 2000 17:30:09 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Progress report: intraquery memory recovery in\n executor" }, { "msg_contents": "> What I'm currently considering instead is to still create a per-tuple\n> context for each plan node, but use it only for expression evaluation,\n> ie, we switch into it on entry to ExecQual(), ExecTargetList(),\n> ExecProject(), maybe a few other places. The majority of our leakage\n> problems are associated with expression evaluation, so this should allow\n> fixing the leakage problems. It will mean that routines associated with\n> plan nodes (basically, executor/node*.c) will still need to be careful\n> to avoid leaks. For the most part they are already, but I had hoped to\n> make that care less necessary.\n\nI was wondering how you were going to pull this off. It seems doing\nsomething on entry to the expression routines is best.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 09:17:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Progress report: intraquery memory recovery in executor" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> Is it simple for the person writing the low level routines to choose\n> (easily) to allocate 'temporary' memory vs. 'permanent' memory?\n\nOne of the main problems is that a low-level routine doesn't necessarily\nknow which is appropriate --- the answer may vary depending on where it\nwas called from. 
To do it that way, I think we'd end up decorating a\nlarge number of internal APIs with extra MemoryContext arguments.\n(This is exactly why we have a global CurrentMemoryContext in the first\nplace...)\n\nThat's why I wanted to do the management at the level of the Plan node\nexecutor routines, which are high-level enough that they have some clue\nwhat's going on.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 10:08:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Progress report: intraquery memory recovery in executor " }, { "msg_contents": "At 10:08 10/07/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> Is it simple for the person writing the low level routines to choose\n>> (easily) to allocate 'temporary' memory vs. 'permanent' memory?\n>\n>One of the main problems is that a low-level routine doesn't necessarily\n>know which is appropriate --- the answer may vary depending on where it\n>was called from. \n...\n>That's why I wanted to do the management at the level of the Plan node\n>executor routines, which are high-level enough that they have some clue\n>what's going on.\n\nISTM (with, perhaps, no basis (ISTMWPNB?)) that when allocating memory\nthere are a couple of cases:\n\nYou want it to be available:\n\n1. until the end of the current call\n2. at least until the next call\n3. until TX end\n4. 
forever\n...etc.\n\nIf there are still cases where the called routine can't tell what sort of\nmemory it wants, then my method won't work (and I'd be interested to know\nwhat they are).\n\nBut if a relatively short list of allocation types can be created, then the\npalloc replacement can be passed an extra parameter (the 'allocation\ntype'), and handle memory contexts appropriately.\n\nFeel free to tell me if this is so way off the mark that there is no\npurpose in pursuing the discussion...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 11 Jul 2000 00:30:31 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Progress report: intraquery memory recovery in\n executor" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I was wondering how you were going to pull this off. It seems doing\n> something on entry to the expression routines is best.\n\nYeah, associating a short-term memory context with each ExprContext\nis looking like the way to proceed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 10:34:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Progress report: intraquery memory recovery in executor " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> ISTM (with, perhaps, no basis (ISTMWPNB?)) that when allocating memory\n> there are a couple of cases:\n\n> You want it to be available:\n\n> 1. until the end of the current call\n> 2. at least until the next call\n> 3. until TX end\n> 4. 
forever\n> ...etc.\n\nRight, that's essentially what the various MemoryContexts are for.\n\n> But if a relatively short list of allocation types can be created, then the\n> palloc replacement can be passed an extra parameter (the 'allocation\n> type'), and handle memory contexts appropriately.\n\nI don't think that's a superior solution to passing a target\nMemoryContext around. An allocation-type parameter would just mean an\nextra lookup in some global array to find the appropriate MemoryContext.\nThat means even more global state, rather than less, and it's not as\nextensible. Right now, if you have a need for a context with some\nweird lifetime, you just make one and then delete it again later ---\nthe knowledge of the context's very existence, as well as lifetime,\nis localized. In an allocation-type world you'd need to either add\na new allocation type code or figure out which existing category to\nforce-fit your context into.\n\nIn any case, this doesn't address the real practical problem, which is\ngoing around and changing hundreds of routines and thousands of calls\nthereto... I really *don't* want to add a memory management parameter\nto everything in sight. Doesn't matter how the parameter is defined.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 10:45:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Progress report: intraquery memory recovery in executor " } ]
[ { "msg_contents": "\n> Maybe you just want to use zlib. Let other guys hammer out \n> the details.\n\nYes, but zlib is slow, and has bad compression. \nI think we want something fast with moderate compression,\nmaybe \"lzo\" if license or author permits.\nThe algorithm is patented for free use, the code is gpl'd,\nbut maybe we get something for our license from the\nauthor.\n\nAndreas\n", "msg_date": "Mon, 10 Jul 2000 12:02:58 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Re: [SQL] Re: [GENERAL] lztext and compression rati\n\tos..." } ]
[ { "msg_contents": "\n> Maybe you just want to use zlib. Let other guys hammer out \n> the details.\n\nYes, but zlib is slow, and has bad compression. \nI think we want something fast with moderate compression,\nmaybe \"lzo\" if license or author permits.\nThe algorithm is patented for free use, the code is gpl'd,\nbut maybe we get something for our license from the\nauthor.\n\nAndreas\n", "msg_date": "Mon, 10 Jul 2000 12:02:58 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Re: [SQL] Re: [GENERAL] lztext and compression ratios..." } ]
So I had to change zcat to\n> > \"/usr/local/bin/gunzip -c\" in the ./src/Makefile.global (of course\n> \n> Noted.\n\nDon't rely on gunzip, use gzip -cd instead.\n\nAndreas\n", "msg_date": "Mon, 10 Jul 2000 12:38:17 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Lessons learned on how to build 7.0.2 on AIX 4.x" } ]
[ { "msg_contents": "\n> You have recreated what pg_upgrade does. It is for upgrading system\n> tables. If only your system tables were hosed, you are fine now.\n\nOnly if your previous system has been vacuum'ed and no dml afterwards.\nOtherwise you also need to copy your old pg_log.\n\n> \n> \n> > The Hermit Hacker wrote:\n> > > just a quick thought ... have you tried shutting down and \n> restrating the\n> > > postmaster? basically, \"reset\" the shared memory? v7.x handles\n> > > corruptions like that alot cleaner, but previous versions \n> caused odd\n> > > results if shared memory got corrupted ...\n> > \n> > Well, I've rebooted twice. In fact, it was a hard lock that \n> caused the\n> > problems. When the machine was brought back up, the db was foobar.\n> > \n> > I'm doing something really really evil to avoid losing the \n> last days'\n> > data:\n> > \n> > -I created a new db\n> > -used the old db schema to create all new blank tables\n\nvacuum new db\n(I would do a tar backup of the whole old db)\nvacuum old db, if that is possible \n\n> > -copied the physical table files from the old data \n> directory into the\n> > new database directory\n\nif above vacuum old db was not possible copy old pg_log\n\n> > -currently vacuuming the new db - nothing is barfing yet\n> > -now hopefully I can create my indexes and be back in business\n> > \n> > Tim\n\nAndreas\n", "msg_date": "Mon, 10 Jul 2000 12:42:22 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: more corruption" }, { "msg_contents": "Zeugswetter Andreas SB wrote:\n> > > -I created a new db\n> > > -used the old db schema to create all new blank tables\n> \n> vacuum new db\n> (I would do a tar backup of the whole old db)\n> vacuum old db, if that is possible\n\nWas not possible.\n\n> > > -copied the physical table files from the old data\n> > directory into the\n> > > new database directory\n> \n> if above vacuum old db was not possible copy old 
pg_log\n\nOops - I didn't do that.\n\n> > > -currently vacuuming the new db - nothing is barfing yet\n\nActually, the vacuum seemed to be running forever making no progress so\nI killed it.\n\n> > > -now hopefully I can create my indexes and be back in business\n\nI vacuumed here and it worked. I did not use my \"old\" pg_log file - what\ndid I lose?\n\nTim\n\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n", "msg_date": "Mon, 10 Jul 2000 06:09:22 -0700", "msg_from": "Tim Perdue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: more corruption" }, { "msg_contents": "Tim Perdue <[email protected]> writes:\n>>>>>> -now hopefully I can create my indexes and be back in business\n\n> I vacuumed here and it worked. I did not use my \"old\" pg_log file - what\n> did I lose?\n\nHard to tell. Any tuples that weren't already marked on disk as \"known\ncommitted\" have probably gone missing, because their originating\ntransaction IDs likely won't be shown as committed in the new pg_log.\nSo I'd look for missing tuples from recent transactions in the old DB.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 11:17:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: more corruption " }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Tom Lane\n>\n> Tim Perdue <[email protected]> writes:\n> >>>>>> -now hopefully I can create my indexes and be back in business\n>\n> > I vacuumed here and it worked. I did not use my \"old\" pg_log file - what\n> > did I lose?\n>\n> Hard to tell. 
Any tuples that weren't already marked on disk as \"known\n> committed\" have probably gone missing, because their originating\n> transaction IDs likely won't be shown as committed in the new pg_log.\n> So I'd look for missing tuples from recent transactions in the old DB.\n>\n\nHmm,this may be more serious.\nMVCC doesn't see committed(marked HEAP_XMIN_COMMITTED) but\nnot yet committed(t_xmin > CurrentTransactionId) tuples.\nHe will see them in the future.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Tue, 11 Jul 2000 08:56:31 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: AW: more corruption " }, { "msg_contents": "> > -----Original Message-----\n> > From: [email protected] [mailto:[email protected]]On\n> > Behalf Of Tom Lane\n> >\n> > Tim Perdue <[email protected]> writes:\n> > >>>>>> -now hopefully I can create my indexes and be back in business\n> >\n> > > I vacuumed here and it worked. I did not use my \"old\" pg_log\n> file - what\n> > > did I lose?\n> >\n> > Hard to tell. 
Any tuples that weren't already marked on disk as \"known\n> > committed\" have probably gone missing, because their originating\n> > transaction IDs likely won't be shown as committed in the new pg_log.\n> > So I'd look for missing tuples from recent transactions in the old DB.\n> >\n>\n> Hmm,this may be more serious.\n> MVCC doesn't see committed(marked HEAP_XMIN_COMMITTED) but\n> not yet committed(t_xmin > CurrentTransactionId) tuples.\n> He will see them in the future.\n>\n\nP.S.\nThis is the main reason that I once proposed to call\n'pg_ctl stop' to stop postmaster in pg_upgrade before/after\nmoving pg_log and pg_varibale.\n\nThere was a dicussion to recycle OIDs.\nIt's impossible to recycle XIDs.\n\nRegards.\n\nHiroshi Inoue\n\n", "msg_date": "Tue, 11 Jul 2000 10:17:38 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: AW: more corruption " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>>>> I vacuumed here and it worked. I did not use my \"old\" pg_log file - what\n>>>> did I lose?\n>> \n>> Hard to tell. 
Any tuples that weren't already marked on disk as \"known\n>> committed\" have probably gone missing, because their originating\n>> transaction IDs likely won't be shown as committed in the new pg_log.\n>> So I'd look for missing tuples from recent transactions in the old DB.\n>> \n\n> Hmm,this may be more serious.\n> MVCC doesn't see committed(marked HEAP_XMIN_COMMITTED) but\n> not yet committed(t_xmin > CurrentTransactionId) tuples.\n> He will see them in the future.\n\nBut he did a vacuum --- won't that get rid of any tuples that aren't\ncurrently considered committed?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 22:16:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: more corruption " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n>\n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >>>> I vacuumed here and it worked. I did not use my \"old\" pg_log\n> file - what\n> >>>> did I lose?\n> >>\n> >> Hard to tell. Any tuples that weren't already marked on disk as \"known\n> >> committed\" have probably gone missing, because their originating\n> >> transaction IDs likely won't be shown as committed in the new pg_log.\n> >> So I'd look for missing tuples from recent transactions in the old DB.\n> >>\n>\n> > Hmm,this may be more serious.\n> > MVCC doesn't see committed(marked HEAP_XMIN_COMMITTED) but\n> > not yet committed(t_xmin > CurrentTransactionId) tuples.\n> > He will see them in the future.\n>\n> But he did a vacuum --- won't that get rid of any tuples that aren't\n> currently considered committed?\n>\n\nOops,did he move old pg_varibale ?\nIf so my anxiety has no meaning.\n\nRegards.\n\nHiroshi Inoue\n\n", "msg_date": "Tue, 11 Jul 2000 11:32:33 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: AW: more corruption " } ]
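An illustrative aside on the thread above: the failure mode Tom and Hiroshi describe can be sketched as a toy visibility check. This is not PostgreSQL source; the simplified rules (an on-disk HEAP_XMIN_COMMITTED hint bit is trusted without consulting pg_log, otherwise pg_log decides, and an xmin beyond the current counter has no commit record) are assumptions distilled from the discussion. It shows both problems: recently committed tuples vanish when the old pg_log is not copied, and hinted tuples with a "future" xmin stay visible.

```python
# Toy model (not PostgreSQL source) of tuple visibility after copying
# table files between clusters without the matching pg_log.
COMMITTED = "committed"

def tuple_visible(xmin, hint_committed, pg_log, current_xid):
    if hint_committed:
        return True                       # hint bit trusted, pg_log skipped
    if xmin > current_xid:
        return False                      # xid not yet reached: no record
    return pg_log.get(xmin) == COMMITTED  # otherwise ask pg_log

old_pg_log = {100: COMMITTED}

# Recently committed tuple (no hint bit yet), files copied w/o old pg_log:
lost = not tuple_visible(100, False, {}, current_xid=150)
# Same tuple when the old pg_log was copied along:
kept = tuple_visible(100, False, old_pg_log, current_xid=150)
assert lost and kept
```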
[ { "msg_contents": "\nAll other points you noted can be fixed.\n\nThe list of lpp's you named is only part of the truth, you\nseemed to have the others already installed. Since I do not \nknow the full list of additional lpp's compared to a default AIX install\nI did not name any in the FAQ. I am pretty sure, that some header file\nlpp's are not in the standard AIX runtime installation.\nMaybe we can start with your list and add other lpp's when people \nreport that they were needed.\n\nAndreas\n", "msg_date": "Mon, 10 Jul 2000 13:05:06 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Lessons learned on how to build 7.0.2 on AIX 4.x" } ]
[ { "msg_contents": "\nI inserted some debug into libpq, then I ran psql, and I noticed that\nthere are two 'Z' \"Ready for query\" messages sent after each query. Is\nthere a reason for that? Is it a bug?\n", "msg_date": "Mon, 10 Jul 2000 21:17:15 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "postgres fe/be protocol" }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> I inserted some debug into libpq, then I ran psql, and I noticed that\n> there are two 'Z' \"Ready for query\" messages sent after each query. Is\n> there a reason for that? Is it a bug?\n\nI'm pretty sure the backend sends only one 'Z' per query cycle. Are you\nwatching the outgoing requests too? Maybe psql is sending an extra\nempty query. (It didn't use to do that, but maybe it does after Peter's\nrecent work on it...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 11:44:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres fe/be protocol " }, { "msg_contents": "\n----- Original Message -----\nFrom: Tom Lane <[email protected]>\nTo: Chris Bitmead <[email protected]>\nCc: Postgres Hackers List <[email protected]>\nSent: Monday, July 10, 2000 4:44 PM\nSubject: Re: [HACKERS] postgres fe/be protocol\n\n\n> Chris Bitmead <[email protected]> writes:\n> > I inserted some debug into libpq, then I ran psql, and I noticed that\n> > there are two 'Z' \"Ready for query\" messages sent after each query. Is\n> > there a reason for that? Is it a bug?\n>\n> I'm pretty sure the backend sends only one 'Z' per query cycle. Are you\n> watching the outgoing requests too? Maybe psql is sending an extra\n> empty query. (It didn't use to do that, but maybe it does after Peter's\n> recent work on it...)\n\nActually it used to when it first made a connection to test that it was\nalive, but not afterwards. 
JDBC was originally based on libpq, but I\nreplaced the empty query with a couple of useful ones.\n\n>\n> regards, tom lane\n\n", "msg_date": "Mon, 10 Jul 2000 23:41:46 +0100", "msg_from": "\"Peter Mount\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres fe/be protocol " }, { "msg_contents": "Tom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > I inserted some debug into libpq, then I ran psql, and I noticed that\n> > there are two 'Z' \"Ready for query\" messages sent after each query. Is\n> > there a reason for that? Is it a bug?\n> \n> I'm pretty sure the backend sends only one 'Z' per query cycle. Are you\n> watching the outgoing requests too? Maybe psql is sending an extra\n> empty query. (It didn't use to do that, but maybe it does after Peter's\n> recent work on it...)\n\nI put in a printf in parseInput in fe-exec.c in libpq, straight after it \nreads the id. This is only going to see messages incoming to the front\nend.\nI also had a break-point on PQexec and it was only called once per\nquery.\nFor each query that I input to psql, everything looked normal except\nfor the two 'Z's.\n\nOk, I've just done it again on another platform with the same result.\nThis\nis what I see...\n\nchrisb=# select * from a;\nP\nT\nD\nC\nZ\nZ\n aa | bb | cc \n-----+-----+-----\n aaa | bbb | ccc\n(1 row)\n\n\nWe've got the P - select results, T - describe output D - one output\ntuple,\nC - complete Z - ready for input.\n\nIt all seems sensible except for the two Z's.\n", "msg_date": "Tue, 11 Jul 2000 10:53:04 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres fe/be protocol" }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> Tom Lane wrote:\n>> I'm pretty sure the backend sends only one 'Z' per query cycle.\n\n> I put in a printf in parseInput in fe-exec.c in libpq, straight after\n> it reads the id.\n\nOh. 
What you're missing is that the input message isn't necessarily\n*consumed* when first seen. The sequence of events is\n\n* absorb bufferload of data into libpq input buffer;\n\n* parse messages in parseInput until 'C' is seen;\n\n* when 'C' message is completely read, set asyncStatus = PGASYNC_READY\n(fe-exec.c line 756 in current sources), then absorb that message by\nadvancing inCursor (line 878) and loop back around. Now it sees the\nfinal 'Z' message, but decides at line 713 not to process it just yet.\nSo it returns to PQgetResult, which finishes off and stashes away the\nPGresult object, and then sets asyncStatus = PGASYNC_BUSY again.\n\n* PQexec will now loop around and call PQgetResult and thence parseInput\nagain, which will now absorb the 'Z' and set asyncStatus = PGASYNC_IDLE,\nwhich finally allows the PQexec loop to return. In the meantime your\nprintf printed the 'Z' a second time.\n\nThis is kind of baroque, I agree, but it seemed to be necessary to\nsupport asynchronous data reading without doing too much damage to\nbackward compatibility of PQexec() semantics. In particular, the\ncritical thing here is that we need to be able to deliver a sequence\nof PGresult objects when the query string contains multiple commands\nand the application is using PQgetResult --- but the old behavior of\nPQexec was that you only got back the *last* PGresult if the query\nstring produced more than one, so I had to preserve that...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 23:11:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres fe/be protocol " } ]
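An illustrative aside on the thread above: the see-twice-consume-once sequence Tom describes can be re-created as a toy parser. This is not libpq code; it only models the point that parseInput may look at a message without advancing the cursor, so a debug printf placed right after reading the message id fires twice for the single 'Z' on the wire.

```python
# Toy model of libpq's parseInput loop: 'Z' is seen once while the
# PGresult is still pending (and left unconsumed), then seen again and
# consumed on the next pass after PQexec stashes the result.
def parse_until_idle(messages):
    seen, cursor, state = [], 0, "BUSY"
    while state != "IDLE":
        while cursor < len(messages):
            msg = messages[cursor]
            seen.append(msg)             # the debug printf location
            if msg == "C":               # command complete
                state = "READY"
                cursor += 1
            elif msg == "Z":             # ready for query
                if state == "READY":
                    break                # defer: result not returned yet
                cursor += 1              # second look: consume it
                state = "IDLE"
                break
            else:
                cursor += 1              # P/T/D etc.: consume normally
        if state == "READY":
            state = "BUSY"               # PQexec stashed the PGresult

    return seen

# One 'Z' on the wire, but the printf reports it twice:
assert parse_until_idle(list("PTDCZ")) == ["P", "T", "D", "C", "Z", "Z"]
```

(The model assumes a well-formed stream ending in 'Z'; real libpq also handles partial reads and errors.)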
[ { "msg_contents": "ISTM that the template mechanism used in configure is, well, flawed. Among\nthe problems:\n\n1) The templates preempt the choice of compiler. A recent report from AIX\nsaid that it automatically picked up the \"aix_41\" template, only to find\nout later on that there is no \"cc\" compiler on the system. The user had to\nspecify --with-template=aix_gcc. That is not appropriate for automatic\nconfiguration.\n\n2) Template settings clobber user settings. I expect to be able to write\nCFLAGS='-g -pipe' ./configure, but configure will ingore my CFLAGS\nsetting. The only way to change the CFLAGS is to edit Makefile.global,\nwhich is not an nice thing to invite users to.\n\nIn fact, it's questionable why there is a --with-template option at\nall. The template names are based on the operating system and the\nprocessor, and in some cases the compiler, all of which we know exactly.\n\nThat way we could fix problem 1: we read the templates *after* AC_PROG_CC\nhas been called. The templates don't contain any information that could\npossibly be useful before AC_PROG_CC anyway.\n\nTo fix problem 2 I can imagine this procedure: Define a list of variables\nthat is legal to set in a template. (This can be kept in one place and\nextended as needed.) Before doing much of anything, configure checks which\nones of these variables are defined in the environment and remembers\nthat. After AC_PROG_CC has been called, we read the template and process\nall the variables that were not set in the environment.\n\nAny comments?\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 10 Jul 2000 09:13:08 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Templates" }, { "msg_contents": "> 2) Template settings clobber user settings. I expect to be able to write\n> CFLAGS='-g -pipe' ./configure, but configure will ingore my CFLAGS\n> setting. 
The only way to change the CFLAGS is to edit Makefile.global,\n> which is not an nice thing to invite users to.\n\nJust a detail here: Makefile.custom can be used to modify makefile\nvariables, so Makefile.global never needs to be touched. I manipulate\nCFLAGS and other makefile variables using Makefile.custom. As we fix up\n./configure, I would hope that we retain this, or a similar, capability.\n\n - Thomas\n", "msg_date": "Mon, 10 Jul 2000 14:57:33 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Templates" }, { "msg_contents": "[email protected] wrote:\n >2) Template settings clobber user settings. I expect to be able to write\n >CFLAGS='-g -pipe' ./configure, but configure will ingore my CFLAGS\n >setting. The only way to change the CFLAGS is to edit Makefile.global,\n >which is not an nice thing to invite users to.\n\nCan't we put it in Makefile.custom any more?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Delight thyself also in the LORD; and he shall give\n thee the desires of thine heart.\" \n Psalms 37:4 \n\n\n", "msg_date": "Mon, 10 Jul 2000 16:03:09 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Templates " }, { "msg_contents": "On Mon, 10 Jul 2000, Thomas Lockhart wrote:\n\n> Just a detail here: Makefile.custom can be used to modify makefile\n> variables, so Makefile.global never needs to be touched. I manipulate\n> CFLAGS and other makefile variables using Makefile.custom. As we fix up\n> ./configure, I would hope that we retain this, or a similar, capability.\n\nOkay, that was going to be my next message. :) Makefile.custom is not the\nanswer either. 
You're still inviting users to manually edit files when\nthey shouldn't have to. In fact, this approach is conceptually wrong\nanyway.\n\nThe whole point of the configure script is to test whether you can do what\nyou are trying to do. If you lie to configure and change the settings\nmanually afterwards, then you lose. If you want to save custom settings\nbetween invocations then you can\n\n1) set the respective variable in the environment\n\n2) create a file config.site in PREFIX/etc or PREFIX/share\n\n3) create some file somewhere and point the environment variable\nCONFIG_SITE there.\n\nall requiring that we fix what I'm describing. This way your settings will\nhave to pass through the scrutiny of configure.\n\nI don't mind keeping the Makefile.custom around if people like it, it's\none extra line in Makefile.global, but I wouldn't recommend it to anyone,\nleast of all end users.\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 10 Jul 2000 11:06:31 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Templates" }, { "msg_contents": "[email protected] writes:\n> In fact, it's questionable why there is a --with-template option at\n> all. The template names are based on the operating system and the\n> processor, and in some cases the compiler, all of which we know exactly.\n\nI believe it would be a bad idea to remove the option entirely, because\nthat would mean that if config.guess and/or configure didn't recognize\nyour platform, you'd have no simple way of forcing a template choice.\nBut I agree that --with-template is not the customary way of telling\nconfigure which compiler you want to use.\n\n> That way we could fix problem 1: we read the templates *after* AC_PROG_CC\n> has been called. The templates don't contain any information that could\n> possibly be useful before AC_PROG_CC anyway.\n\nOK, so you envision:\n\t1. 
Pick compiler using standard GNU/configure rules.\n\t2. If --with-template not specified, assemble template name\n\t from config.guess output and compiler name. (Use .similar\n\t substitutions to arrive at actual template from this info.)\n\t3. Read selected template.\nSeems pretty reasonable to me.\n\n> To fix problem 2 I can imagine this procedure: Define a list of variables\n> that is legal to set in a template. (This can be kept in one place and\n> extended as needed.) Before doing much of anything, configure checks which\n> ones of these variables are defined in the environment and remembers\n> that. After AC_PROG_CC has been called, we read the template and process\n> all the variables that were not set in the environment.\n\nActually, one point of having the templates is specifically that they\n*aren't* very tightly constrained as to what they can set. Nor do I\nbelieve it's necessarily a good idea to let the user override the\ntemplate settings. If they know enough to do that then let them edit\nthe template. CFLAGS is perhaps a special case here --- I could see\nappending the environment CFLAGS to what the template has, which we\ncould do in the templates themselves by making the customary style be\n\tCFLAGS= whatever $(CFLAGS)\nWhat you sketch above strikes me as a whole lot of mechanism that's\nbasically fighting the template idea rather than working with it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 11:42:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Templates " }, { "msg_contents": "On Mon, 10 Jul 2000, Tom Lane wrote:\n\n> I believe it would be a bad idea to remove the option entirely, because\n> that would mean that if config.guess and/or configure didn't recognize\n> your platform, you'd have no simple way of forcing a template choice.\n\nIf config.guess doesn't recognize your platform then configure fails,\nperiod. Specifying --with-template isn't going to help. 
The customary way\nto cope with that situation is to use the --host option, which is also\nmuch more flexible in terms of input format.\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 10 Jul 2000 12:08:24 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Templates " }, { "msg_contents": "[email protected] writes:\n> On Mon, 10 Jul 2000, Tom Lane wrote:\n>> I believe it would be a bad idea to remove the option entirely, because\n>> that would mean that if config.guess and/or configure didn't recognize\n>> your platform, you'd have no simple way of forcing a template choice.\n\n> If config.guess doesn't recognize your platform then configure fails,\n> period. Specifying --with-template isn't going to help. The customary way\n> to cope with that situation is to use the --host option, which is also\n> much more flexible in terms of input format.\n\nconfig.guess shouldn't even be executed if the user's given\n--with-template, IMHO. However, accepting --host to override\nconfig.guess isn't really the issue here. The problem is to select an\nappropriate template if none of the patterns in template/.similar match\nyour platform name. We have had many cases before where the platform\nvendor comes out with some random new variant on their uname output that\ncauses the .similar match to fail (Alpha's \"evNN\" being the latest\nexample). I'd rather be able to tell people \"use --with-template=foo\"\nthan have to explain to them how to alter the regexps in .similar.\n\nOn another part of the thread, it's true that Makefile.custom is not\nintended for random users; it's intended for people who know what they\nare doing. 
There are enough such people who use it that you will not\nget away with removing it ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 12:23:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Templates " }, { "msg_contents": "Tom Lane writes:\n\n> The problem is to select an appropriate template if none of the\n> patterns in template/.similar match your platform name. We have had\n> many cases before where the platform vendor comes out with some random\n> new variant on their uname output that causes the .similar match to\n> fail (Alpha's \"evNN\" being the latest example).\n\nThat's why this is wrong. We currently rely on the OS name, maybe the\ncomplete triple, maybe the uname output. Of course we have no idea whether\nit will work.\n\nThat is the very reason why config.guess and config.sub were invented, and\nthat's why we should use them exclusively, IMHO. All the possible outputs\nof config.guess are known to us now, so there should be no surprises when\nsomebody changes their uname.\n\nWe ought to make use of the information given to us if possible and not\ntry to construct our own, much poorer, information instead.\n\n\nAnother problem with --with-template is this:\n\n* user specifies --with-template=foonix_gcc\n* template sets CC=gcc\n* AC_PROG_CC assumes \"gcc\" as compiler\n\nMaybe the user's compiler isn't called \"gcc\", maybe it's called \"egcs\"\n(like mine), or \"gcc2\", or \"/opt/experimental/gcc-3.19/bin/gcc\".\n\nOkay, maybe the other way around:\n\n* user runs \"CC=egcs ./configure\"\n* AC_PROG_CC verifies \"egcs\" as compiler\n* template \"foonix_gcc\" gets chosen\n* template sets CC=gcc -- boom!\n\nSo the compiler information must disappear from the template files.\n\nIf you accept that then we could make --with-template specify the\n\"template prefix\" and the compiler information gets added on if it turns\nout there are two templates \"name_cc\" and \"name_gcc\". 
Then we'd have the\nprocedure\n\nAC_PROG_CC\nif --with-template was given, use that\nelse find one yourself\n\n\nBtw: Just today someone wrote about this very problem (\"[GENERAL] [Help]\nINSTALLing 7.02\").\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 11 Jul 2000 00:25:26 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Templates " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> The problem is to select an appropriate template if none of the\n>> patterns in template/.similar match your platform name. We have had\n>> many cases before where the platform vendor comes out with some random\n>> new variant on their uname output that causes the .similar match to\n>> fail (Alpha's \"evNN\" being the latest example).\n\n> That is the very reason why config.guess and config.sub were invented, and\n> that's why we should use them exclusively, IMHO. All the possible outputs\n> of config.guess are known to us now, so there should be no surprises when\n> somebody changes their uname.\n\nSay what? The variants we've been having trouble with *are*\nconfig.guess outputs. Moreover there are new versions of config.guess\nall the time. You're making no sense at all here.\n\n> So the compiler information must disappear from the template files.\n\nNot exactly. We do need to be able to decide whether we are using\ngcc or vendor cc in order to pick the right switches. One possible\nway of doing that is to merge the \"cc\" and \"gcc\" templates and have\nif-tests in the templates instead. 
For example the hpux template\nmight look like\n\nAROPT=crs\nALL=\nSRCH_INC=\nSRCH_LIB=\nDLSUFFIX=.sl\n\nif test $ac_cv_prog_gcc = yes; then\n\tCFLAGS=-O2\n\tSHARED_LIB=-fPIC\n\tDL_LIB=/usr/lib/libdld.sl\nelse\n\tCFLAGS=\"-Wl,-E -Ae\"\n\tSHARED_LIB=+z\n\t# Make aCC be first C++ compiler name tried...\n\tCCC=aCC\nfi\n\nThat last line brings up an issue that you'll have to deal with before\nyou can convince me that vanilla autoconf is the only solution we need:\nhow do you force the thing to use compatible C++ and C compilers? Right\nnow it will try to pick g++ (if available) regardless of what you told\nit about CC.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 18:47:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Templates " }, { "msg_contents": "Tom Lane writes:\n\n> > So the compiler information must disappear from the template files.\n> \n> Not exactly. We do need to be able to decide whether we are using\n> gcc or vendor cc in order to pick the right switches.\n\nI'll rephrase that: The name of the compiler needs to disappear from the\ntemplate file. We'd still have a separate file for GCC vs vendor with the\ndifferent CFLAGS, etc., but we wouldn't force CC= something.\n\n> One possible way of doing that is to merge the \"cc\" and \"gcc\"\n> templates and have if-tests in the templates instead. For example the\n> hpux template might look like\n\nOr that, but I'm not sure if that enhances readibility.\n\n\n> That last line brings up an issue that you'll have to deal with before\n> you can convince me that vanilla autoconf is the only solution we need:\n> how do you force the thing to use compatible C++ and C compilers?\n\nWe use the libtool multi-language branch. :-) Btw., libtool will need\nconfig.guess either way. 
Not this release though...\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 11 Jul 2000 22:34:21 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Templates " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>>>> So the compiler information must disappear from the template files.\n>> \n>> Not exactly. We do need to be able to decide whether we are using\n>> gcc or vendor cc in order to pick the right switches.\n\n> I'll rephrase that: The name of the compiler needs to disappear from the\n> template file. We'd still have a separate file for GCC vs vendor with the\n> different CFLAGS, etc., but we wouldn't force CC= something.\n\nAgreed.\n\n>> One possible way of doing that is to merge the \"cc\" and \"gcc\"\n>> templates and have if-tests in the templates instead. For example the\n>> hpux template might look like\n\n> Or that, but I'm not sure if that enhances readibility.\n\nIf you're doing the legwork I guess you get to choose ;-) ... but I like\nthe idea of combining the gcc and vendor-cc templates for a platform.\nUsually there's a great deal of commonality, so having two templates\njust means two files to edit (or forget to edit).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jul 2000 17:09:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Templates " }, { "msg_contents": "Tom Lane writes:\n\n> Say what? The variants we've been having trouble with *are*\n> config.guess outputs. Moreover there are new versions of config.guess\n> all the time. You're making no sense at all here.\n\nWait, didn't you say the problem was from vendors changing their uname\noutput? We know what config.guess can print, it's in the source. When new\nversions come out we can read the ChangeLog. 
The Alpha problem you\nmentioned could have been solved by writing \"alpha.*\" instead of just\n\"alpha\". I don't believe relying on uname would have made this better.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 12 Jul 2000 02:24:26 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Templates " } ]
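An illustrative aside on the thread above: Peter's proposed fix for problem 2 — remember which variables the user preset in the environment, then let the template fill in only the rest — can be sketched as a small precedence rule. This is an assumed design, not existing configure code; `TEMPLATE_VARS` and the sample hpux template values are invented for illustration.

```python
# Sketch of "environment beats template": CFLAGS='-g -pipe' ./configure
# is no longer clobbered by the template file's own CFLAGS setting.
TEMPLATE_VARS = ["CC", "CFLAGS", "SHARED_LIB", "DLSUFFIX"]

def resolve(template, environment):
    # Remember user-preset variables before reading the template...
    preset = {v: environment[v] for v in TEMPLATE_VARS if v in environment}
    settings = dict(template)   # ...then apply template defaults...
    settings.update(preset)     # ...and let the user's settings win.
    return settings

hpux_gcc = {"CFLAGS": "-O2", "SHARED_LIB": "-fPIC", "DLSUFFIX": ".sl"}
out = resolve(hpux_gcc, {"CFLAGS": "-g -pipe"})
assert out["CFLAGS"] == "-g -pipe" and out["DLSUFFIX"] == ".sl"
```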
[ { "msg_contents": "\n> Zeugswetter Andreas SB wrote:\n> > > > -I created a new db\n> > > > -used the old db schema to create all new blank tables\n> > \n> > vacuum new db\n> > (I would do a tar backup of the whole old db)\n> > vacuum old db, if that is possible\n> \n> Was not possible.\n> \n> > > > -copied the physical table files from the old data\n> > > directory into the\n> > > > new database directory\n> > \n> > if above vacuum old db was not possible copy old pg_log\n> \n> Oops - I didn't do that.\n> \n> > > > -currently vacuuming the new db - nothing is barfing yet\n> \n> Actually, the vacuum seemed to be running forever making no \n> progress so\n> I killed it.\n> \n> > > > -now hopefully I can create my indexes and be back in business\n> \n> I vacuumed here and it worked. I did not use my \"old\" pg_log \n> file - what\n> did I lose?\n\nTuples that have been inserted/updated/deleted after last vacuum\nin old db assume that the corresponding txn has to be rolled back.\nSince your vacuum on old db only did half the db, that half will be current,\nbut the rest will be old, thus you loose consistency.\n\nOne of the core please confirm.\n\nAndreas\n", "msg_date": "Mon, 10 Jul 2000 15:27:13 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: more corruption" } ]
[ { "msg_contents": "\n> > And of course the major problem with *that* is how do you get the\n> > connection request to arrive at a backend that's been prestarted in\n> > the right database? If you don't commit to a database then there's\n> > not a whole lot of prestarting that can be done.\n> > \n> > It occurs to me that this'd get a whole lot more feasible if one\n> > postmaster == one database, which is something we *could* do if we\n> > implemented schemas. Hiroshi's been arguing that the current hard\n> > separation between databases in an installation should be done away\n> > with in favor of schemas, and I'm starting to see his point...\n> \n> This is interesting. You believe schema's would allow a pool of\n> backends to connect to any database? That would clearly be a win.\n\nI do not agree. We need to keep the different databases per postmaster.\nSchemas are something we need below a database. This is actually \nrequired by SQL99. More than one database (catalog in SQL99)\nis not required per SQL99, but imho a very useful feature.\n\nWe need something that allows us to access objects on another database,\nbut this should imho not be limited to databases on the same postmaster \n(SQL99 cluster).\n\nAndreas \n", "msg_date": "Mon, 10 Jul 2000 15:32:45 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Re: [GENERAL] PostgreSQL vs. MySQL" } ]
[ { "msg_contents": "\n\n> > Peter Eisentraut wrote:\n> > \n> > > Bruce Momjian writes:\n> > > \n> > > > * Add function to return primary key value on INSERT\n> > > \n> > > I don't get the point of this. Don't you know what you \n> inserted? For\n> > > sequences there's curval()\n> > \n> > Mmmhhh... it means that we can assume no update to the \n> sequence value\n> > between the insert and the curval selection?\n> \n> No curval() is per-backend value that is not affected by other users. \n> My book has a mention of that issue, and so does the FAQ.\n\nNot all default values need to be a sequence, thus imho\nthis function is a useful extension. ODBC has it too.\n\nAndreas\n", "msg_date": "Mon, 10 Jul 2000 15:38:56 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Re: postgres TODO" }, { "msg_contents": "On Mon, 10 Jul 2000, Zeugswetter Andreas SB wrote:\n\n> \n> \n> > > Peter Eisentraut wrote:\n> > > \n> > > > Bruce Momjian writes:\n> > > > \n> > > > > * Add function to return primary key value on INSERT\n> > > > \n> > > > I don't get the point of this. Don't you know what you \n> > inserted? For\n> > > > sequences there's curval()\n> > > \n> > > Mmmhhh... it means that we can assume no update to the \n> > sequence value\n> > > between the insert and the curval selection?\n> > \n> > No curval() is per-backend value that is not affected by other users. \n> > My book has a mention of that issue, and so does the FAQ.\n> \n> Not all default values need to be a sequence, thus imho\n> this function is a useful extension. ODBC has it too.\n\nactually, had thought about this too over the weekend ... if I define a\n'serial' type, it right now creates a sequence for that ... 
if I recall\ncorrectly, that was purely a kludge until someone built a better 'serial'\n...\n\nhaving an INSERT return the value of 'serial' that was used would save a\nsecond SELECT call *and* eliminate the requirement for an app programmer\nto have to know to do a 'SELECT curval('table_field_seq');' ...\n\n\n", "msg_date": "Mon, 10 Jul 2000 11:44:29 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Re: postgres TODO" }, { "msg_contents": "> actually, had thought about this too over the weekend ... if I define a\n> 'serial' type, it right now creates a sequence for that ... if I recall\n> correctly, that was purely a kludge until someone built a better 'serial'\n> ...\n> \n> having an INSERT return the value of 'serial' that was used would save a\n> second SELECT call *and* eliminate the requirement for an app programmer\n> to have to know to do a 'SELECT curval('table_field_seq');' ...\n\nYes, I can imagine that the table_field_seq name is kind of flaky. It\nwill choose a different name if there is a conflict, so it is a chance\nfor failure.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 15:01:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Re: postgres TODO" } ]
[ { "msg_contents": "\n\n I'm playing with MemoryContextCheck() and I a little confuse now.\n\n How idea is for:\n\n context->initBlockSize\n context->maxBlockSize\n\n Must be a memory block already between this range? For example \nthe AllocSetAlloc() if create a single-chunk-block not check it and create\nblock less than 8*1024 (an example for CacheMemoryContext).\n\nThe AllocSetRealloc() not check it too.\n\n\t\t\t\t\tKarel\n\n", "msg_date": "Mon, 10 Jul 2000 15:43:41 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "memory: bug or feature" }, { "msg_contents": "Karel Zak <[email protected]> writes:\n> I'm playing with MemoryContextCheck() and I a little confuse now.\n> How idea is for:\n> \tcontext->initBlockSize\n>\tcontext->maxBlockSize\n\n> Must be a memory block already between this range?\n\nNo, those are just hints for allocation of default-sized blocks. They\ndon't constrain allocation of blocks that have to be a particular size\nto accommodate a large chunk. The reason for having them is just to\nallow a caller who expects that a particular context won't contain much\ndata to prevent a lot of wasted space from being allocated for big\ndefault-sized blocks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 11:28:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memory: bug or feature " } ]
[ { "msg_contents": "\n> > > > > -currently vacuuming the new db - nothing is barfing yet\n> > \n> > Actually, the vacuum seemed to be running forever making no \n> > progress so\n> > I killed it.\n> > \n> > > > > -now hopefully I can create my indexes and be back in business\n> > \n> > I vacuumed here and it worked. I did not use my \"old\" pg_log \n> > file - what\n> > did I lose?\n> \n> Tuples that have been inserted/updated/deleted after last vacuum\n> in old db assume that the corresponding txn has to be rolled back.\n\nCorrection: Tuples that have been inserted/updated/deleted but have not\nbeen accessed afterwards (the magic first access that updates the tuple \ntransaction status inplace).\n\n> Since your vacuum on old db only did half the db, that half \n> will be current,\n> but the rest will be old, thus you loose consistency.\n> \n> One of the core please confirm.\n\nAndreas\n", "msg_date": "Mon, 10 Jul 2000 15:49:31 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: more corruption" } ]
[ { "msg_contents": "\n> Thus spake Alessio Bragadini\n> > > > * Add function to return primary key value on INSERT\n> > > \n> > > I don't get the point of this. Don't you know what you \n> inserted? For\n> > > sequences there's curval()\n> > \n> > Mmmhhh... it means that we can assume no update to the \n> sequence value\n> > between the insert and the curval selection?\n> \n> We can within one connection so this is safe but there are \n> other problems\n> which I am not sure would be solved by this anyway. With \n> rules, triggers\n> and defaults there are often changes to the row between the \n> insert and the\n> values that hit the backing store. This is a general problem of which\n> the primary key is only one example.\n> \n> In fact, the OID of the new row is returned so what stops one \n> from just\n> using it to get any information required. This is exactly \n> what PyGreSQL\n> does in its insert method. After returning, the dictionary \n> used to store\n> the fields for the row have been updated with the actual \n> contents of the\n> row in the database. It simply does a \"SELECT *\" using the new OID to\n> get the row back.\n\nOID access is not indexed by default, only if the dba created a\ncorresponding \nindex. Thus OID access is a seq scan in a default environment.\n\nAndreas\n", "msg_date": "Mon, 10 Jul 2000 15:58:08 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Re: postgres TODO" } ]
[ { "msg_contents": "At 15:58 10/07/00 +0200, Zeugswetter Andreas SB wrote:\n>\n>OID access is not indexed by default, only if the dba created a\n>corresponding \n>index. Thus OID access is a seq scan in a default environment.\n>\n\nSticking my head out even further, this seems like a good reason to use\n'insert/update...returning' - isn't the tuple already on the heap, and\neasily available at the end of the insert/update?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 11 Jul 2000 00:14:32 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: Re: postgres TODO" } ]
[ { "msg_contents": "Hi;\n\nI have a table that has four fields as float8[]. I have a serial new data\nfor each array. Each time I have new data I have to update more than 6000\nrows for all four field. But using \"update (ta ble)\" to load in the new\ndata is just too slow. Does anybody know a more efficient way to update\nlarge data set in an array?\n\nThanks for help.\n\nWenjin Zheng, Ph.D.\nBioinformatic Analyst\nLarge Scale Biology, Corp.\nVacaville, CA 95688\[email protected]\n \n", "msg_date": "Mon, 10 Jul 2000 08:49:11 -0700", "msg_from": "Wenjin Zheng <[email protected]>", "msg_from_op": true, "msg_subject": "Better way to load data to table" } ]
[ { "msg_contents": "\nThere is a new version of pg_dump-with-blobs at:\n\n ftp://ftp.rhyme.com.au/pub/postgresql/pg_dump/blobs/\n\nThanks to Pavel Jan�k, there are now two files - one for building against\nCVS, and the other for building against 702.\n\nNote that to build these files you must cd into the new pg_dump_xxx\ndirectory and type 'make'. It does not modify the higher level makefiles.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 11 Jul 2000 09:12:02 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump with BLOBS & V7.0.2 fix" } ]
[ { "msg_contents": "The following URL is a short article about software quality. He\ncriticizes the sloppy coding practices of some open-source software. I\nagree with most of his points.\n\nMy personal feeling is that if sloppy coding becomes the norm, I will be\nout of a job. I place a high value on quality code, and I know most\nPostgreSQL do as well.\n\n\thttp://www.osopinion.com/Opinions/MontyManley/MontyManley8.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 20:19:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Software Quality" } ]
[ { "msg_contents": "Tim,\n\nAside: Is your MySQL database running on an ext2 volume or on a ReiserFS volume? I read somewhere that half of SourceForge is now running in Reiser. Is this true?\n\nOn Wed, 05 Jul 2000 10:00:39 -0700, Tim Perdue wrote:\n\n>The Hermit Hacker wrote:\n>> \n>> Will you accept modifications to this if submit'd, to make better use of\n>> features that PostgreSQL has to improve performance? Just downloaded it\n>> and am going to look her through, just wondering if it would be a waste of\n>> time for me to suggest changes though :)\n>\n>If you can figure out an algorithm that shows these nested messages more\n>efficiently on postgres, then that would be a pretty compelling reason\n>to move SourceForge to Postgres instead of MySQL, which is totally\n>reaching its limits on our site. Right now, neither database appears\n>like it will work, so Oracle is starting to loom on the horizon.\n>\n>Tim\n>\n>-- \n>Founder - PHPBuilder.com / Geocrawler.com\n>Lead Developer - SourceForge\n>VA Linux Systems\n>408-542-5723\n\n\n\n", "msg_date": "Mon, 10 Jul 2000 17:39:12 -0700", "msg_from": "\"Randall Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "Randall Parker wrote:\n> \n> Tim,\n> \n> Aside: Is your MySQL database running on an ext2 volume or on a ReiserFS volume? I read somewhere that half of SourceForge is now running in Reiser. Is this true?\n\nThat's ext2. I don't know if \"half\" of SF.net is running on Reiser, but\nsome of the biggest, most critical stuff has been for 6-8 months.\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n", "msg_date": "Mon, 10 Jul 2000 23:00:30 -0700", "msg_from": "Tim Perdue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" } ]
[ { "msg_contents": "> Is WAL planned for 7.1? What is the story with WAL?\n\nYes.\n\n> I'm a bit concerned that the current storage manager is going to be\n> thrown in the bit bucket without any thought for its benefits. There's\n> some stuff I want to do with it like resurrecting time travel,\n\nWhy don't use triggers for time-travel?\nDisadvantages of transaction-commit-time based time travel was pointed out\na days ago.\n\n> some database replication stuff which can make use of the non-destructive\n\nIt was mentioned here that triggers could be used for async replication,\nas well as WAL.\n\n> storage method etc. There's a whole lot of interesting stuff that can be\n> done with the current storage manager.\n\nVadim\n", "msg_date": "Mon, 10 Jul 2000 18:06:40 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: postgres 7.2 features." }, { "msg_contents": "\"Mikheev, Vadim\" wrote:\n> Yes.\n> \n> > I'm a bit concerned that the current storage manager is going to be\n> > thrown in the bit bucket without any thought for its benefits. There's\n> > some stuff I want to do with it like resurrecting time travel,\n> \n> Why don't use triggers for time-travel?\n> Disadvantages of transaction-commit-time based time travel was pointed out\n> a days ago.\n\nTriggers for time travel is MUCH less efficient. There is no copying\ninvolved\neither in memory or on disk with the original postgres time travel, nor\nis\nthere any logic to be executed. Then you've got to figure out strategies\nfor efficiently deleting old data if you want to. The old postgres was\nthe \nRight Thing, if you want access to time travel.\n\n> It was mentioned here that triggers could be used for async replication,\n> as well as WAL.\n\nSame story. Major inefficency. Replication is tough enough without\nmucking\naround with triggers. Once the trigger executes you've got to go and\nstore\nthe data in the database again anyway. 
Then figure out when to delete\nit.\n\n> > storage method etc. There's a whole lot of interesting stuff that can be\n> > done with the current storage manager.\n> \n> Vadim\n", "msg_date": "Tue, 11 Jul 2000 11:27:39 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 7.2 features." }, { "msg_contents": "At 11:27 11/07/00 +1000, Chris Bitmead wrote:\n>\n>> It was mentioned here that triggers could be used for async replication,\n>> as well as WAL.\n>\n>Same story. Major inefficency. Replication is tough enough without\n>mucking\n>around with triggers. Once the trigger executes you've got to go and\n>store\n>the data in the database again anyway. Then figure out when to delete\n>it.\n>\n\nThe WAL *should* be the most efficient technique for replication (this said\nwithout actually having seen it ;-}). \n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 11 Jul 2000 16:37:13 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 7.2 features." }, { "msg_contents": "\nHas sufficient research been done to warrant destruction of what is\ncurrently there?\n\nAccording to the postgres research papers, the no-overwrite storage\nmanager has the following attributes...\n\n* It's always faster than WAL in the presence of stable main memory.\n(Whether the stable caches in modern disk drives is an approximation I\ndon't know).\n\n* It's more scalable and has less logging contention. 
This allows\ngreater scalability in the presence of multiple processors.\n\n* Instantaneous crash recovery.\n\n* Time travel is available at no cost.\n\n* Easier to code and prove correctness. (I used to work for a database\ncompany that implemented WAL, and it took them a large number of years\nbefore they supposedly corrected every bug and crash condition on\nrecovery).\n\n* Ability to keep archival records on an archival medium.\n\nIs there any research on the level of what was done previously to\nwarrant abandoning these benefits? Obviously WAL has its own benefits, I\njust don't want to see the current benefits lost.\n", "msg_date": "Tue, 11 Jul 2000 17:05:42 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storage Manager (was postgres 7.2 features.)" }, { "msg_contents": "Chris Bitmead wrote:\n>\n> Has sufficient research been done to warrant destruction of what is\n> currently there?\n\n What's currently there doesn't have TT any more. So there is\n nothing we would destroy with an overwriting SMGR.\n\n> According to the postgres research papers, the no-overwrite storage\n> manager has the following attributes...\n\n I started using (and hacking) Postgres in version 4.2, which\n was the last official release from Stonebraker's team at UCB\n (and the last one with the PostQUEL query language).\n\n The no-overwriting SMGR concept was one of the things the\n entire project should prove. The idea was to combine\n rollback and logging information with the data itself, by\n only storing new values and remembering when something\n appeared or disappeared. Stable memory just means \"if I know\n my write made it to some point, I can read it back later even\n in the case of a crash\".\n\n This has never been implemented to a degree that is capable\n of catching hardware failures like unexpected loss of power. So\n the project finally said \"it might be possible\". 
Many other\n questions have been answered by the project, but exactly this\n one is still open.\n\n> * It's always faster than WAL in the presence of stable main memory.\n> (Whether the stable caches in modern disk drives is an approximation I\n> don't know).\n\n For writing, yes. But for high updated tables, the scans will\n soon slow down due to the junk contention.\n\n> * It's more scalable and has less logging contention. This allows\n> greater scalability in the presence of multiple processors.\n>\n> * Instantaneous crash recovery.\n\n Because this never worked reliable, Vadim is working on WAL.\n\n> * Time travel is available at no cost.\n>\n> * Easier to code and prove correctness. (I used to work for a database\n> company that implemented WAL, and it took them a large number of years\n> before they supposedly corrected every bug and crash condition on\n> recovery).\n>\n> * Ability to keep archival records on an archival medium.\n\n Has this ever been implemented?\n\n> Is there any research on the level of what was done previously to\n> warrant abandoning these benefits? Obviously WAL has its own benefits, I\n> just don't want to see the current benefits lost.\n\n I see your points. Maybe we can leave the no-overwriting SMGR\n in the code, and just make the new one the default.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Tue, 11 Jul 2000 12:09:14 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Storage Manager (was postgres 7.2 features.)" }, { "msg_contents": "Jan Wieck wrote:\n\n> What's currently there doesn't have TT any more. 
So there is\n> nothing we would destroy with an overwriting SMGR.\n\nI know, but I wanted to resurrect it at some stage, and I think a lot of\nimportant bits are still there.\n\n> > * It's always faster than WAL in the presence of stable main memory.\n> > (Whether the stable caches in modern disk drives is an approximation I\n> > don't know).\n> \n> For writing, yes. But for high updated tables, the scans will\n> soon slow down due to the junk contention.\n\nI imagine highly updated applications won't be interested in time\ntravel. If they are then the alternative of a user-maintained time-stamp\nand triggers will still leave you with \"junk\".\n\n> > * Instantaneous crash recovery.\n> \n> Because this never worked reliable, Vadim is working on WAL.\n\nPostgres recovery is not reliable?\n", "msg_date": "Tue, 11 Jul 2000 23:20:28 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storage Manager (was postgres 7.2 features.)" } ]
[ { "msg_contents": "I am having exactly the same problem, the only thing is, I have a fresh\ncvs copy,\nI did a make clean, and I still have this problem.\n\ncould anybody tell me how I could get around this, or what I should\ncheck out?\n\nThank you very much for your support,\nRobert Stoddard\[email protected]\n\n>\n> > Just downloaded a completely fresh cvs copy. When I\n> > do initdb...\n> >\n> > This user will own all the files and must also own the server\nprocess.\n> >\n> > Creating Postgres database system directory /home/pghack/pgsql/data\n> >\n> > Creating Postgres database system directory\n/home/pghack/pgsql/data/base\n> >\n> > Creating template database in /home/pghack/pgsql/data/base/template1\n\n> > ERROR: Error: unknown type 'oidvector'.\n> >\n> > ERROR: Error: unknown type 'oidvector'.\n> >\n> > syntax error 12 : parse errorinitdb: could not create\ntemplate\n> > database\n> >\n> > ************\n\n", "msg_date": "Mon, 10 Jul 2000 19:05:39 -0700", "msg_from": "rob <[email protected]>", "msg_from_op": true, "msg_subject": "unknown type oidvector in initdb" } ]
[ { "msg_contents": "> > > some stuff I want to do with it like resurrecting time travel,\n> > \n> > Why don't use triggers for time-travel?\n> > Disadvantages of transaction-commit-time based time travel \n> > was pointed out a days ago.\n> \n> Triggers for time travel is MUCH less efficient. There is no copying\n> involved either in memory or on disk with the original postgres time\n> travel, nor is there any logic to be executed.\n\nWith the original TT:\n\n- you are not able to use indices to fetch tuples on time base;\n- you are not able to control tuples life time;\n- you have to store commit time somewhere;\n- you have to store additional 8 bytes for each tuple;\n- 1 sec could be tooo long time interval for some uses of TT.\n\nAnd, btw, what could be *really* very useful it's TT + referential integrity\nfeature. How could it be implemented without triggers?\n\nImho, triggers can give you much more flexible and useful TT...\n\nAlso note that TT was removed from Illustra and authors wrote that\nbuilt-in TT could be implemented without non-overwriting smgr.\n\n> > It was mentioned here that triggers could be used for async \n> > replication, as well as WAL.\n> \n> Same story. Major inefficency. Replication is tough enough without\n> mucking around with triggers. Once the trigger executes you've got\n> to go and store the data in the database again anyway. Then figure\n> out when to delete it.\n\nWhat about reading WAL to get and propagate changes? I don't think that\nreading tables will be more efficient and, btw, \nhow to know what to read (C) -:) ?\n\nVadim\n", "msg_date": "Mon, 10 Jul 2000 19:33:21 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: postgres 7.2 features." }, { "msg_contents": "\nThe bottom line is that the original postgres time-travel implementation\nwas totally cost-free. Actually it may have even speeded things\nup since vacuum would have less work to do. 
Can you convince me that\ntriggers can compare anywhere near for performance? I can't see how.\nAll I'm asking is don't damage anything that is in postgres now that\nis relevant to time-travel in your quest for WAL....\n\n> With the original TT:\n> \n> - you are not able to use indices to fetch tuples on time base;\n\nSounds not very hard to fix..\n\n> - you are not able to control tuples life time;\n\n From the docs... \"Applications that do not want to save historical data\ncan specify a cutoff point for a relation. Cutoff points are defined by\nthe discard command\" The command \"discard EMP before \"1 week\"\ndeletes data in the EMP relation that is more than 1 week old\".\n\n> - you have to store commit time somewhere;\n\nOk, so?\n\n> - you have to store additional 8 bytes for each tuple;\n\nA small price for time travel.\n\n> - 1 sec could be tooo long time interval for some uses of TT.\n\nSo someone in the future can implement finer grains. If time travel\ndisappears this option is not open.\n\n> And, btw, what could be *really* very useful it's TT + referential integrity\n> feature. How could it be implemented without triggers?\n\nIn what way does TT not have referential integrity? As long as the\nsystem\nassures that every transaction writes the same timestamp to all tuples\nthen\nreferential integrity continues to exist.\n\n> Imho, triggers can give you much more flexible and useful TT...\n> \n> Also note that TT was removed from Illustra and authors wrote that\n> built-in TT could be implemented without non-overwriting smgr.\n\nOf course it can be, but can it be done anywhere near as efficiently?\n\n> > > It was mentioned here that triggers could be used for async\n> > > replication, as well as WAL.\n> >\n> > Same story. Major inefficency. Replication is tough enough without\n> > mucking around with triggers. Once the trigger executes you've got\n> > to go and store the data in the database again anyway. 
Then figure\n> > out when to delete it.\n> \n> What about reading WAL to get and propagate changes? I don't think that\n> reading tables will be more efficient and, btw,\n> how to know what to read (C) -:) ?\n\nMaybe that is a good approach, but it's not clear that it is the best.\nMore research is needed. With the no-overwrite storage manager there\nexists a mechanism for deciding how long a tuple exists and this\ncan easily be tapped into for replication purposes. Vacuum could \nserve two purposes of vacuum and replicate.\n", "msg_date": "Tue, 11 Jul 2000 13:46:42 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 7.2 features." }, { "msg_contents": "\nAlso, does WAL offer instantaneous crash recovery like no-overwrite?\n\n\n\"Mikheev, Vadim\" wrote:\n> \n> > > > some stuff I want to do with it like resurrecting time travel,\n> > >\n> > > Why don't use triggers for time-travel?\n> > > Disadvantages of transaction-commit-time based time travel\n> > > was pointed out a days ago.\n> >\n> > Triggers for time travel is MUCH less efficient. There is no copying\n> > involved either in memory or on disk with the original postgres time\n> > travel, nor is there any logic to be executed.\n> \n> With the original TT:\n> \n> - you are not able to use indices to fetch tuples on time base;\n> - you are not able to control tuples life time;\n> - you have to store commit time somewhere;\n> - you have to store additional 8 bytes for each tuple;\n> - 1 sec could be tooo long time interval for some uses of TT.\n> \n> And, btw, what could be *really* very useful it's TT + referential integrity\n> feature. 
How could it be implemented without triggers?\n> \n> Imho, triggers can give you much more flexible and useful TT...\n> \n> Also note that TT was removed from Illustra and authors wrote that\n> built-in TT could be implemented without non-overwriting smgr.\n> \n> > > It was mentioned here that triggers could be used for async\n> > > replication, as well as WAL.\n> >\n> > Same story. Major inefficency. Replication is tough enough without\n> > mucking around with triggers. Once the trigger executes you've got\n> > to go and store the data in the database again anyway. Then figure\n> > out when to delete it.\n> \n> What about reading WAL to get and propagate changes? I don't think that\n> reading tables will be more efficient and, btw,\n> how to know what to read (C) -:) ?\n> \n> Vadim\n", "msg_date": "Tue, 11 Jul 2000 14:01:50 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storage Manager (was postgres 7.2 features.)" }, { "msg_contents": "> \n> The bottom line is that the original postgres time-travel implementation\n> was totally cost-free. Actually it may have even speeded things\n> up since vacuum would have less work to do. Can you convince me that\n> triggers can compare anywhere near for performance? I can't see how.\n> All I'm asking is don't damage anything that is in postgres now that\n> is relevant to time-travel in your quest for WAL....\n\nBasically, time travel was getting in the way of more requested features\nthat had to be added. Keeping it around has a cost, and no one felt the\ncost was worth the benefit. You may disagree, but at the time, that was\nthe consensus, and I assume it still is.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jul 2000 00:19:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 7.2 features." }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > The bottom line is that the original postgres time-travel implementation\n> > was totally cost-free. Actually it may have even speeded things\n> > up since vacuum would have less work to do. Can you convince me that\n> > triggers can compare anywhere near for performance? I can't see how.\n> > All I'm asking is don't damage anything that is in postgres now that\n> > is relevant to time-travel in your quest for WAL....\n> \n> Basically, time travel was getting in the way of more requested features\n\nDo you mean way back when it was removed? How was it getting in the way?\n\n> that had to be added. Keeping it around has a cost, and no one felt the\n> cost was worth the benefit. You may disagree, but at the time, that was\n> the consensus, and I assume it still is.\n", "msg_date": "Tue, 11 Jul 2000 15:38:50 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 7.2 features." }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > >\n> > > The bottom line is that the original postgres time-travel implementation\n> > > was totally cost-free. Actually it may have even speeded things\n> > > up since vacuum would have less work to do. Can you convince me that\n> > > triggers can compare anywhere near for performance? I can't see how.\n> > > All I'm asking is don't damage anything that is in postgres now that\n> > > is relevant to time-travel in your quest for WAL....\n> > \n> > Basically, time travel was getting in the way of more requested features\n> \n> Do you mean way back when it was removed? How was it getting in the way?\n\nYes. Every tuple had this time-thing that had to be tested. 
Vadim\nwanted to remove it to clear up the coding, and we all agreed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jul 2000 09:02:37 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 7.2 features." }, { "msg_contents": "Bruce Momjian wrote:\n\n> > Do you mean way back when it was removed? How was it getting in the way?\n> \n> Yes. Every tuple had this time-thing that had to be tested. Vadim\n> wanted to remove it to clear up the coding, and we all agreed.\n\nAnd did that save a lot of code?\n", "msg_date": "Tue, 11 Jul 2000 23:23:18 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 7.2 features." }, { "msg_contents": "> Bruce Momjian wrote:\n> \n> > > Do you mean way back when it was removed? How was it getting in the way?\n> > \n> > Yes. Every tuple had this time-thing that had to be tested. Vadim\n> > wanted to remove it to clear up the coding, and we all agreed.\n> \n> And did that save a lot of code?\n> \n\nIt simplified the code.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jul 2000 10:47:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 7.2 features." } ]
[ { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Compressor | level | heap size | toastrel | toastidx | seconds\n>> | | | size | size |\n>> -----------+-------+-----------+----------+----------+--------\n>> PGLZ | - | 425,984 | 950,272 | 32,768 | 5.20\n>> zlib | 1 | 499,712 | 614,400 | 16,384 | 6.85\n>> zlib | 3 | 499,712 | 557,056 | 16,384 | 6.75\n>> zlib | 6 | 491,520 | 524,288 | 16,384 | 7.10\n>> zlib | 9 | 491,520 | 524,288 | 16,384 | 7.21\n>\n>Consider that the 25% slowness gets us a 35% disk reduction, and that\n>translates to fewer buffer blocks and disk accesses. Seems there is a\n>clear tradeoff there.\n\nAlso, consider that in the vast majority of cases, reads greatly outnumber\nwrites for any given datum, and particularly so for web applications.\n\nCould we get a benchmark that compared decompression speeds exclusively?\n\n\t-Michael Robinson\n\n", "msg_date": "Tue, 11 Jul 2000 12:20:24 +0800 (+0800)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] Re: [GENERAL] lztext and compression ratios..." } ]
[ { "msg_contents": "Hi,\nIs there SQL-92 SQLSTATE or SQL-96 SQLCODE implemented in\nPostgreSQL (I use version 7.0 on SuSe Linux 6.4) ?\nIf so, how to take the value of it in stored procedures (written in PL/pgSQL\nor C)\nIn documentation I found only a short description of sqlca\nin ecpg.\n\nThanks in advance\nAdam\n\n\n\n\n\n\n\nHi,\nIs there SQL-92 SQLSTATE or SQL-96 SQLCODE implemented \nin\nPostgreSQL (I use version 7.0 on SuSe Linux 6.4) \n?\nIf so, how to take the value of it in stored \nprocedures (written in PL/pgSQL or C)\nIn  documentation I found only a short description of \nsqlca\nin ecpg.\n \nThanks in advance \nAdam", "msg_date": "Tue, 11 Jul 2000 09:15:12 +0200", "msg_from": "\"Adam Walczykiewicz\" <[email protected]>", "msg_from_op": true, "msg_subject": "SQL-92 SQLSTATE in PostgreSQL ?!" } ]
[ { "msg_contents": "I think most of us here are hot on quality. It's one of the reasons why I\ndon't release code before I'm at least happy that what I've got is clean and\neasily maintainable.\n\nHere (MBC) I see several other analysts writing quick hacks that then become\nmission critical. These hacks then become illegible so when they break, I\nend up pulling my hair out because I can't read the code.\n\nYet, they then moan at me because I take longer. However, I test everything\nfirst and I don't reinvent the wheel - if a routine or class is going to be\nuseful, I make sure it's not dependent on too much, and put it in a library.\n\nI hate sloppy coding, but it's a sign of the times. Machines are more\npowerful, and storage is so cheap it's the easy way out not to optimise\nthings.\n\nFor example: How large is the average chess program now? Does anyone\nremember the Sinclair ZX81 and chess that ran in 1K of memory? Or how about\na programming language on the Amiga whose compiler was only 1020 bytes long\n(Fast).\n\nPeter\n\n--\nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council\n\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]]\nSent: Tuesday, July 11, 2000 1:20 AM\nTo: PostgreSQL-development\nSubject: [HACKERS] Software Quality\n\n\nThe following URL is a short article about software quality. He\ncriticizes the sloppy coding practices of some open-source software. I\nagree with most of his points.\n\nMy personal feeling is that if sloppy coding becomes the norm, I will be\nout of a job. I place a high value on quality code, and I know most\nPostgreSQL developers do as well.\n\n\thttp://www.osopinion.com/Opinions/MontyManley/MontyManley8.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jul 2000 08:37:13 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Software Quality" }, { "msg_contents": "Peter Mount wrote:\n> \n> I think most of us here are hot on quality. It's one of the reasons why I\n> don't release code before I'm at least happy that what I've got is clean and\n> easily maintainable.\n> \n> Here (MBC) I see several other analysts writing quick hacks that then become\n> mission critical. These hacks then become illegible so when they break, I\n> end up pulling my hair out because I can't read the code.\n> \n> Yet, they then moan at me because I take longer. However, I test everything\n> first and I don't reinvent the wheel - if a routine or class is going to be\n> useful, I make sure it's not dependent on too much, and put it in a library.\n> \n> I hate sloppy coding, but it's a sign of the times. Machines are more\n> powerful, and storage is so cheap it's the easy way out not to optimise\n> things.\n> \n> For example: How large is the average chess program now? Does anyone\n> remember the Sinclair ZX81 and chess that ran in 1K of memory? Or how about\n> a programming language on the Amiga whose compiler was only 1020 bytes long\n> (Fast).\n\nI used to run the 68000 Macro Assembler for my Amiga 1000 off of\nfloppy disk. There's nothing like a pre-emptively multi-tasking\noperating system with a graphical user interface that runs nicely\nin 256K of RAM ;-)\n\nMike Mascari\n", "msg_date": "Tue, 11 Jul 2000 04:03:42 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Software Quality" }, { "msg_contents": "Mike Mascari wrote:\n> I used to run the 68000 Macro Assembler for my Amiga 1000 off of\n> floppy disk. 
There's nothing like a pre-emptively multi-tasking\n> operating system with a graphical user interface that runs nicely\n> in 256K of RAM ;-)\n\n But there's a complete UNIX V7 available to run in 640K on XT\n hardware. My first 'make world' was on a 4.77MHz 8088 (ahem\n NEC-V20) with 768K and 10MB harddisk. With some better\n hardware (386 and 2MB at minimum - works with less but is no\n fun) you can also run it with a TCP/IP stack and telnet or\n ftp to it like me. My one is a total oversized 486DX4-100\n with 16M and 1G disk.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Tue, 11 Jul 2000 12:34:10 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Software Quality" } ]
[ { "msg_contents": "Applied. I didn't have a copy of this in my mailbox, so either I lost\nit, or it didn't make it to the list.\n\n[ text/enriched is unsupported, treating like TEXT/PLAIN ]\n\n> \n> \n> This was sent a while ago, perhaps it slipped though unnoticed.\n> \n> \n> >>>>\n> \n> <excerpt>At 21:13 9/07/00 -0400, Tom Lane wrote:\n> \n> >\n> \n> >Also I'm not real sure that the unused field will be \"-\" for all the\n> \n> >other languages --- but if that's true you could make it work. Not\n> \n> >hardwiring the language OIDs would definitely be a Good Thing.\n> \n> >\n> \n> \n> Done. In backend/commands/define.c unused field is set to '-' for the\n> moment.\n> \n> \n> A patch for CVS is attached, and I have amended my BLOB dumping version\n> appropriately.\n> \n> \n> </excerpt><<<<<<<<\n> \n> \n\n[ Attachment, skipping... ]\n\n[ text/enriched is unsupported, treating like TEXT/PLAIN ]\n\n> \n> \n> ----------------------------------------------------------------\n> \n> Philip Warner | __---_____\n> \n> Albatross Consulting Pty. Ltd. |----/ - \\\n> \n> (A.C.N. 008 659 498) | /(@) ______---_\n> \n> Tel: (+61) 0500 83 82 81 | _________ \\\n> \n> Fax: (+61) 0500 83 82 82 | ___________ |\n> \n> Http://www.rhyme.com.au | / \\|\n> \n> | --________--\n> \n> PGP key available upon request, | /\n> \n> and from pgp5.ai.mit.edu:11371 |/\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jul 2000 09:07:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch for pg_dump" }, { "msg_contents": "At 09:07 11/07/00 -0400, Bruce Momjian wrote:\n>Applied. 
I didn't have a copy of this in my mailbox, so either I lost\n>it, or it didn't make it to the list.\n>\n\nThis bothers me a little; I've sent several things to the list which either\nhave not made it, or seem to have only made it to some people. Has anybody\nelse noticed odd behaviour?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 11 Jul 2000 23:10:25 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch for pg_dump" }, { "msg_contents": "> At 09:07 11/07/00 -0400, Bruce Momjian wrote:\n> >Applied. I didn't have a copy of this in my mailbox, so either I lost\n> >it, or it didn't make it to the list.\n> >\n> \n> This bothers me a little; I've sent several things to the list which either\n> have not made it, or seem to have only made it to some people. Has anybody\n> else noticed odd behaviour?\n\nI found it in my mailbox. See previous e-mail. I usually wait a day to\nsee if anyone objects to the patch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jul 2000 10:46:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch for pg_dump" } ]
[ { "msg_contents": "\n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >>>> I vacuumed here and it worked. I did not use my \"old\" \n> pg_log file - what\n> >>>> did I lose?\n> >> \n> >> Hard to tell. Any tuples that weren't already marked on \n> disk as \"known\n> >> committed\" have probably gone missing, because their originating\n> >> transaction IDs likely won't be shown as committed in the \n> new pg_log.\n> >> So I'd look for missing tuples from recent transactions in \n> the old DB.\n> >> \n> \n> > Hmm,this may be more serious.\n> > MVCC doesn't see committed(marked ) but\n> > not yet committed(t_xmin > CurrentTransactionId) tuples.\n> > He will see them in the future.\n\nYes, good point. Is there a way to set CurrentTransactionId to a value\ngreater that the smallest t_xmin ?\n\n> \n> But he did a vacuum --- won't that get rid of any tuples that aren't\n> currently considered committed?\n\nHe said that the vacuum was blocking and he thus killed it.\nThe vacuum was thus only partway done.\n\nAndreas \n", "msg_date": "Tue, 11 Jul 2000 15:11:34 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: more corruption " } ]
[ { "msg_contents": "\n> At 11:27 11/07/00 +1000, Chris Bitmead wrote:\n> >\n> >> It was mentioned here that triggers could be used for \n> async replication,\n> >> as well as WAL.\n> >\n> >Same story. Major inefficency. Replication is tough enough without\n> >mucking\n> >around with triggers. Once the trigger executes you've got to go and\n> >store\n> >the data in the database again anyway. Then figure out when to delete\n> >it.\n> >\n> \n> The WAL *should* be the most efficient technique for \n> replication (this said\n> without actually having seen it ;-}). \n\nThat depends on how much you need replicated. If you replicate all or most \ntables WAL will be very good, if you only need a few tables it wont.\n\nAndreas\n", "msg_date": "Tue, 11 Jul 2000 15:21:19 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: postgres 7.2 features." } ]
[ { "msg_contents": "\n\tI am running RedHat 6.1 and have installed PostgreSQL version 7.0.2 after\nuninstalling 6.5.2. I have initialized the database with the following\ncommand at the prompt:\n\n[postgres@cd480405-a bin]$\n/usr/bin/initdb --pgdata=/var/lib/pgsql --pglib=/usr/lib/pgsql\n\nI did not receive any problems from this.\n\nHowever when I try to initialize postmaster by the following command, I get\na problem.\n\n[postgres@cd480405-a bin]$ /usr/bin/postmaster -D /var/lib/pgsql\n000711.08:18:58.139 [1435] DEBUG: Data Base System is starting up at Tue\nJul 11 08:18:58 2000\n000711.08:18:58.139 [1435] DEBUG: Data Base System was shut down at Mon\nJul 10 22:26:16 2000\n000711.08:18:58.143 [1435] DEBUG: Data Base System is in production state\nat Tue Jul 11 08:18:58 2\n000\n\nWhat can I do to resolve this problem?\n\nIn addition, after resolving this problem, how does one set up the\npostmaster to accept requests from the internet?\n\nThank-you for your time.\n\nSean.\n\n", "msg_date": "Tue, 11 Jul 2000 08:32:49 -0500", "msg_from": "\"Sean Alphonse\" <[email protected]>", "msg_from_op": true, "msg_subject": "Errors on initializing postmaster" } ]
[ { "msg_contents": "\n> Has sufficient research been done to warrant destruction of what is\n> currently there?\n> \n> According to the postgres research papers, the no-overwrite storage\n> manager has the following attributes...\n> \n> * It's always faster than WAL in the presence of stable main memory.\n> (Whether the stable caches in modern disk drives is an approximation I\n> don't know).\n\nYes, only if you want to be able to log changes to a txlog to be able to \nrollforward changes after a restore, then that benefit is not so large any\nmore.\n\n> * It's more scalable and has less logging contention. This allows\n> greater scalablility in the presence of multiple processors.\n\nSame as above, if you want a txlog you lose. \nBut if we could switch the txlog off then at least in that mode\nwe would keep all benefits of the non-overwrite smgr.\n\n> \n> * Instantaneous crash recovery.\n> \n> * Time travel is available at no cost.\n\nYes. Yes, it also solves MVCC.\nAn overwrite smgr would need to write the old row to some temporary storage \n(Orcl: rollback segment, Ifmx: physical log) before an update.\nThe overwrite smgr thus needs 2 sync and one async write for one update:\n\n1. new value to txlog (sync)\n2. old value (or page) to temporary storage (rollseg) (sync)\n3. new value to table (async)\n(4. probably a checkpoint write)\n\nThe 7.1 implementation (non-overwrite + WAL) would only need 2 writes and no\n\ncheckpoints:\n1. new value to log (sync)\n2. new value to table (async)\n\n> * Easier to code and prove correctness. (I used to work for a database\n> company that implemented WAL, and it took them a large number of years\n> before they supposedly corrected every bug and crash condition on\n> recovery).\n> \n> * Ability to keep archival records on an archival medium.\n\nIt allows the use of online native OS backups that are not synchronized with\nthe postmaster (e.g. a tar of your datadir). 
\n(Even if for practical production use we would probably need to add a\nutility\nfor cleanup after a restore, the basic methodology is possible)\n\nImho that is a very large benefit.\n\n> Is there any research on the level of what was done previously to\n> warrant abandoning these benefits? Obviously WAL has its own \n> benefits, I\n> just don't want to see the current benefits lost.\n\nI do somehow agree that this step should be well considered, \nespecially since WAL can work with the current non-overwrite smgr. \nIt seem we can have the benefit of both, at the cost of periodic vacuums, no\n?\n\nAndreas\n", "msg_date": "Tue, 11 Jul 2000 15:59:31 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Storage Manager (was postgres 7.2 features.)" } ]
[ { "msg_contents": "\n> > * It's always faster than WAL in the presence of stable main memory.\n> > (Whether the stable caches in modern disk drives is an \n> approximation I\n> > don't know).\n> \n> For writing, yes. But for high updated tables, the scans will\n> soon slow down due to the junk contention.\n\nCan you elaborate please ? If we centralized writes, then the\nnon-overwrite smgr would be very efficient since it only writes to the end \nof a table (e.g. one page write for pagesize/rowsize rows). \n\n> \n> > * It's more scalable and has less logging contention. This allows\n> > greater scalablility in the presence of multiple processors.\n> >\n> > * Instantaneous crash recovery.\n> \n> Because this never worked reliable, Vadim is working on WAL\n\ncrash recovery is bullet proof. the WAL is only needed for rollforward \nafter restore with our non overwrite smgr. \nI do agree that we need a txlog. \n\nAndreas\n", "msg_date": "Tue, 11 Jul 2000 16:09:18 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Storage Manager (was postgres 7.2 features.)" }, { "msg_contents": "Zeugswetter Andreas SB wrote:\n>\n> > > * It's always faster than WAL in the presence of stable main memory.\n> > > (Whether the stable caches in modern disk drives is an\n> > approximation I\n> > > don't know).\n> >\n> > For writing, yes. But for high updated tables, the scans will\n> > soon slow down due to the junk contention.\n>\n> Can you elaborate please ? If we centralized writes, then the\n> non-overwrite smgr would be very efficient since it only writes to the end\n> of a table (e.g. one page write for pagesize/rowsize rows).\n\n Each UPDATE/DELETE does a scan, which will be the more\n expensive the larger the heap grows (more tuples to visit on\n sequential access, farer head seeks on indexed ones). And\n each SELECT does the same.\n\n> > > * It's more scalable and has less logging contention. 
This allows\n> > > greater scalablility in the presence of multiple processors.\n> > >\n> > > * Instantaneous crash recovery.\n> >\n> > Because this never worked reliable, Vadim is working on WAL\n>\n> crash recovery is bullet proof. the WAL is only needed for rollforward\n> after restore with our non overwrite smgr.\n> I do agree that we need a txlog.\n\n Need to precisely define CRASH here. Meant OS/system crashes,\n where you may loose integrity of filesystems as well. This is\n not the fault of the DB, but if a user/client got a \"COMMIT\n is OK\", he should be able to forget about it and go ahead.\n With backup and xlog you can place DB and xlog on different\n raid arrays, still not bullet proof (someone might use an\n automatic gun to shootdown), but alot better.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Tue, 11 Jul 2000 18:41:39 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: AW: Storage Manager (was postgres 7.2 features.)" } ]
[ { "msg_contents": "\n> It appears that brtee indices (at least) can keep references\n> to old toast values that survive a VACUUM! Seems these\n> references live in nodes actually not referring to a heap\n> tuple any more, but used during tree traversal in\n> comparisions. As if an index tuple delete from a btree not\n> necessarily causes the index value to disappear from the\n> btree completely. It'll never be returned by an index scan,\n> but the value is still there somewhere.\n\nWould it be possible to actually delete those entries during vacuum ?\nI guess that would be an overall win, no ?\n\nAndreas\n", "msg_date": "Tue, 11 Jul 2000 16:17:02 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: update on TOAST status'" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> \n> > It appears that brtee indices (at least) can keep references\n> > to old toast values that survive a VACUUM! Seems these\n> > references live in nodes actually not referring to a heap\n> > tuple any more, but used during tree traversal in\n> > comparisions. As if an index tuple delete from a btree not\n> > necessarily causes the index value to disappear from the\n> > btree completely. It'll never be returned by an index scan,\n> > but the value is still there somewhere.\n> \n> Would it be possible to actually delete those entries during vacuum ?\n> I guess that would be an overall win, no ?\n\nSeems that is the only good solution, or somehow link vacuum of TOAST\ntables to index so these TOAST values are not removed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jul 2000 10:49:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: update on TOAST status'" }, { "msg_contents": "Zeugswetter Andreas SB wrote:\n>\n> > It appears that brtee indices (at least) can keep references\n> > to old toast values that survive a VACUUM! Seems these\n> > references live in nodes actually not referring to a heap\n> > tuple any more, but used during tree traversal in\n> > comparisions. As if an index tuple delete from a btree not\n> > necessarily causes the index value to disappear from the\n> > btree completely. It'll never be returned by an index scan,\n> > but the value is still there somewhere.\n>\n> Would it be possible to actually delete those entries during vacuum ?\n> I guess that would be an overall win, no ?\n\n Seems I explained it a little confusing or am confused by it\n myself.\n\n Either way, VACUUM does DELETE those from the indices! But\n btree is a Balanced Tree, and ISTM that it sometimes decides\n to keep a deleted node just to have trees balanced and to\n decide on it whether to go left or right. An index scan will\n never return those nodes, but exactly at the time btree needs\n to decide left/right, it calls the type specific CMP function\n and that in turn invokes the toast fetch.\n\n A pure btree does not have the need for it, but we're using a\n high concurrency optimized version called nbtree. That one\n seems to do so.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Tue, 11 Jul 2000 18:49:58 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: AW: update on TOAST status'" } ]
[ { "msg_contents": "\n> Philip Warner wrote:\n> > At 14:02 11/07/00 +0200, Jan Wieck wrote:\n> > > AFAICS, we need to detoast values for index inserts allways\n> > > and have another toaster inside the index access methods at\n> > > some day.\n> >\n> > We might not need it...at least not in the furst pass.\n> \n> The thing is actually broken and needs a fix. As soon as\n> \"text\" is toastable, it can happen everywhere that text is\n> toasted even if it's actual plain value would perfectly fit\n> into an index tuple. Think of a table with 20 text columns,\n> where the indexed one has a 1024 bytes value, while all\n> others hold 512 bytes. In that case, the indexed one is the\n> biggest and get's toasted first. And if all the data is of\n> nature that compression doesn't gain enough, it might still\n> be the biggest one after that step and will be considered for\n> move off ... boom.\n> \n> We can't let this in in the first pass!\n\nHave you added a minimum size for a value to actually be considered \nfor toasting ? Imho some lower border between 64 and 256 bytes per value\nwould be useful, not only the row size.\n\nImho the same logic should apply to choose wheather a fixed (max) size\ncolumn\nqualifys for toasting (e.g. varchar(32) should never qualify for toasting)\n\nAndreas\n", "msg_date": "Tue, 11 Jul 2000 16:25:22 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: update on TOAST status'" }, { "msg_contents": "Zeugswetter Andreas SB wrote:\n>\n> > Philip Warner wrote:\n> > > At 14:02 11/07/00 +0200, Jan Wieck wrote:\n> > > > AFAICS, we need to detoast values for index inserts allways\n> > > > and have another toaster inside the index access methods at\n> > > > some day.\n> > >\n> > > We might not need it...at least not in the furst pass.\n> >\n> > The thing is actually broken and needs a fix. 
As soon as\n> > \"text\" is toastable, it can happen everywhere that text is\n> > toasted even if it's actual plain value would perfectly fit\n> > into an index tuple. Think of a table with 20 text columns,\n> > where the indexed one has a 1024 bytes value, while all\n> > others hold 512 bytes. In that case, the indexed one is the\n> > biggest and get's toasted first. And if all the data is of\n> > nature that compression doesn't gain enough, it might still\n> > be the biggest one after that step and will be considered for\n> > move off ... boom.\n> >\n> > We can't let this in in the first pass!\n>\n> Have you added a minimum size for a value to actually be considered\n> for toasting ? Imho some lower border between 64 and 256 bytes per value\n> would be useful, not only the row size.\n>\n> Imho the same logic should apply to choose wheather a fixed (max) size\n> column\n> qualifys for toasting (e.g. varchar(32) should never qualify for toasting)\n\n The toaster simply tries to keep all heap tuples smaller than\n 2K (MaxTupleSize / 4). It does so by first compressing all\n compressable ones, then moving off. Both steps allways pick\n the attributes in descending size order.\n\n It does not know anything about indices (and that wouldn't\n help either because indices could be created later as done in\n every dump).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Tue, 11 Jul 2000 18:55:38 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: AW: update on TOAST status'" } ]
[ { "msg_contents": "\n> > > Basically, time travel was getting in the way of more \n> requested features\n> > \n> > Do you mean way back when it was removed? How was it \n> getting in the way?\n> \n> Yes. Every tuple had this time-thing that had to be tested. Vadim\n> wanted to revove it to clear up the coding, and we all agreed.\n\nWas it not only the mapping between a t_xmin and a wallclock time \nthat was removed ? Thus you can still see the history of a row, but\nnot the corresponding times ?\n\nAndreas\n", "msg_date": "Tue, 11 Jul 2000 16:28:52 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: postgres 7.2 features." } ]
[ { "msg_contents": "\nSince I broke my table on hub and am awaiting assistance I'm shifting\naway from the website temporarily and back to the md5 stuff. In going\nover the previous conversations I've come up with the following:\n\n\nThe client can be sending the password in either plain text or in \nhashed form with one of the two scenarios for a login process:\n\ndirection\twhat\n----------------------------------------------\nCL -> PG\tusername\nPG -> CL\trandom salt\nCL -> PG\tplaintext passwd\n\n\nCL -> PG\tusername\nPG -> CL\trandom salt\nCL -> PG\tencrypted passwd\n\n\n----------------------------------------------\n\nWhen PG receives the password, it doesn't know if the password is\nencrypted or not. It checks first plaintext matching, then encrypted\nmatching using the random salt it sent to CL.\n\n---------------------------------------------\n\nPossible encryption methods:\n\nMD5(password+salt)\n\nMD5(MD5(password) + MD5(salt))\n\nMD5(password+salt)\n\nMD5(MD5(username+password)+salt)\n\nMD5(MD5(username+password)+MD5(salt))\n\nMD5(MD5(username+password+salt))\n\nand many others.\n\n---------------------------------------------\n\nIs there a preference to the method used?\n\nAlso while thinking about this and the vulnerability of the wire itself, \nI've also come up with something that may enhance the login security.\n\nIf CL sends the MD5 of the username rather than the plaintext username,\nonly CL and PG will know what the username is. PG will know it by \ncomparing it with the MD5 of every username in pg_shadow. So even if the\nwire is being sniffed the unhashed username can be used in the password's\nencryption along with the salt sent by PG. This method will take longer\nfor a user to log in, but the login process is only per session, not per\nSQL call. 
\n\nComments?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 11 Jul 2000 10:50:20 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "md5 again" }, { "msg_contents": "> direction\twhat\n> ----------------------------------------------\n> CL -> PG\tusername\n> PG -> CL\trandom salt\n> CL -> PG\tplaintext passwd\n> \n> \n> CL -> PG\tusername\n> PG -> CL\tuser salt \n^^^^^^^^^^^^^^^^^^^^^^^^^\n> PG -> CL\trandom salt\n> CL -> PG\tencrypted passwd\n> \n\n\nMD5(MD5(username+user_salt)+random_salt)\n\nPostmaster takes its pg_shadow MD5(username+user_salt) and does another\nMD5 with the random salt and compares it with what was sent from the\nclient.\n\nIf the connection is defined as requiring crypt or password, only this\nMD5 method can be used. If trusted is defined, cleartext passwords can\nbe accepted.\n\nDon't bother encrypting the username. No security is gained.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jul 2000 11:00:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: md5 again" }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n> When PG receives the password, it doesn't know if the password is\n> encrypted or not.\n\nWhat do you mean it doesn't know if the password is encrypted or not?\nThe protocol should tell it. 
You can't do this without a protocol\nextension...\n\n> Is there a preference to the method used?\n\nI believe there was a very specific agreement about what the challenge\nand response should be, and none of this looks like it. See the\ndiscussion about how to have both wire security and encrypted-on-disk\npasswords --- doing both is trickier than it sounds.\n\nAs far as the specific mechanics of applying MD5 go, I'd suggest\nconcatenating whatever strings need to go into a particular iteration\nwith appropriate separators (a null byte would probably do) and applying\nMD5 just once. I can't see any reason to do things like\nMD5(MD5(string1)+MD5(string2)). (IIRC there were places in the proposed\nprotocol where you'd be hashing a string previously hashed by the other\nside, but that's not what I'm talking about here. Given particular\ninputs to be combined, it seems sufficient to just concatenate them and\ndo one round of MD5.)\n\n> If CL sends the MD5 of the username rather than the plaintext username,\n> only CL and PG will know what the username is. PG will know it by \n> comparing it with the MD5 of every username in pg_shadow. So even if the\n> wire is being sniffed the unhashed username can be used in the password's\n> encryption along with the salt sent by PG. This method will take longer\n> for a user to log in, but the login process is only per session, not per\n> SQL call. \n\nA linear search of pg_shadow to log in is not acceptable; we don't want\nto make login any slower than we have to. 
I see no real gain in security\nfrom doing this anyway...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jul 2000 12:32:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: md5 again " }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> MD5(MD5(username+user_salt)+random_salt)\n\n> Postmaster takes its pg_shadow MD5(username+user_salt) and does another\n> MD5 with the random salt and compares it with what was sent from the\n> client.\n\nDoesn't seem quite right ... where's the password?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jul 2000 12:33:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: md5 again " }, { "msg_contents": "On Tue, 11 Jul 2000, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > MD5(MD5(username+user_salt)+random_salt)\n> \n> > Postmaster takes its pg_shadow MD5(username+user_salt) and does another\n> > MD5 with the random salt and compares it with what was sent from the\n> > client.\n> \n> Doesn't seem quite right ... where's the password?\n\nI had assumed Bruce was referring to the password with user_salt. 
\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 11 Jul 2000 12:44:18 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: md5 again " }, { "msg_contents": "On Tue, 11 Jul 2000, Tom Lane wrote:\n\n> Vince Vielhaber <[email protected]> writes:\n> > When PG receives the password, it doesn't know if the password is\n> > encrypted or not.\n> \n> What do you mean it doesn't know if the password is encrypted or not?\n> The protocol should tell it. You can't do this without a protocol\n> extension...\n\nI was shooting for automatic detection.\n\n> \n> > Is there a preference to the method used?\n> \n> I believe there was a very specific agreement about what the challenge\n> and response should be, and none of this looks like it. See the\n> discussion about how to have both wire security and encrypted-on-disk\n> passwords --- doing both is trickier than it sounds.\n> \n> As far as the specific mechanics of applying MD5 go, I'd suggest\n> concatenating whatever strings need to go into a particular iteration\n> with appropriate separators (a null byte would probably do) and applying\n> MD5 just once. I can't see any reason to do things like\n> MD5(MD5(string1)+MD5(string2)). (IIRC there were places in the proposed\n> protocol where you'd be hashing a string previously hashed by the other\n> side, but that's not what I'm talking about here. 
Given particular\n> inputs to be combined, it seems sufficient to just concatenate them and\n> do one round of MD5.)\n> \n> > If CL sends the MD5 of the username rather than the plaintext username,\n> > only CL and PG will know what the username is. PG will know it by \n> > comparing it with the MD5 of every username in pg_shadow. So even if the\n> > wire is being sniffed the unhashed username can be used in the password's\n> > encryption along with the salt sent by PG. This method will take longer\n> > for a user to log in, but the login process is only per session, not per\n> > SQL call. \n> \n> A linear search of pg_shadow to log in is not acceptable; we don't want\n> to make login any slower than we have to. I see no real gain in security\n> from doing this anyway...\n\nBy knowing what PG will do with the username and random salt, sniffing \nthe wire can make guessing the password trivial. If the username was\nnever sent over the wire in the clear the unhashed username is an unknown\nsalt to he who is sniffing. 
But it's true that it would introduce a\nslower than necessary login.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 11 Jul 2000 12:49:32 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: md5 again " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > MD5(MD5(username+user_salt)+random_salt)\n\nSorry, it is:\n\n\tMD5(MD5(password+user_salt)+random_salt)\n\n> \n> > Postmaster takes its pg_shadow MD5(username+user_salt) and does another\n> > MD5 with the random salt and compares it with what was sent from the\n> > client.\n> \n> Doesn't seem quite right ... where's the password?\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jul 2000 12:49:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: md5 again" }, { "msg_contents": "> > > If CL sends the MD5 of the username rather than the plaintext username,\n> > > only CL and PG will know what the username is. PG will know it by \n> > > comparing it with the MD5 of every username in pg_shadow. So even if the\n> > > wire is being sniffed the unhashed username can be used in the password's\n> > > encryption along with the salt sent by PG. This method will take longer\n> > > for a user to log in, but the login process is only per session, not per\n> > > SQL call. 
\n> > \n> > A linear search of pg_shadow to log in is not acceptable; we don't want\n> > to make login any slower than we have to. I see no real gain in security\n> > from doing this anyway...\n> \n> By knowing what PG will do with the username and random salt, sniffing \n> the wire can make guessing the password trivial. If the username was\n> never sent over the wire in the clear the unhashed username is an unknown\n> salt to he who is sniffing. But it's true that it would introduce a\n> slower than necessary login.\n> \n\nDoes it? I thought it was the password being run through MD5 that made\nit secure.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  [email protected]               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jul 2000 12:51:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: md5 again" }, { "msg_contents": "> If CL sends the MD5 of the username rather than the plaintext username,\n> only CL and PG will know what the username is. PG will know it by \n> comparing it with the MD5 of every username in pg_shadow. So even if the\n> wire is being sniffed the unhashed username can be used in the password's\n> encryption along with the salt sent by PG. This method will take longer\n> for a user to log in, but the login process is only per session, not per\n> SQL call. \n\n But don't forget that some web applications need fast login. And if it is\nnot possible to use a persistent connection, a login is necessary for each\naccess to a web page (...etc.).\n\n Login speed is a feature worth keeping track of, too. 
\n\n\t\t\t\t\t\tKarel\t\t\t\t\t\n\n", "msg_date": "Tue, 11 Jul 2000 18:51:46 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: md5 again" }, { "msg_contents": "On Tue, 11 Jul 2000, Bruce Momjian wrote:\n\n> > > > If CL sends the MD5 of the username rather than the plaintext username,\n> > > > only CL and PG will know what the username is. PG will know it by \n> > > > comparing it with the MD5 of every username in pg_shadow. So even if the\n> > > > wire is being sniffed the unhashed username can be used in the password's\n> > > > encryption along with the salt sent by PG. This method will take longer\n> > > > for a user to log in, but the login process is only per session, not per\n> > > > SQL call. \n> > > \n> > > A linear search of pg_shadow to log in is not acceptable; we don't want\n> > > to make login any slower than we have to. I see no real gain in security\n> > > from doing this anyway...\n> > \n> > By knowing what PG will do with the username and random salt, sniffing \n> > the wire can make guessing the password trivial. If the username was\n> > never sent over the wire in the clear the unhashed username is an unknown\n> > salt to he who is sniffing. But it's true that it would introduce a\n> > slower than necessary login.\n> > \n> \n> Does it? I thought it was the password being run through MD5 that made\n> it secure.\n\nSimple dictionary passwords. Run them thru a script and compare the \noutput. 
\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 11 Jul 2000 12:56:29 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: md5 again" }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n> By knowing what PG will do with the username and random salt, sniffing \n> the wire can make guessing the password trivial.\n\nNot if the wire protocol is done correctly, ie, passwords are only\nsent in hashed form.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jul 2000 13:01:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: md5 again " }, { "msg_contents": "> > > By knowing what PG will do with the username and random salt, sniffing \n> > > the wire can make guessing the password trivial. If the username was\n> > > never sent over the wire in the clear the unhashed username is an unknown\n> > > salt to he who is sniffing. But it's true that it would introduce a\n> > > slower than necessary login.\n> > > \n> > \n> > Does it? I thought it was the password being run through MD5 that made\n> > it secure.\n> \n> Simple dictionary passwords. Run them thru a script and compare the \n> output. \n\nI see. In the past, they couldn't see the password salt. Now they can\nsee both salts, both random and password. Seems they can't use a\ndictionary for the random salt to figure out the MD5 version of the\npassword, can they, because they have to crack that before doing the\npassword part. 
We are really double-encrypting it, like\ntriple-DES.\n\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  [email protected]               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jul 2000 13:07:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: md5 again" }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n> Simple dictionary passwords. Run them thru a script and compare the \n> output. \n\nI was under the impression we'd prevented that by use of a random salt\nchosen on-the-fly for each login attempt ... have to go reread the\nthread to be sure though.\n\nIn any case, if your threat model is a dictionary attack, what's to\nstop the attacker from using a dictionary of likely usernames as well?\nI still don't see much security gain from hashing the username.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jul 2000 13:07:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: md5 again " }, { "msg_contents": "On Tue, 11 Jul 2000, Tom Lane wrote:\n\n> Vince Vielhaber <[email protected]> writes:\n> >>>> Simple dictionary passwords. Run them thru a script and compare the \n> >>>> output. \n> \n> > When I went back and reread the thread, it was PG sending the random\n> > salt. The username, password and random salt were hashed and sent \n> > back. 
Therefore the username and random salt have both been on the\nwire in the clear.\n\n> In any case, if your threat model is a dictionary attack, what's to\n> stop the attacker from using a dictionary of likely usernames as well?\n> I still don't see much security gain from hashing the username.\n\ndictionary of likely usernames: tgl, vev, buzz, wood_tick, ... Now\nthat'd be a dictionary! If only the random salt were on the wire, the\nattacker would need to guess both the username and the password.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 11 Jul 2000 13:23:51 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: md5 again " }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n>>>> Simple dictionary passwords. Run them thru a script and compare the \n>>>> output. \n\n> When I went back and reread the thread, it was PG sending the random\n> salt. The username, password and random salt were hashed and sent \n> back. Therefore the username and random salt have both been on the\n> wire in the clear.\n\nHmm. So if you sniffed the transaction you'd have all the info needed\nto verify a guess at a password. 
It would be nice to improve on that.\n\nHowever, I thought we'd settled on a protocol that involved multiple\nrandom salts being chosen on-the-fly, so the above doesn't sound like\nthe right thing...\n\n>> In any case, if your threat model is a dictionary attack, what's to\n>> stop the attacker from using a dictionary of likely usernames as well?\n\n> dictionary of likely usernames: tgl, vev, buzz, wood_tick, ... Now\n> that'd be a dictionary!\n\nNo bigger than a dictionary of likely passwords, and furthermore you\nmay have good reason to guess a username based on outside info (eg,\nwhere the connection is coming from). A sniffer who's attacking a\nparticular database probably has some idea who its users are, and\nusernames are not customarily hidden carefully.\n\n> If only the random salt were on the wire, the\n> attacker would need to guess both the username and the password.\n\nAnd so would the postmaster ;-). The problem here is that the hashed\nusername has to be sent, and there can be no hidden salt involved\nsince it's the first step of the protocol. So the attacker knows\nexactly what the hashed username is, and if he can guess the username\nthen he can verify it. Then he moves on to guessing/verifying the\npassword. I still don't see a material gain in security here, given\nthat I believe usernames are likely to be pretty easy to guess.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jul 2000 13:52:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: md5 again " }, { "msg_contents": "> And so would the postmaster ;-). The problem here is that the hashed\n> username has to be sent, and there can be no hidden salt involved\n> since it's the first step of the protocol. So the attacker knows\n> exactly what the hashed username is, and if he can guess the username\n> then he can verify it. Then he moves on to guessing/verifying the\n> password. 
I still don't see a material gain in security here, given\n> that I believe usernames are likely to be pretty easy to guess.\n\nJust do a 'ps' and you have the username for each connection.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jul 2000 13:58:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: md5 again" }, { "msg_contents": "On Tue, 11 Jul 2000, Tom Lane wrote:\n\n> Vince Vielhaber <[email protected]> writes:\n> >>>> Simple dictionary passwords. Run them thru a script and compare the \n> >>>> output. \n> \n> > When I went back and reread the thread, it was PG sending the random\n> > salt. The username, password and random salt were hashed and sent \n> > back. Therefore the username and random salt have both been on the\n> > wire in the clear.\n> \n> Hmm. So if you sniffed the transaction you'd have all the info needed\n> to verify a guess at a password. It would be nice to improve on that.\n> \n> However, I thought we'd settled on a protocol that involved multiple\n> random salts being chosen on-the-fly, so the above doesn't sound like\n> the right thing...\n\nThe salts have to go over the wire. If the user chooses a salt, adds it\nto his password and MD5's it with the salt we send, how do we get the\nuser's salt? I guess we can already have it stored but that doesn't\nseem right. The only scenario from the previous thread involved a\nreversible (or at least somewhat reversible) encryption method, not md5\nwhich is a hash.\n\n> \n> >> In any case, if your threat model is a dictionary attack, what's to\n> >> stop the attacker from using a dictionary of likely usernames as well?\n> \n> > dictionary of likely usernames: tgl, vev, buzz, wood_tick, ... 
Now\n> > that'd be a dictionary!\n> \n> No bigger than a dictionary of likely passwords, and furthermore you\n> may have good reason to guess a username based on outside info (eg,\n> where the connection is coming from). A sniffer who's attacking a\n> particular database probably has some idea who its users are, and\n> usernames are not customarily hidden carefully.\n> \n> > If only the random salt were on the wire, the\n> > attacker would need to guess both the username and the password.\n> \n> And so would the postmaster ;-). The problem here is that the hashed\n> username has to be sent, and there can be no hidden salt involved\n> since it's the first step of the protocol. So the attacker knows\n> exactly what the hashed username is, and if he can guess the username\n> then he can verify it. Then he moves on to guessing/verifying the\n> password. I still don't see a material gain in security here, given\n> that I believe usernames are likely to be pretty easy to guess.\n\nThe postmaster would have a pretty good idea, the username could even\nbe hashed with the same salt we send for the password, but this part\nis rather moot since it would undoubtedly increase the login time beyond\nan acceptable delay.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 11 Jul 2000 15:27:11 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: md5 again " }, { "msg_contents": "On Tue, 11 Jul 2000, Bruce Momjian wrote:\n\n> > And so would the postmaster ;-). 
The problem here is that the hashed\n> > username has to be sent, and there can be no hidden salt involved\n> > since it's the first step of the protocol. So the attacker knows\n> > exactly what the hashed username is, and if he can guess the username\n> > then he can verify it. Then he moves on to guessing/verifying the\n> > password. I still don't see a material gain in security here, given\n> > that I believe usernames are likely to be pretty easy to guess.\n> \n> Just do a 'ps' and you have the username for each connection.\n\nTrue, but I was more concerned with remote sniffing.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 11 Jul 2000 15:28:49 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: md5 again" } ]
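The challenge-response scheme the thread converges on — store MD5(password + user_salt) in pg_shadow, then have the client re-hash that stored value with a per-connection random salt so the cleartext password never crosses the wire — can be sketched in a few lines. This is only an illustration of the idea being discussed, not PostgreSQL code; all function names here are invented.

```python
import hashlib

def md5_hex(s: str) -> str:
    # Hex digest of MD5 over a UTF-8 string.
    return hashlib.md5(s.encode("utf-8")).hexdigest()

# What pg_shadow would store: MD5(password + user_salt), never the cleartext.
def stored_entry(password: str, user_salt: str) -> str:
    return md5_hex(password + user_salt)

# Client side: re-hash the stored form with the random salt the postmaster sent.
def client_response(password: str, user_salt: str, random_salt: str) -> str:
    return md5_hex(stored_entry(password, user_salt) + random_salt)

# Server side: knows only the stored hash, yet can compute the same response.
def server_check(stored: str, random_salt: str, response: str) -> bool:
    return md5_hex(stored + random_salt) == response
```

A sniffer sees both salts and the response but never a reusable credential; as Vince points out, though, knowing both salts is still enough to test dictionary guesses of the password offline.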
[ { "msg_contents": "Just wanted to give a short message, that current snapshot does not \ncompile on AIX due to the fmgr changes.\nI have some patches that unfortunately affect other ports but not a complete\nfix.\nWhy are the pg_dlopen ... declared extern in include/utils/dynamic_loader.h\n?\n\naside: Why is sys/fcntl.h included in c.h and not in config.h ? \nAIX has fcntl.h, but that seems not to be compatible with dlfcn.h.\nIt does not seem to be missing if commented out.\n\nAndreas\n\nPS: Am I the only loser who now looking at the new fmgr interface has his\ndoubts \nabout the changes ?\n\nWhat exactly does it gain except masking badly written code\n(casts to wrong datatype like ints to longs) that goof the optimizers and\n64bit ports\nand solving the null value problem ?\n\nMy ideas would be:\n\t- multi row returns\n\t- multi column returns (not opaque)\n\t- portability to other implementations \n\t  (Informix is similar to our old fmgr interface, \n\t  Oracle's interface I think is not open to the public)\n\nIt would e.g. have been possible to add the NULL indicator array as last\nargument\nto the existing scheme. Informix reserves one value of each pass by value\ndatatype\nto represent NULLs which imho is a bad idea (e.g. 
INT_MIN for int's).\n", "msg_date": "Tue, 11 Jul 2000 17:51:14 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "fmgr changes not yet ported to AIX" }, { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> Just wanted to give a short message, that current snapshot does not \n> compile on AIX due to the fmgr changes.\n\nWithout more details, that's completely unhelpful.\n\n> What exactly does it gain except masking badly written code\n> (casts to wrong datatype like ints to longs) that goof the optimizers and\n> 64bit ports\n> and solving the null value problem ?\n\n> My ideas would be:\n> \t- multi row returns\n> \t- multi column returns (not opaque)\n\nI've said before and I'll say it again: all I'm doing in this round is\nfixing our portability and NULL-value problems. Better handling of sets\nand tuples is a higher-level problem. If you want to start working on\nthat, be my guest...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jul 2000 12:13:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fmgr changes not yet ported to AIX " } ]
[ { "msg_contents": "> The bottom line is that the original postgres time-travel \n> implementation was totally cost-free. \n\nI disagree. I can't consider additional > 8 bytes per tuple +\npg_time (4 bytes per transaction... please remember that ppl\ncomplain even about pg_log - 2 bits per transaction) as\ntotally cost-free for half-useful built-in feature used\nby 10% of users.\nNote that I don't talk about overwriting/non-overwriting smgr at all!\nIt's not issue. There are no problems with keeping dead tuples in files\nas long as required. When I told about new smgr I meant ability to re-use\nspace without vacuum and store > 1 tables per file.\nBut I'll object storing transaction commit times in tuple header and\nold-designed pg_time. If you want to do TT - welcome... but make\nit optional, without affect for those who need not in TT.\n\n> Actually it may have even speeded things up since vacuum would have\n> less work to do.\n\nThis would make happy only *TT users* -:)\n\n> Can you convince me that triggers can compare anywhere near for\nperformance?\n\nNo, they can't. But this is bad only for *TT users* -:)\n\n> I can't see how. All I'm asking is don't damage anything that is in\npostgres\n> now that is relevant to time-travel in your quest for WAL....\n\nIt's not related to WAL!\nThough... With WAL pg_log is not required to be permanent: we could re-use\ntransaction IDs after db restart... Well, seems we can handle this.\n\n> > With the original TT:\n> > \n> > - you are not able to use indices to fetch tuples on time base;\n> \n> Sounds not very hard to fix..\n\nReally? Commit time is unknown till commit - so you would have to insert\nindex tuples just before commit... how to know what insert?\n\n> > - you are not able to control tuples life time;\n> \n> From the docs... 
\"Applications that do not want to save \nhistorical data can specify a cutoff point for a relation.\nCutoff points are defined by the discard command\"\n\nI meant another thing: when I have to deal with history,\nI need sometimes to change historical date-s (c) -:))\nProbably we can handle this as well, just some additional\ncomplexity -:)\n\n> > - you have to store commit time somewhere;\n> \n> Ok, so?\n\nSpace.\n\n> > - you have to store additional 8 bytes for each tuple;\n> \n> A small price for time travel.\n\nNot for those who aren't going to use TT at all.\nLower performance of trigger implementation is smaller price for me.\n\n> > - 1 sec could be too long time interval for some uses of TT.\n> \n> So someone in the future can implement finer grains. If time travel\n> disappears this option is not open.\n\nOpened, with triggers -:)\nAs well as Colour-Travel and all other travels -:)\n\n> > And, btw, what could be *really* very useful it's TT + \n> > referential integrity feature. How could it be implemented without\ntriggers?\n> \n> In what way does TT not have referential integrity? As long as the\n> system assures that every transaction writes the same timestamp to all\n> tuples then referential integrity continues to exist.\n\nThe same tuple of a table with PK may be updated many times by many\ntransactions\nin 1 second. For 1 sec grain you would read *many* historical tuples with\nthe same\nPK all valid in the same time. So, we need \"finer grains\" right now...\n\n> > Imho, triggers can give you much more flexible and useful TT...\n> > \n> > Also note that TT was removed from Illustra and authors wrote that\n> > built-in TT could be implemented without non-overwriting smgr.\n> \n> Of course it can be, but can it be done anywhere near as efficiently?\n\nBut without losing efficiency where TT is not required.\n\n> > > > It was mentioned here that triggers could be used for async\n> > > > replication, as well as WAL.\n> > >\n> > > Same story. 
Major inefficiency. Replication is tough enough without\n> > > mucking around with triggers. Once the trigger executes you've got\n> > > to go and store the data in the database again anyway. Then figure\n> > > out when to delete it.\n> > \n> > What about reading WAL to get and propagate changes? I \n> > don't think that reading tables will be more efficient and, btw,\n> > how to know what to read (C) -:) ?\n> \n> Maybe that is a good approach, but it's not clear that it is the best.\n> More research is needed. With the no-overwrite storage manager there\n> exists a mechanism for deciding how long a tuple exists and this\n> can easily be tapped into for replication purposes. Vacuum could \n\nThis \"mechanism\" (just additional field in pg_class) can be used\nfor WAL based replication as well.\n\n> serve two purposes of vacuum and replicate.\n\nVacuum is already slow, it's better to make it faster than ever slower...\nI see vacuum as *optional* command someday... when we'll be able to\nre-use space.\n\nVadim\n", "msg_date": "Tue, 11 Jul 2000 11:19:05 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: postgres 7.2 features." }, { "msg_contents": "\"Mikheev, Vadim\" wrote:\n\n> > > - 1 sec could be too long time interval for some uses of TT.\n> >\n> > So someone in the future can implement finer grains. If time travel\n> > disappears this option is not open.\n> \n> Opened, with triggers -:)\n> As well as Colour-Travel and all other travels -:)\n\nMaybe you're right and time-travel should be relegated to the dustbin of\nhistory. But it always seemed a really neat design ever since I read\nabout it 8 years ago or something.\n\nIt does seem to me that time is a much more fundamental idea to model\nexplicitly than Colour or any other thing you might dream up. The\nconcept that a data-store has a history is a very fundamental concept.\n\nThis can get very philosophical. 
Think about the difference between a\npure-functional programming language and a regular programming language.\nOne way of looking at it is that a pure-functional language models time\nexplicitly whereas a regular language models time implicitly. In a\npure-functional language a change of state is brought about by creating\na whole new state, never by destroying the previous state. The previous\nstate continues to exist as long as you have a need for it. Since I'm a\nfan of pure functional languages this idea appeals to me.\n\nOn a practical note, the postgres time travel was very easy to use. It's\nhard for me to see how a trigger mechanism could be as easy. For example\nby default SELECT would always get the current values - sensible. If you\nwant historical values you have to add extra conditions, in a\nsimple-to-use syntax. The database took care of destroying historical data\naccording to your parameters. Can a trigger mechanism really make things\nthis easy?\n", "msg_date": "Wed, 12 Jul 2000 11:22:15 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 7.2 features." } ]
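The no-overwrite semantics being debated above — an update appends a new tuple version and merely closes the validity interval of the old one, so a query "as of" time t still sees the version valid at t — can be mimicked with a toy structure like this. A conceptual sketch only, not the Postgres storage manager; it uses a monotonically increasing counter instead of wall-clock time, which incidentally sidesteps Vadim's 1-second-granularity objection.

```python
INFINITY = float("inf")

class TimeTravelRow:
    """One logical row stored as append-only versions (value, t_min, t_max)."""

    def __init__(self):
        self.versions = []  # each version is valid for t_min <= t < t_max
        self.clock = 0      # monotonically increasing "commit time"

    def update(self, value):
        self.clock += 1
        if self.versions:
            old_value, t_min, _ = self.versions[-1]
            # Close the old version instead of overwriting it.
            self.versions[-1] = (old_value, t_min, self.clock)
        self.versions.append((value, self.clock, INFINITY))
        return self.clock

    def as_of(self, t):
        # Linear scan; the real thing would check validity bounds per tuple.
        for value, t_min, t_max in self.versions:
            if t_min <= t < t_max:
                return value
        return None  # no version valid at t (e.g. before the first insert)
```

Here `row.as_of(row.clock)` is the ordinary "current" SELECT, older timestamps reach historical versions, and a "discard" cutoff would simply drop closed versions older than the cutoff.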
[ { "msg_contents": "> Also, does WAL offer instantaneous crash recovery like no-overwrite?\n\nNo, but it offers < 1 fsync per commit, instead of fsyncing every\nchanged table/index...\n+ rollback changes of aborted transactions.\n\nAnd, btw, WAL can live with non-overwriting smgr without problem -\nwhy do you oppose them?\n\nAlso, try to power off while inserts into table with indices -\nindices will be broken...\n\nVadim\n\n", "msg_date": "Tue, 11 Jul 2000 11:26:04 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Storage Manager (was postgres 7.2 features.)" } ]
[ { "msg_contents": "> According to the postgres research papers, the no-overwrite storage\n> manager has the following attributes...\n\nBut don't forget about conclusion they made...\n\n> * It's always faster than WAL in the presence of stable main memory.\n> (Whether the stable caches in modern disk drives is an approximation I\n> don't know).\n\nAnd much slower in the absence...\n\n> * It's more scalable and has less logging contention. This allows\n> greater scalablility in the presence of multiple processors.\n\nWe can implement multiple log files (on different disks) someday.\nThe only contention will be for reading/changing some number\n(required for recoverer to read logs in right order)...\n\n> * Instantaneous crash recovery.\n\nAnd slow vacuum...\n\n> * Time travel is available at no cost.\n\nWe told about that already.\n\n> * Easier to code and prove correctness. (I used to work for a database\n> company that implemented WAL, and it took them a large number of years\n> before they supposedly corrected every bug and crash condition on\n> recovery).\n\nThe only plus for me -:)\n\nVadim\n", "msg_date": "Tue, 11 Jul 2000 11:49:06 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Storage Manager (was postgres 7.2 features.)" } ]