[ { "msg_contents": "\n> Bruce Momjian <[email protected]> writes:\n> > You need something that works from the command line, and \n> something that\n> > works if PostgreSQL is not running. How would you restore \n> one file from\n> > a tape.\n> \n> \"Restore one file from a tape\"? How are you going to do that anyway?\n> You can't save and restore portions of a database like that, because\n> of transaction commit status problems. To restore table X correctly,\n> you'd have to restore pg_log as well, and then your other tables are\n> hosed --- unless you also restore all of them from the backup. Only\n> a complete database restore from tape would work, and for that you\n> don't need to tell which file is which. So the above argument is a\n> red herring.\n\n From what I know it is possible to simply restore one table file\nsince pg_log keeps all tid's. Of course it cannot guarantee integrity\nand does not work if the table was altered.\n\n> I realize it's nice to be able to tell which table file is which by\n> eyeball, but the price we are paying for that small convenience is\n> just too high. Give that up, and we can have rollbackable DROP and\n> RENAME now (I'll personally commit to making it happen for 7.1).\n> Continue to insist on it, and I don't think we'll *ever* have those\n> features in a really robust form. It's just not possible to do\n> multiple file renames atomically.\n\nIn the last proposal Bruce and I had it all layed out for tabname + oid\nwith no overhead in the normal situation, and little overhead if a rename \ntable crashed or was not rolled back or committed properly\nwhich imho had all advantages combined.\n\nAndreas\n", "msg_date": "Thu, 15 Jun 2000 09:31:11 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big 7.1 open items " } ]
[ { "msg_contents": "\n> In reality, very few people are going to be interested in restoring\n> a table in a way that breaks referential integrity and other \n> normal assumptions about what exists in the database. \n\nThis is not true. In my DBA history it would have saved me manweeks\nof work if an easy and efficient restore of one single table from backup \nwould have been available in Informix and Oracle.\nWe allways had to restore most of the whole system to another machine only\nto get back at some table info that would then be manually re-added\nto the production system. \nA restore of one table to a different/new tablename would have been \nvery convenient, and this is currently possible in PostgreSQL.\n(create new table with same schema, then replace new table data file\nwith file from backup)\n\n> The reality\n> is that most people are going to engage in a little time travel\n> to a past, consistent backup rather than do as you suggest.\n\nNo, this is what is done most of the time, but it is very inconvenient\nto tell people that they loose all work from past days, so it is usually \ndone as I noted above if possible. We once had a situation where all data \nwas deleted from a table, but the problem was only noticed 3 weeks later.\n\n> This is going to be more and more true as Postgres gains more and\n> more acceptance in (no offense intended) the real world.\n> \n> >Right now, we use 'ps' with args to display backend \n> information, and ls\n> >-l to show disk information. We are going to lose that here.\n> \n> Dependence on \"ls -l\" is, IMO, a very weak argument.\n\nIn normal situations where everything works I agree, it is the\nerror situations where it really helps if you see what data is where.\ndebugging, lsof, Bruce already named them.\n\nAndreas\n", "msg_date": "Thu, 15 Jun 2000 10:04:51 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big 7.1 open items" } ]
[ { "msg_contents": "\n> \"Ross J. Reedstrom\" <[email protected]> writes:\n> > Any strong objections to the mixed relname_oid solution?\n> \n> Yes!\n> \n> You cannot make it work reliably unless the relname part is \n> the original\n> relname and does not track ALTER TABLE RENAME.\n\nIt does, or should at least. Only problem case is where db crashes during\nalter or commit/rollback. This could be fixed by first open that fails to\nfind the file\nor vacuum, or some other utility.\n\n> IMHO having \n> an obsolete\n> relname in the filename is worse than not having the relname at all;\n> it's a recipe for confusion, it means you still need admin \n> tools to tell\n> which end is really up, and what's worst is you might think you don't.\n> \n> Furthermore it requires an additional column in pg_class to keep track\n> of the original relname, which is a waste of space and effort.\n\nit does not.\n\n> Finally, one of the reasons I want to go to filenames based \n> only on OID\n> is that that'll make life easier for mdblindwrt. Original \n> relname + OID\n> doesn't help, in fact it makes life harder (more shmem space needed to\n> keep track of the filename for each buffer).\n\nI do not see this. filename is constructed from relname+oid.\nif not found, do directory scan for *_<OID>.dat, if found --> rename.\n\nAndreas\n", "msg_date": "Thu, 15 Jun 2000 10:26:12 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big 7.1 open items " }, { "msg_contents": "> > Finally, one of the reasons I want to go to filenames based \n> > only on OID\n> > is that that'll make life easier for mdblindwrt. Original \n> > relname + OID\n> > doesn't help, in fact it makes life harder (more shmem space needed to\n> > keep track of the filename for each buffer).\n> \n> I do not see this. filename is constructed from relname+oid.\n> if not found, do directory scan for *_<OID>.dat, if found --> rename.\n\nThat is kind if nifty.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Jun 2000 09:49:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Big 7.1 open items" } ]
[ { "msg_contents": "\nOn Wed, 14 Jun 2000, Peter Eisentraut wrote:\n\n> On Tue, 13 Jun 2000, Karel Zak wrote:\n> \n> > Now I have fixed some Makefiles. My idea is add to all contrib modules\n> > Makefile (and allow install contrib matter) and to\n> > top-contrib-directory Makefile.global that define some definition\n> > relevant for contrib only.\n> \n> Try to keep everything as local as possible, don't include\n> ../src/Makefile.global either if at all possible. Make it a\n> contrib/Makefile.global.in so you can substitute Autoconf variables into\n> it.\n\n But I want primary ../src/Makefile.global for compilation, because here\nis all defined. I not sure if is good define it (same) again in contrib. \nIf we will (for example) defined path to LIBPG in two Makefiles we must \nmaintain these Makefiles. IMHO it is bad --- same data is good share not \ndefine it twice.\n\nA solution is put Makefile.global to top directory (over src) and use it in \nall trees (contrib/docs/src). What?\n\n> > A question, how install these files from contrib? (I inspire with\n> > debian PG packages)\n> > \n> > \t*.sql \t\t- $(POSTGRESDIR)/sql\n> \n> @datadir@ (/usr/local/pgsql/share)\n> \n> > \t*.so\t\t- $(POSTGRESDIR)/modules\n> \n> @libdir@ (or perhaps libexecdir, depending on what it does)\n> \n> > \texec-binary\t- $(POSTGRESDIR)/bin\n> \n> @bindir@\n\nAgree. \n\n> \n> > \t*.doc - $(POSTGRESDIR)/doc\n> \n> Is there any documentation in contrib?\n\n It is a little mazy, but yes.\n\n> See <http://www.gnu.org/prep/standards.html#SEC40> for the GNU makefile\n> standards. Using anything else will just conflict with the Makefile\n> cleanup I'm currently doing and thus be inconsistent again at the end. :)\n> \n> > The current state in contrib:\n> > \n> > apache_logging\t- Use it anyone? IMHO good candidate for *delete* or remove\n> > to some web page like \"Tips how use PostgreSQL\".... \n> \n> The latter sounds good but why not keep it?\n\nI can create in contrib dir \"tips\" and somethig like 'apache_logging' can be \nput here.\n\n> \n> > bit\t\t- impossible compile, last change '2000/04/12 17:14:21'\n> > \t\t It is already in main tree. Delete?\n> \n> I think this will be included into the main tree.\n\n I detele it. Who not agree?\n\n> \n> > earthdistance\t- I fix it, it is probably ready now.\n> > \n> > likeplanning\t- haven't Makefile, I fix it\n> \n> This will be merged into the main tree.\n\n I know, but in 7.1 (Tom)? \n\n> > linux\t\t- haven't Makefile, I fix it\n> \n> This definitely doesn't need a make file.\n\n All need Makefile :-) 'make install'\n\n> > mSQL\t\t- Use it anyone?, I haven't idea where install it. It is a\n> > \t\t \"file.c\". Delete?\n> \n> I think this is a wrapper library, hence @libdir@.\n\nOK.\n \n> > \tdatetime\n> \n> I think this is obsoleted by your to_char() and friends.\n\n I look at it, and if you right, I delete it.\n\n\n\t\t\t\t\tKarel\n\n", "msg_date": "Thu, 15 Jun 2000 10:29:15 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the contrib tree clean up" }, { "msg_contents": "Karel Zak <[email protected]> writes:\n>>>> likeplanning\t- haven't Makefile, I fix it\n>> \n>> This will be merged into the main tree.\n\n> I know, but in 7.1 (Tom)? 
\n\nI think we could delete contrib/likeplanning --- there shouldn't be\nany need for it in 7.1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Jun 2000 02:38:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the contrib tree clean up " } ]
[ { "msg_contents": "> It's just not possible to do\n> multiple file renames atomically.\n\nThis is not necessary, since *_<OID> is unique regardless of relname prefix.\n\nAndreas\n", "msg_date": "Thu, 15 Jun 2000 10:41:50 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big 7.1 open items " } ]
[ { "msg_contents": "At 10:04 AM 6/15/00 +0200, Zeugswetter Andreas SB wrote:\n>\n>> In reality, very few people are going to be interested in restoring\n>> a table in a way that breaks referential integrity and other \n>> normal assumptions about what exists in the database. \n>\n>This is not true. In my DBA history it would have saved me manweeks\n>of work if an easy and efficient restore of one single table from backup \n>would have been available in Informix and Oracle.\n>We allways had to restore most of the whole system to another machine only\n>to get back at some table info that would then be manually re-added\n>to the production system. \n\nI'm missing something, I guess. You would do a createdb, do a filesystem\ncopy of pg_log and one file into it, and then read data from the table\n without having to restore the other tables in the database?\n\nI'm just curious - when was the last time you restored a Postgres\ndatabase in this piecemeal manner, and how often do you do it?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 15 Jun 2000 05:40:49 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: Big 7.1 open items" }, { "msg_contents": "Don Baccus wrote:\n> At 10:04 AM 6/15/00 +0200, Zeugswetter Andreas SB wrote:\n> >\n> >This is not true. In my DBA history it would have saved me manweeks\n> >of work if an easy and efficient restore of one single table from backup\n> >would have been available in Informix and Oracle.\n> >We allways had to restore most of the whole system to another machine only\n> >to get back at some table info that would then be manually re-added\n> >to the production system.\n>\n> I'm missing something, I guess. You would do a createdb, do a filesystem\n> copy of pg_log and one file into it, and then read data from the table\n> without having to restore the other tables in the database?\n>\n> I'm just curious - when was the last time you restored a Postgres\n> database in this piecemeal manner, and how often do you do it?\n\n More curios to me is that people seem to use physical file\n based backup at all. Do they shutdown the postmaster during\n backup or do they live with the fact that maybe not every\n backup is a vital one?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Fri, 16 Jun 2000 00:08:16 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: AW: Big 7.1 open items" }, { "msg_contents": "> > I'm just curious - when was the last time you restored a Postgres\n> > database in this piecemeal manner, and how often do you do it?\n> \n> More curios to me is that people seem to use physical file\n> based backup at all. Do they shutdown the postmaster during\n> backup or do they live with the fact that maybe not every\n> backup is a vital one?\n\nI sure hope they shut down the postmaster, or know that nothing is\nhappening during the backup.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Jun 2000 19:11:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Big 7.1 open items" }, { "msg_contents": "On Thu, 15 Jun 2000, Bruce Momjian wrote:\n\n> > > I'm just curious - when was the last time you restored a Postgres\n> > > database in this piecemeal manner, and how often do you do it?\n> > \n> > More curios to me is that people seem to use physical file\n> > based backup at all. Do they shutdown the postmaster during\n> > backup or do they live with the fact that maybe not every\n> > backup is a vital one?\n> \n> I sure hope they shut down the postmaster, or know that nothing is\n> happening during the backup.\n\nI do a backup based on a pg_dump snapshot at the time of the backup\n... \n\n\n", "msg_date": "Fri, 16 Jun 2000 13:58:03 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Big 7.1 open items" } ]
[ { "msg_contents": "CREATE TABLE foo (\n\tname TEXT,\n\ttype CHAR(1),\n\twhen_added TIMESTAMP DEFAULT 'now'\n);\n\nCREATE VIEW mytype AS \n\tSELECT name, when_added FROM foo WHERE type = 'M';\n\nCREATE RULE mytype_insert AS\n\tON INSERT TO mytype DO INSTEAD\n\tINSERT INTO foo (name, type) VALUES (NEW.name, 'M');\n\ndb=# insert into foo (name, type) VALUES ('n1', 'M');\nINSERT 414488 1\ndb=# insert into mytype (name) VALUES ('n2');\nINSERT 414489 1\ndb=# select * from foo;\n name | type | when_added\n------+------+------------------------\n n1 | M | 2000-06-15 09:53:44-04\n n2 | M | 2000-06-15 09:52:27-04\n(2 rows)\n\nInserting directly into foo sets when_added to the current time.\nInserting through the view sets it to what looks like the time of\nview creation.\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n", "msg_date": "Thu, 15 Jun 2000 10:05:10 -0400", "msg_from": "\"Mark Hollomon\" <[email protected]>", "msg_from_op": true, "msg_subject": "Bug with views and defaults" }, { "msg_contents": "\"Mark Hollomon\" <[email protected]> writes:\n> CREATE TABLE foo (\n> \tname TEXT,\n> \ttype CHAR(1),\n> \twhen_added TIMESTAMP DEFAULT 'now'\n> );\n\n> CREATE VIEW mytype AS \n> \tSELECT name, when_added FROM foo WHERE type = 'M';\n\n> CREATE RULE mytype_insert AS\n> \tON INSERT TO mytype DO INSTEAD\n> \tINSERT INTO foo (name, type) VALUES (NEW.name, 'M');\n\n> Inserting directly into foo sets when_added to the current time.\n> Inserting through the view sets it to what looks like the time of\n> view creation.\n\nThis is a known and not readily fixable problem. It's far safer\nto write the default for a timestamp column as now(), rather than\nrelying on a string literal not getting coerced to timestamp form\ntoo soon. See\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/1999-10/msg00036.html\n\nBTW, Bruce: it probably would be wise to have the FAQ's item 4.22\nrecommend now() and nothing else. 'now' has nothing much to recommend\nit and there are still pitfalls like this one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Jun 2000 19:43:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug with views and defaults " }, { "msg_contents": "> This is a known and not readily fixable problem. It's far safer\n> to write the default for a timestamp column as now(), rather than\n> relying on a string literal not getting coerced to timestamp form\n> too soon. See\n> http://www.postgresql.org/mhonarc/pgsql-hackers/1999-10/msg00036.html\n> \n> BTW, Bruce: it probably would be wise to have the FAQ's item 4.22\n> recommend now() and nothing else. 'now' has nothing much to recommend\n> it and there are still pitfalls like this one.\n> \n> \t\t\tregards, tom lane\n> \n\nTODO updated.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 16 Jun 2000 20:06:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug with views and defaults" } ]
[ { "msg_contents": "\n> >> In reality, very few people are going to be interested in restoring\n> >> a table in a way that breaks referential integrity and other \n> >> normal assumptions about what exists in the database. \n> >\n> >This is not true. In my DBA history it would have saved me manweeks\n> >of work if an easy and efficient restore of one single table \n> from backup \n> >would have been available in Informix and Oracle.\n> >We allways had to restore most of the whole system to \n> another machine only\n> >to get back at some table info that would then be manually re-added\n> >to the production system. \n> \n> I'm missing something, I guess. You would do a createdb, do \n> a filesystem\n> copy of pg_log and one file into it, and then read data from the table\n> without having to restore the other tables in the database?\n\nNo if you want to restore to a separate postgres instance you need to \nrestore all pg system tables as well.\nWhat I meant is create a new table in your production server and replace \nthe new 0 byte file with your backup file (rename it accordingly).\n\nAndreas\n", "msg_date": "Thu, 15 Jun 2000 16:27:39 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: Big 7.1 open items" } ]
[ { "msg_contents": "I'm seeing what I think is newly-broken behavior for the ftp site. Seems\nto be pointing to the underlying hub.org area rather than the postgres\none.\n\nIs this a problem on my end, or do others see it too?\n\n - Thomas\n\ngolem> ftp postgresql.org\nConnected to postgresql.org.\n...\nftp> cd pub\n250 CWD command successful.\nftp> ls\n200 PORT command successful.\n150 Opening ASCII mode data connection for /bin/ls.\ntotal 1249\n-rw-r--r-- 1 0 0 655 Oct 20 1998 .message\ndrwxr-xr-x 3 0 0 512 Oct 28 1999 FreeBSD\ndrwxr-xr-x 2 0 0 512 Aug 5 1999 INN-2.3\ndrwxr-xr-x 2 10 0 512 Jun 15 04:04 Majordomo2\ndrwxr-xr-x 3 0 0 512 Feb 19 1999 kde\n-rw-r--r-- 1 209 0 51970 Jul 13 1999 ndtelnet.zip\n-rw-r--r-- 1 0 0 943376 Oct 19 1998 ttermp23.zip\n-rw-r--r-- 1 0 0 250281 Oct 19 1998 ttssh13.zip\ndrwxr-xr-x 2 10 0 512 Oct 15 1999 wu-ftpd\n", "msg_date": "Thu, 15 Jun 2000 14:36:40 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql.org ftp troubles?" }, { "msg_contents": "On Thu, 15 Jun 2000, Thomas Lockhart wrote:\n\n> I'm seeing what I think is newly-broken behavior for the ftp site. Seems\n> to be pointing to the underlying hub.org area rather than the postgres\n> one.\n> \n> Is this a problem on my end, or do others see it too?\n\nOn the docs page there were four links to the ftp area and were set\nas postgresql.org. This morning I had a couple of reports in the\nwebmaster mailbox saying they didn't work. Perhaps something new in\nthe last couple of days?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 15 Jun 2000 10:44:06 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql.org ftp troubles?" }, { "msg_contents": "On Thu, 15 Jun 2000, Vince Vielhaber wrote:\n\n> On Thu, 15 Jun 2000, Thomas Lockhart wrote:\n> \n> > I'm seeing what I think is newly-broken behavior for the ftp site. Seems\n> > to be pointing to the underlying hub.org area rather than the postgres\n> > one.\n> > \n> > Is this a problem on my end, or do others see it too?\n> \n> On the docs page there were four links to the ftp area and were set\n> as postgresql.org. This morning I had a couple of reports in the\n> webmaster mailbox saying they didn't work. Perhaps something new in\n> the last couple of days?\n\nack ... I figured everything referenced it as ftp.postgresql.org, so it\nwould be totally transparent :(\n\nwe had an idle box on our network that had a seperate, idle T1 to the\nInternet that we moved ftp.postgresql.org over to the other day ... I\nshould have mentioned something instead of making teh assumption that\neverything pointed to ftp.postgresql.org ... sorry about that ..\n\n\n", "msg_date": "Fri, 16 Jun 2000 01:01:05 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql.org ftp troubles?" } ]
[ { "msg_contents": "Hi,\n\nalthough this is more of a general question I thought that the \"Hackers\" might now a better answer for this one.\n\nI am considering a recompile of my RPM installed postgres installation (since I am guessing it was not build using Pentium optimization (and possibly even other optimizations)).\n\nMy question is whether anyone has experience with the performance gain of this, or better is it worth the effort?\n\nIf it is worth it, does anyone know what optimization level it is safe to compile PostgreSQL with? (I mean -O2, -O3, -O1000 :) )\n\nThanks\n Mirko\n", "msg_date": "Thu, 15 Jun 2000 11:19:25 -0400", "msg_from": "\"Mirko Geffken\" <[email protected]>", "msg_from_op": true, "msg_subject": "None" }, { "msg_contents": "Mirko Geffken wrote:\n> \n> Hi,\n> \n> although this is more of a general question I thought that the \"Hackers\" might now a better answer for this one.\n \n> I am considering a recompile of my RPM installed postgres installation (since I am guessing it was not build using Pentium optimization (and possibly even other optimizations)).\n\nNo, the postgresql.org RPM distribution, for universality sake, is not\nbuilt with any processor-specific optimization -- however, the Mandrake\nbinaries ARE, and are mostly compatible with a RedHat system. You\ndidn't say _which_ distribution you are using, however.\n\nYou can enable the optimizations in your rpmrc files (/etc/rpmrc), and\ndo a rpm --rebuild of the source rpm to get processor-optimized\nbinaries. You will need a complete development environment to do so,\nincluding the python-devel package and a working C++ installation.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 15 Jun 2000 11:59:13 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " } ]
[ { "msg_contents": "\n I finish clean up in the contrib tree. Bruce has the patch with it. \n\n Changes:\n\n - add 'LIBPGEASYDIR' to src/Makefile.global \n\n - new pg_dumplo\n\n - I write new contrib/Makefile.global, which include standard\n ../src/Makefile.global. It is because in contrib tree is needful\n some definition like 'CFLAGS' ..etc.\n In contrib/Makefile.global are definitions relevant to contrib only.\n\n - all dirs in the contrib contain Makefiles\n\n - all in the contrib is install-able\n\n - I create new dir 'tips' and 'apachelog' is remove to this dir.\n\n - now is _not_ fixed:\n\n os2client\t- non-compile-able (Is it dead?)\n \n odbc \t- unreadable Makefile for me, I don't know what happens \n here (sorry Thomas)\n\n spi/preprocessor - hmm, about previous 'odbc' I a little feel something,\n but here I'm total out...\n\n tools \t- again ????\n\n\n - install paths are defined in contrib/Makefile.global, I expect \n that some definitions will rewrite during Peter's build-system\n overwriting. Now it is not total correct: \n \n### ---------------------------------------------------------\n### DELETE THIS PART if ../src/Makefile.global is standardize\n### (has define all next definitions itself)\n\nDOCDIR=$(POSTDOCDIR)\n\n# not $PGDATA, but anything like '/usr/local/pgsql/share'\nDATADIR=$(LIBDIR)\n\n### ----------------------------------------------------------\n\n# execute-able\nCONTRIB_BINDIR = $(BINDIR)\n# *.so\nCONTRIB_MODDIR = $(LIBDIR)/modules\n# *.doc\nCONTRIB_DOCDIR = $(DOCDIR)/contrib\n# *.sql\nCONTRIB_SQLDIR = $(DATADIR)/contrib\n# *.examples\nCONTRIB_EXAMPLESDIR = $(DOCDIR)/contrib/examples\n\n-------\n\n Generally, 'make' / 'make install' is without errors. \n\n\t \t\t\t\t\tKarel\n\n\n\n\n", "msg_date": "Thu, 15 Jun 2000 17:56:53 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "commit the contrib clean up" }, { "msg_contents": "On Thu, 15 Jun 2000, Karel Zak wrote:\n\n> \n> - now is _not_ fixed:\n> \n> os2client\t- non-compile-able (Is it dead?)\n\nLast time I compiled it was for 6.4. When OS/2 wouldn't recognise my\n9GB drive, I stopped recognising OS/2 and moved completely to FreeBSD.\nSo it may as well be dead, it's no longer supported. Althought it did\nwork in 6.4.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 15 Jun 2000 12:03:06 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commit the contrib clean up" }, { "msg_contents": "\nOn Thu, 15 Jun 2000, Vince Vielhaber wrote:\n\n> On Thu, 15 Jun 2000, Karel Zak wrote:\n> \n> > \n> > - now is _not_ fixed:\n> > \n> > os2client\t- non-compile-able (Is it dead?)\n> \n> Last time I compiled it was for 6.4. When OS/2 wouldn't recognise my\n> 9GB drive, I stopped recognising OS/2 and moved completely to FreeBSD.\n> So it may as well be dead, it's no longer supported. 
Althought it did\n> work in 6.4.\n\n Well, it is archive in CVS; we can delete it from current tree.\n\n\t\t\t\t\t\tKarel \n\n", "msg_date": "Thu, 15 Jun 2000 18:11:37 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: commit the contrib clean up" }, { "msg_contents": "I removed os2client from CVS.\n\n \n> On Thu, 15 Jun 2000, Vince Vielhaber wrote:\n> \n> > On Thu, 15 Jun 2000, Karel Zak wrote:\n> > \n> > > \n> > > - now is _not_ fixed:\n> > > \n> > > os2client\t- non-compile-able (Is it dead?)\n> > \n> > Last time I compiled it was for 6.4. When OS/2 wouldn't recognise my\n> > 9GB drive, I stopped recognising OS/2 and moved completely to FreeBSD.\n> > So it may as well be dead, it's no longer supported. Althought it did\n> > work in 6.4.\n> \n> Well, it is archive in CVS; we can delete it from current tree.\n> \n> \t\t\t\t\t\tKarel \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Jun 2000 15:11:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commit the contrib clean up" }, { "msg_contents": "Applied.\n\n> \n> I finish clean up in the contrib tree. Bruce has the patch with it. \n> \n> Changes:\n> \n> - add 'LIBPGEASYDIR' to src/Makefile.global \n> \n> - new pg_dumplo\n> \n> - I write new contrib/Makefile.global, which include standard\n> ../src/Makefile.global. It is because in contrib tree is needful\n> some definition like 'CFLAGS' ..etc.\n> In contrib/Makefile.global are definitions relevant to contrib only.\n> \n> - all dirs in the contrib contain Makefiles\n> \n> - all in the contrib is install-able\n> \n> - I create new dir 'tips' and 'apachelog' is remove to this dir.\n> \n> - now is _not_ fixed:\n> \n> os2client\t- non-compile-able (Is it dead?)\n> \n> odbc \t- unreadable Makefile for me, I don't know what happens \n> here (sorry Thomas)\n> \n> spi/preprocessor - hmm, about previous 'odbc' I a little feel something,\n> but here I'm total out...\n> \n> tools \t- again ????\n> \n> \n> - install paths are defined in contrib/Makefile.global, I expect \n> that some definitions will rewrite during Peter's build-system\n> overwriting. Now it is not total correct: \n> \n> ### ---------------------------------------------------------\n> ### DELETE THIS PART if ../src/Makefile.global is standardize\n> ### (has define all next definitions itself)\n> \n> DOCDIR=$(POSTDOCDIR)\n> \n> # not $PGDATA, but anything like '/usr/local/pgsql/share'\n> DATADIR=$(LIBDIR)\n> \n> ### ----------------------------------------------------------\n> \n> # execute-able\n> CONTRIB_BINDIR = $(BINDIR)\n> # *.so\n> CONTRIB_MODDIR = $(LIBDIR)/modules\n> # *.doc\n> CONTRIB_DOCDIR = $(DOCDIR)/contrib\n> # *.sql\n> CONTRIB_SQLDIR = $(DATADIR)/contrib\n> # *.examples\n> CONTRIB_EXAMPLESDIR = $(DOCDIR)/contrib/examples\n> \n> -------\n> \n> Generally, 'make' / 'make install' is without errors. \n> \n> \t \t\t\t\t\tKarel\n> \n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Jun 2000 15:15:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commit the contrib clean up" } ]
[ { "msg_contents": "When the libpq shared library is linked on my system it looks like this:\n\nld -Bdynamic -shared -soname libpq.so.2.1 -o libpq.so.2.1 fe-auth.o\nfe-connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o pqexpbuffer.o\ndllist.o pqsignal.o -lcrypt -lc\n\nNow if I build a Kerberos IV enabled version it looks like\n\nld -Bdynamic -shared -soname libpq.so.2.1 -o libpq.so.2.1 fe-auth.o\nfe-connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o pqexpbuffer.o\ndllist.o pqsignal.o -lcrypt -lkrb -ldes -lc\n\nNote that the relevant Kerberos libraries are specified so the user\ndoesn't have to remember linking his libpq programs against Kerberos.\n\nMy question is, shouldn't libpq be linked against *all* the libraries that\nare detected at configure time? Imagine that libpq uses something from,\nsay, -lbsd. We'd never know because psql links against -lbsd so everything\nis resolved but the end user may not know that and fail to link his\nprograms against -lbsd. I guess what I'm saying is, there seems to be a\ndouble standard of -lcrypt, -lc, -lkrb, and -ldes versus all other\nlibraries.\n\nAny library guru care to enlighten me?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 15 Jun 2000 18:14:39 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Shared library interdependencies" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> My question is, shouldn't libpq be linked against *all* the libraries that\n> are detected at configure time?\n\nUm, no, I'd prefer to err on the side of conservatism here. I'd be\nwilling to do it that way if configure were conservative about what\nlibraries it adds to LIBS --- but in fact configure never met a library\nit didn't like.\n\nFor example: I have no idea what configure thinks it's accomplishing by\nlooking for libPW and including -lPW into LIBS on *any* platform where\nthat doesn't instantly crash and burn. Perhaps somewhere there is a\nplatform where that's really necessary, but I dunno where or what for.\nOn HPUX, there is a libPW with some hoary old compatibility functions\nthat I'd just as soon not have linked in. If compiled -pg, the backend\nactually fails to start unless I remove -lPW from the link. So, I'd\nobject pretty darn strongly to any linking tactic that carries any risk\nof causing libPW to be linked into customer applications just because\nthey asked for libpq. Who knows what it'd break in a customer app that\nwe have no way of testing beforehand? Who even knows what the darn\nthing *does* on any given platform?\n\nThere are doubtless comparable risks on other platforms with other\nlibraries. As long as configure's philosophy is \"if libFOO exists\nthen I must want to link with it\" then I don't want to link the\nwhole LIBS list into interface libraries.\n\n> I guess what I'm saying is, there seems to be a double standard of\n> -lcrypt, -lc, -lkrb, and -ldes versus all other libraries.\n\nYup, there is. -lc should be safe enough though ;-). As for the\nothers, we should be looking for ways to get them out of libpq's\ndependencies, not looking for ways to add more wildcards to the stew.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Jun 2000 03:15:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared library interdependencies " } ]
[ { "msg_contents": "Hello all,\n\nAs you know, PostgreSQL handles in a restricted way a transaction, falling in\n*ABORT STATE*, when a statement which yields error is entered. Not only the\ntransaction isn't allowed to continue processing statements, until a finishing\nstatement is entered, but also the whole transaction is ROLLedBACK, regardless\nof which statement finished the transaction (ABORT, ROLLBACK, COMMIT, END).\n\nI'm needing as water the ability of PostgreSQL to continue working normally\nafter an error within a transaction, and not to fall in *ABORT STATE*. Many of\nyou already read my previous messages about this topic, and maybe my insistence\nis becoming upsetting. I apologize.\n\nI'm willing to do it myself. But my lack of knowledge about many required\ntopics, leaves my chances low. So I'm following alternative paths, as to try to\npersuade others of the importance of this issue.\n\nIn the TODO list, we can read:\n\n> EXOTIC FEATURES\n> \n> * Add sql3 recursive unions\n> * Add the concept of dataspaces\n> * Add replication of distributed databases\n> * Allow queries across multiple databases\n> * Allow nested transactions (Vadim)\n\n From this, I understand that:\n\n1) The development group found that the behaviour change proposed above, will be\nachieved, by encapsulating offending statements within inner transactions, so\nthat outer ones remain OK, and can follow to process statements, until COMMIT;\nnot by changing such behaviour directly. I haven't found in the TODO list, other\nrelated items.\n\n2) The priority assigned to this issue is sort of low, since it's below EXOTIC\nFEATURES, and is in the last in the list.\n\nMaking transactions nestable, will surely become a great feature, compared to\ncompetitor databases. And Vadim seems to be the only one able to cope with it,\nand he has to do lots of things before this one.\n\nI wonder if by focusing narrowerly at the desired behaviour change (not to fall\nin ABORT STATE), we could achieve sooner one of the two goals.\n\nI'm using JDBC to communicate with the backend. If nesting transactions were the\nsolution, I wonder if by enclosing <*automatically within the driver*> each\nstatement inside a nested transaction, would appear to the frontend app, as if\nPostgreSQL handled offending statements within transactions in a smooth way. Of\ncourse this wouldn't be a general solution, so, maybe creating a \"SET\nENCLOSE_EVERY_POSSIBLE_OFFENDING_STATEMENT [ON|OFF]\" backend statement, we could\nachieve the desired behaviour change.\n\nAs you can see, I have some ideas, I have needs, I'm poorly trying to solve the\nproblem, and I want all of you to discuss about this. I have found that my\nworrying about this, is shared by some PostgreSQL users, i.e., when trying to\ninsert a record, in case it didn't exist previously, and update it, in case it\nexists, problem which has been discussed a lot lately, in the GENERAL list.\n\nMany of you have already helped me in a way or another in this issue, including\nbut not limited to Peter Eisentrout, and Ed Loehr. Special thanks to you.\n\nWhat do you think?\n\nBest Regards,\nHaroldo Senger.\n", "msg_date": "Thu, 15 Jun 2000 22:26:28 -0300", "msg_from": "Haroldo Stenger <[email protected]>", "msg_from_op": true, "msg_subject": "Allow nested transactions" } ]
[ { "msg_contents": "Something changed in 7.02 from 6.3. I used to do this:\n\nCREATE FUNCTION make_date() \n RETURNS opaque\n AS '/usr/pgsql/modules/make_date.so' \n LANGUAGE 'c';\nCREATE TRIGGER make_edate\n BEFORE INSERT OR UPDATE ON bgroup\n FOR EACH ROW\n EXECUTE PROCEDURE make_date(edate, aniv, emon, eyear); \n\nThis no longer works. I looked and the docs and it seems that this\nshould work instead.\n\nCREATE FUNCTION make_date(date, int, int, int) \n RETURNS opaque\n AS '/usr/pgsql/modules/make_date.so' \n LANGUAGE 'c';\nCREATE TRIGGER make_edate\n BEFORE INSERT OR UPDATE ON bgroup\n FOR EACH ROW\n EXECUTE PROCEDURE make_date('edate', 'aniv', 'emon', 'eyear'); \n\nBut this gives me;\n\nERROR: CreateTrigger: function make_date() does not exist\n\nIs this broken now or am I not understanding the documentation? Why\nis it looking for a make_date that takes no args?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Fri, 16 Jun 2000 08:43:47 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Changes to functions and triggers" }, { "msg_contents": "[email protected] (D'Arcy J.M. Cain) writes:\n> Something changed in 7.02 from 6.3. I used to do this:\n> CREATE FUNCTION make_date() \n> RETURNS opaque\n> AS '/usr/pgsql/modules/make_date.so' \n> LANGUAGE 'c';\n> CREATE TRIGGER make_edate\n> BEFORE INSERT OR UPDATE ON bgroup\n> FOR EACH ROW\n> EXECUTE PROCEDURE make_date(edate, aniv, emon, eyear); \n\n> This no longer works.\n\nDetails?\n\n> I looked and the docs and it seems that this should work instead.\n\n> CREATE FUNCTION make_date(date, int, int, int) \n> RETURNS opaque\n> AS '/usr/pgsql/modules/make_date.so' \n> LANGUAGE 'c';\n> CREATE TRIGGER make_edate\n> BEFORE INSERT OR UPDATE ON bgroup\n> FOR EACH ROW\n> EXECUTE PROCEDURE make_date('edate', 'aniv', 'emon', 'eyear'); \n\nNo. Trigger procedures never take explicit arguments --- whatever\nyou may have stated in the CREATE TRIGGER command gets passed in\nin the trigger data structure. (A pretty bizarre and ugly choice\nif you ask me, but not worth breaking existing code to change...)\n\nThere's surely been a lot of changes in 7.0 that could have broken\nuser-written triggers, but you'll need to look to your C code to\nfind the problem. What you've shown us looks fine.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Jun 2000 10:44:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changes to functions and triggers " }, { "msg_contents": "> This no longer works. I looked and the docs and it seems that this\n> should work instead.\n> \n> CREATE FUNCTION make_date(date, int, int, int)\n> RETURNS opaque\n> AS '/usr/pgsql/modules/make_date.so'\n> LANGUAGE 'c';\n> CREATE TRIGGER make_edate\n> BEFORE INSERT OR UPDATE ON bgroup\n> FOR EACH ROW\n> EXECUTE PROCEDURE make_date('edate', 'aniv', 'emon', 'eyear');\n\nWhat if you leave out the quotes on the last line above, so the column\nnames are actually visible? 
It looks like the example in the docs might\nbe a poor choice since the function is intended to manipulate columns,\nso the text representation of the column name is being passed in.\n\nDon't know if that is the source of your trouble though...\n\n - Thomas\n", "msg_date": "Fri, 16 Jun 2000 14:50:38 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changes to functions and triggers" }, { "msg_contents": "Thus spake Thomas Lockhart\n> > This no longer works. I looked and the docs and it seems that this\n> > should work instead.\n> > \n> > CREATE FUNCTION make_date(date, int, int, int)\n> > RETURNS opaque\n> > AS '/usr/pgsql/modules/make_date.so'\n> > LANGUAGE 'c';\n> > CREATE TRIGGER make_edate\n> > BEFORE INSERT OR UPDATE ON bgroup\n> > FOR EACH ROW\n> > EXECUTE PROCEDURE make_date('edate', 'aniv', 'emon', 'eyear');\n> \n> What if you leave out the quotes on the last line above, so the column\n> names are actually visible? It looks like the example in the docs might\n> be a poor choice since the function is intended to manipulate columns,\n> so the text representation of the column name is being passed in.\n> \n> Don't know if that is the source of your trouble though...\n\nNope. I added that after reading the web page but without them it still\nhas the problem.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Fri, 16 Jun 2000 18:28:20 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Re: Changes to functions and triggers" }, { "msg_contents": "Thus spake Tom Lane\n> [email protected] (D'Arcy J.M. Cain) writes:\n> > Something changed in 7.02 from 6.3. I used to do this:\n> > CREATE FUNCTION make_date() \n> > RETURNS opaque\n> > AS '/usr/pgsql/modules/make_date.so' \n> > LANGUAGE 'c';\n> > CREATE TRIGGER make_edate\n> > BEFORE INSERT OR UPDATE ON bgroup\n> > FOR EACH ROW\n> > EXECUTE PROCEDURE make_date(edate, aniv, emon, eyear); \n> \n> > This no longer works.\n> \n> Details?\n\nSame error as I gave for the new version I wrote.\n\n> > I looked and the docs and it seems that this should work instead.\n> \n> > CREATE FUNCTION make_date(date, int, int, int) \n> > RETURNS opaque\n> > AS '/usr/pgsql/modules/make_date.so' \n> > LANGUAGE 'c';\n> > CREATE TRIGGER make_edate\n> > BEFORE INSERT OR UPDATE ON bgroup\n> > FOR EACH ROW\n> > EXECUTE PROCEDURE make_date('edate', 'aniv', 'emon', 'eyear'); \n> \n> No. Trigger procedures never take explicit arguments --- whatever\n> you may have stated in the CREATE TRIGGER command gets passed in\n> in the trigger data structure. (A pretty bizarre and ugly choice\n> if you ask me, but not worth breaking existing code to change...)\n\nHmm. Are you saying that the above is wrong? I took it right from\nthe web page documentation.\n\n> There's surely been a lot of changes in 7.0 that could have broken\n> user-written triggers, but you'll need to look to your C code to\n> find the problem. What you've shown us looks fine.\n\nReally? That code always worked before. Besides, it doesn't look to me\nlike my C code ever gets called. The failure seems to be at the SQL\nlevel saying that there is no function to call.\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Sat, 17 Jun 2000 06:50:57 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Re: Changes to functions and triggers" }, { "msg_contents": "[email protected] (D'Arcy J.M. Cain) writes:\n>>>> I looked and the docs and it seems that this should work instead.\n>> \n>>>> CREATE FUNCTION make_date(date, int, int, int) \n>>>> RETURNS opaque\n>>>> AS '/usr/pgsql/modules/make_date.so' \n>>>> LANGUAGE 'c';\n>>>> CREATE TRIGGER make_edate\n>>>> BEFORE INSERT OR UPDATE ON bgroup\n>>>> FOR EACH ROW\n>>>> EXECUTE PROCEDURE make_date('edate', 'aniv', 'emon', 'eyear'); \n>> \n>> No. Trigger procedures never take explicit arguments --- whatever\n>> you may have stated in the CREATE TRIGGER command gets passed in\n>> in the trigger data structure. (A pretty bizarre and ugly choice\n>> if you ask me, but not worth breaking existing code to change...)\n\n> Hmm. Are you saying that the above is wrong?\n\nYes.\n\n> I took it right from the web page documentation.\n\nWhat web page? http://www.postgresql.org/docs/postgres/triggers.htm\nstill says what it always has (complete with bad grammar ;-)):\n\n\tThe trigger function must be created before the trigger is\n\tcreated as a function taking no arguments and returns opaque.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Jun 2000 13:03:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changes to functions and triggers " }, { "msg_contents": "Thus spake Tom Lane\n> [email protected] (D'Arcy J.M. Cain) writes:\n> >>>> I looked and the docs and it seems that this should work instead.\n> >> \n> >>>> CREATE FUNCTION make_date(date, int, int, int) \n> >>>> RETURNS opaque\n> >>>> AS '/usr/pgsql/modules/make_date.so' \n> >>>> LANGUAGE 'c';\n> >>>> CREATE TRIGGER make_edate\n> >>>> BEFORE INSERT OR UPDATE ON bgroup\n> >>>> FOR EACH ROW\n> >>>> EXECUTE PROCEDURE make_date('edate', 'aniv', 'emon', 'eyear'); \n> >> \n> >> No. Trigger procedures never take explicit arguments --- whatever\n> >> you may have stated in the CREATE TRIGGER command gets passed in\n> >> in the trigger data structure. (A pretty bizarre and ugly choice\n> >> if you ask me, but not worth breaking existing code to change...)\n> \n> > Hmm. Are you saying that the above is wrong?\n> \n> Yes.\n> \n> > I took it right from the web page documentation.\n> \n> What web page? http://www.postgresql.org/docs/postgres/triggers.htm\n> still says what it always has (complete with bad grammar ;-)):\n> \n> \tThe trigger function must be created before the trigger is\n> \tcreated as a function taking no arguments and returns opaque.\n\nOK, so I went back to basically what I had before.\n\nCREATE FUNCTION make_date() \n RETURNS opaque\n AS '/usr/pgsql/modules/make_date.so' \n LANGUAGE 'c';\n\nCREATE TRIGGER make_dates\n BEFORE INSERT OR UPDATE ON bgroup\n FOR EACH ROW\n EXECUTE PROCEDURE make_date (edate, aniv, emon, eyear);\n\nINSERT INTO bgroup (bname, client_id, actypid, aniv, emon, eyear, pmon, pyear)\n\tVALUES ('guest', 1000, 1, 1, 1, 2000, 1, 2000);\n\nAnd here is what I get.\n\nERROR: fmgr_info: function 24224: cache lookup failed\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 20 Jun 2000 09:00:14 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Re: Changes to functions and triggers" }, { "msg_contents": "[email protected] (D'Arcy J.M. Cain) writes:\n> OK, so I went back to basically what I had before.\n\n> CREATE FUNCTION make_date() \n> RETURNS opaque\n> AS '/usr/pgsql/modules/make_date.so' \n> LANGUAGE 'c';\n\n> CREATE TRIGGER make_dates\n> BEFORE INSERT OR UPDATE ON bgroup\n> FOR EACH ROW\n> EXECUTE PROCEDURE make_date (edate, aniv, emon, eyear);\n\n> INSERT INTO bgroup (bname, client_id, actypid, aniv, emon, eyear, pmon, pyear)\n> \tVALUES ('guest', 1000, 1, 1, 1, 2000, 1, 2000);\n\nLooks OK to me.\n\n> And here is what I get.\n> ERROR: fmgr_info: function 24224: cache lookup failed\n\nYou sure you didn't fall into the same old trap of you-must-create-\nthe-trigger-after-the-function? If you drop and recreate the function,\nit has a new OID, so you have to drop and recreate the trigger because\nit links to the function by OID.\n\n(Someday we ought to make that work better.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Jun 2000 09:55:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changes to functions and triggers " }, { "msg_contents": "Thus spake Tom Lane\n> [email protected] (D'Arcy J.M. Cain) writes:\n> > OK, so I went back to basically what I had before.\n> \n> > CREATE FUNCTION make_date() \n> > RETURNS opaque\n> > AS '/usr/pgsql/modules/make_date.so' \n> > LANGUAGE 'c';\n> \n> > CREATE TRIGGER make_dates\n> > BEFORE INSERT OR UPDATE ON bgroup\n> > FOR EACH ROW\n> > EXECUTE PROCEDURE make_date (edate, aniv, emon, eyear);\n> \n> > INSERT INTO bgroup (bname, client_id, actypid, aniv, emon, eyear, pmon, pyear)\n> > \tVALUES ('guest', 1000, 1, 1, 1, 2000, 1, 2000);\n> \n> Looks OK to me.\n> \n> > And here is what I get.\n> > ERROR: fmgr_info: function 24224: cache lookup failed\n> \n> You sure you didn't fall into the same old trap of you-must-create-\n> the-trigger-after-the-function? If you drop and recreate the function,\n> it has a new OID, so you have to drop and recreate the trigger because\n> it links to the function by OID.\n\nPositive. I dropped both then did the above in the order shown.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 20 Jun 2000 15:33:33 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Re: Changes to functions and triggers" }, { "msg_contents": "[email protected] (D'Arcy J.M. Cain) writes:\n>> You sure you didn't fall into the same old trap of you-must-create-\n>> the-trigger-after-the-function? If you drop and recreate the function,\n>> it has a new OID, so you have to drop and recreate the trigger because\n>> it links to the function by OID.\n\n> Positive. I dropped both then did the above in the order shown.\n\nHm. Is the OID shown in the error message the correct OID for your\ntrigger function, or not? 
(Try \"select oid,* from pg_proc where\nproname = 'make_date'\", maybe also \"select * from pg_proc where\noid = 24224\")\n\nThere are trigger examples in the regress tests that are hardly\ndifferent from your example, so the trigger feature is surely not\nbroken completely. Maybe a platform-specific problem? Which\nversion were you running again, exactly, and what configuration\noptions? BTW, do the regress tests pass for you?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Jun 2000 15:42:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changes to functions and triggers " }, { "msg_contents": "Thus spake D'Arcy J.M. Cain\n> OK, so I went back to basically what I had before.\n> \n> CREATE FUNCTION make_date() \n> RETURNS opaque\n> AS '/usr/pgsql/modules/make_date.so' \n> LANGUAGE 'c';\n> \n> CREATE TRIGGER make_dates\n> BEFORE INSERT OR UPDATE ON bgroup\n> FOR EACH ROW\n> EXECUTE PROCEDURE make_date (edate, aniv, emon, eyear);\n> \n> INSERT INTO bgroup (bname, client_id, actypid, aniv, emon, eyear, pmon, pyear)\n> \tVALUES ('guest', 1000, 1, 1, 1, 2000, 1, 2000);\n> \n> And here is what I get.\n> \n> ERROR: fmgr_info: function 24224: cache lookup failed\n\nI must have done this wrong. The actual error I get when I start from\nscratch is this:\n\nERROR: make_date (bgroup): 0 args\n\nThat message comes from my program at the start of the function.\n\n if (!CurrentTriggerData)\n elog(ERROR, \"make_date: triggers are not initialized\");\n if (TRIGGER_FIRED_FOR_STATEMENT(CurrentTriggerData->tg_event))\n elog(ERROR, \"make_date: can't process STATEMENT events\");\n if (TRIGGER_FIRED_AFTER(CurrentTriggerData->tg_event))\n elog(ERROR, \"make_date: must be fired before event\");\n\n if (TRIGGER_FIRED_BY_INSERT(CurrentTriggerData->tg_event))\n rettuple = CurrentTriggerData->tg_trigtuple;\n else if (TRIGGER_FIRED_BY_UPDATE(CurrentTriggerData->tg_event))\n rettuple = CurrentTriggerData->tg_newtuple;\n else\n elog(ERROR, \"make_date: can't process DELETE events\");\n\n rel = CurrentTriggerData->tg_relation;\n relname = SPI_getrelname(rel);\n\n trigger = CurrentTriggerData->tg_trigger;\n\n nargs = trigger->tgnargs;\n if (nargs != 4)\n elog(ERROR, \"make_date (%s): %d args\", relname, nargs);\n\nAll I have before that is the declarations.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 21 Jun 2000 14:13:02 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Re: Changes to functions and triggers" }, { "msg_contents": "[email protected] (D'Arcy J.M. Cain) writes:\n> I must have done this wrong. The actual error I get when I start from\n> scratch is this:\n> ERROR: make_date (bgroup): 0 args\n\n> That message comes from my program at the start of the function.\n\n> [ blah blah blah ]\n\n> trigger = CurrentTriggerData->tg_trigger;\n\n> nargs = trigger->tgnargs;\n> if (nargs != 4)\n> elog(ERROR, \"make_date (%s): %d args\", relname, nargs);\n\nHmm. Not sure if this is the root of the problem or not, but it's\n*real* dangerous to assume that the global CurrentTriggerData stays\nset (the same way!) throughout your function. You should copy\nCurrentTriggerData into a local TriggerData * variable immediately\nupon being called and then just use that variable. 
I wonder if you\ncould be losing because some other trigger is getting invoked before\nyour routine returns...\n\nCurrentTriggerData doesn't even exist anymore in current sources,\nso doing it that way will also ease the pain of updating to 7.1 ;-)\n\nAnother thing to keep in mind is that the data structures pointed to\nby CurrentTriggerData had better be treated as read-only.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Jun 2000 18:07:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changes to functions and triggers " } ]
[ { "msg_contents": "Hellow\n\nIt is 2 days from now that I did not recive any mail from any PostgreSQL lists.\nIs there problems with the PostgreSQL lists or I have to sign it again??\n\nPlease, send responses direct to me: [email protected]\n\nThank you\n\nRoberto\n\n", "msg_date": "Fri, 16 Jun 2000 10:14:45 -0300", "msg_from": "Roberto =?iso-8859-1?Q?Jo=E3o?= Lopes Garcia <[email protected]>", "msg_from_op": true, "msg_subject": "Is this list up??" } ]
[ { "msg_contents": "Greetings.\n\nI'm running into a recurring error on my system that has been running fine\nfor quite a while. I dumped, dropped, created, rebuild and reloaded the\ndatabase (all 2+ gigs) yesterday in the hopes of correcting whatever might\nbe the problem.\n\n[PostgreSQL 6.5.3 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n\nThe following lines show up in my /var/log/messages...\n\nJun 16 11:39:44 mymailman logger: ERROR: cannot find attribute 1 of \n\trelation pg_temp.13465.1\nJun 16 11:39:44 mymailman logger: pq_flush: send() failed: Bad file \n\tdescriptor\nJun 16 11:39:44 mymailman logger: pq_flush: send() failed: Bad file\n\tdescriptor\nJun 16 11:39:44 mymailman logger: FATAL 1: Socket command type\nJun 16 11:39:44 mymailman logger: pq_flush: send() failed: Bad file\n\tdescriptor\nJun 16 11:39:44 mymailman logger: ERROR: unknown frontend message was\n\treceived\nJun 16 11:39:44 mymailman logger: pq_flush: send() failed: Bad file\n\tdescriptor\nJun 16 11:39:44 mymailman logger: pq_flush: send() failed: Bad file\n\tdescriptor\nJun 16 11:39:44 mymailman logger: NOTICE: AbortTransaction and not in\n\tin-progress state\nJun 16 11:39:44 mymailman logger: pq_flush: send() failed: Bad file\n\tdescriptor\nJun 16 11:39:44 mymailman logger: pq_flush: send() failed: Bad file\n\tdescriptor\nJun 16 11:39:44 mymailman logger: pq_recvbuf: recv() failed: Bad file\n\tdescriptor\nJun 16 11:39:44 mymailman logger: FATAL 1: Socket command type ? unknown\nJun 16 11:39:44 mymailman logger: pq_flush: send() failed: Bad file\n\tdescriptor\nJun 16 11:39:44 mymailman logger: ERROR: unknown frontend message was\n\treceived\nJun 16 11:39:44 mymailman logger: pq_flush: send() failed: Bad file\n\tdescriptor\nJun 16 11:39:44 mymailman logger: NOTICE: AbortTransaction and not in\n\tin-progress state\n\nEtc etc. It goes off into an infinite recursion and then a script I have\nrunning detects that and shuts down and restarts postgres. Then the\nscripts that run restart themselves and everything's fine for a few more\nminutes and then we're back where we started from.\n\nI was waiting to upgrade to 7.0.2 for awhile to let it shake out before\nputting it on a production system.\n\nThanks in advance for the consideration...\n\n- K\n\nKristofer Munn * KMI * 732-254-9305 * AIM KrMunn * http://www.munn.com/\n\n", "msg_date": "Fri, 16 Jun 2000 12:08:21 -0400 (EDT)", "msg_from": "Kristofer Munn <[email protected]>", "msg_from_op": true, "msg_subject": "ERROR: cannot find attribute 1 of relation pg_temp.13465.1" } ]
[ { "msg_contents": "The ODBC driver has trouble with 7.0.x for some apps. 7.0.x has some\nadditional \"error checking\" which rejects some commands appearing inside\nof transactions. Can we consider relaxing this, particularly since we\nare considering making some of these \"rejecting conditions\"\ntransaction-friendly in a future release? Dumping work into ODBC and\nother drivers to work around this is a diversion from other projects,\nand I'm not sure we'll have much to show for it in the end.\n\nbtw, this is a continuation of the discussion regarding protecting users\nfrom themselves vs handing them an ever-sharper tool. Usually I'll vote\nfor working on sharpening the tool rather than dulling it to protect a\nnaive user. I'll avoid using the analogy to the evolution of barbecue\nlighter fluid, which in recent years is no longer very risky, but also\nhas trouble lighting barbecues :/\n\n - Thomas\n", "msg_date": "Fri, 16 Jun 2000 16:26:24 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "create user and transactions" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> The ODBC driver has trouble with 7.0.x for some apps. 7.0.x has some\n> additional \"error checking\" which rejects some commands appearing inside\n> of transactions. Can we consider relaxing this, particularly since we\n> are considering making some of these \"rejecting conditions\"\n> transaction-friendly in a future release?\n\nI thought all along that having CREATE USER et al refuse to run\ninside a transaction was a bad idea. Peter?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Jun 2000 12:51:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: create user and transactions " }, { "msg_contents": "Thomas Lockhart writes:\n\n> The ODBC driver has trouble with 7.0.x for some apps. 7.0.x has some\n> additional \"error checking\" which rejects some commands appearing inside\n> of transactions. Can we consider relaxing this, particularly since we\n> are considering making some of these \"rejecting conditions\"\n> transaction-friendly in a future release?\n\nI can supply a patch for the create/drop user case since there's not much\nto be lost there. But what about databases? AFAIK there are no plans on\nmaking their creation and removal transaction friendly at all, so we'd be\nin \"NOTICE: don't do that\" mode for a very long time.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 17 Jun 2000 15:02:11 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: create user and transactions" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I can supply a patch for the create/drop user case since there's not much\n> to be lost there. But what about databases? AFAIK there are no plans on\n> making their creation and removal transaction friendly at all, so we'd be\n> in \"NOTICE: don't do that\" mode for a very long time.\n\nAgreed, we have no plans to support rollbackable CREATE or DROP\nDATABASE, so I don't have an objection to erroring out on them.\n\nBut the user & group manipulation commands need to be less picky...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Jun 2000 12:54:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: create user and transactions " } ]
[ { "msg_contents": "I have 2 tables with indices as follows:\n\n\tTable \"activity\" (~4000 rows)\n\t id serial\n\t start_time timestamp not null\n\t stop_time timestamp not null\n\t ...\n\n\tCREATE INDEX activity_start_time ON activity (start_time)\n\tCREATE INDEX activity_stop_time ON activity (stop_time)\n\n\tTable \"activity_hr_need\" (~2000 rows)\n\t id serial\n\t activity_id integer not null\n\t hr_type_id integer not null\n\t hr_count integer not null\n\t ...\n\n\tCREATE UNIQUE INDEX activity_hr_need_pkey \n\t\tON activity_hr_need (activity_id, hr_type_id)\n\tCREATE INDEX activity_hr_need_hrtid \n\t\tON activity_hr_need (hr_type_id)\n\tCREATE INDEX activity_hr_need_aid \n\t\tON activity_hr_need (activity_id int4_ops)\n\nQUESTION: Why doesn't the planner, just after 'vacuum analyze', use the\nprovided indices for this query? How can I tweak it to use the indices?\n\nsdb=# EXPLAIN SELECT ahrn.hr_type_id AS \"Resource Type\", \nsdb-# SUM(ahrn.hr_count) AS \"Planned Consulting Days\"\nsdb-# FROM activity a, activity_hr_need ahrn\nsdb-# WHERE a.start_time::date >= '1-Jun-2000'::date\nsdb-# AND a.stop_time::date <= '1-Jul-2000'::date\nsdb-# AND ahrn.activity_id = a.id\nsdb-# GROUP BY \"Resource Type\";\nNOTICE: QUERY PLAN:\n\nAggregate (cost=243.74..244.58 rows=17 width=16)\n -> Group (cost=243.74..244.16 rows=169 width=16)\n -> Sort (cost=243.74..243.74 rows=169 width=16)\n -> Hash Join (cost=142.65..237.50 rows=169 width=16)\n -> Seq Scan on activity_hr_need ahrn \n(cost=0.00..53.58 rows=2358 width=12)\n -> Hash (cost=141.60..141.60 rows=420 width=4)\n -> Seq Scan on activity a (cost=0.00..141.60\nrows=420 width=4)\n\n\nRegards,\nEd Loehr\n", "msg_date": "Fri, 16 Jun 2000 12:38:01 -0500", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": true, "msg_subject": "planner question re index vs seqscan" }, { "msg_contents": "Ed Loehr wrote:\n\n> QUESTION: Why doesn't the planner, just after 'vacuum analyze', use the\n> provided indices for this query? How can I tweak it to use the indices?\n> \n> sdb=# EXPLAIN SELECT ahrn.hr_type_id AS \"Resource Type\",\n> sdb-# SUM(ahrn.hr_count) AS \"Planned Consulting Days\"\n> sdb-# FROM activity a, activity_hr_need ahrn\n> sdb-# WHERE a.start_time::date >= '1-Jun-2000'::date\n> sdb-# AND a.stop_time::date <= '1-Jul-2000'::date\n> sdb-# AND ahrn.activity_id = a.id\n> sdb-# GROUP BY \"Resource Type\";\n> NOTICE: QUERY PLAN:\n\ndump the typecasting in the query and try again. not sure if it'll\nwork, but it's worth a try. typecasting has an annoying effect of\ndisabling index scans in some cases even when you'd swear logically that\nthey should be used. if that doesn't help, it's possible that it just\nshouldn't be using the indexes based on cost estimates. try shutting\noff the sequential scan with \"set enable_seqscan=off\" before the query\nto check if that's the case. \n\n-- \n\nJeff Hoffmann\nPropertyKey.com\n", "msg_date": "Fri, 16 Jun 2000 13:23:40 -0500", "msg_from": "Jeff Hoffmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner question re index vs seqscan" }, { "msg_contents": "Ed Loehr <[email protected]> writes:\n> QUESTION: Why doesn't the planner, just after 'vacuum analyze', use the\n> provided indices for this query? 
How can I tweak it to use the indices?\n\n> sdb=# EXPLAIN SELECT ahrn.hr_type_id AS \"Resource Type\", \n> sdb-# SUM(ahrn.hr_count) AS \"Planned Consulting Days\"\n> sdb-# FROM activity a, activity_hr_need ahrn\n> sdb-# WHERE a.start_time::date >= '1-Jun-2000'::date\n> sdb-# AND a.stop_time::date <= '1-Jul-2000'::date\n> sdb-# AND ahrn.activity_id = a.id\n> sdb-# GROUP BY \"Resource Type\";\n\nAt least part of the problem is that you have two separate one-sided\ninequalities, neither one of which is very selective by itself ---\nand of course the planner has no idea that there might be any semantic\nconnection between \"start_time\" and \"stop_time\". You could help it out\nby providing something it can recognize as a range restriction on one\nindex or the other. For example:\n\n\tWHERE a.start_time::date >= '1-Jun-2000'::date\n\t AND a.start_time::date <= '1-Jul-2000'::date\n\t AND a.stop_time::date <= '1-Jul-2000'::date\n\t AND ahrn.activity_id = a.id\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Jun 2000 14:30:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner question re index vs seqscan " }, { "msg_contents": "Jeff Hoffmann <[email protected]> writes:\n>> QUESTION: Why doesn't the planner, just after 'vacuum analyze', use the\n>> provided indices for this query? How can I tweak it to use the indices?\n>> \n>> sdb=# EXPLAIN SELECT ahrn.hr_type_id AS \"Resource Type\",\n>> sdb-# SUM(ahrn.hr_count) AS \"Planned Consulting Days\"\n>> sdb-# FROM activity a, activity_hr_need ahrn\n>> sdb-# WHERE a.start_time::date >= '1-Jun-2000'::date\n>> sdb-# AND a.stop_time::date <= '1-Jul-2000'::date\n>> sdb-# AND ahrn.activity_id = a.id\n>> sdb-# GROUP BY \"Resource Type\";\n\n> dump the typecasting in the query and try again. not sure if it'll\n> work, but it's worth a try. typecasting has an annoying effect of\n> disabling index scans in some cases even when you'd swear logically that\n> they should be used.\n\nOh, that's a good point --- if the start_time and stop_time columns are\nnot of type date then the above is guaranteed not to be indexscanable,\nbecause what you've really written is\n\n\tWHERE date(a.start_time) >= '1-Jun-2000'::date\n\t AND date(a.stop_time) <= '1-Jul-2000'::date\n\nIt might be able to use a functional index on date(start_time) or\ndate(stop_time), but not a straight index on the timestamp columns.\n\nA good rule of thumb is not to use casts unless you have no choice...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Jun 2000 14:42:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner question re index vs seqscan " }, { "msg_contents": "Tom Lane wrote:\n> \n> Jeff Hoffmann <[email protected]> writes:\n> >> QUESTION: Why doesn't the planner, just after 'vacuum analyze', use the\n> >> provided indices for this query? How can I tweak it to use the indices?\n> \n> > dump the typecasting in the query and try again. not sure if it'll\n> > work, but it's worth a try. typecasting has an annoying effect of\n> > disabling index scans in some cases even when you'd swear logically that\n> > they should be used.\n\nI dropped the typecasting, but that had no visible effect. Adding the\nadditional predicate to the where clause as Tom suggested had the desired\neffect of replacing one seqscan with an index scan. 
But I'm still\nwondering why it is still doing a seq scan on the \"ahrn.activity_id =\na.id\" part when both of those integer columns are indexed??\n\nEXPLAIN SELECT ahrn.hr_type_id AS \"Resource Type\", \n SUM(ahrn.hr_count) AS \"Planned Consulting Days\"\nFROM activity a, activity_hr_need ahrn\nWHERE a.start_time >= '1-Jun-2000'\n AND a.stop_time <= '1-Jul-2000'\n AND a.start_time <= '1-Jul-2000'\n AND ahrn.activity_id = a.id\nGROUP BY \"Resource Type\";\nQUERY PLAN:\n\nAggregate (cost=137.12..137.16 rows=1 width=16)\n -> Group (cost=137.12..137.14 rows=7 width=16)\n -> Sort (cost=137.12..137.12 rows=7 width=16)\n -> Hash Join (cost=47.86..137.04 rows=7 width=16)\n -> Seq Scan on activity_hr_need ahrn \n(cost=0.00..53.58 rows=2358 width=12)\n -> Hash (cost=47.82..47.82 rows=16 width=4)\n -> Index Scan using activity_start_time on\nactivity a (cost=0.00..47.82 rows=16 width=4)\n\nRegards,\nEd Loehr\n", "msg_date": "Fri, 16 Jun 2000 14:25:02 -0500", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: planner question re index vs seqscan" }, { "msg_contents": "Ed Loehr <[email protected]> writes:\n> But I'm still wondering why it is still doing a seq scan on the\n> \"ahrn.activity_id = a.id\" part when both of those integer columns are\n> indexed??\n\nPresumably because it thinks the hash join is cheaper than a nestloop\nor merge join would be ... although that seems kinda surprising. What\nplans do you get if you try various combinations of\n\tset enable_hashjoin = off;\n\tset enable_mergejoin = off;\n\tset enable_nestloop = off;\nHow do the cost estimates compare against the actual runtimes for\ndoing the query each way?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Jun 2000 18:48:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner question re index vs seqscan " } ]
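For readers who want to reproduce Tom's suggestion, a hedged sketch follows (table and column names are Ed's from earlier in the thread; only one of the three settings is toggled here, and the same pattern applies to the other two):

    SET enable_hashjoin = off;
    EXPLAIN SELECT ahrn.hr_type_id, SUM(ahrn.hr_count)
    FROM activity a, activity_hr_need ahrn
    WHERE a.start_time >= '1-Jun-2000'
      AND a.start_time <= '1-Jul-2000'
      AND a.stop_time <= '1-Jul-2000'
      AND ahrn.activity_id = a.id
    GROUP BY ahrn.hr_type_id;
    SET enable_hashjoin = on;   -- restore the setting afterwards

Comparing the estimated costs and the actual runtimes of the plans produced under each setting is what the last message asks for.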
[ { "msg_contents": "After further thought I think there's a lot of merit in Hiroshi's\nopinion that physical file names should not be tied to relation OID.\nIf we use a separately generated value for the file name, we can\nsolve a lot of problems pretty nicely by means of \"table versioning\".\n\nFor example: VACUUM can't compact indexes at the moment, and what it\ndoes do (scan the index and delete unused entries) is really slow.\nThe right thing to do is for it to generate an all-new index file,\nbut how do we do that without creating a risk of leaving the index\ncorrupted if we crash partway through? The answer is to build the\nnew index in a new physical file. But how do we install the new\nfile as the real index atomically, when it might span multiple\nsegments? If the physical file name is decoupled from the relation's\nname *and* OID then there is no problem: the atomic event that makes\nthe new file(s) the real table contents is the commit of the new\npg_class row with the new value for the physical filename.\n\nAside from possible improvements in VACUUM, this would let us do a\nrobust implementation of CLUSTER, and we could do the \"really change\nthe table\" variant of ALTER TABLE DROP COLUMN the same way if anyone\nwants to do it.\n\nThe only cost is that we need an additional column in pg_class to\nhold the physical file name. That's not so bad, especially when\nyou remember that we'd surely need to add something to pg_class for\ntablespace support anyway.\n\nIf we bite that bullet, then we could also do something to satisfy\nBruce about having legible file names ;-). The column in pg_class\ncould perfectly well be a string, not a pure number, and that means\nthat we can throw in the relname (truncated to fit of course). So\nthe thing would act a lot like the original-relname-plus-OID variant\nthat's been discussed so far. (Original relname because ALTER TABLE\nRENAME would *not* change the physical file name. But we could\nthink about a form of VACUUM that creates a whole new table by\nversioning, and that would presumably bring the physical name back\nin sync with the logical relname.)\n\nHere is a sketch of a concrete proposal. I see no need to have\nseparate pg_class columns for tablespace and physical relname;\ninstead, I suggest there be a column of type NAME that is the\nfile pathname (relative to the database directory). Further,\ninstead of the existing convention of appending .N to the base\nfile name to make extension segment names, I propose that we\nalways have a segment number in the physical file name, and that\nthe pg_class entry be required to contain a \"%d\" somewhere that\nindicates where. The actual filename is manufactured by\n\tsprintf(tempbuf, value_from_pg_class_column, segment_number);\n\nAs an example, the arrangement I was suggesting earlier today\nabout segments in different subdirectories of a tablespace\ncould be implemented by assigning physical filenames like\n\n\ttablespace/%d/12345_relname\n\nwhere the 12345 is a value generated separately from the table's OID.\n(We would still use the OID counter to produce these numbers, and\nin fact there's no reason not to use the table's OID as the initial\nunique ID for the physical filename. The point is just that the\nphysical filename doesn't have to remain forever equal to the\nrelation's OID.)\n\nIf we use type NAME for this string then the tablespace part of the path\nwould have to be kept to no more than ~ 15 characters, but that seems\nworkable enough. 
(Anybody who really didn't like that could recompile\nwith larger NAMEDATALEN. Doesn't seem worth inventing a separate type.)\n\nAs Hiroshi pointed out, one of the best aspects of this approach\nis that the physical table layout policy doesn't have to be hard-wired\ninto low-level file access routines. The low-level routines don't\nneed to know much of anything about the format of the pathname,\nthey just stuff in the right segment number and use the name. The\nlayout policy need only be known to one single routine that generates\nthe strings that go into pg_class. So it'd be really easy to change.\n\nOne thing we'd have to work out is that the critical system tables\n(eg, pg_class itself, as well as its indexes) would have to have\npredictable physical names. Otherwise there's no way for a new\nbackend to bootstrap itself up ... it can't very well read pg_class\nto find out where pg_class is. A brute-force solution is to forbid\nreversioning of the critical tables, but I suspect we can find a\nless restrictive answer.\n\nThis seems like it'd satisfy all the concerns that have been raised.\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Jun 2000 13:51:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "OK, OK, Hiroshi's right: use a seperately-generated filename" }, { "msg_contents": "Tom Lane wrote:\n So\n> the thing would act a lot like the original-relname-plus-OID variant\n> that's been discussed so far. (Original relname because ALTER TABLE\n> RENAME would *not* change the physical file name. But we could\n> think about a form of VACUUM that creates a whole new table by\n> versioning, and that would presumably bring the physical name back\n> in sync with the logical relname.)\n\nAt least on UNIX, couldn't you use a hard-link and change the name in\npg_class immediately? Let the brain-dead operating systems use the\nvacuum method.\n", "msg_date": "Sat, 17 Jun 2000 10:50:10 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OK, OK, Hiroshi's right: use a seperately-generated\n filename" }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> At least on UNIX, couldn't you use a hard-link and change the name in\n> pg_class immediately? Let the brain-dead operating systems use the\n> vacuum method.\n\nHmm ... maybe, but it doesn't seem worth the portability headache to\nme. 
We do have an NT port that we don't want to break, and I don't\nthink RENAME TABLE is worth the trouble of testing/supporting two\nimplementations.\n\nEven on Unix, aren't there filesystems that don't do hard links?\nNot that I'd recommend running Postgres on such a volume, but...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Jun 2000 21:12:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OK, OK, Hiroshi's right: use a seperately-generated filename " }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Tom Lane\n> \n> After further thought I think there's a lot of merit in Hiroshi's\n> opinion that physical file names should not be tied to relation OID.\n> If we use a separately generated value for the file name, we can\n> solve a lot of problems pretty nicely by means of \"table versioning\".\n> \n> For example: VACUUM can't compact indexes at the moment, and what it\n> does do (scan the index and delete unused entries) is really slow.\n> The right thing to do is for it to generate an all-new index file,\n> but how do we do that without creating a risk of leaving the index\n> corrupted if we crash partway through? The answer is to build the\n> new index in a new physical file. But how do we install the new\n> file as the real index atomically, when it might span multiple\n> segments? If the physical file name is decoupled from the relation's\n> name *and* OID then there is no problem: the atomic event that makes\n> the new file(s) the real table contents is the commit of the new\n> pg_class row with the new value for the physical filename.\n> \n> Aside from possible improvements in VACUUM, this would let us do a\n> robust implementation of CLUSTER, and we could do the \"really change\n> the table\" variant of ALTER TABLE DROP COLUMN the same way if anyone\n> wants to do it.\n>\n\nYes,I've wondered how do we implement column_is_really_dropped \nALTER TABLE DROP COLUMN feature without this kind of mechanism.\n\n> The only cost is that we need an additional column in pg_class to\n> hold the physical file name. That's not so bad, especially when\n> you remember that we'd surely need to add something to pg_class for\n> tablespace support anyway.\n> \n> If we bite that bullet, then we could also do something to satisfy\n> Bruce about having legible file names ;-). The column in pg_class\n> could perfectly well be a string, not a pure number, and that means\n> that we can throw in the relname (truncated to fit of course). So\n> the thing would act a lot like the original-relname-plus-OID variant\n> that's been discussed so far. (Original relname because ALTER TABLE\n> RENAME would *not* change the physical file name. But we could\n> think about a form of VACUUM that creates a whole new table by\n> versioning, and that would presumably bring the physical name back\n> in sync with the logical relname.)\n> \n> As Hiroshi pointed out, one of the best aspects of this approach\n> is that the physical table layout policy doesn't have to be hard-wired\n> into low-level file access routines. The low-level routines don't\n> need to know much of anything about the format of the pathname,\n> they just stuff in the right segment number and use the name. The\n> layout policy need only be known to one single routine that generates\n> the strings that go into pg_class. So it'd be really easy to change.\n>\n\nRoss's approach is fundamentally same though he is using relname+OID\nnaming rule. 
I've said his trial is most practical one.\n \n> One thing we'd have to work out is that the critical system tables\n> (eg, pg_class itself, as well as its indexes) would have to have\n> predictable physical names.\n\nThe only limitation of the relation filename is the uniqueness.\nSo it doesn't introduce any inconsistency that system tables\nhave fixed name.\nAs for system relations it wouldn't be so bad because CLUSTER/\nALTER TABLE DROP COLUMN ... would be unnecessary(maybe).\nBut as for system indexes,it is preferable that VACUUM/REINDEX\ncould rebuild them safely. System indexes never shrink currently.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Sat, 17 Jun 2000 18:38:53 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: OK, OK, Hiroshi's right: use a seperately-generated filename" }, { "msg_contents": "Tom Lane writes:\n\n> \ttablespace/%d/12345_relname\n\nThrowing table spaces and relation names into one pot doesn't excite me\nvery much. For example, before long people will want to\n\n* Query what tables are in what space (without using string operations)\nConsider for example creating a new table and choosing where to put it.\n\n* Rename table spaces\n\n* Assign attributes of some sort to table spaces (permissions, etc.)\n\n* Use table space names with more than 15 characters. :)\n\nSomehow table spaces need to be catalogued. You could still make the\nphysical file name 'tablespaceoid/rest' without actually having to look up\nanything, although that depends on your symlink idea which is still under\ndiscussion.\n\nThen, why are all nth segments of tables in one directory in that\nproposal?\n\nAlso, you said before that an old relname (after rename) is worse than\nnone at all. I couldn't agree more.\n\nWhy not use OID.[SEGMENT.]VERSION for the physical relname (different\norder possible)? That way you at least have some guaranteed correspondence\nbetween files and tables. Version could probably be an INT2, so you save\nsome space.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 17 Jun 2000 15:01:53 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OK, OK, Hiroshi's right: use a seperately-generated\n filename" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> One thing we'd have to work out is that the critical system tables\n>> (eg, pg_class itself, as well as its indexes) would have to have\n>> predictable physical names.\n\n> The only limitation of the relation filename is the uniqueness.\n> So it doesn't introduce any inconsistency that system tables\n> have fixed name.\n> As for system relations it wouldn't be so bad because CLUSTER/\n> ALTER TABLE DROP COLUMN ... would be unnecessary(maybe).\n> But as for system indexes,it is preferable that VACUUM/REINDEX\n> could rebuild them safely. System indexes never shrink currently.\n\nRight, it's the index-shrinking business that has me worried.\nMost of the other reasons for swapping in a new file don't apply\nto system tables, but that one does.\n\nOne possibility is to say that system *tables* can't be reversioned\n(at least not the critical ones) but system *indexes* can be.\nThen we'd have to use your ignore-system-indexes stuff during backend\nstartup, until we'd found out where the indexes are. Might be too big\na time penalty however... not sure. 
Shared cache inval of a system\nindex could be a little tricky too; I don't think the catcache routines\nare prepared to fall back to non-index scan are they?\n\nOn the whole it might be better to cheat by using a side data structure\nlike the pg_internal.init file, that a backend could consult to find out\nwhere the indexes are now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Jun 2000 12:24:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OK, OK, Hiroshi's right: use a seperately-generated filename " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Somehow table spaces need to be catalogued.\n\nSure. Undoubtedly there'll be a pg_tablespace table somewhere. However,\nI don't think it's a good idea to have to consult pg_tablespace to find\nout where a table actually is --- I think the pathname (or smgr access\ntoken as Ross would call it ;-)) ought to be determinable from just the\npg_class entry.\n\nIt would probably be best to expend an additional 4 bytes per pg_class\nentry to record the OID of the table's tablespace, just so you could do\njoins easily without having to do string matching (and assume an\nuncomfortable amount about the format of the pathname). Having the\npathname in the pg_class entry too represents some denormalization,\nbut I think it's the safest way.\n\n> For example, before long people will want to\n> * Query what tables are in what space (without using string operations)\n> * Rename table spaces\n> * Assign attributes of some sort to table spaces (permissions, etc.)\n> * Use table space names with more than 15 characters. :)\n\nTablespaces can have logical names stored in pg_tablespace; they just\ncan't contribute more than a dozen or so characters to file pathnames\nunder the implementation I'm proposing. That doesn't seem too\nunreasonable; the pathname part can be some sort of abbreviated name.\nThe alternative is to enlarge smgr access tokens to something like 64\nbytes. I'd rather keep them as compact as we can, since we're going to\nneed to store them in places like the bufmgr's shared-buffer headers\n(remember the blind write problem).\n\n> Then, why are all nth segments of tables in one directory in that\n> proposal?\n\nIt's better than *all* segments of tables in one directory, which is\nwhat you get if the segment number is just a component of a flat file\nname. We have to have a better answer than that for people who need\nto cope with tables bigger than a disk. Perhaps someone can think of a\nbetter answer than subdirectory-per-segment-number, but I think that\nwill work well enough; and it doesn't add any complexity for file\naccess.\n\n> Also, you said before that an old relname (after rename) is worse than\n> none at all. I couldn't agree more.\n\nI'm not the one who wants relnames in the physical names ;-). 
However,\nthis implementation mechanism will support either policy choice ---\noriginal relname in the filename, or just a numeric ID for the filename\n--- and that seems like a good sign to me.\n\n> Why not use OID.[SEGMENT.]VERSION for the physical relname (different\n> order possible)?\n\nDoesn't give you a manageable way to split segments across different\ndisks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Jun 2000 12:47:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OK, OK, Hiroshi's right: use a seperately-generated filename " }, { "msg_contents": "Tom Lane wrote:\n\n> > Also, you said before that an old relname (after rename) is worse than\n> > none at all. I couldn't agree more.\n> \n> I'm not the one who wants relnames in the physical names ;-). However,\n> this implementation mechanism will support either policy choice ---\n> original relname in the filename, or just a numeric ID for the filename\n> --- and that seems like a good sign to me.\n> \n> > Why not use OID.[SEGMENT.]VERSION for the physical relname (different\n> > order possible)?\n\nUnless VERSION is globally unique like an oid is, having RELNAME.VERSION\nwould be a problem if you created a table with the same name as a \nrecently renamed table.\n", "msg_date": "Sun, 18 Jun 2000 11:07:18 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OK, OK, Hiroshi's right: use a seperately-generated\n filename" }, { "msg_contents": "Tom Lane writes:\n\n> I don't think it's a good idea to have to consult pg_tablespace to find\n> out where a table actually is --- I think the pathname (or smgr access\n> token as Ross would call it ;-)) ought to be determinable from just the\n> pg_class entry.\n\nThat's why I suggested the table space oid. That would be readily\navailable from pg_class.\n\n\n> Tablespaces can have logical names stored in pg_tablespace; they just\n> can't contribute more than a dozen or so characters to file pathnames\n> under the implementation I'm proposing. That doesn't seem too\n> unreasonable; the pathname part can be some sort of abbreviated name.\n\nSince the abbreviated name is really only used internally it might as well\nbe the oid. Otherwise you create a weird functional dependency like the\npg_shadow.usesysid field that's just an extra layer of maintenance.\n\n\n> this implementation mechanism will support either policy choice ---\n> original relname in the filename, or just a numeric ID for the\n> filename\n\nBut when you look at a file name `12345_accounts_recei' you know neither\n\n* whether the table name was really `accounts_recei' or whether the name\nwas truncated\n\n* whether the table still has that name, whatever it was\n\n* what table this is at all\n\nSo in the aggregate you really know less than nothing. :-)\n\n\n> > Why not use OID.[SEGMENT.]VERSION for the physical relname (different\n> > order possible)?\n> \n> Doesn't give you a manageable way to split segments across different\n> disks.\n\nOkay, so maybe ${base}/TABLESPACEOID/SEGMENT/RELOID.VERSION.\n\nThis doesn't need any catalog lookup outside of pg_class, yet it's still\neasy to resolve to human-readable names by simple admin tools (SELECT *\nFROM pg_foo WHERE oid = xxx). 
VERSION would be unique within a conceptual\nrelation, so you could even see how many times the relation was altered in\nmajor ways (kind of).\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 18 Jun 2000 23:24:26 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OK, OK, Hiroshi's right: use a seperately-generated\n filename" }, { "msg_contents": "> Tom Lane wrote:\n> So\n> > the thing would act a lot like the original-relname-plus-OID variant\n> > that's been discussed so far. (Original relname because ALTER TABLE\n> > RENAME would *not* change the physical file name. But we could\n> > think about a form of VACUUM that creates a whole new table by\n> > versioning, and that would presumably bring the physical name back\n> > in sync with the logical relname.)\n> \n> At least on UNIX, couldn't you use a hard-link and change the name in\n> pg_class immediately? Let the brain-dead operating systems use the\n> vacuum method.\n\nYes, we can hard-link, and let vacuum remove the old link.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 18 Jun 2000 19:01:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OK, OK, Hiroshi's right: use a seperately-generated filename" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Chris Bitmead\n>\n> Tom Lane wrote:\n>\n> > > Also, you said before that an old relname (after rename) is worse than\n> > > none at all. I couldn't agree more.\n> >\n> > I'm not the one who wants relnames in the physical names ;-). However,\n> > this implementation mechanism will support either policy choice ---\n> > original relname in the filename, or just a numeric ID for the filename\n> > --- and that seems like a good sign to me.\n> >\n> > > Why not use OID.[SEGMENT.]VERSION for the physical relname (different\n> > > order possible)?\n>\n> Unless VERSION is globally unique like an oid is, having RELNAME.VERSION\n> would be a problem if you created a table with the same name as a\n> recently renamed table.\n>\n\nIn my proposal(relname+unique-id),the unique-id is globally unique\nand relname is only for dba's convenience. I've said many times that\nwe should be free from the rule of file naming as far as possible.\nI myself don't mind the name of relation files except that they should\nbe globally unique. I had to propose my opinion for file naming\nbecause people have been so enthusiastic about globally_not_unique\nfile naming.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Mon, 19 Jun 2000 09:24:56 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: OK, OK, Hiroshi's right: use a seperately-generated filename" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Tom Lane wrote:\n> > So\n> > > the thing would act a lot like the original-relname-plus-OID variant\n> > > that's been discussed so far. (Original relname because ALTER TABLE\n> > > RENAME would *not* change the physical file name. 
But we could\n> > > think about a form of VACUUM that creates a whole new table by\n> > > versioning, and that would presumably bring the physical name back\n> > > in sync with the logical relname.)\n> >\n> > At least on UNIX, couldn't you use a hard-link and change the name in\n> > pg_class immediately? Let the brain-dead operating systems use the\n> > vacuum method.\n> \n> Yes, we can hard-link, and let vacuum remove the old link.\n\nBTW, how does vacuum know which files are obsolete. Does it just delete\nfiles it doesn't know about?\n\nWhat a good application for time travel!\n", "msg_date": "Mon, 19 Jun 2000 10:36:55 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OK, OK, Hiroshi's right: use a seperately-generated \n filename" }, { "msg_contents": "> > Yes, we can hard-link, and let vacuum remove the old link.\n> \n> BTW, how does vacuum know which files are obsolete. Does it just delete\n> files it doesn't know about?\n> \n> What a good application for time travel!\n> \n\nI assume it removes files with oid's that match pg_class but who's file\nnames do not match.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 18 Jun 2000 21:03:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OK, OK, Hiroshi's right: use a seperately-generated \n filename" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Peter Eisentraut\n> \n> Tom Lane writes:\n> \n> > I don't think it's a good idea to have to consult pg_tablespace to find\n> > out where a table actually is --- I think the pathname (or smgr access\n> > token as Ross would call it ;-)) ought to be determinable from just the\n> > pg_class entry.\n> \n> That's why I suggested the table space oid. That would be readily\n> available from pg_class.\n>\n\nIt seems to me that the following 1)2) has always been mixed up.\nIMHO,they should be distinguished clearly.\n\n1) Where the table is stored\n Currently PostgreSQL relies on relname -> filename mapping\n rule to access *existent* relations and doesn't have this\n information in its database. Our(Tom,Ross,me) proposal is to\n keep the information(token) in pg_class and provide a standard\n transactional control mechanism for the change of table file\n allocation. By doing it we would be able to be free from table\n allocation(naming) rule.\n Isn't it a kind of thing why we haven't had it from the first ?\n \n2) Where to store the table\n Yes,TABLE(DATA)SPACE should encapsulate this concept.\n \nI want the decision about 1) first. Ross has already tried it without\n2).\n\nComments ?\n\nAs for 2) every one seems to have each opinion and the discussion\nhas always been divergent. Please don't discard 1) together.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n\n", "msg_date": "Mon, 19 Jun 2000 13:52:34 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: OK, OK, Hiroshi's right: use a seperately-generatedfilename " }, { "msg_contents": "On Fri, 16 Jun 2000, Tom Lane wrote:\n\n> Chris Bitmead <[email protected]> writes:\n> > At least on UNIX, couldn't you use a hard-link and change the name in\n> > pg_class immediately? Let the brain-dead operating systems use the\n> > vacuum method.\n> \n> Hmm ... 
maybe, but it doesn't seem worth the portability headache to\n> me. We do have an NT port that we don't want to break, and I don't\n> think RENAME TABLE is worth the trouble of testing/supporting two\n> implementations.\n> \n> Even on Unix, aren't there filesystems that don't do hard links?\n> Not that I'd recommend running Postgres on such a volume, but...\n\nTo the best of my knowledge, it's only symlinks that aren't\n(weren't?) universally supported ... somehow, I believe that even extends\nto NT ...\n\n\n", "msg_date": "Tue, 20 Jun 2000 19:50:34 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OK, OK, Hiroshi's right: use a seperately-generated\n filename" } ]
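As a hedged aside from the editor: later releases ended up adopting much of the design discussed in this thread — pg_class gained a relfilenode column in 7.1 and a reltablespace column (with a pg_tablespace catalog) in 8.0 — so on such a server the logical-to-physical mapping can be inspected with plain SQL; the table name below is only a placeholder:

    SELECT c.relname, c.oid, c.relfilenode, t.spcname
    FROM pg_class c
    LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace
    WHERE c.relname = 'accounts';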
[ { "msg_contents": "I'm probably just missing the point, but why do I have to specify the\nindexname for cluster if the table already has a primary key? Wouldn't\ncluster want to use the primary key for the table (if it exists) anyway?\n\nThanks.\n-Tony\n'\n\n", "msg_date": "Fri, 16 Jun 2000 11:17:04 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Why does cluster need the indexname?" }, { "msg_contents": "I guess we could default it if they don't specifiy an index.\n\n\n> I'm probably just missing the point, but why do I have to specify the\n> indexname for cluster if the table already has a primary key? Wouldn't\n> cluster want to use the primary key for the table (if it exists) anyway?\n> \n> Thanks.\n> -Tony\n> '\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 16 Jun 2000 15:06:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does cluster need the indexname?" }, { "msg_contents": ">> I'm probably just missing the point, but why do I have to specify the\n>> indexname for cluster if the table already has a primary key? Wouldn't\n>> cluster want to use the primary key for the table (if it exists) anyway?\n\nNo, you wouldn't necessarily want to cluster on the primary key.\nYou might be using the primary key to enforce logical consistency,\nbut be doing most of your actual scans on some secondary index.\n\nI always thought that CLUSTER was being redundant in the other\ndirection: if you've told it the index name, there's no need to\ntell it the base table name. It can find that out from the index.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Jun 2000 15:16:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does cluster need the indexname? " }, { "msg_contents": "Tom Lane wrote:\n\n> I always thought that CLUSTER was being redundant in the other\n> direction: if you've told it the index name, there's no need to\n> tell it the base table name. It can find that out from the index.\n>\n\nGood point. That would make the most sense:\n\nCLUSTER index_name\n\nif index_name was a primary key to a table, then there's no need to specify\nthe table.\n\n-Tony\n\n\n", "msg_date": "Fri, 16 Jun 2000 13:36:03 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does cluster need the indexname?" } ]
[ { "msg_contents": "Hi everybody,\n\nI would like first to thank you all for the very good job done in\nPostGreSQL.\n\nYou say that you don't look for performance, but I can tell that\nPostgres on Linux\nis as fast as Sybase on windows.\n\nI use Postgres for a couple of years now, with windows clients and I\nrecently use the text type.\nIt is when I discovered a problem :\n\nWhen I record a field containing CRLF (update or insert), one leading\ncaracter is rubbed off\nfor each CRLF encountered, when I see it back in the windows\napplication.\n\nFor example : I record \"Hello\\r\\nworld\" and I see back \"ello\\r\\nworld\"\n\npsql shows \"Hello\\nworld\"\n\nThe field is recorded correctly when watching it with psql, that is to\nsay with all caracters in\nbut with LF instead of CRLF.\n\nThe external way I found to get around the problem is to add a leading\nspace for each CRLF\nencountered in the string when recording it in the database, so that all\nseems to be correct\nfrom the client.\n\nSo I think that the problem is coming from the ODBC driver that try to\ntransform LF in CRLF.\n\nI hope that all this is clear :-)\n\nI currently use postgresql-6.4.2 on a RedHat 6.0.\nThe ODBC driver version is 6.40.0002 configured with 6.4 protocol, all\ndefaults except the ReadOnly option\nwhich is off.\n\nI hope not to have bored you with all this :-))\n\nYou may have questions but I am *NOT* on the mailing list...\n\nThank you\n\nPierre-Louis\n\n\n\n", "msg_date": "Fri, 16 Jun 2000 23:40:02 +0200", "msg_from": "Pierre-Louis Malatray <[email protected]>", "msg_from_op": true, "msg_subject": "ODBC driver problem ??" } ]
[ { "msg_contents": "There seems to be little practical reason why we couldn't go along with\nthe standard install modes of:\n\nprograms, shared libraries\t\t0755\t[*]\ndata (libraries, headers, *.sample)\t0644\n\nrather than our current\n\nprogram\t\t\t0555\nstatic library\t\t0644\nshared library\t\t0644 most of the time\n\"everything else\"\t0444\n\nOn the other hand, I'm sure somebody knows one. Comments?\n\n[*] assuming that \"HPUX wants shared libs to be mode 555\" does not\npreclude mode 755\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 17 Jun 2000 15:00:24 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Install modes" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> There seems to be little practical reason why we couldn't go along with\n> the standard install modes of:\n\n> programs, shared libraries\t\t0755\t[*]\n> data (libraries, headers, *.sample)\t0644\n\n> [*] assuming that \"HPUX wants shared libs to be mode 555\" does not\n> preclude mode 755\n\n555 for shlibs on HPUX is not negotiable --- the performance cost of not\ndoing it that way is horrific. Shlibs are so wonderfully nonstandard\nthat there are likely other platforms with weird requirements for\nshlibs. So I'd suggest 3 categories:\n\nprograms\t\t\t\t0755\nshared libraries\t\t\tplatform-specific but usually 0755\ndata (libraries, headers, *.sample)\t0644\n\nOtherwise I agree --- no real need for 444 on data files.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Jun 2000 13:08:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Install modes " } ]
[ { "msg_contents": "Here is another article. It is taken from chapter 1 of my book:\n\n\thttp://www.oreillynet.com/pub/a/network/2000/06/16/magazine/postgresql_history.html\n\nIt is part of a larger MySQL article:\n\n\thttp://www.oreillynet.com/pub/a/network/2000/06/16/magazine/mysql.html\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 17 Jun 2000 10:18:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Another PostgreSQL article" } ]
[ { "msg_contents": "Hi,\n\nI have two servers running pgsql. Is there a command to transfer the\ndatabases\nbetween them?\n\nCraig May\n\nEnth Dimension\nhttp://www.enthdimension.com.au\n\n", "msg_date": "Sun, 18 Jun 2000 09:07:07 +1000", "msg_from": "Craig May <[email protected]>", "msg_from_op": true, "msg_subject": "Database Transfer" }, { "msg_contents": "Craig May writes:\n\n> I have two servers running pgsql. Is there a command to transfer the\n> databases\n> between them?\n\npg_dump and psql. \"Back up\" one database and \"restore\" it on the other\nserver. Don't even think about moving files around. :)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 18 Jun 2000 16:24:43 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database Transfer" }, { "msg_contents": "I have an interest in this topic ...\n\nI am looking to do \"real time\" updates of a data base on two different\nservers, with one acting as a master and the other acting as a slave. In\nthe realm of Oracle, I believe it is called \"replication\".\n\n From what I have read, there is no such feature in pgsql. Can somebody\nconfirm this?\n\nThanks,\nKate Collins\n\nPeter Eisentraut wrote:\n\n> Craig May writes:\n>\n> > I have two servers running pgsql. Is there a command to transfer the\n> > databases\n> > between them?\n>\n> pg_dump and psql. \"Back up\" one database and \"restore\" it on the other\n> server. Don't even think about moving files around. :)\n>\n> --\n> Peter Eisentraut Sernanders v�g 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n\n--\n=================================================\nKatherine (Kate) L. Collins\nSenior Software Engineer/Meteorologist\nWeather Services International (WSI Corporation)\n4 Federal Street\nBillerica, MA 01821\nEMAIL: [email protected]\nPHONE: (978) 670-5110\nFAX: (978) 670-5100\nhttp://www.intellicast.com\n\n\n", "msg_date": "Mon, 19 Jun 2000 04:49:01 -0400", "msg_from": "Kate Collins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database Transfer" }, { "msg_contents": "Ciao Kate Collins, on 19-Jun-00 as written on pgsl-sql mailing list:\n\n> I have an interest in this topic ...\n\n> I am looking to do \"real time\" updates of a data base on two different\n> servers, with one acting as a master and the other acting as a slave. In\n> the realm of Oracle, I believe it is called \"replication\".\n\nI don't know if there is someone available to discuss about:\nhow would be possible the implementation of \"replication\" feature in\nPostgreSQL?\n\n\nBye, \\fer\n\n", "msg_date": "Tue, 20 Jun 2000 16:19:22 +0100", "msg_from": "Ferruccio Zamuner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Database Transfer" } ]
[ { "msg_contents": "Our documentation claims (eg in the CREATE INDEX ref page) that\n\n: The int24_ops operator class is useful for constructing indices on int2\n: data, and doing comparisons against int4 data in query\n: qualifications. Similarly, int42_ops support indices on int4 data that\n: is to be compared against int2 data in queries.\n\nBut as far as I can tell, it is not actually possible for these\nopclasses to work as claimed, and never has been. The reason is that\nthere is only one set of associated operators for an opclass. To have\nan opclass that works as suggested above, you would need *two* sets of\noperators identified for the opclass. For example, in the case of\nint24_ops, you'd need to point at both of:\n\n1. int2 vs. int4 operators (eg, int24lt) --- the planner must see these\n in order to know that an \"int2 < int4\" WHERE clause has any relevance\n to the index.\n\n2. int2 vs. int2 operators (eg, int2lt) --- the index access method\n itself needs these for internal operations on the index, such as\n comparing a new datum to the ones already in the index for insertion.\n\nCurrently we only reference the first set of operators, which means that\ninternal operations are wrong for these opclasses. Thus, for example:\n\ncreate table foo (f1 int4);\ncreate unique index foo42i on foo (f1 int42_ops);\ninsert into foo values(65537);\ninsert into foo values(1);\nERROR: Cannot insert a duplicate key into unique index foo42i\n\nIn the case of btree operations it's barely possible that we could get\naround this by using the three-way comparison support procedure (int2cmp\nor int4cmp in these cases) for *all* internal comparisons in the index,\nand being careful to use the amop operators --- the right way round! ---\nfor all comparisons to external values. The btree code is not that\ncareful now, and I'm not sure it can be made that careful; it's not\nclear that the low-level operations can tell whether the key they are\nworking with is an about-to-be-inserted value (same type as the index\nentries) or a comparison key (not same type as the index entries).\n\nEven if we could make it work, it'd be horribly fragile in the face of\nfuture code changes --- people are just too used to assuming that\n\"a < b\" and \"b > a\" are equivalent ways of coding a test. And we don't\nhave any way of automatically checking the code, given that all these\nvalues are Datum as far as the compiler knows.\n\nI think we ought to assume that index manipulation deals with only\none datatype for any given index, and therefore these two opclasses\nare broken by design and must be removed.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Jun 2000 20:23:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "int24_ops and int42_ops are bogus" }, { "msg_contents": "> I think we ought to assume that index manipulation deals with only\n> one datatype for any given index, and therefore these two opclasses\n> are broken by design and must be removed.\n\nAgreed. They are weird.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 18 Jun 2000 20:54:37 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: int24_ops and int42_ops are bogus" }, { "msg_contents": "I wrote:\n> I think we ought to assume that index manipulation deals with only\n> one datatype for any given index, and therefore these two opclasses\n> are broken by design and must be removed.\n\nI have removed these two opclasses from the system. I had a further\nthought on the issue, which I just want to record in the archives\nin case anyone ever comes back and wants to resurrect\nint24_ops/int42_ops.\n\nThe real design problem with these two opclasses is that if you want\nto have an int4 column that you might want to compare against either\nint2 or int4 constants, you have to create *two* indexes to handle\nthe two cases. The contents of the two indexes will be absolutely\nidentical, so this approach is inherently silly. The right way to\nattack it is to extend the opclass/amop information so that the\nsystem could understand that a plain-vanilla int4 index might be\nused with int4 vs int2 operators to compare against int2 constants\n--- or with int4 vs int8 operators to compare against int8 constants,\netc.\n\nIt would not be real difficult to extend the opclass representation\nto show these relationships, I think. The hard part is that btree\n(and probably the other index types) is sloppy about whether it is\ncomparing index entries or externally-supplied values and which side\nof the comparison is which. Cleaning that up would be painful and\nmaybe impractical --- but if it could be done it'd be nifty.\n\nThe path I think we will actually pursue, instead, is teaching the\nplanner to coerce constants to the same type as the compared-to\ncolumn. For instance, given \"int2var < int4constant\" the planner\nwill try to coerce the constant to int2 so that it can apply\nint2-vs-int2 operators with an int2 index. This falls down on\ncases like \"int2var < 100000\" because it won't be possible to\nreduce the constant to int2, whereas the above-sketched idea could\nstill handle that case as an indexscan. But in terms of actual\neveryday usefulness, I doubt this is a serious limitation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Jun 2000 00:52:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: int24_ops and int42_ops are bogus " } ]
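A short editor's sketch of the cross-type situation described above (table and index names are invented; behavior as described for the versions under discussion):

    CREATE TABLE t2 (f2 int2);
    CREATE INDEX t2_f2_idx ON t2 (f2);
    -- the constant 100 is int4, so this comparison needs int2-vs-int4
    -- operators and the plain int2 index is not considered:
    SELECT * FROM t2 WHERE f2 < 100;
    -- casting the constant keeps the comparison within one type, which is
    -- the coercion the planner is intended to perform automatically:
    SELECT * FROM t2 WHERE f2 < 100::int2;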
[ { "msg_contents": "Current cvs compiled on Solaris gives the following error...\n\nIn file included from ../../include/tcop/tcopprot.h:22,\n from pg_proc.c:26:\n/usr/include/setjmp.h:70: conflicting types for `jmp_buf'\n/usr/include/setjmp.h:53: previous declaration of `jmp_buf'\ngmake[3]: *** [pg_proc.o] Error 1\n\n\nThe problem comes from src/include/config.h which defines...\n\n#ifndef HAVE_SIGSETJMP\n# define sigjmp_buf jmp_buf\n# define sigsetjmp(x,y) setjmp(x)\n# define siglongjmp longjmp\n#endif\n\nwhich redefines jmp_buf in conflict with solaris. Solaris appears to\nhave sigsetjmp, but I don't know the cause of the problem beyond that.\n", "msg_date": "Mon, 19 Jun 2000 11:37:58 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "PROBLEM on SOLARIS" }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> Current cvs compiled on Solaris gives the following error...\n> In file included from ../../include/tcop/tcopprot.h:22,\n> from pg_proc.c:26:\n> /usr/include/setjmp.h:70: conflicting types for `jmp_buf'\n> /usr/include/setjmp.h:53: previous declaration of `jmp_buf'\n> gmake[3]: *** [pg_proc.o] Error 1\n\nHmm, I didn't think anything much had changed in that area.\nWhat was the last version you compiled successfully on that box?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Jun 2000 01:26:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PROBLEM on SOLARIS " }, { "msg_contents": "Tom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > Current cvs compiled on Solaris gives the following error...\n> > In file included from ../../include/tcop/tcopprot.h:22,\n> > from pg_proc.c:26:\n> > /usr/include/setjmp.h:70: conflicting types for `jmp_buf'\n> > /usr/include/setjmp.h:53: previous declaration of `jmp_buf'\n> > gmake[3]: *** [pg_proc.o] Error 1\n> \n> Hmm, I didn't think anything much had changed in that area.\n> What was the last version you compiled successfully on that box?\n\n7.02 seems to compile cleanly. There are a number of other problems in\ncvs for solaris after I hacked through that one too. Whoever works on\nSolaris should probably take a look.\n", "msg_date": "Mon, 19 Jun 2000 15:39:54 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PROBLEM on SOLARIS" }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> Tom Lane wrote:\n>> Chris Bitmead <[email protected]> writes:\n>>>> Current cvs compiled on Solaris gives the following error...\n\n>> Hmm, I didn't think anything much had changed in that area.\n>> What was the last version you compiled successfully on that box?\n\n> 7.02 seems to compile cleanly. There are a number of other problems in\n> cvs for solaris after I hacked through that one too.\n\nIn that case I'll take the liberty of blaming Peter's recent work on\nthe configure and make stuff ...\n\nHowever, knowing that configure is failing on your box isn't much\nof a step towards fixing it. Can you look at the differences between\nwhat configure emits for you now and what it produced in the 7.02\nrelease? 
include/config.h is the most likely place to check, though\nit's possible some other file holds the key.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Jun 2000 01:58:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PROBLEM on SOLARIS " }, { "msg_contents": "\nAttached are the differences for include/config.h on Solaris between\n7.02 and the latest snapshot....\n\n\n1c1\n< /* include/config.h. Generated automatically by configure. */\n---\n> /* src/include/config.h. Generated automatically by configure. */\n12c12\n< * $Id: config.h.in,v 1.113 2000/05/12 13:58:25 scrappy Exp $\n---\n> * $Id: config.h.in,v 1.119 2000/06/17 00:09:56 petere Exp $\n17a18\n> \n66,73d66\n< * As soon as the backend blocks on a lock, it waits this number of\nseconds\n< * before checking for a deadlock.\n< * We don't check for deadlocks just before sleeping because a\ndeadlock is\n< * a rare event, and checking is an expensive operation.\n< */\n< #define DEADLOCK_CHECK_TIMER 1\n< \n< /*\n117,126d109\n< /* Genetic Query Optimization (GEQO):\n< * \n< * The GEQO module in PostgreSQL is intended for the solution of the\n< * query optimization problem by means of a Genetic Algorithm (GA).\n< * It allows the handling of large JOIN queries through non-exhaustive\n< * search.\n< * For further information see README.GEQO\n<[email protected]>.\n< */\n< #define GEQO\n< \n161,165d143\n< /*\n< * ELOG_TIMESTAMPS: adds a timestamp with the following format to elog\n< * messages: yymmdd.hh:mm:ss.mmm [pid] message\n< */\n< /* #define ELOG_TIMESTAMPS */\n167,173c145\n< /*\n< * USE_SYSLOG: use syslog for elog and error messages printed by\ntprintf\n< * and eprintf. This must be activated with the syslog flag in\npg_options\n< * (syslog=0 for stdio, syslog=1 for stdio+syslog, syslog=2 for\nsyslog).\n< * For information see backend/utils/misc/trace.c (Massimo Dal Zotto).\n< */\n< /* #define USE_SYSLOG */\n---\n> /* #undef ENABLE_SYSLOG */\n186a159\n> /* #define LOCK_DEBUG */\n229c202,204\n< #define DEF_PGPORT \"5432\" \n---\n> #define DEF_PGPORT 5432\n> /* ... and once more as a string constant instead */\n> #define DEF_PGPORT_STR \"5432\"\n250c225\n< #define HAVE_CRYPT_H 1\n---\n> /* #undef HAVE_CRYPT_H */\n298c273,282\n< #define HAVE_VALUES_H 1\n---\n> /* #undef HAVE_VALUES_H */\n> /* Set to 1 if you have <sys/exec.h> */\n> #define HAVE_SYS_EXEC_H 1\n> \n> /* Set to 1 if you have <sys/pstat.h> */\n> /* #undef HAVE_SYS_PSTAT_H */\n> \n> /* Set to 1 if you have <machine/vmparam.h> */\n> #define HAVE_MACHINE_VMPARAM_H 1\n307c291,297\n< /* #undef HAVE_SETPROCTITLE */\n---\n> #define HAVE_SETPROCTITLE 1\n> \n> /* Define if you have the pstat function. */\n> /* #undef HAVE_PSTAT */\n> \n> /* Define if the PS_STRINGS thing exists. 
*/\n> /* #undef HAVE_PS_STRINGS */\n318a309,311\n> /* are we building against a libodbcinst */\n> /* #undef HAVE_SQLGETPRIVATEPROFILESTRING */\n> \n326c319\n< #define HAVE_LIBDL 1\n---\n> /* #undef HAVE_LIBDL */\n333,334c326,327\n< #define HAVE_GETTIMEOFDAY_2_ARGS 1\n< #ifndef HAVE_GETTIMEOFDAY_2_ARGS\n---\n> /* #undef GETTIMEOFDAY_1ARG */\n> #ifdef GETTIMEOFDAY_1ARG\n357c350\n< #define HAVE_FPCLASS 1\n---\n> /* #undef HAVE_FPCLASS */\n363c356\n< /* #undef HAVE_ISINF */\n---\n> #define HAVE_ISINF 1\n375c368\n< /* #undef HAVE_TM_ZONE */\n---\n> #define HAVE_TM_ZONE 1\n382c375\n< #define HAVE_INT_TIMEZONE 1\n---\n> /* #undef HAVE_INT_TIMEZONE */\n388c381\n< /* #undef HAVE_INET_ATON */\n---\n> #define HAVE_INET_ATON 1\n401c394\n< #define HAVE_FCVT 1\n---\n> /* #undef HAVE_FCVT */\n407c400\n< #define HAVE_FINITE 1\n---\n> /* #undef HAVE_FINITE */\n413c406\n< #define HAVE_SIGSETJMP 1\n---\n> /* #undef HAVE_SIGSETJMP */\n497c490\n< /* #undef HAVE_UNION_SEMUN */\n---\n> #define HAVE_UNION_SEMUN \n506c499\n< #define HAVE_LONG_LONG_INT_64 1\n---\n> #define HAVE_LONG_LONG_INT_64 \n519,521c512,514\n< #define ALIGNOF_LONG_LONG_INT 8\n< #define ALIGNOF_DOUBLE 8\n< #define MAXIMUM_ALIGNOF 8\n---\n> #define ALIGNOF_LONG_LONG_INT 4\n> #define ALIGNOF_DOUBLE 4\n> #define MAXIMUM_ALIGNOF 4\n523,524c516,517\n< /* Define as the base type of the last arg to accept */\n< #define SOCKET_SIZE_TYPE size_t\n---\n> /* Define as the type of the type of the 3rd argument to accept() */\n> #define ACCEPT_TYPE_ARG3 socklen_t\n527c520\n< #define USE_POSIX_SIGNALS 1\n---\n> #define HAVE_POSIX_SIGNALS \n530c523\n< #define HAVE_NAMESPACE_STD 1\n---\n> /* #undef HAVE_NAMESPACE_STD */\n533c526,535\n< #define HAVE_CXX_STRING_HEADER 1\n---\n> /* #undef HAVE_CXX_STRING_HEADER */\n> \n> /* Define if you are building with Kerberos 4 support */\n> /* #undef KRB4 */\n> \n> /* Define if you are building with Kerberos 5 support */\n> /* #undef KRB5 */\n> \n> /* The name of the Postgres service principal in Kerberos */\n> #define PG_KRB_SRVNAM \"postgres\"\n534a537,538\n> /* The location of the Kerberos server's keytab file */\n> #define PG_KRB_SRVTAB \"/etc/srvtab\"\n", "msg_date": "Mon, 19 Jun 2000 16:36:39 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PROBLEM on SOLARIS" }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> Attached are the differences for include/config.h on Solaris between\n> 7.02 and the latest snapshot....\n\n> 12c12\n> < * $Id: config.h.in,v 1.113 2000/05/12 13:58:25 scrappy Exp $\n> ---\n> > * $Id: config.h.in,v 1.119 2000/06/17 00:09:56 petere Exp $\n> [snippage]\n> 413c406\n> < #define HAVE_SIGSETJMP 1\n> ---\n> > /* #undef HAVE_SIGSETJMP */\n\nOK, so in fact configure is currently failing to detect that it\nshould define HAVE_SIGSETJMP on your platform.\n\nThis is pretty odd, because AFAICS the test for sigsetjmp is\nexactly the same as it was in 7.0. I'm guessing that the failure\nhas to do with reordering of the configure tests, so that the\nlist of #define symbols active at the time of the sigsetjmp test\nis different from what it was in 7.0. But that's as far as I can\ngo on this evidence. Anyone have a better idea?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Jun 2000 02:55:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PROBLEM on SOLARIS " } ]
[ { "msg_contents": "\n> BTW, schemas do make things interesting for the other camp:\n> is it possible for the same table to be referenced by different\n> names in different schemas? If so, just how useful is it to pick\n> one of those names arbitrarily for the filename? This is an advanced\n> version of the main objection to using the original relname and not\n> updating it at RENAME TABLE --- sooner or later, the filenames are\n> going to be more confusing than helpful.\n> \n> Comments? Have I missed something important about schemas?\n\nI think we have to agree on the way we want schemas to be.\nImho (and in other db's) the schema is simply the owner of a table.\n\nThe owner is an optional part of the table keyword ( select * from\n\"owner\".tabname ).\nIt also implys that different owners can have a table with the same name\nin the same database. (this is only implemented in some other db Systems)\n\nOur database concept is and imho should not be altered, thus we keep the\nhierarchy dbname --> owner(=schema) --> tablename.\n\nAndreas\n\n\n", "msg_date": "Mon, 19 Jun 2000 13:16:14 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big 7.1 open items " } ]
[ { "msg_contents": "\n> OK, to get back to the point here: so in Oracle, tables can't cross\n> tablespace boundaries,\n\nThis is only true if you don't insert more coins and buy the Partitioning\nOption,\nor you use those coins to switch to Informix.\n \n> but a tablespace itself could span multiple\n> disks?\n\nYes\n\n> \n> Not sure if I like that better or worse than equating a tablespace\n> with a directory (so, presumably, all the files within it live on\n> one filesystem) and then trying to make tables able to span\n> tablespaces. We will need to do one or the other though, if we want\n> to have any significant improvement over the current state of affairs\n> for large tables.\n\nYou can currently use a union all view and write appropriate rules\nfor insert, update and delete in Postgresql. This has the only disadvantage,\nthat Partitions (fragments, table parts) cannot be optimized away,\nbut we could fix that if we fixed the optimizer to take check constraints\ninto account (like check (year = 2000) and select * where year=1999).\n\nAndreas\n", "msg_date": "Mon, 19 Jun 2000 14:09:31 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big 7.1 open items " } ]
[ { "msg_contents": "\n> > \"Hiroshi Inoue\" <[email protected]> writes:\n> > > Please add my opinion for naming rule.\n> > \n> > > relname/unique_id but\tneed some work\t\tnew \n> pg_class column,\t\n> > > no relname change.\tfor unique-id generation\t\n> filename not relname\n> > \n> > Why is a unique ID better than --- or even different from ---\n> > using the relation's OID? It seems pointless to me...\n> \n> just to open up a whole new bucket of worms here, but ... if \n> we do use OID\n> (which up until this thought I endorse 100%) ... do we not \n> run a risk if\n> we run out of OIDs? As far as I know, those are still a \n> finite resource,\n> no? \n> \n> or, do we just assume that by the time that comes, everyone \n> will be pretty\n> much using 64bit machines? :)\n\nI think the idea is to have an option to remove oid's from \nuser tables. I don't think you will run out of oid's if you have your bulk\ndata\nnot use up oid's.\n\nAndreas\n", "msg_date": "Mon, 19 Jun 2000 14:14:43 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big 7.1 open items " } ]
[ { "msg_contents": "\n> It's better than *all* segments of tables in one directory, which is\n> what you get if the segment number is just a component of a flat file\n> name. We have to have a better answer than that for people who need\n> to cope with tables bigger than a disk. Perhaps someone can \n> think of a\n> better answer than subdirectory-per-segment-number, but I think that\n> will work well enough; and it doesn't add any complexity for file\n> access.\n\nI do not see this connection between a filesystem and a disk ?\nModern systems have the ability to join more than one disk into \none filesystem.\n\nAlso if we think about separating large tables into smaller parts\nwe imho want something where the optimizer has knowledge \nwhat data it finds in what part of the table.\n\nAndreas\n", "msg_date": "Mon, 19 Jun 2000 15:46:22 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: OK, OK, Hiroshi's right: use a seperately-generated\n\t filename" } ]
[ { "msg_contents": "Can someone comment on this? I still have the old libpq++ changes he\nsubmitted for 7.0. We put some of them in, but skipped the rest because\nit was too close to beta and there were some questions about the API\nchange.\n\n\n> Bruce Momjian wrote:\n> > \n> > I think we only did bug fixes to libpq++, like your fix for *.h. I can\n> > take your old patch and merge in the new changes that were not applied.\n> > \n> \n> That would be nice. The C++ header file with the wrappers with exception\n> handling is missing. I still think that the semantics of the old classes \n> (most of all pglobject.cc) are broken.\n> \n> The new C++ header file fixes most of the problems, except for the\n> pglobject\n> class. If Tom Lane objects to fixing pglobject.cc than that\n> is no major problem, because probably pglobject will not be used \n> (I wonder who uses it anyway) when SOAP is operational,\n> so that is not a real issue.\n> \n> So I would suggest adding least the additional header file and the new\n> documentation\n> and new example. These changes are 100% backwards compatible.\n> You and Tom should decide on replacing pglobject.* with my\n> implementation\n> or leave it ontouched (I still can't think of a reason why the current\n> problems\n> shouldn't be fixed. Another option is to have two pglargeobject\n> implementations,\n> the original and mine with different naming, but that is confusing)\n> \n> \n> For your memory: these are the problems with the current pglargeobject:\n> \n> (Quote from old mail)\n> \n> ========================================================================\n> - Also PgLargeObject::Import was broken. The PgLargeObject \n> constructor always creates a large object.\n> But Import creates a new large object, so the sequence\n> \n> PgLargeObject lo(db);\n> lo.Import(\"/tmp/file\");\n> \n> will leak large objects !\n> \n> I made Import private and added a new constructor:\n> \n> PgLargeObject(const char *filename, PgDatabase &db);\n> \n> which initializes a new large object with the contents of\n> the file. I also made Open() and Create() private...\n> =========================================================================\n> \n> Bye\n> \n> Tom\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 19 Jun 2000 10:09:31 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUGS] libpq++ update" } ]
[ { "msg_contents": "Hello.\n\n Dmitry Ishutkin <[email protected]> created a set of images:\nhttp://web.interbit.ru/dev/postgres/\n Looks good enough to be put on www.PostgreSQL.org. He can change\ncolors or something upon request.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2.1/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n\n", "msg_date": "Mon, 19 Jun 2000 15:29:27 +0000 (GMT)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "Built with PostgreSQL (images)" }, { "msg_contents": "\nlittle advertised, but check out:\n\n\thttp://www.pgsql.com/propaganda.html\n\nfor ones that use \"the Elephant\" ...\n\nOn Mon, 19 Jun 2000, Oleg Broytmann wrote:\n\n> Hello.\n> \n> Dmitry Ishutkin <[email protected]> created a set of images:\n> http://web.interbit.ru/dev/postgres/\n> Looks good enough to be put on www.PostgreSQL.org. He can change\n> colors or something upon request.\n> \n> Oleg.\n> ---- \n> Oleg Broytmann http://members.xoom.com/phd2.1/ [email protected]\n> Programmers don't die, they just GOSUB without RETURN.\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 1 Jul 2000 01:06:16 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built with PostgreSQL (images)" }, { "msg_contents": "Hi!\n\n Thanks for replying, I thought nobody cared...\n\n Looks good. Anyway I am sure Dmitry's images are good enough to be put\ninto http://www.pgsql.com/propaganda/\n\nOn Sat, 1 Jul 2000, The Hermit Hacker wrote:\n> little advertised, but check out:\n> \n> \thttp://www.pgsql.com/propaganda.html\n> \n> for ones that use \"the Elephant\" ...\n\n> On Mon, 19 Jun 2000, Oleg Broytmann wrote:\n> > Dmitry Ishutkin <[email protected]> created a set of images:\n> > http://web.interbit.ru/dev/postgres/\n> > Looks good enough to be put on www.PostgreSQL.org. He can change\n> > colors or something upon request.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2.1/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Sat, 1 Jul 2000 08:08:18 +0000 (GMT)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built with PostgreSQL (images)" }, { "msg_contents": "On Sat, 1 Jul 2000, Oleg Broytmann wrote:\n\n> Hi!\n> \n> Thanks for replying, I thought nobody cared...\n> \n> Looks good. Anyway I am sure Dmitry's images are good enough to be put\n> into http://www.pgsql.com/propaganda/\n\nhow does that 'gear' (I'm guessing its a gear?) relate to the\nproject? Most of the 'powered by' images that are out there tend to be\niconically related to the project they are boasting ... for us, that is\nthe elephant, yet we have this gear out of nowhere?\n\n > > On Sat, 1 Jul 2000, The Hermit Hacker wrote:\n> > little advertised, but check out:\n> > \n> > \thttp://www.pgsql.com/propaganda.html\n> > \n> > for ones that use \"the Elephant\" ...\n> \n> > On Mon, 19 Jun 2000, Oleg Broytmann wrote:\n> > > Dmitry Ishutkin <[email protected]> created a set of images:\n> > > http://web.interbit.ru/dev/postgres/\n> > > Looks good enough to be put on www.PostgreSQL.org. 
He can change\n> > > colors or something upon request.\n> \n> Oleg.\n> ---- \n> Oleg Broytmann http://members.xoom.com/phd2.1/ [email protected]\n> Programmers don't die, they just GOSUB without RETURN.\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 2 Jul 2000 22:43:40 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built with PostgreSQL (images)" }, { "msg_contents": "On Sun, 2 Jul 2000, The Hermit Hacker wrote:\n> how does that 'gear' (I'm guessing its a gear?) relate to the\n> project? Most of the 'powered by' images that are out there tend to be\n> iconically related to the project they are boasting ... for us, that is\n> the elephant, yet we have this gear out of nowhere?\n\n I am not insisting - just asking...\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2.1/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Mon, 3 Jul 2000 07:43:47 +0000 (GMT)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built with PostgreSQL (images)" } ]
[ { "msg_contents": "Adam Haberlach <[email protected]> writes:\n> Every time config.h is compiled, I get the following warning--is this\n> something that can/should be easily fixed, or should I figure out\n> which gcc command-line flag turns this off?\n\n> /Scratch/postgres-cvs/pgsql/src/include/config.h:411: warning: `struct in_addr' declared inside parameter list\n> /Scratch/postgres-cvs/pgsql/src/include/config.h:411: warning: its scope is only this definition or declaration,\n> /Scratch/postgres-cvs/pgsql/src/include/config.h:411: warning: which is probably not what you want.\n\nIt means you haven't imported a header that defines struct in_addr.\nIt looks like config.h is trying to do that just above the inet_aton\ndeclaration, but evidently it needs some more work on your platform...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Jun 2000 11:55:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Warning question " }, { "msg_contents": "Every time config.h is compiled, I get the following warning--is this\nsomething that can/should be easily fixed, or should I figure out\nwhich gcc command-line flag turns this off?\n\n/Scratch/postgres-cvs/pgsql/src/include/config.h:411: warning: `struct in_addr' declared inside parameter list\n/Scratch/postgres-cvs/pgsql/src/include/config.h:411: warning: its scope is only this definition or declaration,\n/Scratch/postgres-cvs/pgsql/src/include/config.h:411: warning: which is probably not what you want.\n\n\n-- \nAdam Haberlach | \"Oh my god! It's filled with\[email protected] | char *'s!\"\nhttp://www.newsnipple.com/ | \n", "msg_date": "Mon, 19 Jun 2000 09:31:43 -0700", "msg_from": "Adam Haberlach <[email protected]>", "msg_from_op": false, "msg_subject": "Warning question" } ]
[ { "msg_contents": "Can we rename the `examine_subclass' option to something else? Rationale:\n\n* \"class\" is not a term usually used around here or SQL (okay, besides\npg_class, but note \"relname\")\n\n* The naming is conceptually backwards with the SQL99 model (which, in\nabsence of convincingly better ideas is still the reference). According to\nSQL99, rows are shared between tables and subtables so you're not really\n\"examing subclasses/subtables\" or not, you are instead explicitly ignoring\nrows from subtables.\n\nMaybe something religiously neutral like traditional_inheritance = on|off\nor conversely sql_inheritance = on|off?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 19 Jun 2000 18:15:02 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "On examine_subclass" } ]
[ { "msg_contents": "Hello-\n\n\tI found the following odd behavior in v7.0.2 when issuing the\n'create group' command. Namely, I cannot create a group named 'trusted'\nunless I use 'insert into pg_group...' syntax. Any other group name works\nas expected... well, at least those that I have tried.\n\n\tCheers, Jon\n\n\tPS: I am running home-rolled v7.0.2 of Pg, on a linux v2.2.16 i686\nbox. I have also test this on a v7.0.1 installation, with the same\nresult. I cannot test this on a v6.5.x installation, b/c the 'create\ngroup' command did not exist then.\n\n------------------------------------------------------------------\ntemplate1=# create group trusted;\nERROR: parser: parse error at or near \"trusted\"\ntemplate1=# insert into pg_group (groname) values ('trusted');\nINSERT 18786 1\n\n\n-**-*-*---*-*---*-*---*-----*-*-----*---*-*---*-----*-----*-*-----*---\n Jon Lapham\n Centro Nacional de Ressonancia Magnetica Nuclear de Macromoleculas\n Universidade Federal do Rio de Janeiro (UFRJ) - Brasil\n email: [email protected] \n***-*--*----*-------*------------*--------------------*---------------\n", "msg_date": "Mon, 19 Jun 2000 14:04:38 -0300 (BRT)", "msg_from": "Jon Lapham <[email protected]>", "msg_from_op": true, "msg_subject": "CREATE GROUP oddity" }, { "msg_contents": "Jon Lapham writes:\n\n> Hello-\n> \n> \tI found the following odd behavior in v7.0.2 when issuing the\n> 'create group' command. Namely, I cannot create a group named 'trusted'\n> unless I use 'insert into pg_group...' syntax. Any other group name works\n> as expected... well, at least those that I have tried.\n\nTRUSTED is a reserved word. Try CREATE GROUP \"trusted\" (double quoted).\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 20 Jun 2000 18:43:19 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE GROUP oddity" } ]
[ { "msg_contents": ">\n\n\n\n> ------------------------------------------------------------------------\n>\n> Subject: Re: int24_ops and int42_ops are bogus\n> Date: Mon, 19 Jun 2000 00:52:28 -0400\n> From: Tom Lane <[email protected]>\n> To: [email protected]\n> References: <[email protected]>\n>\n> I wrote:\n> > I think we ought to assume that index manipulation deals with only\n> > one datatype for any given index, and therefore these two opclasses\n> > are broken by design and must be removed.\n>\n> I have removed these two opclasses from the system. I had a further\n> thought on the issue, which I just want to record in the archives\n> in case anyone ever comes back and wants to resurrect\n> int24_ops/int42_ops.\n>\n> The real design problem with these two opclasses is that if you want\n> to have an int4 column that you might want to compare against either\n> int2 or int4 constants, you have to create *two* indexes to handle\n> the two cases. The contents of the two indexes will be absolutely\n> identical, so this approach is inherently silly. The right way to\n> attack it is to extend the opclass/amop information so that the\n> system could understand that a plain-vanilla int4 index might be\n> used with int4 vs int2 operators to compare against int2 constants\n> --- or with int4 vs int8 operators to compare against int8 constants,\n> etc.\n>\n> It would not be real difficult to extend the opclass representation\n> to show these relationships, I think. The hard part is that btree\n> (and probably the other index types) is sloppy about whether it is\n> comparing index entries or externally-supplied values and which side\n> of the comparison is which. Cleaning that up would be painful and\n> maybe impractical --- but if it could be done it'd be nifty.\n>\n> The path I think we will actually pursue, instead, is teaching the\n> planner to coerce constants to the same type as the compared-to\n> column. For instance, given \"int2var < int4constant\" the planner\n> will try to coerce the constant to int2 so that it can apply\n> int2-vs-int2 operators with an int2 index. This falls down on\n> cases like \"int2var < 100000\" because it won't be possible to\n> reduce the constant to int2, whereas the above-sketched idea could\n\nBut since ALL int2var values in the table are in fact less than 100000, this expression is easily optimized to TRUE. And, I think, similar optimizations can be\nfound for other out of range values.\n\n>\n> still handle that case as an indexscan. But in terms of actual\n> everyday usefulness, I doubt this is a serious limitation.\n>\n> regards, tom lane\n>\n\n", "msg_date": "Mon, 19 Jun 2000 20:57:26 -0700", "msg_from": "Paul Condon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: int24_ops and int42_ops are bogus" } ]
[ { "msg_contents": "I'm having problems with permissions on tables with foreign keys.\nConsider the following tables:\n\ndrop sequence t1_id_seq;\ndrop table t1;\ncreate table t1\n(\n id\t\tserial,\n i\t\tint\t\tnot null,\n unique (i)\n);\n\ndrop sequence t2_id_seq;\ndrop table t2;\ncreate table t2\n(\n id\t\tserial,\n t1_id\t\tint\t\tnot null\n\t\t\t\treferences t1 (id)\n\t\t\t\ton delete no action\n\t\t\t\ton update no action,\n j\t\tint\t\tnot null,\n unique (t1_id, j)\n);\n\nThe \"on ... no action\" clauses should imply that no changes should be\nmade to table t2 if table t1 is changed. If there is a reference from\nt2 -> t1 for a row to be changed, that change to t1 is rejected.\n\nI presume only select permission on t2 is really required for the\ntrigger to determine whether there is a referencing row in t2.\nHowever, the current implementation acts as if update permission is\nrequired on t2 (which is presumably true for other \"on ...\" clauses).\n\nTwo questions:\n\n- Is there any way to alter the permissions check for these triggers\n to differentiate between situations in which select permission is\n and is not sufficient? Where would I look in the code for this\n stuff?\n\n- What user/group/whatever is used when checking these trigger\n permissions? If a delete/update to table t1 is initiated by a rule\n on some view, shouldn't the relevant user be the owner of the rule\n not the issuer of the query that initiated the rule? What part of\n the code affects this?\n\nThanks for your help.\n\nCheers,\nBrook\n", "msg_date": "Tue, 20 Jun 2000 14:01:47 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": true, "msg_subject": "table permissions and referential integrity" } ]
[ { "msg_contents": "\n> The current discussion of symlinks is focusing on using directory\n> symlinks, not file symlinks, to represent/implement tablespace layout.\n\nIf that is the only issue for the symlinks, I think it would be sufficient\nto \nput the files in the correct subdirectories. The dba can then decide\nwhether he wants to mount filsystems directly to the disired location, \nor create a symlink. I do not see an advantage in creating a symlink\nin the backend, since the dba has to create the filesystems anyway.\n\nfs: data\nfs: data/base/...../extent1\nlink: data/base/...../extent2 -> /data/extent2\n...\n\nAndreas\n", "msg_date": "Wed, 21 Jun 2000 11:49:26 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big 7.1 open items " }, { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n>> The current discussion of symlinks is focusing on using directory\n>> symlinks, not file symlinks, to represent/implement tablespace layout.\n\n> If that is the only issue for the symlinks, I think it would be sufficient\n> to \n> put the files in the correct subdirectories. The dba can then decide\n> whether he wants to mount filsystems directly to the disired location, \n> or create a symlink. I do not see an advantage in creating a symlink\n> in the backend, since the dba has to create the filesystems anyway.\n\n> fs: data\n> fs: data/base/...../extent1\n> link: data/base/...../extent2 -> /data/extent2\n\nThat (mounting a filesystem directly where the symlink would otherwise\nbe) would be OK if you were making a new filesystem that you intended to\nuse *only* as database storage, and *only* for one database ... maybe\neven just one extent subdir of one database. I'd accept it as being an\nOK answer for anyone unfortunate enough not to have symlinks, but for\nmost people symlinks would be more flexible.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Jun 2000 11:58:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Big 7.1 open items " } ]
[ { "msg_contents": "\n> > > \tCREATE LOCATION tabloc IN '/var/private/pgsql';\n> > > \tCREATE TABLE newtab ... IN tabloc;\n> >\n> > Okay, so we'd have \"table spaces\" and \"database spaces\". \n> Seems like one\n> > \"space\" ought to be enough.\n\nYes, one space should be enough.\n\n> \n> Does your \"database space\" correspond to current PostgreSQL's \n> database ?\n\nI think we should think of the \"database space\" as the default \"table space\"\nfor this database.\n\n> And is it different from SCHEMA ?\n\nPlease don't mix schema and database, they are two different issues.\nEven Oracle has a database, only in Oracle you are limited to one database\nper instance. We do not want to add this limitation to PostgreSQL.\n\nAndreas\n", "msg_date": "Wed, 21 Jun 2000 14:48:43 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big 7.1 open items" } ]
[ { "msg_contents": "\n> My opinion\n> 3) database and tablespace are relatively irrelevant.\n> I assume PostgreSQL's database would correspond \n> to the concept of SCHEMA.\n\nNo, this should definitely not be so.\n\nAndreas\n", "msg_date": "Wed, 21 Jun 2000 15:02:40 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big 7.1 open items " } ]
[ { "msg_contents": "\n> My inclindation is that tablespaces should be installation-wide, but\n> I'm not completely sold on it. In any case I could see wanting a\n> permissions mechanism that would only allow some databases to have\n> tables in a particular tablespace.\n\nI fully second that.\n\n> We do need to think more about how traditional Postgres databases\n> fit together with SCHEMA. Maybe we wouldn't even need multiple\n> databases per installation if we had SCHEMA done right.\n\nThis gives me the goose bumps. A schema is something that is below\nthe database hierarchy. It is the owner of a table. We lack the ability\nto qualify a tablemname with an owner like \"owner\".tabname .\nCan we please agree to that much ?\n\nAndreas \n", "msg_date": "Wed, 21 Jun 2000 15:14:51 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big 7.1 open items " } ]
[ { "msg_contents": "\n> > > Sure, and if the system loading it can not create the \n> required symlinks\n> > > because the directories don't exist, it can just skip the \n> symlink step.\n> > \n> > What I meant is, would you still be able to create tablespaces on\n> > systems without symlinks? That would seem to be a desirable feature.\n> \n> You could create tablespaces, but you could not point them at \n> different\n> drives. The issue is that we don't store the symlink location in the\n> database, just the tablespace name.\n\nYou could point them to another drive if your OS allows you to mount a \nfilesystem under an arbitrary name, no ?\n\nAndreas\n", "msg_date": "Wed, 21 Jun 2000 17:00:44 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big 7.1 open items" } ]
[ { "msg_contents": "\n> The symlink solution where the actual symlink location is not stored\n> in the database is certainly abstract. We store that info in the file\n> system, which is where it belongs. We only query the symlink location\n> when we need it for database location dumping.\n\nSounds good, and also if the symlink query shows a simple directory\nwe do nothing.\n\nAndreas\n", "msg_date": "Wed, 21 Jun 2000 17:13:41 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big 7.1 open items" }, { "msg_contents": "> > The symlink solution where the actual symlink location is not stored\n> > in the database is certainly abstract. We store that info in the file\n> > system, which is where it belongs. We only query the symlink location\n> > when we need it for database location dumping.\n> \n> Sounds good, and also if the symlink query shows a simple directory\n> we do nothing.\n\nYes, that is correct.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Jun 2000 11:46:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Big 7.1 open items" } ]
[ { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> On Tue, 20 Jun 2000, Bruce Momjian wrote:\n> \n> > What I was suggesting is not to catalog the symlink locations, but to\n> > use lstat when dumping, so that admins can move files around using\n> > symlinks and not have to udpate the database.\n> \n> That surely wouldn't make those happy that are calling for smgr\n> abstraction.\n\nI disagree. We can create an smgr function callable from pg_dump to\nreturn the location string for a tablespace.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Jun 2000 11:39:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Big 7.1 open items]" } ]
[ { "msg_contents": "Thanks for the quick answers on the requirement for update permission\nto go along with referential integrity. Now I understand things\nbetter. (Perhaps more info for the docs?)\n\nSo if each table access requires locks for update on multiple tables,\nis there any chance of deadlocks? Or are the multiple locks obtained\nall at once in some sort of atomic manner that eliminates the problem?\n\nThanks again for your help.\n\nCheers,\nBrook\n", "msg_date": "Wed, 21 Jun 2000 10:22:23 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: table permissions and referential integrity" } ]
[ { "msg_contents": "Is limit ignored in a cursor definition? I just was send a source that ecpg\nseemed to have problems with:\n\n...\nEXEC SQL DECLARE H CURSOR FOR\n select id,name from ff order by id asc limit 2;\nEXEC SQL OPEN H ;\n\nwhile(1){\nEXEC SQL FETCH IN H INTO :id,:name ;\nprintf(\"%d-%s\\n\",id,name.arr);\n}\n...\n\nI never before tried this and one could easily program this functionality\nwihtout limit, but I still wonder if it's correct that limit is ignored. BTW\nI only tested it with 6.5.3.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Wed, 21 Jun 2000 20:03:52 +0200", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "limit?" }, { "msg_contents": "Michael Meskes <[email protected]> writes:\n> Is limit ignored in a cursor definition?\n\nYes. Seems to me we discussed that and decided it was a feature,\nnot a bug. Check the archives if you want to know why; I don't\nrecall.\n\nIf we do believe that, though, probably DECLARE CURSOR ... LIMIT\nought to give an error, instead of silently ignoring the limit\nas it does now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Jun 2000 17:44:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit? " } ]
[ { "msg_contents": "I see the continuing discussion here about how to specify and manage tablespaces. I'd like to point out how DB2 does it \nsince their approach may be worthy of consideration. I've posted some examples and some comments before the \nexamples.\n\nNote several things about the examples below:\n\n1) A tablespace can be managed by either the database or by the operating system. DB2's terminology is DMS vs SMS \ntable spaces. \n\n2) You can specify a FILE or a DEVICE or a directory (absence of a FILE or DEVICE keyword means a directory is being \nspecified) as the place to use as a tablespace.\n I assume that the DEVICE keyword is for specifying raw devices so that the OS file system is bypassed entirely. They \ndon't support DEVICE on all operating systems for which they support DB2 btw.\n The second example below is creating a tablespace in 3 directories on 3 drives using syntax that looks like NT or OS/2 \nsyntax for the paths. \n\n3) They allow absolute or relative paths. If relative then its relative to some main database directory for that particular \ndatabase. \n\n4) The 10000 and 50000 numbers refer to a number of 4K pages. \n\n5) The EXTENTSIZE is the number of pages to write to a particular directory or file or device before switching to the next \ndir, file or device. \n They speak of directories, files and devices used in this way as containers. \n\n6) The ON NODE syntax is used in what sounds like clustered configurationsDMS. They refer to its use on MPP servers. \n\n7) DB2 has a good separation of tablespaces and tables. \n CREATE TABLE mytable IN mydatatablespace INDEX IN myindextablespace LONG IN myblobtablespace\n allows one to pt the table in one table space, the indexes for that table in another tablespace and the LONG VARCHAR, \nLOB and other blobish data in yet another tablespace. \n\n\n CREATE TABLESPACE PAYROLL\n MANAGED BY DATABASE\n USING (DEVICE'/dev/rhdisk6' 10000,\n DEVICE '/dev/rhdisk7' 10000,\n DEVICE '/dev/rhdisk8' 10000)\n OVERHEAD 24.1\n TRANSFERRATE 0.9\n\n CREATE TABLESPACE ACCOUNTING\n MANAGED BY SYSTEM\n USING ('d:\\acc_tbsp', 'e:\\acc_tbsp', 'f:\\acc_tbsp')\n EXTENTSIZE 64\n PREFETCHSIZE 32\n\n CREATE TEMPORARY TABLESPACE TEMPSPACE2\n MANAGED BY DATABASE\n USING (FILE '/tmp/tempspace2.f1' 50000,\n FILE '/tmp/tempspace2.f2' 50000)\n EXTENTSIZE 256\n\n CREATE TABLESPACE PLANS\n MANAGED BY DATABASE\n USING (DEVICE '/dev/rhdisk0' 10000, DEVICE '/dev/rn1hd01' 40000) ON NODE 1\n USING (DEVICE '/dev/rhdisk0' 10000, DEVICE '/dev/rn3hd03' 40000) ON NODE 3\n USING (DEVICE '/dev/rhdisk0' 10000, DEVICE '/dev/rn5hd05' 40000) ON NODE 5\n\n\n", "msg_date": "Wed, 21 Jun 2000 15:36:36 -0700", "msg_from": "\"Randall Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "tablespace managed by system vs managed by database" } ]
[ { "msg_contents": "Just a notice:\n\nI tried really hard but Makefile.global as we know it can't work together\nwith a fancy autoconf build system. We already know of the install-sh\nrelative path problem. The next problem is that the automatic makefile\nremaking rules (see bottom of top GNUmakefile.in) can't be put into\nMakefile.global, so it really fails to do its job that is \"include common\nstuff\". What's worse, by including another makefile, each makefile would\nreally have to worry about remaking Makefile.global as well. So as it is\nit's doing really little good. There are also several other more technical\nproblems regarding relative paths and build vs source paths getting out of\norder, etc.\n\nSo I thought I'd do the next best thing and apply the features that\nAutoconf bestowed upon us: output file concatenation. Instead of each\nMakefile including Makefile.global, each makefile is pasted together with\na global makefile of sorts when it's created by config.status. That would\nlook like this in configure.in:\n\nAC_OUTPUT(\n ...\n src/bin/psql/Makefile:config/top.mk:src/bin/psql/Makefile.in:config/bottom.mk\n ...\n)\n\n(For various reasons a \"top\" global and a \"bottom\" global work best.) This\napproach seems to solve all of the mentioned problems.\n\nIf you have no idea what I just meant, good. :) I'm currently plowing\nthrough the bin/ subtree, then you'll see. If you did, also good, feel\nfree to comment.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 22 Jun 2000 00:49:38 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Makefile.global is kind of a pain" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> ... Instead of each\n> Makefile including Makefile.global, each makefile is pasted together with\n> a global makefile of sorts when it's created by config.status.\n\nHmm. My only objection to that is that it used to be possible to fix\nsome kinds of configure botches by hand-editing Makefile.global (which\nafter all is a configure output from Makefile.global.in). But now,\nanything I don't like about what configure did is going to be physically\nreplicated in umpteen files, so if I don't understand autoconf well\nenough to make configure do exactly what I wanted, I'm pretty much up\nthe creek.\n\nWhat's so wrong with including Makefile.global? Maybe the system\nwon't know that an edit there requires a global rebuild, but I'd\nrather have to do a \"make clean\"/\"make all\" after changing\nMakefile.global than manually edit dozens upon dozens of makefiles\nto get the same result.\n\nAwhile back I was complaining that configure was dumping its results\ninto too many files already. This sounds like it will make that problem\nmany times worse.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Jun 2000 23:46:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Makefile.global is kind of a pain " }, { "msg_contents": "Tom Lane writes:\n\n> Awhile back I was complaining that configure was dumping its results\n> into too many files already.\n\nIf you're saying that configure should ideally only substitute things into\nMakefile.global and nowhere else then we're never going to have separate\nbuild directories, unless you know something that I don't. 
Every\nsubdirectory where you build anything at all needs to have a Makefile.in.\n(Hint: How else will the build tree be created? How will the build tree\nfind the source tree?) Yes, that will eventually make config.status run\nfour times longer than it does now but that's the price to pay. If we\ndon't want to do that then it'd be best that I know now.\n\n(Now that I spelled this out, I'm not sure myself whether that's worth it.\nMaybe we should forget about it and get rid of all *.in. It would\ncertainly make my job easier.)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 23 Jun 2000 00:36:03 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Makefile.global is kind of a pain " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> Awhile back I was complaining that configure was dumping its results\n>> into too many files already.\n\n> If you're saying that configure should ideally only substitute things into\n> Makefile.global and nowhere else then we're never going to have separate\n> build directories, unless you know something that I don't. Every\n> subdirectory where you build anything at all needs to have a Makefile.in.\n> (Hint: How else will the build tree be created? How will the build tree\n> find the source tree?)\n\nThe separate-build-tree projects that I've used initialize the build\ntree by doing, for each source directory containing C files (say,\nsrc/foo/bar/)\n\tmkdir obj/foo/bar\n\tln -s ../../../src/foo/bar/Makefile obj/foo/bar/Makefile\nand then VPATH is set by the Makefile to ../../../src/foo/bar\n(note this can be hard-wired in the Makefile, as long as it knows where\nit lives in the source tree) and away you go. Of course files that\nconfigure actually *needs* to make a modified copy of will go\nright into the object tree. But you don't need to copy-and-edit\nevery Makefile just to get the VPATH info right.\n\nThis assumes that each object tree is a sibling of the src tree\n(so you'd make .../pgsql/obj.linux, .../pgsql/obj.hpux, etc if you\nare building for multiple architectures). If you need to build\nsomewhere else, a symlink or two can fake it.\n\n> Yes, that will eventually make config.status run\n> four times longer than it does now but that's the price to pay. If we\n> don't want to do that then it'd be best that I know now.\n\nI'd like to have the ability to have a separate build tree, but not at\nthe price of making configure run 4x longer --- especially not if it\nruns 4x longer for everyone whether they want a separate build tree or\nnot. I think the villagers will be on your doorstep with pitchforks\nif you try to push that through ;-). 
The nice thing about the above\napproach is that if you aren't building in a separate tree, you don't\nneed to expend any work at all except on the files that really need\nto be edited.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Jun 2000 19:02:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Makefile.global is kind of a pain " }, { "msg_contents": "Tom Lane writes:\n\n> The separate-build-tree projects that I've used initialize the build\n> tree by doing, for each source directory containing C files (say,\n> src/foo/bar/)\n> \tmkdir obj/foo/bar\n> \tln -s ../../../src/foo/bar/Makefile obj/foo/bar/Makefile\n> and then VPATH is set by the Makefile to ../../../src/foo/bar\n\nI think we might be able to do better:\n\n--Makefile--\nsubdir = src/bin/psql\ninclude ../../Makefile.global\n\n--Makefile.global--\ntop_srcdir = @top_srcdir@ # provided by autoconf\nsrcdir = $(top_srcdir)/subdir\nVPATH = $(srcdir)\n...\n\n--Makefile cont.--\n# build stuff\n\nThat way you can build in any directory.\n\nWell, that makes things a lot simpler. Then we really don't need any *.in\nfiles at all except for a select few. We'd just dump all @FOO@ things into\nMakefile.global.\n\nOf course I somehow have to hack up AC_PROG_INSTALL so it doesn't give a\nrelative path to install-sh, but that can be done.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 24 Jun 2000 13:58:03 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Makefile.global is kind of a pain " } ]
[ { "msg_contents": "Lamar,\n\nSee:\n\nhttp://support.microsoft.com/support/kb/articles/Q205/5/24.ASP\n\nIMO, its a bad idea to require the use of symlinks in order to be able to put different tablespaces on different drives. For a discussion on how DB2 supports tablespaces see my message entitled:\n \"tablespace managed by system vs managed by database\"\n\nI think one of the reasons one needs a fairly complex syntax for creating table spaces is that different devices have different hardware characteristics and one might want to tell the RDBMS to treat them differently for that \nreason. You can see how DB2 allows you to do that if you read that message I posted about it. \n\nOn Wed, 21 Jun 2000 11:48:19 -0400, Lamar Owen wrote:\n\n>Does Win32 do symlinks these days? I know Win32 does envvars, and Win32\n>is currently a supported platform.\n\n\n\nLamar,\n\nSee:\n\nhttp://support.microsoft.com/support/kb/articles/Q205/5/24.ASP\n\nIMO, its a bad idea to require the use of symlinks in order to be able to put different tablespaces on different drives. For a discussion on how DB2 supports tablespaces see my message entitled:\n \"tablespace managed by system vs managed by database\"\n\nI think one of the reasons one needs a fairly complex syntax for creating table spaces is that different devices have different hardware characteristics and one might want to tell the RDBMS to treat them differently for that reason. You can see how DB2 allows you to do that if you read that message I posted about it. \n\nOn Wed, 21 Jun 2000 11:48:19 -0400, Lamar Owen wrote:\n\n>Does Win32 do symlinks these days? I know Win32 does envvars, and Win32\n>is currently a supported platform.", "msg_date": "Wed, 21 Jun 2000 15:52:56 -0700", "msg_from": "\"Randall Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Big 7.1 open items" } ]
[ { "msg_contents": "Tom,\n\nDB2 supports an ALTER TABLESPACE command that allows one to add new containers to an existing tablespace. IMO, that's far more supportive of 24x7 usage.\n\nOn Wed, 21 Jun 2000 12:10:15 -0400, Tom Lane wrote:\n\n>The right way to address this problem is to invent a \"move table to\n>new tablespace\" command. This'd be pretty trivial to implement based\n>on a file-versioning approach: the new version of the pg_class tuple\n>has a new tablespace identifier in it.\n\n\n", "msg_date": "Wed, 21 Jun 2000 15:56:51 -0700", "msg_from": "\"Randall Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Big 7.1 open items" }, { "msg_contents": "\"Randall Parker\" <[email protected]> writes:\n> DB2 supports an ALTER TABLESPACE command that allows one to add new\n> containers to an existing tablespace. IMO, that's far more supportive\n> of 24x7 usage.\n\nEr, what do they mean by \"container\", and why is it better?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Jun 2000 23:03:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big 7.1 open items " } ]
[ { "msg_contents": "> If we bit the bullet and restricted ourselves to numeric filenames then\n> the log would need just four numeric values:\n> \tdatabase OID\n> \ttablespace OID\n\nIs someone going to implement it for 7.1?\n\n> \trelation OID\n> \trelation version number\n\nI believe that we can avoid versions using WAL...\n\n> (this set of 4 values would also be an smgr file reference token).\n> 16 bytes/log entry looks much better than 64.\n> \n> At the moment I can recall the following opinions:\n> \n> Pure OID filenames: Thomas, Tom, Marc, Peter E.\n\n+ me.\n\nBut what about LOCATIONs? I object using environment and think that\nlocations\nmust be stored in pg_control..?\n\nVadim\n", "msg_date": "Wed, 21 Jun 2000 16:00:17 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big 7.1 open items " }, { "msg_contents": "> -----Original Message-----\n> From: Mikheev, Vadim [mailto:[email protected]]\n> \n> > If we bit the bullet and restricted ourselves to numeric filenames then\n> > the log would need just four numeric values:\n> > \tdatabase OID\n> > \ttablespace OID\n> \n> Is someone going to implement it for 7.1?\n> \n> > \trelation OID\n> > \trelation version number\n> \n> I believe that we can avoid versions using WAL...\n>\n\nHow to re-construct tables in place ?\nIs the following right ?\n1) save the content of current table to somewhere\n2) shrink the table and related indexes\n3) reload the saved(+some filtering) content\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Thu, 22 Jun 2000 09:16:30 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Big 7.1 open items " }, { "msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n>> relation OID\n>> relation version number\n\n> I believe that we can avoid versions using WAL...\n\nI don't think so. You're basically saying that\n\t1. create file 'new'\n\t2. delete file 'old'\n\t3. rename 'new' to 'old'\nis safe as long as you have a redo log to ensure that the rename\nhappens even if you crash between steps 2 and 3. But crash is not\nthe only hazard. What if step 3 just plain fails? Redo won't help.\n\nI'm having a hard time inventing really plausible examples, but a\nslightly implausible example is that someone chmod's the containing\ndirectory -w between steps 2 and 3. (Maybe it's not so implausible\nif you assume a crash after step 2 ... someone might have left the\ndirectory nonwritable while restoring the system.)\n\nIf we use file version numbers, then the *only* thing needed to\nmake a valid transition between one set of files and another is\na commit of the update of pg_class that shows the new version number\nin the rel's pg_class tuple. The worst that can happen to you in\na crash or other failure is that you are unable to get rid of the\nset of files that you don't want anymore. That might waste disk\nspace but it doesn't leave the database corrupted.\n\n> But what about LOCATIONs? I object using environment and think that\n> locations must be stored in pg_control..?\n\nI don't like environment variables for this either; it's just way too\neasy to start the postmaster with wrong environment. 
It still seems\nto me that relying on subdirectory symlinks is a good way to go.\npg_control is not so good --- if it gets corrupted, how do you recover?\nsymlinks can be recreated by hand if necessary, but...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Jun 2000 23:14:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big 7.1 open items " } ]
[ { "msg_contents": "> > > \trelation version number\n> > \n> > I believe that we can avoid versions using WAL...\n> >\n> \n> How to re-construct tables in place ?\n> Is the following right ?\n> 1) save the content of current table to somewhere\n> 2) shrink the table and related indexes\n> 3) reload the saved(+some filtering) content\n\nOr - create tmp file and load with new content; log \"intent to relink table\nfile\";\nrelink table file; log \"file is relinked\".\n\nVadim\n\n", "msg_date": "Wed, 21 Jun 2000 17:20:38 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big 7.1 open items " }, { "msg_contents": "> -----Original Message-----\n> From: Mikheev, Vadim [mailto:[email protected]]\n> \n> > > > \trelation version number\n> > > \n> > > I believe that we can avoid versions using WAL...\n> > >\n> > \n> > How to re-construct tables in place ?\n> > Is the following right ?\n> > 1) save the content of current table to somewhere\n> > 2) shrink the table and related indexes\n> > 3) reload the saved(+some filtering) content\n> \n> Or - create tmp file and load with new content; log \"intent to \n> relink table\n> file\";\n> relink table file; log \"file is relinked\".\n>\n\nIt seems to me that whole content of the table should be\nlogged before relinking or shrinking.\nIs my understanding right ?\n\nRegards.\n\nHiroshi Inoue\[email protected] \n", "msg_date": "Thu, 22 Jun 2000 09:45:46 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Big 7.1 open items " } ]
[ { "msg_contents": "> > Or - create tmp file and load with new content;\n> > log \"intent to relink table file\";\n> > relink table file; log \"file is relinked\".\n> \n> It seems to me that whole content of the table should be\n> logged before relinking or shrinking.\n\nWhy not just fsync tmp files?\n\nVadim\n", "msg_date": "Wed, 21 Jun 2000 18:11:03 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big 7.1 open items " }, { "msg_contents": "> -----Original Message-----\n> From: Mikheev, Vadim [mailto:[email protected]]\n> \n> > > Or - create tmp file and load with new content;\n> > > log \"intent to relink table file\";\n> > > relink table file; log \"file is relinked\".\n> > \n> > It seems to me that whole content of the table should be\n> > logged before relinking or shrinking.\n> \n> Why not just fsync tmp files?\n>\n\nProbably I've misunderstood *relink*.\nIf *relink* different from *rename* ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Thu, 22 Jun 2000 10:27:26 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Big 7.1 open items " } ]
[ { "msg_contents": "> > > > Or - create tmp file and load with new content;\n> > > > log \"intent to relink table file\";\n> > > > relink table file; log \"file is relinked\".\n> > > \n> > > It seems to me that whole content of the table should be\n> > > logged before relinking or shrinking.\n> > \n> > Why not just fsync tmp files?\n> >\n> \n> Probably I've misunderstood *relink*.\n> If *relink* different from *rename* ?\n\nI ment something like this - link(table file, tmp2 file); fsync(tmp2 file);\nunlink(table file); link(tmp file, table file); fsync(table file);\nunlink(tmp file). We can do additional logging (with log flush) of these\nsteps\nif required, postpone on-recovery redo of operations till last relink log\nrecord/\nend of log/transaction abort etc etc etc.\n\nVadim\n", "msg_date": "Wed, 21 Jun 2000 18:30:23 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big 7.1 open items " }, { "msg_contents": "> -----Original Message-----\n> From: Mikheev, Vadim [mailto:[email protected]]\n> \n> > > > > Or - create tmp file and load with new content;\n> > > > > log \"intent to relink table file\";\n> > > > > relink table file; log \"file is relinked\".\n> > > > \n> > > > It seems to me that whole content of the table should be\n> > > > logged before relinking or shrinking.\n> > > \n> > > Why not just fsync tmp files?\n> > >\n> > \n> > Probably I've misunderstood *relink*.\n> > If *relink* different from *rename* ?\n> \n> I ment something like this - link(table file, tmp2 file); \n> fsync(tmp2 file);\n> unlink(table file); link(tmp file, table file); fsync(table file);\n> unlink(tmp file).\n\nI see,old file would be rolled back from tmp2 file on abort.\nThis would work on most platforms.\nBut cygwin port has a flaw that files could not be unlinked\nif they are open. So *relink* may fail in some cases(including\nrollback cases).\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Thu, 22 Jun 2000 12:09:15 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Big 7.1 open items " }, { "msg_contents": "\n> I ment something like this - link(table file, tmp2 file); fsync(tmp2 file);\n> unlink(table file); link(tmp file, table file); fsync(table file);\n> unlink(tmp file).\n\nI don't see the purpose of the fsync() calls here: link() and unlink()\neffect file system metadata, which with normal Unix (but not Linux)\nfilesystem semantics is written synchronously.\n\nfsync() on a file forces outstanding data to disk; it doesn't effect\nthe preceding or subsequent link() and unlink() calls unless\nMcKusick's soft updates are in use.\n\nIf the intent is to make sure the files are in particular states\nbefore each of the link() and unlink() calls (i.e. 
soft updates or\nsimilar functionality are in use) then more than fsync() is required,\nsince the files can still be updated after the fsync() and before\nlink() or unlink().\n\nOn Linux I understand that a fsync() on a directory will force\nmetadata updates to that directory to be committed, but that doesn't\nseem to be what this code is trying to do either?\n\nRegards,\n\nGiles\n\n\n\n", "msg_date": "Thu, 22 Jun 2000 21:58:08 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big 7.1 open items " }, { "msg_contents": "> 1) In the 'WAL Parameters' section, paragraph 3 there is the following\n> sentence: \"After a checkpoint has been made, any log segments written\n> before the redo record may be removed/archived...\" What does the 'may'\n> refer mean? Does the database administrator need to go into the \n> directory and remove the no longer necessary log files? What does\n> archiving have to do with this? If I archived all log files, could\n> I roll forward a backup made previously? That is the only reason I can\n> think of that you would archive log files (at least that is why you\n> archive log files in Oracle).\n\nOffline log segments are removed automatically at checkpoint time.\nWAL based BAR is not implemented yet, so no archiving is done\ncurrently.\n\n> 2) The doc doesn't seem to explain how on database recovery \n> the database knows which log file to start with. I think walking\n> through an example of how after a database crash, the log file is\n> used for recovery, would be useful. At least it would make me as\n> a user of postgres feel better if I understood how crashes are\n> recovered from.\n\nAfter a checkpoint is made (log flushed) its position is saved in\nthe pg_control file. So, on recovery the backend first reads pg_control,\nthen the checkpoint record, then the redo record (its position is saved\nin the checkpoint) and begins the REDO operation. Because the entire\ncontent of a page is saved in the log on its first modification after a\ncheckpoint, pages will first be restored to a consistent state.\n\nUsing pg_control to get the checkpoint position speeds things up, but\nto handle possible pg_control corruption we obviously should\nimplement reading existing log segments (from the last one -\nnewest - to oldest) to get the last checkpoint.\n\nVadim\n", "msg_date": "Wed, 24 Jan 2001 13:25:39 -0800", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: WAL documentation" }, { "msg_contents": "Here's a patch to the wal.sgml text to take account of Vadim's\nexplanations.\n\n\n\n\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"If anyone has material possessions and sees his\n brother in need but has no pity on him, how can the\n love of God be in him?\"\n I John 3:17", "msg_date": "Wed, 24 Jan 2001 22:12:32 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL documentation " }, { "msg_contents": "Oliver Elphick writes:\n\n> Here's a patch to the wal.sgml text to take account of Vadim's\n> explanations.\n\nI checked in your documentation plus some fixes in other places. 
Does\nsomebody care to submit some new words to describe the fsync option\n(http://www.postgresql.org/devel-corner/docs/postgres/runtime-config.htm)?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 25 Jan 2001 00:27:06 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL documentation " } ]
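A minimal C sketch of the relink sequence Vadim spells out in the thread above (link the old table file to a tmp2 name, fsync, unlink the old name, link the new file under the old name, fsync, drop the tmp name), with error checking added. The function names here are illustrative only, not actual backend routines, and -- as Giles points out above -- the fsync() calls make file contents durable but do not by themselves make the link()/unlink() metadata changes durable on every filesystem.

#include <fcntl.h>
#include <unistd.h>

/* fsync a file given its path; returns 0 on success, -1 on failure */
static int
fsync_path(const char *path)
{
    int fd = open(path, O_RDWR);

    if (fd < 0)
        return -1;
    if (fsync(fd) != 0)
    {
        close(fd);
        return -1;
    }
    return close(fd);
}

/*
 * Replace "table" with the already-written "tmp" file, keeping the old
 * version reachable as "tmp2" until the swap is complete (so an abort
 * could roll back by relinking tmp2, as Hiroshi notes above).
 */
static int
relink_table_file(const char *table, const char *tmp, const char *tmp2)
{
    if (link(table, tmp2) != 0)         /* keep old version reachable */
        return -1;
    if (fsync_path(tmp2) != 0)
        return -1;
    if (unlink(table) != 0)             /* retire the old name */
        return -1;
    if (link(tmp, table) != 0)          /* new content under the old name */
        return -1;
    if (fsync_path(table) != 0)
        return -1;
    return unlink(tmp);                 /* drop the temporary name */
}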
[ { "msg_contents": "On Thu, 22 Jun 2000 11:17:14 +1000, Giles Lean wrote:\n\n>\n>> 1) Make the entire database Unicode\n>> ...\n>> It also makes sorting and indexing take more time.\n>\n>Mentioned in my other email, but what collation order were you\n>proposing to use? Binary might be OK for unique keys but that doesn't\n>help you for '<', '>' etc.\n\nTo use Unicode on a field that can have indexes defined on it does require one single big \ncollation order table that determines the relative order of all the characters in Unicode. Surely \nthere must be a standard for this that is part of the Unicode spec? Or part of ISO/IEC 10646 \nspec? \n\nOne optimization doable on this would be to allow the user to declare tothe RDBMS what \nsubset of Unicode he is going to use. So, for instance, someone who is only handling \nEuropean languages might just say he wants to use 8859-1 thru 8859-9. Or a Japanese \ncompany might throw in some more code pages but still not bring in code pages for \nlanguages for which they do not create manuals.\n\nThat would make the collation table _much_ smaller.\n\nI don't know anything about the collation order of Asian character sets. My guess though is \nthat each in toto is either greater or lesser than the various Euro pages. Though the non-\nshifted part of Shift-JIS would be equal to its ASCII equivalents.\n\n>My expectation (not the same as I'd like to see, necessarily, and not\n>that my opinion counts -- I'm not a developer) would be that each\n>database have a locale, and that this locale's collation order be used\n>for indexing, LIKE, '<', '>' etc. \n\nCharacters like '<' and '>' already have standard collation orders vis a vis the other parts of \nASCII. I doubt these things vary by locale. But maybe I'm wrong. \n\n>If you want to store data from\n>multiple human languages using a locale that has Unicode for its\n>character set would be appropriate/necessary.\n\nSo you are saying that the same characters can have a different collation order when they \nappear in different locales even if they have the same encoding in all of them?\n\nIf so, then Unicode is really not a locale. Its an encoding but it is not a locale. \n\n\n>Regards,\n>\n>Giles\n>\n\n\n\n", "msg_date": "Wed, 21 Jun 2000 18:52:34 -0700", "msg_from": "\"Randall Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Thoughts on multiple simultaneous code page support" }, { "msg_contents": "Guys, can I ask a question?\n\nWhat is \"code page\"? Is it sort of M$'s terminology?\n--\nTatsuo Ishii\n", "msg_date": "Thu, 22 Jun 2000 12:51:02 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thoughts on multiple simultaneous code page support" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> Guys, can I ask a question?\n> What is \"code page\"? Is it sort of M$'s terminology?\n\nRandall already pointed out that the term predates M$, but he didn't\nquite answer the first question. \"Code page\" means \"character set\",\nno more nor less AFAICT. For example ISO 8859-1 would be a specific\ncode page. 
The term seems to imply a set of no more than about 256\nsymbols --- I doubt anyone ever called Unicode a code page, for\ninstance ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Jun 2000 02:59:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thoughts on multiple simultaneous code page support " }, { "msg_contents": "> Randall already pointed out that the term predates M$, but he didn't\n> quite answer the first question. \"Code page\" means \"character set\",\n> no more nor less AFAICT.\n\nOh, I see.\n\nRandall, why do we need to use the term then? Shouldn't we use more\nstandard term \"character set\" or \"charset\" instead?\n--\nTatsuo Ishii\n", "msg_date": "Fri, 23 Jun 2000 00:14:43 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thoughts on multiple simultaneous code page support " } ]
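The question raised in the thread above -- what collation order such data would sort in -- is easy to see at the C level: plain byte comparison and locale-aware comparison need not even agree on which of two strings is smaller. A small standalone illustration (the locale name is just an example and may not be installed on a given system):

#include <locale.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
    const char *a = "side";
    const char *b = "s\xe9" "ance";     /* "seance" with e-acute, ISO 8859-1 */

    if (setlocale(LC_COLLATE, "fr_FR.ISO8859-1") == NULL)
        printf("example locale not available on this system\n");

    /* strcmp() orders by raw byte values; strcoll() uses LC_COLLATE rules */
    printf("strcmp : %d\n", strcmp(a, b));
    printf("strcoll: %d\n", strcoll(a, b));
    return 0;
}

Under a French ISO 8859-1 locale the two calls typically return results of opposite sign, which is exactly why a single hard-wired ordering (binary or otherwise) cannot serve every locale, and why an index built under one ordering cannot answer '<' and '>' queries under another.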
[ { "msg_contents": "Attached is a second pass at redesigning backend memory management.\nThis is basically my proposal of 29 April, updated per the subsequent\ndiscussion and a couple of other things that have occurred to me since\nthen. I'm hoping to put this on the front burner pretty soon, so if\nyou have any gripes, now is a good time...\n\n\t\t\tregards, tom lane\n\n\nProposal for memory allocation fixes, take 2\t\t21-Jun-2000\n--------------------------------------------\n\nWe know that Postgres has serious problems with memory leakage during\nlarge queries that process a lot of pass-by-reference data. There is\nno provision for recycling memory until end of query. This needs to be\nfixed, even more so with the advent of TOAST which will allow very large\nchunks of data to be passed around in the system. So, here is a proposal.\n\n\nBackground\n----------\n\nWe already do most of our memory allocation in \"memory contexts\", which\nare usually AllocSets as implemented by backend/utils/mmgr/aset.c. What\nwe need to do is create more contexts and define proper rules about when\nthey can be freed.\n\nThe basic operations on a memory context are:\n\n* create a context\n\n* allocate a chunk of memory within a context (equivalent of standard\n C library's malloc())\n\n* delete a context (including freeing all the memory allocated therein)\n\n* reset a context (free all memory allocated in the context, but not the\n context object itself)\n\nGiven a chunk of memory previously allocated from a context, one can\nfree it or reallocate it larger or smaller (corresponding to standard\nlibrary's free() and realloc() routines). These operations return memory\nto or get more memory from the same context the chunk was originally\nallocated in.\n\nAt all times there is a \"current\" context denoted by the\nCurrentMemoryContext global variable. The backend macro palloc()\nimplicitly allocates space in that context. The MemoryContextSwitchTo()\noperation selects a new current context (and returns the previous context,\nso that the caller can restore the previous context before exiting).\n\nThe main advantage of memory contexts over plain use of malloc/free is\nthat the entire contents of a memory context can be freed easily, without\nhaving to request freeing of each individual chunk within it. This is\nboth faster and more reliable than per-chunk bookkeeping. We already use\nthis fact to clean up at transaction end: by resetting all the active\ncontexts, we reclaim all memory. What we need are additional contexts\nthat can be reset or deleted at strategic times within a query, such as\nafter each tuple.\n\n\npfree/prealloc no longer depend on CurrentMemoryContext\n-------------------------------------------------------\n\nIn this proposal, pfree() and prealloc() can be applied to any chunk\nwhether it belongs to CurrentMemoryContext or not --- the chunk's owning\ncontext will be invoked to handle the operation, regardless. This is a\nchange from the old requirement that CurrentMemoryContext must be set\nto the same context the memory was allocated from before one can use\npfree() or prealloc(). 
The old coding requirement is obviously fairly\nerror-prone, and will become more so the more context-switching we do;\nso I think it's essential to use CurrentMemoryContext only for palloc.\nWe can avoid needing it for pfree/prealloc by putting restrictions on\ncontext managers as discussed below.\n\nWe could even consider getting rid of CurrentMemoryContext entirely,\ninstead requiring the target memory context for allocation to be specified\nexplicitly. But I think that would be too much notational overhead ---\nwe'd have to pass an appropriate memory context to called routines in\nmany places. For example, the copyObject routines would need to be passed\na context, as would function execution routines that return a\npass-by-reference datatype. And what of routines that temporarily\nallocate space internally, but don't return it to their caller? We\ncertainly don't want to clutter every call in the system with \"here is\na context to use for any temporary memory allocation you might want to\ndo\". So there'd still need to be a global variable specifying a suitable\ntemporary-allocation context. That might as well be CurrentMemoryContext.\n\n\nAdditions to the memory-context mechanism\n-----------------------------------------\n\nIf we are going to have more contexts, we need more mechanism for keeping\ntrack of them; else we risk leaking whole contexts under error conditions.\n\nWe can do this by creating trees of \"parent\" and \"child\" contexts. When\ncreating a memory context, the new context can be specified to be a child\nof some existing context. A context can have many children, but only one\nparent. In this way the contexts form a forest (not necessarily a single\ntree, since there could be more than one top-level context).\n\nWe then say that resetting or deleting any particular context resets or\ndeletes all its direct and indirect children as well. This feature allows\nus to manage a lot of contexts without fear that some will be leaked; we\nonly need to keep track of one top-level context that we are going to\ndelete at transaction end, and make sure that any shorter-lived contexts\nwe create are descendants of that context. Since the tree can have\nmultiple levels, we can deal easily with nested lifetimes of storage,\nsuch as per-transaction, per-statement, per-scan, per-tuple.\n\nFor convenience we will also want operations like \"reset/delete all\nchildren of a given context, but don't reset or delete that context\nitself\".\n\n\nTop-level contexts\n------------------\n\nThere will be several top-level contexts --- these contexts have no parent\nand will be referenced by global variables. At any instant the system may\ncontain many additional contexts, but all other contexts should be direct\nor indirect children of one of the top-level contexts to ensure they are\nnot leaked in event of an error. I presently envision these top-level\ncontexts:\n\nTopMemoryContext --- allocating here is essentially the same as \"malloc\",\nbecause this context will never be reset or deleted. This is for stuff\nthat should live forever, or for stuff that you know you will delete\nat the appropriate time. 
An example is fd.c's tables of open files,\nas well as the context management nodes for memory contexts themselves.\nAvoid allocating stuff here unless really necessary, and especially\navoid running with CurrentMemoryContext pointing here.\n\nPostmasterContext --- this is the postmaster's normal working context.\nAfter a backend is spawned, it can delete PostmasterContext to free its\ncopy of memory the postmaster was using that it doesn't need. (Anything\nthat has to be passed from postmaster to backends will be passed in\nTopMemoryContext. The postmaster will probably have only TopMemoryContext,\nPostmasterContext, and possibly ErrorContext --- the remaining top-level\ncontexts will be set up in each backend during startup.)\n\nCacheMemoryContext --- permanent storage for relcache, catcache, and\nrelated modules. This will never be reset or deleted, either, so it's\nnot truly necessary to distinguish it from TopMemoryContext. But it\nseems worthwhile to maintain the distinction for debugging purposes.\n(Note: CacheMemoryContext may well have child-contexts with shorter\nlifespans. For example, a child context seems like the best place to\nkeep the subsidiary storage associated with a relcache entry; that way\nwe can free rule parsetrees and so forth easily, without having to depend\non constructing a reliable version of freeObject().)\n\nQueryContext --- this is where the storage holding a received query string\nis kept, as well as storage that should live as long as the query string,\nnotably the parsetree constructed from it. This context will be reset at\nthe top of each cycle of the outer loop of PostgresMain, thereby freeing\nthe old query and parsetree. We must keep this separate from\nTopTransactionContext because a query string might need to live either a\nlonger or shorter time than a transaction, depending on whether it\ncontains begin/end commands or not. (This'll also fix the nasty bug that\n\"vacuum; anything else\" crashes if submitted as a single query string,\nbecause vacuum's xact commit frees the memory holding the parsetree...)\n\nTopTransactionContext --- this holds everything that lives until end of\ntransaction (longer than one statement within a transaction!). An example\nof what has to be here is the list of pending NOTIFY messages to be sent\nat xact commit. This context will be reset, and all its children deleted,\nat conclusion of each transaction cycle. Note: presently I envision that\nthis context will NOT be cleared immediately upon error; its contents\nwill survive anyway until the transaction block is exited by\nCOMMIT/ROLLBACK. This seems appropriate since we want to move in the\ndirection of allowing a transaction to continue processing after an error.\n\nStatementContext --- this is really a child of TopTransactionContext,\nnot a top-level context, but we'll probably store a link to it in a\nglobal variable anyway for convenience. All the memory allocated during\nplanning and execution lives here or in a child context. This context\nis deleted at statement completion, whether normal completion or error\nabort.\n\nErrorContext --- this permanent context will be switched into\nfor error recovery processing, and then reset on completion of recovery.\nWe'll arrange to have, say, 8K of memory available in it at all times.\nIn this way, we can ensure that some memory is available for error\nrecovery even if the backend has run out of memory otherwise. 
This should\nallow out-of-memory to be treated as a normal ERROR condition, not a FATAL\nerror.\n\nIf we ever implement nested transactions, there may need to be some\nadditional levels of transaction-local contexts between\nTopTransactionContext and StatementContext, but that's beyond the scope of\nthis proposal.\n\n\nTransient contexts during execution\n-----------------------------------\n\nThe planner will probably have a transient context in which it stores\npathnodes; this will allow it to release the bulk of its temporary space\nusage (which can be a lot, for large joins) at completion of planning.\nThe completed plan tree will be in StatementContext.\n\nThe executor will have contexts with lifetime similar to plan nodes\n(I'm not sure at the moment whether there's need for one such context\nper plan level, or whether a single context is sufficient). These\ncontexts will hold plan-node-local execution state and related items.\nThere will also be a context on each plan level that is reset at the start\nof each tuple processing cycle. This per-tuple context will be the normal\nCurrentMemoryContext during evaluation of expressions and so forth. By\nresetting it, we reclaim transient memory that was used during processing\nof the prior tuple. That should be enough to solve the problem of running\nout of memory on large queries. We must have a per-tuple context in each\nplan node, and we must reset it at the start of a tuple cycle rather than\nthe end, so that each plan node can use results of expression evaluation\nas part of the tuple it returns to its parent node.\n\nBy resetting the per-tuple context, we will be able to free memory after\neach tuple is processed, rather than only after the whole plan is\nprocessed. This should solve our memory leakage problems pretty well;\nyet we do not need to add very much new bookkeeping logic to do it.\nIn particular, we do *not* need to try to keep track of individual values\npalloc'd during expression evaluation.\n\nNote we assume that resetting a context is a cheap operation. This is\ntrue already, and we can make it even more true with a little bit of\ntuning in aset.c.\n\nThere will be some special cases, such as aggregate functions. nodeAgg.c\nneeds to remember the results of evaluation of aggregate transition\nfunctions from one tuple cycle to the next, so it can't just discard\nall per-tuple state in each cycle. The easiest way to handle this seems\nto be to have two per-tuple contexts in an aggregate node, and to\nping-pong between them, so that at each tuple one is the active allocation\ncontext and the other holds any results allocated by the prior cycle's\ntransition function.\n\nExecutor routines that switch the active CurrentMemoryContext may need\nto copy data into their caller's current memory context before returning.\nI think there will be relatively little need for that, because of the\nconvention of resetting the per-tuple context at the *start* of an\nexecution cycle rather than at its end. With that rule, an execution\nnode can return a tuple that is palloc'd in its per-tuple context, and\nthe tuple will remain good until the node is called for another tuple\nor told to end execution. 
This is pretty much the same state of affairs\nthat exists now, since a scan node can return a direct pointer to a tuple\nin a disk buffer that is only guaranteed to remain good that long.\n\nA more common reason for copying data will be to transfer a result from\nper-tuple context to per-run context; for example, a Unique node will\nsave the last distinct tuple value in its per-run context, requiring a\ncopy step. (Actually, Unique could use the same trick with two per-tuple\ncontexts as described above for Agg, but there will probably be other\ncases where doing an extra copy step is the right thing.)\n\nAnother interesting special case is VACUUM, which needs to allocate\nworking space that will survive its forced transaction commits, yet\nbe released on error. Currently it does that through a \"portal\",\nwhich is essentially a child context of TopMemoryContext. While that\nway still works, it's ugly since xact abort needs special processing\nto delete the portal. Better would be to use a context that's a child\nof QueryContext and hence is certain to go away as part of normal\nprocessing. (Eventually we might have an even better solution from\nnested transactions, but this'll do fine for now.)\n\n\nMechanisms to allow multiple types of contexts\n----------------------------------------------\n\nWe may want several different types of memory contexts with different\nallocation policies but similar external behavior. To handle this,\nmemory allocation functions will be accessed via function pointers,\nand we will require all context types to obey the conventions given here.\n(This is not very far different from the existing code.)\n\nA memory context will be represented by an object like\n\ntypedef struct MemoryContextData\n{\n NodeTag type; /* identifies exact kind of context */\n MemoryContextMethods methods;\n MemoryContextData *parent; /* NULL if no parent (toplevel context) */\n MemoryContextData *firstchild; /* head of linked list of children */\n MemoryContextData *nextchild; /* next child of same parent */\n char *name; /* context name (just for debugging) */\n} MemoryContextData, *MemoryContext;\n\nThis is essentially an abstract superclass, and the \"methods\" pointer is\nits virtual function table. Specific memory context types will use\nderived structs having these fields as their first fields. All the\ncontexts of a specific type will have methods pointers that point to the\nsame static table of function pointers, which will look like\n\ntypedef struct MemoryContextMethodsData\n{\n Pointer (*alloc) (MemoryContext c, Size size);\n void (*free_p) (Pointer chunk);\n Pointer (*realloc) (Pointer chunk, Size newsize);\n void (*reset) (MemoryContext c);\n void (*delete) (MemoryContext c);\n} MemoryContextMethodsData, *MemoryContextMethods;\n\nAlloc, reset, and delete requests will take a MemoryContext pointer\nas parameter, so they'll have no trouble finding the method pointer\nto call. Free and realloc are trickier. To make those work, we will\nrequire all memory context types to produce allocated chunks that\nare immediately preceded by a standard chunk header, which has the\nlayout\n\ntypedef struct StandardChunkHeader\n{\n MemoryContext mycontext; /* Link to owning context object */\n Size size; /* Allocated size of chunk */\n};\n\nIt turns out that the existing aset.c memory context type does this\nalready, and probably any other kind of context would need to have the\nsame data available to support realloc, so this is not really creating\nany additional overhead. 
(Note that if a context type needs more per-\nallocated-chunk information than this, it can make an additional\nnonstandard header that precedes the standard header. So we're not\nconstraining context-type designers very much.)\n\nGiven this, the pfree routine will look something like\n\n StandardChunkHeader * header = \n (StandardChunkHeader *) ((char *) p - sizeof(StandardChunkHeader));\n\n (*header->mycontext->free_p) (p);\n\nWe could do it as a macro, but the macro would have to evaluate its\nargument twice, which seems like a bad idea (the current pfree macro\ndoes not do that). This is already saving two levels of function call\ncompared to the existing code, so I think we're doing fine without\nsqueezing out that last little bit ...\n\n\nMore control over aset.c behavior\n---------------------------------\n\nCurrently, aset.c allocates an 8K block for the first allocation in\na context, and doubles that size for each successive block request.\nThat's good behavior for a context that might hold *lots* of data, and\nthe overhead wasn't bad when we had only a few contexts in existence.\nWith dozens if not hundreds of smaller contexts in the system, we will\nwant to be able to fine-tune things a little better. I envision the\ncreator of a context as being able to specify an initial block size and a\nmaximum block size. Selecting smaller values will prevent wastage of\nspace in contexts that aren't expected to hold very much (an example is\nthe relcache's per-relation contexts).\n\n\nOther notes\n-----------\n\nThe original version of this proposal suggested that functions returning\npass-by-reference datatypes should be required to return a value freshly\npalloc'd in their caller's memory context, never a pointer to an input\nvalue. I've abandoned that notion since it clearly is prone to error.\nIn the current proposal, it is possible to discover which context a\nchunk of memory is allocated in (by checking the required standard chunk\nheader), so nodeAgg can determine whether or not it's safe to reset\nits working context; it doesn't have to rely on the transition function\nto do what it's expecting.\n\nIt might be that the executor per-run contexts described above should\nbe tied directly to executor \"EState\" nodes, that is, one context per\nEState. I'm not real clear on the lifespan of EStates or the situations\nwhere we have just one or more than one, so I'm not sure. Comments?\n\nIt would probably be possible to adapt the existing \"portal\" memory\nmanagement mechanism to do what we need. I am instead proposing setting\nup a totally new mechanism, because the portal code strikes me as\nextremely crufty and unwieldy. It may be that we can eventually remove\nportals entirely, or perhaps reimplement them with this mechanism\nunderneath.\n", "msg_date": "Wed, 21 Jun 2000 23:32:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Memory management revisions, take 2" } ]
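The proposal above describes "reset this context and everything under it" only in prose; using the parent/firstchild/nextchild links of the MemoryContextData struct it gives, the recursion is straightforward. A sketch only -- the function names here are illustrative, not necessarily what the final code will use:

static void
MemoryContextResetChildren(MemoryContext context)
{
    MemoryContext child;

    for (child = context->firstchild; child != NULL; child = child->nextchild)
    {
        /* depth-first: clean out the grandchildren before the child itself */
        MemoryContextResetChildren(child);
        (*child->methods->reset) (child);
    }
}

static void
MemoryContextReset(MemoryContext context)
{
    MemoryContextResetChildren(context);
    (*context->methods->reset) (context);
}

Deleting a whole subtree would follow the same shape, except that each child must also be unlinked from its parent's child list before its delete method is invoked.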
[ { "msg_contents": "Tom,\n\nA \"container\" can be a file or a device or a directory. Here again are examples that I already posted in \nanother thread:\n\nIn the first example there are 3 devices specified as containers. In the second example 3 directories are \nspecified as containers (DB2 therefore makes its own file names in it - and uses OIDs to do it I think). In \nthe third example 2 files are the 2 containers. In the fourth example 6 devices on 3 nodes are the \ncontainers.\n\n CREATE TABLESPACE PAYROLL\n MANAGED BY DATABASE\n USING (DEVICE'/dev/rhdisk6' 10000,\n DEVICE '/dev/rhdisk7' 10000,\n DEVICE '/dev/rhdisk8' 10000)\n OVERHEAD 24.1\n TRANSFERRATE 0.9\n\n\n CREATE TABLESPACE ACCOUNTING\n MANAGED BY SYSTEM\n USING ('d:\\acc_tbsp', 'e:\\acc_tbsp', 'f:\\acc_tbsp')\n EXTENTSIZE 64\n PREFETCHSIZE 32\n\n\n\n CREATE TEMPORARY TABLESPACE TEMPSPACE2\n MANAGED BY DATABASE\n USING (FILE '/tmp/tempspace2.f1' 50000,\n FILE '/tmp/tempspace2.f2' 50000)\n EXTENTSIZE 256\n\n\n CREATE TABLESPACE PLANS\n MANAGED BY DATABASE\n USING (DEVICE '/dev/rhdisk0' 10000, DEVICE '/dev/rn1hd01' 40000) ON NODE 1\n USING (DEVICE '/dev/rhdisk0' 10000, DEVICE '/dev/rn3hd03' 40000) ON NODE 3\n USING (DEVICE '/dev/rhdisk0' 10000, DEVICE '/dev/rn5hd05' 40000) ON NODE 5\n\nOn Wed, 21 Jun 2000 23:03:03 -0400, Tom Lane wrote:\n\n>\"Randall Parker\" <[email protected]> writes:\n>> DB2 supports an ALTER TABLESPACE command that allows one to add new\n>> containers to an existing tablespace. IMO, that's far more supportive\n>> of 24x7 usage.\n>\n>Er, what do they mean by \"container\", and why is it better?\n>\n>\t\t\tregards, tom lane\n\n\n\n", "msg_date": "Wed, 21 Jun 2000 22:11:50 -0700", "msg_from": "\"Randall Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Big 7.1 open items" } ]
[ { "msg_contents": "Code Page is a terminology that predates MS. I certainly first learned of it from IBM documents. \n\u000b\nShift-JIS is code page 932 or 942 (942 cotains about a half dozen more characters than 932). \n\nThe original US IBM PC used Code Page 437. In Europe it used Code Page 850 which is a Latin 1 Code Page. MS invented Code Page 1252 which was their Latin 1 code page. ISO 8859-1 is just another \nLatin 1 Code Page looked at from that perspective. \n\nOn Thu, 22 Jun 2000 12:51:02 +0900, Tatsuo Ishii wrote:\n\n>Guys, can I ask a question?\n>\n>What is \"code page\"? Is it sort of M$'s terminology?\n>--\n>Tatsuo Ishii\n\n\n\n", "msg_date": "Wed, 21 Jun 2000 22:33:18 -0700", "msg_from": "\"Randall Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Thoughts on multiple simultaneous code page support" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Peter Eisentraut [mailto:[email protected]]\n> \n> > My opinion\n> > 3) database and tablespace are relatively irrelevant.\n> > I assume PostgreSQL's database would correspond \n> > to the concept of SCHEMA.\n> \n> A database corresponds to a catalog and a schema corresponds to nothing\n> yet.\n>\n\nOh I see your point. However I've thought that current PostgreSQL's\ndatabase is an imcomplete SCHEMA and still feel so in reality.\nCatalog per database has been nothing but needless for me from\nthe first.\n\nRegards. \n\nHiroshi Inoue\[email protected]\n", "msg_date": "Thu, 22 Jun 2000 18:07:18 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big 7.1 open items " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> A database corresponds to a catalog and a schema corresponds to nothing\n>> yet.\n\n> Oh I see your point. However I've thought that current PostgreSQL's\n> database is an imcomplete SCHEMA and still feel so in reality.\n> Catalog per database has been nothing but needless for me from\n> the first.\n\nIt may be needless for you, but not for everybody ;-).\n\nIn my mind the point of the \"database\" concept is to provide a domain\nwithin which custom datatypes and functions are available. Schemas\nwill control the visibility of tables, but SQL92 hasn't thought about\ncontrolling visibility of datatypes or functions. So I think we will\nstill want \"database\" = \"span of applicability of system catalogs\"\nand multiple databases allowed per installation, even though there may\nbe schemas subdividing the database(s).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Jun 2000 11:22:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big 7.1 open items " }, { "msg_contents": "Tom Lane writes:\n\n> In my mind the point of the \"database\" concept is to provide a domain\n> within which custom datatypes and functions are available.\n\nQuoth SQL99:\n\n\"A user-defined type is a schema object\"\n\n\"An SQL-invoked routine is an element of an SQL-schema\"\n\nI have yet to see anything in SQL that's a per-catalog object. Some things\nare global, like users, but everything else is per-schema.\n\nThe way I see it is that schemas are required to be a logical hierarchy,\nwhereas implementations may see catalogs as a physical division (as indeed\nthis implementation does).\n\n> So I think we will still want \"database\" = \"span of applicability of\n> system catalogs\"\n\nYes, because the system catalogs would live in a schema of their own.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 23 Jun 2000 00:36:28 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big 7.1 open items " }, { "msg_contents": "> -----Original Message-----\n> From: Peter Eisentraut\n>\n> Tom Lane writes:\n>\n> > In my mind the point of the \"database\" concept is to provide a domain\n> > within which custom datatypes and functions are available.\n>\n\nAFAIK few users understand it and many users have wondered\nwhy we couldn't issue cross \"database\" queries.\n\n> Quoth SQL99:\n>\n> \"A user-defined type is a schema object\"\n>\n> \"An SQL-invoked routine is an element of an SQL-schema\"\n>\n> I have yet to see anything in SQL that's a per-catalog object. 
Some things\n> are global, like users, but everything else is per-schema.\n>\n\nSo why is a system catalog needed per \"database\"?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Fri, 23 Jun 2000 08:35:12 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big 7.1 open items " } ]
[ { "msg_contents": "Thus spake Zeugswetter Andreas\n> > [email protected] (D'Arcy J.M. Cain) writes:\n> > > nargs = trigger->tgnargs;\n> > > if (nargs != 4)\n> > > elog(ERROR, \"make_date (%s): %d args\", relname, nargs);\n> \n> The simple answer is, that your procedure does not take four arguments.\n> The old and new tuple are passed implicitly to your procedure,\n> they don't show up in the argument list.\n\nRight. That's why the function takes void as its parameter list. (I hadn't\nshown that in my message.) The code above finds the args from the global\nenvironment.\n\nIn fact, my problem was, I think, that I needed to use the SPI_connect() and\nSPI_finish() functions which I don't think were available when I first wrote\nthe function. I added those, recompiled and all now works. It just took\nme a while to realize that the problem was in the C code and not the SQL\nstatements to use it.\n\n> This is also the reason your code works if you add four \"dummy\" string \n> arguments (you can test that by supplying random values not the column names\n> in your create trigger statement).\n\nNope. Without the fix above it didn't work no matter what I tried.\n\nThanks for everyone's help. Now I can move on to the operator defining\nproblem but that's a subject for another message.\n\nIt's funny but the two areas inthe new version that I am having trouble\nwith are the two that I originally helped document. :-)\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 22 Jun 2000 09:59:26 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Re: Changes to functions and triggers" } ]
[ { "msg_contents": "\nI am considering making some changes to pg_dump, and would appreciate any\nsuggestions/ideas from the list.\n\nThe outline is as follows:\n\n- Add a flag to modify the output of pg_dump so that it is more amenable to\nretrieving parts of the extracted data. This may involve writing the data\nto a tar file/dbm file/custom-format file, rather than just a simple SQL\nscript file (see below).\n\n- Add flags to allow selective import of information stored in the custom\ndump file: eg. load the schema (no data), load only one table, define all\nindexes or triggers for a given table etc. This would eventually allow for\noverriding of tablespace settings.\n\n- Add options to dump selected information in a readble format (ie.\nprobably SQL).\n\nThe broad approach would be modify the existing pg_dump as little as\npossible; I am inclined to write the data as SQL (as currently done), and\nappend an 'index' to the output, specifying the offset on the file that\neach piece of extractable data can be found. The 'restore' option would\njust go to the relevant section(s), and pipe the data to psql.\n\nI am also considering the possibility of writing the data to separate files\nin a tar archive, since this may be a lot cleaner in the long run, although\nthe text file with the index at the end has the advantage that the index\ncould be written as a series of SQL comments, effectively making the dump\nfile compatible with existing dump file formats.\n\nAny comments/suggestions would be appreciated.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 23 Jun 2000 02:53:34 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal: More flexible backup/restore via pg_dump" } ]
[ { "msg_contents": "> > I believe that we can avoid versions using WAL...\n> \n> I don't think so. You're basically saying that\n> \t1. create file 'new'\n> \t2. delete file 'old'\n> \t3. rename 'new' to 'old'\n> is safe as long as you have a redo log to ensure that the rename\n> happens even if you crash between steps 2 and 3. But crash is not\n> the only hazard. What if step 3 just plain fails? Redo won't help.\n\nOk, ok. Let's use *unique* file name for each table version.\nBut after thinking, seems that I agreed with Hiroshi about using\n*some unique id* for file names instead of oid+version: we could use\njust DB' OID + this unique ID in log records to find table file - just\n8 bytes.\n\nSo, add me to Hiroshi' camp... if Hiroshi is ready to implement new file\nnaming -:)\n\n> > But what about LOCATIONs? I object using environment and think that\n> > locations must be stored in pg_control..?\n> \n> I don't like environment variables for this either; it's just way too\n> easy to start the postmaster with wrong environment. It still seems\n> to me that relying on subdirectory symlinks is a good way to go.\n\nI always thought so.\n\n> pg_control is not so good --- if it gets corrupted, how do \n> you recover?\n\nImpossible to recover anyway - pg_control keeps last checkpoint pointer,\nrequired for recovery. That's why Oracle recommends (requires?) at least\ntwo copies of control file (and log too).\nBut what if log gets corrupted? Or file system (lost symlinks etc)?\nOne will have to use backup...\n\nVadim\n", "msg_date": "Thu, 22 Jun 2000 11:09:47 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big 7.1 open items " }, { "msg_contents": "> -----Original Message-----\n> From: Mikheev, Vadim [mailto:[email protected]]\n> \n> > > I believe that we can avoid versions using WAL...\n> > \n> > I don't think so. You're basically saying that\n> > \t1. create file 'new'\n> > \t2. delete file 'old'\n> > \t3. rename 'new' to 'old'\n> > is safe as long as you have a redo log to ensure that the rename\n> > happens even if you crash between steps 2 and 3. But crash is not\n> > the only hazard. What if step 3 just plain fails? Redo won't help.\n> \n> Ok, ok. Let's use *unique* file name for each table version.\n> But after thinking, seems that I agreed with Hiroshi about using\n> *some unique id* for file names instead of oid+version: we could use\n> just DB' OID + this unique ID in log records to find table file - just\n> 8 bytes.\n> \n> So, add me to Hiroshi' camp... if Hiroshi is ready to implement new file\n> naming -:)\n>\n\nI've thought e.g. newfileid() like newoid() using pg_variable.\nOther smarter ways ? \n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Fri, 23 Jun 2000 12:49:48 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Big 7.1 open items " } ]
[ { "msg_contents": "Well, to me at least the term character set does not define a mapping or encoding. It just specifies a list of characters and their numeric representations or mappings not included. \n\nTo say \"character set mapping\" or \"character set encoding\" might be more complete. Though I tend to use the term \"code page\" because that's what I've heard the most down thru the years.\n\nIf someone here wants to suggest a particular terminology to use I'd be happy to adopt it in this list. \n\nOn Fri, 23 Jun 2000 00:14:43 +0900, Tatsuo Ishii wrote:\n\n>Oh, I see.\n>\n>Randall, why do we need to use the term then? Shouldn't we use more\n>standard term \"character set\" or \"charset\" instead?\n\n\n", "msg_date": "Thu, 22 Jun 2000 11:14:23 -0700", "msg_from": "\"Randall Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Thoughts on multiple simultaneous code page support" }, { "msg_contents": "On Thu, 22 Jun 2000, Randall Parker wrote:\n\n> Well, to me at least the term character set does not define a mapping or encoding. It just specifies a list of characters and their numeric representations or mappings not included. \n> \n> To say \"character set mapping\" or \"character set encoding\" might be more complete. Though I tend to use the term \"code page\" because that's what I've heard the most down thru the years.\n> \n> If someone here wants to suggest a particular terminology to use I'd\n> be happy to adopt it in this list.\n\ncodepages are used by the samba folks also, if this helps any ... I never\nknew what it meant before, but now that I do, makes perfect sense ...\n\nits like using tuples/fields vs rows/columns :) one is right, the other\nis lazy :)\n\n\n", "msg_date": "Thu, 22 Jun 2000 15:54:17 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thoughts on multiple simultaneous code page support" }, { "msg_contents": "> > Well, to me at least the term character set does not define a mapping or encoding. It just specifies a list of characters and their numeric representations or mappings not included. \n> > \n> > To say \"character set mapping\" or \"character set encoding\" might be more complete. Though I tend to use the term \"code page\" because that's what I've heard the most down thru the years.\n\nI think the problem with \"code page\" is it only mentions about\ncharacter sets recognized by M$. For example, one of a KANJI character\nsets called \"JIS X 0212\" is in the standard ISO 2022, but not in \"code\npage.\"\n\n> > If someone here wants to suggest a particular terminology to use I'd\n> > be happy to adopt it in this list.\n\nThe term \"character set\" defined in ISO 2022 definitely does not\ndefine a mapping or encoding as Randall said. But in SQL9x, it\nincludes \"a list of characters\" (called \"repertory\") and an encoding\n(called \"form of use\"). I guess we could agree that we discuss how to\nimplement SQL9x in this list. If so, it would be more natural to use\nthe term \"character set\" as defined in SQL9x, rather than \"code page\",\nno?\n\n> codepages are used by the samba folks also, if this helps any ... I never\n> knew what it meant before, but now that I do, makes perfect sense ...\n\nThat's because samba only handles character sets defined by M$.\n--\nTatsuo Ishii\n", "msg_date": "Fri, 23 Jun 2000 07:34:46 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thoughts on multiple simultaneous code page support" } ]
[ { "msg_contents": "Can anyone explain the meaning of the following NOTICE:\n\n\tNOTICE: trying to delete portal name that does not exist.\n\nI get these regularly when using pgaccess to view views. Pgaccess\nmakes no direct use of portals (and I can't find any docs on them) so\nI'm lost. It doesn't seem to be a problem with psql on the same\nviews.\n\nDoes this indicate a bug in the backend code somewhere?\n\nThanks for your help.\n\nCheers,\nBrook\n", "msg_date": "Thu, 22 Jun 2000 12:48:33 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": true, "msg_subject": "NOTICES about portals" }, { "msg_contents": "Brook Milligan <[email protected]> writes:\n> Can anyone explain the meaning of the following NOTICE:\n> \tNOTICE: trying to delete portal name that does not exist.\n> Does this indicate a bug in the backend code somewhere?\n\nProbably. Can you exhibit a command sequence that causes it?\n(You might try turning on a higher debug level to get the backend\nto log the commands pgaccess sends...)\n\nBTW, what version are you running? I have a vague recollection\nof having fixed something in that area recently, but it might be\npost-7.0.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Jun 2000 18:39:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOTICES about portals " }, { "msg_contents": " Brook Milligan <[email protected]> writes:\n > Can anyone explain the meaning of the following NOTICE:\n > \tNOTICE: trying to delete portal name that does not exist.\n > Does this indicate a bug in the backend code somewhere?\n\n Probably. Can you exhibit a command sequence that causes it?\n (You might try turning on a higher debug level to get the backend\n to log the commands pgaccess sends...)\n\nWell, I'm working on it. I now have something in psql that emits this\nNOTICE:. Things like the following do it at the time of commiting the\ntransaction:\n\n begin;\n declare mycursor cursor for select * from some_view;\n fetch 10 in mycursor;\n end;\n\nThe catch is that it doesn't happen with all views. I'm trying to\nfigure out what types of views it does happen on, though. So far, it\nlooks like views of one table or views of a join between 2 tables are\nfine; whereas views of a join between a table and view might trigger\nthe NOTICE.\n\nUnfortunately, this is all in the context of a rather complex\ndatabase. I'm still trying to extract a sufficient and simple\nexample.\n\nHopefully, this information may trigger some ideas in the meantime.\n\n\n BTW, what version are you running? I have a vague recollection\n of having fixed something in that area recently, but it might be\n post-7.0.\n\nI'm running 7.0.\n\nThanks for the help.\n\nCheers,\nBrook\n", "msg_date": "Thu, 22 Jun 2000 17:12:23 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: NOTICES about portals" }, { "msg_contents": "Brook Milligan <[email protected]> writes:\n> Well, I'm working on it. I now have something in psql that emits this\n> NOTICE:. Things like the following do it at the time of commiting the\n> transaction:\n\n> begin;\n> declare mycursor cursor for select * from some_view;\n> fetch 10 in mycursor;\n> end;\n\n> The catch is that it doesn't happen with all views. I'm trying to\n> figure out what types of views it does happen on, though.\n\nHmm. 
I seem to recall something about a bug with cursors on views,\nbut can't find anything about it in the CVS logs for the likely files.\n\nI do see a post-7.0 bug fix for the case of redeclaring a cursor name\nwithin a transaction: if you do\n\tbegin;\n\tdeclare foo cursor for ...;\n\tdeclare foo cursor for ...;\n\t...\nthings will act very peculiar. That's probably unrelated to your\nproblem, but I wanted to raise a flag for you that the behavior\nyou're trying to chase may have more to do with a pattern of cursor\ndeclarations/uses than with the exact details of the query.\n\nAlso, if you haven't already done so, I'd recommend building the\nbackend with --enable-cassert and compiling backend/utils/mmgr/aset.c\nwith -DCLOBBER_FREED_MEMORY. If it's something like a reference to\nalready-freed memory, these will make the failure a lot more\nreproducible.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Jun 2000 19:30:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOTICES about portals " } ]
[ { "msg_contents": "http://linuxworld.com/linuxworld/lw-2000-06/lw-06-S390-3.html\n\nThis article, while on the subject of Linux for IBM System/390\nmainframes, also notes that PostgreSQL was easily brought up on that\nbox....\n\nNow _that's_ a database engine.\n\nThey noted that changes were required to config.sub and config.guess --\nI have e-mailed the author of the article to get those, assuming we\ndon't already have them, as well as timings on the compile and the\nregression tests.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 22 Jun 2000 16:17:48 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Interesting mention of PostgreSQL in news" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n\n> http://linuxworld.com/linuxworld/lw-2000-06/lw-06-S390-3.html\n> \n> This article, while on the subject of Linux for IBM System/390\n> mainframes, also notes that PostgreSQL was easily brought up on that\n> box....\n> \n> Now _that's_ a database engine.\n> \n> They noted that changes were required to config.sub and config.guess --\n\nI thought they needed some spinlock asm as well? I think that's what\nmissing from getting it to run on ia64.\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "22 Jun 2000 16:19:56 -0400", "msg_from": "[email protected] (Trond Eivind=?iso-8859-1?q?_Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Interesting mention of PostgreSQL in news" }, { "msg_contents": "Trond Eivind Glomsr�d wrote:\n> Lamar Owen <[email protected]> writes:\n> > They noted that changes were required to config.sub and config.guess --\n \n> I thought they needed some spinlock asm as well? I think that's what\n> missing from getting it to run on ia64.\n\nThe exact changes are at http://penguinvm.princeton.edu/patches , and a\nsource RPM for 6.5.3 is also available at\nhttp://linux.s390.org/download/ftp/SRPMS as postgresql-6.5.3-4.src.rpm.\nThe patches include some other stuff, too.\n\nThe patch you are referring to is a one-liner asm for the spinlock.\n\nAgain, these patches are against 6.5.3, thus far.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 22 Jun 2000 16:45:39 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Interesting mention of PostgreSQL in news" }, { "msg_contents": "> http://linuxworld.com/linuxworld/lw-2000-06/lw-06-S390-3.html\n> \n> This article, while on the subject of Linux for IBM System/390\n> mainframes, also notes that PostgreSQL was easily brought up on that\n> box....\n> \n> Now _that's_ a database engine.\n> \n> They noted that changes were required to config.sub and config.guess --\n> I have e-mailed the author of the article to get those, assuming we\n> don't already have them, as well as timings on the compile and the\n> regression tests.\n\nThe interesting part of this for me was:\n\n Very soon, important applications such as Apache, Samba, PostgreSQL,\n linuxconf, Sendmail, bind, Emacs, SSL, and SSH were up and running.\n There are now over 400 packages, both source and binaries, ready for use\n on Linux for S/390. \n\nGuess we are an important application now. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 22 Jun 2000 17:22:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Interesting mention of PostgreSQL in news" }, { "msg_contents": "On Thu, 22 Jun 2000, Bruce Momjian wrote:\n> The interesting part of this for me was:\n \n> Very soon, important applications such as Apache, Samba, PostgreSQL,\n> linuxconf, Sendmail, bind, Emacs, SSL, and SSH were up and running.\n> There are now over 400 packages, both source and binaries, ready for use\n> on Linux for S/390. \n \n> Guess we are an important application now. :-)\n\nYou just now figuring that out, Bruce? :-) No, that was the part I thought\nmost interesting -- PostgreSQL being considered not just an important\napplication, but an important application running on a linux-powered mainframe\n-- or am I the only one who gets the non-sequitur here? On a machine that runs\nVM/ESA and more than likely DB2, CICS, etc, PostgreSQL is an important\napplication. On a mainframe. An IBM mainframe.\n\nWe are mentioned in the same sentence and with the same implications as Apache,\nSendmail, Bind, and Samba -- now that's classy. \n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Thu, 22 Jun 2000 21:34:21 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Interesting mention of PostgreSQL in news" }, { "msg_contents": "Bruce Momjian wrote:\n> The interesting part of this for me was:\n \n> Very soon, important applications such as Apache, Samba, PostgreSQL,\n> linuxconf, Sendmail, bind, Emacs, SSL, and SSH were up and running.\n> There are now over 400 packages, both source and binaries, ready for use\n> on Linux for S/390.\n \n> Guess we are an important application now. :-)\n\nOn a mainframe, no less. A Linux-powered mainframe.... :-) \n\nYes, that line is why I posted the heads-up to begin with -- I thought\nthat it was interesting that PostgreSQL is mentioned in the same breath\nas Apache as an important application. Is it just me, or is that\n_classy_?\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 23 Jun 2000 10:07:50 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Interesting mention of PostgreSQL in news" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n>> Very soon, important applications such as Apache, Samba, PostgreSQL,\n>> linuxconf, Sendmail, bind, Emacs, SSL, and SSH were up and running.\n>> There are now over 400 packages, both source and binaries, ready for use\n>> on Linux for S/390.\n\n> Yes, that line is why I posted the heads-up to begin with -- I thought\n> that it was interesting that PostgreSQL is mentioned in the same breath\n> as Apache as an important application. Is it just me, or is that\n> _classy_?\n\nAnd *in front of* such unheard-of, seldom-used apps as sendmail and\nbind. Wow.\n\nThe day Postgres is considered as much a standard piece of equipment as\nsendmail, I'll say we've arrived ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Jun 2000 12:04:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Interesting mention of PostgreSQL in news " } ]
[ { "msg_contents": "Reference integrity module seems to have a serious bug (I use 7.0).\nAlthough the super user gave a SELECT permission to the simple user\non the table \"a\" and ALL permission on the table \"c\" (which references\nto the table \"a\"), the simple user will get an\n\nERROR: a: Permission denied.\n\nmessage. My definitions:\n\ntest=# create table a(b serial);\ntest=# grant select on a to simpleuser;\ntest=# create table c(d int4 not null references a(b));\ntest=# grant all on c to simpleuser;\n\nAny comments?\n\nRegards,\nZoltan\n\n", "msg_date": "Thu, 22 Jun 2000 22:54:31 +0200 (CEST)", "msg_from": "Kovacs Zoltan Sandor <[email protected]>", "msg_from_op": true, "msg_subject": "refint/acl problem" } ]
[ { "msg_contents": "Please help! Unfortunetely I gave numbers for user names (column \"usename\" in\npg_shadow). Now in 7.0 we have ALTER GROUP, but the statement\n\ntest=# ALTER GROUP anygroup ADD USER 1234;\n\nwhere 1234 can be any number, will result this error message:\n\nERROR: parser: parse error at or near \"1234\"\n\nI couldn't find any workarounds yet. No conversions solved my problem.\nAny ideas?\n\nRegards,\nZoltan\n\n", "msg_date": "Thu, 22 Jun 2000 22:55:21 +0200 (CEST)", "msg_from": "Kovacs Zoltan Sandor <[email protected]>", "msg_from_op": true, "msg_subject": "problem with ALTER GROUP" }, { "msg_contents": "Kovacs Zoltan Sandor <[email protected]> writes:\n> test=# ALTER GROUP anygroup ADD USER 1234;\n> where 1234 can be any number, will result this error message:\n> ERROR: parser: parse error at or near \"1234\"\n\nDouble quotes around the username, perhaps?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Jun 2000 18:41:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with ALTER GROUP " }, { "msg_contents": "Kovacs Zoltan Sandor writes:\n\n> test=# ALTER GROUP anygroup ADD USER 1234;\n\n> ERROR: parser: parse error at or near \"1234\"\n\nHmm, who would have thought of that? Try ... USER \"1234\" (double quotes).\nWe might be able to do better there though.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 23 Jun 2000 00:42:03 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with ALTER GROUP" }, { "msg_contents": "> > test=# ALTER GROUP anygroup ADD USER 1234;\n> \n> > ERROR: parser: parse error at or near \"1234\"\n> \n> Hmm, who would have thought of that? Try ... USER \"1234\" (double quotes).\nUnfortunetely it also doesn't work. I tried conversions as well, without\nany success.\n\nZoltan\n\n", "msg_date": "Fri, 23 Jun 2000 12:54:13 +0200 (CEST)", "msg_from": "Kovacs Zoltan Sandor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problem with ALTER GROUP" }, { "msg_contents": "> > > test=# ALTER GROUP anygroup ADD USER 1234;\n> > > ERROR: parser: parse error at or near \"1234\"\n> > Try ... USER \"1234\" (double quotes).\n> Unfortunetely it also doesn't work.\n\nlockhart=# create user \"1234\";\nCREATE USER\nlockhart=# create group test;\nCREATE GROUP\nlockhart=# ALTER GROUP test ADD USER 1234;\nERROR: parser: parse error at or near \"1234\"\nlockhart=# ALTER GROUP test ADD USER \"1234\";\nALTER GROUP\n\nWhat does \"not work\" mean? Is the result unusable?\n\n - Thomas\n", "msg_date": "Fri, 23 Jun 2000 14:10:01 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with ALTER GROUP" }, { "msg_contents": "> lockhart=# ALTER GROUP test ADD USER \"1234\";\n> ALTER GROUP\nHmmm, it works for me, too... I don't know, what I did before... Thanks!\n\nZoltan\n\n", "msg_date": "Sat, 24 Jun 2000 12:25:01 +0200 (CEST)", "msg_from": "Kovacs Zoltan Sandor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problem with ALTER GROUP" } ]
[ { "msg_contents": "inside the backend, if I have a TupleTableSlot, can I find out from what\nbasetable it is referring to or not?\n", "msg_date": "Fri, 23 Jun 2000 10:35:50 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "PGSQL internals question..." }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> inside the backend, if I have a TupleTableSlot, can I find out from what\n> basetable it is referring to or not?\n\nSince it's not necessarily referring to any table at all, the general\nanswer is \"no\". Do you have a reason to assume that the slot is\nholding a tuple directly fetched from disk, rather than constructed\non-the-fly?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Jun 2000 01:50:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGSQL internals question... " }, { "msg_contents": "Tom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > inside the backend, if I have a TupleTableSlot, can I find out from what\n> > basetable it is referring to or not?\n> \n> Since it's not necessarily referring to any table at all, the general\n> answer is \"no\". Do you have a reason to assume that the slot is\n> holding a tuple directly fetched from disk, rather than constructed\n> on-the-fly?\n\nI'm looking at implementing classoid and I was looking around inside\nExecProject, wondering if this would be a good place to create the\nmagical classoid field. If I understand ExecProject it takes some \"real\"\ntables and mangles them into a single result tuple. Do I know if it is a\ntuple direct from disk? It seemed that way, but perhaps you can tell me?\n\n\nThe other approach I'm looking at is to add a Relation field to\nTupleTableSlot which is populated inside of XXXScan or whatever, which\ncan be lifted out inside ExecProject. Do you think I'm on the right\ntrack?\n", "msg_date": "Fri, 23 Jun 2000 15:58:38 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PGSQL internals question..." }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> I'm looking at implementing classoid and I was looking around inside\n> ExecProject, wondering if this would be a good place to create the\n> magical classoid field. If I understand ExecProject it takes some \"real\"\n> tables and mangles them into a single result tuple. Do I know if it is a\n> tuple direct from disk? It seemed that way, but perhaps you can tell me?\n\nExecProject is used for any plan node that has to generate an output\ntuple that's not identical to its input ... which means almost\nanything except Sort or Unique or some such. You can't assume that\nthe input's straight off of disk, it could easily be the result of\na lower-level join.\n\n> The other approach I'm looking at is to add a Relation field to\n> TupleTableSlot which is populated inside of XXXScan or whatever, which\n> can be lifted out inside ExecProject. Do you think I'm on the right\n> track?\n\nIt's not clear to me why you think that ExecProject has anything to\ndo with the problem... I doubt that routine will change at all.\nI'd be inclined to look at the handling of \"system\" attributes such\nas OID. 
Probably you'd need to add a source-table-OID field to\nHeapTupleData, which XXXScan would need to fill in, and then there's\nhave to be code to pull it out again when the right system attribute\nnumber is referenced.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Jun 2000 02:19:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGSQL internals question... " }, { "msg_contents": "Tom Lane wrote:\n\n> It's not clear to me why you think that ExecProject has anything to\n> do with the problem... \n\nOnly that it calls things like ExecEvalExpr which evaluates different\ntypes of column expressions. I was thinking I would need a T_classoid,\nor T_magicColumn expression type evaluated there which grabs the\nclassoid from somewhere.\n\n> I doubt that routine will change at all.\n> I'd be inclined to look at the handling of \"system\" attributes such\n> as OID. \n\nExcept that oid really exists in the db right? The only thing special\nabout oid compared to any other attribute is that it isn't expanded by\n\"*\", which doesn't seem like that much difference.\n\n> Probably you'd need to add a source-table-OID field to\n> HeapTupleData, which XXXScan would need to fill in, \n\nWouldn't ExecTargetList need access to this HeapTupleData instance? Does\nit?\n\n> and then there's\n> have to be code to pull it out again when the right system attribute\n> number is referenced.\n\nWould a non-existant attribute have a system attribute number? Where do\nyou suggest this code should be that \"pulls it out\"?\n", "msg_date": "Fri, 23 Jun 2000 16:44:55 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PGSQL internals question..." }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Chris Bitmead\n> \n> Would a non-existant attribute have a system attribute number? Where do\n> you suggest this code should be that \"pulls it out\"?\n>\n\nCTID has a system attribute number SelfItemPointerAttributeNumber(-1).\nDifferent from other system attributes it corresponds to an item(t_self)\nin HeapTupleData not to an item(t_ctid) in HeapTupleHeaderData.\nPlease look at include/access/htup.h.\nProbably heap_xxxx() functions in access/heap/heapam.c would have\nto fill in the new system attribute. \n\nRegards.\n\nHiroshi Inoue\[email protected] \n", "msg_date": "Fri, 23 Jun 2000 17:35:53 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: PGSQL internals question..." } ]
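For reference, this is roughly what the attribute being discussed would buy at the SQL level, assuming the patch posted later in this archive ("CLASSOID patch") is applied and the attribute keeps the name classoid. The person/student hierarchy is invented for illustration:

-- A small inheritance hierarchy.
CREATE TABLE person (name text);
CREATE TABLE student (grade int4) INHERITS (person);
INSERT INTO person (name) VALUES ('alice');
INSERT INTO student (name, grade) VALUES ('bob', 2);

-- With the proposed system attribute, a scan over the whole hierarchy
-- could report which table each row physically came from:
SELECT classoid, name FROM person*;

-- Joining against pg_class would turn the oid into a table name:
SELECT pg_class.relname, person.name
FROM person*, pg_class
WHERE pg_class.oid = person.classoid;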
[ { "msg_contents": "Hello, newbie here. I need 64-bit sequences. I started looking at\nsequence.h, sequence.c, and some other files, and I have some questions\nbefore I start mucking up your code. :-) (I'd rather do it right and\ncontribute something to the pgsql project.)\n\nBut first, has this issue come up in the past, and was a decision made to\nkeep sequences simple as an \"int4\" type only? Please clue me in on the\nhistory. Is there any reason not to push forward with \"int8\" sequences,\nas some sort of compile-time or run-time option?\n\n\nPaul Caskey\nNew Mexico Software\[email protected]\n", "msg_date": "Thu, 22 Jun 2000 21:31:59 -0600", "msg_from": "Paul Caskey <[email protected]>", "msg_from_op": true, "msg_subject": "64-bit sequences" }, { "msg_contents": "Paul Caskey <[email protected]> writes:\n> But first, has this issue come up in the past, and was a decision made to\n> keep sequences simple as an \"int4\" type only? Please clue me in on the\n> history. Is there any reason not to push forward with \"int8\" sequences,\n> as some sort of compile-time or run-time option?\n\nMainly it's that int8 isn't supported on all our platforms. As a\ncompile-time option it might be reasonable...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Jun 2000 01:53:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit sequences " }, { "msg_contents": "Tom Lane wrote:\n> \n> Paul Caskey <[email protected]> writes:\n> > But first, has this issue come up in the past, and was a decision made to\n> > keep sequences simple as an \"int4\" type only? Please clue me in on the\n> > history. Is there any reason not to push forward with \"int8\" sequences,\n> > as some sort of compile-time or run-time option?\n> \n> Mainly it's that int8 isn't supported on all our platforms. As a\n> compile-time option it might be reasonable...\n> \n> regards, tom lane\n\nOkay, cool. Similar subject: What about making the oid 64-bit? At first\nglance, this seems easier to change than the sequence generator, since you\nguys do a good job of using sizeof() and the Oid typedef. Changing the\ntypedef to \"unsigned long long\" should cover everything...? I will test\nit.\n\nThe snag I ran into with sequence.c is a missing Int64GetDatum() macro. \nMy system is Sun Solaris 7, compiled in 32-bit mode. So I\nHAVE_LONG_LONG_INT_64 but don't HAVE_LONG_INT_64. I do have the \"int8\"\nSQL datatype and tested it.\n\nIn c.h I have these lines:\n\ntypedef signed char int8; /* == 8 bits */\ntypedef signed short int16; /* == 16 bits */\ntypedef signed int int32; /* == 32 bits */\n\nIs there some reason I'm missing the magic fourth line? It should be:\n\ntypedef signed long long int64; /* == 64 bits */\n\nIsn't it strange I have the \"int8\" SQL datatype, but not the \"int64\" C\ntypedef and the Int64GetDatum() macro? I'm not a 64-bit compiling expert,\nso go easy on me. \n\n\nPaul Caskey\nNew Mexico Software\n", "msg_date": "Fri, 23 Jun 2000 10:21:07 -0600", "msg_from": "Paul Caskey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 64-bit sequences" }, { "msg_contents": "> > Mainly it's that int8 isn't supported on all our platforms. As a\n> > compile-time option it might be reasonable...\n\nOr maybe better, as another type, say SERIAL64? That way both could be\navailable on some platforms. Also...\n\n> Similar subject: What about making the oid 64-bit? 
At first\n> glance, this seems easier to change than the sequence generator, since \n> you guys do a good job of using sizeof() and the Oid typedef. \n> Changing the typedef to \"unsigned long long\" should cover \n> everything...? I will test it.\n\nAgain, a 64bit vs 32 bit issue. We have \"pass by value\" and \"pass by\nreference\" data types, and we have conventionally made everything bigger\nthan 32bits a \"by reference\" type. Going to a 64bit OID or SERIAL type\nmay mess with that convention, but it may be good to revisit this and\nremind ourselves why we have that convention in the first place.\n\nYour other questions are related (I think) to the by-ref vs by-val\nissue.\n\n - Thomas\n", "msg_date": "Sat, 24 Jun 2000 03:22:31 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit sequences" }, { "msg_contents": "Paul Caskey <[email protected]> writes:\n> Okay, cool. Similar subject: What about making the oid 64-bit?\n\nLet's just say you'd be opening a *much* larger can of worms there,\nbecause Oid is used all over the place whereas only a few routines\nknow anything about sequences.\n\nThis is (or should be) on the TODO list but I wouldn't recommend it\nas your first backend programming project.\n\n> At first glance, this seems easier to change than the sequence\n> generator, since you guys do a good job of using sizeof() and the Oid\n> typedef.\n\nExcept for all the places that assume Oid is interchangeable with int.\nFinding them is left as an exercise for the student...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Jun 2000 00:23:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit sequences " }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Similar subject: What about making the oid 64-bit?\n\n> Again, a 64bit vs 32 bit issue. We have \"pass by value\" and \"pass by\n> reference\" data types, and we have conventionally made everything bigger\n> than 32bits a \"by reference\" type. Going to a 64bit OID or SERIAL type\n> may mess with that convention, but it may be good to revisit this and\n> remind ourselves why we have that convention in the first place.\n\nI think it would be completely impractical to convert Oid to a pass-by-\nreference type --- the palloc() overhead would be intolerable. However,\non machines where it's possible to make Datum a 64-bit integer type,\nwe could support 64-bit Oids. On those machines where pointers are 64\nbits, there wouldn't even be any performance cost because Datum has to\nbe 64 bits anyway.\n\nI have actually had something like this in the back of my mind while\nworking on the fmgr conversion. With sufficiently disciplined use of\nDatumGetFoo and FooGetDatum macros everywhere, it would become fairly\ntransparent whether Datum is 32 or 64 bits and whether 64-bit types\nare pass-by-value or pass-by-reference. Eventually I'd like to see\ncompile-time options for the size of Oid and for whether int8,\nfloat4, and float8 are pass-by-val or -by-ref. There's still a lot\nof tedious code-cleanup gruntwork to be done before that can happen,\nthough.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Jun 2000 00:53:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit sequences " }, { "msg_contents": "> Paul Caskey <[email protected]> writes:\n> > Okay, cool. 
Similar subject: What about making the oid 64-bit?\n> \n> Let's just say you'd be opening a *much* larger can of worms there,\n> because Oid is used all over the place whereas only a few routines\n> know anything about sequences.\n> \n> This is (or should be) on the TODO list but I wouldn't recommend it\n> as your first backend programming project.\n> \n> > At first glance, this seems easier to change than the sequence\n> > generator, since you guys do a good job of using sizeof() and the Oid\n> > typedef.\n> \n> Except for all the places that assume Oid is interchangeable with int.\n> Finding them is left as an exercise for the student...\n\nIt would be nice to get OID to act as an unsigned 'int' in all places.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 24 Jun 2000 12:25:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit sequences" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Except for all the places that assume Oid is interchangeable with int.\n>> Finding them is left as an exercise for the student...\n\n> It would be nice to get OID to act as an unsigned 'int' in all places.\n\nActually, I'd like to get rid of the assumption that it has anything\nto do with int. Signed or not is the least of my worries --- I'd like\nto be able to equate Oid to long long, for example.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Jun 2000 21:59:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit sequences " }, { "msg_contents": "Tom Lane wrote:\n> \n> Paul Caskey <[email protected]> writes:\n> > Okay, cool. Similar subject: What about making the oid 64-bit?\n> \n> Let's just say you'd be opening a *much* larger can of worms there,\n> because Oid is used all over the place whereas only a few routines\n> know anything about sequences.\n\nAha! Thanks for the warning.\n\n> This is (or should be) on the TODO list but I wouldn't recommend it\n> as your first backend programming project.\n> \n> > At first glance, this seems easier to change than the sequence\n> > generator, since you guys do a good job of using sizeof() and the Oid\n> > typedef.\n> \n> Except for all the places that assume Oid is interchangeable with int.\n> Finding them is left as an exercise for the student...\n\nAgain, thanks for the warning. :-)\n\nThanks for the comments on this thread. It sounds too tricky for me to\nattempt any type of 64-bit sequence at this time, especially on a 32-bit\nplatform. For now, I've made my critical \"id\" variables type INT8, but\nstill use an INT4 sequence to increment them. If/when an INT8 sequence\nbecomes available in the future, I will drop one in. Otherwise if I start\ngetting close to the 2GB limit, I'll find a workaround to reuse holes in\nthe sequence.\n\nThere doesn't seem to be any problem using pgsql's int4-->int8 automatic\nconversion in this way. 
Hopefully I can also join on int4/int8 values\nwithout any snags or big performance problems, although I haven't tested\nthat.\n\n-- \nPaul Caskey\t\[email protected]\t\tSoftware Engineer\nNew Mexico Software\t5041 Indian School NE\tAlbuquerque, NM 87110\n--\n", "msg_date": "Mon, 03 Jul 2000 17:11:43 -0600", "msg_from": "Paul Caskey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 64-bit sequences" }, { "msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> >> Except for all the places that assume Oid is interchangeable with int.\n> >> Finding them is left as an exercise for the student...\n> \n> > It would be nice to get OID to act as an unsigned 'int' in all places.\n> \n> Actually, I'd like to get rid of the assumption that it has anything\n> to do with int. Signed or not is the least of my worries --- I'd like\n> to be able to equate Oid to long long, for example.\n\nSo would I!\n\n> \n> regards, tom lane\n\nBTW, the 32-bit oid implies a hard limit of 2 billion records on a single\nserver, right? What if that number hits ~2 billion over the course of\nmany inserts and deletes? Does it intelligently wrap back to 1 and find\nholes in the sequence to re-use? Otherwise it's more than a limit on\nrecords; it's a limit on inserts. \n\n\n-- \nPaul Caskey\t\[email protected]\t\tSoftware Engineer\nNew Mexico Software\t5041 Indian School NE\tAlbuquerque, NM 87110\n--\n", "msg_date": "Wed, 05 Jul 2000 15:35:09 -0600", "msg_from": "Paul Caskey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 64-bit sequences" } ]
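A sketch of the interim arrangement described in the last message above: the key column is declared int8 so values can eventually exceed 32 bits, while an ordinary (int4) sequence still generates them and relies on the implicit int4-to-int8 conversion. Table and sequence names are invented:

CREATE SEQUENCE widget_id_seq;

CREATE TABLE widget (
    id   int8 DEFAULT nextval('widget_id_seq'),  -- int4 result, widened to int8
    name text
);

INSERT INTO widget (name) VALUES ('first');
SELECT id, name FROM widget;

-- The column can later hold values past 2^31 - 1, but nextval() itself
-- still stops there, so an int8 sequence (or reuse of holes) is needed
-- before that point is reached.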
[ { "msg_contents": "I'm making consistent accessor functions to all of the special file names\nused in the backend (e.g., \"pg_hba.conf\", \"pg_control\", etc.) and I got to\nthe pid file stuff. I'm wondering why you call the SetPidFile and\nSetOptsFile functions twice, once in pmdaemonize() and once in the\nnon-detach case. It seems to me that you would get the same thing if you\njust did:\n\nif (silentflag)\n pmdaemonize(); /* old version */\n\nSetPidFile(...);\non_proc_exit(UnlinkPidFile, NULL);\nSetOptsFile(...);\n\nIs there anything special you wanted to achieve with this?\n\nFurthermore, with the new run-time configuration system there will be a\nfairly volatile set of possible options to the postmaster (and perhaps\nmore importantly, not all options are necessarily from the command line),\nso the SetOptsFile function will need some rework. I think instead of\nteaching SetOptsFile about each option that the postmaster might accept we\ncould just do\n\nfor (i=1; i<argc; i++)\n{\n fprintf(opts_file, \"'%s' \", argv[i]);\n}\n\nThe result wouldn't look as pretty as it does now but at least it would\nalways be correct. Comments?\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 23 Jun 2000 18:20:09 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "About the pid and opts files" }, { "msg_contents": "> I'm making consistent accessor functions to all of the special file names\n> used in the backend (e.g., \"pg_hba.conf\", \"pg_control\", etc.) and I got to\n> the pid file stuff. I'm wondering why you call the SetPidFile and\n> SetOptsFile functions twice, once in pmdaemonize() and once in the\n> non-detach case. It seems to me that you would get the same thing if you\n> just did:\n> \n> if (silentflag)\n> pmdaemonize(); /* old version */\n> \n> SetPidFile(...);\n> on_proc_exit(UnlinkPidFile, NULL);\n> SetOptsFile(...);\n> \n> Is there anything special you wanted to achieve with this?\n\nBecasue errors on creating the pid file and the opts file are\ncritical, I wanted to print error messages to stdout/stderr. After\ndetaching ttys, it would be impossible.\n\n> Furthermore, with the new run-time configuration system there will be a\n> fairly volatile set of possible options to the postmaster (and perhaps\n> more importantly, not all options are necessarily from the command line),\n> so the SetOptsFile function will need some rework. I think instead of\n> teaching SetOptsFile about each option that the postmaster might accept we\n> could just do\n> \n> for (i=1; i<argc; i++)\n> {\n> fprintf(opts_file, \"'%s' \", argv[i]);\n> }\n> \n> The result wouldn't look as pretty as it does now but at least it would\n> always be correct. Comments?\n\nYes, the new run-time configuration system should simplify\nSetOptsFile. But before proceeding further, I would like to learn more\nabout it. i.e. what kind of application interfaces are provided? Do\nshell scripts such as pg_ctl can use it? Is there any documentation?\n--\nTatsuo Ishii\n\n", "msg_date": "Sat, 24 Jun 2000 12:22:10 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About the pid and opts files" }, { "msg_contents": "Tatsuo Ishii writes:\n\n> Yes, the new run-time configuration system should simplify\n> SetOptsFile. But before proceeding further, I would like to learn more\n> about it. i.e. what kind of application interfaces are provided? 
Do\n> shell scripts such as pg_ctl can use it? Is there any documentation?\n\nhttp://www.postgresql.org/docs/postgres/runtime-config.htm\n\nThe main difference is that formerly you could assume that if port = 6543\nthen the user necessarily gave the -p option (which isn't quite true if he\nused the environment variable, but anyway). Now the user could have put\nport = 6543 in the configuration file (postgresql.conf) and maybe the\nreason he restarted the server was because he changed the port number\nthere. So reusing postmaster.opts blindly would be fatal. The solution is\nas I illustrated to only write actual argv arguments to the file.\n\nI have most of the coding done, btw. and it works well.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 24 Jun 2000 14:10:22 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: About the pid and opts files" }, { "msg_contents": "I wrote:\n\n> The main difference is that formerly you could assume that if port = 6543\n> then the user necessarily gave the -p option (which isn't quite true if he\n> used the environment variable, but anyway).\n\nIt occurred to me, we really do need to save the environment of the\npostmaster. Consider such variables as LANG, LC_*, PATH (to find\nexecutable), PGPORT, PGDATESTYLE, TZ. I think the safest thing is to just\nsave them all, unless you want to select the ones that matter.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 25 Jun 2000 02:59:51 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: About the pid and opts files" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> It occurred to me, we really do need to save the environment of the\n> postmaster. Consider such variables as LANG, LC_*, PATH (to find\n> executable), PGPORT, PGDATESTYLE, TZ.\n\nParticularly the locale-related env vars. I think it is a serious\nbug in the current system that it is possible to start the postmaster\nwith locale vars different from the values in effect at initdb time.\nYou can corrupt your text-column indices completely that way, because\nthe sort ordering an index depends on can change from run to run.\n\n(We've only seen one or two bug reports we could trace to this, AFAIR,\nbut I'm surprised we don't see a steady stream of 'em. It's just too\neasy to screw up if you sometimes start your postmaster from an\ninteractive environment and sometimes from system boot scripts.)\n\nAn opts file is not a reliable solution --- initdb ought to be recording\nall the locale-relevant variables in pg_control, or some such NON user\neditable place, and postmaster or backend startup ought to force those\nvalues to be re-adopted.\n\nPGDATESTYLE/TZ are not dangerous as far as I know; it should be allowed\nfor the user to change these.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Jun 2000 22:27:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About the pid and opts files " }, { "msg_contents": "> Particularly the locale-related env vars. 
I think it is a serious\n> bug in the current system that it is possible to start the postmaster\n> with locale vars different from the values in effect at initdb time.\n> You can corrupt your text-column indices completely that way, because\n> the sort ordering an index depends on can change from run to run.\n\nOur upcoming work on character sets should include this area as an\nissue. We haven't yet discussed how and where \"locale\" is used, but its\neffects may be more isolated once we allow multiple character sets in an\ninstallation.\n\n - Thomas\n", "msg_date": "Sun, 25 Jun 2000 04:31:45 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About the pid and opts files" }, { "msg_contents": "Tom Lane writes:\n\n> PGDATESTYLE/TZ are not dangerous as far as I know; it should be\n> allowed for the user to change these.\n\nBut in the context of pg_ctl we should certainly ensure that the old value\ngets carried over unless overridden. We could make the postmaster.opts\nfile look like this:\n\nPGDATESTYLE=${PGDATESTYLE-oldvalue} TZ=${TZ-oldvalue} ... postmaster ...\n\nBut I'm not sure if that is safe enough. If you want to change the\nenvironment you can either edit postmaster.opts or do an explicit\nstop/start rather than restart.\n\nFailure scenario: Normally, TZ is unset. I log in remotely from a\ndifferent time zone to administer a database server so I have TZ set to\noverride the system's default time zone. I `su postgres', do something,\npg_ctl restart. All the sudden the server operates with a different\ndefault time zone. This is not an unrealistic situation, I have done this\nmany times. If pg_ctl wants to automate things then it shouldn't be\nsubject to these sort of failures if possible.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 26 Jun 2000 03:41:41 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: About the pid and opts files " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Failure scenario: Normally, TZ is unset. I log in remotely from a\n> different time zone to administer a database server so I have TZ set to\n> override the system's default time zone. I `su postgres', do something,\n> pg_ctl restart. All the sudden the server operates with a different\n> default time zone. This is not an unrealistic situation, I have done this\n> many times.\n\nRight --- it should be *possible* to change these vars, but it should\ntake some explicit action. Having a different value in your environment\nat postmaster start time is probably not enough of an explicit action.\n\nThis whole thread makes me more and more uncomfortable about the fact\nthat the postmaster/backend pay attention to environment variables at\nall. An explicit configuration file would seem a better answer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Jun 2000 23:07:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About the pid and opts files " }, { "msg_contents": "Tom Lane wrote:\n\n> Right --- it should be *possible* to change these vars, but it should\n> take some explicit action. 
Having a different value in your environment\n> at postmaster start time is probably not enough of an explicit action.\n> \n> This whole thread makes me more and more uncomfortable about the fact\n> that the postmaster/backend pay attention to environment variables at\n> all. An explicit configuration file would seem a better answer.\n\nWhy a configuration file? Why not a configuration table?\n", "msg_date": "Mon, 26 Jun 2000 13:47:23 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About the pid and opts files" }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> Tom Lane wrote:\n>> Right --- it should be *possible* to change these vars, but it should\n>> take some explicit action. Having a different value in your environment\n>> at postmaster start time is probably not enough of an explicit action.\n>> \n>> This whole thread makes me more and more uncomfortable about the fact\n>> that the postmaster/backend pay attention to environment variables at\n>> all. An explicit configuration file would seem a better answer.\n\n> Why a configuration file? Why not a configuration table?\n\nCircularity. A lot of this stuff has to be known before we dare touch\nthe database at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Jun 2000 23:59:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About the pid and opts files " }, { "msg_contents": "Tom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > Tom Lane wrote:\n> >> Right --- it should be *possible* to change these vars, but it should\n> >> take some explicit action. Having a different value in your environment\n> >> at postmaster start time is probably not enough of an explicit action.\n> >>\n> >> This whole thread makes me more and more uncomfortable about the fact\n> >> that the postmaster/backend pay attention to environment variables at\n> >> all. An explicit configuration file would seem a better answer.\n> \n> > Why a configuration file? Why not a configuration table?\n> \n> Circularity. A lot of this stuff has to be known before we dare touch\n> the database at all.\n\nAren't there other things like pg_database that survive this problem?\n", "msg_date": "Mon, 26 Jun 2000 14:02:40 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About the pid and opts files" }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n>>>> Why a configuration file? Why not a configuration table?\n>> \n>> Circularity. A lot of this stuff has to be known before we dare touch\n>> the database at all.\n\n> Aren't there other things like pg_database that survive this problem?\n\nEr, have you dug into the code that does the initial access of\npg_database? I want to get rid of that kluge, not add more...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Jun 2000 00:16:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About the pid and opts files " }, { "msg_contents": "Then <[email protected]> spoke up and said:\n> Chris Bitmead <[email protected]> writes:\n> > Aren't there other things like pg_database that survive this problem?\n> Er, have you dug into the code that does the initial access of\n> pg_database? 
I want to get rid of that kluge, not add more...\n\nThis is why Ingres and Oracle (and probably others) use a text\nparameter file to start up their databases.\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================", "msg_date": "Mon, 26 Jun 2000 08:44:10 -0400 (EDT)", "msg_from": "Brian E Gallew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About the pid and opts files " } ]
[ { "msg_contents": "I have a query crashing the backend and leaving this message in the\nserver log...\n\nWhat does exit status 139 mean? Better still would be a pointer to some\ndocumentation for error codes if such beast exists...\n\nRegards,\nEd Loehr\n", "msg_date": "Fri, 23 Jun 2000 13:16:13 -0500", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": true, "msg_subject": "Server process exited with status 139 (meaning?)" }, { "msg_contents": "Ed Loehr writes:\n\n> I have a query crashing the backend and leaving this message in the\n> server log...\n> What does exit status 139 mean?\n\nThe backend terminated because of a segmentation fault (note 139 = 128 +\n11, 11 = SIGSEGV). So it's definitely a bug and we'd need to see the\nquery.\n\n> Better still would be a pointer to some documentation for error codes\n> if such beast exists...\n\nSee wait(2). Of course we could also make the error message more\nexpressive.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 25 Jun 2000 03:01:18 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Server process exited with status 139 (meaning?)" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Ed Loehr writes:\n> \n> > I have a query crashing the backend and leaving this message in the\n> > server log...\n> > What does exit status 139 mean?\n> \n> The backend terminated because of a segmentation fault (note 139 = 128 +\n> 11, 11 = SIGSEGV). So it's definitely a bug and we'd need to see the\n> query.\n\nI don't need help on this as I found workable queries for my purposes,\nbut here is a simplified core-dumper (7.0beta3) for posterity...\n\nRegards,\nEd Loehr\n\nDROP TABLE foo;\nCREATE TABLE foo (d date);\nCREATE UNIQUE INDEX date_uidx ON foo(d);\nCREATE UNIQUE INDEX datetime_uidx ON foo(datetime(d));\nINSERT INTO foo (d) VALUES ('17-Jun-1995');\nINSERT INTO foo (d) VALUES ('18-Jun-1995');\nINSERT INTO foo (d) VALUES ('19-Jun-1995');\nDROP TABLE bar;\nDROP SEQUENCE bar_id_seq;\nCREATE TABLE bar (\n id SERIAL, \n start_time DATETIME,\n duration FLOAT\n);\nINSERT INTO bar (start_time, duration) VALUES ('17-Jun-1995', 3);\nINSERT INTO bar (start_time, duration) VALUES ('18-Jun-1995', 3);\nINSERT INTO bar (start_time, duration) VALUES ('19-Jun-1995', 3);\nDROP TABLE baz;\nDROP SEQUENCE baz_id_seq;\nCREATE TABLE baz (\n id SERIAL, \n bar_id DATETIME,\n duration FLOAT\n);\nINSERT INTO baz (bar_id, duration) SELECT id, duration FROM bar;\n \nSELECT f.d, r.start_time::date, r.duration AS \"r_dur\",\n z.duration AS \"z_dur\", f.d,\n (r.start_time - '1 day'::interval)::date AS \"leave\",\n (r.start_time + (z.duration||' days')::interval)::date AS \"return\"\nFROM foo f, activity r, activity_hr_need z\nWHERE r.id = 2\n AND z.activity_id = 2\n AND (f.d = (r.start_time - '1 day'::interval)::date \n OR f.d = (r.start_time + (z.duration||' days')::interval));\n", "msg_date": "Sun, 25 Jun 2000 23:10:16 -0500", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Server process exited with status 139 (meaning?)" }, { "msg_contents": "Ed Loehr <[email protected]> writes:\n> I don't need help on this as I found workable queries for my purposes,\n> but here is a simplified core-dumper (7.0beta3) for posterity...\n\nThis doesn't come close to doing anything as-is, but even reading\nbetween the lines (\"activity\"=>\"bar\" etc) and deleting references\nto missing fields, I can't get a 
crash. Possibly a bug fixed since\nbeta3?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Jun 2000 00:35:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Server process exited with status 139 (meaning?) " }, { "msg_contents": "Ed Loehr wrote:\n> \n> Peter Eisentraut wrote:\n> >\n> > Ed Loehr writes:\n> >\n> > > I have a query crashing the backend and leaving this message in the\n> > > server log...\n> > > What does exit status 139 mean?\n> >\n> > The backend terminated because of a segmentation fault (note 139 = 128 +\n> > 11, 11 = SIGSEGV). So it's definitely a bug and we'd need to see the\n> > query.\n> \n> I don't need help on this as I found workable queries for my purposes,\n> but here is a simplified core-dumper (7.0beta3) for posterity...\n\nOops. A few typos in my last post. Correction below (still\nsegfaulting):\n\nDROP TABLE foo;\nCREATE TABLE foo (d date);\nCREATE UNIQUE INDEX date_uidx ON foo(d);\nCREATE UNIQUE INDEX datetime_uidx ON foo(datetime(d));\nINSERT INTO foo (d) VALUES ('17-Jun-1995');\nINSERT INTO foo (d) VALUES ('18-Jun-1995');\nINSERT INTO foo (d) VALUES ('19-Jun-1995');\n\nDROP TABLE bar;\nDROP SEQUENCE bar_id_seq;\nCREATE TABLE bar (\n id SERIAL, \n start_time DATETIME,\n duration FLOAT\n);\nINSERT INTO bar (start_time, duration) VALUES ('17-Jun-1995', 3);\nINSERT INTO bar (start_time, duration) VALUES ('18-Jun-1995', 3);\nINSERT INTO bar (start_time, duration) VALUES ('19-Jun-1995', 3);\n\nDROP TABLE baz;\nDROP SEQUENCE baz_id_seq;\nCREATE TABLE baz (\n id SERIAL, \n bar_id DATETIME,\n duration FLOAT\n);\nINSERT INTO baz (bar_id, duration) SELECT id, duration FROM bar;\n \n-- Here's the offending query...\nSELECT f.d, r.start_time::date, r.duration AS \"r_dur\",\n z.duration AS \"z_dur\", f.d,\n (r.start_time - '1 day'::interval)::date AS \"leave\",\n (r.start_time + (z.duration||' days')::interval)::date AS \"return\"\nFROM foo f, bar r, baz z\nWHERE r.id = 2\n AND z.bar_id = 2\n AND (f.d = (r.start_time - '1 day'::interval)::date \n OR f.d = (r.start_time + (z.duration||' days')::interval));\n", "msg_date": "Sun, 25 Jun 2000 23:56:24 -0500", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Server process exited with status 139 (meaning?)" }, { "msg_contents": "Ed Loehr wrote:\n> \n> > > > I have a query crashing the backend and leaving this message in the\n> > > > server log...\n> > > > What does exit status 139 mean?\n> > >\n> > > The backend terminated because of a segmentation fault (note 139 = 128 +\n> > > 11, 11 = SIGSEGV). So it's definitely a bug and we'd need to see the\n> > > query.\n> >\n> > I don't need help on this as I found workable queries for my purposes,\n> > but here is a simplified core-dumper (7.0beta3) for posterity...\n> \n> Oops. A few typos in my last post. 
Correction below (still\n> segfaulting):\n> \n> DROP TABLE foo;\n> CREATE TABLE foo (d date);\n> CREATE UNIQUE INDEX date_uidx ON foo(d);\n> CREATE UNIQUE INDEX datetime_uidx ON foo(datetime(d));\n> INSERT INTO foo (d) VALUES ('17-Jun-1995');\n> INSERT INTO foo (d) VALUES ('18-Jun-1995');\n> INSERT INTO foo (d) VALUES ('19-Jun-1995');\n> \n> DROP TABLE bar;\n> DROP SEQUENCE bar_id_seq;\n> CREATE TABLE bar (\n> id SERIAL,\n> start_time DATETIME,\n> duration FLOAT\n> );\n> INSERT INTO bar (start_time, duration) VALUES ('17-Jun-1995', 3);\n> INSERT INTO bar (start_time, duration) VALUES ('18-Jun-1995', 3);\n> INSERT INTO bar (start_time, duration) VALUES ('19-Jun-1995', 3);\n> \n> DROP TABLE baz;\n> DROP SEQUENCE baz_id_seq;\n> CREATE TABLE baz (\n> id SERIAL,\n> bar_id DATETIME,\n ^^^^^^^^^\n\nOne more typo: 'bar_id' should be of type INTEGER (and the crash\nremains).\n\nRegards,\nEd Loehr\n\n> duration FLOAT\n> );\n> INSERT INTO baz (bar_id, duration) SELECT id, duration FROM bar;\n> \n> -- Here's the offending query...\n> SELECT f.d, r.start_time::date, r.duration AS \"r_dur\",\n> z.duration AS \"z_dur\", f.d,\n> (r.start_time - '1 day'::interval)::date AS \"leave\",\n> (r.start_time + (z.duration||' days')::interval)::date AS \"return\"\n> FROM foo f, bar r, baz z\n> WHERE r.id = 2\n> AND z.bar_id = 2\n> AND (f.d = (r.start_time - '1 day'::interval)::date\n> OR f.d = (r.start_time + (z.duration||' days')::interval));\n", "msg_date": "Mon, 26 Jun 2000 00:07:30 -0500", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Server process exited with status 139 (meaning?)" }, { "msg_contents": "Tom Lane wrote:\n> \n> Ed Loehr <[email protected]> writes:\n> > I don't need help on this as I found workable queries for my purposes,\n> > but here is a simplified core-dumper (7.0beta3) for posterity...\n> \n> This doesn't come close to doing anything as-is, but even reading\n> between the lines (\"activity\"=>\"bar\" etc) and deleting references\n> to missing fields, I can't get a crash. Possibly a bug fixed since\n> beta3?\n\nI'll assume you tried my latest typo-free version on an HP box after\nposting this and got the same results (i.e., nothing). Maybe someone\nelse would care to verify on a Linux box? I'm on RH 6.2 with dual\nPIII's...\n\nRegards,\nEd Loehr\n", "msg_date": "Mon, 26 Jun 2000 00:25:10 -0500", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Server process exited with status 139 (meaning?)" }, { "msg_contents": "> I'll assume you tried my latest typo-free version on an HP box after\n> posting this and got the same results (i.e., nothing). Maybe someone\n> else would care to verify on a Linux box? I'm on RH 6.2 with dual\n> PIII's...\n\nI see the latest being formally rejected down in the Executor. So\nsomething is better than in the beta, but the plan being generated is\nnot quite right apparently.\n\n - Thomas\n", "msg_date": "Mon, 26 Jun 2000 05:30:40 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Server process exited with status 139 (meaning?)" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I see the latest being formally rejected down in the Executor. So\n> something is better than in the beta, but the plan being generated is\n> not quite right apparently.\n\nNo sign of a problem here (using current sources). 
Exactly which of\nEd's versions did you see a problem with, and what did you see exactly?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Jun 2000 02:51:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Server process exited with status 139 (meaning?) " }, { "msg_contents": "Ed Loehr wrote:\n> > > I don't need help on this as I found workable queries for my purposes,\n> > > but here is a simplified core-dumper (7.0beta3) for posterity...\n> >\n\ntest=# -- Here's the offending query...\ntest=# SELECT f.d, r.start_time::date, r.duration AS \"r_dur\",\ntest-# z.duration AS \"z_dur\", f.d,\ntest-# (r.start_time - '1 day'::interval)::date AS \"leave\",\ntest-# (r.start_time + (z.duration||' days')::interval)::date AS\n\"return\"\ntest-# FROM foo f, bar r, baz z\ntest-# WHERE r.id = 2\ntest-# AND z.bar_id = 2\ntest-# AND (f.d = (r.start_time - '1 day'::interval)::date \ntest(# OR f.d = (r.start_time + (z.duration||' days')::interval));\n d | ?column? | r_dur | z_dur | d | leave | \nreturn \n------------+------------+-------+-------+------------+------------+------------\n 1995-06-17 | 1995-06-18 | 3 | 3 | 1995-06-17 | 1995-06-17 |\n1995-06-21\n(1 row)\n\ntest=# \ntest=# explain SELECT f.d, r.start_time::date, r.duration AS \"r_dur\",\ntest-# z.duration AS \"z_dur\", f.d,\ntest-# (r.start_time - '1 day'::interval)::date AS \"leave\",\ntest-# (r.start_time + (z.duration||' days')::interval)::date AS\n\"return\"\ntest-# FROM foo f, bar r, baz z\ntest-# WHERE r.id = 2\ntest-# AND z.bar_id = 2\ntest-# AND (f.d = (r.start_time - '1 day'::interval)::date \ntest(# OR f.d = (r.start_time + (z.duration||' days')::interval));\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=0.00..5354.86 rows=1990 width=28)\n -> Nested Loop (cost=0.00..104.86 rows=100 width=24)\n -> Seq Scan on baz z (cost=0.00..22.50 rows=10 width=8)\n -> Index Scan using bar_id_key on bar r (cost=0.00..8.14\nrows=10 width=16)\n -> Seq Scan on foo f (cost=0.00..20.00 rows=1000 width=4)\n\nEXPLAIN\ntest=# select version();\n version \n---------------------------------------------------------------\n PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc 2.95.2\n(1 row)\n\n\nWorks fine on my Debian 'woody' system on my laptop.\n\n\nAlso, looking at your other query:\n\ntest=# \ntest=# -- Here's the offending query...\ntest=# SELECT f.d, r.start_time::date, r.duration AS \"r_dur\", z.duration\nAS\ntest-# \"z_dur\"\ntest-# FROM foo f, bar r, baz z\ntest-# WHERE r.id = 2 \ntest-# AND z.bar_id = 2\ntest-# AND f.d = (r.start_time - '1 day'::interval)::date ;\n d | ?column? | r_dur | z_dur \n---+----------+-------+-------\n(0 rows)\n\nso no problem there either. Looks like you should get a trade-in on\nthat beta3 :-)\n\nCheers,\n\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: [email protected]\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n", "msg_date": "Mon, 26 Jun 2000 23:51:36 +1200", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Server process exited with status 139 (meaning?)" } ]
[ { "msg_contents": "UPDATE members m,payments p SET m.status = 2 WHERE p.paydate > 'now'::datetime - '1 month'::timespan and p.productid = 'xxxxxxx' and m.gid = p.gid\n\ni'm trying to run that query and i'm getting \n\n\"parse error near m\"\n\nbut it looks ok to me \n\ni'm running postgresql 7.0.2 with freebsd 4.0 stable\n\njeff\n\n\n\n", "msg_date": "Fri, 23 Jun 2000 15:53:45 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": true, "msg_subject": "query failed , don't know why" }, { "msg_contents": "shouldn't that be m.payments and not m,payments?\n\nBrian\n----- Original Message -----\nFrom: \"Jeff MacDonald\" <[email protected]>\nTo: <[email protected]>; <[email protected]>\nSent: Friday, June 23, 2000 2:53 PM\nSubject: [HACKERS] query failed , don't know why\n\n\n> UPDATE members m,payments p SET m.status = 2 WHERE p.paydate >\n'now'::datetime - '1 month'::timespan and p.productid = 'xxxxxxx' and m.gid\n= p.gid\n>\n> i'm trying to run that query and i'm getting\n>\n> \"parse error near m\"\n>\n> but it looks ok to me\n>\n> i'm running postgresql 7.0.2 with freebsd 4.0 stable\n>\n> jeff\n>\n>\n>\n>\n\n", "msg_date": "Fri, 23 Jun 2000 18:04:31 -0400", "msg_from": "\"Brian P. Mann\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query failed , don't know why" }, { "msg_contents": "> UPDATE members m,payments p SET m.status = 2\n> WHERE p.paydate > 'now'::datetime - '1 month'::timespan\n> and p.productid = 'xxxxxxx' and m.gid = p.gid\n\nTry\n\nUPDATE members set status = 2\n FROM payments p\n WHERE p.paydate > timestamp 'now' - interval '1 month'\n AND p.productid = 'xxxxxxx' and members.gid = p.gid;\n", "msg_date": "Sat, 24 Jun 2000 03:31:20 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query failed , don't know why" }, { "msg_contents": "Jeff MacDonald <[email protected]> writes:\n> UPDATE members m,payments p SET m.status = 2 WHERE p.paydate > 'now'::datetime - '1 month'::timespan and p.productid = 'xxxxxxx' and m.gid = p.gid\n> i'm trying to run that query and i'm getting \n> \"parse error near m\"\n> but it looks ok to me \n\nOK according to what reference? SQL92 doesn't allow anything but a\nsimple <table name> between UPDATE and SET --- no aliases, much less\nmultiple table names.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Jun 2000 00:37:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] query failed , don't know why " } ]
[ { "msg_contents": "> Probably the correct way to handle this is to run the foreign key\n> constraint trigger as the user who created the constraint (or\n> something like that) rather than the user making the insert. I'm not\n> sure how hard that would be.\nThe problem is that now using both refint and GRANT/REVOKE will not work\ntogether. Only if I have separate areas of tables, I can assure that each\nuser may work on his own area. But most databases have referencing tables\nwhich parent table is read-only for the simple users and child tables are\nread-write for them. My opinion is that it is neccessary to allow such\nreferencing. If Postgres doesn't support this, almost nobody can use\nrefint and ACLs together. Performance is also important, I do know...\n\nRegards,\nZoltan\n\n", "msg_date": "Sat, 24 Jun 2000 13:48:22 +0200 (CEST)", "msg_from": "Kovacs Zoltan Sandor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: refint/acl problem" } ]
[ { "msg_contents": "Attached is a first attempt at implementing the classoid feature. It\nworks! Can the postgres gurus comment if I've done it right and point\nout any problems. A lot of it was guess work so I'm sure it can be\ncleaned up some.\n\n-- \nChris Bitmead\nmailto:[email protected]", "msg_date": "Sun, 25 Jun 2000 01:26:16 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "CLASSOID patch" }, { "msg_contents": "Chris Bitmead writes:\n\n> Attached is a first attempt at implementing the classoid feature.\n\nI'm wondering what other people think about the naming. Firstly, it's my\nfeeling that TABLEOID would be more in line with the general conventions.\nSecondly, maybe we ought to make the name less susceptible to collision by\nchoosing a something like _CLASSOID (or whatever).\n\n> It works!\n\nGreat! :)\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 26 Jun 2000 03:41:58 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CLASSOID patch" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Chris Bitmead\n> \n> Attached is a first attempt at implementing the classoid feature. It\n> works! Can the postgres gurus comment if I've done it right and point\n> out any problems. A lot of it was guess work so I'm sure it can be\n> cleaned up some.\n>\n\nThe points I've noticed are the following.\n\n1) It seems not preferable to add an entry *relation* which is of\n Relation type in HeapTupleData. Relation OID seems to be\n sufficient for your purpose.\n\n2) The change in optimizer/path/tidpath.c seems to have\n no meaning.\n\nSorry,I have no time to check your patch more now.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Mon, 26 Jun 2000 11:51:01 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] CLASSOID patch" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> The points I've noticed are the following.\n> 1) It seems not preferable to add an entry *relation* which is of\n> Relation type in HeapTupleData. Relation OID seems to be\n> sufficient for your purpose.\n\nI haven't looked at the patch at all yet, but I agree 100% with\nHiroshi on this point. Relation is a pointer to a relcache entry\nand relcache entries are *volatile*. If all you need is the OID\nthen store the OID --- don't open Pandora's box by assuming the\nrelcache entry will never disappear before your tuple value does.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Jun 2000 23:18:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: [HACKERS] CLASSOID patch " }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Chris Bitmead writes:\n> \n> > Attached is a first attempt at implementing the classoid feature.\n> \n> I'm wondering what other people think about the naming. Firstly, it's my\n> feeling that TABLEOID would be more in line with the general conventions.\n\nI was thinking this myself today. Mainly because I wonder if in the\nfuture there may be support for more than one table implementing a\nparticular class type. On the other hand the oid is a reference to the\npg_class table. Maybe pg_class should be renamed pg_table? 
Anyway, my\ncurrent thinking is that tableoid is better.\n\nThe general naming conventions in postgres are a bit disturbing. Some\nplaces refer to classes, some to tables, some to relations. One day it\nshould all be reconciled :-).\n\n> Secondly, maybe we ought to make the name less susceptible to collision by\n> choosing a something like _CLASSOID (or whatever).\n\nOnly if oid becomes _oid and ctid becomes _ctid. I don't think it's\nworth it myself.\n\n> > It works!\n> \n> Great! :)\n", "msg_date": "Mon, 26 Jun 2000 13:24:56 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CLASSOID patch" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I'm wondering what other people think about the naming. Firstly, it's my\n> feeling that TABLEOID would be more in line with the general conventions.\n\nNo strong feeling either way. The old-line Postgres naming conventions\nwould suggest CLASSOID or RELATIONOID, but I sure wouldn't propose\nRELATIONOID.\n\n> Secondly, maybe we ought to make the name less susceptible to collision by\n> choosing a something like _CLASSOID (or whatever).\n\nNo, I don't like that. If we're going to do this at all then the name\nought to be consistent with the names of existing system attributes,\nand those have no underscore decoration.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Jun 2000 23:36:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CLASSOID patch " }, { "msg_contents": "Hiroshi Inoue wrote:\n> The points I've noticed are the following.\n> \n> 1) It seems not preferable to add an entry *relation* which is of\n> Relation type in HeapTupleData. Relation OID seems to be\n> sufficient for your purpose.\n\nOnly that I was contemplating whether there should also be a \"tablename\"\nattribute in addition to \"classoid\"/\"tableoid\", and I thought that\nsomehow it should be easier to get from Relation to its name, although\nit's not immediately obvious to me if it is possible. If it is easily\ndone it seems desirable not to force people to join with pg_class.\n\n> 2) The change in optimizer/path/tidpath.c seems to have\n> no meaning.\n\nYes that was definitely a mistake, and is commented out as you see.\n\nSpecific questions I have about the patch are...\n\n*) Does this change not add additional storage to disk? I understand it\ndoesn't, but I don't understand the details.\n\n*) in access/heap/heapam.c I wildly inserted a tuple->relation =\nrelation everywhere I could see. Perhaps someone with more insight can\ntell me if some of these are excessive, or conversly if there are some\nother access methods which will cause it not to work.\n", "msg_date": "Mon, 26 Jun 2000 13:36:48 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CLASSOID patch" }, { "msg_contents": "Chris Bitmead wrote:\n> \n> Peter Eisentraut wrote:\n> >\n> > Chris Bitmead writes:\n> >\n> > > Attached is a first attempt at implementing the classoid feature.\n> >\n> > I'm wondering what other people think about the naming. Firstly, it's my\n> > feeling that TABLEOID would be more in line with the general conventions.\n> \n> I was thinking this myself today. Mainly because I wonder if in the\n> future there may be support for more than one table implementing a\n> particular class type. On the other hand the oid is a reference to the\n> pg_class table. Maybe pg_class should be renamed pg_table? 
Anyway, my\n> current thinking is that tableoid is better.\n\nOr put another way, I see SQL3 has a feature S051 \"CREATE TABLE\n<tablename> OF <type>\", and it seems maybe the <type> should be called a\nclass, and the table a collection of that class. This would advocate the\ntableoid name I think. Someone please correct me if my thinking is\nmuddled here.\n", "msg_date": "Mon, 26 Jun 2000 14:48:13 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CLASSOID patch" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> \n> Hiroshi Inoue wrote:\n> > The points I've noticed are the following.\n> > \n> > 1) It seems not preferable to add an entry *relation* which is of\n> > Relation type in HeapTupleData. Relation OID seems to be\n> > sufficient for your purpose.\n> \n> Only that I was contemplating whether there should also be a \"tablename\"\n> attribute in addition to \"classoid\"/\"tableoid\", and I thought that\n> somehow it should be easier to get from Relation to its name, although\n> it's not immediately obvious to me if it is possible. If it is easily\n> done it seems desirable not to force people to join with pg_class.\n>\n\nThough the entries other than t_data in HeapTupleData\naren't stored to disk,HeapTupleData is just an extension\nof HeapTupleHeaderData which represents the stored\nformat of tuples. Isn't it strange to you that htup.h is\ndependent on rel.h ?\n \n> \n> Specific questions I have about the patch are...\n> \n> *) Does this change not add additional storage to disk? I understand it\n> doesn't, but I don't understand the details.\n>\n\nAFAIK,it doesn't.\n \n> *) in access/heap/heapam.c I wildly inserted a tuple->relation =\n> relation everywhere I could see. Perhaps someone with more insight can\n> tell me if some of these are excessive, or conversly if there are some\n> other access methods which will cause it not to work.\n>\n\nIt may be unnecessary for heap_insert/delete/update/mark4update().\nI'm not sure however.\n\nHiroshi Inoue\[email protected] \n", "msg_date": "Mon, 26 Jun 2000 19:18:48 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] CLASSOID patch" }, { "msg_contents": "On Mon, 26 Jun 2000 13:24:56 +1000, Chris Bitmead wrote:\n\n>\n>I was thinking this myself today. Mainly because I wonder if in the\n>future there may be support for more than one table implementing a\n>particular class type. On the other hand the oid is a reference to the\n\n Which is very common in wrapper software technology ! Normally only\nthe first implementation is done this way: one class - one table. But\nthis is only a very naive design decision. Then when the performance \nlacks hierarchy tree are converted into one table ... etc\n\n Just my thoughts about something like this ....\n\n Marten\n\n\nMarten Feldtmann, Germany\n\n", "msg_date": "Mon, 26 Jun 2000 19:39:19 +0100 (MEZ)", "msg_from": "[email protected] (Marten Feldtmann)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CLASSOID patch" }, { "msg_contents": "I don't see this in the tree. Status, please.\n\n> \n> Attached is a first attempt at implementing the classoid feature. It\n> works! Can the postgres gurus comment if I've done it right and point\n> out any problems. A lot of it was guess work so I'm sure it can be\n> cleaned up some.\n> \n> -- \n> Chris Bitmead\n> mailto:[email protected]\n\n> ? config.log\n> ? config.cache\n> ? config.status\n> ? nohup.out\n> ? 
GNUmakefile\n> ? src/GNUmakefile\n> ? src/Makefile.global\n> ? src/ID\n> ? src/nohup.out\n> ? src/backend/fmgr.h\n> ? src/backend/parse.h\n> ? src/backend/postgres\n> ? src/backend/global1.bki.source\n> ? src/backend/local1_template1.bki.source\n> ? src/backend/global1.description\n> ? src/backend/local1_template1.description\n> ? src/backend/1\n> ? src/backend/catalog/genbki.sh\n> ? src/backend/catalog/global1.bki.source\n> ? src/backend/catalog/global1.description\n> ? src/backend/catalog/local1_template1.bki.source\n> ? src/backend/catalog/local1_template1.description\n> ? src/backend/parser/y.output\n> ? src/backend/parser/y.output.gz\n> ? src/backend/parser/gram.y.works\n> ? src/backend/parser/gram.y.works.try\n> ? src/backend/parser/y.output.noerror\n> ? src/backend/parser/gram.y.gz\n> ? src/backend/port/Makefile\n> ? src/backend/utils/Gen_fmgrtab.sh\n> ? src/backend/utils/fmgr.h\n> ? src/backend/utils/fmgrstamp-h\n> ? src/bin/initdb/initdb\n> ? src/bin/initlocation/initlocation\n> ? src/bin/ipcclean/ipcclean\n> ? src/bin/pg_ctl/pg_ctl\n> ? src/bin/pg_dump/Makefile\n> ? src/bin/pg_dump/pg_dump\n> ? src/bin/pg_id/pg_id\n> ? src/bin/pg_passwd/pg_passwd\n> ? src/bin/pg_version/Makefile\n> ? src/bin/pg_version/pg_version\n> ? src/bin/pgtclsh/mkMakefile.tcldefs.sh\n> ? src/bin/pgtclsh/mkMakefile.tkdefs.sh\n> ? src/bin/psql/Makefile\n> ? src/bin/psql/psql\n> ? src/bin/scripts/createlang\n> ? src/include/version.h\n> ? src/include/config.h\n> ? src/include/parser/parse.h\n> ? src/include/utils/fmgroids.h\n> ? src/interfaces/Makefile\n> ? src/interfaces/ecpg/lib/Makefile\n> ? src/interfaces/ecpg/lib/libecpg.so.3.1.1\n> ? src/interfaces/ecpg/preproc/Makefile\n> ? src/interfaces/ecpg/preproc/ecpg\n> ? src/interfaces/jdbc/postgresql.jar\n> ? src/interfaces/jdbc/example/psql.class\n> ? src/interfaces/jdbc/postgresql/DriverClass.java\n> ? src/interfaces/jdbc/postgresql/DriverClass.class\n> ? src/interfaces/jdbc/postgresql/Connection.class\n> ? src/interfaces/jdbc/postgresql/Field.class\n> ? src/interfaces/jdbc/postgresql/PG_Stream.class\n> ? src/interfaces/jdbc/postgresql/Driver.class\n> ? src/interfaces/jdbc/postgresql/ResultSet.class\n> ? src/interfaces/jdbc/postgresql/fastpath/Fastpath.class\n> ? src/interfaces/jdbc/postgresql/fastpath/FastpathArg.class\n> ? src/interfaces/jdbc/postgresql/geometric/PGbox.class\n> ? src/interfaces/jdbc/postgresql/geometric/PGpoint.class\n> ? src/interfaces/jdbc/postgresql/geometric/PGcircle.class\n> ? src/interfaces/jdbc/postgresql/geometric/PGline.class\n> ? src/interfaces/jdbc/postgresql/geometric/PGlseg.class\n> ? src/interfaces/jdbc/postgresql/geometric/PGpath.class\n> ? src/interfaces/jdbc/postgresql/geometric/PGpolygon.class\n> ? src/interfaces/jdbc/postgresql/jdbc2/ResultSet.class\n> ? src/interfaces/jdbc/postgresql/jdbc2/Connection.class\n> ? src/interfaces/jdbc/postgresql/jdbc2/ResultSetMetaData.class\n> ? src/interfaces/jdbc/postgresql/jdbc2/DatabaseMetaData.class\n> ? src/interfaces/jdbc/postgresql/jdbc2/Statement.class\n> ? src/interfaces/jdbc/postgresql/jdbc2/PreparedStatement.class\n> ? src/interfaces/jdbc/postgresql/jdbc2/CallableStatement.class\n> ? src/interfaces/jdbc/postgresql/largeobject/LargeObjectManager.class\n> ? src/interfaces/jdbc/postgresql/largeobject/LargeObject.class\n> ? src/interfaces/jdbc/postgresql/util/PSQLException.class\n> ? src/interfaces/jdbc/postgresql/util/UnixCrypt.class\n> ? src/interfaces/jdbc/postgresql/util/Serialize.class\n> ? src/interfaces/jdbc/postgresql/util/PGobject.class\n> ? 
src/interfaces/jdbc/postgresql/util/PGtokenizer.class\n> ? src/interfaces/jdbc/postgresql/util/PGmoney.class\n> ? src/interfaces/libpgeasy/Makefile\n> ? src/interfaces/libpgeasy/libpgeasy.so.2.1\n> ? src/interfaces/libpgtcl/Makefile\n> ? src/interfaces/libpgtcl/mkMakefile.tcldefs.sh\n> ? src/interfaces/libpgtcl/mkMakefile.tkdefs.sh\n> ? src/interfaces/libpq/Makefile\n> ? src/interfaces/libpq/libpq.so.2.1\n> ? src/interfaces/libpq++/Makefile\n> ? src/interfaces/libpq++/libpq++.so.3.1\n> ? src/interfaces/odbc/GNUmakefile\n> ? src/interfaces/odbc/Makefile.global\n> ? src/interfaces/perl5/GNUmakefile\n> ? src/interfaces/python/GNUmakefile\n> ? src/pl/Makefile\n> ? src/pl/plperl/GNUmakefile\n> ? src/pl/plpgsql/Makefile\n> ? src/pl/plpgsql/src/Makefile\n> ? src/pl/plpgsql/src/libplpgsql.so.1.0\n> ? src/pl/tcl/mkMakefile.tcldefs.sh\n> ? src/test/regress/GNUmakefile\n> ? src/test/regress/x.x\n> ? src/test/regress/nohup.out\n> Index: src/backend/access/common/heaptuple.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/access/common/heaptuple.c,v\n> retrieving revision 1.62\n> diff -c -r1.62 heaptuple.c\n> *** src/backend/access/common/heaptuple.c\t2000/04/12 17:14:36\t1.62\n> --- src/backend/access/common/heaptuple.c\t2000/06/24 15:24:46\n> ***************\n> *** 9,15 ****\n> *\n> *\n> * IDENTIFICATION\n> ! *\t $Header: /home/projects/pgsql/cvsroot/pgsql/src/backend/access/common/heaptuple.c,v 1.62 2000/04/12 17:14:36 momjian Exp $\n> *\n> * NOTES\n> *\t The old interface functions have been converted to macros\n> --- 9,15 ----\n> *\n> *\n> * IDENTIFICATION\n> ! *\t $Header: /usr/local/cvsroot/pgsql/src/backend/access/common/heaptuple.c,v 1.62 2000/04/12 17:14:36 momjian Exp $\n> *\n> * NOTES\n> *\t The old interface functions have been converted to macros\n> ***************\n> *** 169,174 ****\n> --- 169,175 ----\n> \telse\n> \t\tswitch (attnum)\n> \t\t{\n> + \t\t\tcase ClassOidAttributeNumber:\n> \t\t\tcase SelfItemPointerAttributeNumber:\n> \t\t\tcase ObjectIdAttributeNumber:\n> \t\t\tcase MinTransactionIdAttributeNumber:\n> ***************\n> *** 205,210 ****\n> --- 206,213 ----\n> \n> \tswitch (attno)\n> \t{\n> + \t\tcase ClassOidAttributeNumber:\n> + \t\t\treturn sizeof f->t_oid;\n> \t\tcase SelfItemPointerAttributeNumber:\n> \t\t\treturn sizeof f->t_ctid;\n> \t\tcase ObjectIdAttributeNumber:\n> ***************\n> *** 237,242 ****\n> --- 240,248 ----\n> \n> \tswitch (attno)\n> \t{\n> + \t\tcase ClassOidAttributeNumber:\n> + \t\t\tbyval = true;\n> + \t\t\tbreak;\n> \t\tcase SelfItemPointerAttributeNumber:\n> \t\t\tbyval = false;\n> \t\t\tbreak;\n> ***************\n> *** 275,281 ****\n> {\n> \tswitch (attnum)\n> \t{\n> ! \t\t\tcase SelfItemPointerAttributeNumber:\n> \t\t\treturn (Datum) &tup->t_ctid;\n> \t\tcase ObjectIdAttributeNumber:\n> \t\t\treturn (Datum) (long) tup->t_oid;\n> --- 281,289 ----\n> {\n> \tswitch (attnum)\n> \t{\n> ! case ClassOidAttributeNumber:\n> ! \t\t\treturn (Datum) &tup->t_classoid;\n> ! 
case SelfItemPointerAttributeNumber:\n> \t\t\treturn (Datum) &tup->t_ctid;\n> \t\tcase ObjectIdAttributeNumber:\n> \t\t\treturn (Datum) (long) tup->t_oid;\n> Index: src/backend/access/heap/heapam.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/access/heap/heapam.c,v\n> retrieving revision 1.71\n> diff -c -r1.71 heapam.c\n> *** src/backend/access/heap/heapam.c\t2000/06/15 04:09:34\t1.71\n> --- src/backend/access/heap/heapam.c\t2000/06/24 15:24:49\n> ***************\n> *** 235,240 ****\n> --- 235,242 ----\n> \tint\t\t\tlinesleft;\n> \tItemPointer tid = (tuple->t_data == NULL) ?\n> \t(ItemPointer) NULL : &(tuple->t_self);\n> + \n> + tuple->relation = relation;\n> \n> \t/* ----------------\n> \t *\tincrement access statistics\n> ***************\n> *** 567,572 ****\n> --- 569,575 ----\n> \n> \tAssert(lockmode >= NoLock && lockmode < MAX_LOCKMODES);\n> \n> + \n> \t/* ----------------\n> \t *\tincrement access statistics\n> \t * ----------------\n> ***************\n> *** 1030,1035 ****\n> --- 1033,1039 ----\n> \tItemPointer tid = &(tuple->t_self);\n> \tOffsetNumber offnum;\n> \n> + tuple->relation = relation;\n> \t/* ----------------\n> \t *\tincrement access statistics\n> \t * ----------------\n> ***************\n> *** 1124,1129 ****\n> --- 1128,1134 ----\n> \tbool\t\tinvalidBlock,\n> \t\t\t\tlinkend;\n> \n> + tp.relation = relation;\n> \t/* ----------------\n> \t *\tget the buffer from the relation descriptor\n> \t *\tNote that this does a buffer pin.\n> ***************\n> *** 1216,1221 ****\n> --- 1221,1227 ----\n> \t *\tincrement access statistics\n> \t * ----------------\n> \t */\n> + \ttup->relation = relation;\n> \tIncrHeapAccessStat(local_insert);\n> \tIncrHeapAccessStat(global_insert);\n> \n> ***************\n> *** 1284,1289 ****\n> --- 1290,1296 ----\n> \tBuffer\t\tbuffer;\n> \tint\t\t\tresult;\n> \n> + \ttp.relation = relation;\n> \t/* increment access statistics */\n> \tIncrHeapAccessStat(local_delete);\n> \tIncrHeapAccessStat(global_delete);\n> ***************\n> *** 1396,1401 ****\n> --- 1403,1409 ----\n> \tBuffer\t\tbuffer;\n> \tint\t\t\tresult;\n> \n> + \tnewtup->relation = relation;\n> \t/* increment access statistics */\n> \tIncrHeapAccessStat(local_replace);\n> \tIncrHeapAccessStat(global_replace);\n> ***************\n> *** 1524,1529 ****\n> --- 1532,1538 ----\n> \tPageHeader\tdp;\n> \tint\t\t\tresult;\n> \n> + \ttuple->relation = relation;\n> \t/* increment access statistics */\n> \tIncrHeapAccessStat(local_mark4update);\n> \tIncrHeapAccessStat(global_mark4update);\n> Index: src/backend/catalog/heap.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/catalog/heap.c,v\n> retrieving revision 1.133\n> diff -c -r1.133 heap.c\n> *** src/backend/catalog/heap.c\t2000/06/18 22:43:55\t1.133\n> --- src/backend/catalog/heap.c\t2000/06/24 15:24:54\n> ***************\n> *** 101,106 ****\n> --- 101,107 ----\n> *\t\t\t\tbe more difficult if not impossible.\n> */\n> \n> + \n> static FormData_pg_attribute a1 = {\n> \t0xffffffff, {\"ctid\"}, TIDOID, 0, sizeof(ItemPointerData),\n> \tSelfItemPointerAttributeNumber, 0, -1, -1, '\\0', 'p', '\\0', 'i', '\\0', '\\0'\n> ***************\n> *** 130,137 ****\n> \t0xffffffff, {\"cmax\"}, CIDOID, 0, sizeof(CommandId),\n> \tMaxCommandIdAttributeNumber, 0, -1, -1, '\\001', 'p', '\\0', 'i', '\\0', '\\0'\n> };\n> \n> ! 
static Form_pg_attribute HeapAtt[] = {&a1, &a2, &a3, &a4, &a5, &a6};\n> \n> /* ----------------------------------------------------------------\n> *\t\t\t\tXXX END OF UGLY HARD CODED BADNESS XXX\n> --- 131,143 ----\n> \t0xffffffff, {\"cmax\"}, CIDOID, 0, sizeof(CommandId),\n> \tMaxCommandIdAttributeNumber, 0, -1, -1, '\\001', 'p', '\\0', 'i', '\\0', '\\0'\n> };\n> + \n> + static FormData_pg_attribute a7 = {\n> + \t0xffffffff, {\"classoid\"}, OIDOID, 0, sizeof(Oid),\n> + \tClassOidAttributeNumber, 0, -1, -1, '\\0', 'p', '\\0', 'i', '\\0', '\\0'\n> + };\n> \n> ! static Form_pg_attribute HeapAtt[] = {&a1, &a2, &a3, &a4, &a5, &a6, &a7};\n> \n> /* ----------------------------------------------------------------\n> *\t\t\t\tXXX END OF UGLY HARD CODED BADNESS XXX\n> Index: src/backend/optimizer/path/tidpath.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/optimizer/path/tidpath.c,v\n> retrieving revision 1.7\n> diff -c -r1.7 tidpath.c\n> *** src/backend/optimizer/path/tidpath.c\t2000/05/30 00:49:47\t1.7\n> --- src/backend/optimizer/path/tidpath.c\t2000/06/24 15:24:55\n> ***************\n> *** 105,110 ****\n> --- 105,118 ----\n> \t\t\t\t var->varoattno == SelfItemPointerAttributeNumber &&\n> \t\t\t\t var->vartype == TIDOID)\n> \t\t\targ = arg2;\n> + /*\t\telse if (var->varno == varno &&\n> + \t\t\tvar->varattno == ClassOidAttributeNumber &&\n> + \t\t\tvar->vartype == OIDCLASSOID)\n> + \t\t\targ = arg2;\n> + \t\telse if (var->varnoold == varno &&\n> + \t\t\t\t var->varoattno == ClassOidAttributeNumber &&\n> + \t\t\t\t var->vartype == OIDCLASSOID)\n> + \t\t\targ = arg2; */\n> \t}\n> \tif ((!arg) && IsA(arg2, Var))\n> \t{\n> ***************\n> *** 113,118 ****\n> --- 121,130 ----\n> \t\t\tvar->varattno == SelfItemPointerAttributeNumber &&\n> \t\t\tvar->vartype == TIDOID)\n> \t\t\targ = arg1;\n> + /*\t\telse if (var->varno == varno &&\n> + \t\t\tvar->varattno == ClassOidAttributeNumber &&\n> + \t\t\tvar->vartype == OIDCLASSOID)\n> + \t\t\targ = arg1; */\n> \t}\n> \tif (!arg)\n> \t\treturn rnode;\n> Index: src/backend/optimizer/prep/preptlist.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/optimizer/prep/preptlist.c,v\n> retrieving revision 1.36\n> diff -c -r1.36 preptlist.c\n> *** src/backend/optimizer/prep/preptlist.c\t2000/04/12 17:15:23\t1.36\n> --- src/backend/optimizer/prep/preptlist.c\t2000/06/24 15:24:57\n> ***************\n> *** 15,21 ****\n> * Portions Copyright (c) 1994, Regents of the University of California\n> *\n> * IDENTIFICATION\n> ! *\t $Header: /home/projects/pgsql/cvsroot/pgsql/src/backend/optimizer/prep/preptlist.c,v 1.36 2000/04/12 17:15:23 momjian Exp $\n> *\n> *-------------------------------------------------------------------------\n> */\n> --- 15,21 ----\n> * Portions Copyright (c) 1994, Regents of the University of California\n> *\n> * IDENTIFICATION\n> ! *\t $Header: /usr/local/cvsroot/pgsql/src/backend/optimizer/prep/preptlist.c,v 1.36 2000/04/12 17:15:23 momjian Exp $\n> *\n> *-------------------------------------------------------------------------\n> */\n> ***************\n> *** 66,72 ****\n> \tif (command_type == CMD_UPDATE || command_type == CMD_DELETE)\n> \t{\n> \t\tResdom\t *resdom;\n> ! \t\tVar\t\t *var;\n> \n> \t\tresdom = makeResdom(length(tlist) + 1,\n> \t\t\t\t\t\t\tTIDOID,\n> --- 66,72 ----\n> \tif (command_type == CMD_UPDATE || command_type == CMD_DELETE)\n> \t{\n> \t\tResdom\t *resdom;\n> ! 
\t\tVar\t\t *var1, *var2;\n> \n> \t\tresdom = makeResdom(length(tlist) + 1,\n> \t\t\t\t\t\t\tTIDOID,\n> ***************\n> *** 76,83 ****\n> \t\t\t\t\t\t\t0,\n> \t\t\t\t\t\t\ttrue);\n> \n> ! \t\tvar = makeVar(result_relation, SelfItemPointerAttributeNumber,\n> \t\t\t\t\t TIDOID, -1, 0);\n> \n> \t\t/*\n> \t\t * For an UPDATE, expand_targetlist already created a fresh tlist.\n> --- 76,85 ----\n> \t\t\t\t\t\t\t0,\n> \t\t\t\t\t\t\ttrue);\n> \n> ! \t\tvar1 = makeVar(result_relation, SelfItemPointerAttributeNumber,\n> \t\t\t\t\t TIDOID, -1, 0);\n> + \t\tvar2 = makeVar(result_relation, ClassOidAttributeNumber,\n> + \t\t\t\t\t OIDOID, -1, 0);\n> \n> \t\t/*\n> \t\t * For an UPDATE, expand_targetlist already created a fresh tlist.\n> ***************\n> *** 87,93 ****\n> \t\tif (command_type == CMD_DELETE)\n> \t\t\ttlist = listCopy(tlist);\n> \n> ! \t\ttlist = lappend(tlist, makeTargetEntry(resdom, (Node *) var));\n> \t}\n> \n> \treturn tlist;\n> --- 89,96 ----\n> \t\tif (command_type == CMD_DELETE)\n> \t\t\ttlist = listCopy(tlist);\n> \n> ! \t\ttlist = lappend(tlist, makeTargetEntry(resdom, (Node *) var1));\n> ! \t\ttlist = lappend(tlist, makeTargetEntry(resdom, (Node *) var2));\n> \t}\n> \n> \treturn tlist;\n> Index: src/backend/parser/parse_relation.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/parse_relation.c,v\n> retrieving revision 1.44\n> diff -c -r1.44 parse_relation.c\n> *** src/backend/parser/parse_relation.c\t2000/06/20 01:41:21\t1.44\n> --- src/backend/parser/parse_relation.c\t2000/06/24 15:24:58\n> ***************\n> *** 40,45 ****\n> --- 40,48 ----\n> \n> {\n> \t{\n> + \t\t\"classoid\", ClassOidAttributeNumber, OIDOID\n> + \t},\n> + \t{\n> \t\t\"ctid\", SelfItemPointerAttributeNumber, TIDOID\n> \t},\n> \t{\n> Index: src/backend/utils/cache/lsyscache.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/cache/lsyscache.c,v\n> retrieving revision 1.42\n> diff -c -r1.42 lsyscache.c\n> *** src/backend/utils/cache/lsyscache.c\t2000/06/08 22:37:30\t1.42\n> --- src/backend/utils/cache/lsyscache.c\t2000/06/24 15:25:00\n> ***************\n> *** 249,254 ****\n> --- 249,256 ----\n> \tif (attnum == ObjectIdAttributeNumber ||\n> \t\tattnum == SelfItemPointerAttributeNumber)\n> \t\treturn 1.0 / (double) ntuples;\n> + if (attnum == ClassOidAttributeNumber)\n> + \t\treturn 1.0;\n> \n> \t/*\n> \t * VACUUM ANALYZE has not been run for this table. Produce an estimate\n> Index: src/include/access/heapam.h\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/access/heapam.h,v\n> retrieving revision 1.53\n> diff -c -r1.53 heapam.h\n> *** src/include/access/heapam.h\t2000/06/18 22:44:23\t1.53\n> --- src/include/access/heapam.h\t2000/06/24 15:25:02\n> ***************\n> *** 230,239 ****\n> \t\t\t\t(Datum)((char *)&((tup)->t_self)) \\\n> \t\t\t) \\\n> \t\t\t: \\\n> \t\t\t( \\\n> \t\t\t\t(Datum)*(unsigned int *) \\\n> \t\t\t\t\t((char *)(tup)->t_data + heap_sysoffset[-(attnum)-1]) \\\n> ! \t\t\t) \\\n> \t\t) \\\n> \t) \\\n> )\n> --- 230,244 ----\n> \t\t\t\t(Datum)((char *)&((tup)->t_self)) \\\n> \t\t\t) \\\n> \t\t\t: \\\n> + \t\t\t(((attnum) == ClassOidAttributeNumber) ? 
\\\n> \t\t\t( \\\n> + \t\t\t\t(Datum)((tup)->relation->rd_id) \\\n> + \t\t\t) \\\n> + : \\\n> + \t\t\t( \\\n> \t\t\t\t(Datum)*(unsigned int *) \\\n> \t\t\t\t\t((char *)(tup)->t_data + heap_sysoffset[-(attnum)-1]) \\\n> ! \t\t\t)) \\\n> \t\t) \\\n> \t) \\\n> )\n> Index: src/include/access/htup.h\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/access/htup.h,v\n> retrieving revision 1.30\n> diff -c -r1.30 htup.h\n> *** src/include/access/htup.h\t2000/06/02 10:20:26\t1.30\n> --- src/include/access/htup.h\t2000/06/24 15:25:02\n> ***************\n> *** 133,139 ****\n> #define MinCommandIdAttributeNumber\t\t\t\t(-4)\n> #define MaxTransactionIdAttributeNumber\t\t\t(-5)\n> #define MaxCommandIdAttributeNumber\t\t\t\t(-6)\n> ! #define FirstLowInvalidHeapAttributeNumber\t\t(-7)\n> \n> /* If you make any changes above, the order off offsets in this must change */\n> extern long heap_sysoffset[];\n> --- 133,140 ----\n> #define MinCommandIdAttributeNumber\t\t\t\t(-4)\n> #define MaxTransactionIdAttributeNumber\t\t\t(-5)\n> #define MaxCommandIdAttributeNumber\t\t\t\t(-6)\n> ! #define ClassOidAttributeNumber\t\t\t (-7)\n> ! #define FirstLowInvalidHeapAttributeNumber\t\t(-8)\n> \n> /* If you make any changes above, the order off offsets in this must change */\n> extern long heap_sysoffset[];\n> ***************\n> *** 156,161 ****\n> --- 157,163 ----\n> {\n> \tuint32\t\tt_len;\t\t\t/* length of *t_data */\n> \tItemPointerData t_self;\t\t/* SelfItemPointer */\n> + \tRelation relation; /* */\n> \tMemoryContext t_datamcxt;\t/* */\n> \tHeapTupleHeader t_data;\t\t/* */\n> } HeapTupleData;\n> Index: src/include/catalog/pg_attribute.h\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/catalog/pg_attribute.h,v\n> retrieving revision 1.59\n> diff -c -r1.59 pg_attribute.h\n> *** src/include/catalog/pg_attribute.h\t2000/06/12 03:40:52\t1.59\n> --- src/include/catalog/pg_attribute.h\t2000/06/24 15:25:04\n> ***************\n> *** 267,272 ****\n> --- 267,273 ----\n> DATA(insert OID = 0 ( 1247 cmin\t\t\t\t29 0 4 -4 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1247 xmax\t\t\t\t28 0 4 -5 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1247 cmax\t\t\t\t29 0 4 -6 0 -1 -1 t p f i f f));\n> + DATA(insert OID = 0 ( 1247 classoid\t\t\t26 0 4 -7 0 -1 -1 t p f i f f));\n> \n> /* ----------------\n> *\t\tpg_database\n> ***************\n> *** 282,287 ****\n> --- 283,289 ----\n> DATA(insert OID = 0 ( 1262 cmin\t\t\t\t29 0 4 -4 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1262 xmax\t\t\t\t28 0 4 -5 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1262 cmax\t\t\t\t29 0 4 -6 0 -1 -1 t p f i f f));\n> + DATA(insert OID = 0 ( 1262 classoid\t\t\t26 0 4 -7 0 -1 -1 t p f i f f));\n> \n> /* ----------------\n> *\t\tpg_proc\n> ***************\n> *** 329,334 ****\n> --- 331,337 ----\n> DATA(insert OID = 0 ( 1255 cmin\t\t\t\t29 0 4 -4 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1255 xmax\t\t\t\t28 0 4 -5 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1255 cmax\t\t\t\t29 0 4 -6 0 -1 -1 t p f i f f));\n> + DATA(insert OID = 0 ( 1255 classoid\t\t\t26 0 4 -7 0 -1 -1 t p f i f f));\n> \n> /* ----------------\n> *\t\tpg_shadow\n> ***************\n> *** 348,353 ****\n> --- 351,357 ----\n> DATA(insert OID = 0 ( 1260 cmin\t\t\t\t29 0 4 -4 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1260 xmax\t\t\t\t28 0 4 -5 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1260 cmax\t\t\t\t29 0 4 -6 0 
-1 -1 t p f i f f));\n> + DATA(insert OID = 0 ( 1260 classoid\t\t\t26 0 4 -7 0 -1 -1 t p f i f f));\n> \n> /* ----------------\n> *\t\tpg_group\n> ***************\n> *** 362,367 ****\n> --- 366,372 ----\n> DATA(insert OID = 0 ( 1261 cmin\t\t\t\t29 0 4 -4 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1261 xmax\t\t\t\t28 0 4 -5 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1261 cmax\t\t\t\t29 0 4 -6 0 -1 -1 t p f i f f));\n> + DATA(insert OID = 0 ( 1261 classoid\t\t\t26 0 4 -7 0 -1 -1 t p f i f f));\n> \n> /* ----------------\n> *\t\tpg_attribute\n> ***************\n> *** 405,410 ****\n> --- 410,416 ----\n> DATA(insert OID = 0 ( 1249 cmin\t\t\t\t29 0 4 -4 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1249 xmax\t\t\t\t28 0 4 -5 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1249 cmax\t\t\t\t29 0 4 -6 0 -1 -1 t p f i f f));\n> + DATA(insert OID = 0 ( 1249 classoid\t\t\t26 0 4 -7 0 -1 -1 t p f i f f));\n> \n> /* ----------------\n> *\t\tpg_class\n> ***************\n> *** 458,463 ****\n> --- 464,470 ----\n> DATA(insert OID = 0 ( 1259 cmin\t\t\t\t29 0 4 -4 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1259 xmax\t\t\t\t28 0 4 -5 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1259 cmax\t\t\t\t29 0 4 -6 0 -1 -1 t p f i f f));\n> + DATA(insert OID = 0 ( 1259 classoid\t\t\t26 0 4 -7 0 -1 -1 t p f i f f));\n> \n> /* ----------------\n> *\t\tpg_attrdef\n> ***************\n> *** 473,478 ****\n> --- 480,486 ----\n> DATA(insert OID = 0 ( 1215 cmin\t\t\t\t29 0 4 -4 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1215 xmax\t\t\t\t28 0 4 -5 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1215 cmax\t\t\t\t29 0 4 -6 0 -1 -1 t p f i f f));\n> + DATA(insert OID = 0 ( 1215 classoid\t\t\t26 0 4 -7 0 -1 -1 t p f i f f));\n> \n> /* ----------------\n> *\t\tpg_relcheck\n> ***************\n> *** 488,493 ****\n> --- 496,502 ----\n> DATA(insert OID = 0 ( 1216 cmin\t\t\t\t29 0 4 -4 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1216 xmax\t\t\t\t28 0 4 -5 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1216 cmax\t\t\t\t29 0 4 -6 0 -1 -1 t p f i f f));\n> + DATA(insert OID = 0 ( 1216 classoid\t\t\t26 0 4 -7 0 -1 -1 t p f i f f));\n> \n> /* ----------------\n> *\t\tpg_trigger\n> ***************\n> *** 513,518 ****\n> --- 522,528 ----\n> DATA(insert OID = 0 ( 1219 cmin\t\t\t\t29 0 4 -4 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1219 xmax\t\t\t\t28 0 4 -5 0 -1 -1 t p f i f f));\n> DATA(insert OID = 0 ( 1219 cmax\t\t\t\t29 0 4 -6 0 -1 -1 t p f i f f));\n> + DATA(insert OID = 0 ( 1219 classoid\t\t\t26 0 4 -7 0 -1 -1 t p f i f f));\n> \n> /* ----------------\n> *\t\tpg_variable - this relation is modified by special purpose access\n> Index: src/include/catalog/pg_type.h\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/catalog/pg_type.h,v\n> retrieving revision 1.89\n> diff -c -r1.89 pg_type.h\n> *** src/include/catalog/pg_type.h\t2000/06/05 07:29:01\t1.89\n> --- src/include/catalog/pg_type.h\t2000/06/24 15:25:06\n> ***************\n> *** 337,342 ****\n> --- 337,343 ----\n> DATA(insert OID = 1025 ( _tinterval PGUID -1 -1 f b t \\054 0 704 array_in array_out array_in array_out i _null_ ));\n> DATA(insert OID = 1026 ( _filename PGUID -1 -1 f b t \\054 0 605 array_in array_out array_in array_out i _null_ ));\n> DATA(insert OID = 1027 ( _polygon\t PGUID -1 -1 f b t \\054 0 604 array_in array_out array_in array_out d _null_ ));\n> + \n> /*\n> *\tNote: the size of aclitem needs to match sizeof(AclItem) in acl.h.\n> *\tThanks to some 
padding, this will be 8 on all platforms.\n> Index: src/tools/make_mkid\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/tools/make_mkid,v\n> retrieving revision 1.4\n> diff -c -r1.4 make_mkid\n> *** src/tools/make_mkid\t2000/03/31 01:41:27\t1.4\n> --- src/tools/make_mkid\t2000/06/24 15:25:09\n> ***************\n> *** 1,6 ****\n> #!/bin/sh\n> find `pwd`/ \\( -name _deadcode -a -prune \\) -o \\\n> ! \t-type f -name '*.[chyl]' -print|sed 's;//;/;g' | mkid\n> \n> find . -name 'CVS' -prune -o -type d -print |while read DIR\n> do\n> --- 1,6 ----\n> #!/bin/sh\n> find `pwd`/ \\( -name _deadcode -a -prune \\) -o \\\n> ! \t-type f -name '*.[chyl]' -print|sed 's;//;/;g' | mkid -\n> \n> find . -name 'CVS' -prune -o -type d -print |while read DIR\n> do\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Oct 2000 17:53:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] CLASSOID patch" }, { "msg_contents": "It's TableOid now. That patch is long dead...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Oct 2000 18:24:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] CLASSOID patch " } ]
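A rough sketch of the kind of query this new system attribute enables, using the tableoid spelling the thread eventually settled on; the table names are invented for illustration, and joining against pg_class is simply the obvious way to turn the oid back into a relation name:

CREATE TABLE person (name text);
CREATE TABLE employee (salary int4) INHERITS (person);
INSERT INTO person VALUES ('alice');
INSERT INTO employee VALUES ('bob', 1000);

-- which table of the hierarchy did each row come from?
SELECT c.relname, p.name
FROM person* p, pg_class c
WHERE c.oid = p.tableoid;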
[ { "msg_contents": "Can anybody else reproduce this in the current development version?\n\nStart one postmaster in the foreground. Start another one on the same data\ndir in the background on a different port (say -p 8888 -S). It will give\nyou a message about the pid file thing and then it seems to delete the pid\nfile on my system.\n\nWhat's really scary is that when I try to step through this with a\ndebugger then the pid file stays in place.\n\n(An alternative way to do this is to `echo foo > postmaster.pid ; chmod\n0000 postmaster.pid'. In any case you must start the server with -S.)\n\nLinux 2.2.12 i586\negcs-2.91.66\ngdb 4.17.0.4\n--enable-debug, --enable-cassert\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 25 Jun 2000 03:00:36 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Pid file magically disappears" } ]
[ { "msg_contents": "Hello all,\n\nI try to write large objects in one table support for postgres and have\nfollowing problem:\n\nI should calculate maximum amount of data I can put into the bytea field.\nTable scheme is:\n\ncreate table pg_largeobject {\n loid oid,\n lastbyte int4,\n data bytea\n);\n\nIf I will assume that oid == int4 I will have 4 * 2 + 4 (size of vl_len in bytea) = 12.\nPlus 36 bytes of header (as noted in FAQ). It will be 48 bytes... But I still get\nerror that tuple is out too big... And that my size is 8188, but should be 8140.\n\nIt would be great if someone will clarify the way I should do such calculations.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Mon, 26 Jun 2000 03:48:40 +0700", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "Maximum len of data fit into the tuple" }, { "msg_contents": "Denis Perchine <[email protected]> writes:\n> If I will assume that oid == int4 I will have 4 * 2 + 4 (size of\n> vl_len in bytea) = 12. Plus 36 bytes of header (as noted in FAQ). It\n> will be 48 bytes... But I still get error that tuple is out too\n> big... And that my size is 8188, but should be 8140.\n\n> It would be great if someone will clarify the way I should do such\n> calculations.\n\nYou forgot the per-page overhead.\n\nOffhand 36 seems too small for the per-tuple overhead, anyway, but I'm\ntoo lazy to count bytes in include/access/htup.h right now...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Jun 2000 23:26:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maximum len of data fit into the tuple " } ]
[ { "msg_contents": "This is a problem in release 7.0.2.\n\nI had never heard of the truncate command! It seems that it ought to be\ndisallowed on a table that is a target for RI checks, since checking that\ndeletions are OK would frustrate the whole purpose of truncate as opposed\nto delete.\n\n------- Forwarded Message\n\nDate: 25 Jun 2000 13:49:14 +0000\nFrom: Grzegorz Stelmaszek <[email protected]>\nTo: [email protected]\nSubject: Bug#66232: postgresql: TRUNCATE doesn't check REFERENCE clause\n\nPackage: postgresql\nVersion: 7.0-release-1\nSeverity: normal\n\nTRUNCATE'ing the table allows rows to be deleted in spite of the REFERENCES\nclause pointing to that table. These may bring the db to an inconsistent\nstate.\n\n...\n------- End of Forwarded Message\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Honour thy father and mother; which is the first \n commandment with promise; That it may be well with \n thee, and thou mayest live long on the earth.\" \n Ephesians 6:2,3 \n\n\n", "msg_date": "Sun, 25 Jun 2000 23:26:16 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "TRUNCATE violates Referential Integrity" } ]
[ { "msg_contents": "Hi all,\nThe following phenomenon has just been reported by\nMikage Sawartari in Japan.\n\nmikage=# CREATE TABLE test (id INTEGER);\nCREATE\nmikage=# CREATE UNIQUE INDEX test_id_ub ON test (id);\nCREATE\nmikage=# INSERT INTO test VALUES (1);\nINSERT 18828 1\nmikage=# INSERT INTO test VALUES (1);\nERROR: Cannot insert a duplicate key into unique index test_id_ub\nmikage=# begin;\nBEGIN\nmikage=# SELECT * FROM test FOR UPDATE;\n id\n----\n 1\n(1 row)\n\nmikage=# INSERT INTO test VALUES (1);\nINSERT 18831 1\nmikage=# commit;\nCOMMIT\nmikage=# SELECT * FROM test;\n id\n----\n 1\n 1\n(2 rows)\n\n\nHeapTupleSatisfiesDirty() seems to neglect the check about HEAP_MARKED_\nFOR_UPDATE in a place. After applying the following patch,unique constraint\nworks well in my environment,\n\nComments ?\n\nIndex: utils/time/tqual.c\n===================================================================\nRCS file: /home/cvs/pgcurrent/src/backend/utils/time/tqual.c,v\nretrieving revision 1.5\ndiff -c -r1.5 tqual.c\n*** utils/time/tqual.c\t2000/01/26 09:59:05\t1.5\n--- utils/time/tqual.c\t2000/06/26 00:13:01\n***************\n*** 441,447 ****\n--- 441,451 ----\n \t}\n \n \tif (TransactionIdIsCurrentTransactionId(tuple->t_xmax))\n+ \t{\n+ \t\tif (tuple->t_infomask & HEAP_MARKED_FOR_UPDATE)\n+ \t\t\treturn true;\n \t\treturn false;\n+ \t}\n \n \tif (!TransactionIdDidCommit(tuple->t_xmax))\n \t{\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Mon, 26 Jun 2000 09:25:02 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "SELECT FOR UPDATE breaks unique constraint " } ]
[ { "msg_contents": "Can somebody confirm how the executable extensions behave on\nWindows/Cygwin? It seems that the following is true:\n\ncc -o foo ...\n\ncreates a file `foo.exe'.\n\ncc -o foo.exe ...\n\nalso creates a file `foo.exe'. Is that correct?\n\nIt also seems that the make targets need to be written like\n\npg_passwd$(X):\n\nrather than\n\npg_passwd:\n\nbecause otherwise you're not really updating the target of the rule.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 26 Jun 2000 03:41:12 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": ".exe extension on Windows" }, { "msg_contents": "\nOn Mon, 26 Jun 2000 03:41:12 +0200 (CEST)\nPeter Eisentraut <[email protected]> wrote:\n\n> Can somebody confirm how the executable extensions behave on\n> Windows/Cygwin? It seems that the following is true:\n> \n> cc -o foo ...\n> \n> creates a file `foo.exe'.\n> \n> cc -o foo.exe ...\n> \n> also creates a file `foo.exe'. Is that correct?\n\n Yes.\n\n> It also seems that the make targets need to be written like\n> \n> pg_passwd$(X):\n> \n> rather than\n> \n> pg_passwd:\n> \n> because otherwise you're not really updating the target of the rule.\n\n I agreed this.\n-----\nYutaka Tanida<[email protected]>\n\n", "msg_date": "Mon, 26 Jun 2000 20:33:11 +0900", "msg_from": "Yutaka tanida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] .exe extension on Windows" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Yutaka tanida\n>\n> On Mon, 26 Jun 2000 03:41:12 +0200 (CEST)\n> Peter Eisentraut <[email protected]> wrote:\n>\n> > Can somebody confirm how the executable extensions behave on\n> > Windows/Cygwin? It seems that the following is true:\n> >\n> > cc -o foo ...\n> >\n> > creates a file `foo.exe'.\n> >\n> > cc -o foo.exe ...\n> >\n> > also creates a file `foo.exe'. Is that correct?\n>\n> Yes.\n>\n> > It also seems that the make targets need to be written like\n> >\n> > pg_passwd$(X):\n> >\n> > rather than\n> >\n> > pg_passwd:\n> >\n> > because otherwise you're not really updating the target of the rule.\n>\n> I agreed this.\n\nHmm,I see the following in my environment.\n\nbash-2.02$ ls\nCVS Makefile pg_passwd.c pg_passwd.o\nbash-2.02$ make pg_passwd\ngcc -o pg_passwd\npg_passwd.o -lcrypt -lm -lreadline -ltermcap -lncurses -lcygipc\n -g\nbash-2.02$ ls\nCVS Makefile pg_passwd.c pg_passwd.exe pg_passwd.o\nbash-2.02$ make pg_passwd\nmake: `pg_passwd' is up to date.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Tue, 27 Jun 2000 08:08:03 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] .exe extension on Windows" }, { "msg_contents": "Hiroshi Inoue writes:\n\n> Hmm,I see the following in my environment.\n> \n> bash-2.02$ ls\n> CVS Makefile pg_passwd.c pg_passwd.o\n> bash-2.02$ make pg_passwd\n> gcc -o pg_passwd\n> pg_passwd.o -lcrypt -lm -lreadline -ltermcap -lncurses -lcygipc\n> -g\n> bash-2.02$ ls\n> CVS Makefile pg_passwd.c pg_passwd.exe pg_passwd.o\n> bash-2.02$ make pg_passwd\n> make: `pg_passwd' is up to date.\n\nSeems make is smarter than it wants to admit. Then that would not sit well\nwith the changes I just made (which would require you to do `make\npg_passwd.exe', unless make is *that* smart). Gotta investigate this in\nthe GNU make manuals. 
Thanks.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 27 Jun 2000 02:42:52 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] .exe extension on Windows" } ]
[ { "msg_contents": "Pgaccess currently installs its accessory files into PREFIX/pgaccess,\nwhich isn't really compatible with standard Unix file system layouts. From\nmy reading of things the proper place for it would be\nPREFIX/share/pgaccess. This does not affect users since the file path is\nsubstituted into the pgaccess executable at build time. Protests?\n\n(Pedants might point out that part of pgaccess should go into\nPREFIX/libexec, but we can worry about that later.)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 26 Jun 2000 03:41:27 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "pgaccess installation layout" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Pgaccess currently installs its accessory files into PREFIX/pgaccess,\n> which isn't really compatible with standard Unix file system layouts. From\n> my reading of things the proper place for it would be\n> PREFIX/share/pgaccess. This does not affect users since the file path is\n> substituted into the pgaccess executable at build time. Protests?\n> \n> (Pedants might point out that part of pgaccess should go into\n> PREFIX/libexec, but we can worry about that later.)\n\nI can't say if it's ok or if not. If you think that it's ok and\nPostgreSQL developers agree, let's put it there.\nI don't know the meaning of .../share/ directory and what the standards\nare.\n\nBest regards,\nTeo\n", "msg_date": "Mon, 26 Jun 2000 09:10:03 +0300", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgaccess installation layout" } ]
[ { "msg_contents": "And here is an old nemesis, thought to have been fixed in 7.0,\nreproducible on 7.0beta3 with the following:\n\nDROP TABLE foo;\nCREATE TABLE foo (d date);\nCREATE UNIQUE INDEX date_uidx ON foo(d);\nCREATE UNIQUE INDEX datetime_uidx ON foo(datetime(d));\nINSERT INTO foo (d) VALUES ('17-Jun-1995');\n\nDROP TABLE bar;\nDROP SEQUENCE bar_id_seq;\nCREATE TABLE bar (\n id SERIAL, \n start_time DATETIME,\n duration FLOAT\n);\nINSERT INTO bar (start_time, duration) VALUES ('17-Jun-1995', 3);\n\nDROP TABLE baz;\nDROP SEQUENCE baz_id_seq;\nCREATE TABLE baz (\n id SERIAL, \n bar_id INTEGER,\n duration FLOAT\n);\nINSERT INTO baz (bar_id, duration) SELECT id, duration FROM bar;\n \n-- Here's the offending query...\nSELECT f.d, r.start_time::date, r.duration AS \"r_dur\", z.duration AS\n\"z_dur\"\nFROM foo f, bar r, baz z\nWHERE r.id = 2 \n AND z.bar_id = 2\n AND f.d = (r.start_time - '1 day'::interval)::date ;\n", "msg_date": "Mon, 26 Jun 2000 00:14:35 -0500", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": true, "msg_subject": "ExecInitIndexScan: both left and right ops are rel-vars" }, { "msg_contents": "Ed Loehr wrote:\n> \n> And here is an old nemesis, thought to have been fixed in 7.0,\n> reproducible on 7.0beta3 with the following:\n> \n> DROP TABLE foo;\n> CREATE TABLE foo (d date);\n> CREATE UNIQUE INDEX date_uidx ON foo(d);\n> CREATE UNIQUE INDEX datetime_uidx ON foo(datetime(d));\n> INSERT INTO foo (d) VALUES ('17-Jun-1995');\n> \n> DROP TABLE bar;\n> DROP SEQUENCE bar_id_seq;\n> CREATE TABLE bar (\n> id SERIAL,\n> start_time DATETIME,\n> duration FLOAT\n> );\n> INSERT INTO bar (start_time, duration) VALUES ('17-Jun-1995', 3);\n> \n> DROP TABLE baz;\n> DROP SEQUENCE baz_id_seq;\n> CREATE TABLE baz (\n> id SERIAL,\n> bar_id INTEGER,\n> duration FLOAT\n> );\n> INSERT INTO baz (bar_id, duration) SELECT id, duration FROM bar;\n> \n\n\nA final clue: if I run 'VACUUM ANALYZE' at this point in the script,\nbefore the select, the error disappears.\n\nRegards,\nEd Loehr\n\n> -- Here's the offending query...\n> SELECT f.d, r.start_time::date, r.duration AS \"r_dur\", z.duration AS\n> \"z_dur\"\n> FROM foo f, bar r, baz z\n> WHERE r.id = 2\n> AND z.bar_id = 2\n> AND f.d = (r.start_time - '1 day'::interval)::date ;\n", "msg_date": "Mon, 26 Jun 2000 00:29:37 -0500", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ExecInitIndexScan: both left and right ops are rel-vars" }, { "msg_contents": "Try it in 7.0.2 --- if it still happens there then I'm interested.\n(Don't see it here with current sources.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Jun 2000 02:43:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ExecInitIndexScan: both left and right ops are rel-vars " } ]
[ { "msg_contents": "\n> In my mind the point of the \"database\" concept is to provide a domain\n> within which custom datatypes and functions are available. Schemas\n> will control the visibility of tables, but SQL92 hasn't thought about\n> controlling visibility of datatypes or functions. So I think we will\n> still want \"database\" = \"span of applicability of system catalogs\"\n> and multiple databases allowed per installation, even though there may\n> be schemas subdividing the database(s).\n\nYes, and people wanting only one database like in Oracle will simply only\ncreate one database. The only issue I can think of is that they can have\nsome \"default database\" other than the current dbname=username, so\nthey don't need to worry about it. \n\nAndreas\n", "msg_date": "Mon, 26 Jun 2000 09:57:43 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big 7.1 open items " } ]
[ { "msg_contents": "Please note that as I've just moved home, my home internet connection is not\nup at the moment, so it's better to send emails about the JDBC driver to my\nwork's address:\n\n\[email protected]\n\nHopefully, as soon as BT get the new phone line up and running I'll be back\nin business.\n\nPeter\n\n--\nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council\n\n", "msg_date": "Mon, 26 Jun 2000 09:06:12 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Contacting me" }, { "msg_contents": "I've finally got email working at the new flat, so you can now send me email\nto [email protected] again.\n\nThe downside, is that I have just over 2500 email to plough through ;-(\n\nPeter\n\n----- Original Message -----\nFrom: Peter Mount <[email protected]>\nTo: PostgreSQL Developers List (E-mail) <[email protected]>; PostgreSQL\nInterfaces (E-mail) <[email protected]>\nSent: Monday, June 26, 2000 9:06 AM\nSubject: [HACKERS] Contacting me\n\n\n> Please note that as I've just moved home, my home internet connection is\nnot\n> up at the moment, so it's better to send emails about the JDBC driver to\nmy\n> work's address:\n>\n> [email protected]\n>\n> Hopefully, as soon as BT get the new phone line up and running I'll be\nback\n> in business.\n>\n> Peter\n>\n> --\n> Peter Mount\n> Enterprise Support\n> Maidstone Borough Council\n> Any views stated are my own, and not those of Maidstone Borough Council\n>\n\n", "msg_date": "Sat, 8 Jul 2000 01:14:58 +0100", "msg_from": "\"Peter Mount\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Contacting me" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> I've finally got email working at the new flat, so you can now send me email\n> to [email protected] again.\n> \n> The downside, is that I have just over 2500 email to plough through ;-(\n\nI was just mentioning to Tom Lane that there are points where I can just\nread PostgreSQL mail constanly. I read my received messages, and once I\nam done, more have arrived.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\necho: cannot create /dev/ttyp3: permission denied\n", "msg_date": "Fri, 7 Jul 2000 20:31:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Contacting me" }, { "msg_contents": "> > I've finally got email working at the new flat, so you can now send me\nemail\n> > to [email protected] again.\n> >\n> > The downside, is that I have just over 2500 email to plough through ;-(\n>\n> I was just mentioning to Tom Lane that there are points where I can just\n> read PostgreSQL mail constanly. 
I read my received messages, and once I\n> am done, more have arrived.\n\nI get that a lot here, spending about 2 hours an evening reading mail, then\nrealising there's not enough time to do any programming :-(\n\nPeter\n\n\n\n", "msg_date": "Sat, 8 Jul 2000 11:41:29 +0100", "msg_from": "\"Peter Mount\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Contacting me" }, { "msg_contents": "\"Peter Mount\" <[email protected]> writes:\n\n> I get that a lot here, spending about 2 hours an evening reading mail, then\n> realising there's not enough time to do any programming :-(\n\nWhat about trying to partition into more mailing lists with more specific\ntopics ?\n\nFor instance when it comes to interfaces I'm really only interested in the\njdbc interface. \n \n\tGunnar\n", "msg_date": "12 Jul 2000 13:06:07 +0200", "msg_from": "Gunnar R|nning <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Contacting me" } ]
[ { "msg_contents": "\n> Besides which, OID alone doesn't give us a possibility of file\n> versioning, and as I commented to Vadim I think we will want that,\n> WAL or no WAL. So it seems to me the two viable choices are\n> unique-id or OID+version-number. Either way, the file-naming behavior\n> should be the same across all platforms.\n\nI do not think the only problem of a failing rename of \"temp\" to \"new\" \non startup rollforward is issue enough to justify the additional complexity\na version implys.\nWhy not simply abort startup of postmaster in such an event and let the \ndba fix it. There can be no data loss.\n\nIf e.g. the permissions of the directory are insufficient we will want to\nabort \nstartup anyway, no?\n\nAndreas\n", "msg_date": "Mon, 26 Jun 2000 10:09:13 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "File versioning (was: Big 7.1 open items) " }, { "msg_contents": "> -----Original Message-----\n> From: Zeugswetter Andreas SB\n> \n> > Besides which, OID alone doesn't give us a possibility of file\n> > versioning, and as I commented to Vadim I think we will want that,\n> > WAL or no WAL. So it seems to me the two viable choices are\n> > unique-id or OID+version-number. Either way, the file-naming behavior\n> > should be the same across all platforms.\n> \n> I do not think the only problem of a failing rename of \"temp\" to \"new\" \n> on startup rollforward is issue enough to justify the additional \n> complexity\n> a version implys.\n\nHmm,I've always mentioned about usual rollback and never mentioned\nabout rollforward on this topic AFAIR. Could you tell me what you mean\nby * on startup rollforward* ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Mon, 26 Jun 2000 19:08:54 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: File versioning (was: Big 7.1 open items) " }, { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> I do not think the only problem of a failing rename of \"temp\" to \"new\" \n> on startup rollforward is issue enough to justify the additional complexity\n> a version implys.\n\nIf that were the only reason for it then I wouldn't feel it was so\nessential. However, it will also let us fix CLUSTER, vacuuming of\nindexes, ALTER TABLE DROP COLUMN with physical removal of the column,\netc etc. Making the world safe for rollbackable RENAME/DROP/TRUNCATE\nTABLE is just one of the benefits.\n\nVersioning also eliminates a whole host of problems at the bufmgr/smgr\nlevel that are caused by having to cope with relation files getting\nrenamed out from under you. We have painfully eliminated some of these\nproblems over the past couple of years by ad-hoc, ugly techniques like\nflushing the buffer cache when doing a rename. But who's to say there\nare not more such bugs left?\n\nIn short, I think versioning is far *less* complex, not to mention more\nreliable, than the kluges we need to use to work around the lack of it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Jun 2000 10:42:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: File versioning (was: Big 7.1 open items) " } ]
[ { "msg_contents": "\n> The broad approach would be modify the existing pg_dump as little as\n> possible; I am inclined to write the data as SQL (as currently done), and\n> append an 'index' to the output, specifying the offset on the file that\n> each piece of extractable data can be found. The 'restore' option would\n> just go to the relevant section(s), and pipe the data to psql.\n\nA problem I see with an index at file end is, that you will need to read the\nfile\ntwice, and that may be very undesireable if e.g the backup is on tape\nor a compressed file.\n\nI like your idea of uniquely formatted comments heading separate sections\nof the dump file, if the \"create table ...\" is not already enough. \n\nAndreas\n", "msg_date": "Mon, 26 Jun 2000 10:17:05 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Proposal: More flexible backup/restore via pg_dump" } ]
[ { "msg_contents": "\n> > > In my mind the point of the \"database\" concept is to \n> provide a domain\n> > > within which custom datatypes and functions are available.\n> >\n> \n> AFAIK few users understand it and many users have wondered\n> why we couldn't issue cross \"database\" queries.\n\nImho the same issue is access to tables on another machine.\nIf we \"fix\" that, access to another db on the same instance is just\na variant of the above. \n\n> \n> > Quoth SQL99:\n> >\n> > \"A user-defined type is a schema object\"\n> >\n> > \"An SQL-invoked routine is an element of an SQL-schema\"\n> >\n> > I have yet to see anything in SQL that's a per-catalog \n> object. Some things\n> > are global, like users, but everything else is per-schema.\n\nYes.\n\n> So why is system catalog needed per \"database\" ?\n\nI like to use different databases on a development machine,\nbecause it makes testing easier. The only thing that\nneeds to be changed is the connect statement. All other statements\nincluding schema qualified tablenames stay exactly the same for\neach developer even though each has his own database, \nand his own version of functions.\nI have yet to see an installation that does'nt have at least one program\nthat needs access to more than one schema.\n\nOn production machines we (using Informix) use different databases \nfor different products, because it reduces the possibility of accessing\nthe wrong tables, since the syntax for accessing tables in other db's\nis different (dbname[@instancename]:\"owner\".tabname in Informix)\nThe schema does not help us, since most of our programs access \ntables from more than one schema.\n\nAnd again someone wanting Oracle'ish behavior will only create one \ndatabase per instance.\n\nAndreas\n", "msg_date": "Mon, 26 Jun 2000 11:31:06 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big 7.1 open items " }, { "msg_contents": "> -----Original Message-----\n> From: Zeugswetter Andreas SB\n> \n> > > > In my mind the point of the \"database\" concept is to \n> > provide a domain\n> > > > within which custom datatypes and functions are available.\n> > >\n> > \n> > AFAIK few users understand it and many users have wondered\n> > why we couldn't issue cross \"database\" queries.\n> \n> Imho the same issue is access to tables on another machine.\n> If we \"fix\" that, access to another db on the same instance is just\n> a variant of the above. \n>\n\nWhat is a difference between SCHAMA and your \"database\" ?\nI myself am confused about them.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Mon, 26 Jun 2000 19:08:26 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Big 7.1 open items " } ]
[ { "msg_contents": "After further thought I do think that a physical restore of a backup \ndone with e.g. tar and pg_log as first file of backup does indeed work.\n\nI had concerns with the incompletely written pg pages, but\nthose will always be last pages in table data files. The problem\nwith this page is imho not existent since the offsets for rows inside the\npage\nstay the same, data after the currently last added row will stay the same.\nThus we only have a problem, that a new row can be half added because\nit is split between two system pages. But since it will have an open \n(to be rolled back) [x?]tid in respect to pg_log we are certainly not \ninterested in this row. \n\nYes, indexes will need to be rebuilt, since they do change page layout \nand pointers (e.g. page split) during transactions.\n\nBut the simplicity of backing up your database as part of your normal\nsystem backup makes this very interesting to me, and imho to the community.\nThe speed difference compared to pg_dump is also tremendous.\n\nOne helpful improvement for this would be to add a type of file ending\nto our files, like *.dat for data and *.idx for indexes, since you will not \nwant to backup index files. Missing indexes would need to be recreated\non first backend startup.\n\nAndreas\n", "msg_date": "Mon, 26 Jun 2000 12:13:14 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "physical backup of PostgreSQL" } ]
[ { "msg_contents": "Hiroshi Inoue [mailto:[email protected]] wrote:\n> > > > > In my mind the point of the \"database\" concept is to \n> > > provide a domain\n> > > > > within which custom datatypes and functions are available.\n> > > >\n> > > \n> > > AFAIK few users understand it and many users have wondered\n> > > why we couldn't issue cross \"database\" queries.\n> > \n> > Imho the same issue is access to tables on another machine.\n> > If we \"fix\" that, access to another db on the same instance is just\n> > a variant of the above. \n> >\n> \n> What is a difference between SCHAMA and your \"database\" ?\n> I myself am confused about them.\n\nThink of it as a hierarchy:\n\tinstance -> database -> schema -> object\n\n- \"instance\" corresponds to one postmaster\n- \"database\" as in current implementation\n- \"schema\" name corresponds to the owner of the object,\nonly that a corresponding db or os user does not need to exist in\nsome of the implementations I know.\n- \"object\" is one of table, index, function ... \n\nThe database is what you connect to in your connect statement,\nyou then see all schemas inside this database only. Access to another\ndatabase would need an explicitly created synonym or different syntax.\nThe default \"schema\" name is usually the logged in user name\n(although I don't like this approach, I like Informix's approach where\nthe schema need not be specified if tabname is unique (and tabname\nis unique per db unless you specify database mode ansi)). \nAll other schemas have to be explicitly named (\"schemaname\".tabname).\n\nOracle has exactly this layout, only you are restricted to one database \nper instance. \n(They even have a \"create database ..\" statement, although it is somehow \nanalogous to our initdb).\n\nAndreas\n", "msg_date": "Mon, 26 Jun 2000 12:50:10 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big 7.1 open items " } ]
[ { "msg_contents": "Hiroshi Inoue [mailto:[email protected]] wrote:\n> > > Besides which, OID alone doesn't give us a possibility of file\n> > > versioning, and as I commented to Vadim I think we will want that,\n> > > WAL or no WAL. So it seems to me the two viable choices are\n> > > unique-id or OID+version-number. Either way, the \n> file-naming behavior\n> > > should be the same across all platforms.\n> > \n> > I do not think the only problem of a failing rename of \n> \"temp\" to \"new\" \n> > on startup rollforward is issue enough to justify the additional \n> > complexity\n> > a version implys.\n> \n> Hmm,I've always mentioned about usual rollback and never mentioned\n> about rollforward on this topic AFAIR. Could you tell me what you mean\n> by * on startup rollforward* ?\n\nsituation:\n$ alter table ...\n\ndb crash before rename is done but rest was ok.\n\n$ startup\n\nrollforward tx log which has the open entry for the rename table\n\nAndreas \n", "msg_date": "Mon, 26 Jun 2000 12:54:11 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: File versioning (was: Big 7.1 open items) " }, { "msg_contents": "Zeugswetter Andreas SB wrote:\n\n> Hiroshi Inoue [mailto:[email protected]] wrote:\n> > > > Besides which, OID alone doesn't give us a possibility of file\n> > > > versioning, and as I commented to Vadim I think we will want that,\n> > > > WAL or no WAL. So it seems to me the two viable choices are\n> > > > unique-id or OID+version-number. Either way, the\n> > file-naming behavior\n> > > > should be the same across all platforms.\n> > >\n> > > I do not think the only problem of a failing rename of\n> > \"temp\" to \"new\"\n> > > on startup rollforward is issue enough to justify the additional\n> > > complexity\n> > > a version implys.\n> >\n> > Hmm,I've always mentioned about usual rollback and never mentioned\n> > about rollforward on this topic AFAIR. Could you tell me what you mean\n> > by * on startup rollforward* ?\n>\n> situation:\n> $ alter table ...\n>\n> db crash before rename is done but rest was ok.\n>\n> $ startup\n>\n\nDoesn't above *startup* mean the startup of transaction recovery ?\nBecause I don't know Vadim's implementation about WAL well,I've\nnever taken WAL into account in this topic. I've always discussed\nabout usual rollback and so the problem you've pointed out is not\nmy point of discussion.\nSeems we have discussed this topic from different POV.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n\n", "msg_date": "Tue, 27 Jun 2000 08:32:02 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: File versioning (was: Big 7.1 open items)" } ]
[ { "msg_contents": "Vadim wrote:\n> Impossible to recover anyway - pg_control keeps last \n> checkpoint pointer, required for recovery. \n\nWhy not put this info in the tx log itself.\n\n> That's why Oracle recommends (requires?) at least\n> two copies of control file ....\n\nThis is one of the most stupid design issues Oracle has.\nI suggest you look at the tx log design of Informix.\n(No Informix dba fears to pull the power cord on his servers,\nask the same of an Oracle dba, they even fear \n\"shutdown immediate\" on a heavily used db)\n\nAndreas\n", "msg_date": "Mon, 26 Jun 2000 13:50:55 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big 7.1 open items " } ]
[ { "msg_contents": "I wrote:\n> Vadim wrote:\n> > Impossible to recover anyway - pg_control keeps last \n> > checkpoint pointer, required for recovery. \n> \n> Why not put this info in the tx log itself.\n> \n> > That's why Oracle recommends (requires?) at least\n> > two copies of control file ....\n> \n> This is one of the most stupid design issues Oracle has.\n\nThe problem is, that if you want to switch to a no fsync environment,\n(here I also mean the tx log)\nbut the possibility of losing a write is still there, you cannot sync \nwrites to two or more different files. Only one file, the tx log itself is\nallowed\nto carry lastminute information. \n\nThus you need to txlog changes to pg_control also.\n\nAndreas \n", "msg_date": "Mon, 26 Jun 2000 14:01:15 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Big 7.1 open items " } ]
[ { "msg_contents": "I've started doing a bit of work on gram.y, and am noticing some new\ncruftiness in the Makefile: if I add tokens to gram.y/keywords.c then I\ncan't just remake in that directory since parse.h is not updated\nelsewhere in the tree.\n\nI believe that the Makefile used to reach up and over in the tree to\nupdate parse.h, and I'll guess that this fell victim to a general\ncleanup (looks like Tom Lane and Peter E. have been working with the\nMakefiles, but I haven't tracked down the details).\n\nAny suggestions for a fixup? In general, I agree that having Makefiles\nmuck with stuff elsewhere in the tree is a Bad Idea, but it would be\nnice if each directory could be built on its own.\n\n - Thomas\n", "msg_date": "Mon, 26 Jun 2000 13:38:27 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Makefile for parser" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I've started doing a bit of work on gram.y, and am noticing some new\n> cruftiness in the Makefile: if I add tokens to gram.y/keywords.c then I\n> can't just remake in that directory since parse.h is not updated\n> elsewhere in the tree.\n\nUh ... what's your point? If the changes to parse.h affect anything\nelse then you ought to be doing a top-level make --- or at the very\nleast a make in src/backend --- and that will rebuild\ninclude/parser/parse.h.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Jun 2000 10:55:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Makefile for parser " }, { "msg_contents": "> > I've started doing a bit of work on gram.y, and am noticing some new\n> > cruftiness in the Makefile: if I add tokens to gram.y/keywords.c \n> > then I can't just remake in that directory since parse.h is not \n> > updated elsewhere in the tree.\n> Uh ... what's your point? If the changes to parse.h affect anything\n> else then you ought to be doing a top-level make --- or at the very\n> least a make in src/backend --- and that will rebuild\n> include/parser/parse.h.\n\nAny change to gram.y regenerates the local copy of parse.h and affects\nother files *in that local directory* (as well as elsewhere). The\nmakefile *in that local directory* should be able to make the other\nfiles *in that same directory* at the same time.\n\nThat's my point ;)\n\nistm that the local makefile should at least reach up and over to the\nother rule for building parse.h (wherever that is), so that parse.h gets\nmoved to the include/ area. If make is invoked from the top of the tree,\nthen it is a noop. If make is invoked from backend/parser/, then the\nlocal files get rebuilt correctly.\n\n - Tom\n", "msg_date": "Tue, 27 Jun 2000 01:26:48 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Makefile for parser" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Uh ... what's your point?\n\n> Any change to gram.y regenerates the local copy of parse.h and affects\n> other files *in that local directory* (as well as elsewhere). The\n> makefile *in that local directory* should be able to make the other\n> files *in that same directory* at the same time.\n\nOh, right, the files in that directory are going to include parse.h\nfrom the include dir now, instead of \".\", aren't they? 
I see your\nproblem.\n\nProbably the rule that installs parse.h into the include tree ought to\nbe pushed down from backend/Makefile to backend/parser/Makefile (but\nbackend/Makefile still needs to invoke it during its prebuildheaders\nphase). Maybe likewise for fmgroids.h into backend/utils.\n\nPeter, any thoughts here?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Jun 2000 04:38:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Makefile for parser " }, { "msg_contents": "> Oh, right, the files in that directory are going to include parse.h\n> from the include dir now, instead of \".\", aren't they? I see your\n> problem.\n\nRight.\n\n> Probably the rule that installs parse.h into the include tree ought to\n> be pushed down from backend/Makefile to backend/parser/Makefile (but\n> backend/Makefile still needs to invoke it during its prebuildheaders\n> phase). Maybe likewise for fmgroids.h into backend/utils.\n\nI've got (simple) patches which do this for parse.h. I can see why this\nwas pushed up, since it is not very clean to have Makefiles putting\ntheir fingers into other places in the tree. But for this case I don't\nsee a way out.\n\nPeter E?\n\n - Thomas\n", "msg_date": "Tue, 27 Jun 2000 14:05:11 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Makefile for parser" }, { "msg_contents": "Tom Lane writes:\n\n> > I've started doing a bit of work on gram.y, and am noticing some new\n> > cruftiness in the Makefile: if I add tokens to gram.y/keywords.c then I\n> > can't just remake in that directory since parse.h is not updated\n> > elsewhere in the tree.\n> \n> Uh ... what's your point? If the changes to parse.h affect anything\n> else then you ought to be doing a top-level make --- or at the very\n> least a make in src/backend --- and that will rebuild\n> include/parser/parse.h.\n\nI'm having a feeling that this will not work too well with parallel\nmake. Every directory needs to know how to make all the files that it\nneeds. For the case of parse.h it would not be too difficult to teach the\nfew places that need it:\n\nsrc/backend$ find -name '*.c' | xargs fgrep 'parse.h' | fgrep -v './parser/'\n./commands/command.c:#include \"parser/parse.h\"\n./commands/comment.c:#include \"parser/parse.h\"\n./nodes/outfuncs.c:#include \"parser/parse.h\"\n./tcop/utility.c:#include \"parser/parse.h\"\n\nfmgroids.h on the other hand would be trickier. We might need a\nbackend/Makefile.inc (perhaps as a wrapper around Makefile.global) to do\nit right. But I haven't gotten to the backend tree at all yet.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 27 Jun 2000 20:05:49 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Makefile for parser " }, { "msg_contents": "Thomas Lockhart writes:\n\n> I've got (simple) patches which do this for parse.h.\n\nPlease feel free to do whatever helps you right now. 
I haven't gotten to\nthe backend tree yet.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 28 Jun 2000 18:33:14 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Makefile for parser" }, { "msg_contents": "\nOn the topic of make, have you all read \"Recursive Make Considered\nHarmful\" at http://www.tip.net.au/~millerp/rmch/recu-make-cons-harm.html\n\nHe makes a good point that most people write makefiles wrong. Certainly\nthe pgsql makefiles are broken for parallel make.\n\n-- \nChris Bitmead\nmailto:[email protected]\nPeter Eisentraut wrote:\n> \n> Tom Lane writes:\n> \n> > > I've started doing a bit of work on gram.y, and am noticing some new\n> > > cruftiness in the Makefile: if I add tokens to gram.y/keywords.c then I\n> > > can't just remake in that directory since parse.h is not updated\n> > > elsewhere in the tree.\n> >\n> > Uh ... what's your point? If the changes to parse.h affect anything\n> > else then you ought to be doing a top-level make --- or at the very\n> > least a make in src/backend --- and that will rebuild\n> > include/parser/parse.h.\n> \n> I'm having a feeling that this will not work too well with parallel\n> make. Every directory needs to know how to make all the files that it\n> needs. For the case of parse.h it would not be too difficult to teach the\n> few places that need it:\n> \n> src/backend$ find -name '*.c' | xargs fgrep 'parse.h' | fgrep -v './parser/'\n> ./commands/command.c:#include \"parser/parse.h\"\n> ./commands/comment.c:#include \"parser/parse.h\"\n> ./nodes/outfuncs.c:#include \"parser/parse.h\"\n> ./tcop/utility.c:#include \"parser/parse.h\"\n> \n> fmgroids.h on the other hand would be trickier. We might need a\n> backend/Makefile.inc (perhaps as a wrapper around Makefile.global) to do\n> it right. But I haven't gotten to the backend tree at all yet.\n> \n> --\n> Peter Eisentraut Sernanders v�g 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n", "msg_date": "Sat, 01 Jul 2000 13:20:12 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Makefile for parser" }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> On the topic of make, have you all read \"Recursive Make Considered\n> Harmful\" at http://www.tip.net.au/~millerp/rmch/recu-make-cons-harm.html\n\nI read it, I don't believe a word of it. The whole thing is founded on\na bogus example, to which is added specious reasoning and an assumption\nthat everyone wants to use GCC as compiler plus a nonstandardly-patched\nversion of GNU make. This is not the real world.\n\nThe Postgres build setup is certainly far from ideal, but IMHO the only\nthing *really* wrong with it is that we're not constructing accurate\ndependency lists by default. I believe Peter E. is planning to\nfix that...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 Jul 2000 01:43:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Makefile for parser " }, { "msg_contents": "Tom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > On the topic of make, have you all read \"Recursive Make Considered\n> > Harmful\" at http://www.tip.net.au/~millerp/rmch/recu-make-cons-harm.html\n> \n> I read it, I don't believe a word of it. 
The whole thing is founded on\n> a bogus example, to which is added specious reasoning\n\n?\n\n> and an assumption\n> that everyone wants to use GCC as compiler plus a nonstandardly-patched\n> version of GNU make. This is not the real world.\n\nIt doesn't depend on using gcc. The GNU make patch referred to was put\ninto the core GNU make distribution a long time ago.\n\n> The Postgres build setup is certainly far from ideal, but IMHO the only\n> thing *really* wrong with it is that we're not constructing accurate\n> dependency lists by default. I believe Peter E. is planning to\n> fix that...\n\nIt is pretty nice if you use his recommendation to be able to type make\nat the top level and be told immediately that everything is up to date\nrather than seeing 10 pages of messages scroll past. I think you've\ndismissed him a little easily about the problems of properly specifying\ndependancies with recursive make.\n", "msg_date": "Sat, 01 Jul 2000 15:56:56 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Makefile for parser" }, { "msg_contents": "Tom Lane writes:\n\n> > On the topic of make, have you all read \"Recursive Make Considered\n> > Harmful\" at http://www.tip.net.au/~millerp/rmch/recu-make-cons-harm.html\n> \n> I read it, I don't believe a word of it.\n\nNot only do I believe some words of it, I've had essentially the same\nthoughts for quite a while. No, certainly there should not be a single\nmakefile for all of PostgreSQL. I'd certainly like to be able to do `make\n-C src/bin/psql install'. But for the backend tree where you only build\none program this certainly would make sense. And note that the author's\nexample is essentially the same we've been talking about: a parse.h file\nwith incomplete dependencies.\n\n> The whole thing is founded on a bogus example, to which is added\n> specious reasoning and an assumption that everyone wants to use GCC as\n> compiler\n\nWell, you would need a compiler that handles -c and -o at the same time,\nbut you can always use cc -c && mv if that doesn't work. I think GNU make\nmight even do that by default, but I'd have to check.\n\n> The Postgres build setup is certainly far from ideal, but IMHO the only\n> thing *really* wrong with it is that we're not constructing accurate\n> dependency lists by default.\n\nAccurate dependencies are one thing, actually knowing how to satisfy these\ndependencies is another. If the dependencies say that file X depends on\nfmgroids.h but no further information about fmgroids.h is provided then it\nwill fail if it doesn't exist, or assume on mere existance that it is up\nto date. This is again exactly what this guy is talking about.\n\n\nConcluding, I don't know how well the suggested setup would work. I\nhaven't tried it, but I certainly will. In any case there's got to be\nsomething better than maintaining 50+ makefiles that all do the same thing\nand all do the same thing wrong.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 2 Jul 2000 17:21:56 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Makefile for parser " }, { "msg_contents": "Chris Bitmead writes:\n\n> Certainly the pgsql makefiles are broken for parallel make.\n\nI think of \"broken\" for parallel make if it doesn't work at all, which\ncertainly needs to be fixed. 
\"Unsupportive\" of parallel make are things\nlike this:\n\nDIRS := initdb initlocation ipcclean pg_ctl pg_dump pg_id \\\n pg_passwd pg_version psql scripts\n\nall:\n @for dir in $(DIRS); do $(MAKE) -C $$dir $@ || exit 1; done\n\nbecause no matter how smart make is, the loop will still execute\nsequentially.\n\nBut parallel make can co-exist with recursive make, like this:\n\nDIRS := initdb initlocation ipcclean pg_ctl pg_dump pg_id \\ \n pg_passwd pg_version psql scripts\n\nall: $(DIRS:%=%-all-recursive)\n\n.PHONY: $(DIRS:%=%-all-recursive)\n\n$(DIRS:%=%-all-recursive):\n\t$(MAKE) -C $(subst -all-recursive,,$@) all\n\n\nThen again, if you want faster builds, use -pipe. I'd like to make that\nthe default but I haven't found a reliable way to test for it. GCC doesn't\nreject invalid switches in a detectable manner.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 2 Jul 2000 17:22:18 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Makefile for parser" } ]
[ { "msg_contents": "At 12:13 26/06/00 +0200, Zeugswetter Andreas SB wrote:\n>After further thought I do think that a physical restore of a backup \n>done with e.g. tar and pg_log as first file of backup does indeed work.\n\nEven if the file backup occurs in the middle of a vacuum?\n\nI like the idea of a non-SQL-based backup method, but it would be nice to\nsee some kind of locking being done to ensure that the backup is a valid\ndatabase snapshot. Can some kind of consistent page dump be done? (Rather\nthan a file copy)\n\nI believe Vadim's future plans involve reuse of data pages - would this\nhave an impact on backup integrity?\n\nMy experience with applying journals to restored databases suggests that\nthe journals need to be applied to a 'known' state of the database -\nusually a snapshot as of a certain TX Seq No.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 26 Jun 2000 23:58:21 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: physical backup of PostgreSQL" } ]
[ { "msg_contents": "At 10:17 26/06/00 +0200, Zeugswetter Andreas SB wrote:\n>\n>A problem I see with an index at file end is, that you will need to read the\n>file twice, and that may be very undesireable if e.g the backup is on tape\n>or a compressed file.\n\nThe proposal has actually come a fairly long way after extensive\ndiscussions with Tom Lane, and I have added the current plans at the end of\nthis message. The TOC-at-end problem is an issue that I am trying to deal\nwith; I am planning a 'custom' format that has the large parts (data dumps)\ncompressed, to avoid the need of compressing the entire file. This means\nthat you would not need to uncompress the entire file to get to the TOC, or\nto restore just the schema. It also allows good random access to defns and\ndata. I'm also considering putting the dumped data at the end of the file,\nbut this has issues when you want to restore table data before defining\nindexes, for example.\n\nI must admit that I've been working on the assumption that people using\nPostgreSQL don't have multi-GB (compressed) database dumps, so that (in\ntheory) a restore can be loaded onto disk from tape before being used. I\nknow this is pretty evil, but it will cover 95% of users. For those people\nwith huge backups, they will have to suffer tapes that go backward and\nforwards a bit. From the details below, you will see that this is unavoidable.\n\nSanity Check: does fseek work on tapes? If not, what is the correct way to\nread a particular block/byte from a file on a tape?\n\n-----------------------------------------------------------\nUpdated Proposal:\n-------------------------\n\nFor the sake of argument, call the new utilities pg_backup and pg_restore.\n\npg_backup\n---------\n\nDump schema [and data] in OID order (to try to make restores sequential,\nfor when tar/tape storage is used). Each dumped item has a TOC entry which\nincludes the OID and description, and for those items for which we know\nsome dependencies (functions for types & aggregates; types for tables;\nsuperclasses for classes; - any more?), it will also dump the dependency OIDs.\n\nEach object (table defn, table data, function defn, type defn etc) is\ndumped to a separate file/thing in the output file. The TOC entries go into\na separate file/thing (probably only one file/thing for the whole TOC).\n\nThe output scheme will be encapsulated, and in the initial version will be\na custom format (since I can't see an API for tar files), and a\ndump-to-a-directory format. Future use of tar, DB, PostgreSQL or even a\nMake file should not be excluded in the IO design. This last goal *may* not\nbe achieved, but I don't see why it can't be at this stage. Hopefully\nsomeone with appropriate skills & motivation can do a tar archive 8-}.\n\nThe result of a pg_backup should be a single file with metadata and\noptional data, along with whatever dependency and extra data is available\npg_backup, or provided by the DBA.\n\n\npg_restore\n----------\n\nReads a backup file and dumps SQL suitable for sending to psql.\n\nOptions will include:\n\n- No Data (--no-data? -nd? -s?)\n- No metadata (--no-schema? -ns? -d?)\n- Specification of items to dump from an input file; this allows custom\nordering AND custom selection of multiple items. Basically, I will allow\nthe user to dump part of the TOC, edit it, and tell pg_restore to use the\nedited partial TOC. (--item-list=<file>? -l=<file>?)\n- Dump TOC (--toc-only? -c?)\n\n[Wish List]\n- Data For a single table (--table=<name>? 
-t=<name>)\n- Defn/Data for a single OID; (--oid=<oid>? -o=<oid>?)\n- User definied dependencies. Allow the DB developer to specify once for\nthier DB what the dependencies are, then use that files as a guide to the\nrestore process. (--deps=<file> -D=<file>)\n\npg_restore will use the same custom IO routines to allow IO to\ntar/directory/custom files. In the first pass, I will do custom file IO.\n\nIf a user selects to restore the entire metadata, then it will be dumped\naccording to the defaul policy (OID order). If they select to specify the\nitems from an input file, then the file ordering is used.\n\n\n-------\n\nTypical backup procedure:\n\n pg_backup mydb mydb.bkp\n\nor *maybe* \n\n pg_backup mydb > mydb.bkp\n\nBUT AFAIK, fseek does not work on STDOUT, and at the current time pg_backup\nwill use fseek.\n\n\nTypical restore procedure:\n\n pg_restore mydb mydb.bkp | psql \n\nA user will be able to extract only the schema (-s), only the data (-d), a\nspecific table (-t=name), or even edit the object order and selection via:\n\n pg_restore --dump-toc mydb.bkp > mytoc.txt\n vi mytoc.txt {ie. reorder TOC elements as per known dependency problems}\n pg_restore --item-list=mytoc.txt mydb.bkp | psql\n\nFWIW, I envisage the ---dump-toc output to look like:\n\nID; FUNCTION FRED(INT4)\nID; TYPE MY_TYPE\nID; TABLE MY_TABLE\nID; DATA MY_TABLE\nID; INDEX MY_TABLE_IX1\n...etc.\n\nso editing and reordering the dump plan should not be too onerous.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 27 Jun 2000 00:30:30 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: More flexible backup/restore via\n pg_dump" }, { "msg_contents": "\n> I must admit that I've been working on the assumption that people using\n> PostgreSQL don't have multi-GB (compressed) database dumps, so that (in\n> theory) a restore can be loaded onto disk from tape before being\n> used.\n\nAre you are also assuming that a backup fits in a single file,\ni.e. that anyone with >2GB of backup has some sort of large file\nsupport?\n\n> Sanity Check: does fseek work on tapes? If not, what is the correct way to\n> read a particular block/byte from a file on a tape?\n\nAs someone else answered: no. You can't portably assume random access\nto tape blocks.\n\n> The output scheme will be encapsulated, and in the initial version will be\n> a custom format (since I can't see an API for tar files)\n\nYou can use a standard format without there being a standard API.\n\nUsing either tar or cpio format as defined for POSIX would allow a lot\nof us to understand your on-tape format with a very low burden on you\nfor documentation. (If you do go this route you might want to think\nabout cpio format; it is less restrictive about filename length than\ntar.)\n\nThere is also plenty of code lying around for reading and writing tar\nand cpio formats that you could steal^H^H^H^H^H reuse. The BSD pax\ncode should have a suitable license.\n\n> pg_restore will use the same custom IO routines to allow IO to\n> tar/directory/custom files. 
In the first pass, I will do custom file\n> IO.\n\nPresumably you'd expect this file I/O to be through some standard API\nthat other backends would also use? I'd be interested to see this;\nI've got code for an experimental libtar somewhere around here, so I\ncould offer comments at least.\n\n> BUT AFAIK, fseek does not work on STDOUT, and at the current time pg_backup\n> will use fseek.\n\nIt depends what fseek is whether it works on standard output or not.\nIf it's a pipe, no. If it's a file, yes. If it's a tape, no. If\nit's a ...\n\nNot using fseek() would be a win if you can see a way to do it.\n\nRegards,\n\nGiles\n", "msg_date": "Tue, 27 Jun 2000 07:00:41 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: More flexible backup/restore via pg_dump " }, { "msg_contents": "At 07:00 27/06/00 +1000, Giles Lean wrote:\n>\n>Are you are also assuming that a backup fits in a single file,\n>i.e. that anyone with >2GB of backup has some sort of large file\n>support?\n\nThat's up to the format used to save the database; in the case of the\n'custom' format, yes. But that is the size after compression. This is not\nsubstantially different to pg_dump's behaviour, except that pg_dump can be\npiped to a tape drive...\n\nThe objective of the API components are to (a) make it very easy to add new\nmetadata to dump (eg. tablespaces), and (b) make it easy to add new output\nformats (eg. tar archives). Basically the metadata dumping side makes one\ncall to register the thing to be saved, passing an optional function\npointer to dump data (eg. table contents) - this *could* even be used to\nimplement dumping of BLOBs.\n\nThe 'archiver' format provider must have some basic IO routines:\nRead/WriteBuf and Read/WriteByte and has a number of hook functions which\nit can use to output the data. It needs to provide at least one function\nthat actually writes data somewhere. It also has to provide the associated\nfunction to read the data.\n\n>\n>As someone else answered: no. You can't portably assume random access\n>to tape blocks.\n\nThis is probably an issue. One of the motivations for this utility it to\nallow partial restores (eg. table data for one table only), and\narbitrarilly ordered restores. But I may have a solution:\n\nwrite the schema and TOC out at the start of the file/tape, then compressed\ndata with headers for each indicating which TOC item they correspond to.\nThis metadata can be loaded into /tmp, so fseek is possible. The actual\ndata restoration (assuming constraints are not defined [THIS IS A PROBLEM])\ncan be done by scanning the rest of the tape in it's own order since RI\nwill not be an issue. I think I'm happy with this.\n\nBut the problem is the constraints: AFAIK there is no 'ALTER TABLE ADD\nCONSTRAINT...' so PK, FK, Not Null constraints have to be applied before\ndata load (*please* tell me I'm wrong). This also means that for large\ndatabases, I should apply indexes to make PK/FK checks fast, but they will\nslow data load.\n\nAny ideas?\n\n\n>> The output scheme will be encapsulated, and in the initial version will be\n>> a custom format (since I can't see an API for tar files)\n>\n>You can use a standard format without there being a standard API.\n\nBeing a relatively lazy person, I was hoping to leave that as an excercise\nfor the reader...\n\n\n>Using either tar or cpio format as defined for POSIX would allow a lot\n>of us to understand your on-tape format with a very low burden on you\n>for documentation. 
(If you do go this route you might want to think\n>about cpio format; it is less restrictive about filename length than\n>tar.)\n\nTom Lane was also very favorably disposed to tar format. As I said above,\nthe archive interfaces should be pretty amenable to adding tar support -\nit's just I'd like to get a version working with custom and directory based\nformats to ensure the flexibility is there. As I see it, the 'backup to\ndirectory' format should be easy to use as a basis for the 'backup to tar'\ncode.\n\nThe problem I have with tar is that it does not support random access to\nthe associated data. For reordering large backups, or (ultimately) single\nBLOB extraction, this is a performance problem.\n\nIf you have a tar spec (or suitably licenced code), please mail it to me,\nand I'll be able to make more informed comments.\n\n\n>Presumably you'd expect this file I/O to be through some standard API\n>that other backends would also use? I'd be interested to see this;\n>I've got code for an experimental libtar somewhere around here, so I\n>could offer comments at least.\n\nNo problem: I should have a working version pretty soon. The API is\nstrictly purpose-built; it would be adaptable to a more general archibe\nformat, but as you say, tar is fine for most purposes.\n\n\n>> BUT AFAIK, fseek does not work on STDOUT, and at the current time pg_backup\n>> will use fseek.\n>\n>Not using fseek() would be a win if you can see a way to do it.\n\nI think I probably can if I can work my way around RI problems.\nUnfortunately the most effective solution will be to allow reording of the\ntable data restoration order, but that requires multiple passes through the\nfile to find the table data...\n\n\nBye for now,\n\nPhilip\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 27 Jun 2000 19:07:03 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: More flexible backup/restore via\n pg_dump" }, { "msg_contents": "\n> If you have a tar spec (or suitably licenced code), please mail it to me,\n> and I'll be able to make more informed comments.\n\nThe POSIX tar format is documented as part of the \"Single Unix\nSpecification, version 2\" which is available from:\n\nhttp://www.opengroup.org\nhttp://www.opengroup.org/publications/catalog/t912.htm\n\nYou can download the standard as HTML. They keep moving the location\naround so if the second URL breaks start from the top. They do want an\nemail address from you and they will spam this address with\ninvitations to conferences. There's no such thing as a free lunch, I\nguess.\n\nFor source code, any FreeBSD, NetBSD, or OpenBSD mirror will have pax\nwhich understands both cpio and tar format and is BSD licensed:\n\nftp://ftp.au.netbsd.org/pub/NetBSD/NetBSD-current/src/bin/pax/\n\nRegards,\n\nGiles\n\n", "msg_date": "Tue, 27 Jun 2000 19:38:35 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: More flexible backup/restore via pg_dump " }, { "msg_contents": "At 10:55 27/06/00 +0100, Peter Mount wrote:\n>comments prefixed with PM...\n>\n>PM: So can most other Unix based formats. 
On the intranet server here, I\n>pg_dump into /tmp, then include them in a tar piped to the tape drive.\n\nSince you are using an intermediate file, there would be no change. fseek\nis only an issue on tape drives and pipes, not files on disk. I suspect\nthat most people can afford the overhead of restoring the backup file to\ndisk before restoring the database, but it'd be nice to start out as\nflexible as possible. In the back of my mind is the fact that when the WAL\nand storage manager are going, raw data backups should be possible (and\nfast) - but then again, maybe it's a pipe dream.\n\n\n>PM: The problem with blobs hasn't been with dumping them (I have some Java\n>code that does it into a compressed zip file), but restoring them - you\n>can't create a blob with a specific OID, so any references in existing\n>tables will break. I currently get round it by updating the tables after the\n>restore - but it's ugly and easy to break :(\n\nI assumed this would have to happen - hence why it will not be in the first\nversion. With all the TOAST stuff coming, and talk about the storage\nmanager, I still live in hope of a better BLOB system...\n\n\n>PM: Having a set of api's (either accessible directly into the backend,\n>and/or via some fastpath call) would be useful indeed.\n\nBy this I assume you mean APIs to get to database data, not backup data.\n\n\n>\n>This is probably an issue. One of the motivations for this utility it to\n>allow partial restores (eg. table data for one table only), and\n>arbitrarilly ordered restores. But I may have a solution:\n>\n>PM: That would be useful. I don't know about CPIO, but tar stores the TOC at\n>the start of each file (so you can actually join two tar files together and\n>still read all the files). In this way, you could put the table name as the\n>\"filename\" in the header, so partial restores could be done.\n\nWell, the way my plans work, I'll use either a munged OID, or a arbitrary\nunique ID as the file name. All meaningful access has to go via the TOC.\nBut that's just a detail; the basic idea is what I'm implementing. \n\nIt's very tempting to say tape restores are only possible in the order in\nwhich the backup file was written ('pg_restore --stream' ?), and that\nreordering/selection is only possible if you put the file on disk.\n\n\n>\n>PM: How about IOCTL's? I know that ArcServe on both NT & Unixware can seek\n>through the tape, so there must be a way of doing it.\n\nMaybe; I know BackupExec also does some kind of seek to update the TOC at\nend of a backup (which is what I need to do). Then again, maybe that's just\na rewind. I don't want to get into custom tape formats...\n\nDo we have any tape experts out there?\n\n\n>PM: Tar can do this sequentially, which I've had to do many times over the\n>years - restoring just one file from a tape, sequential access is probably\n>the only way.\n\nIt's just nasty when the user reorders the restoration of tables and\nmetadata. In the worst cast it might be hundreds of scans of the tape. I'd\nhate to have my name associated with something so unfriendly (even if it is\nthe operators fault).\n\n\n>PM: The tar spec should be around somewhere - just be careful, the obvious\n>source I was thinking of would be GPL'd, and we don't want to be poluted :-)\n\nThat was my problem. I've got some references now, and I'll look at them.\nAt least everything I've written so far can be put in PG.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. 
Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 27 Jun 2000 20:30:58 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Proposal: More flexible backup/restore via\n pg_dump" }, { "msg_contents": "\n> Do we have any tape experts out there?\n\nDon't even try. The guaranteed portable subset of tape drive,\ninterface, device driver, and OS semantics is pretty limited.\n\nI'm confident to stream one (tape) file of less than one tape capacity\nto a drive and read it back sequentially. These days you can probably\nexpect reliable end of media handling as well, but don't be too sure\nwhat value errno will have when you do hit the end of a tape.\n\nAs soon as you start looking to deal with more advanced facilities you\nwill discover portability problems:\n\n- autochanger interface differences\n- head positioning on tapes with multiple files (SysV v. BSD, anyone?)\n- random access (yup, supported on some drives ... probably all obsolete)\n- \"fast search marks\" and similar\n\nSome of these things can vary on the one OS if a tape drive is\nconnected to different interfaces, since different drivers may be\nused.\n\nBTW I'm not a tape expert. The problems in practice may be greater or\nlesser than I've suggested.\n\nI would be trying really hard to work out a backup format that allows\na one pass restore. Rummaging around in the database when making the\nbackup and using some scratch space at that time isn't too bad. Using\nscratch space upon restore is more problematic; restore problems are\ntraditionally found at the worst possible moment!\n\nRegards,\n\nGiles\n\n\n\n\n", "msg_date": "Tue, 27 Jun 2000 23:40:37 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: More flexible backup/restore via pg_dump " }, { "msg_contents": "\n> Maybe; I know BackupExec also does some kind of seek to \n> update the TOC at\n> end of a backup (which is what I need to do). Then again, \n> maybe that's just\n> a rewind. I don't want to get into custom tape formats...\n> \n> Do we have any tape experts out there?\n\nDont lock yourself in on the tape issue, it is the pipes that \nactually add value to the utility, and those can't rewind, seek\nor whatever.\n\npipes can:\n\tcompress, split output, write to storage managers, stripe output,\n.....\n\nI guess we would want two formats, one for pipe, and one for a standard\ndirectory.\n\nAndreas\n", "msg_date": "Tue, 27 Jun 2000 15:52:01 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": false, "msg_subject": "AW: Proposal: More flexible backup/restore via pg_dump " }, { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> pipes can:\n> \tcompress, split output, write to storage managers, stripe output,\n> .....\n\nRight, the thing we *really* want is to preserve the fact that pg_dump\ncan write its output to a pipeline ... and that a restore can read from\none. 
If you can improve performance when you find you do have a\nseekable source/destination file, fine, but the utilities must NOT\nrequire it.\n\n> I guess we would want two formats, one for pipe, and one for a standard\n> directory.\n\nAt the risk of becoming tiresome, \"tar\" format is eminently pipeable...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Jun 2000 10:48:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Proposal: More flexible backup/restore via pg_dump " }, { "msg_contents": "At 10:48 27/06/00 -0400, Tom Lane wrote:\n>\n>Right, the thing we *really* want is to preserve the fact that pg_dump\n>can write its output to a pipeline ... and that a restore can read from\n>one. If you can improve performance when you find you do have a\n>seekable source/destination file, fine, but the utilities must NOT\n>require it.\n\nOK, the limitation will have to be that reordering of *data* loads (as\nopposed to metadata) will not be possible in piped data. This is only a\nproblem if RI constraints are loaded.\n\nI *could* dump the compressed data to /tmp, but I would guess that in most\ncases when the archive file is being piped it's because the file won't fit\non a local disk.\n\nDoes this sound reasonable?\n\n\n>> I guess we would want two formats, one for pipe, and one for a standard\n>> directory.\n>\n>At the risk of becoming tiresome, \"tar\" format is eminently pipeable...\n>\n\nNo, it's good...I'll never feel guilty about asking for optimizer hints again.\n\nMore seriously, though, if I pipe a tar file, I still can't reorder the\n*data* files without saving them to disk, which is what I want to avoid.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 28 Jun 2000 01:08:03 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: Proposal: More flexible backup/restore via\n pg_dump" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> More seriously, though, if I pipe a tar file, I still can't reorder the\n> *data* files without saving them to disk, which is what I want to avoid.\n\nTrue. This is not an issue on the dump side, of course, since you can\nchoose what order you're going to write the tables in. On the restore\nside, you have no alternative but to restore the tables in the order\nthey appear on tape. Of course the DBA can run the restore utility\nmore than once and extract a subset of tables each time, but I don't\nsee how the restore utility can be expected to do that for him.\n(Except if it finds it does have the ability to seek in its input file,\nbut I dunno if it's a good idea to use that case for anything except\nunder-the-hood performance improvement, ie quickly skipping over the\ndata you don't need. Features that don't work all the time are not\ngood in my book.)\n\nBasically I think we want to assume that pg_dump will write the tables\nin an order that's OK for restoring. 
If we can arrange for RI checks\nnot to be installed until after all the data is loaded, this shouldn't\nbe a big problem, seems like.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Jun 2000 11:23:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Proposal: More flexible backup/restore via pg_dump " }, { "msg_contents": "At 11:23 27/06/00 -0400, Tom Lane wrote:\n>Of course the DBA can run the restore utility\n>more than once and extract a subset of tables each time, but I don't\n>see how the restore utility can be expected to do that for him.\n\nOnly works with seek (ie. a file).\n\n\n>Features that don't work all the time are not\n>good in my book.)\n\nThe *only* bit that won't work is being able to select the table data load\norder, and I can fix that by writing tables that are wanted later to /tmp\nif seek is unavailable. This *may* not be a problem, and probably should be\npresented as an option to the user if restoring from non-seekable media.\nAssuming that the backup was originally written to seekable media, I will\nbe able to tell the user how much space will be required, which should help.\n\nI don't suppose anyone knows of a way of telling if a file handle supports\nseek?\n\n\n>Basically I think we want to assume that pg_dump will write the tables\n>in an order that's OK for restoring. If we can arrange for RI checks\n>not to be installed until after all the data is loaded, this shouldn't\n>be a big problem, seems like.\n\nPart of the motivation for this utility was to allow DBAs to fix the\nordering at restore time, but otherwise I totally agree. Unfortunately I\ndon't think the RI checks can be delayed at this stage - can they? \n\nI don't suppose there is a 'disable constraints' command? Or the ability to\nset all constraints as deferrred until commit-time?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 28 Jun 2000 03:40:25 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: Proposal: More flexible backup/restore via\n pg_dump" }, { "msg_contents": "On Tue, 27 Jun 2000, Philip Warner wrote:\n\n> But the problem is the constraints: AFAIK there is no 'ALTER TABLE ADD\n> CONSTRAINT...' so PK, FK, Not Null constraints have to be applied before\n> data load (*please* tell me I'm wrong). This also means that for large\n> databases, I should apply indexes to make PK/FK checks fast, but they will\n> slow data load.\n\nActually, there is an ALTER TABLE ADD CONSTRAINT for foreign key\nconstraints. Of course, if the existing data fails the constraint the\nconstraint doesn't get made. and if you're in a transaction, it'll force\na rollback.\nIn fact, you really can't always apply foreign key constraints at\nschema reload time because you can have tables with circular\ndependencies. 
Those would have to be created after data load.\n\n", "msg_date": "Tue, 27 Jun 2000 13:10:00 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: More flexible backup/restore via pg_dump " }, { "msg_contents": "\n> I don't suppose anyone knows of a way of telling if a file handle supports\n> seek?\n\nThe traditional method is to call lseek() and see what happens.\n\n> Part of the motivation for this utility was to allow DBAs to fix the\n> ordering at restore time, but otherwise I totally agree. Unfortunately I\n> don't think the RI checks can be delayed at this stage - can they? \n\nThe current pg_dump handles the data and then adds the constraints.\n\nOtherwise there are \"chicken and egg\" problems where two tables have\nmutual RI constraints. Even at the tuple level two tuples can be\nmutually dependent.\n\nRegards,\n\nGiles\n\n\n", "msg_date": "Wed, 28 Jun 2000 08:48:52 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Proposal: More flexible backup/restore via pg_dump " }, { "msg_contents": "At 13:10 27/06/00 -0700, Stephan Szabo wrote:\n>\n>Actually, there is an ALTER TABLE ADD CONSTRAINT for foreign key\n>constraints.\n>\n\nThis is good to know; presumably at some stage in the future the rest will\nbe added, and the backup/restore can be amended to apply constraints after\ndata load. In the mean time, I suppose people with tape drives who need to\nreorder the data load will have to make multiple passes (or copy the file\nlocally).\n\n \n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 28 Jun 2000 13:05:21 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: More flexible backup/restore via\n pg_dump" }, { "msg_contents": "At 08:48 28/06/00 +1000, Giles Lean wrote:\n>\n>> Part of the motivation for this utility was to allow DBAs to fix the\n>> ordering at restore time, but otherwise I totally agree. Unfortunately I\n>> don't think the RI checks can be delayed at this stage - can they? \n>\n>The current pg_dump handles the data and then adds the constraints.\n\nNot as far as I can see; that's what I want to do, bu there is no\nimplemented syntax for doing it. pg_dump simply dumps the table definition\nwith constraints (at least on 7.0.2).\n\n>Otherwise there are \"chicken and egg\" problems where two tables have\n>mutual RI constraints. Even at the tuple level two tuples can be\n>mutually dependent.\n\nAbsolutely. And AFAICT, these happen with pg_dump.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 28 Jun 2000 13:10:39 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: Proposal: More flexible backup/restore via\n pg_dump" }, { "msg_contents": "> At 08:48 28/06/00 +1000, Giles Lean wrote:\n> >Otherwise there are \"chicken and egg\" problems where two tables have\n> >mutual RI constraints. Even at the tuple level two tuples can be\n> >mutually dependent.\n>\n> Absolutely. And AFAICT, these happen with pg_dump.\n\nThis will happen for check constraints, but not for foreign key\nconstraints...\nIt actually adds the fk constraints later with CREATE CONSTRAINT TRIGGER\nafter the data dump is finished. And, if you do separate schema and data\ndumps, the wierd statements at the top and bottom of the data dump turn\noff triggers and then turn them on again (in the most painful way possible).\nHowever those cases do not actually guarantee the validity of the data in\nbetween.\n\n", "msg_date": "Wed, 28 Jun 2000 12:06:33 -0700", "msg_from": "\"Stephan Szabo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Proposal: More flexible backup/restore via pg_dump " }, { "msg_contents": "At 12:06 28/06/00 -0700, Stephan Szabo wrote:\n>\n>This will happen for check constraints, but not for foreign key\n>constraints...\n>It actually adds the fk constraints later with CREATE CONSTRAINT TRIGGER\n>after the data dump is finished. And, if you do separate schema and data\n>dumps, the wierd statements at the top and bottom of the data dump turn\n>off triggers and then turn them on again (in the most painful way possible).\n\nThanks for this information!\n\nI had not seen those statements before; I have been mistakenly modifying\n6.5.3 sources, not 7.0.2. I will incorporate them in my work. Is there any\nway of also disabling all constraint checking while loading the data?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 29 Jun 2000 20:33:03 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: Proposal: More flexible backup/restore via\n pg_dump" }, { "msg_contents": "I think my previous message went dead...\n\n> >This will happen for check constraints, but not for foreign key\n> >constraints...\n> >It actually adds the fk constraints later with CREATE CONSTRAINT TRIGGER\n> >after the data dump is finished. And, if you do separate schema and data\n> >dumps, the wierd statements at the top and bottom of the data dump turn\n> >off triggers and then turn them on again (in the most painful way\npossible).\n>\n> Thanks for this information!\n>\n> I had not seen those statements before; I have been mistakenly modifying\n> 6.5.3 sources, not 7.0.2. I will incorporate them in my work. Is there any\n> way of also disabling all constraint checking while loading the data?\n\nWell, for unique you could remove/recreate the unique index. NOT NULL is\nprobably\nnot worth bothering with. 
Check constraints might be able to be turned off\nfor a table\nby setting relchecks to 0 in the pg_class row and the resetting it after\ndata is loaded (sort\nof like what we do on data only dumps for triggers).\n\nThe problem is that the create constraint trigger, playing with reltriggers\nand playing with\nrelchecks doesn't guarantee that the data being loaded is correct. And if\nyou remove\nand recreate a unique index, you might not get the index back at the end,\nand then\nyou've lost the information that there was supposed to be a unique or\nprimary key on\nthe table.\n\nIt might be a good idea to have some sort of pg_constraint (or whatever)\nthat holds\nthis data, since that would also make it easier to make the constraint\nnaming SQL compliant\n(no duplicate constraint names within schema - that includes automatically\ngenerated ones),\nand it might help if we ever try to make deferrable check/primary key/etc...\n\n", "msg_date": "Thu, 29 Jun 2000 11:40:13 -0700", "msg_from": "\"Stephan Szabo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Proposal: More flexible backup/restore via pg_dump " }, { "msg_contents": "\"Stephan Szabo\" <[email protected]> writes:\n>> I had not seen those statements before; I have been mistakenly modifying\n>> 6.5.3 sources, not 7.0.2. I will incorporate them in my work. Is there any\n>> way of also disabling all constraint checking while loading the data?\n\n> Well, for unique you could remove/recreate the unique index. NOT NULL\n> is probably not worth bothering with. Check constraints might be able\n> to be turned off for a table by setting relchecks to 0 in the pg_class\n> row and the resetting it after data is loaded (sort of like what we do\n> on data only dumps for triggers).\n\nThere's no need to disable NOT NULL, nor unique constraints either,\nsince those are purely local to a table --- if they're going to fail,\naltering load order doesn't prevent it. The things you need to worry\nabout are constraint expressions that cause references to other tables\n(perhaps indirectly via a function).\n\nIf we had ALTER TABLE ADD CONSTRAINT then the problem would be largely\nsolved, I believe. This should be a minor exercise --- the heavy\nlifting is already done, because heap.c's AddRelationRawConstraints()\nis already set up to be invokable on a pre-existing relation. Also\nthe parser knows how to parse ALTER TABLE ADD CONSTRAINT ... I think\nall that's missing is a few lines of glue code in command.c.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Jun 2000 15:24:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Proposal: More flexible backup/restore via pg_dump " }, { "msg_contents": "> There's no need to disable NOT NULL, nor unique constraints either,\n> since those are purely local to a table --- if they're going to fail,\n> altering load order doesn't prevent it. The things you need to worry\nIs there a speed difference with doing a copy on a table with an index\nversus creating\nthe index at the end? I've been assuming that the latter was faster (and\nthat that was\npart of what he wanted)\n\n> about are constraint expressions that cause references to other tables\n> (perhaps indirectly via a function).\nYeah, that's actually a big problem, since that's actually also a constraint\non the other table\nas well, and as far as I know, we aren't yet constraining the other table.\n\n> If we had ALTER TABLE ADD CONSTRAINT then the problem would be largely\n> solved, I believe. 
This should be a minor exercise --- the heavy\n> lifting is already done, because heap.c's AddRelationRawConstraints()\n> is already set up to be invokable on a pre-existing relation. Also\n> the parser knows how to parse ALTER TABLE ADD CONSTRAINT ... I think\n> all that's missing is a few lines of glue code in command.c.\n\nDoes the AddRelationRawConstraints() check that the constraint is satisified\nas well when\nyou add it? It didn't look like it did, but I could be missing something.\nThat's another requirement of ALTER TABLE ADD CONSTRAINT. That was the\nbit I wasn't sure how to do for other generic constraints when I added the\nforeign key one.\n\n\n", "msg_date": "Thu, 29 Jun 2000 12:50:02 -0700", "msg_from": "\"Stephan Szabo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Proposal: More flexible backup/restore via pg_dump" }, { "msg_contents": "\"Stephan Szabo\" <[email protected]> writes:\n>> If we had ALTER TABLE ADD CONSTRAINT then the problem would be largely\n>> solved, I believe. This should be a minor exercise --- the heavy\n>> lifting is already done, because heap.c's AddRelationRawConstraints()\n>> is already set up to be invokable on a pre-existing relation.\n\n> Does the AddRelationRawConstraints() check that the constraint is\n> satisified as well when you add it? It didn't look like it did, but I\n> could be missing something.\n\nOh, you're right, it does not. So you'd first have to run through the\ntable and verify that the constraint holds for each existing tuple.\nDoesn't seem like a big deal though...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Jun 2000 20:30:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Proposal: More flexible backup/restore via pg_dump " }, { "msg_contents": "At 20:30 29/06/00 -0400, Tom Lane wrote:\n>\n>Oh, you're right, it does not. So you'd first have to run through the\n>table and verify that the constraint holds for each existing tuple.\n>Doesn't seem like a big deal though...\n>\n\nDoes this mean somebody is likely to do it? It'd certainly make\nbackup/restore more reliable.\n\nI'm almost at the point of asking for testers with the revised\npg_dump/pg_restore, so I'll go with what I have for now, but it would make\nlife a lot less messy. Since the new version *allows* table restoration\nintermixed with metadata, and in any order, I need to update pg_class\nrepeatedly (I assume there may be system triggers that need to execute when\nmetadata is changed).\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 30 Jun 2000 11:46:58 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: Proposal: More flexible backup/restore via\n pg_dump" }, { "msg_contents": "> At 20:30 29/06/00 -0400, Tom Lane wrote:\n> >\n> >Oh, you're right, it does not. So you'd first have to run through the\n> >table and verify that the constraint holds for each existing tuple.\n> >Doesn't seem like a big deal though...\n> >\n> \n> Does this mean somebody is likely to do it? It'd certainly make\n> backup/restore more reliable.\n\nI'll take a stab at it. 
It might take me a while to get stuff working\nbut it shouldn't take too long before the beginnings are there.\n\n\n", "msg_date": "Thu, 29 Jun 2000 19:15:21 -0700", "msg_from": "\"Stephan Szabo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Proposal: More flexible backup/restore via pg_dump " }, { "msg_contents": "At 15:25 12/07/00 +0100, Peter Mount wrote:\n>No he didn't, just I've been sort of lurking on this subject ;-)\n>\n>Actually, tar files are simply a small header, followed by the file's\n>contents. To add another file, you simply write another header, and contents\n>(which is why you can cat two tar files together and get a working file).\n>\n>http://www.goice.co.jp/member/mo/formats/tar.html has a nice brief\n>description of the header.\n>\n\nDamn! I knew someone would call my bluff.\n\nAs you say, it looks remarkably simple.\n\nA couple of questions:\n\n\n 136 12 bytes Modify time (in octal ascii)\n\n ...do you know the format of the date (seconds since 1970?).\n\n\n 157 100 bytes Linkname ('\\0' terminated, 99 maxmum length)\n\n ...what's this? Is it the target for symlinks?\n\n\n 329 8 bytes Major device ID (in octal ascii)\n 337 8 bytes Minor device ID (in octal ascii)345 167 bytes\nPadding\n\n ...and what should I set these to?\n\n>As for a C api with a compatible licence, if needs must I'll write one to\n>your spec (maidast should be back online in a couple of days, so I'll be\n>back in business development wise).\n\nIf you're serious about the offer, I'd be happy. But, given how simple the\nformat is, I can probably tack in into place myself. \n\nThere is a minor problem. Currently I compress the output stream as I\nreceive it from PG, and send it to the archive. I don't know how big it\nwill be until it is written. The custom output format can handle this, but\nin streaming a tar file to tape, I have to know the file size first. This\nmeans writing to /tmp. I supose that's OK, but I've been trying to avoid it.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 13 Jul 2000 01:11:04 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "RE: RE: [HACKERS] pg_dump & blobs - editable dump?" }, { "msg_contents": "\n> >http://www.goice.co.jp/member/mo/formats/tar.html has a nice brief\n\nBest is to look at one of the actual standards, accessible via:\n\nhttp://www.opengroup.org\n\nThe tar and cpio formats are in the pax specification.\n\n> 136 12 bytes Modify time (in octal ascii)\n> \n> ...do you know the format of the date (seconds since 1970?).\n\nIt's just 11 bytes plus \\0 in tar's usual encode-this-as-octal format:\n\nencode_octal(unsigned char *p, size_t n, unsigned long value)\n{\n const unsigned char octal[] = \"01234567\";\n while (n) {\n *(p + --n) = octal[value & 07];\n value >>= 3;\n }\n}\n\nWarning: some values allowed by tar exceed the size of 'long' on a 32\nbit platform.\n\n> 157 100 bytes Linkname ('\\0' terminated, 99 maxmum length)\n> \n> ...what's this? 
Is it the target for symlinks?\n\nLong pathnames get split into two pieces on a '/' as I recall.\n\nThe code I offered you previously has code to do this too; I\nappreciate that the code is quite likely not what you want, but you\nmight consider looking at it or other tar/pax code to help you\ninterpret the standard.\n\n> 329 8 bytes Major device ID (in octal ascii)\n> 337 8 bytes Minor device ID (in octal ascii)\n> 345 167 bytes Padding\n> \n> ...and what should I set these to?\n\nZero.\n\n> If you're serious about the offer, I'd be happy. But, given how simple the\n> format is, I can probably tack in into place myself. \n\nFor the very limited formats you want to create, that's probably\nthe easiest way. You don't care about unpacking, GNU v. POSIX format,\ndevice files, etc etc.\n\n> There is a minor problem. Currently I compress the output stream as I\n> receive it from PG, and send it to the archive. I don't know how big it\n> will be until it is written. The custom output format can handle this, but\n> in streaming a tar file to tape, I have to know the file size first. This\n> means writing to /tmp. I supose that's OK, but I've been trying to\n> avoid it.\n\nI recommend you compress the whole stream, not the pieces. Presumably\nyou can determine the size of the pieces you're backing up, and ending\nwith a .tar.gz (or whatever) file is more convenient to manage than a\n.tar file of compressed pieces unless you really expect people to be\nextracting individual files from the backup very often.\n\nHaving to pass everything through /tmp would be really unfortunate.\n\nRegards,\n\nGiles\n", "msg_date": "Thu, 13 Jul 2000 07:58:43 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: [HACKERS] pg_dump & blobs - editable dump? " }, { "msg_contents": "At 07:58 13/07/00 +1000, Giles Lean wrote:\n>\n>I recommend you compress the whole stream, not the pieces. Presumably\n>you can determine the size of the pieces you're backing up, and ending\n>with a .tar.gz (or whatever) file is more convenient to manage than a\n>.tar file of compressed pieces unless you really expect people to be\n>extracting individual files from the backup very often.\n>\n>Having to pass everything through /tmp would be really unfortunate.\n>\n\nThe only things I compress are the table data and the blobs (ie. the big\nthings); unfortunately, the table data is of unknown uncompressed size. I\n*could* do two 'COPY TO STDOUT' calls, just to get the size, but that seems\nlike a very bad idea.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 13 Jul 2000 10:58:06 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RE: [HACKERS] pg_dump & blobs - editable dump? " }, { "msg_contents": "\nThe reason this isn't working is because there is no concept of an inherent order of rows\nin SQL. The only time things are ordered are when you explicitly request them to be,\naccording to a particular field. 
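For example (using the column names from the schema quoted below), the only dependable way to see the claimed tasks newest-first is to ask for that ordering when you read them back:\n\n\tSELECT * FROM todolist ORDER BY claimed DESC;\n\nrather than relying on the order in which the rows happened to be inserted. 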
Thus, inserting a bunch of rows is exactly the same no\nmatter what order you insert them in, and you shouldn't assume anything about the\nunderlying mechanism of insertion and oids in your application.\n\nWhat is the purpose you're trying to accomplish with this order by? No matter what, all the\nrows where done='f' will be inserted, and you will not be left with any indication of that\norder once the rows are in the todolist table.\n\n-Ben\n\nAndrew Selle wrote:\n\n> Alright. My situation is this. I have a list of things that need to be done\n> in a table called tasks. I have a list of users who will complete these tasks.\n> I want these users to be able to come in and \"claim\" the top 2 most recent tasks\n> that have been added. These tasks then get stored in a table called todolist\n> which stores who claimed the task, the taskid, and when the task was claimed.\n> For each time someone wants to claim some number of tasks, I want to do something\n> like\n>\n> INSERT INTO todolist\n> SELECT taskid,'1',now()\n> FROM tasks\n> WHERE done='f'\n> ORDER BY submit DESC\n> LIMIT 2;\n>\n> Unfortunately, when I do this I get\n> ERROR: ORDER BY is not allowed in INSERT/SELECT\n>\n> The select works fine\n>\n> aselle=> select taskid,'1',now() FROM tasks WHERE done='f' ORDER BY submit DESC LIMIT 2;\n> taskid | ?column? | now\n> --------+----------+------------------------\n> 4 | 1 | 2000-08-17 12:56:00-05\n> 3 | 1 | 2000-08-17 12:56:00-05\n> (2 rows)\n>\n> It seems to me, this is something I should do. I was wondering if there\n> is any reason why I can't do this? I've thought of a couple of workarounds\n> but they don't seem to be very clean:\n>\n> 1. Read the results of the select at the application level and reinsert into the\n> todolist table\n>\n> 2. Add two fields to the task table that keep track of userid and claimed.\n> This unfortunately clutters the main task table, and it loses the ability\n> to assign multiple people to the same task. It also requires looping at the\n> application level I think\n>\n> 3. use a temporary table with a SELECT INTO statement and then copy the contents\n> of the temporary table into the table I want it in todolist\n>\n> Below are the table creation statements for this sample...\n>\n> -Andy\n>\n> CREATE TABLE tasks (\n> taskid int4,\n> title varchar(64),\n> descr text,\n> submit datetime,\n> done boolean\n> );\n>\n> CREATE TABLE users (\n> userid int4,\n> name varchar(32)\n> );\n>\n> CREATE TABLE todolist (\n> taskid int4,\n> userid int4,\n> claimed datetime\n> );\n\n", "msg_date": "Thu, 17 Aug 2000 10:23:17 -0400", "msg_from": "Ben Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inserting a select statement result into another table" }, { "msg_contents": "Alright. My situation is this. I have a list of things that need to be done\nin a table called tasks. I have a list of users who will complete these tasks.\nI want these users to be able to come in and \"claim\" the top 2 most recent tasks\nthat have been added. 
These tasks then get stored in a table called todolist\nwhich stores who claimed the task, the taskid, and when the task was claimed.\nFor each time someone wants to claim some number of tasks, I want to do something\nlike\n\nINSERT INTO todolist \n\tSELECT taskid,'1',now() \n\tFROM tasks \n\tWHERE done='f' \n\tORDER BY submit DESC \n\tLIMIT 2;\n\nUnfortunately, when I do this I get \nERROR: ORDER BY is not allowed in INSERT/SELECT\n\nThe select works fine\n\naselle=> select taskid,'1',now() FROM tasks WHERE done='f' ORDER BY submit DESC LIMIT 2;\n taskid | ?column? | now \n --------+----------+------------------------\n 4 | 1 | 2000-08-17 12:56:00-05\n 3 | 1 | 2000-08-17 12:56:00-05\n(2 rows)\n\nIt seems to me, this is something I should do. I was wondering if there\nis any reason why I can't do this? I've thought of a couple of workarounds\nbut they don't seem to be very clean:\n\n1. Read the results of the select at the application level and reinsert into the\n todolist table\n\n2. Add two fields to the task table that keep track of userid and claimed.\n This unfortunately clutters the main task table, and it loses the ability\n to assign multiple people to the same task. It also requires looping at the\n application level I think\n\n3. use a temporary table with a SELECT INTO statement and then copy the contents\n of the temporary table into the table I want it in todolist\n\nBelow are the table creation statements for this sample...\n\n-Andy\n\n\nCREATE TABLE tasks (\n\ttaskid\tint4,\n\ttitle\tvarchar(64),\n\tdescr\ttext,\n\tsubmit\tdatetime,\n\tdone\tboolean\n);\n\nCREATE TABLE users (\n\tuserid\tint4,\n\tname\tvarchar(32)\n);\n\nCREATE TABLE todolist (\n\ttaskid\tint4,\n\tuserid\tint4,\n\tclaimed\tdatetime\n);\n\n\n\n", "msg_date": "Thu, 17 Aug 2000 13:05:17 -0500", "msg_from": "Andrew Selle <[email protected]>", "msg_from_op": false, "msg_subject": "Inserting a select statement result into another table" }, { "msg_contents": "\nHe does ask a legitimate question though. If you are going to have a\nLIMIT feature (which of course is not pure SQL), there seems no reason\nyou shouldn't be able to insert the result into a table.\n\nBen Adida wrote:\n> \n> The reason this isn't working is because there is no concept of an inherent order of rows\n> in SQL. The only time things are ordered are when you explicitly request them to be,\n> according to a particular field. Thus, inserting a bunch of rows is exactly the same no\n> matter what order you insert them in, and you shouldn't assume anything about the\n> underlying mechanism of insertion and oids in your application.\n> \n> What is the purpose you're trying to accomplish with this order by? No matter what, all the\n> rows where done='f' will be inserted, and you will not be left with any indication of that\n> order once the rows are in the todolist table.\n> \n> -Ben\n> \n> Andrew Selle wrote:\n> \n> > Alright. My situation is this. I have a list of things that need to be done\n> > in a table called tasks. I have a list of users who will complete these tasks.\n> > I want these users to be able to come in and \"claim\" the top 2 most recent tasks\n> > that have been added. 
These tasks then get stored in a table called todolist\n> > which stores who claimed the task, the taskid, and when the task was claimed.\n> > For each time someone wants to claim some number of tasks, I want to do something\n> > like\n> >\n> > INSERT INTO todolist\n> > SELECT taskid,'1',now()\n> > FROM tasks\n> > WHERE done='f'\n> > ORDER BY submit DESC\n> > LIMIT 2;\n> >\n> > Unfortunately, when I do this I get\n> > ERROR: ORDER BY is not allowed in INSERT/SELECT\n> >\n> > The select works fine\n> >\n> > aselle=> select taskid,'1',now() FROM tasks WHERE done='f' ORDER BY submit DESC LIMIT 2;\n> > taskid | ?column? | now\n> > --------+----------+------------------------\n> > 4 | 1 | 2000-08-17 12:56:00-05\n> > 3 | 1 | 2000-08-17 12:56:00-05\n> > (2 rows)\n> >\n> > It seems to me, this is something I should do. I was wondering if there\n> > is any reason why I can't do this? I've thought of a couple of workarounds\n> > but they don't seem to be very clean:\n> >\n> > 1. Read the results of the select at the application level and reinsert into the\n> > todolist table\n> >\n> > 2. Add two fields to the task table that keep track of userid and claimed.\n> > This unfortunately clutters the main task table, and it loses the ability\n> > to assign multiple people to the same task. It also requires looping at the\n> > application level I think\n> >\n> > 3. use a temporary table with a SELECT INTO statement and then copy the contents\n> > of the temporary table into the table I want it in todolist\n> >\n> > Below are the table creation statements for this sample...\n> >\n> > -Andy\n> >\n> > CREATE TABLE tasks (\n> > taskid int4,\n> > title varchar(64),\n> > descr text,\n> > submit datetime,\n> > done boolean\n> > );\n> >\n> > CREATE TABLE users (\n> > userid int4,\n> > name varchar(32)\n> > );\n> >\n> > CREATE TABLE todolist (\n> > taskid int4,\n> > userid int4,\n> > claimed datetime\n> > );\n", "msg_date": "Fri, 18 Aug 2000 09:34:33 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inserting a select statement result into another table" }, { "msg_contents": "\nWell, If I'm reading the spec correctly,\nINSERT INTO references a query expression \nwhich doesn't include ORDER BY as an option, so this\nis even less SQL since we're actually not just changing\nit to allow our non-standard bit, but we're changing\na piece that is explicitly not allowed in the spec.\n\nThat being said, I also think it's probably a useful extension\ngiven the LIMIT clause.\n\nOn Fri, 18 Aug 2000, Chris Bitmead wrote:\n\n> \n> He does ask a legitimate question though. If you are going to have a\n> LIMIT feature (which of course is not pure SQL), there seems no reason\n> you shouldn't be able to insert the result into a table.\n\n", "msg_date": "Thu, 17 Aug 2000 17:27:08 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inserting a select statement result into another table" }, { "msg_contents": "Chris Bitmead wrote:\n\n> He does ask a legitimate question though. 
If you are going to have a\n> LIMIT feature (which of course is not pure SQL), there seems no reason\n> you shouldn't be able to insert the result into a table.\n\nYes, that's true, I had missed that the first time around.\n\n-Ben\n\n", "msg_date": "Thu, 17 Aug 2000 20:53:08 -0400", "msg_from": "Ben Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inserting a select statement result into another table" }, { "msg_contents": "At 09:34 18/08/00 +1000, Chris Bitmead wrote:\n>\n>He does ask a legitimate question though. If you are going to have a\n>LIMIT feature (which of course is not pure SQL), there seems no reason\n>you shouldn't be able to insert the result into a table.\n\nThis feature is supported by two commercial DBs: Dec/RDB and SQL/Server. I\nhave no idea if Oracle supports it, but it is such a *useful* feature that\nI would be very surprised if it didn't.\n\n\n>Ben Adida wrote:\n>> \n>> What is the purpose you're trying to accomplish with this order by? No\nmatter what, all the\n>> rows where done='f' will be inserted, and you will not be left with any\nindication of that\n>> order once the rows are in the todolist table.\n\nI don't know what his *purpose* was, but the query should only insert the\nfirst two rows from the select bacause of the limit).\n\n>> Andrew Selle wrote:\n>> \n>> > Alright. My situation is this. I have a list of things that need to\nbe done\n>> > in a table called tasks. I have a list of users who will complete\nthese tasks.\n>> > I want these users to be able to come in and \"claim\" the top 2 most\nrecent tasks\n>> > that have been added. These tasks then get stored in a table called\ntodolist\n>> > which stores who claimed the task, the taskid, and when the task was\nclaimed.\n>> > For each time someone wants to claim some number of tasks, I want to\ndo something\n>> > like\n>> >\n>> > INSERT INTO todolist\n>> > SELECT taskid,'1',now()\n>> > FROM tasks\n>> > WHERE done='f'\n>> > ORDER BY submit DESC\n>> > LIMIT 2;\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 18 Aug 2000 10:58:35 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inserting a select statement result into another\n table" }, { "msg_contents": "Can I ask on a status?\n\n> At 10:17 26/06/00 +0200, Zeugswetter Andreas SB wrote:\n> >\n> >A problem I see with an index at file end is, that you will need to read the\n> >file twice, and that may be very undesireable if e.g the backup is on tape\n> >or a compressed file.\n> \n> The proposal has actually come a fairly long way after extensive\n> discussions with Tom Lane, and I have added the current plans at the end of\n> this message. The TOC-at-end problem is an issue that I am trying to deal\n> with; I am planning a 'custom' format that has the large parts (data dumps)\n> compressed, to avoid the need of compressing the entire file. This means\n> that you would not need to uncompress the entire file to get to the TOC, or\n> to restore just the schema. It also allows good random access to defns and\n> data. 
I'm also considering putting the dumped data at the end of the file,\n> but this has issues when you want to restore table data before defining\n> indexes, for example.\n> \n> I must admit that I've been working on the assumption that people using\n> PostgreSQL don't have multi-GB (compressed) database dumps, so that (in\n> theory) a restore can be loaded onto disk from tape before being used. I\n> know this is pretty evil, but it will cover 95% of users. For those people\n> with huge backups, they will have to suffer tapes that go backward and\n> forwards a bit. From the details below, you will see that this is unavoidable.\n> \n> Sanity Check: does fseek work on tapes? If not, what is the correct way to\n> read a particular block/byte from a file on a tape?\n> \n> -----------------------------------------------------------\n> Updated Proposal:\n> -------------------------\n> \n> For the sake of argument, call the new utilities pg_backup and pg_restore.\n> \n> pg_backup\n> ---------\n> \n> Dump schema [and data] in OID order (to try to make restores sequential,\n> for when tar/tape storage is used). Each dumped item has a TOC entry which\n> includes the OID and description, and for those items for which we know\n> some dependencies (functions for types & aggregates; types for tables;\n> superclasses for classes; - any more?), it will also dump the dependency OIDs.\n> \n> Each object (table defn, table data, function defn, type defn etc) is\n> dumped to a separate file/thing in the output file. The TOC entries go into\n> a separate file/thing (probably only one file/thing for the whole TOC).\n> \n> The output scheme will be encapsulated, and in the initial version will be\n> a custom format (since I can't see an API for tar files), and a\n> dump-to-a-directory format. Future use of tar, DB, PostgreSQL or even a\n> Make file should not be excluded in the IO design. This last goal *may* not\n> be achieved, but I don't see why it can't be at this stage. Hopefully\n> someone with appropriate skills & motivation can do a tar archive 8-}.\n> \n> The result of a pg_backup should be a single file with metadata and\n> optional data, along with whatever dependency and extra data is available\n> pg_backup, or provided by the DBA.\n> \n> \n> pg_restore\n> ----------\n> \n> Reads a backup file and dumps SQL suitable for sending to psql.\n> \n> Options will include:\n> \n> - No Data (--no-data? -nd? -s?)\n> - No metadata (--no-schema? -ns? -d?)\n> - Specification of items to dump from an input file; this allows custom\n> ordering AND custom selection of multiple items. Basically, I will allow\n> the user to dump part of the TOC, edit it, and tell pg_restore to use the\n> edited partial TOC. (--item-list=<file>? -l=<file>?)\n> - Dump TOC (--toc-only? -c?)\n> \n> [Wish List]\n> - Data For a single table (--table=<name>? -t=<name>)\n> - Defn/Data for a single OID; (--oid=<oid>? -o=<oid>?)\n> - User definied dependencies. Allow the DB developer to specify once for\n> thier DB what the dependencies are, then use that files as a guide to the\n> restore process. (--deps=<file> -D=<file>)\n> \n> pg_restore will use the same custom IO routines to allow IO to\n> tar/directory/custom files. In the first pass, I will do custom file IO.\n> \n> If a user selects to restore the entire metadata, then it will be dumped\n> according to the defaul policy (OID order). 
If they select to specify the\n> items from an input file, then the file ordering is used.\n> \n> \n> -------\n> \n> Typical backup procedure:\n> \n> pg_backup mydb mydb.bkp\n> \n> or *maybe* \n> \n> pg_backup mydb > mydb.bkp\n> \n> BUT AFAIK, fseek does not work on STDOUT, and at the current time pg_backup\n> will use fseek.\n> \n> \n> Typical restore procedure:\n> \n> pg_restore mydb mydb.bkp | psql \n> \n> A user will be able to extract only the schema (-s), only the data (-d), a\n> specific table (-t=name), or even edit the object order and selection via:\n> \n> pg_restore --dump-toc mydb.bkp > mytoc.txt\n> vi mytoc.txt {ie. reorder TOC elements as per known dependency problems}\n> pg_restore --item-list=mytoc.txt mydb.bkp | psql\n> \n> FWIW, I envisage the ---dump-toc output to look like:\n> \n> ID; FUNCTION FRED(INT4)\n> ID; TYPE MY_TYPE\n> ID; TABLE MY_TABLE\n> ID; DATA MY_TABLE\n> ID; INDEX MY_TABLE_IX1\n> ...etc.\n> \n> so editing and reordering the dump plan should not be too onerous.\n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.C.N. 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Oct 2000 20:49:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: More flexible backup/restore via pg_dump" }, { "msg_contents": "At 20:49 10/10/00 -0400, Bruce Momjian wrote:\n>Can I ask on a status?\n\nDone.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 11 Oct 2000 12:09:08 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: More flexible backup/restore via\n pg_dump" }, { "msg_contents": "> \n> Well, If I'm reading the spec correctly,\n> INSERT INTO references a query expression \n> which doesn't include ORDER BY as an option, so this\n> is even less SQL since we're actually not just changing\n> it to allow our non-standard bit, but we're changing\n> a piece that is explicitly not allowed in the spec.\n> \n> That being said, I also think it's probably a useful extension\n> given the LIMIT clause.\n> \n\n> On Fri, 18 Aug 2000, Chris Bitmead wrote:\n> \n> > \n> > He does ask a legitimate question though. If you are going to have a\n> > LIMIT feature (which of course is not pure SQL), there seems no reason\n> > you shouldn't be able to insert the result into a table.\n> \n> \n\nThis is an interesting idea. 
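In the meantime the temp-table route the original poster mentioned is probably the cleanest workaround -- an untested sketch, reusing his table names:\n\n\tSELECT taskid, 1 AS userid, now() AS claimed\n\tINTO TEMP tmp_claim\n\tFROM tasks WHERE done = 'f'\n\tORDER BY submit DESC LIMIT 2;\n\n\tINSERT INTO todolist SELECT * FROM tmp_claim;\n\tDROP TABLE tmp_claim;\n\nsince ORDER BY and LIMIT are accepted in a top-level SELECT ... INTO. 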
We don't allow ORDER BY in INSERT INTO ...\nSELECT because it doesn't make any sense, but it does make sense if\nLIMIT is used:\n\n\tctest=> create table x (Y oid);\n\tCREATE\n\ttest=> insert into x \n\ttest-> select oid from pg_class order by oid limit 1;\n\tERROR: LIMIT is not supported in subselects\n\nAdded to TODO:\n\n\tAllow ORDER BY...LIMIT in INSERT INTO ... SELECT\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 12 Oct 2000 13:23:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inserting a select statement result into another table" }, { "msg_contents": "Hello,\n\njust my $0.02...\nIf I do\ninsert into x\n select * from y limit 10;\n\nI will get all of rows in x inserted, not just 10...\nI already wrote about this... But did not get any useful reply.\n\n> This is an interesting idea. We don't allow ORDER BY in INSERT INTO ...\n> SELECT because it doesn't make any sense, but it does make sense if\n> LIMIT is used:\n>\n> \tctest=> create table x (Y oid);\n> \tCREATE\n> \ttest=> insert into x\n> \ttest-> select oid from pg_class order by oid limit 1;\n> \tERROR: LIMIT is not supported in subselects\n>\n> Added to TODO:\n>\n> \tAllow ORDER BY...LIMIT in INSERT INTO ... SELECT\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Fri, 13 Oct 2000 11:02:29 +0700", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inserting a select statement result into another table" }, { "msg_contents": "I reported this problem about 3 weeks ago or even more. The problem hasn't\ndisappeared yet. In 7.1beta4 if I use pg_dump with -a switch together, I\nget each CREATE SEQUENCE twice. I suspected if this is an installation\nproblem at my place but now I think it maybe isn't.\n\nYou answered that noone experienced anything like this. Here I get this\nbehaviour with the most simple table as well.\n\nCould you please help? TIA, Zoltan\n\n------------------------------------------------------------------------\nZoltan Kovacs\nsystem designing leader at Trend Ltd, J\\'aszber\\'eny\nassistant teacher in mathematics at Bolyai Institute, Szeged\n\nhttp://www.trendkft.hu\nhttp://www.math.u-szeged.hu/~kovzol\n\n\n", "msg_date": "Tue, 6 Mar 2001 16:07:36 +0100 (CET)", "msg_from": "kovacsz <[email protected]>", "msg_from_op": false, "msg_subject": "pg_dump writes SEQUENCEs twice with -a " }, { "msg_contents": "At 16:07 6/03/01 +0100, kovacsz wrote:\n>The problem hasn't\n>disappeared yet. In 7.1beta4...\n\nAs per an earlier message today, the problem is fixed in CVS\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 07 Mar 2001 03:00:01 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump writes SEQUENCEs twice with -a " }, { "msg_contents": "kovacsz wrote:\n >I reported this problem about 3 weeks ago or even more. The problem hasn't\n >disappeared yet. In 7.1beta4 if I use pg_dump with -a switch together, I\n >get each CREATE SEQUENCE twice. I suspected if this is an installation\n >problem at my place but now I think it maybe isn't.\n >\n >You answered that noone experienced anything like this. Here I get this\n >behaviour with the most simple table as well.\n\nI get the same error using 7.1beta4. See this example for a 1 table database:\n\nolly@linda$ pg_dump -a junk\n--\n-- Selected TOC Entries:\n--\n\\connect - olly\n--\n-- TOC Entry ID 1 (OID 2091620)\n--\n-- Name: \"basket_id_seq\" Type: SEQUENCE Owner: olly\n--\n\nCREATE SEQUENCE \"basket_id_seq\" start 1 increment 1 maxvalue 2147483647 \nminvalue 1 cache 1 ;\n\n--\n-- TOC Entry ID 3 (OID 2091620)\n--\n-- Name: \"basket_id_seq\" Type: SEQUENCE Owner: olly\n--\n\nCREATE SEQUENCE \"basket_id_seq\" start 1 increment 1 maxvalue 2147483647 \nminvalue 1 cache 1 ;\n\n--\n-- Data for TOC Entry ID 5 (OID 2091639) TABLE DATA basket\n--\n\n-- Disable triggers\nUPDATE \"pg_class\" SET \"reltriggers\" = 0 WHERE \"relname\" ~* 'basket';\nCOPY \"basket\" FROM stdin;\n1\t2001-03-04 19:59:58+00\n\\.\n-- Enable triggers\nBEGIN TRANSACTION;\nCREATE TEMP TABLE \"tr\" (\"tmp_relname\" name, \"tmp_reltriggers\" smallint);\nINSERT INTO \"tr\" SELECT C.\"relname\", count(T.\"oid\") FROM \"pg_class\" C, \n\"pg_trigger\" T WHERE C.\"oid\" = T.\"tgrelid\" AND C.\"relname\" ~* 'basket' GROUP \nBY 1;\nUPDATE \"pg_class\" SET \"reltriggers\" = TMP.\"tmp_reltriggers\" FROM \"tr\" TMP \nWHERE \"pg_class\".\"relname\" = TMP.\"tmp_relname\";\nDROP TABLE \"tr\";\nCOMMIT TRANSACTION;\n\n--\n-- TOC Entry ID 2 (OID 2091620)\n--\n-- Name: \"basket_id_seq\" Type: SEQUENCE SET Owner: \n--\n\nSELECT setval ('\"basket_id_seq\"', 1, 't');\n\n--\n-- TOC Entry ID 4 (OID 2091620)\n--\n-- Name: \"basket_id_seq\" Type: SEQUENCE SET Owner: \n--\n\nSELECT setval ('\"basket_id_seq\"', 1, 't');\n\nolly@linda$ \n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Go ye therefore, and teach all nations, baptizing them\n in the name of the Father, and of the Son, and of the \n Holy Ghost; Teaching them to observe all things \n whatsoever I have commanded you; and, lo, I am with \n you alway, even unto the end of the world. Amen.\" \n Matthew 28:19,20 \n\n\n", "msg_date": "Wed, 07 Mar 2001 20:48:32 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump writes SEQUENCEs twice with -a " }, { "msg_contents": "At 20:48 7/03/01 +0000, Oliver Elphick wrote:\n>kovacsz wrote:\n> >\n> >You answered that noone experienced anything like this. Here I get this\n> >behaviour with the most simple table as well.\n>\n\nIs there a problem with the lists? 
I reveived Zoltan's message twice, and\nnow this one that seems to indicate my earlier reply has not been seen.\n\nFWIW, this is fixed in CVS.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 08 Mar 2001 10:10:04 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump writes SEQUENCEs twice with -a " }, { "msg_contents": "Philip Warner wrote:\n >At 20:48 7/03/01 +0000, Oliver Elphick wrote:\n >>kovacsz wrote:\n >> >\n >> >You answered that noone experienced anything like this. Here I get this\n >> >behaviour with the most simple table as well.\n >>\n >\n >Is there a problem with the lists? I reveived Zoltan's message twice, and\n >now this one that seems to indicate my earlier reply has not been seen.\n\nNo I hadn't (and still haven't) seen your earlier reply.\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Go ye therefore, and teach all nations, baptizing them\n in the name of the Father, and of the Son, and of the \n Holy Ghost; Teaching them to observe all things \n whatsoever I have commanded you; and, lo, I am with \n you alway, even unto the end of the world. Amen.\" \n Matthew 28:19,20 \n\n\n", "msg_date": "Wed, 07 Mar 2001 23:21:56 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump writes SEQUENCEs twice with -a " }, { "msg_contents": "On Thu, 8 Mar 2001, Philip Warner wrote:\n\n> At 20:48 7/03/01 +0000, Oliver Elphick wrote:\n> >kovacsz wrote:\n> > >\n> > >You answered that noone experienced anything like this. Here I get this\n> > >behaviour with the most simple table as well.\n> >\n> \n> Is there a problem with the lists? I reveived Zoltan's message twice, and\n> now this one that seems to indicate my earlier reply has not been seen.\n> \n> FWIW, this is fixed in CVS.\nThank you, I checked the CVS (and I downloaded the new sources and tried \nto compile -- without success, I should download the whole stuff IMHO,\ne.g. postgres_fe.h is quite new to 7.1beta4 and the old sources may be\nincompatible with the new ones).\n\nZoltan \n\n", "msg_date": "Thu, 8 Mar 2001 08:17:31 +0100 (CET)", "msg_from": "Kovacs Zoltan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump writes SEQUENCEs twice with -a " }, { "msg_contents": "At 11:06 AM 2/10/2002 -0400, Tom Lane wrote:\n>It needs to get done; AFAIK no one has stepped up to do it. Do you want\n>to?\n\nMy limited reading of off_t stuff now suggests that it would be brave to \nassume it is even a simple 64 bit number (or even 3 32 bit numbers). One \nalternative, which I am not terribly fond of, is to have pg_dump write \nmultiple files - when we get to 1 or 2GB, we just open another file, and \nrecord our file positions as a (file number, file position) pair. 
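Every offset stored in the TOC would then become a little structure along the lines of\n\n\ttypedef struct\n\t{\n\t\tint\t\tfileNum;\t/* which 1-2GB chunk of the archive */\n\t\tlong\tfilePos;\t/* offset within that chunk, so 32 bits is enough */\n\t} ArchiveOffset;\t\t/* sketch only; name made up */\n\nrather than a single off_t. 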
Low tech, \nbut at least we know it would work.\n\nUnless anyone knows of a documented way to get 64 bit uint/int file \noffsets, I don't see we have mush choice.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Thu, 03 Oct 2002 23:10:48 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "\nPhilip Warner writes:\n\n> My limited reading of off_t stuff now suggests that it would be brave to \n> assume it is even a simple 64 bit number (or even 3 32 bit numbers).\n\nWhat are you reading?? If you find a platform with 64 bit file\noffsets that doesn't support 64 bit integral types I will not just be\nsurprised but amazed.\n\n> One alternative, which I am not terribly fond of, is to have pg_dump\n> write multiple files - when we get to 1 or 2GB, we just open another\n> file, and record our file positions as a (file number, file\n> position) pair. Low tech, but at least we know it would work.\n\nThat does avoid the issue completely, of course, and also avoids\nproblems where a platform might have large file support but a\nparticular filesystem might or might not.\n\n> Unless anyone knows of a documented way to get 64 bit uint/int file \n> offsets, I don't see we have mush choice.\n\nIf you're on a platform that supports large files it will either have\na straightforward 64 bit off_t or else will support the \"large files\nAPI\" that is common on Unix-like operating systems.\n\nWhat are you trying to do, exactly?\n\nRegards,\n\nGiles\n\n\n\n", "msg_date": "Fri, 04 Oct 2002 07:15:29 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "At 07:15 AM 4/10/2002 +1000, Giles Lean wrote:\n\n> > My limited reading of off_t stuff now suggests that it would be brave to\n> > assume it is even a simple 64 bit number (or even 3 32 bit numbers).\n>\n>What are you reading?? If you find a platform with 64 bit file\n>offsets that doesn't support 64 bit integral types I will not just be\n>surprised but amazed.\n\nYes, but there is no guarantee that off_t is implemented as such, nor would \nwe be wise to assume so (most docs say explicitly not to do so).\n\n\n> > Unless anyone knows of a documented way to get 64 bit uint/int file\n> > offsets, I don't see we have mush choice.\n>\n>If you're on a platform that supports large files it will either have\n>a straightforward 64 bit off_t or else will support the \"large files\n>API\" that is common on Unix-like operating systems.\n>\n>What are you trying to do, exactly?\n\nAgain yes, but the problem is the same: we need a way of making the *value* \nof an off_t portable (not just assuming it's a int64). In general that \ninvolves knowing how to turn it into a more universal data type (eg. int64, \nor even a string). Does the large file API have functions for representing \nthe off_t values that is portable across architectures? And is the API also \nportable?\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. 
|----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Fri, 04 Oct 2002 09:42:35 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "\nPhilip Warner writes:\n\n> Yes, but there is no guarantee that off_t is implemented as such, nor would \n> we be wise to assume so (most docs say explicitly not to do so).\n\nI suspect you're reading old documents, which is why I asked what you\nwere referring to. In the '80s what you are saying would have been\nbest practice, no question: 64 bit type support was not common.\n\nWhen talking of near-current systems with 64 bit off_t you are not\ngoing to find one without support for 64 bit integral types.\n\n> Again yes, but the problem is the same: we need a way of making the *value* \n> of an off_t portable (not just assuming it's a int64). In general that \n> involves knowing how to turn it into a more universal data type (eg. int64, \n> or even a string).\n\nSo you need to know the size of off_t, which will be 32 bit or 64 bit,\nand then you need routines to convert that to a portable representation.\nThe canonical solution is XDR, but I'm not sure that you want to bother\nwith it or if it has been extended universally to support 64 bit types.\n\nIf you limit the file sizes to 1GB (your less preferred option, I\nknow;-) then like the rest of the PostgreSQL code you can safely\nassume that off_t fits into 32 bits and have a choice of functions\n(XDR or ntohl() etc) to deal with them and ignore 64 bit off_t\nissues altogether.\n\nIf you intend pg_dump files to be portable avoiding the use of large\nfiles will be best. It also avoids issues on platforms such as HP-UX\nwhere large file support is available, but it has to be enabled on a\nper-filesystem basis. :-(\n\n> Does the large file API have functions for representing \n> the off_t values that is portable across architectures? And is the API also \n> portable?\n\nThe large files API is a way to access large files from 32 bit\nprocesses. It is reasonably portable, but is a red herring for\nwhat you are wanting to do. (I'm not convinced I am understanding\nwhat you're trying to do, but I have 'flu which is not helping. :-)\n\nRegards,\n\nGiles\n\n", "msg_date": "Fri, 04 Oct 2002 10:50:21 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "Giles Lean <[email protected]> writes:\n> When talking of near-current systems with 64 bit off_t you are not\n> going to find one without support for 64 bit integral types.\n\nI tend to agree with Giles on this point. A non-integral representation\nof off_t is theoretically possible but I don't believe it exists in\npractice. Before going far out of our way to allow it, we should first\nrequire some evidence that it's needed on a supported or\nlikely-to-be-supported platform.\n\ntime_t isn't guaranteed to be an integral type either if you read the\noldest docs about it ... but no one believes that in practice ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Oct 2002 23:07:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? 
" }, { "msg_contents": "Tom Lane wrote:\n> Giles Lean <[email protected]> writes:\n> > When talking of near-current systems with 64 bit off_t you are not\n> > going to find one without support for 64 bit integral types.\n> \n> I tend to agree with Giles on this point. A non-integral representation\n> of off_t is theoretically possible but I don't believe it exists in\n> practice. Before going far out of our way to allow it, we should first\n> require some evidence that it's needed on a supported or\n> likely-to-be-supported platform.\n> \n> time_t isn't guaranteed to be an integral type either if you read the\n> oldest docs about it ... but no one believes that in practice ...\n\nI think fpos_t is the non-integral one. I thought off_t almost always\nwas integral.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 23:10:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "At 11:07 PM 3/10/2002 -0400, Tom Lane wrote:\n>A non-integral representation\n>of off_t is theoretically possible but I don't believe it exists in\n>practice.\n\nExcellent. So I can just read/write the bytes in an appropriate order and \nexpect whatever size it is to be a single intXX.\n\nFine with me, unless anybody voices another opinion in the next day, I will \nproceed. I just have this vague recollection of seeing a header file with a \nmore complex structure for off_t. I'm probably dreaming.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Fri, 04 Oct 2002 13:15:29 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "\nI have made the changes to pg_dump and verified that (a) it reads old \nfiles, (b) it handles 8 byte offsets, and (c) it dumps & seems to restore \n(at least to /dev/null).\n\nI don't have a lot of options for testing it - should I just apply the \nchanges and wait for the problems, or can someone offer a bigendian machine \nand/or a 4 byte off_t machine?\n\n\n>was integral.\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Fri, 18 Oct 2002 13:42:24 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump and large files - is this a problem?" 
}, { "msg_contents": "Philip Warner <[email protected]> writes:\n> I don't have a lot of options for testing it - should I just apply the \n> changes and wait for the problems, or can someone offer a bigendian machine \n> and/or a 4 byte off_t machine?\n\nMy HP is big-endian; send in the patch and I'll check it here...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Oct 2002 09:25:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "Philip Warner writes:\n\n>\n> I have made the changes to pg_dump and verified that (a) it reads old\n> files, (b) it handles 8 byte offsets, and (c) it dumps & seems to restore\n> (at least to /dev/null).\n>\n> I don't have a lot of options for testing it - should I just apply the\n> changes and wait for the problems, or can someone offer a bigendian machine\n> and/or a 4 byte off_t machine?\n\nAny old machine has a 4-byte off_t if you configure with\n--disable-largefile. This could be a neat way to test: Make two\ninstallations configured different ways and move data back and forth\nbetween them until it changes. ;-)\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Sat, 19 Oct 2002 00:07:07 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "At 12:07 AM 19/10/2002 +0200, Peter Eisentraut wrote:\n>Any old machine has a 4-byte off_t if you configure with\n>--disable-largefile.\n\nThanks - done. I just dumped to a custom backup file, then dumped it do \nSQL, and compared each version (V7.2.1, 8 byte & 4 byte offsets), and they \nall looked OK. Also, the 4 byte version reads the 8 byte offset version \ncorrectly - although I have not checked reading > 4GB files with 4 byte \noffset, but it's not a priority for obvious reasons.\n\nSo once Giles gets back to me (Monday), I'll commit the changes.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Sat, 19 Oct 2002 14:15:54 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump and large files - is this a problem?" } ]
[ { "msg_contents": "\n> At 12:13 26/06/00 +0200, Zeugswetter Andreas SB wrote:\n> >After further thought I do think that a physical restore of a backup \n> >done with e.g. tar and pg_log as first file of backup does \n> indeed work.\n> \n> Even if the file backup occurs in the middle of a vacuum?\n\nI guess not, since at vacuum time the rows are moved inside the pages,\nbut that does not seem like a real world limitation to me.\n\n> I like the idea of a non-SQL-based backup method, but it \n> would be nice to\n> see some kind of locking being done to ensure that the backup \n> is a valid\n> database snapshot. Can some kind of consistent page dump be \n> done? (Rather\n> than a file copy)\n\nYes, it is the current smgr that makes any locking obsolete.\n\n> I believe Vadim's future plans involve reuse of data pages - \n> would this\n> have an impact on backup integrity?\n\nYes.\n\n> My experience with applying journals to restored databases \n> suggests that\n> the journals need to be applied to a 'known' state of the database -\n> usually a snapshot as of a certain TX Seq No.\n\nYes, if Vadim changes things to overwrite. \nOf course we could have a db mode where nothing is overwritten,\nsince we have the code for that.\n\nAndreas\n", "msg_date": "Mon, 26 Jun 2000 16:51:32 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: physical backup of PostgreSQL" } ]
[ { "msg_contents": "\n> Sanity Check: does fseek work on tapes? If not, what is the \n> correct way to\n> read a particular block/byte from a file on a tape?\n\nThis is usually not possible, meaning that to read backwards you have to\nrewind \nto the beginning of tape, then seek to your position. Same is usually true\nif you \nuse a pipe to a storage manager.\n\nAndreas\n", "msg_date": "Mon, 26 Jun 2000 16:56:30 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Proposal: More flexible backup/restore via pg_dump" } ]
[ { "msg_contents": "> BTW we are about to take in tablespace concept. You would\n> need another information(the name of the symlink to a directory\n> ,would be = tablespaceOID) for WAL logging.\n\nDo we need *both* database & tablespace to find table file ?!\nImho, database shouldn't be used...\n\n?\n\nVadim\n", "msg_date": "Mon, 26 Jun 2000 11:25:50 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big 7.1 open items " }, { "msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> Do we need *both* database & tablespace to find table file ?!\n> Imho, database shouldn't be used...\n\nThat'd work fine for me, but I think Bruce was arguing for paths that\nincluded the database name. We'd end up with paths that go something\nlike\n\t..../data/tablespaces/TABLESPACEOID/RELATIONOID\n(plus some kind of decoration for segment and version), so you'd have\na hard time telling which files in a tablespace belong to which\ndatabase. Doesn't bother me a whole lot, personally --- if one wants\nto know that one could just as well assign separate tablespaces to\ndifferent databases. They're only subdirectories anyway...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Jun 2000 18:03:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big 7.1 open items " }, { "msg_contents": "Mikheev, Vadim writes:\n\n> Do we need *both* database & tablespace to find table file ?!\n> Imho, database shouldn't be used...\n\nThen the system tables from different databases would collide.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 27 Jun 2000 20:07:34 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Big 7.1 open items " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Mikheev, Vadim writes:\n>> Do we need *both* database & tablespace to find table file ?!\n>> Imho, database shouldn't be used...\n\n> Then the system tables from different databases would collide.\n\nI've been assuming that we would create a separate tablespace for\neach database, which would be the location of that database's\nsystem tables. It's probably also the default tablespace for user\ntables created in that database, though it wouldn't have to be.\n\nThere should also be a known tablespace for the installation-wide tables\n(pg_shadow et al).\n\nWith this approach tablespace+relation would indeed be a sufficient\nidentifier. We could even eliminate the knowledge that certain\ntables are installation-wide from the bufmgr and below (currently\nthat knowledge is hardwired in places that I'd rather didn't know\nabout it...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Jun 2000 17:00:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big 7.1 open items " }, { "msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > Mikheev, Vadim writes:\n> >> Do we need *both* database & tablespace to find table file ?!\n> >> Imho, database shouldn't be used...\n> \n> > Then the system tables from different databases would collide.\n> \n> I've been assuming that we would create a separate tablespace for\n> each database, which would be the location of that database's\n> system tables. 
It's probably also the default tablespace for user\n> tables created in that database, though it wouldn't have to be.\n> \n> There should also be a known tablespace for the installation-wide tables\n> (pg_shadow et al).\n> \n> With this approach tablespace+relation would indeed be a sufficient\n> identifier. We could even eliminate the knowledge that certain\n> tables are installation-wide from the bufmgr and below (currently\n> that knowledge is hardwired in places that I'd rather didn't know\n> about it...)\n\nWell, if we did that, I can see a good reason to not use per-database\ndirectories in the tablepspace.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Jun 2000 17:16:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big 7.1 open items" }, { "msg_contents": "Tom Lane writes:\n\n> I've been assuming that we would create a separate tablespace for\n> each database, which would be the location of that database's\n> system tables.\n\nThen I can't put more than one database into a table space? But I can put\nmore than one table space into a database? I think that's the wrong\nhierarchy. More specifically, I think it's wrong that there is a hierarchy\nhere at all. Table spaces and databases don't have to know about each\nother in any predefined way.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 28 Jun 2000 20:37:35 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big 7.1 open items " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> I've been assuming that we would create a separate tablespace for\n>> each database, which would be the location of that database's\n>> system tables.\n\n> Then I can't put more than one database into a table space? But I can put\n> more than one table space into a database?\n\nYou can put *user* tables from more than one database into a table space.\nThe restriction is just on *system* tables.\n\nAdmittedly this is a tradeoff. We could avoid it along the lines you\nsuggest (name table files like DBOID.RELOID.VERSION instead of just\nRELOID.VERSION) but is it really worth it? Vadim's concerned about\nevery byte that has to go into the WAL log, and I think he's got a\ngood point.\n\n> I think that's the wrong\n> hierarchy. More specifically, I think it's wrong that there is a hierarchy\n> here at all. Table spaces and databases don't have to know about each\n> other in any predefined way.\n\nThey don't, at least not at the smgr level. In my view of how this\nshould work, the smgr *only* knows about tablespaces and tables.\nDatabases are a higher-level construct.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Jun 2000 21:05:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big 7.1 open items " }, { "msg_contents": "Tom Lane writes:\n\n> You can put *user* tables from more than one database into a table space.\n> The restriction is just on *system* tables.\n\nI think my understanding as a user would be that a table space represents\na storage location. If I want to put a table/object/entire database on a\nfancy disk somewhere I create a table space for it there. 
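Something along the lines of the (purely hypothetical, syntax still to be decided)\n\n\tCREATE TABLESPACE fastdisk LOCATION '/mnt/raid1/pgsql';\n\tCREATE TABLE big_table (id int4, payload text) IN TABLESPACE fastdisk;\n\nis what I would expect to type for that. 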
But if I want to\nstore all my stuff under /usr/local/pgsql/data then I wouldn't expect to\nhave to create more than one table space. So the table spaces become at\nthat point affected by the logical hierarchy: I must make sure to have\nenough table spaces to have many databases.\n\nMore specifically, what would the user interface to this look like?\nClearly there has to be some sort of CREATE TABLESPACE command. Now does\nCREATE DATABASE imply a CREATE TABLESPACE? I think not. Do you have to\ncreate a table space before creating each database? I think not.\n\n> We could avoid it along the lines you suggest (name table files like\n> DBOID.RELOID.VERSION instead of just RELOID.VERSION) but is it really\n> worth it?\n\nI only intended that for pg_class and other bootstrap-sort-of tables,\nmaybe all system tables. Normal heap files could look like RELOID.VERSION,\nwhereas system tables would look like \"name.DBOID\". Clearly there's no\nmarket for renaming system tables or dropping any of their columns. We're\nobviously going to have to treat pg_class special anyway.\n\n> Vadim's concerned about every byte that has to go into the WAL log,\n> and I think he's got a good point.\n\nTrue. But if you only do it for the system tables then it might take less\nspace than keeping track of lots of table spaces that are unneeded. :-)\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Fri, 30 Jun 2000 01:32:20 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big 7.1 open items " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> You can put *user* tables from more than one database into a table space.\n>> The restriction is just on *system* tables.\n\n> More specifically, what would the user interface to this look like?\n> Clearly there has to be some sort of CREATE TABLESPACE command. Now does\n> CREATE DATABASE imply a CREATE TABLESPACE? I think not. Do you have to\n> create a table space before creating each database? I think not.\n\nI would say that CREATE DATABASE just implicitly creates a new\ntablespace that's physically located right under the toplevel data\ndirectory of the installation, no symlink. What's wrong with that?\nYou need not keep anything except the system tables of the DB there\nif you don't want to. In practice, for someone who doesn't need to\nworry about tablespaces (because they put the installation on a disk\nwith enough room for their purposes), the whole thing acts exactly\nthe same as it does now.\n\n>> We could avoid it along the lines you suggest (name table files like\n>> DBOID.RELOID.VERSION instead of just RELOID.VERSION) but is it really\n>> worth it?\n\n> I only intended that for pg_class and other bootstrap-sort-of tables,\n> maybe all system tables. Normal heap files could look like RELOID.VERSION,\n> whereas system tables would look like \"name.DBOID\".\n\nThat would imply that the very bottom levels of the system know all\nabout which tables are system tables and which are not (and, if you\nare really going to insist on the \"name\" part of that, that they\nknow what name goes with each system-table OID). I'd prefer to avoid\nthat. 
The less the smgr knows about the upper levels of the system,\nthe better.\n\n> Clearly there's no market for renaming system tables or dropping any\n> of their columns.\n\nNo, but there is a market for compacting indexes on system relations,\nand I haven't heard a good proposal for doing index compaction in place.\nSo we need versioning for system indexes.\n\n>> Vadim's concerned about every byte that has to go into the WAL log,\n>> and I think he's got a good point.\n\n> True. But if you only do it for the system tables then it might take less\n> space than keeping track of lots of table spaces that are unneeded. :-)\n\nAgain, WAL should not need to distinguish system and user tables.\n\nAnd as for the keeping track, the tablespace OID will simply replace the\ndatabase OID in the log and in the smgr interfaces. There's no \"extra\"\ncost, except maybe by comparison to a system with neither tablespaces\nnor multiple databases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Jun 2000 20:43:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big 7.1 open items " }, { "msg_contents": "Tom Lane writes:\n\n> In practice, for someone who doesn't need to worry about tablespaces\n> (because they put the installation on a disk with enough room for\n> their purposes), the whole thing acts exactly the same as it does now.\n\nBut I'd venture the guess that for someone who wants to use tablespaces it\nwouldn't work as expected. Table spaces should represent a physical\nstorage location. Creation of table spaces should be a restricted\noperation, possibly more than, but at least differently from, databases.\nEventually, table spaces probably will have attributes, such as\noptimization parameters (random_page_cost). This will not work as expected\nif you intermix them with the databases.\n\nI'd expect that if I have three disks and 50 databases, then I make three\ntablespaces and assign the databases to them. I'll bet lunch that if we\ndon't do it that way that before long people will come along and ask for\nsomething that does work this way.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 1 Jul 2000 17:03:42 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big 7.1 open items " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I'd expect that if I have three disks and 50 databases, then I make three\n> tablespaces and assign the databases to them.\n\nIn our last installment, you were complaining that you didn't want to\nbe bothered with that ;-)\n\nBut I don't see any reason why CREATE DATABASE couldn't take optional\nparameters indicating where to create the new DB's default tablespace.\nWe already have a LOCATION option for it that does something close to\nthat.\n\nCome to think of it, it would probably make sense to adapt the existing\nnotion of \"location\" (cf initlocation script) into something meaning\n\"directory that users are allowed to create tablespaces (including\ndatabases) in\". If there were an explicit table of allowed locations,\nit could be used to address the protection issues you raise --- for\nexample, a location could be restricted so that only some users could\ncreate tablespaces/databases in it. 
$PGDATA/data would be just the\nfirst location in every installation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 Jul 2000 13:37:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big 7.1 open items " }, { "msg_contents": "Tom Lane writes:\n\n> Come to think of it, it would probably make sense to adapt the existing\n> notion of \"location\" (cf initlocation script) into something meaning\n> \"directory that users are allowed to create tablespaces (including\n> databases) in\".\n\nThis is what I've been trying to push all along. But note that this\nmechanism does allow multiple databases per location. :)\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 2 Jul 2000 17:22:43 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big 7.1 open items " } ]
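A minimal sketch of what the layout and commands discussed in this thread might look like. None of the syntax below is settled: the LOCATION clause on CREATE TABLESPACE, the TABLESPACE clause on CREATE TABLE, and all names, paths, and OIDs are illustrative assumptions, not a committed design.

-- The existing per-database location option referred to above:
CREATE DATABASE accounting WITH LOCATION = '/mnt/raid/pgsql';

-- A hypothetical explicit tablespace, registered at an allowed location:
CREATE TABLESPACE fastdisk LOCATION '/mnt/raid/pgsql';

-- Under the proposal, user tables from any database could be placed in it:
CREATE TABLE invoices (id integer, amount numeric) TABLESPACE fastdisk;

On disk, following the path scheme quoted earlier in the thread (segment and version decoration mostly omitted; the OIDs are made up):

.../data/tablespaces/16385/18231      -- tablespace OID 16385, relation OID 18231
.../data/tablespaces/16385/18231.1    -- next 1 GB segment of the same relation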
[ { "msg_contents": "[Forwarding this here, as I don't know how to answer the second\nquestion.]\n[LRO]\n\n-------- Original Message --------\nSubject: RE: config.sub and config.guess for PostgreSQL compilation on\nLinux S/390\nDate: Mon, 26 Jun 2000 14:15:45 -0400\nFrom: \"Ferguson, Neale\" <[email protected]>\nTo: Lamar Owen <[email protected]>\n\nHi again,\n I'm in the process of forward fitting the patches to 7.0.2. They are\ntrivial so I don't anticipate problems there. One file missing from the\npatches I referred you to was the linux_s390 template which I've\nreconstructed to look like:\nAROPT:crs\nCFLAGS:-O0\nSHARED_LIB:-fpic\nALL:\nSRCH_INC:\nSRCH_LIB:\nUSE_LOCALE:no\nDLSUFFIX:.so\nYFLAGS:-d\nYACC:bison -y\n\nI've chosen -O0 as the level of compiler I'm using has a couple of bugs.\nThe\nlatest one doesn't but I've yet to upgrade to it. I use ./configure\n--with-template=linux_s390 --with-odbc --with-perl\nOne question, how do I get it to automatically pick up linux_s390? As a\nclue\nhere's what I get with uname:\nuname -m\ns390\nuname -r\n2.2.14\nuname -s\nLinux\nuname -v\n#1 SMP Sun Feb 20 06:20:12 EST 2000\n\n> -----Original Message-----\n> And thanks for your reply. I have forwarded the information to the\n> PostgreSQL developers list. I would like to see if the new PostgreSQL\n> 7.0.2 version can be brought up on S390, as the patches are for 6.5.3.\n> \n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n>\n", "msg_date": "Mon, 26 Jun 2000 14:49:00 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: RE: config.sub and config.guess for PostgreSQL compilation on\n\tLinux S/390]" }, { "msg_contents": "Lamar Owen writes:\n\n> I'm in the process of forward fitting the patches to 7.0.2. They are\n> trivial so I don't anticipate problems there.\n\nHow is this S/390 setup supported by GNU libtool? I'm sure it is since\nmany other packages use libtool, but I'm wondering whether special patches\nwould be required. We're planning on moving to that sometime.\n\n> One file missing from the patches I referred you to was the linux_s390\n> template which I've reconstructed to look like:\n\n AROPT:crs\n CFLAGS:-O0\n SHARED_LIB:-fpic\n ALL:\n SRCH_INC:\n SRCH_LIB:\n- USE_LOCALE:no\n DLSUFFIX:.so\n- YFLAGS:-d\n- YACC:bison -y\n\n> I've chosen -O0 as the level of compiler I'm using has a couple of bugs.\n\nNote that PostgreSQL also had a couple of bugs that violated standard C,\nwhich are now mostly fixed. Perhaps that affects you. In any case you\nperhaps want to look at the 7.1 development branch.\n\n\n> The latest one doesn't but I've yet to upgrade to it. I use\n> ./configure --with-template=linux_s390 --with-odbc --with-perl One\n> question, how do I get it to automatically pick up linux_s390?\n\nThe path of least resistance is to name the template file after the\nconfig.guess output. (I assume that you'll provide those guys with patches\nas well.) Otherwise there's a file src/templates/.similar that matches\ncanonical host names to template files. (Yeah, we really shouldn't have\nhidden files like that.)\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 27 Jun 2000 20:07:55 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Fwd: RE: config.sub and config.guess for PostgreSQL\n\tcompilation on Linux S/390]" } ]
[ { "msg_contents": "postgresql 7.0.2\nHPUX 10.20\n\nconfigure --without-CXX --prefix=$HOME/pgsql --enable-cassert --enable-debug\n\ncreate table a ( a integer, b integer );\ncreate table b ( b integer, c integer );\ncreate table c ( c integer, d integer );\n\n-- core dumps\nselect * from a natural join b natural join c;\n-- so does this\nselect * from a join b using (b) join c using (c);\n-- this seems to work\nselect * from a join b on a.b = b.b join c on b.c = c.c;\n\nback trace from the 'natural join' form shows:\n#0 0xfef54 in parseFromClause (pstate=0x401caac8, frmList=0x401ca9e0) at parse_clause.c:505\n#1 0xfe248 in makeRangeTable (pstate=0x401caac8, frmList=0x401ca9e0) at parse_clause.c:57\n#2 0xf0958 in transformSelectStmt (pstate=0x401caac8, stmt=0x401ca9f8) at analyze.c:1417\n#3 0xee208 in $0000001A () at analyze.c:238\n#4 0xedbe8 in parse_analyze (pl=0x401caab0, parentParseState=0x0) at analyze.c:75\n#5 0xfd9f4 in parser (str=0x40091628 \"select * from a natural join b natural join c\\n\", typev=0x0, \n nargs=0) at parser.c:64\n#6 0x1724a8 in pg_parse_and_rewrite (\n query_string=0x40091628 \"select * from a natural join b natural join c\\n\", typev=0x0, nargs=0, \n aclOverride=0 '\\000') at postgres.c:395\n#7 0x172974 in pg_exec_query_dest (\n query_string=0x40091628 \"select * from a natural join b natural join c\\n\", dest=Debug, \n aclOverride=0 '\\000') at postgres.c:580\n#8 0x1728fc in pg_exec_query (\n query_string=0x40091628 \"select * from a natural join b natural join c\\n\") at postgres.c:562\n#9 0x1742d8 in $00000061 () at postgres.c:1590\n#10 0xed150 in main (argc=3, argv=0x7b03ac00) at main.c:103\n\n\nThe debug output shows it doing the first join correctly. It seems\nto flake when it is running the attributes of the join to find the\ncorrect attribute for the second join.\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n", "msg_date": "Mon, 26 Jun 2000 16:22:19 -0400", "msg_from": "\"Mark Hollomon\" <[email protected]>", "msg_from_op": true, "msg_subject": "'natural join' core dump" }, { "msg_contents": "\"Mark Hollomon\" <[email protected]> writes:\n> -- core dumps\n> select * from a natural join b natural join c;\n> -- so does this\n> select * from a join b using (b) join c using (c);\n> -- this seems to work\n> select * from a join b on a.b = b.b join c on b.c = c.c;\n\nThis is a previously reported problem that's on Thomas' todo list.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Jun 2000 18:33:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 'natural join' core dump " }, { "msg_contents": "> > -- core dumps\n> > select * from a natural join b natural join c;\n> > -- so does this\n> > select * from a join b using (b) join c using (c);\n> This is a previously reported problem that's on Thomas' todo list.\n\nYup. And I've just recently started getting a bit of time for\ndevelopment again, so should be able to look at it soon.\n\n - Thomas\n", "msg_date": "Tue, 27 Jun 2000 01:58:22 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 'natural join' core dump" } ]
[ { "msg_contents": "I am running PostgreSQL on a linux box (Red Hat). I have set everything up\nto successfully communicate with MS-Access (which is on a Windows 98 machine\non my network). From Access I can import tables from PostgreSQL\nsuccessfully. I can also link to PostgreSQL tables and update them from\nAccess. What I really want to do though is repeatedly export existing\nAccess tables (which are constantly being updated by our software) to\nPostgreSQL. When I try to do this the table name appears on the Linux box\nbut the table isn't really set up. Also, once the table name is there I\ncan't export again or I get the error that the table already exists. Should\nI be able to do this? If not, what's the best way to keep the up-to-date\ninfo from the Access tables current in PostgreSQL. Any ideas or tips would\nbe greatly appreciated.\n\t\t\t\t\t\t\tThanks,\n\t\t\t\t\t\t\t Jenni Jaeger\n\n", "msg_date": "Mon, 26 Jun 2000 15:13:43 -0700", "msg_from": "\"Jenni Jaeger\" <[email protected]>", "msg_from_op": true, "msg_subject": "connection to Access tables" } ]