[ { "msg_contents": "> > Well, I've dropped index but INSERTs still take 70 sec and \n> > COPY just 1sec -:(((\n> \n> Well, for those that have fsync turned off we could actually \n> avoid most of the writes, could'nt we ? Just leave the page\n> marked dirty. We would only need to write each new page once.\n> The problem as I see it is, that we don't have a good place \n> where the writes would actually be done. Now they are obviously\n> done after each insert.\n\nI've run test without fsync and with all inserts in *single*\ntransaction - there should be no write after each insert...\n\nVadim\n\n", "msg_date": "Fri, 26 May 2000 14:04:49 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Berkeley DB... " }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> > > Well, I've dropped index but INSERTs still take 70 sec and \n> > > COPY just 1sec -:(((\n> > \n> > Well, for those that have fsync turned off we could actually \n> > avoid most of the writes, could'nt we ? Just leave the page\n> > marked dirty. We would only need to write each new page once.\n> > The problem as I see it is, that we don't have a good place \n> > where the writes would actually be done. Now they are obviously\n> > done after each insert.\n> \n> I've run test without fsync and with all inserts in *single*\n> transaction - there should be no write after each insert...\n\nWatch out. I think Vadim is settled into San Franciso and is getting\nfired up again... :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 26 May 2000 19:41:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB..." 
}, { "msg_contents": "On Fri, 26 May 2000, Mikheev, Vadim wrote:\n> > > Well, I've dropped index but INSERTs still take 70 sec and \n> > > COPY just 1sec -:(((\n> > \n> > Well, for those that have fsync turned off we could actually \n> > avoid most of the writes, could'nt we ? Just leave the page\n> > marked dirty. We would only need to write each new page once.\n> > The problem as I see it is, that we don't have a good place \n> > where the writes would actually be done. Now they are obviously\n> > done after each insert.\n> \n> I've run test without fsync and with all inserts in *single*\n> transaction - there should be no write after each insert...\n\nYes, but if you don't do the inserts in one big transaction\nand don't issue transaction statements ( no begin or commit )\nthen you get the behavior I described.\n\nAndreas\n", "msg_date": "Sun, 28 May 2000 09:30:38 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Berkeley DB..." } ]
[ { "msg_contents": "Can someone point me to where I can obtain libpq-fe.h? I'm trying to build the postgres driver for AOL server and this file is missing from the AOL source code. I downloaded postgres 7.0 source code and couldn't find it there. \n\nthanks in advance,\n\n-andrew joseph\n\n\n\n\n\n\n\nCan someone point me to where I can obtain \nlibpq-fe.h?  I'm trying to build the postgres driver for AOL server and \nthis file is missing from the AOL source code.  I downloaded postgres 7.0 \nsource code and couldn't find it there.  \n \nthanks in advance,\n \n-andrew joseph", "msg_date": "Fri, 26 May 2000 19:00:10 -0400", "msg_from": "\"Andy Joseph\" <[email protected]>", "msg_from_op": true, "msg_subject": "where is libpq-fe.h" }, { "msg_contents": "On Fri, 26 May 2000, Andy Joseph wrote:\n> \n> Can someone point me to where I can obtain libpq-fe.h? I'm trying to build the postgres driver for AOL server and this file is missing from the AOL source code. I downloaded postgres 7.0 source code and couldn't find it there. \n> \n> thanks in advance,\n\nIt is distributed as part of PostgreSQL 7.0 -- after the ./configure; make;\nmake install sequence, check in the include directory of the PostgreSQL install\ntree. On a RedHat system, it goes into /usr/include/pgsql. On a standard\ninstallation, using /usr/local/pgsql as the installation dir, it should be in\n/usr/local/pgsql/include AFTER the make install. You need to specify the\nPostgreSQL include directory in the AOLserver nspostgres makefile.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 26 May 2000 19:59:08 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: where is libpq-fe.h" } ]
[ { "msg_contents": "\nIn preparation for a v7.0.1 release on Monday, we've just branched off\nv7.0 from the working tree ...\n\nThose that want access to the \"STABLE\" version, please use\n-rREL7_0_PATCHES when you do your check out ...\n\nElse, you'll get the development tree ... :)\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 26 May 2000 23:04:49 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL v7.0 branched ..." } ]
[ { "msg_contents": "I'm finally up and running (sort of) after upgrading my formerly\nrevlocked Linux system at home.\n\nTatsuo, I've got one last problem I was hoping someone could help me\nwith:\n\nI've got a Fujitsu MO640 SCSI disk drive. At the moment, the system is\nvery unhappy with it; when I write to it the system ends up trying to\nwrite to sectors way off the end of the drive, which throws SCSI errors.\n\nIn looking at Altavista, most of the matches on a reference to this\ndrive are in Japanese :( Any hints from the web would be appreciated.\nI'm running Linux 2.2.14 from the Mandrake 7.0.2 distribution. My old\nRedHat 5.2 system required kernel patches (which I had gotten from a\nsite in Japan), but ran with the drive without problems. btw, the\nfundamental problem is with the 2048byte sector size on the disk.\n\nThanks.\n\n - Tom\n\n", "msg_date": "Sat, 27 May 2000 02:36:01 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Back online" }, { "msg_contents": "> I'm finally up and running (sort of) after upgrading my formerly\n> revlocked Linux system at home.\n> \n> Tatsuo, I've got one last problem I was hoping someone could help me\n> with:\n> \n> I've got a Fujitsu MO640 SCSI disk drive. At the moment, the system is\n> very unhappy with it; when I write to it the system ends up trying to\n> write to sectors way off the end of the drive, which throws SCSI errors.\n> \n> In looking at Altavista, most of the matches on a reference to this\n> drive are in Japanese :( Any hints from the web would be appreciated.\n> I'm running Linux 2.2.14 from the Mandrake 7.0.2 distribution. My old\n> RedHat 5.2 system required kernel patches (which I had gotten from a\n> site in Japan), but ran with the drive without problems. 
btw, the\n> fundamental problem is with the 2048byte sector size on the disk.\n\nI have found Fujitsu's hard disk drives page written in English.\n\nhttp://www.fujitsu.co.jp/hypertext/hdd/drive/disk_e.html\n\nI could not find exactly the same drive you are using on that page,\nhowever.\n--\nTatsuo Ishii\n", "msg_date": "Sat, 27 May 2000 23:29:50 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Back online" }, { "msg_contents": "> > I've got a Fujitsu MO640 SCSI disk drive. At the moment, the system is\n> > very unhappy with it; when I write to it the system ends up trying to\n> > write to sectors way off the end of the drive, which throws SCSI errors.\n> > In looking at Altavista, most of the matches on a reference to this\n> > drive are in Japanese :( Any hints from the web would be appreciated.\n> > I'm running Linux 2.2.14 from the Mandrake 7.0.2 distribution. My old\n> > RedHat 5.2 system required kernel patches (which I had gotten from a\n> > site in Japan), but ran with the drive without problems. btw, the\n> > fundamental problem is with the 2048byte sector size on the disk.\n> I have found Fujitsu's hard disk drives page written in English.\n> http://www.fujitsu.co.jp/hypertext/hdd/drive/disk_e.html\n> I could not find exactly the same drive you are using on that page,\n> however.\n\nAh, that is because the MO640 is a magneto-optical drive, which is\nusually listed in a different area of the company.\n\nWhen I did an Altavista search on \"MO640\" many of the references were in\nJapanese (and a few in German), and I was hoping that there would be\nsome mention of how to use the drive with Linux 2.2.x kernels. 
If you\nhave a chance to look, that would be great.\n\nTIA\n\n - Thomas\n", "msg_date": "Wed, 31 May 2000 15:47:38 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Back online" }, { "msg_contents": "> > I have found Fujitsu's hard disk drives page written in English.\n> > http://www.fujitsu.co.jp/hypertext/hdd/drive/disk_e.html\n> > I could not find exactly the same drive you are using on that page,\n> > however.\n> \n> Ah, that is because the MO640 is a magneto-optical drive, which is\n> usually listed in a different area of the company.\n\nOh I didn't know that. I am going to look for Fujitsu's English MO\ndisk pages...\n\n> When I did an Altavista search on \"MO640\" many of the references were in\n> Japanese (and a few in German), and I was hoping that there would be\n> some mention of how to use the drive with Linux 2.2.x kernels. If you\n> have a chance to look, that would be great.\n\nI am not Linux kernel guru at all but I can read Japanese pages:-)\nWill look at.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 01 Jun 2000 12:11:15 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Back online" }, { "msg_contents": "Hi,\n\nThomas Lockhart:\n> When I did an Altavista search on \"MO640\" many of the references were in\n> Japanese (and a few in German), and I was hoping that there would be\n> some mention of how to use the drive with Linux 2.2.x kernels. If you\n> have a chance to look, that would be great.\n> \nIt's a SCSI drive. You connect it, insert a fresh disk, run fdisk and\nmke2fs on it, mount the partitions, and work with it.\n\nIt's no different from every other removable disk. There's nothing\nspecial that needs to be done for this device -- that's why you don't\nfind anything. ;-)\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. 
| http://smurf.noris.de/\n-- \nProblem mit cookie: File exists \n", "msg_date": "Thu, 8 Jun 2000 13:03:04 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Back online" }, { "msg_contents": "> > > I have found Fujitsu's hard disk drives page written in English.\n> > > http://www.fujitsu.co.jp/hypertext/hdd/drive/disk_e.html\n> > > I could not find exactly the same drive you are using on that page,\n> > > however.\n> > \n> > Ah, that is because the MO640 is a magneto-optical drive, which is\n> > usually listed in a different area of the company.\n> \n> Oh I didn't know that. I am going to look for Fujitsu's English MO\n> disk pages...\n\nFound.\n\nhttp://www.fujitsu.co.jp/hypertext/aboutmo/en/index.html\n\n--\nTatsuo Ishii\n", "msg_date": "Thu, 08 Jun 2000 21:34:36 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Back online" }, { "msg_contents": "> It's no different from every other removable disk. There's nothing\n> special that needs to be done for this device -- that's why you don't\n> find anything. ;-)\n\nHmm. That is what I'm hearing, so I'm not sure why my Adaptec/MO640 is\nblowing chunks when I try reading or writing.\n\nWill keep poking at it :(\n\n - Thomas\n", "msg_date": "Wed, 14 Jun 2000 12:56:10 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Back online" }, { "msg_contents": "> http://www.fujitsu.co.jp/hypertext/aboutmo/en/index.html\n\nThanks Tatsuo. I'm not seeing why I have a problem with the drive, but\nwill keep looking.\n\n - Thomas\n", "msg_date": "Wed, 14 Jun 2000 12:56:54 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Back online" }, { "msg_contents": "On Wed, 14 Jun 2000, Thomas Lockhart wrote:\n> Hmm. 
That is what I'm hearing, so I'm not sure why my Adaptec/MO640 is\n> blowing chunks when I try reading or writing.\n> \n> Will keep poking at it :(\n\nMO drives have a really bad habit of collecting dust/lint on the\noptics. This is why it's important to store drives without disks in them.\n\nTo clean the optics you'll have to open the drive up and somehow gain\naccess to them. On the larger 5.25\" mechanisms this can be fairly easy\n(In the case of the Sony SMO-X501 drives), to fairly annoying (Any HP\nmechanism.) I've got a 3.5\" Olympus mechanism that I need to clean and I\nsuspect it may be on the difficult side. The 3.5\" Fujitsu mechanism looks\na little easier to get access to.\n\nUse a q-tip and alcohol.\n\nIf you can actually find cleaning media then use that.\n\n-- \n| Matthew N. Dodd | '78 Datsun 280Z | '75 Volvo 164E | FreeBSD/NetBSD |\n| [email protected] | 2 x '84 Volvo 245DL | ix86,sparc,pmax |\n| http://www.jurai.net/~winter | This Space For Rent | ISO8802.5 4ever |\n\n", "msg_date": "Wed, 14 Jun 2000 13:18:41 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Back online" } ]
[ { "msg_contents": "\nI've made changes to gram.y to reduce the conflicts down to a couple of\nharmless shift/reduce conflicts, but I don't seem to have the right\nblack-magic incantations to remove these last ones. They seem to be\nrelated to ONLY syntax. Can anybody help?\n\n\nftp://ftp.tech.com.au/pub/gram.y.gz\nftp://ftp.tech.com.au/pub/y.output.gz\nftp://ftp.tech.com.au/pub/patch.only.gz\n", "msg_date": "Sat, 27 May 2000 14:13:00 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "gram.y help, ONLY syntax" } ]
[ { "msg_contents": "Can I rename the directory pl/tcl to pl/pltcl? Is that OK?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 27 May 2000 00:13:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Rename pl/tcl to pl/pltcl" }, { "msg_contents": "* Bruce Momjian <[email protected]> [000526 21:52] wrote:\n> Can I rename the directory pl/tcl to pl/pltcl? Is that OK?\n\nRepo copies are a pain in the butt. :)\n\nSimply cvs rm'ing them and cvs add'ing them will strip all the history,\nsomeone needs to go in and actually copy the files over and mess with\nthe ,v files so that the earlier versions don't \"pop-up\" when someone\nchecks out older source.\n\nIf this is a real nessesity and no one has done this before you'll\nwant to mail Peter Wemm <[email protected]> and ask him how he\nworks the magic for us.\n\nIt's really better if the names don't change at all.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Fri, 26 May 2000 22:43:24 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rename pl/tcl to pl/pltcl" }, { "msg_contents": "Alfred Perlstein <[email protected]> writes:\n>> Can I rename the directory pl/tcl to pl/pltcl? Is that OK?\n\n> It's really better if the names don't change at all.\n\nI agree ... 
a marginal improvement in consistency of directory names\nis not worth the pain involved here.\n\nBesides, some of us would argue that if anything is to be done here,\nit should be removing the redundant \"pl\" from the *other* subdirectories\nof src/pl ;-).\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 27 May 2000 02:06:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rename pl/tcl to pl/pltcl " } ]
[ { "msg_contents": "When will we see inner/outer join support? How far away is that?\n\n:)\n\nCraig\n\n", "msg_date": "Sat, 27 May 2000 06:44:11 -0400", "msg_from": "CB <[email protected]>", "msg_from_op": true, "msg_subject": "Probably already asked but" }, { "msg_contents": "\nLast schedule shows it coming in for v7.1 ... haven't heard anything\nrecently to change that ...\n\nOn Sat, 27 May 2000, CB wrote:\n\n> When will we see inner/outer join support? How far away is that?\n> \n> :)\n> \n> Craig\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 29 May 2000 00:29:59 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Probably already asked but" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Sat, 27 May 2000, CB wrote:\n>> When will we see inner/outer join support? How far away is that?\n\n> Last schedule shows it coming in for v7.1 ... haven't heard anything\n> recently to change that ...\n\nEr, I thought we were planning 7.2 for that? Unless WAL takes longer\nthan Vadim thinks it will...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 May 2000 00:12:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Probably already asked but" } ]
[ { "msg_contents": "I have a table:\n\nCREATE TABLE \"link_tst\" (\n \"c_id\" int4,\n \"s_date\" timestamp,\n \"r_id\" int4,\n \"r_tally\" int8\n);\n\ncreate unique INDEX \"link_tst_idx\" on \"link_tst\"\n USING btree (r_id, c_id, s_date);\n\ncreate rule link_tst_rule\n as on insert to link_tst\n where exists (\n select\n c_id\n from\n link_tst\n where \n r_id = NEW.r_id\n AND c_id = NEW.c_id\n AND s_date = NEW.s_date\n ) do instead\n update\n link_tst\n set\n r_hits = r_hits + NEW.r_hits\n where\n r_id = NEW.r_id\n AND c_id = NEW.c_id\n AND s_date = NEW.s_date\n; \n\nnow when i select from another table with identical fields\nbut not the UNIQUE qualifier on it's index (there may be duplicates)\nI get this:\n\nselect now(); insert into link_tst select * from r_link; select now(); \n now\n------------------------\n 2000-05-27 15:08:23-07\n(1 row)\n\nERROR: Cannot insert a duplicate key into unique index link_tst_idx\n now\n------------------------\n 2000-05-27 15:10:14-07\n(1 row)\n\nHow is that possible? My only guess is that the rule is only being applied\nto the table _before_ the query, and if there actually are duplicate rows\nto be inserted the rule isn't catching them because the exists clause is\nonly running on the snapshot of the table before the insert starts.\n\nis there a workaround or is this a possible bug?\n\n--\n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Sat, 27 May 2000 15:58:30 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "but i _really can't_ insert a duplicate key!" }, { "msg_contents": "* Alfred Perlstein <[email protected]> [000527 16:03] wrote:\n> \n> How is that possible? 
My only guess is that the rule is only being applied\n> to the table _before_ the query, and if there actually are duplicate rows\n> to be inserted the rule isn't catching them because the exists clause is\n> only running on the snapshot of the table before the insert starts.\n> \n> is there a workaround or is this a possible bug?\n\nOk, this was my fault, it seems the rule system takes a snapshot of the\ntable at the start of an insert from select op and my rule wasn't catching\nthe rows that were inserted during the insert. (basically confirmed my\nsuspicions)\n\nI found the duplicate row in my original table and once it was\nremoved the inserts seem to work perfectly.\n\nIt would be nice to have an exception handler that could be executed\nwhen an insert fails because of various reasons, something like:\n\ncreate rule update_instead_of_insert as on exception to mytable \n where exception = violates_unique\n do update ....\n\nThis would reduce the amount of searching because the insert rule only\nhappens when there is an exception instead of forcing an extra lookup\nbefore each insert.\n\nAnyhow, I can always wish. :)\n\nthanks,\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Sun, 28 May 2000 21:49:29 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: but i _really can't_ insert a duplicate key!" } ]
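The exception-driven upsert Alfred wishes for in this thread did eventually become standard SQL: PostgreSQL 9.5 introduced `INSERT ... ON CONFLICT DO UPDATE`, which folds the duplicate-key case into the insert itself, so there is no pre-insert EXISTS lookup and no snapshot race of the kind his rule ran into. A minimal sketch of that pattern, using Python's stdlib `sqlite3` (SQLite 3.24+ accepts the same syntax) and a trimmed `link_tst` schema; note the tally column is named `r_tally` throughout, where the original post mixes `r_tally` and `r_hits`:

```python
import sqlite3

# Trimmed version of the link_tst schema from the thread; the UNIQUE
# constraint plays the role of link_tst_idx.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE link_tst (
        c_id    INTEGER,
        s_date  TEXT,
        r_id    INTEGER,
        r_tally INTEGER,
        UNIQUE (r_id, c_id, s_date)
    )
""")

def add_hit(c_id, r_id, s_date, hits):
    # One statement: no pre-insert EXISTS lookup, no snapshot race.
    conn.execute("""
        INSERT INTO link_tst (c_id, s_date, r_id, r_tally)
        VALUES (?, ?, ?, ?)
        ON CONFLICT (r_id, c_id, s_date)
        DO UPDATE SET r_tally = r_tally + excluded.r_tally
    """, (c_id, s_date, r_id, hits))

add_hit(1, 7, "2000-05-27", 3)
add_hit(1, 7, "2000-05-27", 4)  # duplicate key: folded into the same row
rows = conn.execute("SELECT r_tally FROM link_tst").fetchall()
print(rows)  # -> [(7,)]
```

Running `add_hit` twice for the same key leaves a single row with the tallies summed, which is exactly the behaviour the `link_tst_rule` in this thread tries to emulate.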
[ { "msg_contents": "> > Well, I've dropped index but INSERTs still take 70 sec and \n> > COPY just 1sec -:(((\n> >\n> \n> Did you run vacuum after dropping indexes ?\n> Because DROP INDEX doesn't update relhasindex of pg_class,\n> planner/executer may still look up pg_index.\n\nActually, I dropped and re-created table without indices...\n\nVadim\n", "msg_date": "Sat, 27 May 2000 21:44:52 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Berkeley DB... " }, { "msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n>>>> Well, I've dropped index but INSERTs still take 70 sec and \n>>>> COPY just 1sec -:(((\n\nMebbe so, but you can't blame it all on parse/plan overhead.\n\nI did some experimentation on this with current sources, using a test\ncase of inserting 100,000 rows of 16 columns (the regression \"tenk1\"\ntable's contents repeated 10 times). Each test was started with a\nfreshly created empty table. The initial runs were done with all\npostmaster options except -F defaulted. All numbers are wall-clock time\nin seconds; the \"+\" column is the time increase from the previous case:\n\nload via COPY, fsync off:\n0 indexes\t24.45s\n1 index\t\t48.88s\t\t+ 24.43\n2 indexes\t62.65s\t\t+ 13.77\n3 indexes\t96.84s\t\t+ 34.19\n4 indexes\t134.09s\t\t+ 37.25\n\nload via INSERTs, fsync off, one xact (begin/end around all inserts):\n0 indexes\t194.95s\n1 index\t\t247.21s\t\t+ 52.26\n2 indexes\t269.69s\t\t+ 22.48\n3 indexes\t307.33s\t\t+ 37.64\n4 indexes\t352.72s\t\t+ 45.39\n\nload via INSERTs, fsync off, separate transaction for each insert:\n0 indexes\t236.53s\n1 index\t\t295.96s\t\t+ 59.43\n2 indexes\t323.40s\t\t+ 27.44\n[ got bored before doing 3/4 index cases ... 
]\n\nload via INSERTs, fsync on, separate transactions:\n0 indexes\t5189.99s\n[ don't want to know how long it will take with indexes :-( ]\n\nSo while the parse/plan overhead looks kinda bad next to a bare COPY,\nit's not anything like a 70:1 penalty. But an fsync per insert is\nthat bad and worse.\n\nI then recompiled with -pg to learn more about where the time was going.\nOne of the useful places to look at is calls to FileSeek, since mdread,\nmdwrite, and mdextend all call it. To calibrate these numbers, the\ntable being created occupies 2326 pages and the first index is 343\npages.\n\nInserts (all in 1 xact), no indexes:\n 0.00 0.00 1/109528 init_irels [648]\n 0.00 0.00 85/109528 mdread [592]\n 0.01 0.00 2327/109528 mdextend [474]\n 0.01 0.00 2343/109528 mdwrite [517]\n 0.23 0.00 104772/109528 _mdnblocks [251]\n[250] 0.0 0.24 0.00 109528 FileSeek [250]\nInserts (1 xact), 1 index:\n 0.00 0.00 1/321663 init_irels [649]\n 0.00 0.00 2667/321663 mdextend [514]\n 0.10 0.00 55478/321663 mdread [277]\n 0.11 0.00 58096/321663 mdwrite [258]\n 0.38 0.00 205421/321663 _mdnblocks [229]\n[213] 0.1 0.60 0.00 321663 FileSeek [213]\nCOPY, no indexes:\n 0.00 0.00 1/109527 init_irels [431]\n 0.00 0.00 84/109527 mdread [404]\n 0.00 0.00 2327/109527 mdextend [145]\n 0.00 0.00 2343/109527 mdwrite [178]\n 0.07 0.00 104772/109527 _mdnblocks [77]\n[83] 0.0 0.07 0.00 109527 FileSeek [83]\nCOPY, 1 index:\n 0.00 0.00 1/218549 init_irels [382]\n 0.00 0.00 2667/218549 mdextend [220]\n 0.07 0.00 53917/218549 mdread [106]\n 0.08 0.00 56542/218549 mdwrite [99]\n 0.14 0.00 105422/218549 _mdnblocks [120]\n[90] 0.0 0.30 0.00 218549 FileSeek [90]\n\nThe extra _mdnblocks() calls for the inserts/1index case seem to be from\nthe pg_index scans in ExecOpenIndices (which is called 100000 times in\nthe inserts case but just once in the COPY case). We know how to fix\nthat. Otherwise the COPY and INSERT paths seem to be pretty similar as\nfar as actual I/O calls go. 
The thing that jumps out here, however, is\nthat it takes upwards of 50000 page reads and writes to prepare a\n343-page index. Most of the write calls turn out to be from\nBufferReplace, which is pretty conclusive evidence that the default\nsetting of -B 64 is not enough for this example; we need more buffers.\n\nAt -B 128, inserts/0index seems about the same, inserts/1index traffic is\n 0.00 0.00 1/270331 init_irels [637]\n 0.01 0.00 2667/270331 mdextend [510]\n 0.06 0.00 29798/270331 mdread [354]\n 0.06 0.00 32444/270331 mdwrite [277]\n 0.40 0.00 205421/270331 _mdnblocks [229]\n[223] 0.1 0.52 0.00 270331 FileSeek [223]\nAt -B 256, inserts/1index traffic is\n 0.00 0.00 1/221849 init_irels [650]\n 0.00 0.00 2667/221849 mdextend [480]\n 0.01 0.00 5556/221849 mdread [513]\n 0.01 0.00 8204/221849 mdwrite [460]\n 0.37 0.00 205421/221849 _mdnblocks [233]\n[240] 0.0 0.40 0.00 221849 FileSeek [240]\nAt -B 512, inserts/1index traffic is\n 0.00 0.00 1/210788 init_irels [650]\n 0.00 0.00 25/210788 mdread [676]\n 0.00 0.00 2667/210788 mdextend [555]\n 0.00 0.00 2674/210788 mdwrite [564]\n 0.27 0.00 205421/210788 _mdnblocks [248]\n[271] 0.0 0.28 0.00 210788 FileSeek [271]\n\nSo as long as the -B setting is large enough to avoid thrashing, there\nshouldn't be much penalty to making an index. I didn't have time to run\nthe COPY cases but I expect they'd be about the same.\n\nBottom line is that where I/O costs are concerned, the parse/plan\noverhead for INSERTs is insignificant except for the known problem\nof wanting to rescan pg_index for each INSERT. The CPU overhead is\nsignificant, at least if you're comparing no-fsync performance ...\nbut as I commented before, I doubt we can do a whole lot better in\nthat area for simple INSERTs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 May 2000 01:28:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB... 
" }, { "msg_contents": "> -----Original Message-----\n> From: Mikheev, Vadim [mailto:[email protected]]\n> \n> > > Well, I've dropped index but INSERTs still take 70 sec and \n> > > COPY just 1sec -:(((\n> > >\n> > \n> > Did you run vacuum after dropping indexes ?\n> > Because DROP INDEX doesn't update relhasindex of pg_class,\n> > planner/executer may still look up pg_index.\n> \n> Actually, I dropped and re-created table without indices...\n>\n\nOops,aren't you testing in 6.5.3 ?\nExecOpenIndices() always refers to pg_index in 6.5.x.\nCurrently it doesn't refer to pg_index if relhasindex is\nfalse. \n\nRegards.\n\nHiroshi Inoue\[email protected] \n", "msg_date": "Mon, 29 May 2000 13:51:42 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Berkeley DB... " } ]
[ { "msg_contents": "Hi,\n\nI saw some errors on the regression test, and they are new to me.\nHere are the output from checkresults.\n\n====== rules ======\n1168d1167\n< pg_indexes | SELECT c.relname AS tablename, i.relname AS indexname, pg_get_indexdef(x.indexrelid) AS indexdef FROM pg_index x, pg_class c, pg_class i WHERE ((c.oid = x.indrelid) AND (i.oid = x.indexrelid));\n1187c1186\n< (20 rows)\n---\n> (19 rows)\n====== foreign_key ======\n11a12\n> NOTICE: _outNode: don't know how to print type 726 \n235a237\n> NOTICE: _outNode: don't know how to print type 726 \n\nI guess the error on foreign_key is caused by -d 3 turned on(does this\nhurt anything?) But what about the rules?\n\nLinux RedHat 4.2, multibyte support enabled.\n--\nTatsuo Ishii\n\n", "msg_date": "Sun, 28 May 2000 13:56:22 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Regression test failure on 7.0-STABLE" }, { "msg_contents": "Oops.\n\n> I saw some errors on the regression test, and they are new to me.\n> Here are the output from checkresults.\n> \n> ====== rules ======\n> 1168d1167\n> < pg_indexes | SELECT c.relname AS tablename, i.relname AS indexname, pg_get_indexdef(x.indexrelid) AS indexdef FROM pg_index x, pg_class c, pg_class i WHERE ((c.oid = x.indrelid) AND (i.oid = x.indexrelid));\n> 1187c1186\n> < (20 rows)\n> ---\n> > (19 rows)\n> ====== foreign_key ======\n> 11a12\n> > NOTICE: _outNode: don't know how to print type 726 \n> 235a237\n> > NOTICE: _outNode: don't know how to print type 726 \n> \n> I guess the error on foreign_key is caused by -d 3 turned on(does this\n> hurt anything?) But what about the rules?\n> \n> Linux RedHat 4.2, multibyte support enabled.\n\nForget about the rules failure. I was tweaking pg_log file, and that\nwas the problem. 
Initdb again and the error has gone.\n--\nTatsuo Ishii\n\n", "msg_date": "Sun, 28 May 2000 14:30:30 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression test failure on 7.0-STABLE" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> I saw some errors on the regression test, and they are new to me.\n> Here are the output from checkresults.\n\n> ====== rules ======\n> 1168d1167\n> < pg_indexes | SELECT c.relname AS tablename, i.relname AS indexname, pg_get_indexdef(x.indexrelid) AS indexdef FROM pg_index x, pg_class c, pg_class i WHERE ((c.oid = x.indrelid) AND (i.oid = x.indexrelid));\n> 1187c1186\n> < (20 rows)\n> ---\n>> (19 rows)\n> ====== foreign_key ======\n> 11a12\n>> NOTICE: _outNode: don't know how to print type 726 \n> 235a237\n>> NOTICE: _outNode: don't know how to print type 726 \n\n> I guess the error on foreign_key is caused by -d 3 turned on(does this\n> hurt anything?) But what about the rules?\n\nThe NOTICEs are caused by a known omission: the foreign-key boys didn't\nbother to add an outfuncs.c routine for node type FkConstraint. Should\nbe pretty harmless AFAIK. The rules discrepancy is much more\ndisturbing. Is it repeatable?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 May 2000 01:36:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression test failure on 7.0-STABLE " } ]
[ { "msg_contents": " There was some talk a while back of changing the blocksize to 32k in\norder to increase possible row size, I need to do this (and don't want to\nscrew anything up) but am unsure about how to go about it.. Is BLKSZ (if\nthat's even correct) simply a PG variable somewhere?\n\nI see this in my LINT kernel configuration file (I'm using FreeBSD 4.0)\n\n# BLKDEV_IOSIZE sets the default block size used in user block\n# device I/O. Note that this value will be overriden by the label\n# when specifying a block device from a label with a non-0\n# partition blocksize. The default is PAGE_SIZE.\n#\noptions BLKDEV_IOSIZE=8192\n\nThat sounds like what I'm looking for and since it's set to 8k, looks like\nwhat I'm looking for -- so would I be correct in changing this value and\nre-compiling my kernel?\n\nThanks!!!\n\n-Mitch\n\n\n", "msg_date": "Sun, 28 May 2000 10:50:41 -0400", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": true, "msg_subject": "Increasing row size beyond 8k" }, { "msg_contents": "\"Mitch Vincent\" <[email protected]> writes:\n> There was some talk a while back of changing the blocksize to 32k in\n> order to increase possible row size, I need to do this (and don't want to\n> screw anything up) but am unsure about how to go about it.. Is BLKSZ (if\n> that's even correct) simply a PG variable somewhere?\n\nIt's near the head of src/include/config.h (or config.h.in if you\nare doing the edit before running configure). Edit, rebuild the\nsystem, away you go. 
Don't forget you will need to initdb after\ninstalling the rebuilt postmaster...\n\n> That sounds like what I'm looking for, and it's already set to 8k -- so would\nI be correct in changing this value and\nre-compiling my kernel?\n\nNo no no, do not touch your kernel.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 May 2000 23:54:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing row size beyond 8k " } ]
[ { "msg_contents": "It's bothered me for some time that backend files need to be compiled\nwith -I src/backend as well as -I src/include. AFAICT this is just\nbecause the two header files that are generated on-the-fly (parse.h\nand fmgroids.h, formerly known as fmgr.h) are included from src/backend\nrather than being inserted into the include tree, which it seems to me\nis where they should be. Any objections if I rearrange the makefiles\nso that these files get placed under include/ when they are built,\nand then -I src/backend goes away?\n\n(In case anyone is wondering, there are no platform-dependencies in\neither file. We could distribute them as part of the distribution\ntarball --- in fact we already do so for parse.h. So I don't see\nthat installing them into src/include would create any problems for\nmultiplatform builds.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 May 2000 14:23:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Proposed cleanup of generated header files" }, { "msg_contents": "> It's bothered me for some time that backend files need to be compiled\n> with -I src/backend as well as -I src/include. AFAICT this is just\n> because the two header files that are generated on-the-fly (parse.h\n> and fmgroids.h, formerly known as fmgr.h) are included from src/backend\n> rather than being inserted into the include tree, which it seems to me\n> is where they should be. Any objections if I rearrange the makefiles\n> so that these files get placed under include/ when they are built,\n> and then -I src/backend goes away?\n> \n> (In case anyone is wondering, there are no platform-dependencies in\n> either file. We could distribute them as part of the distribution\n> tarball --- in fact we already do so for parse.h. 
So I don't see\n> that installing them into src/include would create any problems for\n> multiplatform builds.)\n\nAgreed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 28 May 2000 14:42:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposed cleanup of generated header files" }, { "msg_contents": "On Sun, 28 May 2000, Tom Lane wrote:\n\n> It's bothered me for some time that backend files need to be compiled\n> with -I src/backend as well as -I src/include. AFAICT this is just\n> because the two header files that are generated on-the-fly (parse.h\n> and fmgroids.h, formerly known as fmgr.h) are included from src/backend\n> rather than being inserted into the include tree, which it seems to me\n> is where they should be. Any objections if I rearrange the makefiles\n> so that these files get placed under include/ when they are built,\n> and then -I src/backend goes away?\n> \n> (In case anyone is wondering, there are no platform-dependencies in\n> either file. We could distribute them as part of the distribution\n> tarball --- in fact we already do so for parse.h. So I don't see\n> that installing them into src/include would create any problems for\n> multiplatform builds.)\n\nSounds perfect to me ... just make changes to prep_release in tools so\nthat they are generated for the snapshots?\n\n\n", "msg_date": "Sun, 28 May 2000 15:59:22 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposed cleanup of generated header files" }, { "msg_contents": ">> Any objections if I rearrange the makefiles\n>> so that these files get placed under include/ when they are built,\n>> and then -I src/backend goes away?\n\nDone. 
Note you'd be well advised to do make distclean / configure\nafter pulling these updates --- lotta Makefile.in files changed...\n\nThe Hermit Hacker <[email protected]> writes:\n> Sounds perfect to me ... just make changes to prep_release in tools so\n> that they are generated for the snapshots?\n\nNo point in changing prep_release that I can see. fmgroids.h has to\nhave a dependency on Gen_fmgrtab.sh, which is built at configure time\nfrom Gen_fmgrtab.sh.in. So it'd get rebuilt anyway, even though there's\nno real platform dependency in it. parse.h is distributed in the\ntarball in backend/parser where it's built, so we don't need another\ncopy distributed in include/parser.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 May 2000 01:54:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposed cleanup of generated header files " } ]
[ { "msg_contents": "Well I have to say that I'm pretty impressed with PostgreSQL after this..\nStill, I'd like some input from the experts as I may not be doing the best I\ncan..\n\nI set up the full text indexing (as described in contrib/fulltextindex) and\nsee some amazing results but like I said earlier, I might be able to do\nbetter.\n\nI took Tom's advice and turned on like planning (as described in\ncontrib/likeplanning) and it made a world of difference by itself..\n\nHere is a quick run-down of the table structure.\n\nTable : resumes_fti (corresponds to the cds-fti table in the example)\n\n25370953 rows.\n\nTable applicants_resumes (corresponds to the cds table in the example)\n\nTable applicants (63 fields)\n11039 rows.\n\nTable applicants_states (2 fields)\n276255 rows\n\n\nThe most complex query I use is this :\n\nselect a.appcode, a.firstname, a.middlename, a.lastname, a.state, a.degree1,\na.d1date, a.degree2, a.d2date, a.salary, a.skill1, a.skill2, a.skill3,\na.objective, a.employer, a.sic1, a.sic2, a.sic3, a.prefs1, a.prefs2,\na.sells from applicants as a,applicants_states as s, applicants_resumes as\nar,\nresumes_fti as rf where a.status = 'A' and s.rstate='AL' and\ns.app_id=a.app_id and rf.string ~'engineer' and rf.id = ar.oid limit 10\noffset 0\n\n-- BUT - I forgot one crucial thing. 
To qualify the results from the\napplicants_resume table based on the applicants table (i.e. ar.app_id =\na.app_id) I did this and the query went from just over 3 seconds to over 25\nseconds!\n\nI changed the above query to\n\nselect a.appcode, a.firstname, a.middlename, a.lastname, a.state, a.degree1,\na.d1date, a.degree2, a.d2date, a.salary, a.skill1, a.skill2, a.skill3,\na.objective, a.employer, a.sic1, a.sic2, a.sic3, a.prefs1, a.prefs2,\na.sells from applicants as a,applicants_states as s, applicants_resumes as\nar,\nresumes_fti as rf where a.status = 'A' and s.rstate='AL' and rf.string\n~'engineer'\nand rf.id = ar.oid and s.app_id=a.app_id and ar.app_id=a.app_id limit 10\noffset 0\n(Listed below again)\n\nHopefully it's just something else stupid I am doing and someone will beat\nme with a clue stick. All of this was done on a PostgreSQL 7.0 backend run\nas \"/usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data -B 4096 -o '-S\n16384'\n-i >/usr/local/pgsql/postgres.log 2>&1&\" on a FreeBSD 4.0, Dual Celeron 600\nbox with an ATA/66 30 gig drive and 256 megs of RAM.\n\nHere are some stats :\n\nWithout the extra condition :\n\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=0.00..487951.02 rows=1644 width=204)\n -> Nested Loop (cost=0.00..478489.01 rows=2641 width=12)\n -> Nested Loop (cost=0.00..474081.63 rows=1 width=8)\n -> Seq Scan on resumes_fti rf (cost=0.00..474076.91 rows=1\nwidth=4)\n -> Index Scan using resumes_oid_index on applicants_resumes\nar (cost=0.00..4.70 rows=1 width=4)\n -> Index Scan using applicants_states_rstate on applicants_states s\n(cost=0.00..4380.98 rows=2641 width=4)\n -> Index Scan using applicants_app_id on applicants a (cost=0.00..3.57\nrows=1 width=192)\n\nEXPLAIN\n\n\nStartTransactionCommand\nquery: select a.appcode, a.firstname, a.middlename, a.lastname, a.state,\na.degree1,\na.d1date, a.degree2, a.d2date, a.salary, a.skill1, a.skill2, a.skill3,\na.objective, a.employer, a.sic1, a.sic2, a.sic3, a.prefs1, a.prefs2,\na.sells from 
applicants as a,applicants_states as s, applicants_resumes as\nar,\nresumes_fti as rf where a.status = 'A' and s.rstate='AL' and rf.string\n~'engineer'\nand rf.id = ar.oid and s.app_id=a.app_id limit 10 offset 0;\nProcessQuery\n! system usage stats:\n! 3.386697 elapsed 2.599174 user 0.787057 system sec\n! [2.617929 user 0.797100 sys total]\n! 0/0 [0/0] filesystem blocks in/out\n! 0/8330 [0/8569] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [3/3] messages rcvd/sent\n! 0/43 [3/47] voluntary/involuntary context switches\n! postgres usage stats:\n! Shared blocks: 4950 read, 0 written, buffer hit rate\n= 20.03%\n! Local blocks: 0 read, 0 written, buffer hit rate\n= 0.00%\n! Direct blocks: 0 read, 0 written\nCommitTransactionCommand\nproc_exit(0)\nshmem_exit(0)\nexit(0)\n\n\nWith the extra condition :\n\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=0.00..474194.76 rows=1 width=208)\n -> Nested Loop (cost=0.00..474085.21 rows=1 width=204)\n -> Nested Loop (cost=0.00..474081.63 rows=1 width=12)\n -> Seq Scan on resumes_fti rf (cost=0.00..474076.91 rows=1\nwidth=4)\n -> Index Scan using resumes_oid_index on applicants_resumes\nar (cost=0.00..4.70 rows=1 width=8)\n -> Index Scan using applicants_app_id on applicants a\n(cost=0.00..3.57 rows=1 width=192)\n -> Index Scan using applicants_states_app_id on applicants_states s\n(cost=0.00..109.54 rows=1 width=4)\n\nEXPLAIN\n\n\n\nStartTransactionCommand\nquery: select a.appcode, a.firstname, a.middlename, a.lastname, a.state,\na.degree1,\na.d1date, a.degree2, a.d2date, a.salary, a.skill1, a.skill2, a.skill3,\na.objective, a.employer, a.sic1, a.sic2, a.sic3, a.prefs1, a.prefs2,\na.sells from applicants as a,applicants_states as s, applicants_resumes as\nar,\nresumes_fti as rf where a.status = 'A' and s.rstate='AL' and rf.string\n~'engineer'\nand rf.id = ar.oid and s.app_id=a.app_id and ar.app_id=a.app_id limit 10\noffset 0\nProcessQuery\n! system usage stats:\n! 25.503341 elapsed 18.564543 user 5.599631 system sec\n! 
[18.564543 user 5.627987 sys total]\n! 2029/0 [2029/0] filesystem blocks in/out\n! 0/8335 [0/8571] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [3/3] messages rcvd/sent\n! 149/342 [152/346] voluntary/involuntary context switches\n! postgres usage stats:\n! Shared blocks: 35118 read, 0 written, buffer hit rate\n= 4.98%\n! Local blocks: 0 read, 0 written, buffer hit rate\n= 0.00%\n! Direct blocks: 0 read, 0 written\nCommitTransactionCommand\nproc_exit(0)\nshmem_exit(0)\nexit(0)\n\nSorry about the length. Thanks!\n\n-Mitch\n\n\n", "msg_date": "Sun, 28 May 2000 14:42:30 -0400", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": true, "msg_subject": "Full text indexing preformance! (long)" }, { "msg_contents": "\"Mitch Vincent\" <[email protected]> writes:\n> Without the extra condition :\n> \n> NOTICE: QUERY PLAN:\n> \n> Nested Loop (cost=0.00..487951.02 rows=1644 width=204)\n> -> Nested Loop (cost=0.00..478489.01 rows=2641 width=12)\n> -> Nested Loop (cost=0.00..474081.63 rows=1 width=8)\n> -> Seq Scan on resumes_fti rf (cost=0.00..474076.91 rows=1\n> width=4)\n> -> Index Scan using resumes_oid_index on applicants_resumes\n> ar (cost=0.00..4.70 rows=1 width=4)\n> -> Index Scan using applicants_states_rstate on applicants_states s\n> (cost=0.00..4380.98 rows=2641 width=4)\n> -> Index Scan using applicants_app_id on applicants a (cost=0.00..3.57\n> rows=1 width=192)\n> \n> With the extra condition :\n> \n> NOTICE: QUERY PLAN:\n> \n> Nested Loop (cost=0.00..474194.76 rows=1 width=208)\n> -> Nested Loop (cost=0.00..474085.21 rows=1 width=204)\n> -> Nested Loop (cost=0.00..474081.63 rows=1 width=12)\n> -> Seq Scan on resumes_fti rf (cost=0.00..474076.91 rows=1\n> width=4)\n> -> Index Scan using resumes_oid_index on applicants_resumes\n> ar (cost=0.00..4.70 rows=1 width=8)\n> -> Index Scan using applicants_app_id on applicants a\n> (cost=0.00..3.57 rows=1 width=192)\n> -> Index Scan using applicants_states_app_id on applicants_states s\n> 
(cost=0.00..109.54 rows=1 width=4)\n\nOdd. The innermost join's the same in both plans, so that's not what's\ncausing the difference. In the first case the next join is to\napplicants_states using the \"s.rstate='AL'\" clause as a filter with the\napplicants_states_rstate index. The planner doesn't think that's gonna\nbe real selective (note the rows=2641) and based on prior discussion of\nyour database I'd agree --- don't you have lots of entries for AL?\nThen it can at last join to applicants using \"s.app_id=a.app_id\".\n\nIn the second case it's seized on \"ar.app_id=a.app_id\" as a way to join\n\"applicants a\" to the inner join using the applicants_app_id index.\nThis is not a bad idea at all if the a.app_id field is unique as it\nseems to think (observe rows=1 there). Then finally applicants_states\nis joined on its app_id field.\n\nOffhand I'd say that the second plan *ought* to be a lot quicker, and\nI don't see why it's not. Is applicants.app_id a unique key, or not?\nYou could investigate this by running just the partial selects (the\ntwo or three inner tables with just the relevant WHERE clauses) to see\nhow many rows are returned at each step.\n\nBTW, as far as I can see from this example you're still not using the\nFTI stuff properly: you should be querying rf.string ~ '^engineer' so\nthat you get an indexscan over resumes_fti. Without that, it seems\nlike you're not really getting any benefit from the FTI structure.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 May 2000 02:56:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text indexing preformance! (long) " }, { "msg_contents": "Hi Tom, thanks for your reply..\n\nI increased BLKSZ to 32k and re-compiled, then I imported all the resumes\n(some of which I couldn't get before) and this problem completely\ndisappeared. 
The query is very fast now (0.039792 seconds to be exact)..\n\nOne thing I did run into was this...\nIn my paging system I only have a need for 10 records at a time so I LIMIT\nthe query. The problem comes when I need to get a total of all the records\nthat matched the query (as a good search engine, I must tell people how many\nrecords were found).. I can't count() and LIMIT in the same query, so I'm\nforced to do 2 queries, one with count() and one without.\n\nAn example :\n\nselect a.appcode, a.firstname, a.middlename, a.lastname, a.state, a.degree1,\na.d1date, a.degree2, a.d2date, a.salary, a.skill1, a.skill2, a.skill3,\na.objective, a.employer, a.sic1, a.sic2, a.sic3, a.prefs1, a.prefs2, a.sells\nfrom applicants as a,applicants_states as s, applicants_resumes as\nar,resumes_fti as rf where a.status = 'A' and lower(a.firstname) ~\nlower('^a') and s.rstate='AL' and rf.string ~'^engineer' and\na.app_id=s.app_id and ar.app_id=a.app_id and rf.id=ar.oid limit 10 offset 0\n\nVs.\n\nselect count (a.app_id) as total from applicants as a,applicants_states as\ns, applicants_resumes as ar,resumes_fti as rf where a.status = 'A' and\nlower(a.firstname) ~ lower('^a') and s.rstate='AL' and rf.string\n~'^engineer' and a.app_id=s.app_id and ar.app_id=a.app_id and rf.id=ar.oid\n\nHowever the count() query has to go through the entire record set (which\nmakes sense) but it takes about 4 or 5 seconds.\n\nThe plan for the count() query.\n\nNOTICE: QUERY PLAN:\n\nAggregate (cost=56.61..56.61 rows=1 width=20)\n -> Nested Loop (cost=0.00..56.61 rows=1 width=20)\n -> Nested Loop (cost=0.00..10.74 rows=1 width=16)\n -> Nested Loop (cost=0.00..8.59 rows=1 width=12)\n -> Index Scan using resumes_fti_index on resumes_fti rf\n(cost=0.00..4.97 rows=1 width=4)\n -> Index Scan using applicants_resumes_index on\napplicants_resumes ar (cost=0.00..3.61 rows=1 width=8)\n -> Index Scan using applicants_app_id on applicants a\n(cost=0.00..2.14 rows=1 width=4)\n -> Index Scan using 
applicants_states_app_id on applicants_states s\n(cost=0.00..45.86 rows=1 width=4)\n\nAnd the stats :\n\nProcessQuery\n! system usage stats:\n! 5.088647 elapsed 4.954981 user 0.125561 system sec\n! [4.976752 user 0.132817 sys total]\n! 0/0 [0/0] filesystem blocks in/out\n! 0/4607 [0/4846] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [3/3] messages rcvd/sent\n! 0/52 [3/57] voluntary/involuntary context switches\n! postgres usage stats:\n! Shared blocks: 0 read, 0 written, buffer hit rate\n= 100.00%\n! Local blocks: 0 read, 0 written, buffer hit rate\n= 0.00%\n! Direct blocks: 0 read, 0 written\nCommitTransactionCommand\nproc_exit(0)\n\n\nThe \"0/4607 [0/4846] page faults/reclaims\" area is greatly increased in this\nquery compared to the other. Is that to be expected? Is there anything else I\ncan do to get the total number of records matched by the query and still use\nLIMIT (I doubt it)?\n\nIf there isn't anything I can do, which looks to be the case here, I still\nappreciate all the help you've given me..\n\nI look forward to your response. Thanks!\n\n-Mitch\n\n\n\n\n\n\n", "msg_date": "Mon, 29 May 2000 15:57:20 -0400", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text indexing preformance! (long) " }, { "msg_contents": "\"Mitch Vincent\" <[email protected]> writes:\n> The query is very fast now (0.039792 seconds to be exact).\n\nCool ...\n\n> In my paging system I only have a need for 10 records at a time so I LIMIT\n> the query. The problem comes when I need to get a total of all the records\n> that matched the query (as a good search engine, I must tell people how many\n> records were found).. I can't count() and LIMIT in the same query, so I'm\n> forced to do 2 queries, one with count() and one without.\n\nWell, of course the whole *point* of LIMIT is that it stops short of
So I'm afraid you're kind of stuck\nas far as the performance goes: you can't get a count() answer without\nscanning the whole query.\n\nI'm a little curious though: what is the typical count() result from\nyour queries? The EXPLAIN outputs you show indicate that the planner\nis only expecting about one row out now, but I have no idea how close\nthat is to the mark. If it were really right, then there'd be no\ndifference in the performance of LIMIT and full queries, so I guess\nit's not right; but how far off is it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 May 2000 01:49:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text indexing preformance! (long) " } ]
[ { "msg_contents": "I am working fixing vacuum so analyze is done with shared lock.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 28 May 2000 15:44:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Spliting vacuum and analyze" } ]
[ { "msg_contents": "On Sat, 20 May 2000, Thomas Lockhart wrote:\n\n> > I modified the current ODBC driver for\n> > * referential integrity error reporting,\n> > * SELECT in transactions and\n> > * disabling autocommit.\n> > I tested these changes with Borland C++ Builder -> ODBCExpress ->\n> > WinODBC driver (DLL) -> Postgres 7.0beta1 and Borland C++ Builder -> BDE ->\n> > WinODBC driver (DLL) -> Postgres 7.0beta1. The patch is based on a snapshot of\n> > 22nd April (I don't think that anyone has modified it since then: Byron\n> > hasn't given any sign of life for about a month and I didn't find any\n> > comments about the ODBC driver on the list).\n> We are starting to think about organizing additional ODBC testing for\n> 7.0.1. istm that your patches would help existing functionality, and\n> not damage (or change for the worse) current behavior.\nYes, sure. I know that this code (which was sent to Thomas) needs further\ncheck. Have you had time to think about it?\n\nRegards,\nZoltan\n Kov\\'acs, Zolt\\'an\n [email protected]\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz\n\n", "msg_date": "Mon, 29 May 2000 11:19:11 +0200 (CEST)", "msg_from": "Kovacs Zoltan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ODBC patch" }, { "msg_contents": "> > > I modified the current ODBC driver for\n> > > * referential integrity error reporting,\n> > > * SELECT in transactions and\n> > > * disabling autocommit.\n> > We are starting to think about organizing additional ODBC testing \n> Yes, sure. I know that this code (which was sent to Thomas) needs \n> further check. Have you had time to think about it?\n\nSorry to ask this: can you please re-re-send the patch to me? 
I've\nupgraded my machine and at the moment my backup disk is unreadable :(\n\nre-TIA\n\n - Thomas\n", "msg_date": "Wed, 31 May 2000 16:26:57 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ODBC patch" }, { "msg_contents": "Did we ever get this patch applied?\n\n\n> > > > I modified the current ODBC driver for\n> > > > * referential integrity error reporting,\n> > > > * SELECT in transactions and\n> > > > * disabling autocommit.\n> > > We are starting to think about organizing additional ODBC testing \n> > Yes, sure. I know that this code (which was sent to Thomas) needs \n> > further check. Have you had time to think about it?\n> \n> Sorry to ask this: can you please re-re-send the patch to me? I've\n> upgraded my machine and at the moment my backup disk is unreadable :(\n> \n> re-TIA\n> \n> - Thomas\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jun 2000 04:59:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: ODBC patch" }, { "msg_contents": "> Did we ever get this patch applied?\n\nNo. Zoltan has re-re-sent it to me, and I am planning on looking at it\nand testing here.\n\n - Thomas\n", "msg_date": "Tue, 13 Jun 2000 12:41:10 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: ODBC patch" }, { "msg_contents": "Can I ask about the status of this?\n\n> > > > I modified the current ODBC driver for\n> > > > * referential integrity error reporting,\n> > > > * SELECT in transactions and\n> > > > * disabling autocommit.\n> > > We are starting to think about organizing additional ODBC testing \n> > Yes, sure. I know that this code (which was sent to Thomas) needs \n> > further check. 
Have you had time to think about it?\n> \n> Sorry to ask this: can you please re-re-send the patch to me? I've\n> upgraded my machine and at the moment my backup disk is unreadable :(\n> \n> re-TIA\n> \n> - Thomas\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 1 Oct 2000 23:38:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: ODBC patch" }, { "msg_contents": "> Can I ask about the status of this?\n> \n> > > > > I modified the current ODBC driver for\n> > > > > * referential integrity error reporting,\n> > > > > * SELECT in transactions and\n> > > > > * disabling autocommit.\n> > > > We are starting to think about organizing additional ODBC testing \n> > > Yes, sure. I know that this code (which was sent to Thomas) needs \n> > > further check. Have you had time to think about it?\n> > \n> > Sorry to ask this: can you please re-re-send the patch to me? I've\n> > upgraded my machine and at the moment my backup disk is unreadable :(\n> > \n> > re-TIA\n> > \n> > - Thomas\n> > \n> \nUnfortunately, no news yet. I received another patch from Max Khon;\nits aim was similar. It worked mostly, but mine seemed better for a\ncertain test. My patch works well at our place with Borland C++ Builder 4.\n\nI don't think the ODBC driver has any future. It's frozen. Our next\napplication also will not contain any ODBC; we are thinking about using JDBC\ninstead. :-(\n\n Kov\\'acs, Zolt\\'an\n [email protected]\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz\n\n", "msg_date": "Mon, 2 Oct 2000 18:39:44 +0200 (CEST)", "msg_from": "Kovacs Zoltan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: ODBC patch" } ]
[ { "msg_contents": "Hello all,\n\ntoday I've started to change the structure of one of my databases and\nremoved everything from it. Created new tables, went through the database with\nvacuumlo, and ran vacuum; vacuum analyze; I expected to have quite little\ndisk space occupied after this... And the result was a surprise...\n\nIndices became VERY large even when the table is empty!!!\n-rw------- 1 postgres postgres 0 May 29 14:25 inbox\n-rw------- 1 postgres postgres 3162112 May 29 14:25 inbox_id_key\n\nAnd lots of pg_* indices are incredibly large. Like:\n\n-rw------- 1 postgres postgres 90112 May 29 16:27 pg_attribute\n-rw------- 1 postgres postgres 37920768 May 29 16:26 pg_attribute_relid_attnam_index\n-rw------- 1 postgres postgres 15671296 May 29 16:26 pg_attribute_relid_attnum_index\n-rw------- 1 postgres postgres 16384 May 29 16:27 pg_class\n-rw------- 1 postgres postgres 4136960 May 29 16:26 pg_class_oid_index\n-rw------- 1 postgres postgres 10846208 May 29 16:26 pg_class_relname_index\n\nI know that I had quite a big (>30000) number of BLOBs. But there are only 5 of them inside now.\nAnd this is what is registered in pg_class.\nLooks like a bug in vacuum... Or did I miss something?\n\nBTW, Postgresql 7.0, Linux 2.2.15.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Mon, 29 May 2000 17:56:13 +0700", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "pg_* files are too large for empty database." }, { "msg_contents": "Denis Perchine <[email protected]> writes:\n> And lots of pg_* indices are incredibly large. 
Like:\n\nLongstanding problem --- vacuum doesn't have any way to shrink indexes.\nThis is on the TODO list but there is lots of higher-priority work.\n\nIf you are cleaning out your database entirely anyway, I'd suggest just\ndropping and recreating the whole database ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 May 2000 13:23:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_* files are too large for empty database. " } ]
[ { "msg_contents": "Please help!\n\nI'm having trouble with the timestamp type:\nhere's a psql output :\n\nScript started on Mon May 29 13:32:08 2000\n~ 13:32:08: psql pyrenet\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\npyrenet=# select 'today'::date;\n ?column? \n------------\n 29-05-2000\n(1 row)\n\npyrenet=# select 'today'::timestamp\n ?column? \n-------------------------------------------\n Mon 29 May 00:00:00 2000 MET DST(���^A\n(1 row)\n\nscript done on Mon May 29 13:44:30 2000\n\nAs you can see, there's no \\0 after TZ.\n\nChecking the code led me to EncodeTimeSpan, which does strcpy and strncpy and\nnever writes a null character.\n\nCould this be a bug?\n\nThat's breaking all my scripts because if I add a timespan value to it,\nthen the backend complains about a badly formatted external timestamp.\n\nPlease help!\n\nThis is V7.0 on Unixware 7.0.1 compiled with cc.\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Mon, 29 May 2000 14:27:53 +0200 (MET DST)", "msg_from": "Olivier PRENANT <[email protected]>", "msg_from_op": true, "msg_subject": "Timestamp data type problems" }, { "msg_contents": "Olivier PRENANT <[email protected]> writes:\n> pyrenet=# select 'today'::timestamp\n> ?column? 
\n> -------------------------------------------\n> Mon 29 May 00:00:00 2000 MET DST(���^A\n> (1 row)\n\n> script done on Mon May 29 13:44:30 2000\n\n> As you can see, there's no \\0 after TZ.\n\nOK, patched for 7.0.1 (I increased MAXTZLEN as well as made the code\nmore careful about overrun of the allowed length).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 May 2000 15:20:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Timestamp data type problems " }, { "msg_contents": "I made a patch against 7.0. Do you need it?\n\nActually it's quite simple and only ocurs and datetime.c.\n\nafterline 2156, I added : *(str + 28 + MAXSTZLEN) = '\\0';\n\nand after line 2166, I added : *(str + 25 + MAXTZLEN) = '\\0';\n\nIt works for me !!\n\nRegards,\nOn Mon, 29 May 2000, Tom Lane wrote:\n\n> Olivier PRENANT <[email protected]> writes:\n> > pyrenet=# select 'today'::timestamp\n> > ?column? \n> > -------------------------------------------\n> > Mon 29 May 00:00:00 2000 MET DST(���^A\n> > (1 row)\n> \n> > script done on Mon May 29 13:44:30 2000\n> \n> > As you can see, there's no \\0 after TZ.\n> \n> OK, patched for 7.0.1 (I increased MAXTZLEN as well as made the code\n> more careful about overrun of the allowed length).\n> \n> \t\t\tregards, tom lane\n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Mon, 29 May 2000 22:45:22 +0200 (MET DST)", "msg_from": "Olivier PRENANT <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Timestamp data type problems " }, { "msg_contents": "Olivier PRENANT <[email protected]> writes:\n> I made a patch against 7.0. 
Do you need it?\n> Actually it's quite simple and only occurs in datetime.c.\n> After line 2156, I added : *(str + 28 + MAXTZLEN) = '\\0';\n> and after line 2166, I added : *(str + 25 + MAXTZLEN) = '\\0';\n> It works for me !!\n\nI used StrNCpy instead, but thanks for confirming that that fixes it\nfor you.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 May 2000 17:12:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Timestamp data type problems " } ]
[ { "msg_contents": "There is a cvs-faq.html in the html/doc directory, but it seems the web\npage uses the version from the programmer's manual, which has the old\nCVSROOT information. Not sure what the file cvs-faq.html is for.\n\nI will update the cvs.sgml today.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 May 2000 10:53:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "CVS FAQ on web page" } ]
[ { "msg_contents": "Hi,\n\n now that we have the branch for 7.0, I could apply my actual\n work on TOAST to the CURRENT development tree. Before doing\n so, I'd like to discuss some related details.\n\n 1. In the actual version, the lztext datatype is stripped\n down to something more similar to text (does not compress\n on input). So it is kinda toastable base type for testing\n purposes created at initdb time.\n\n The pg_rules catalog still uses it, just that the toaster\n is now responsible to do the compression work. No\n problems so far with that.\n\n In the long run I think lztext will disappear completely\n again (it was supposed to be). Does anybody see a problem\n with abuse of this type during development?\n\n 2. I've added another ALTER TABLE command to create the\n external storage table for a relation. The syntax is\n\n ALTER TABLE tablename CREATE TOAST TABLE;\n\n Up to that, toastable types (lztext only yet) will be\n compressed, but the INSERT still fails if compression\n isn't enough to make a tuple fit.\n\n We haven't decided yet how/when to create the secondary\n relation and it's index. Since we intend to make base\n types like text and varchar by default toastable, I don't\n think that \"if a tables schema contains toastable types\"\n is a good enough reason to create them silently. There\n might exists tons of tables in a schema, that don't\n require it.\n\n OTOH I don't think it's a good thing to try creating\n these things on the fly the first time needed. The\n required catalog changes and file creations introduce all\n kinds of possible rollback/crash problems, that we don't\n want to have here - do we?\n\n 3. 
Tom, we don't have a consensus how to merge the TOAST\n related function changes with the fmgr changes up to now.\n Which base type specific functions will be touched due to\n fmgr changes right now?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Mon, 29 May 2000 17:03:10 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Applying TOAST to CURRENT" }, { "msg_contents": "> Hi,\n> \n> now that we have the branch for 7.0, I could apply my actual\n> work on TOAST to the CURRENT development tree. Before doing\n> so, I'd like to discuss some related details.\n> \n> 1. In the actual version, the lztext datatype is stripped\n> down to something more similar to text (does not compress\n> on input). So it is kinda toastable base type for testing\n> purposes created at initdb time.\n> \n> The pg_rules catalog still uses it, just that the toaster\n> is now responsible to do the compression work. No\n> problems so far with that.\n> \n> In the long run I think lztext will disappear completely\n> again (it was supposed to be). Does anybody see a problem\n> with abuse of this type during development?\n\nSounds fine.\n\n> 2. I've added another ALTER TABLE command to create the\n> external storage table for a relation. The syntax is\n> \n> ALTER TABLE tablename CREATE TOAST TABLE;\n> \n> Up to that, toastable types (lztext only yet) will be\n> compressed, but the INSERT still fails if compression\n> isn't enough to make a tuple fit.\n> \n> We haven't decided yet how/when to create the secondary\n> relation and it's index. 
Since we intend to make base\n> types like text and varchar by default toastable, I don't\n> think that \"if a tables schema contains toastable types\"\n> is a good enough reason to create them silently. There\n> might exists tons of tables in a schema, that don't\n> require it.\n> \n> OTOH I don't think it's a good thing to try creating\n> these things on the fly the first time needed. The\n> required catalog changes and file creations introduce all\n> kinds of possible rollback/crash problems, that we don't\n> want to have here - do we?\n\nWell, we could print the message suggesing ALTER TABLE when printing\ntuple too large. Frankly, I don't see a problem in creating the backup\ntable automatically. If you are worried about performance, how about\nputting it in a subdirectory.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 May 2000 22:55:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Applying TOAST to CURRENT" }, { "msg_contents": "Bruce Momjian wrote:\n> > OTOH I don't think it's a good thing to try creating\n> > these things on the fly the first time needed. The\n> > required catalog changes and file creations introduce all\n> > kinds of possible rollback/crash problems, that we don't\n> > want to have here - do we?\n>\n> Well, we could print the message suggesing ALTER TABLE when printing\n> tuple too large. Frankly, I don't see a problem in creating the backup\n> table automatically. If you are worried about performance, how about\n> putting it in a subdirectory.\n\n It's the toast-table and the index. So it's 2 Inodes and 16K\n per table. If the backend is compiled with -g, someone needs\n to create about 500 tables to waste the same amount of space.\n\n Well, I like the subdirectory idea. 
I only wonder how that\n should be implemented (actually the tablename is the filename\n - and that doesn't allow / in it).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Tue, 30 May 2000 11:38:50 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: Applying TOAST to CURRENT" }, { "msg_contents": "> Bruce Momjian wrote:\n> > > OTOH I don't think it's a good thing to try creating\n> > > these things on the fly the first time needed. The\n> > > required catalog changes and file creations introduce all\n> > > kinds of possible rollback/crash problems, that we don't\n> > > want to have here - do we?\n> >\n> > Well, we could print the message suggesing ALTER TABLE when printing\n> > tuple too large. Frankly, I don't see a problem in creating the backup\n> > table automatically. If you are worried about performance, how about\n> > putting it in a subdirectory.\n> \n> It's the toast-table and the index. So it's 2 Inodes and 16K\n> per table. If the backend is compiled with -g, someone needs\n> to create about 500 tables to waste the same amount of space.\n> \n> Well, I like the subdirectory idea. I only wonder how that\n> should be implemented (actually the tablename is the filename\n> - and that doesn't allow / in it).\n\nNot sure. It will take some tricks, I am sure. How about if we add\nsome TOAST option to CREATE TABLE, so they can create with TOAST support\nrather than having to use ALTER every time. Maybe that would work.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 May 2000 10:42:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Applying TOAST to CURRENT" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> 3. Tom, we don't have a consensus how to merge the TOAST\n> related function changes with the fmgr changes up to now.\n> Which base type specific functions will be touched due to\n> fmgr changes right now?\n\nFor functions that need their inputs de-toasted, I think that the\nchanges you need should be a free byproduct of the fmgr changes.\nI'd recommend we make those changes first, and then in a cleanup pass\nyou can modify anything that is able to work on still-toasted input.\n\nI can't really do much with updating any varlena datatypes until\nthere's a version of heap_tuple_untoast_attr() somewhere in the\nsystem --- if you look at src/include/fmgr.h, you'll see the call\nis already there:\n\n/* use this if you want the raw, possibly-toasted input datum: */\n#define PG_GETARG_RAW_VARLENA_P(n) ((struct varlena *) PG_GETARG_POINTER(n))\n/* use this if you want the input datum de-toasted: */\n#define PG_GETARG_VARLENA_P(n) \\\n\t(VARATT_IS_EXTENDED(PG_GETARG_RAW_VARLENA_P(n)) ? \\\n\t (struct varlena *) heap_tuple_untoast_attr((varattrib *) PG_GETARG_RAW_VARLENA_P(n)) : \\\n\t PG_GETARG_RAW_VARLENA_P(n))\n/* GETARG macros for varlena types will typically look like this: */\n#define PG_GETARG_TEXT_P(n) ((text *) PG_GETARG_VARLENA_P(n))\n\nBTW, it would save some casting if heap_tuple_untoast_attr were declared\nto accept and return \"struct varlena *\" ...\n\nAnyway, as soon as that code links to something that works, let me know\nand I'll make a pass over the \"text\" functions. 
That should give you\nsomething to test with.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 May 2000 10:57:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Applying TOAST to CURRENT " }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n>>>> OTOH I don't think it's a good thing to try creating\n>>>> these things on the fly the first time needed. The\n>>>> required catalog changes and file creations introduce all\n>>>> kinds of possible rollback/crash problems, that we don't\n>>>> want to have here - do we?\n\nAFAIK we are pretty solid on rolling back table creation, it's just\nrename/drop that have problems. A worse problem is what if two\nbackends both decide they need to create the toast table at the same\ntime. That might be fixable with appropriate locking but it seems\nlike there'd be potential for deadlocks.\n\n> Bruce Momjian wrote:\n>> Well, we could print the message suggesing ALTER TABLE when printing\n>> tuple too large. Frankly, I don't see a problem in creating the backup\n>> table automatically. If you are worried about performance, how about\n>> putting it in a subdirectory.\n\nI agree with Bruce --- the toast table should be created automatically,\nat least if the table contains any potentially-toastable columns. We\nwant this to be as transparent as possible. I'd rather have auto create\non-the-fly when first needed, but if that seems too risky then let's\njust make the table when its owning table is created.\n\nIf you want to control it with an ALTER TABLE function, let's add ALTER\nTABLE DROP TOAST so that admins who don't like the excess space usage\ncan get rid of it. 
(Of course that should only succeed after verifying\nthe toast table is empty...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 May 2000 11:07:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Applying TOAST to CURRENT " }, { "msg_contents": "On Tue, 30 May 2000, Tom Lane wrote:\n\n> [email protected] (Jan Wieck) writes:\n> >>>> OTOH I don't think it's a good thing to try creating\n> >>>> these things on the fly the first time needed. The\n> >>>> required catalog changes and file creations introduce all\n> >>>> kinds of possible rollback/crash problems, that we don't\n> >>>> want to have here - do we?\n> \n> AFAIK we are pretty solid on rolling back table creation, it's just\n> rename/drop that have problems. A worse problem is what if two\n> backends both decide they need to create the toast table at the same\n> time. That might be fixable with appropriate locking but it seems\n> like there'd be potential for deadlocks.\n> \n> > Bruce Momjian wrote:\n> >> Well, we could print the message suggesing ALTER TABLE when printing\n> >> tuple too large. Frankly, I don't see a problem in creating the backup\n> >> table automatically. If you are worried about performance, how about\n> >> putting it in a subdirectory.\n> \n> I agree with Bruce --- the toast table should be created automatically,\n> at least if the table contains any potentially-toastable columns. We\n> want this to be as transparent as possible. I'd rather have auto create\n> on-the-fly when first needed, but if that seems too risky then let's\n> just make the table when its owning table is created.\n\nhave to third this one ... I think it should be totally transparent to the\nadmin/user ... just create it when the table is created, what's the worst\ncase scenario? 
it never gets used and you waste 16k of disk space?\n\n", "msg_date": "Tue, 30 May 2000 13:11:21 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Applying TOAST to CURRENT " }, { "msg_contents": "> I agree with Bruce --- the toast table should be created automatically,\n> at least if the table contains any potentially-toastable columns. We\n> want this to be as transparent as possible. I'd rather have auto create\n> on-the-fly when first needed, but if that seems too risky then let's\n> just make the table when its owning table is created.\n> \n> If you want to control it with an ALTER TABLE function, let's add ALTER\n> TABLE DROP TOAST so that admins who don't like the excess space usage\n> can get rid of it. (Of course that should only succeed after verifying\n> the toast table is empty...)\n\nBut when you vacuum a table, doesn't it get zero size? Sure works here:\n\n\t#$ cd /u/pg/data/base/test\n\t#$ ls -l kkk*\n\t-rw------- 1 postgres postgres 0 May 30 12:20 kkk\n\t#$ \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 May 2000 12:20:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Applying TOAST to CURRENT" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Tue, 30 May 2000, Tom Lane wrote:\n> >\n> > I agree with Bruce --- the toast table should be created automatically,\n> > at least if the table contains any potentially-toastable columns. We\n> > want this to be as transparent as possible. I'd rather have auto create\n> > on-the-fly when first needed, but if that seems too risky then let's\n> > just make the table when its owning table is created.\n> \n> have to third this one ... I think it should be totally transparent to the\n> admin/user ... 
just create it when the table is created, what's the worst\n> case scenario? it never gets used and you waste 16k of disk space?\n\nYou dont even use 16k if toast tables are like ordinary tables (which I \nguess they are). New empty tables seem to occupy 0k.\n\nSo I'm also for immediate creation of tost tables for all tables that \nrequire them, either at create (if there are any toastable columns in \nthe create clause) or at alter table time if first toestable column is \nadded after initial create.\n\nThe only drawback is bloating directories, but it was already suggested\nthat \nTOAST tables could/should be kept in subdirectory toast (as should\nindexes \ntoo, imho).\n\nAnd the most widespread database in the world does it too ;) \n(dBASE and its derivates)\n\n--------\nHannu\n", "msg_date": "Tue, 30 May 2000 22:03:22 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Applying TOAST to CURRENT" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Tue, 30 May 2000, Hannu Krosing wrote:\n> \n> > The only drawback is bloating directories, but it was already suggested\n> > that\n> > TOAST tables could/should be kept in subdirectory toast (as should\n> > indexes\n> > too, imho).\n> \n> still say, simplest \"fix\":\n> \n> <dbname>/{system,db,toast,index}\n\nWhy can't we just add a column named \"tablepath\" to pg_table, that can\neither be \na simple filename, or relative path with a filename or even full path \n(if we don't worry too much for security ;)\n\nThat has came up before when discussing ways to make rename table\nrollbackable\nbut it could be handy here two. 
\n\nAFAIK it has been a general principle in programming to keep separate\nthings \nseparate unless a very good reason not to do so is present.\n\n-----------\nHannu\n", "msg_date": "Tue, 30 May 2000 23:22:06 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Applying TOAST to CURRENT" }, { "msg_contents": "On Tue, 30 May 2000, Hannu Krosing wrote:\n\n> The Hermit Hacker wrote:\n> > \n> > On Tue, 30 May 2000, Tom Lane wrote:\n> > >\n> > > I agree with Bruce --- the toast table should be created automatically,\n> > > at least if the table contains any potentially-toastable columns. We\n> > > want this to be as transparent as possible. I'd rather have auto create\n> > > on-the-fly when first needed, but if that seems too risky then let's\n> > > just make the table when its owning table is created.\n> > \n> > have to third this one ... I think it should be totally transparent to the\n> > admin/user ... just create it when the table is created, what's the worst\n> > case scenario? it never gets used and you waste 16k of disk space?\n> \n> You dont even use 16k if toast tables are like ordinary tables (which I \n> guess they are). 
New empty tables seem to occupy 0k.\n> \n> So I'm also for immediate creation of tost tables for all tables that \n> require them, either at create (if there are any toastable columns in \n> the create clause) or at alter table time if first toestable column is \n> added after initial create.\n> \n> The only drawback is bloating directories, but it was already suggested\n> that \n> TOAST tables could/should be kept in subdirectory toast (as should\n> indexes \n> too, imho).\n\nstill say, simplest \"fix\":\n\n\t<dbname>/{system,db,toast,index}\n\n\n", "msg_date": "Tue, 30 May 2000 18:14:07 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Applying TOAST to CURRENT" }, { "msg_contents": "The Hermit Hacker wrote:\n>\n> have to third this one ... I think it should be totally transparent to the\n> admin/user ... just create it when the table is created, what's the worst\n> case scenario? it never gets used and you waste 16k of disk space?\n>\n\n Not exactly.\n\n I've made some good experiences with having the toaster\n trying to keep the main tuple size below 1/4 of MaxTupleSize\n (BLKSIZE - block header). Remember that external stored\n attributes are only fetched from the secondary relation if\n really needed (when the result set is sent to the client or\n if explicitly used in the query). So in a usual case, where a\n relatively small amount of the entire data is retrieved and\n key attributes are small, it's a win. With this config more\n main tuples fit into one block, and if the attributes used in\n the WHERE clause aren't stored external, the result set\n (including sort and group actions) can be collected with\n fewer block reads. Only those big values, that the client\n really wanted, have to be fetched at send time.\n\n If no external table exists, the toaster will try the <2K\n thing by compression only. If the resulting tuple fits into\n the 8K limit, it's OK. 
But if a secondary relation exists,\n it'll store external to make the tuple <2K. Thus, a 4K or 6K\n tuple, that actually fits and would be stored in the main\n table, will cause the toaster to jump in if we allways create\n the secondary table.\n\n Hmmm - thinking about that it doesn't sound bad if we allways\n create a secondary relation at CREATE TABLE time, but NOT the\n index for it. And at VACUUM time we create the index if it\n doesn't exist AND there is external stored data.\n\n The table is prepared for external storage allways and we\n avoid the risks from creating tables in possibly later\n aborting transactions or due to concurrency issues. But we\n don't waste the index space for really allways-small-tuple\n tables.\n\n Another benefit would be, that reloads should be faster\n because with this technique, the toaster doesn't need to\n insert index tuples during the load. The indices are created\n later at VACUUM after reload.\n\n The toaster needs to use sequential scans on the external\n table until the next vacuum run, but index usage allways\n depends on vacuum so that's not a real issue from my PoV.\n\n At least a transparent compromise - isn't it?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Wed, 31 May 2000 03:10:20 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: Applying TOAST to CURRENT" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> Hmmm - thinking about that it doesn't sound bad if we allways\n> create a secondary relation at CREATE TABLE time, but NOT the\n> index for it. 
And at VACUUM time we create the index if it\n> doesn't exist AND there is external stored data.\n\nDon't much like that --- what if the user doesn't run vacuum for\na good long while? Could be doing a lot of sequential scans over\na pretty large toast file...\n\nIf the 16K for an empty btree index really bothers you, let's\nattack that head-on. I don't see why a freshly created index\ncouldn't be zero bytes, and the metadata page gets created on\nfirst store into the index.\n\n> The toaster needs to use sequential scans on the external\n> table until the next vacuum run, but index usage allways\n> depends on vacuum so that's not a real issue from my PoV.\n\nWhat makes you say that? Indexes will be used on a never-vacuumed\ntable with the current planner.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 May 2000 23:53:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Applying TOAST to CURRENT " }, { "msg_contents": "Jan Wieck wrote:\n> \n> The Hermit Hacker wrote:\n> >\n> > have to third this one ... I think it should be totally transparent to the\n> > admin/user ... just create it when the table is created, what's the worst\n> > case scenario? it never gets used and you waste 16k of disk space?\n> >\n> \n> Not exactly.\n> \n> I've made some good experiences with having the toaster\n> trying to keep the main tuple size below 1/4 of MaxTupleSize\n> (BLKSIZE - block header). \n\nCan't _that_ behaviour be made modifyable by some setting ?\n\n> Remember that external stored\n> attributes are only fetched from the secondary relation if\n> really needed (when the result set is sent to the client or\n> if explicitly used in the query). So in a usual case, where a\n> relatively small amount of the entire data is retrieved and\n> key attributes are small, it's a win. 
With this config more\n> main tuples fit into one block, and if the attributes used in\n> the WHERE clause aren't stored external, the result set\n> (including sort and group actions) can be collected with\n> fewer block reads. Only those big values, that the client\n> really wanted, have to be fetched at send time.\n\nWhat is the priority of checks on indexed fetch?\n\nI mean if we do \"SELECT * FROM ttable WHERE toasted LIKE 'ab%' \"\n\nDO we first scan by index to 'ab%', then check if tuple is live and \nafter that to the LIKE comparison ?\n\nWould it not be faster in toast case to use the already retrieved \nindex data and check that first, before going to main table (not to \nmention the TOAST table)\n\n> If no external table exists, the toaster will try the <2K\n> thing by compression only. If the resulting tuple fits into\n> the 8K limit, it's OK. \n\nWould it not be faster/cleaner to check some configuration variable \nthan the existance of toest table ?\n\n> But if a secondary relation exists,\n> it'll store external to make the tuple <2K. Thus, a 4K or 6K\n> tuple, that actually fits and would be stored in the main\n> table, will cause the toaster to jump in if we allways create\n> the secondary table.\n\nDo our current (btree/hash) indexes support toast ?\n\nIf not, will they ?\n\n> \n> Hmmm - thinking about that it doesn't sound bad if we allways\n> create a secondary relation at CREATE TABLE time, but NOT the\n> index for it. And at VACUUM time we create the index if it\n> doesn't exist AND there is external stored data.\n\nIs there a plan to migrate to some combined index/database table for \nat least toast tables later ? \n\nFor at least toast tables it seems feasible to start using the \noriginally planned tuple-spanning mechanisms, unless we plan \nmigrating LOs to toast table at some point which would make index-less \ntuple chaining a bad idea as it would make seeking on really large \nLOs slow. 
\n\n> The table is prepared for external storage allways and we\n> avoid the risks from creating tables in possibly later\n> aborting transactions or due to concurrency issues. But we\n> don't waste the index space for really allways-small-tuple\n> tables.\n\nThat could perhaps be done for other tables too, ie CREATE INDEX \nwould not actually create index until VACUUM notices that table is \nbig enough to make use of that index ?\n\nOn second thought that seems not a good idea to me ;(\n\n> \n> Another benefit would be, that reloads should be faster\n> because with this technique, the toaster doesn't need to\n> insert index tuples during the load. The indices are created\n> later at VACUUM after reload.\n\nAFAIK reloads (from pg_dump at least) create indexes after LOAD'ing data\n\n> The toaster needs to use sequential scans on the external\n> table until the next vacuum run, but index usage allways\n> depends on vacuum so that's not a real issue from my PoV.\n> \n> At least a transparent compromise - isn't it?\n\nBut do we need it ?\n\nI suspect there are other issues that need your attention more than \ncomplicating table creation to save a few kb ;)\n\nCreating toast tables still wastes only 1MB per 64 tables _that have \ntoastable columns_, which seems real cheap considering today's HD\nprices.\n\nYou would need 6400 toast tables to consume 1% of the smallest currently \navailable (10GB) disk.\n\nIf that is a concern this can probably be cured by good docs that say \nin detail which datatypes cause toast tables an which don't.\n\n-----------\nHannu\n", "msg_date": "Wed, 31 May 2000 09:03:53 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Applying TOAST to CURRENT" }, { "msg_contents": "Hannu Krosing wrote:\n> > I've made some good experiences with having the toaster\n> > trying to keep the main tuple size below 1/4 of MaxTupleSize\n> > (BLKSIZE - block header).\n>\n> Can't _that_ behaviour be made modifyable by 
some setting ?\n\n Good point.\n\n There is already a fine tuning option per table attribute,\n where someone can tell things like \"forget about compression\n for this attribute\" or \"try keeping in main tuple and toast\n others first\". Theres no utility command up to now to\n customize them, but an UPDATE pg_attribute does it already.\n\n Seems another value in pg_class, telling the toaster what max\n size to try, would be a good idea.\n\n> What is the priority of checks on indexed fetch?\n>\n> I mean if we do \"SELECT * FROM ttable WHERE toasted LIKE 'ab%' \"\n>\n> DO we first scan by index to 'ab%', then check if tuple is live and\n> after that to the LIKE comparison ?\n\n That's the current behaviour, and TOAST doesn't change it.\n\n There was discussion already about index tuple toasting.\n Indices have different size constraints and other features so\n they cannot share exactly the same toasting scheme as heap\n tuples.\n\n I'm still not sure if supporting indices on huge values is\n worth the efford. Many databases have some limit on the size\n of index entries, and noone seems to really care for that.\n\n> > If no external table exists, the toaster will try the <2K\n> > thing by compression only. If the resulting tuple fits into\n> > the 8K limit, it's OK.\n>\n> Would it not be faster/cleaner to check some configuration variable\n> than the existance of toest table ?\n\n The toast tables and indexes OID are stored in pg_class. An\n open Relation has reference to the pg_class row, so it's\n simply comparing that to INVALID_OID. No wasted time here.\n\n> Do our current (btree/hash) indexes support toast ?\n\n Not hard tested yet. At least, they don't support it if\n toasting would be required to make the index tuple fit, but\n the heap toaster is already happy with it.\n\n The tuple is modified in place at heap_insert(). 
So the later\n index_insert() will use the Datums found there to build the\n index tuples, either plain or toast reference, whatever the\n toaster left.\n\n>\n> If not, will they ?\n\n Not planned for 7.1. Maybe we can workout a solution for\n unlimited index entries after that.\n\n> > Hmmm - thinking about that it doesn't sound bad if we allways\n> > create a secondary relation at CREATE TABLE time, but NOT the\n> > index for it. And at VACUUM time we create the index if it\n> > doesn't exist AND there is external stored data.\n>\n> Is there a plan to migrate to some combined index/database table for\n> at least toast tables later ?\n\n No. But we plan a general overwriting storage manager, so\n that might not be an issue at all.\n\n> For at least toast tables it seems feasible to start using the\n> originally planned tuple-spanning mechanisms, unless we plan\n> migrating LOs to toast table at some point which would make index-less\n> tuple chaining a bad idea as it would make seeking on really large\n> LOs slow.\n\n I've never seen a complete proposal for tuple-spanning. The\n toaster breaks up the large Datum into chunks. There is a\n chunk number, so modifying the index to be a multi-attribute\n one would gain direct access to a chunk. That should make\n seeks reasonably fast.\n\n> > Another benefit would be, that reloads should be faster\n> > because with this technique, the toaster doesn't need to\n> > insert index tuples during the load. The indices are created\n> > later at VACUUM after reload.\n>\n> AFAIK reloads (from pg_dump at least) create indexes after LOAD'ing data\n\n Finally the toast table will have another relkind, so it'll\n not be accessible by normal SQL. The toaster acts on these\n tables more hardwired like on system catalogs. It expects a\n fixed schema and uses direct heap access. 
Due to the\n different relkind, a dump wouldn't be able to delay the index\n creation.\n\n> But do we need it ?\n>\n> [...]\n>\n> You would need 6400 toast tables to consume 1% of the smallest currently\n> available (10GB) disk.\n>\n> If that is a concern this can probably be cured by good docs that say\n> in detail which datatypes cause toast tables an which don't.\n\n We plan to make ALL variable size builtin types toastable. So\n this list would name them all :-).\n\n But this 6400 = 1% really is the point. Let's forget about\n the 16K and create the toast table allways (as soon as the\n main table has toastable attributes).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Wed, 31 May 2000 18:11:20 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: Applying TOAST to CURRENT" } ]
[ { "msg_contents": "I have changed vacuum so analyze now uses AccessShareLock. (Is this the\nproper lock for analyze?)\n\nThe code will now vacuum all requested relations. It will then analyze\neach relation. This way, all the exclusive vacuum work is done first,\nthen analyze can continue with shared locks.\n\nThe code is much clearer with that functionality split into separate\nfunctions.\n\nHow separate do people want vacuum and analyze? Analyze currently does\nnot record the number of tuples and pages, because vacuum does that. Do\npeople want analyze as a separate command and in a separate file?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 May 2000 11:53:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Vacuum now uses AccessShareLock for analyze" }, { "msg_contents": "On Mon, 29 May 2000, Bruce Momjian wrote:\n\n> I have changed vacuum so analyze now uses AccessShareLock. (Is this the\n> proper lock for analyze?)\n> \n> The code will now vacuum all requested relations. It will then analyze\n> each relation. This way, all the exclusive vacuum work is done first,\n> then analyze can continue with shared locks.\n\nhrmmm, here's a thought ... why not vacuum->analyze each relation in\norder? the 'exclusive lock' will prevent anyone from reading, so do a\nrelation, release the lock to analyze that relation and let ppl access the\ndatabase ... then do the next ... 
instead of doing an exclusive lock for\nthe duration of the whole database ...\n\n\n", "msg_date": "Mon, 29 May 2000 13:12:56 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum now uses AccessShareLock for analyze" }, { "msg_contents": "> On Mon, 29 May 2000, Bruce Momjian wrote:\n> \n> > I have changed vacuum so analyze now uses AccessShareLock. (Is this the\n> > proper lock for analyze?)\n> > \n> > The code will now vacuum all requested relations. It will then analyze\n> > each relation. This way, all the exclusive vacuum work is done first,\n> > then analyze can continue with shared locks.\n> \n> hrmmm, here's a thought ... why not vacuum->analyze each relation in\n> order? the 'exclusive lock' will prevent anyone from reading, so do a\n> relation, release the lock to analyze that relation and let ppl access the\n> database ... then do the next ... instead of doing an exclusive lock for\n> the duration of the whole database ...\n\nNo, each table is locked one at a time. We do all the single-table\nlocks first so the rest is all shared access. Does that make sense?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 May 2000 12:13:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum now uses AccessShareLock for analyze" }, { "msg_contents": "On Mon, 29 May 2000, Bruce Momjian wrote:\n\n> > On Mon, 29 May 2000, Bruce Momjian wrote:\n> > \n> > > I have changed vacuum so analyze now uses AccessShareLock. (Is this the\n> > > proper lock for analyze?)\n> > > \n> > > The code will now vacuum all requested relations. It will then analyze\n> > > each relation. 
This way, all the exclusive vacuum work is done first,\n> > > then analyze can continue with shared locks.\n> > \n> > hrmmm, here's a thought ... why not vacuum->analyze each relation in\n> > order? the 'exclusive lock' will prevent anyone from reading, so do a\n> > relation, release the lock to analyze that relation and let ppl access the\n> > database ... then do the next ... instead of doing an exclusive lock for\n> > the duration of the whole database ...\n> \n> No, each table is locked one at a time. We do all the single-table\n> locks first so the rest is all shared access. Does that make sense?\n\nits what I suspected, but my point was that if we did the ANALYZE for the\nrelation right after the VACUUM for it, there would be a period of time\nwhere readers could come in and process ... think of it as a 'breather'\nbefore the next VACUUM starts, vs just jumping into the next ...\n\nOverall time for doing the vacuum shouldn't be any longer, but it would\ngive gaps where readers could get in and out ... we're a relational\ndatabase, so I imagine ppl are doing JOINs ... if RelationA is locked\nwhile ReaderA is trying to doign a JOIN between RA and RB, ReaderA is\ngonna be screwed ... if we did a quick ANALZE between RelationA and\nRelationB, then ReaderA would have a chance to do its processing while the\nANALYZE is running, instead of having to wait for both RelationA and\nRelationB to be finished ...\n\n", "msg_date": "Mon, 29 May 2000 13:40:47 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum now uses AccessShareLock for analyze" }, { "msg_contents": "> its what I suspected, but my point was that if we did the ANALYZE for the\n> relation right after the VACUUM for it, there would be a period of time\n> where readers could come in and process ... 
think of it as a 'breather'\n> before the next VACUUM starts, vs just jumping into the next ...\n> \n> Overall time for doing the vacuum shouldn't be any longer, but it would\n> give gaps where readers could get in and out ... we're a relational\n> database, so I imagine ppl are doing JOINs ... if RelationA is locked\n> while ReaderA is trying to doign a JOIN between RA and RB, ReaderA is\n> gonna be screwed ... if we did a quick ANALZE between RelationA and\n> RelationB, then ReaderA would have a chance to do its processing while the\n> ANALYZE is running, instead of having to wait for both RelationA and\n> RelationB to be finished ...\n\nGood point. Because we are only doing one table at a time, they could\nget in and do something, but they could also get part-way in and have\nanother table locked, and since we are only locking one at a time, it\nseemed better to get it all done first. Comments?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 May 2000 12:54:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum now uses AccessShareLock for analyze" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> The code will now vacuum all requested relations. It will then analyze\n> each relation. 
This way, all the exclusive vacuum work is done first,\n> then analyze can continue with shared locks.\n\nI agree with Marc: it'd make more sense to do it one table at a time,\nie,\n\tget exclusive lock on table A\n\tvacuum table A\n\tcommit, release lock\n\tget shared lock on table A\n\tgather stats for table A\n\tcommit, release lock\n\trepeat sequence for table B\n\tetc\n\n> The code is much clearer with that functionality split into separate\n> functions.\n\nWouldn't surprise me.\n\n> How separate do people want vacuum and analyze? Analyze currently does\n> not record the number of tuples and pages, because vacuum does that. Do\n> people want analyze as a separate command and in a separate file?\n\nWe definitely want a separate command that can invoke just the analyze\npart. I'd guess something like \"ANALYZE [ VERBOSE ] optional-table-name\n(optional-list-of-columns)\" pretty much like VACUUM.\n\nI would be inclined to move the code out to a new file, just because\nvacuum.c is so darn big, but that's purely a code-beautification issue.\n\nOn the number of tuples/pages issue, I'd suggest removing that function\nfrom plain vacuum and make the analyze part do it instead. It's always\nmade me uncomfortable that vacuum needs to update system relations while\nit's holding an exclusive lock on the table-being-vacuumed (which might\nbe another system catalog, or even pg_class itself). It works, more or\nless, but that update-tuple-in-place code is awfully ugly and\nfragile-looking. I'm also worried that there could be deadlock\nscenarios between concurrent vacuums (eg, one guy working on pg_class,\nanother on pg_statistic, both need to get in and update the other guy's\ntable. Oops. 
That particular problem should be gone with your changes,\nbut maybe there are still problems just from the need to update\npg_class).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 May 2000 13:05:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum now uses AccessShareLock for analyze " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > The code will now vacuum all requested relations. It will then analyze\n> > each relation. This way, all the exclusive vacuum work is done first,\n> > then analyze can continue with shared locks.\n> \n> I agree with Marc: it'd make more sense to do it one table at a time,\n> ie,\n> \tget exclusive lock on table A\n> \tvacuum table A\n> \tcommit, release lock\n> \tget shared lock on table A\n> \tgather stats for table A\n> \tcommit, release lock\n> \trepeat sequence for table B\n> \tetc\n\nOK, changed.\n\nI will work on the additional issues in the next week or so.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 May 2000 13:08:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum now uses AccessShareLock for analyze" }, { "msg_contents": "\nWhile we are at VACUUM, it would be a good idea to have VACUUM flag each 'vacuumed' table (with some sort of 'clean' flag) - this would permit faster vacuuming of mostly static databases, such as those that contain 'history' data in some tables.\n\nDaniel\n\n", "msg_date": "Mon, 29 May 2000 20:39:57 +0300", "msg_from": "Daniel Kalchev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum now uses AccessShareLock for analyze " }, { "msg_contents": "> > How separate do people want vacuum and analyze? 
Analyze currently does\n> > not record the number of tuples and pages, because vacuum does that. Do\n> > people want analyze as a separate command and in a separate file?\n> \n> We definitely want a separate command that can invoke just the analyze\n> part. I'd guess something like \"ANALYZE [ VERBOSE ] optional-table-name\n> (optional-list-of-columns)\" pretty much like VACUUM.\n\nOK.\n\n> \n> I would be inclined to move the code out to a new file, just because\n> vacuum.c is so darn big, but that's purely a code-beautification issue.\n\nDone.\n\n> \n> On the number of tuples/pages issue, I'd suggest removing that function\n> from plain vacuum and make the analyze part do it instead. It's always\n> made me uncomfortable that vacuum needs to update system relations while\n> it's holding an exclusive lock on the table-being-vacuumed (which might\n> be another system catalog, or even pg_class itself). It works, more or\n> less, but that update-tuple-in-place code is awfully ugly and\n> fragile-looking. I'm also worried that there could be deadlock\n> scenarios between concurrent vacuums (eg, one guy working on pg_class,\n> another on pg_statistic, both need to get in and update the other guy's\n> table. Oops. That particular problem should be gone with your changes,\n> but maybe there are still problems just from the need to update\n> pg_class).\n\nHow do I find the number of pages from heapscan? Can that number just\nbe computed from the file size. I can get the block number of the last\nentry in the scan, but that doesn't show me expired rows at the end.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 May 2000 13:54:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum now uses AccessShareLock for analyze" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> How do I find the number of pages from heapscan?\n\nIIRC, heap_beginscan updates the relcache entry with the current number\nof blocks (as determined the hard way, with lseek). Should be able to\nuse that, even though it might be a little bit out of date by the time\nyou finish the scan ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 May 2000 14:20:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum now uses AccessShareLock for analyze " } ]
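[Editor's note: Tom's one-table-at-a-time ordering, which Bruce adopted above, can be sketched abstractly. A minimal Python model of the lock pattern — purely illustrative, none of these names exist in the PostgreSQL sources — showing that each table is fully vacuumed and unlocked before the next one is touched:

```python
# Toy model of the per-table ordering described above:
#   exclusive lock -> vacuum -> commit/release,
#   shared lock    -> analyze -> commit/release,
#   then move on to the next table.
def vacuum_analyze(tables):
    """Return the (lock, action, table) event sequence for Tom's ordering."""
    events = []
    for t in tables:
        events.append(("exclusive", "vacuum", t))  # only t's readers blocked
        events.append(("shared", "analyze", t))    # t is readable again here
        # between tables nothing is locked at all -- Marc's "breather"
    return events

events = vacuum_analyze(["RelationA", "RelationB"])

# RelationA is finished (and its exclusive lock released) before RelationB
# is ever locked, so a join of A and B is never stuck behind exclusive
# locks on both relations at once.
assert events[1] == ("shared", "analyze", "RelationA")
assert events[2] == ("exclusive", "vacuum", "RelationB")
```

The point of the sketch is only the ordering: at no time are two relations under exclusive lock simultaneously.]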
[ { "msg_contents": "Tom, you mentioned you needed more system indexes. I would be glad to\ncreate them for you. Can you tell me which ones?\n\nAlso, I see a heap_getnext on pg_attribute in vacuum.c that should be\nusing index scan. Are there other places in the code where this needs\nto be changed?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 May 2000 12:08:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Additional system indexes" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, you mentioned you needed more system indexes. I would be glad to\n> create them for you. Can you tell me which ones?\n\nLet's see, we need an index on pg_index's indrelid column (non unique\nof course). Offhand I do not know of any others. I would like to get\n*rid* of the index(es) on pg_listener and revert async.c to its index-\nfree state; it seems unlikely that indexes on pg_listener will be worth\ntheir maintenance effort.\n\nAnother idea that had come up in that thread was to get rid of\npg_attrdef completely and put its info into two new columns in\npg_attribute. Not sure if anyone but me thought that'd be worth\nthe trouble.\n\n> Also, I see a heap_getnext on pg_attribute in vacuum.c that should be\n> using index scan. Are there other places in the code where this needs\n> to be changed?\n\nDunno; I haven't had time to go looking for suspicious heap_getnext\nloops.\n\nAnother thing we had discussed was to try to unify the APIs of the\nheap_getnext and index_getnext routines so that it could be fairly\ntransparent in calling code which one you are using. That'd allow\nsupport of Hiroshi's disable-system-indexes feature without so much\ncruft. 
If we are going to do that, it probably ought to happen before\nwe start adding more call sites that'll have to be fixed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 May 2000 12:48:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Additional system indexes " }, { "msg_contents": "> Another idea that had come up in that thread was to get rid of\n> pg_attrdef completely and put its info into two new columns in\n> pg_attribute. Not sure if anyone but me thought that'd be worth\n> the trouble.\n\nSeems worth removing.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 May 2000 13:52:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Additional system indexes" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Tom Lane\n> \n> Bruce Momjian <[email protected]> writes:\n> > Tom, you mentioned you needed more system indexes. I would be glad to\n> > create them for you. Can you tell me which ones?\n> \n> Another idea that had come up in that thread was to get rid of\n> pg_attrdef completely and put its info into two new columns in\n> pg_attribute. Not sure if anyone but me thought that'd be worth\n> the trouble.\n>\n\nI agree with you.\n \n> > Also, I see a heap_getnext on pg_attribute in vacuum.c that should be\n> > using index scan. Are there other places in the code where this needs\n> > to be changed?\n> \n> Dunno; I haven't had time to go looking for suspicious heap_getnext\n> loops.\n>\n\nThere seems to be some suspicious places e.g. 
in command.c.\nI haven't complained about it mainly because sequential\nscan was convenient for my REINDEX stuff.\n\nThough I myself use system indexes if appropriate ones\nexist, I've always been suspicious. I don't think that indexes\nare always best. Sequential scan is never slower than index\nscan for small tables. It is reliable even in case of index\ncorruption. System indexes may grow big but vacuum\ncouldn't shrink them as pointed out in another thread..\n... and so on.\n\n> Another thing we had discussed was to try to unify the APIs of the\n> heap_getnext and index_getnext routines so that it could be fairly\n> transparent in calling code which one you are using. That'd allow\n> support of Hiroshi's disable-system-indexes feature without so much\n> cruft. If we are going to do that, it probably ought to happen before\n> we start adding more call sites that'll have to be fixed.\n>\n\nYes, unification is required. The current separate routines seem to\nhave misled even the main developers about whether index scan is preferable.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n \n", "msg_date": "Tue, 30 May 2000 10:15:31 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Re: Additional system indexes " } ]
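[Editor's note: Hiroshi's claim that a sequential scan is never slower for small tables follows from page-fetch counts alone. A toy cost comparison in Python — the formulas and constants are invented for illustration and are nothing like the planner's actual costing:

```python
# Toy I/O-cost comparison: a sequential scan reads every heap page once;
# an index scan pays an index descent plus (at worst) one heap page fetch
# per matching tuple.
def seqscan_cost(heap_pages):
    # a sequential scan touches every heap page exactly once
    return heap_pages

def indexscan_cost(index_pages, matching_tuples):
    # index descent plus one heap page fetch per match, worst case
    return index_pages + matching_tuples

# A one-page system catalog: even a perfect one-page index cannot beat
# simply reading the single heap page.
assert seqscan_cost(1) <= indexscan_cost(1, 1)

# A large catalog with a selective lookup: the index wins decisively.
assert indexscan_cost(3, 5) < seqscan_cost(10_000)
```

Which is why the interesting question in the thread is not "index or not" but making the two access paths interchangeable behind one API.]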
[ { "msg_contents": "As an on-going process starting now I would like to begin cleaning up the\nconfiguration and build process, in particular to eventually have\n\n* Separate build directory\n\n* Automatic dependency tracking\n\n* Semi-automatic packaging\n\nas well as to\n\n* Remove a number of hacks, bogosities, and faults\n\n* Be ready for the next Autoconf release, coming soon\n\nbut most of all to\n\n* Make life easier for users and developers\n\nChanges will mostly be internal but to get most of this to work okay the\nconfigure script really needs to live in the top level directory of the\npackage.\n\nRationale:\n\n- This is where users expect it and everyone else has it.\n\n- Separate build directories will behave more sensical.\n\n- Would be nice if a top level makefile took care of the documentation\nbuild and install as well.\n\n- Automated packaging will really not work otherwise.\n\nIn order to not clutter the top level directory with stuff such as\n`config.guess', I suggest making a separate directory `config/' where to\nput all these things (incl. install-sh, mkinstalldirs, etc.), as there\nwill probably be more as we go along.\n\nThis layout is recommended by the Autoconf maintainers, btw.\n\nAny objections to this point?\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 29 May 2000 19:11:38 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Configuration and build clean-up" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> [ much good stuff ]\n\n> - Would be nice if a top level makefile took care of the documentation\n> build and install as well.\n\nYeah. 
The doc makefile already depends on configure having been run,\nand it's just plain weird that it's reaching over to a sibling directory\ninstead of being configured by a configure script above it.\n\n> - Automated packaging will really not work otherwise.\n\nWhat do you mean by automated packaging?\n\n> Any objections to this point?\n\nAll the specifics sound good to me. One thing to watch out for is that\nwe will probably still need something comparable to the current setup's\n\"template\" mechanism to select system-specific hints (like which tas()\nimplementation to use). Overall there is a lot of painstaking\nportability work embodied in the current setup; don't throw the baby out\nwith the bathwater while you're trying to make us fully Autoconf-ish.\n\nI didn't notice anything about libtool in your list of items, but\nI should think that replacing Makefile.shlib with libtool would be\na much better way of handling the various client shlibs. Is that\npart of your plan?\n\nBTW, I cleaned out a few Makefile bogosities yesterday, so be sure you\nare starting from current sources.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 May 2000 14:10:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuration and build clean-up " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n\n> As an on-going process starting now I would like to begin cleaning up the\n> configuration and build process, in particular to eventually have\n\nIf so, please fix this in the configure.in script:\n\nAC_CONFIG_AUX_DIR(`pwd`)\n\nThis doesn't work very well with libtool:\n\n[root@hoser src]# libtoolize \nRemember to add `AM_PROG_LIBTOOL' to `configure.in'.\nUsing `AC_PROG_RANLIB' is rendered obsolete by `AM_PROG_LIBTOOL'\nYou should add the contents of `/usr/share/aclocal/libtool.m4' to `aclocal.m4'.\nPutting files in AC_CONFIG_AUX_DIR, ``pwd`'.\n/usr/bin/libtoolize: `pwd`: Ingen slik fil eller filkatalog\n[root@hoser 
src]#\n\nlibtoolize (a part of libtool) isn't important on its own, but it's\nused as part of some automated buildsystems \n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "29 May 2000 14:59:26 -0400", "msg_from": "[email protected] (Trond Eivind=?iso-8859-1?q?_Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Configuration and build clean-up" }, { "msg_contents": "On Mon, 29 May 2000, Tom Lane wrote:\n\n> What do you mean by automated packaging?\n\nmake dist, integrating the release_prep thing into the makefiles.\nSomething like make rpm-spec would eventually be nice as well.\n\n> One thing to watch out for is that we will probably still need\n> something comparable to the current setup's \"template\" mechanism to\n> select system-specific hints (like which tas() implementation to use). \n\nYes, there's really no way around keeping that.\n\n> I didn't notice anything about libtool in your list of items,\n\nThe problem with libtool is this: it can't handle multiple languages at\nonce. That means that you'd have to build libpq and libpq++ with\nsufficiently similar compilers. I guess this is how it is already but at\nleast it would have to be carefully evaluated. One option would be to use\nlibtool for all C stuff and keep the current funny business for libpq++,\nor we could volunteer as early testers for libtool's multi-language\nbranch. All in all it looks like a separate undertaking.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 30 May 2000 12:04:10 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuration and build clean-up " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> I didn't notice anything about libtool in your list of items,\n\n> The problem with libtool is this: it can't handle multiple languages at\n> once. 
That means that you'd have to build libpq and libpq++ with\n> sufficiently similar compilers.\n\nHmm. The one thing that we really need libtool for is to deal with\ncross-shlib dependencies --- eg, arranging for libpq.so to be pulled\nin if libpq++.so is loaded. (Right now, at least on my platform,\nthis doesn't work... the client app has to mention both at link time.)\nI think we could assume that all the files in any one shlib are compiled\nwith the same compiler; is that enough, or does it still fall over?\n\nThe main place where I'm concerned about \"only one compiler\" is in pltcl\nand plperl. We have found that the most reliable way to build those is\nwith the compiler and switches that were used to build Tcl and Perl\nrespectively, not with the compiler/switches being used for the main\nPostgres build. One of the bits we have painstakingly got right is to\nmake this actually work (last time I tried, I could build these with\nHP's cc whilst Postgres was configured with gcc or vice versa). The\nsimplest answer for those two might be to leave well enough alone.\nThere's no real reason to libtoolize them that I can see, nor to try\nto fold them into a unified Postgres build environment.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 May 2000 10:22:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuration and build clean-up " } ]
[ { "msg_contents": "Enlighten me:\n\nWhy use #include \"header.h\" over #include <header.h> for exported interface\nheader files? I've read the man and info page, and understand the differences\nfrom a C preprocessor standpoint, so, suggestions to read those sources will be\npiped to /dev/null -- I'm looking for why _we_ do it one way over the other.\n\nThe reason I am asking is to see if anyone using the RPM's have had problems\n#include'ing our headers.... but, as well, to see just what the advantages of\n\"\" over <> are for our exported headers.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 29 May 2000 15:36:57 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Header File cleanup." }, { "msg_contents": "\nI'll give you a different source to read ... the archives :) This one has\nbeen bounced back and forth a few times already, with *at least* twice\nthat I can think of where to even shifted from one to the other and back\nagain *roll eyes*\n\nOn Mon, 29 May 2000, Lamar Owen wrote:\n\n> Enlighten me:\n> \n> Why use #include \"header.h\" over #include <header.h> for exported interface\n> header files? I've read the man and info page, and understand the differences\n> from a C preprocessor standpoint, so, suggestions to read those sources will be\n> piped to /dev/null -- I'm looking for why _we_ do it one way over the other.\n> \n> The reason I am asking is to see if anyone using the RPM's have had problems\n> #include'ing our headers.... but, as well, to see just what the advantages of\n> \"\" over <> are for our exported headers.\n> \n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 29 May 2000 19:01:11 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Header File cleanup." }, { "msg_contents": "On Mon, 29 May 2000, The Hermit Hacker wrote:\n> I'll give you a different source to read ... the archives :) This one has\n> been bounced back and forth a few times already, with *at least* twice\n\nWell, I had read in the archives some already, but, after getting your message,\nI looked again. And found src/tools/pginclude/pgfixinclude and friends.\n\nHOWEVER, for exported headers that are installed, this is suboptimal (not to\narbitrarily reopen a can of worms). There is a bug listed on RedHat's bugzilla\nabout this very issue. I'll check a little more, and I may have to undo\npgfixincludes handiwork for the exported headers packaged in the rpm -devel\nsubpackage.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 29 May 2000 19:41:46 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Header File cleanup." }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> Enlighten me:\n> Why use #include \"header.h\" over #include <header.h> for exported interface\n> header files?\n\nAs Marc mentioned, we've gone round on that before. I think the bias\nfor using \"\" is largely because it's convenient (or even necessary,\nin some scenarios) for building Postgres itself. I am not aware of\nany compelling arguments why <> would be better from the perspective\nof a client app trying to use already-installed Postgres header files\n--- if you know some reasons, let's hear 'em!\n\nI'm prepared to believe that the client's-eye view might favor something\ndifferent from the developer's-eye view. 
I think you were suggesting\nthat we might want to replace \"\" by <> in installed copies of the\nheaders. I'd support that if it were shown necessary, but I'd want to\nbe shown first...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 May 2000 02:06:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Header File cleanup. " } ]
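[Editor's note: the preprocessor-level difference underneath this thread: with `#include "header.h"`, most compilers search the directory of the including file before falling back to the `<...>` search path, while `#include <header.h>` searches only the path. (The C standard leaves the extra `""` locations implementation-defined; this is the common gcc-style behaviour.) A small Python model of that lookup order — the file names here are made up for illustration:

```python
# Model of the usual #include search order. 'fs' is a fake filesystem,
# just a set of full paths. Real compilers differ in details.
def resolve_include(name, quoted, including_dir, include_path, fs):
    """Return the path the preprocessor would pick, or None."""
    dirs = ([including_dir] if quoted else []) + include_path
    for d in dirs:
        candidate = f"{d}/{name}"
        if candidate in fs:
            return candidate
    return None

fs = {"src/interfaces/libpq/libpq-fe.h",   # in-tree copy
      "/usr/include/pgsql/libpq-fe.h"}     # installed copy
path = ["/usr/include/pgsql"]

# "" finds the sibling header first -- what building Postgres itself needs.
assert resolve_include("libpq-fe.h", True,
                       "src/interfaces/libpq", path, fs) \
       == "src/interfaces/libpq/libpq-fe.h"
# <> sees only the installed copy -- the client-app perspective.
assert resolve_include("libpq-fe.h", False,
                       "src/interfaces/libpq", path, fs) \
       == "/usr/include/pgsql/libpq-fe.h"
```

This is the crux of the developer's-eye vs client's-eye split Tom describes: the two forms only behave differently when an in-tree copy shadows the installed one.]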
[ { "msg_contents": "First of all congratulation to all the developers ...\n\nI would like to know if you are planning to implement the SQL fetch\ncommand in combination with memory variables ( fetch <cursor> into \n<variables list>; ). This would be useful for educational reasons.\n\nRegards from Greece\nAntony Sakellariou\nEducator\n", "msg_date": "Tue, 30 May 2000 00:00:54 +0300", "msg_from": "Antony Sakellariou <[email protected]>", "msg_from_op": true, "msg_subject": "Cursors" } ]
[ { "msg_contents": "> So while the parse/plan overhead looks kinda bad next to a bare COPY,\n> it's not anything like a 70:1 penalty. But an fsync per insert is\n\nIsn't it because of your table has 16 columns and my table has only 2?\n\n> that bad and worse.\n\nOf course -:)\n\n2Hiroshi - yes, I've used 6.5.3...\n\nVadim\n", "msg_date": "Mon, 29 May 2000 15:08:51 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Berkeley DB... " } ]
[ { "msg_contents": "\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 May 2000 20:59:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "#include cleanup" } ]
[ { "msg_contents": "I have removed some duplicate include files and did some cleanup. It\nhas been a while since this was done, and I hope I have learned how to\ndo this without causing any problems.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 May 2000 21:00:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "#include cleanup" }, { "msg_contents": "On Mon, 29 May 2000, Bruce Momjian wrote:\n> I have removed some duplicate include files and did some cleanup. It\n> has been a while since this was done, and I hope I have learned how to\n> do this without causing any problems.\n\nYou have _impeccable_ timing. :-)\n\nI'm assuming this was done only on CURRENT and not on REL7_0_PATCHES....\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 29 May 2000 21:43:01 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: #include cleanup" }, { "msg_contents": "> On Mon, 29 May 2000, Bruce Momjian wrote:\n> > I have removed some duplicate include files and did some cleanup. It\n> > has been a while since this was done, and I hope I have learned how to\n> > do this without causing any problems.\n> \n> You have _impeccable_ timing. :-)\n> \n> I'm assuming this was done only on CURRENT and not on REL7_0_PATCHES....\n> \n\nCurrent only, of course.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 May 2000 22:42:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: #include cleanup" } ]
[ { "msg_contents": "Is there a way in CVS to see only the logs of files in a certain branch?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 May 2000 21:47:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "cvs help with logs" }, { "msg_contents": "On Mon, 29 May 2000, Bruce Momjian wrote:\n\n> Is there a way in CVS to see only the logs of files in a certain branch?\n\nassuming that this is what you mean ... from 'info cvs':\n\n`-rREVISIONS'\n Print information about revisions given in the comma-separated\n list REVISIONS of revisions and ranges. The following table\n explains the available range formats:\n \n `REV1:REV2'\n Revisions REV1 to REV2 (which must be on the same branch).\n \n `:REV'\n Revisions from the beginning of the branch up to and\n including REV.\n \n `REV:'\n Revisions starting with REV to the end of the branch\n containing REV.\n \n `BRANCH'\n An argument that is a branch means all revisions on that\n branch.\n \n `BRANCH1:BRANCH2'\n A range of branches means all revisions on the branches in \n\n", "msg_date": "Mon, 29 May 2000 22:58:47 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cvs help with logs" }, { "msg_contents": "> On Mon, 29 May 2000, Bruce Momjian wrote:\n> \n> > Is there a way in CVS to see only the logs of files in a certain branch?\n> \n> assuming that this is what you mean ... from 'info cvs':\n> \n> `-rREVISIONS'\n> Print information about revisions given in the comma-separated\n> list REVISIONS of revisions and ranges. 
The following table\n> explains the available range formats:\n> \n> `REV1:REV2'\n> Revisions REV1 to REV2 (which must be on the same branch).\n> \n> `:REV'\n> Revisions from the beginning of the branch up to and\n> including REV.\n> \n> `REV:'\n> Revisions starting with REV to the end of the branch\n> containing REV.\n> \n> `BRANCH'\n> An argument that is a branch means all revisions on that\n> branch.\n> \n> `BRANCH1:BRANCH2'\n> A range of branches means all revisions on the branches in \n\nI have tried:\n\n\tcvs log -d '>2000-05-08 00:00:00 GMT' -r REL7_0_PATCHES . >log\n\nbut that give all files in all branches. If I do:\n\n\tcvs log -d '>2000-05-08 00:00:00 GMT' -rREL7_0_PATCHES . >/bjm/log \n\nwith no space after -r, I get only a few changes in the log, with\nmessages like:\n\n\tcvs server: warning: no revision `REL7_0_PATCHES' in\n\t`/home/projects/pgsql/cvsroot/pgsql/src/include/parser/Attic/catalog_utils.h,v'\n\nI am stumped.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 May 2000 22:12:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cvs help with logs]" } ]
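The cvs behaviour Bruce hit in the thread above is consistent with the documented option syntax: `cvs log -r` takes its branch/revision argument with no intervening space, while `-r TAG` is parsed as a bare `-r` (latest revision on the default branch) plus an extra file argument named TAG — which explains both symptoms he saw. A sketch of the invocation, with the branch tag and date taken from the thread:

```shell
# Log only revisions on the REL7_0_PATCHES branch newer than the date.
# No space between -r and the tag: "-r REL7_0_PATCHES" would be read as
# bare -r (default branch) plus a file argument named REL7_0_PATCHES.
cvs log -d '>2000-05-08 00:00:00 GMT' -rREL7_0_PATCHES . > /tmp/log 2>/dev/null

# Warnings of the form
#   cvs server: warning: no revision `REL7_0_PATCHES' in `...Attic/...'
# are expected for files that never existed on that branch; with stderr
# discarded as above, /tmp/log holds only the branch revisions, which is
# why the "few changes" output is in fact the right answer.
```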
[ { "msg_contents": "Sorry for the delay - 3 day weekend here in the UK ;-)\n\nAs usual when replying from here, replies prefixed with PM:\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council. \n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Friday, May 26, 2000 4:13 PM\nTo: Peter Mount\nCc: 'Gunnar R|nning'; [email protected]; PostgreSQL\nDevelopers List (E-mail)\nSubject: Re: [INTERFACES] Postgresql 7.0 JDBC exceptions - broken\nconnections ?\n\n\nPeter Mount <[email protected]> writes:\n> Unknown Response Type u\n\n> PM: Does anyone [on Hackers] know what the u code is for? The fact it's\n> in lower case tells me that the protocol/connection got broken somehow.\n\nThere is no 'u' message code. Looks to me like the client got out of\nsync with the backend and is trying to interpret data as the start of\na message.\n\nI think that this and the \"Tuple received before MetaData\" issue could\nhave a common cause, namely running out of memory on the client side\nand not recovering well. libpq is known to emit its equivalent of\n\"Tuple received before MetaData\" when the backend hasn't violated the\nprotocol at all. What happens is that libpq runs out of memory while\ntrying to accumulate a large query result, \"recovers\" by resetting\nitself to no-query-active state, and then is surprised when the next\nmessage is another tuple. (Obviously this error recovery plan needs\nwork, but no one's got round to it yet.) 
I wonder whether the JDBC\ndriver has a similar problem, and whether these queries could have\nbeen retrieving enough data to trigger it?\n\nPM: The protocol side of the JDBC driver was based on libpq, so it is\npossible that this sort of problem could manifest itself in JDBC.\n\nAnother possibility is that the client app is failing to release\nquery results when done with them, which would eventually lead to\nan out-of-memory condition even with not-so-large queries.\n\nPM: Garbage collection can be strange (and different on each platform).\nUnfortunately you can't guarantee that garbage collection will occur\nregularly (or when the VM's memory fills - normally 16Mb), or that it will\nrun at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 May 2000 07:46:28 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Postgresql 7.0 JDBC exceptions - broken connecti\n\tons ?" } ]
[ { "msg_contents": "Hi jeff,\nI'm not sure but may be that's because you are using select distinct and so \nthere would be a few rows with same \"gid\" but different \"created\" fields in \nyour table . And PG does not know which one to select and compare for ORDER \nBY clause. If that ,you would need to change the table structure to a better \nnormal form.\nRegards ,\nOmid Omoomi\n\n\n>From: Jeff MacDonald <[email protected]>\n>Reply-To: Jeff MacDonald <[email protected]>\n>To: [email protected], [email protected]\n>Subject: [SQL] 7.0 weirdness\n>Date: Tue, 30 May 2000 09:28:11 -0300 (ADT)\n>\n>hi folks,\n>\n>this query works fine in 6.5 but screwie in 7.0\n>\n>7.0\n>\n>gm=> SELECT DISTINCT gid FROM members\n>gm-> WHERE active = 't'\n>gm-> AND (gender = 0\n>gm-> AND (wantrstypemale LIKE '%Short Term%'\n>gm-> OR wantrstypemale like '%Marriage%'\n>gm-> OR wantrstypemale like '%Long Term%'\n>gm-> OR wantrstypemale like '%Penpal%'\n>gm-> OR wantrstypemale like '%Activity Partner%')\n>gm-> ) order by created desc;\n>ERROR: For SELECT DISTINCT, ORDER BY expressions must appear in target \n>list\n>gm=>\n>\n>\n>any idea's ?\n>\n>jeff\n>\n>\n>\n\n________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com\n\n", "msg_date": "Tue, 30 May 2000 06:46:45 PDT", "msg_from": "\"omid omoomi\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0 weirdness" }, { "msg_contents": "gid is unique.. it's a serial..\n\nfunny thing is tho this worked on 6.5\noh well thanks for the info.\n\n\njeff\n\nOn Tue, 30 May 2000, omid omoomi wrote:\n\n> Hi jeff,\n> I'm not sure but may be that's because you are using select distinct and so \n> there would be a few rows with same \"gid\" but different \"created\" fields in \n> your table . And PG does not know which one to select and compare for ORDER \n> BY clause. 
If that ,you would need to change the table structure to a better \n> normal form.\n> Regards ,\n> Omid Omoomi\n> \n> \n> >From: Jeff MacDonald <[email protected]>\n> >Reply-To: Jeff MacDonald <[email protected]>\n> >To: [email protected], [email protected]\n> >Subject: [SQL] 7.0 weirdness\n> >Date: Tue, 30 May 2000 09:28:11 -0300 (ADT)\n> >\n> >hi folks,\n> >\n> >this query works fine in 6.5 but screwie in 7.0\n> >\n> >7.0\n> >\n> >gm=> SELECT DISTINCT gid FROM members\n> >gm-> WHERE active = 't'\n> >gm-> AND (gender = 0\n> >gm-> AND (wantrstypemale LIKE '%Short Term%'\n> >gm-> OR wantrstypemale like '%Marriage%'\n> >gm-> OR wantrstypemale like '%Long Term%'\n> >gm-> OR wantrstypemale like '%Penpal%'\n> >gm-> OR wantrstypemale like '%Activity Partner%')\n> >gm-> ) order by created desc;\n> >ERROR: For SELECT DISTINCT, ORDER BY expressions must appear in target \n> >list\n> >gm=>\n> >\n> >\n> >any idea's ?\n> >\n> >jeff\n> >\n> >\n> >\n> \n> ________________________________________________________________________\n> Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com\n> \n\n", "msg_date": "Tue, 30 May 2000 11:10:24 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0 weirdness" }, { "msg_contents": "Hi,\n\nJeff MacDonald:\n> gid is unique.. it's a serial..\n> \nThen there is no point in using \"DISTINCT\" in the first place, is there?\n\n> funny thing is tho this worked on 6.5\n\nIt happened to work because your gid is unique. But in the general case,\nit can't work. Consider this table:\n\ngid created\n X 1\n Y 2\n X 3\n\nNow, should your query's result be\n\ngid\n X\n Y\n\nor should it be\n\ngid\n Y\n X\n\n? And since the typical implementation throws away non-selected-for\ncolumns before UNIQUEing, how should it be able to sort anything?\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. 
| http://smurf.noris.de/\n-- \nProblem mit cookie: File exists \n", "msg_date": "Tue, 30 May 2000 16:31:35 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: 7.0 weirdness" }, { "msg_contents": "Jeff MacDonald <[email protected]> writes:\n> gid is unique.. it's a serial..\n\nMph. If you assume that gid is unique then the query would give\nwell-defined results, but if you know it's unique then why don't\nyou just leave off the DISTINCT?\n\n> funny thing is tho this worked on 6.5\n\nNo, 6.5 merely failed to notice that it was giving you undefined\nresults.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 May 2000 10:44:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0 weirdness " }, { "msg_contents": "Hi Jeff!\n\nI think you need a solution, and not explains...\nTom, and the others told the truth. You missed this query.\n\n> gid is unique.. it's a serial..\nI give you two ways:\n\n1) gid __realy__ unique -> DISTINCT is unnecessary.\n SELECT gid FROM members -- ... etc \n\n2) gid not unique -> DISTINCT is not enough. ;(\n SELECT gid,MAX(created) -- or MIN or AVG ... any aggregate\n\tFROM members -- ... 
etc\n GROUP BY gid ORDER BY 2; -- second colunm\n\n> > >gm=> SELECT DISTINCT gid FROM members\n> > >gm-> WHERE active = 't'\n> > >gm-> AND (gender = 0\n> > >gm-> AND (wantrstypemale LIKE '%Short Term%'\n> > >gm-> OR wantrstypemale like '%Marriage%'\n> > >gm-> OR wantrstypemale like '%Long Term%'\n> > >gm-> OR wantrstypemale like '%Penpal%'\n> > >gm-> OR wantrstypemale like '%Activity Partner%')\n> > >gm-> ) order by created desc;\n> > >ERROR: For SELECT DISTINCT, ORDER BY expressions must appear in target \n\nbest regards\n--\n nek;(\n\n", "msg_date": "Wed, 31 May 2000 08:45:08 +0200 (CEST)", "msg_from": "Peter Vazsonyi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0 weirdness" }, { "msg_contents": "thanks for the hlep guys..\n\nfor those that are curious, the distinct is tehr cause it's\nsomeone elses code that i'm workig on .. :) have to kick\nout the bug's//\n\njeff\n\nOn Tue, 30 May 2000, Matthias Urlichs wrote:\n\n> Hi,\n> \n> Jeff MacDonald:\n> > gid is unique.. it's a serial..\n> > \n> Then there is no point in using \"DISTINCT\" in the first place, is there?\n> \n> > funny thing is tho this worked on 6.5\n> \n> It happened to work because your gid is unique. But in the general case,\n> it can't work. Consider this table:\n> \n> gid created\n> X 1\n> Y 2\n> X 3\n> \n> Now, should your query's result be\n> \n> gid\n> X\n> Y\n> \n> or should it be\n> \n> gid\n> Y\n> X\n> \n> ? And since the typical implementation throws away non-selected-for\n> columns before UNIQUEing, how should it be able to sort anything?\n> \n> -- \n> Matthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\n> The quote was selected randomly. Really. | http://smurf.noris.de/\n> -- \n> Problem mit cookie: File exists \n> \n\n", "msg_date": "Wed, 31 May 2000 16:00:55 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: 7.0 weirdness" } ]
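Peter Vazsonyi's second variant is easy to sanity-check against any SQL engine; the sketch below uses Python's bundled sqlite3 module with made-up data (only the gid/created columns from the thread are modelled):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE members (gid INTEGER, created INTEGER)")
# gid 1 appears twice with different "created" values -- exactly the case
# where SELECT DISTINCT gid ... ORDER BY created is ambiguous.
con.executemany("INSERT INTO members VALUES (?, ?)",
                [(1, 1), (2, 2), (1, 3)])

# One row per gid, ordered by the newest "created" per group, following
# Peter Vazsonyi's GROUP BY suggestion.
rows = con.execute("""
    SELECT gid, MAX(created) AS latest
    FROM members
    GROUP BY gid
    ORDER BY 2 DESC
""").fetchall()
print(rows)   # [(1, 3), (2, 2)]
```

The GROUP BY form yields one row per gid with a well-defined sort key, which is precisely what the rejected `SELECT DISTINCT ... ORDER BY created` could not guarantee.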
[ { "msg_contents": "Hi,\n\nI'm trying to convert an application from MS SQL / ASP / IIS to\nPostgreSQL / PHP / Apache. I am having trouble getting efficient\nqueries on one of my main tables, which tends to have some fairly large\nrecords in it. Currently there are around 20000 records, and it looks\nlike they average around 500 bytes from the VACUUM ANALYZE statistics\nbelow.\n\nI don't really want any query on this table to return more than about 20\nrecords, so it seems to me that indexed access should be the answer, but\nI am having some problems with indexes containing BOOLEAN types.\n\nI can't see any reason why BOOL shouldn't work in an index, and in other\nsystems I have commonly used them as the first component of an index,\nwhich is what I want to do here.\n\nAlso, I can't see why the estimator should see a difference between\n\"WHERE head1\" and \"WHERE head1=TRUE\".\n\nAny help appreciated,\n\t\t\t\t\tAndrew.\n\n\nnewsroom=# \\d story\n Table \"story\"\n Attribute | Type | Modifier \n--------------+-----------+-------------------\n story_id | integer | not null\n author | integer | \n written | timestamp | \n released | timestamp | \n withdrawn | timestamp | \n sent | timestamp | \n wcount | integer | default 0\n chunk_count | integer | \n head1 | boolean | default 'f'::bool\n headpriority | integer | default 999\n internal | boolean | default 'f'::bool\n islive | boolean | default 'f'::bool\n story_type | char(4) | \n title | text | \n precis | text | \nIndices: story_oid_skey,\n story_pkey,\n story_sk1,\n story_sk2,\n story_sk4\n\nnewsroom=# \\d story_sk4\n Index \"story_sk4\"\n Attribute | Type \n-----------+-----------\n head1 | boolean\n written | timestamp\nbtree\n\nnewsroom=# explain SELECT DISTINCT story.story_id, written, released,\ntitle, precis, author, head1 FROM story WHERE head1 ORDER BY written\nDESC LIMIT 15;\nNOTICE: QUERY PLAN:\n\nUnique (cost=2623.87..2868.99 rows=1401 width=49)\n -> Sort (cost=2623.87..2623.87 rows=14007 width=49)\n 
-> Seq Scan on story (cost=0.00..1421.57 rows=14007 width=49)\n\nEXPLAIN\n\nnewsroom=# set enable_seqscan to 'off';\nSET VARIABLE\n\nnewsroom=# explain SELECT DISTINCT story.story_id, written, released,\ntitle, precis, author, head1 FROM story WHERE head1 ORDER BY written\nDESC LIMIT 15;\nNOTICE: QUERY PLAN:\n\nUnique (cost=100002623.87..100002868.99 rows=1401 width=49)\n -> Sort (cost=100002623.87..100002623.87 rows=14007 width=49)\n -> Seq Scan on story (cost=100000000.00..100001421.57\nrows=14007 width=49)\n\nEXPLAIN\n\nnewsroom=# explain SELECT DISTINCT story.story_id, written, released,\ntitle, precis, author FROM story WHERE head1=TRUE LIMIT 15;\nNOTICE: QUERY PLAN:\n\nUnique (cost=8846.22..9056.33 rows=1401 width=48)\n -> Sort (cost=8846.22..8846.22 rows=14007 width=48)\n -> Index Scan using story_sk4 on story (cost=0.00..7645.97\nrows=14007 width=48)\n\nEXPLAIN\n\nnewsroom=# vacuum verbose analyze story;\nNOTICE: --Relation story--\nNOTICE: Pages 1238: Changed 0, reaped 0, Empty 0, New 0; Tup 18357: Vac\n0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 84, MaxLen 3115; Re-using:\nFree/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.16s/1.90u sec.\nNOTICE: Index story_oid_skey: Pages 39; Tuples 18357. CPU 0.00s/0.07u\nsec.\nNOTICE: Index story_sk4: Pages 94; Tuples 18357. CPU 0.01s/0.08u sec.\nNOTICE: Index story_sk2: Pages 51; Tuples 18357. CPU 0.01s/0.08u sec.\nNOTICE: Index story_sk1: Pages 70; Tuples 18357. CPU 0.02s/0.06u sec.\nNOTICE: Index story_pkey: Pages 59; Tuples 18357. 
CPU 0.02s/0.06u sec.\nVACUUM\n\n\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: [email protected]\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n", "msg_date": "Tue, 30 May 2000 23:23:44 +1200", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": true, "msg_subject": "Using BOOL in indexes" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Andrew McMillan\n> \n> Hi,\n> \n> I'm trying to convert an application from MS SQL / ASP / IIS to\n> PostgreSQL / PHP / Apache. I am having trouble getting efficient\n> queries on one of my main tables, which tends to have some fairly large\n> records in it. Currently there are around 20000 records, and it looks\n> like they average around 500 bytes from the VACUUM ANALYZE statistics\n> below.\n> \n> I don't really want any query on this table to return more than about 20\n> records, so it seems to me that indexed access should be the answer, but\n> I am having some problems with indexes containing BOOLEAN types.\n> \n> I can't see any reason why BOOL shouldn't work in an index, and in other\n> systems I have commonly used them as the first component of an index,\n> which is what I want to do here.\n>\n> Also, I can't see why the estimator should see a difference between\n> \"WHERE head1\" and \"WHERE head1=TRUE\".\n> \n> \n> newsroom=# explain SELECT DISTINCT story.story_id, written, released,\n> title, precis, author, head1 FROM story WHERE head1 ORDER BY written\n\nPlease add head1 to ORDER BY clause i.e. ORDER BY head1,written.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Wed, 31 May 2000 10:52:38 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Using BOOL in indexes" } ]
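Hiroshi's suggestion, spelled out: make the ORDER BY clause match the column order of the story_sk4 index so the sort can come straight off an index scan. A hypothetical rewrite of Andrew's query (untested against 7.0; whether the planner actually chooses the backward index scan here is the open question of the thread):

```sql
-- story_sk4 is on (head1, written).  With head1 pinned by the WHERE
-- clause, adding it to ORDER BY doesn't change the result, but it lets
-- the planner satisfy the sort by walking story_sk4 instead of doing a
-- seqscan plus sort.
SELECT DISTINCT story_id, written, released, title, precis, author, head1
FROM story
WHERE head1 = TRUE
ORDER BY head1 DESC, written DESC
LIMIT 15;
```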
[ { "msg_contents": "\nI have looked (briefly) through the general, sql, and hackers archives,and\ncould not find anything the addressed the ability to rename a database.\nMost of the posts resorted to using pg_dump to rebuild the database under a\nnew name.\n\nI appreciate that (AFAIK) SQL does not have 'alter schema rename', but it\nis something I would find very useful: I have a web site that is 99.9%\nread-only, where bulk updates are sent periodically and can take a lot of\ntime to run, so what I do is apply the changes (or rebuild the db) under\nanother name. After this is done, I would like to rename the current DB,\nand put the new one in it's place. Currently this is fine with the 100,000\nor so records it contains, but the projections are for 5M+ records within\nfour years.\n\nI strongly suspect that renaming the directory will not work, and besides,\nI have a strong aversion to making structural changes to files that are\nlogically part of the database.\n\nSo the questions are: \n\n1) is is there a place for a pg_rename utility? \n\n2) would it be a difficult thing to write?\n\n3) is this better handled by psql.\n\nI suspect the answer to (3) is 'NO', since moving databases around seems to\nbe something best done without a backend attached.\n\nAny and all (polite) comments welcome.\n\nP.S. Before anybody suggests it, I currently build the new db elsewhere,\nthen do a pg_dump, delete the current DB, and load from the dump file. This\nis fine for the relatively small amount of data currently in the db.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 30 May 2000 21:35:21 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Rename database?" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> I have looked (briefly) through the general, sql, and hackers archives,and\n> could not find anything the addressed the ability to rename a database.\n\nNope, we haven't got it.\n\nAs long as there are no backends running in the DB, I think it'd just\nbe a matter of renaming the subdirectory of data/base/ and updating the\npg_database row with the new name. You could do that manually if you\nare comfortable with assuming that no one is connected to the DB while\nyou do it.\n\n> 1) is is there a place for a pg_rename utility? \n\nIt could not be a standalone utility because it'd have no way to\ninterlock against running backends. It'd have to be implemented as\nan SQL statement and use the same interlock method that DROP DATABASE\ndoes.\n\n> 2) would it be a difficult thing to write?\n\nProbably not too tough if you used DROP DATABASE as a model.\n\nBear in mind though that the whole issue might go away, depending on\nwhat happens with the tablespace/schema/physical-file-name-conventions\ndiscussions. Might want to see how that plays out before expending much\nwork on it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 May 2000 11:27:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rename database? " } ]
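Tom's manual recipe, written out as a sketch. This is emphatically not a supported operation: it assumes no backend is connected to the database, a 7.0-style layout where data/base/ holds one subdirectory named after each database, and shell access as the postgres user; depending on the version, the pg_database row's datpath field may need the same update as datname.

```shell
# 1. With the database idle, rename its directory:
mv $PGDATA/base/olddb $PGDATA/base/newdb

# 2. Then, connected to template1 as the superuser, fix the catalog row:
#      UPDATE pg_database SET datname = 'newdb' WHERE datname = 'olddb';
```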
[ { "msg_contents": "\n> > Imho this is an area where it does make sense to look at what other\n> > db's do, because it makes the toolwriters life so much easier if pg\n> > behaves like some other common db.\n> \n> The defined interface to the privilege system is GRANT, REVOKE, and\n> \"access denied\" (and a couple of INFORMATION_SCHEMA views, \n> eventually).\n> I don't see how other db's play into this.\n\nOf course the grant revoke is the same. But administrative tools usually\nallow you to dump schema, all rights, triggers ... for an object and thus\nneed \naccess to the system tables containing the grants.\n\n> \n> > Other db's usually use a char array for priaction and don't have\n> > priisgrantable, but code it into priaction. Or they use a bitfield.\n> > This has the advantage of only producing one row per table.\n> \n> That's the price I'm willing to pay for abstraction, \n> extensibility, and\n> verifyability. But I'm open for better ideas.\n\nImho this is an area that is extremly sensitive to performance,\nthe rights have to be checked for each access.\n\nAndreas\n", "msg_date": "Tue, 30 May 2000 13:39:17 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Proposal for enhancements of privilege system" }, { "msg_contents": "\nOn Tue, 30 May 2000, Zeugswetter Andreas SB wrote:\n\n> > \n> > > Other db's usually use a char array for priaction and don't have\n> > > priisgrantable, but code it into priaction. Or they use a bitfield.\n> > > This has the advantage of only producing one row per table.\n> > \n> > That's the price I'm willing to pay for abstraction, \n> > extensibility, and\n> > verifyability. But I'm open for better ideas.\n> \n> Imho this is an area that is extremly sensitive to performance,\n> the rights have to be checked for each access.\n\n Yes, but I believe that Peter's idea is good. 
System tables are used for\neach access not only for ACL, and performance problem is a problem for\nsystem cache not primary for privilege system.\n\n I look forward set privilege for columns and functions. Large multiuser\nprojects need it.\n\n\t\t\t\t\t\t\tKarel\n\n", "msg_date": "Tue, 30 May 2000 13:46:09 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Proposal for enhancements of privilege system" }, { "msg_contents": "On Tue, 30 May 2000, Zeugswetter Andreas SB wrote:\n\n> Of course the grant revoke is the same. But administrative tools\n> usually allow you to dump schema, all rights, triggers ... for an\n> object and thus need access to the system tables containing the\n> grants.\n\nThat's what you use the information schema views for. Also, of course,\nwe're light years away from having anything like a portable pg_dump.\n\n> Imho this is an area that is extremly sensitive to performance, the\n> rights have to be checked for each access.\n\nBut using some sort of arrays is going to make it slower in any case since\nyou can't use indexes on those.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 30 May 2000 13:49:40 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Proposal for enhancements of privilege system" } ]
[ { "msg_contents": "hi folks,\n\nthis query works fine in 6.5 but screwie in 7.0\n\n7.0 \n\ngm=> SELECT DISTINCT gid FROM members\ngm-> WHERE active = 't'\ngm-> AND (gender = 0\ngm-> AND (wantrstypemale LIKE '%Short Term%'\ngm-> OR wantrstypemale like '%Marriage%'\ngm-> OR wantrstypemale like '%Long Term%'\ngm-> OR wantrstypemale like '%Penpal%'\ngm-> OR wantrstypemale like '%Activity Partner%')\ngm-> ) order by created desc;\nERROR: For SELECT DISTINCT, ORDER BY expressions must appear in target list\ngm=> \n\n\nany idea's ?\n\njeff\n\n\n\n", "msg_date": "Tue, 30 May 2000 09:28:11 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": true, "msg_subject": "7.0 weirdness" }, { "msg_contents": "It seems to me that it was lack of control in 6.5 version...\nFor one \"gid\", you may have several \"created\" values, so Postgres is not\nable to decide which value must be taken and ordered\n\nSimple example\ngid created\n1 1\n1 3\n2 2\n\nIn which order is Postgres supposed to give the data???\n\n\nPatrick Fiche\n-----Message d'origine-----\nDe : [email protected] [mailto:[email protected]]De la part\nde Jeff MacDonald\nEnvoy� : mardi 30 mai 2000 14:28\n� : [email protected]; [email protected]\nObjet : [SQL] 7.0 weirdness\n\n\nhi folks,\n\nthis query works fine in 6.5 but screwie in 7.0\n\n7.0\n\ngm=> SELECT DISTINCT gid FROM members\ngm-> WHERE active = 't'\ngm-> AND (gender = 0\ngm-> AND (wantrstypemale LIKE '%Short Term%'\ngm-> OR wantrstypemale like '%Marriage%'\ngm-> OR wantrstypemale like '%Long Term%'\ngm-> OR wantrstypemale like '%Penpal%'\ngm-> OR wantrstypemale like '%Activity Partner%')\ngm-> ) order by created desc;\nERROR: For SELECT DISTINCT, ORDER BY expressions must appear in target list\ngm=>\n\n\nany idea's ?\n\njeff\n\n\n\n\n", "msg_date": "Tue, 30 May 2000 15:17:38 +0200", "msg_from": "\"Patrick FICHE\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: 7.0 weirdness" }, { "msg_contents": "gid is 
unique..\n\njeff\n\nOn Tue, 30 May 2000, Patrick FICHE wrote:\n\n> It seems to me that it was lack of control in 6.5 version...\n> For one \"gid\", you may have several \"created\" values, so Postgres is not\n> able to decide which value must be taken and ordered\n> \n> Simple example\n> gid created\n> 1 1\n> 1 3\n> 2 2\n> \n> In which order is Postgres supposed to give the data???\n> \n> \n> Patrick Fiche\n> -----Message d'origine-----\n> De : [email protected] [mailto:[email protected]]De la part\n> de Jeff MacDonald\n> Envoyé : mardi 30 mai 2000 14:28\n> À : [email protected]; [email protected]\n> Objet : [SQL] 7.0 weirdness\n> \n> \n> hi folks,\n> \n> this query works fine in 6.5 but screwie in 7.0\n> \n> 7.0\n> \n> gm=> SELECT DISTINCT gid FROM members\n> gm-> WHERE active = 't'\n> gm-> AND (gender = 0\n> gm-> AND (wantrstypemale LIKE '%Short Term%'\n> gm-> OR wantrstypemale like '%Marriage%'\n> gm-> OR wantrstypemale like '%Long Term%'\n> gm-> OR wantrstypemale like '%Penpal%'\n> gm-> OR wantrstypemale like '%Activity Partner%')\n> gm-> ) order by created desc;\n> ERROR: For SELECT DISTINCT, ORDER BY expressions must appear in target list\n> gm=>\n> \n> \n> any idea's ?\n> \n> jeff\n> \n> \n> \n> \n\n", "msg_date": "Tue, 30 May 2000 10:46:30 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": true, "msg_subject": "RE: 7.0 weirdness" } ]
[ { "msg_contents": "\n> > Of course the grant revoke is the same. But administrative tools\n> > usually allow you to dump schema, all rights, triggers ... for an\n> > object and thus need access to the system tables containing the\n> > grants.\n> \n> That's what you use the information schema views for. \n\nOk.\n\n> Also, of course,\n> we're light years away from having anything like a portable pg_dump.\n\nHmm ? I am not talking about pg_dump, I am talking about some graphical tool\nthat shows the table structure and grants.\n\n> \n> > Imho this is an area that is extremly sensitive to performance, the\n> > rights have to be checked for each access.\n> \n> But using some sort of arrays is going to make it slower in \n> any case since\n> you can't use indexes on those.\n\nAgain Hmm ? Are you going to do select * from <authtable> where pri=\"select\"\nor some such ? Usually you look up a users rights for a specific table,\nand that needs to be fast.\n\nAndreas\n", "msg_date": "Tue, 30 May 2000 15:39:28 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: Proposal for enhancements of privilege system" }, { "msg_contents": "Zeugswetter Andreas SB writes:\n\n> Again Hmm ? Are you going to do select * from <authtable> where pri=\"select\"\n> or some such ? Usually you look up a users rights for a specific table,\n> and that needs to be fast.\n\nExactly, that's why I have to do it like this. To interface a system\ncatalog to the shared cache you need a primary key, which would be\n(object, user, action) in my proposal. 
With that setup I can easily make\nqueries of the sort \"does user X have select right on table Y\" as fast as\npossible, no slower than, say, looking up an attribute definition in\npg_attribute.\n\nWith several privileges per row you make the table unnecessarily sparse,\nyou make interfacing to the catalog cache a nightmare, and you create all\nsorts of funny implementation problems (for example, revoking a privilege\nmight be an update or a delete, depending on whether it was the last\nprivilege revoked).\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Fri, 2 Jun 2000 02:37:12 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: AW: Proposal for enhancements of privilege system" }, { "msg_contents": "> > Again Hmm ? Are you going to do select * from <authtable> where pri=\"select\"\n> > or some such ? Usually you look up a users rights for a specific table,\n> > and that needs to be fast.\n> \n> Exactly, that's why I have to do it like this. To interface a system\n> catalog to the shared cache you need a primary key, which would be\n> (object, user, action) in my proposal. 
With that setup I can easily make\n> queries of the sort \"does user X have select right on table Y\" as fast as\n> possible, no slower than, say, looking up an attribute definition in\n> pg_attribute.\n\nOk, I see that you will somtimes want to do a select like that, only I do \nnot see the reason why this has to be the primary target for speed.\nRemember that for each row in the db you have >30 bytes of overhead\n(I forgot the exact number) plus table_oid + user_oid thus if a user has \nall permissions on a table, that will take 300 bytes.\nI also think that a key of object + {user|group} is imho selective enough,\nyou don't want a key whose only info is a boolean.\n\nAndreas\n\n", "msg_date": "Sun, 4 Jun 2000 13:33:53 +0200", "msg_from": "\"Zeugswetter Andreas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: AW: Proposal for enhancements of privilege system" }, { "msg_contents": "\"Zeugswetter Andreas\" <[email protected]> writes:\n>> Exactly, that's why I have to do it like this. To interface a system\n>> catalog to the shared cache you need a primary key, which would be\n>> (object, user, action) in my proposal. 
With that setup I can easily make\n>> queries of the sort \"does user X have select right on table Y\" as fast as\n>> possible, no slower than, say, looking up an attribute definition in\n>> pg_attribute.\n\n> Ok, I see that you will somtimes want to do a select like that, only I do \n> not see the reason why this has to be the primary target for speed.\n> Remember that for each row in the db you have >30 bytes of overhead\n> (I forgot the exact number) plus table_oid + user_oid thus if a user has \n> all permissions on a table, that will take 300 bytes.\n> I also think that a key of object + {user|group} is imho selective enough,\n> you don't want a key whose only info is a boolean.\n\nI tend to agree with Andreas on this: having a separate tuple for each\nindividual kind of access right will consume an unreasonable amount of\nspace --- both on disk and in the syscache, if a cache is used for this\ntable. (In the cache, that translates to entries not living very long\nbefore they fall off the LRU list.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 04 Jun 2000 14:47:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: AW: Proposal for enhancements of privilege system " }, { "msg_contents": "Tom Lane writes:\n\n> having a separate tuple for each individual kind of access right will\n> consume an unreasonable amount of space --- both on disk and in the\n> syscache, if a cache is used for this table.\n\nThat's a valid concern, but unfortunately things aren't that easy. 
For\neach access right you also have to store what user granted that privilege\nand whether it's grantable, and for SELECT also whether it includes the\n\"hierarchy option\" (has to do with table inheritance somehow).\n\nSay you store all privileges in an array, then you'd either need to encode\nall 3 1/2 pieces of information into one single data type and make an\narray thereof (like `array of record privtype; privgrantor;\nprivgrantable'), which doesn't really make things easier, or you have\nthree arrays per tuple, which makes things worse. Also querying arrays is\npainful.\n\nSo the alternative is to have separate columns per privilege, like\n\npg_privilege ( priobj, prigrantee,\n\tpriupdate, priupdateisgrantable, priupdategrantor,\n\tpriselect, priselectisgrantable, priselectgrantor,\n\t ... /* delete, insert, references */\n)\n\nThe originally proposed schema would be 14 bytes data plus overhead. This\nnew idea would cost 38 bytes of data. As I understand, the overhead is 40\nbytes. So the break-even point for this new scheme is when users have on\naverage at least 1.4 privileges (78/54) granted to them on one object.\nConsidering that such objects as types and functions will in any case have\nat most one privilege (USAGE or EXECUTE, resp.), that there are groups (or\nroles), that column level privileges will probably tend to have sparse\ntuples of this kind, and that object owners are short-circuited in any\ncase, then it is not at all clear whether that figure will be reached.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 6 Jun 2000 03:36:06 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: AW: Proposal for enhancements of privilege system" } ]
[ { "msg_contents": "\n> > > > Other db's usually use a char array for priaction and don't have\n> > > > priisgrantable, but code it into priaction. Or they use \n> a bitfield.\n> > > > This has the advantage of only producing one row per table.\n> > > \n> > > That's the price I'm willing to pay for abstraction, \n> > > extensibility, and\n> > > verifiability. But I'm open for better ideas.\n> > \n> > Imho this is an area that is extremely sensitive to performance,\n> > the rights have to be checked for each access.\n> \n> Yes, but I believe that Peter's idea is good. System tables \n> are used for\n> each access not only for ACL, and performance problem is a problem for\n> system cache not primary for privilege system.\n\nYes I totally agree, that the basic idea is great, all I am saying is, that\nI would \n1. gather more than one privilege per table into one row (all of: select,\ninsert, update ...)\n2. try to look at some existing table structure from one biggie db and see\nif it fits\n\nAndreas\n", "msg_date": "Tue, 30 May 2000 15:44:11 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: Proposal for enhancements of privilege system" }, { "msg_contents": "On Tue, 30 May 2000, Zeugswetter Andreas SB wrote:\n\n> > Yes, but I believe that Peter's idea is good. System tables \n> > are used for\n> > each access not only for ACL, and performance problem is a problem for\n> > system cache not primary for privilege system.\n> \n> Yes I totally agree, that the basic idea is great, all I am saying is, that\n> I would \n> 1. gather more than one privilege per table into one row (all of: select,\n> insert, update ...)\n\n I discussed this idea with Peter some months ago via private mails (Peter \nhas big patience .. :-) and we already made some calculations about it. 
\n\n * needful ACL data for one object will be very small and not use much memory \n in cache, \n * in one moment you need information about one object and one privilege\n type. SELECT/UPDATE/etc in one row is not needful, if you run SELECT you\n need information about priv. for select only. \n * it is very easy extendible, is not defined some special position in some\n string or some special column for (example) SELECT. You can in future add\n new privilege element. \n\n> 2. try to look at some existing table structure from one biggie db and see\n> if it fits\n\n See pg_attribute --- here is a very similar situation, but it is larger.\n\n\t\t\t\t\t\t\tKarel\n \n\n\n", "msg_date": "Tue, 30 May 2000 16:08:31 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: AW: Proposal for enhancements of privilege system" } ]
[ { "msg_contents": "I am still stuck. I can't figure out how to get a log of just one file from\na CVS branch. Anyone?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 May 2000 12:33:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "CVS log of only one branch" } ]
[ { "msg_contents": "Marc has explained this to me. Seems I have to get a log of all the\nchanges from the 7.0 release until we created the REL7_0_PATCHES branch,\nthen get the changes added just to REL7_0_PATCHES.\n\nNow that it is explained to me, it makes perfect sense. Thanks. I will\nget on it now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 May 2000 14:14:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "CVS log problem" }, { "msg_contents": "> Marc has explained this to me. Seems I have to get a log of all the\n> changes from the 7.0 release until we created the REL7_0_PATCHES branch,\n> then get the changes added just to REL7_0_PATCHES.\n\nLet me ask Marc another question. What is the tag for 7.0 release?\n--\nTatsuo Ishii\n\n", "msg_date": "Wed, 31 May 2000 10:03:46 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CVS log problem" }, { "msg_contents": "On Wed, 31 May 2000, Tatsuo Ishii wrote:\n\n> > Marc has explained this to me. Seems I have to get a log of all the\n> > changes from the 7.0 release until we created the REL7_0_PATCHES branch,\n> > then get the changes added just to REL7_0_PATCHES.\n> \n> Let me ask Marc another question. What is the tag for 7.0 release?\n\nREL7_0 was the tag, and REL7_0_PATCHES is the branch that was created the\nother day ...\n\n\n", "msg_date": "Tue, 30 May 2000 22:33:16 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CVS log problem" } ]
[ { "msg_contents": "Seeing that the book will be printed in the fall, should I mention the\npostgres -F flag in the book?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 May 2000 14:20:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Mention WAL in the book" } ]
[ { "msg_contents": "\n\n\t>Hi all,\n\n\t>I'm trying to create a type password; the goal is to have a table\nlike:\n\n\t>CREATE TABLE test (\n\t>username varchar,\n\t>pass passwd);\n\n\t>insert into test values ('me','secret');\n\n\t>and have \"secret\" being automagicly crypted.\n\n\t>What I want is to mimic the PASSWORD function of mysql but much\nbetter,\n\t>not having to call a function.\n\n\t>I just can't figure how to write the xx_crypt(opaque) returns\nopaque\n\t>function.\n\n\t>Any help available???\n\n\t>TIA\n\n\tI have a function for crypting passwords in MD5 (a C function). Do you\nneed it?\n\n\tValery Krotenko. \n\n\n", "msg_date": "Tue, 30 May 2000 23:38:11 +0400", "msg_from": "\"Krotenko Valery V.\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: New Type" } ]
[ { "msg_contents": "Currently, pg_passwd allows the creation of secondary password file that\ncan be used as part of 'password' pg_hba.conf entries.\n\nWhy do we bother supporting passwords in pg_shadow and secondary files? \nSeems we could just allow usernames in the secondary files, and use the\nuser passwords from pg_shadow.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 May 2000 15:42:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "secondary password files" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Why do we bother supporting passwords in pg_shadow and secondary files? \n\nSo the same user can have different passwords for different databases.\n\nIt's a pretty crude hack, since there isn't any support for updating\nthe secondary password files except via manual editing done by the\ndbadmin. But I wouldn't be in favor of taking it out until we can\nreplace that functionality elsewhere.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 May 2000 17:40:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: secondary password files " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Why do we bother supporting passwords in pg_shadow and secondary files? \n> \n> So the same user can have different passwords for different databases.\n> \n> It's a pretty crude hack, since there isn't any support for updating\n> the secondary password files except via manual editing done by the\n> dbadmin. 
But I wouldn't be in favor of taking it out until we can\n> replace that functionality elsewhere.\n\nWe have pg_passwd which does allow updating of the files.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 May 2000 17:42:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: secondary password files" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> It's a pretty crude hack, since there isn't any support for updating\n>> the secondary password files except via manual editing done by the\n>> dbadmin. But I wouldn't be in favor of taking it out until we can\n>> replace that functionality elsewhere.\n\n> We have pg_passwd which does allow updating of the files.\n\nSay again? I see a pg_shadow table and a pg_user view of it.\nNo pg_passwd table.\n\nSince pg_shadow can't hold more than one password per user, it's\nfundamentally incapable of supporting this function.\n\nIf we wanted to handle this better, I'd be inclined to remove passwords\nfrom pg_shadow (then the need for a separate pg_user view would go away)\nand make a pg_passwd table holding <username, dbname, password> triples\nwith some provision for an \"any other db\" wildcard. (Not dbname = NULL,\nbecause we'd want to treat <username, dbname> as primary key. Maybe\ndbname = '*' would be OK.) 
There'd need to be two flat files for the\npostmaster to consult, one shadowing each of these tables.\n\nPeter may already have better ideas as part of his protection-system\nrework, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 May 2000 17:59:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: secondary password files " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> It's a pretty crude hack, since there isn't any support for updating\n> >> the secondary password files except via manual editing done by the\n> >> dbadmin. But I wouldn't be in favor of taking it out until we can\n> >> replace that functionality elsewhere.\n> \n> > We have pg_passwd which does allow updating of the files.\n> \n> Say again? I see a pg_shadow table and a pg_user view of it.\n> No pg_passwd table.\n> \n> Since pg_shadow can't hold more than one password per user, it's\n> fundamentally incapable of supporting this function.\n\nThere is a pg_passwd binary in /bin.\n\n> \n> If we wanted to handle this better, I'd be inclined to remove passwords\n> from pg_shadow (then the need for a separate pg_user view would go away)\n> and make a pg_passwd table holding <username, dbname, password> triples\n> with some provision for an \"any other db\" wildcard. (Not dbname = NULL,\n> because we'd want to treat <username, dbname> as primary key. Maybe\n> dbname = '*' would be OK.) There'd need to be two flat files for the\n> postmaster to consult, one shadowing each of these tables.\n\nGood ideas.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 May 2000 19:33:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: secondary password files" } ]
[ { "msg_contents": "Seems we should have pg_hba.conf and other files in a separate\ndirectory, not /data.\n\nComments?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 May 2000 16:13:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "config files in /data" }, { "msg_contents": "On Tue, 30 May 2000, Bruce Momjian wrote:\n\n> Seems we should have pg_hba.conf and other files in a separate\n> directory, not /data.\n> \n> Comments?\n\nI still say that pg_hba.conf should be eliminated altogether, in favor of\nsomething that is modifiable by anyone that the superuser gives permissions\nto create databases too ... what's the point of having that ability if you\ncan't access the database after its created?\n\nSeems kinda anti-friendly to the DBA to be able to have\n\"sub-ordinates\" that can create users and databases, but has to be bug'd\nin order to allow them to be accessed ...\n\npg_hba should become another system table that can be modified with simple\nSQL queries, and is modifiable (readable?) only by those with createdb\nprivileges ...\n\n\n", "msg_date": "Tue, 30 May 2000 18:24:29 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: config files in /data" }, { "msg_contents": "> On Tue, 30 May 2000, Bruce Momjian wrote:\n> \n> > Seems we should have pg_hba.conf and other files in a separate\n> > directory, not /data.\n> > \n> > Comments?\n> \n> I still say that pg_hba.conf should be eliminated altogether, in favor of\n> something that is modifiable by anyone that the superuser gives permissions\n> to create databases too ... 
what's the point of having that ability if you\n> can't access the database after its created?\n> \n> Seems kinda anti-friendly to the DBA to be able to have\n> \"sub-ordinates\" that can create users and databases, but has to be bug'd\n> in order to allow them to be accessed ...\n> \n> pg_hba should become another system table that can be modified with simple\n> SQL queries, and is modifiable (readable?) only by those with createdb\n> privileges ...\n\nAnd have it dump like pg_shadow. Yea, I guess we could do that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 May 2000 17:30:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: config files in /data" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Seems we should have pg_hba.conf and other files in a separate\n> directory, not /data.\n\nWhy?\n\nI don't see enough advantage in it to justify breaking people's\nexisting administrative procedures. You can bet there are folks\nout there whose scripts know where pg_hba.conf lives, for example.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 May 2000 17:42:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: config files in /data " }, { "msg_contents": ">> pg_hba should become another system table that can be modified with simple\n>> SQL queries, and is modifiable (readable?) only by those with createdb\n>> privileges ...\n\n> And have it dump like pg_shadow. Yea, I guess we could do that.\n\nYeah, the postmaster needs to see it as a flat file, but we could have\nan update trigger like for pg_shadow.\n\nI'm not convinced that it's cool to grant read rights on the table even\nto those with createdb privileges. 
(\"Wow, Joe Blow is running his\ndatabase with no connection security...\") If we had a setup such that\none could only see the rows for databases one owns, it'd work. This\ncould be enforced by a view, perhaps, like we do for pg_user.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 May 2000 18:05:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: config files in /data " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Seems we should have pg_hba.conf and other files in a separate\n> > directory, not /data.\n> \n> Why?\n> \n> I don't see enough advantage in it to justify breaking people's\n> existing administrative procedures. You can bet there are folks\n> out there whose scripts know where pg_hba.conf lives, for example.\n> \n\nIn describing how to do a backup, people have to backup\ndata/pg_hba.conf, but none of the others. Seems strange that it is\nmixed in with actual tables.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 May 2000 19:25:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: config files in /data" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Seems we should have pg_hba.conf and other files in a separate\n> directory, not /data.\n> \n> Comments?\n\nconfig or etc, perhaps? 
But, make the location configurable (so that\nthe RPM's can finally do away with config data in /var, and put it in\n/etc/pgsql, where it belongs in the RPM distribution.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 30 May 2000 20:30:52 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: config files in /data" }, { "msg_contents": "Bruce Momjian writes:\n\n> Seems we should have pg_hba.conf and other files in a separate\n> directory, not /data.\n\nI think the internal files, such as pg_control should be in data/base/\ninstead, so as to not mix them up with user-editable files such as\npg_hba.conf, `configuration' (as of 3 min ago), ident, and password files.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 31 May 2000 02:36:36 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: config files in /data" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Bruce Momjian writes:\n>> Seems we should have pg_hba.conf and other files in a separate\n>> directory, not /data.\n\n> I think the internal files, such as pg_control should be in data/base/\n> instead, so as to not mix them up with user-editable files such as\n> pg_hba.conf, `configuration' (as of 3 min ago), ident, and password files.\n\nDoesn't that pose a risk of collision with user-chosen database names?\nRather than restricting database names, we should keep these files in\na different subdirectory. 
I agree that it's a good idea to separate\nthem from the directly-editable config files, just not to put them in\ndata/base/ ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 31 May 2000 13:16:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: config files in /data " }, { "msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > Bruce Momjian writes:\n> >> Seems we should have pg_hba.conf and other files in a separate\n> >> directory, not /data.\n> \n> > I think the internal files, such as pg_control should be in data/base/\n> > instead, so as to not mix them up with user-editable files such as\n> > pg_hba.conf, `configuration' (as of 3 min ago), ident, and password files.\n> \n> Doesn't that pose a risk of collision with user-chosen database names?\n> Rather than restricting database names, we should keep these files in\n> a different subdirectory. I agree that it's a good idea to separate\n> them from the directly-editable config files, just not to put them in\n> data/base/ ...\n\nYes, seems user-editable files should go in pgsql/etc or pgsql/config.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 31 May 2000 13:18:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: config files in /data" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Yes, seems user-editable files should go in pgsql/etc or pgsql/config.\n\nWhat? 
That'd mean you couldn't have different files for different\ninstallations, which'd be a severe handicap (at least for developers\nwho are pretty likely to have multiple installations on one machine).\nPutting the active copies under the data/ directory is good.\n\nOr did you really mean a new subdirectory like data/config/ ?\nI could live with that for new or reformatted config files. As long as\npg_hba.conf (for example) doesn't change meaning/layout I'd rather leave\nit where it is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 31 May 2000 13:23:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: config files in /data " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Yes, seems user-editable files should go in pgsql/etc or pgsql/config.\n> \n> What? That'd mean you couldn't have different files for different\n> installations, which'd be a severe handicap (at least for developers\n> who are pretty likely to have multiple installations on one machine).\n> Putting the active copies under the data/ directory is good.\n\nI didn't think of that. Yes, I can see pgsql/data/config is better.\n\n> \n> Or did you really mean a new subdirectory like data/config/ ?\n> I could live with that for new or reformatted config files. As long as\n> pg_hba.conf (for example) doesn't change meaning/layout I'd rather leave\n> it where it is.\n\nSeems we can just move it. I really don't like people in /data, but\n/data/config is OK. Of course, this is just my opinion. It just scares\nme to have people doing edits in a directory with real tables.\n\nI remember someone deleted pg_log last week because they thought it was\na log file. 
It just seems we have a mess in /data with too many\ndifferent types of files:\n\n\tPG_VERSION            pg_log\n\tbase/                 pg_pwd\n\tpg_control            pg_pwd.reload\n\tpg_database           pg_shadow\n\tpg_geqo.sample        pg_variable\n\tpg_group              pg_xlog/\n\tpg_group_name_index   postmaster.opts\n\tpg_group_sysid_index  postmaster.opts.default\n\tpg_hba.conf           postmaster.pid\n\nI myself am not totally sure of the use of all these.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 31 May 2000 13:46:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: config files in /data" }, { "msg_contents": "Tom Lane writes:\n\n> Rather than restricting database names, we should keep these files in\n> a different subdirectory. I agree that it's a good idea to separate\n> them from the directly-editable config files, just not to put them in\n> data/base/ ...\n\nRight. How about `$PGDATA/internal'? Can't be more obvious. Perhaps with\nthat we could also have initdb clean up a little more respectfully.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 31 May 2000 21:34:44 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: config files in /data " }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Tom Lane writes:\n> \n> > Rather than restricting database names, we should keep these files in\n> > a different subdirectory. I agree that it's a good idea to separate\n> > them from the directly-editable config files, just not to put them in\n> > data/base/ ...\n> \n> Right. How about `$PGDATA/internal'? Can't be more obvious. 
Perhaps with\n> that we could also have initdb clean up a little more respectfully.\n\nAre we talking about moveing pg_log and pg_shadow? Maybe call it\n/global because the tables are global to all databases.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 31 May 2000 15:43:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: config files in /data" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Right. How about `$PGDATA/internal'? Can't be more obvious. Perhaps with\n>> that we could also have initdb clean up a little more respectfully.\n\n> Are we talking about moveing pg_log and pg_shadow? Maybe call it\n> /global because the tables are global to all databases.\n\nWe weren't, but it seems like a good idea now that you mention it.\nSo it sounds like we are converging on:\n\n$PGDATA itself contains only directly-editable config files\n\n$PGDATA/base/ contains database subdirectories (same as now)\n\n$PGDATA/global/ contains installation-wide tables (pg_database,\n\tpg_shadow, their indices, etc)\n\n$PGDATA/internal/ contains anything else that is installation-wide\n\tbut is not a table.\n\nThe distinction between /global and /internal is a little bit artificial\n(which one does pg_log belong in? It's only sort of a table...), so\nmaybe we'd be better off just putting those two together. Don't have\na strong opinion either way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 31 May 2000 19:15:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: config files in /data " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Right. How about `$PGDATA/internal'? Can't be more obvious. 
Perhaps with\n> >> that we could also have initdb clean up a little more respectfully.\n> \n> > Are we talking about moveing pg_log and pg_shadow? Maybe call it\n> > /global because the tables are global to all databases.\n> \n> We weren't, but it seems like a good idea now that you mention it.\n> So it sounds like we are converging on:\n> \n> $PGDATA itself contains only directly-editable config files\n> \n> $PGDATA/base/ contains database subdirectories (same as now)\n> \n> $PGDATA/global/ contains installation-wide tables (pg_database,\n> \tpg_shadow, their indices, etc)\n> \n> $PGDATA/internal/ contains anything else that is installation-wide\n> \tbut is not a table.\n> \n> The distinction between /global and /internal is a little bit artificial\n> (which one does pg_log belong in? It's only sort of a table...), so\n> maybe we'd be better off just putting those two together. Don't have\n> a strong opinion either way.\n\nThis sounds good, and it keeps pg_hba.conf in the same place. Seems\n/internal is just going to be confusing. Not sure.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 31 May 2000 19:21:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: config files in /data" }, { "msg_contents": "Tom Lane writes:\n\n> The distinction between /global and /internal is a little bit artificial\n> (which one does pg_log belong in? It's only sort of a table...),\n\nIs there any reason these special tables are catalogued? 
Also, with the\ncatalog version number, is there any more use for the PG_VERSION file?\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 2 Jun 2000 02:37:06 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: config files in /data " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> The distinction between /global and /internal is a little bit artificial\n>> (which one does pg_log belong in? It's only sort of a table...),\n\n> Is there any reason these special tables are catalogued?\n\nI can't think of a reason to catalog pg_log offhand, but maybe Vadim\nknows one...\n\n> Also, with the catalog version number, is there any more use for the\n> PG_VERSION file?\n\nSure. The catalog number is just for internal purposes; you can't use\nit (easily) to tell which release you have. PG_VERSION is more\nappropriate for user interface purposes. Also, consider pg_upgrade:\nit wouldn't have any simple way of checking for compatibility without\nPG_VERSION.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Jun 2000 20:43:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: config files in /data " }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > Seems we should have pg_hba.conf and other files in a separate\n> > directory, not /data.\n> > \n> > Comments?\n> \n> config or etc, perhaps? But, make the location configurable (so that\n> the RPM's can finally do away with config data in /var, and put it in\n> /etc/pgsql, where it belongs in the RPM distribution.\n\nCurrent idea is to leave pg_hba.conf in /data, and move pg_log and\npg_shadow to a /global directory.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 2 Jun 2000 01:35:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: config files in /data" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> Current idea is to leave pg_hba.conf in /data, and move pg_log and\n> pg_shadow to a /global directory.\n\nIt is a configuration, and separation of data and configuration would\nbe nice.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "02 Jun 2000 09:06:32 -0400", "msg_from": "[email protected] (Trond Eivind=?iso-8859-1?q?_Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: config files in /data" }, { "msg_contents": "Bruce Momjian wrote:\n\n> > config or etc, perhaps? But, make the location configurable (so that\n> > the RPM's can finally do away with config data in /var, and put it in\n> > /etc/pgsql, where it belongs in the RPM distribution.\n \n> Current idea is to leave pg_hba.conf in /data, and move pg_log and\n> pg_shadow to a /global directory.\n\nThat certainly is better than now. I would still like to see the\nlocation configurable in some way. \n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 02 Jun 2000 17:28:18 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: config files in /data" }, { "msg_contents": "Trond Eivind Glomsrød writes:\n\n> It is a configuration, and separation of data and configuration would\n> be nice.\n\nBut then it's just going to be more confusing to tell which configuration\ngoes with which data.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 3 Jun 2000 01:54:39 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: config files in /data" }, { "msg_contents": "Lamar Owen writes:\n\n> config or etc, perhaps? 
But, make the location configurable (so that\n> the RPM's can finally do away with config data in /var, and put it in\n> /etc/pgsql, where it belongs in the RPM distribution.\n\nThe Linux FHS or GNU standards or whatever are really only good for\nprograms that act on one piece of data per system. But you can run many\npostmasters per system so you can't fix the location of the configuration\nfiles independently of that of the data.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 3 Jun 2000 01:56:27 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: config files in /data" }, { "msg_contents": "Peter Eisentraut wrote:\n> Lamar Owen writes:\n> > config or etc, perhaps? But, make the location configurable (so that\n> > the RPM's can finally do away with config data in /var, and put it in\n> > /etc/pgsql, where it belongs in the RPM distribution.\n \n> The Linux FHS or GNU standards or whatever are really only good for\n> programs that act on one piece of data per system. But you can run many\n> postmasters per system so you can't fix the location of the configuration\n> files independently of that of the data.\n\nYes, you can. If it is important enough to do it, I'll build the RPM's\nwith ease of multiple postmasters in mind. 
With that, I can go with a\ntree such as:\n/etc/pgsql/$name_1/*\n/var/lib/pgsql/$name_1/data\n/etc/pgsql/$name_2/*\n/var/lib/pgsql/$name_2/data\netc.\n\nIn /etc/pgsql/$name_[12], I'd have a few files:\nconfiguration\npg_hba.conf\n\nI haven't looked at your new configuration file as yet, but I am very\ninterested.\n\nThen, the initscript would iterate over the /etc/pgsql/$name_\ndirectories, and start a postmaster for each.\n\nThis, incidentally, is planned for 7.1, as I don't want to mess with\npeople's minds too much with too many changes inside the 7.0.x series.\n\nI would also have a couple of RPM-specific commands: createdbhost and\ndropdbhost (or some other, yet-to-be-determined program names) that\nwould create and drop the extra postmaster configs and data dirs.\n\nI could even, for backwards compatibility, support an un-named\npostmaster that would feed from /etc/pgsql and /var/lib/pgsql/data......\n\nAll of which would be FHS-compliant.\n\nThoughts?\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 02 Jun 2000 20:26:55 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: config files in /data" }, { "msg_contents": "Lamar Owen writes:\n\n> /etc/pgsql/$name_1/*\n> /var/lib/pgsql/$name_1/data\n> /etc/pgsql/$name_2/*\n> /var/lib/pgsql/$name_2/data\n> etc.\n\nI can see how that would make sense in some standard layout. But you could\neasily make symlinks from /etc/ to the respective /var. Yes, that may look\nugly to some, but the alternative might look just as ugly to some others.\nI mean users in a manual setup would actually have to go out of their way\nto put the configuration files somewhere else during initdb and then pick\nthem up from there when they start the postmaster. 
If you want to do that\nbehind the scenes in a prepackaged deal, okay, but I don't think exposing\nthe users to these kinds of things will make things easier.\n\n\n> I haven't looked at your new configuration file as yet, but I am very\n> interested.\n\nNew initdb runs now install a sample.\n\n\n> Thoughts?\n\nWell my overall thought is that the FHS and PostgreSQL are not necessarily\na match made in heaven but I understand that you have some implicit\ncommitments so I won't try to make it harder for you. :)\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 4 Jun 2000 03:46:01 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: config files in /data" }, { "msg_contents": "Peter Eisentraut wrote:\n> Lamar Owen writes:\n> > /etc/pgsql/$name_1/*\n> > /var/lib/pgsql/$name_1/data\n> > etc.\n \n> I can see how that would make sense in some standard layout. But you could\n> easily make symlinks from /etc/ to the respective /var. Yes, that may look\n> ugly to some, but the alternative might look just as ugly to some others.\n\nThe symlinks idea is not the greatest, but it _would_ serve in a pinch.\n\n> I mean users in a manual setup would actually have to go out of their way\n> to put the configuration files somewhere else during initdb and then pick\n> them up from there when they start the postmaster. If you want to do that\n> behind the scenes in a prepackaged deal, okay, but I don't think exposing\n> the users to these kinds of things will make things easier.\n\nThat's why a configure option to place configuration files in whatever\nlocation would be nice for those who want to do this sort of thing. 
No\nforcing of anyone to do anything -- for that matter, a RedHat user is\nperfectly free to install the source tarball and make the system look\nhowever -- after all, Thomas does this already (although it is Mandrake\ninstead of RedHat -- cousins, really.).\n\nOf course, I can always patch....not ideal, however.\n\n> > I haven't looked at your new configuration file as yet, but I am very\n> > interested.\n \n> New initdb runs now install a sample.\n\nGood. I'm going to actually try to package the pre-7.1 stuff up in RPM\nform for those most intrepid RedHat users who want to maintain a\nconsistency between versions. We shall see how easily that is\naccomplished in an automated manner.\n \n> > Thoughts?\n \n> Well my overall thought is that the FHS and PostgreSQL are not necessarily\n> a match made in heaven but I understand that you have some implicit\n> commitments so I won't try to make it harder for you. :)\n\nThat is appreciated. The biggest difference between most platforms and\nRedHat relative to PostgreSQL is that PostgreSQL is considered an\nintegral part of RedHat's distribution (which complicates things, as you\nare aware). It is my job to make RedHat and PostgreSQL get along\nsmoothly. Debian is in the same position, as PostgreSQL is also\nintegral to it -- and Oliver does an excellent job shepherding that\nfacet.\n\n(PS: Bruce, don't you think it's about time Peter got mention on the\ndevelopers' globe?)\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 03 Jun 2000 22:03:03 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: config files in /data" }, { "msg_contents": "> > Well my overall thought is that the FHS and PostgreSQL are not necessarily\n> > a match made in heaven but I understand that you have some implicit\n> > commitments so I won't try to make it harder for you. :)\n> \n> That is appreciated. 
The biggest difference between most platforms and\n> RedHat relative to PostgreSQL is that PostgreSQL is considered an\n> integral part of RedHat's distribution (which complicates things, as you\n> are aware). It is my job to make RedHat and PostgreSQL get along\n> smoothly. Debian is in the same position, as PostgreSQL is also\n> integral to it -- and Oliver does an excellent job shepherding that\n> facet.\n> \n> (PS: Bruce, don't you think it's about time Peter got mention on the\n> developers' globe?)\n\nYikes, yes. Let me add him. Vince will have to redo the actual globe. \nVadim is moved now anyway, so it has to be regenerated. Peter, can you\ngive me your geographical location (Major city?).\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 3 Jun 2000 22:35:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: config files in /data" } ]
[ { "msg_contents": "Has anyone tried using crypt authentication with pg_passwd files?\nDoes crypt just use the username in the file, and the password from\npg_shadow? I hope so.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 May 2000 16:24:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Does pg_passwd files work with crypt" } ]
[ { "msg_contents": "The heralded grand unified configuration is now here. In essence things\nshould not behave differently if you don't use the new functionality but\nheavy users of debugging flags and pg_options might observe otherwise. If\nanything's amiss, tell me right away and it shall be fixed. (Sometimes it\nis hard to know what was part of the semi-public interface and what was\nsemi-internal.)\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 31 May 2000 02:33:20 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "New configuration is here" } ]
[ { "msg_contents": "In 7.0 release note, we have:\n\n\tAdd SET FSYNC and SHOW PG_OPTIONS commands(Massimo)\n\nI'm confused by this since we don't seem to have a SET FSYNC command.\n\ntest=# set fsync to on;\nNOTICE: Unrecognized variable fsync\nSET VARIABLE\n\nAm I missing something?\n\nAlso if we have that command, I wonder if it would be safe to issue on\nthe fly.\n\nComments?\n--\nTatsuo Ishii\n\n", "msg_date": "Wed, 31 May 2000 10:03:15 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "SET FSYNC command?" }, { "msg_contents": "> In 7.0 release note, we have:\n> \n> \tAdd SET FSYNC and SHOW PG_OPTIONS commands(Massimo)\n> \n> I'm confused by this since we don't seem to have a SET FSYNC command.\n> \n> test=# set fsync to on;\n> NOTICE: Unrecognized variable fsync\n> SET VARIABLE\n> \n> Am I missing something?\n> \n> Also if we have that command, I wonder if it would be safe to issue on\n> the fly.\n\nGee, I think we removed it because we thought it would be unsafe, but\nthen Tom Lane made it safe, forgot to re-enable it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 May 2000 21:33:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SET FSYNC command?" 
}, { "msg_contents": "Tatsuo Ishii wrote:\n\n> In 7.0 release note, we have:\n>\n> Add SET FSYNC and SHOW PG_OPTIONS commands(Massimo)\n>\n> I'm confused by this since we don't seem to have SET FSYNC command.\n\nIf SHOW PG_OPTIONS; is executed, one of the lines is\n o NOTICE: nofsync=0\nnow, SET PG_OPTIONS TO 'nofsync=1';\nSHOW PG_OPTIONS;\n o NOTICE: nofsync=1\n\nDoes this actually work?\n\nRegards\nGrant\n\n--\n> Poorly planned software requires a genius to write it\n> and a hero to use it.\n\nGrant Finnemore BSc(Eng) (mailto:[email protected])\nSoftware Engineer Universal Computer Services\nTel (+27)(11)712-1366 PO Box 31266 Braamfontein 2017, South Africa\nCell (+27)(82)604-5536 20th Floor, 209 Smit St., Braamfontein\nFax (+27)(11)339-3421 Johannesburg, South Africa\n\n\n\n", "msg_date": "Wed, 31 May 2000 07:12:04 +0200", "msg_from": "Grant Finnemore <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SET FSYNC command?" }, { "msg_contents": "Grant Finnemore <[email protected]> writes:\n>> Add SET FSYNC and SHOW PG_OPTIONS commands(Massimo)\n>> \n>> I'm confused by this since we don't seem to have SET FSYNC command.\n\n> If SHOW PG_OPTIONS; is executed, one of the lines is\n> o NOTICE: nofsync=0\n> now, SET PG_OPTIONS TO 'nofsync=1';\n> SHOW PG_OPTIONS;\n> o NOTICE: nofsync=1\n\n> Does this actually work?\n\nIt does. The syntax is unnecessarily obscure and arse-backwards :-(.\nThe only defense I can offer is that we had every intention of ripping\nthe variable out completely, until a nearly-last-minute revision of the\nbuffer manager (to fix a different bug report) that had as a side-effect\nmaking it safe and reasonable to run different backends with different\nfsync settings. 
So the variable got left in as it stood.\n\nOnce the WAL revisions are done the whole issue will go away anyway,\nso it's probably not worth spending any time fixing the bizarre user\ninterface for this preference setting...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 31 May 2000 02:14:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SET FSYNC command? " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> Also if we have that command, I wonder if it would be safe to issue on\n> the fly.\n\nI believe it is safe now, following the changes I made last month\nto the buffer-sync algorithm. It surely was not safe before to run\nwith different fsync settings in different backends.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 31 May 2000 02:20:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SET FSYNC command? " }, { "msg_contents": "On Wed, 31 May 2000, Tom Lane wrote:\n\n> > If SHOW PG_OPTIONS; is executed, one of the lines is\n> > o NOTICE: nofsync=0\n> > now, SET PG_OPTIONS TO 'nofsync=1';\n> > SHOW PG_OPTIONS;\n> > o NOTICE: nofsync=1\n\n> The syntax is unnecessarily obscure and arse-backwards :-(.\n\nNo kidding.\n\nActually, since yesterday you can disable fsync in a multitude of ways:\n\npostmaster -F\npostmaster -o '-F'\npostmaster --enable-fsync=off\nSET ENABLE_FSYNC TO OFF;\n\nbut feel free to not care because after all ...\n\n> Once the WAL revisions are done the whole issue will go away anyway,\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 31 May 2000 12:43:54 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SET FSYNC command? " }, { "msg_contents": "> > The syntax is unnecessarily obscure and arse-backwards :-(.\n> SET ENABLE_FSYNC TO OFF;\n\nHi Peter. 
I was noticing that several of the SET keywords have redundant\nelements. In this case, istm that\n\n SET FSYNC=ON;\n\nwould be adequate and preferred; the \"ENABLE\" is implied by the \"ON\" in\nthe last example. I realize that you are using a keyword style already\npresent, but perhaps for 7.1 we can shorten all of those keywords having\n\"ENABLE_\"? \n\nComments?\n\n - Thomas\n", "msg_date": "Wed, 31 May 2000 13:47:45 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SET FSYNC command?" }, { "msg_contents": "Thomas Lockhart writes:\n\n> In this case, istm that\n> \n> SET FSYNC=ON;\n> \n> would be adequate and preferred; the \"ENABLE\" is implied by the \"ON\" in\n\nActually I just noticed that it actually is that way for FSYNC, but I\nagree totally.\n\n> the last example. I realize that you are using a keyword style already\n> present, but perhaps for 7.1 we can shorten all of those keywords having\n> \"ENABLE_\"? \n\nYou know me, I would change everything if you guys wouldn't hold me\nback. :)\n\nI could certainly agree to this.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 31 May 2000 20:44:03 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SET FSYNC command?" } ]
[ { "msg_contents": "> > Well, of course the whole *point* of LIMIT is that it stops short of\n> > scanning the whole query result. So I'm afraid you're kind of stuck\n> > as far as the performance goes: you can't get a count() answer without\n> > scanning the whole query.\n\nRight, that's what I thought.\n\n> > I'm a little curious though: what is the typical count() result from\n> > your queries? The EXPLAIN outputs you show indicate that the planner\n> > is only expecting about one row out now, but I have no idea how close\n> > that is to the mark. If it were really right, then there'd be no\n> > difference in the performance of LIMIT and full queries, so I guess\n> > it's not right; but how far off is it?\n\nWell, count does always return 1 row, though what's in that one row is as\nvarying as 0 to the number of records in the applicants database (about\n11,000)..\n\nAnyway, I thank you and appreciate your input..\n\n-Mitch\n\n\n\n", "msg_date": "Tue, 30 May 2000 21:53:01 -0400", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Full text indexing preformance! (long) " } ]
[ { "msg_contents": "\nAnnounce: Release of PyGreSQL version 3.0\n===============================================\n\nPyGreSQL v3.0 has been released.\nIt is available at: ftp://ftp.druid.net/pub/distrib/PyGreSQL.tgz. If\nyou are running NetBSD, look in the packages directory under databases.\nThere is also a package in the FreeBSD ports collection. \n\nPostgreSQL is a database system derived from Postgres4.2. It conforms\nto (most of) ANSI SQL and offers many interesting capabilities (C\ndynamic linking for functions or type definition, etc.). This package\nis copyright by the Regents of the University of California, and is\nfreely distributable.\n\nPython is an interpreted programming language. It is object oriented,\nsimple to use (light syntax, simple and straightforward statements), and\nhas many extensions for building GUIs, interfacing with WWW, etc. An\nintelligent web browser (HotJava like) is currently under development\n(November 1995), and this should open programmers many doors. Python is\ncopyrighted by Stichting S Mathematisch Centrum, Amsterdam, The\nNetherlands, and is freely distributable.\n\nPyGreSQL is a python module that interfaces to a PostgreSQL database. It\nembeds the PostgreSQL query library to allow easy use of the powerful\nPostgreSQL features from a Python script.\n\nThis release of PyGreSQL is the first DB-SIG API. That's why we have\na bump in the major number. There is also a potential problem in\nbackwards compatibility. Previously when there was a NULL in a returned\nfield it was returned as a blank. Now it is more properly returned as\na Python None. Any scripts that expect NULLs to be blanks will have\nproblems with this.\n\nDue to the fact that the DB-API is brand new, it is expected that there\nwill be a 3.1 release shortly with corrections once many people have\nhad a chance to test it.\n\nSee the other changes below or in the Changelog file.\n\nPyGreSQL 2.0 was developed and tested on a NetBSD 1.3_BETA system. 
It\nis based on the PyGres95 code written by Pascal Andre,\[email protected]. I changed the version to 2.0 and updated the\ncode for Python 1.5 and PostgreSQL 6.2.1. While I was at it I upgraded\nthe code to use full ANSI style prototypes and changed the order of\narguments to connect. Later versions are fixes and enhancements to that.\nThe latest version of PyGreSQL works with Python 1.5.2 and PostgreSQL 6.5.\n\nImportant changes from PyGreSQL 2.4 to PyGreSQL 3.0:\n - Remove strlen() call from pglarge_write() and get size from object.\n ([email protected])\n - Add a little more error checking to the quote function in the wrapper\n - Add extra checking in _quote function\n - Wrap query in pg.py for debugging\n - Add DB-API 2.0 support to pgmodule.c ([email protected])\n - Add DB-API 2.0 wrapper pgdb.py ([email protected])\n - Correct keyword clash (temp) in tutorial\n - Clean up layout of tutorial\n - Return NULL values as None ([email protected]) (WARNING: This\n will cause backwards compatibility issues.)\n - Change None to NULL in insert and update\n - Change hash-bang lines to use /usr/bin/env\n - Clearing date should be blank (NULL) not TODAY\n - Quote backslashes in strings in _quote ([email protected])\n - Expanded and clarified build instructions ([email protected])\n - Make code thread safe ([email protected])\n - Add README.distutils ([email protected] & [email protected])\n - Many fixes by [email protected], [email protected], [email protected]\n and others to get the final version ready to release.\n\nImportant changes from PyGreSQL 2.3 to PyGreSQL 2.4:\n - Insert returns None if the user doesn't have select permissions\n on the table. 
It can (and does) happen that one has insert but\n not select permissions on a table.\n - Added ntuples() method to query object ([email protected])\n - Corrected a bug related to getresult() and the money type\n - Corrected a bug related to negative money amounts\n - Allow update based on primary key if munged oid not available and\n table has a primary key\n - Add many __doc__ strings. ([email protected])\n - Get method works with views if key specified\n\nImportant changes from PyGreSQL 2.2 to PyGreSQL 2.3:\n - connect.host returns \"localhost\" when connected to Unix socket\n ([email protected])\n - Use PyArg_ParseTupleAndKeywords in connect() ([email protected])\n - fixes and cleanups ([email protected])\n - Fixed memory leak in dictresult() ([email protected])\n - Deprecated pgext.py - functionality now in pg.py\n - More cleanups to the tutorial\n - Added fileno() method - [email protected] (Mikhail Terekhov)\n - added money type to quoting function\n - Compiles cleanly with more warnings turned on\n - Returns PostgreSQL error message on error\n - Init accepts keywords (Jarkko Torppa)\n - Convenience functions can be overridden (Jarkko Torppa)\n - added close() method\n\nImportant changes from PyGreSQL 2.1 to PyGreSQL 2.2:\n - Added user and password support thanks to Ng Pheng Siong <[email protected]>\n - Insert queries return the inserted oid\n - Add new pg wrapper (C module renamed to _pg)\n - Wrapped database connection in a class.\n - Cleaned up some of the tutorial. (More work needed.)\n - Added version and __version__. 
Thanks to [email protected] for\n the suggestion.\n\nImportant changes from PyGreSQL 2.0 to PyGreSQL 2.1:\n - return fields as proper Python objects for field type\n - Cleaned up pgext.py\n - Added dictresult method\n\nImportant changes from Pygres95 1.0b to PyGreSQL 2.0:\n - Updated code for PostgreSQL 6.2.1 and Python 1.5.\n - Reformatted code and converted to ANSI .\n - Changed name to PyGreSQL (from PyGres95.)\n - Changed order of arguments to connect function.\n - Created new type pgqueryobject and moved certain methods to it.\n - Added a print function for pgqueryobject\n - Various code changes - mostly stylistic.\n\nFor more information about each package, please have a look to their\nweb pages:\n - Python : http://www.python.org/\n - PostgreSQL : http://www.PostgreSQL.org/\n - PyGreSQL : http://www.druid.net/pygresql/\n\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 30 May 2000 22:23:27 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Announce: Release of PyGreSQL version 3.0" }, { "msg_contents": "I have installed this in the current source tree, ready for 7.1.\n\nI have installed > \n> Announce: Release of PyGreSQL version 3.0\n> ===============================================\n> \n> PyGreSQL v3.0 has been released.\n> It is available at: ftp://ftp.druid.net/pub/distrib/PyGreSQL.tgz. If\n> you are running NetBSD, look in the packages directory under databases.\n> There is also a package in the FreeBSD ports collection. \n> \n> PostgreSQL is a database system derived from Postgres4.2. It conforms\n> to (most of) ANSI SQL and offers many interesting capabilities (C\n> dynamic linking for functions or type definition, etc.). 
This package\n> is copyright by the Regents of the University of California, and is\n> freely distributable.\n> \n> Python is an interpreted programming language. It is object oriented,\n> simple to use (light syntax, simple and straightforward statements), and\n> has many extensions for building GUIs, interfacing with WWW, etc. An\n> intelligent web browser (HotJava like) is currently under development\n> (November 1995), and this should open programmers many doors. Python is\n> copyrighted by Stichting S Mathematisch Centrum, Amsterdam, The\n> Netherlands, and is freely distributable.\n> \n> PyGreSQL is a python module that interfaces to a PostgreSQL database. It\n> embeds the PostgreSQL query library to allow easy use of the powerful\n> PostgreSQL features from a Python script.\n> \n> This release of PyGreSQL is the first DB-SIG API. That's why we have\n> a bump in the major number. There is also a potential problem in\n> backwards compatibility. Previously when there was a NULL in a returned\n> field it was returned as a blank. Now it is more properly returned as\n> a Python None. Any scripts that expect NULLs to be blanks will have\n> problems with this.\n> \n> Due to the fact that the DB-API is brand new, it is expected that there\n> will be a 3.1 release shortly with corrections once many people have\n> had a chance to test it.\n> \n> See the other changes below or in the Changelog file.\n> \n> PyGreSQL 2.0 was developed and tested on a NetBSD 1.3_BETA system. It\n> is based on the PyGres95 code written by Pascal Andre,\n> [email protected]. I changed the version to 2.0 and updated the\n> code for Python 1.5 and PostgreSQL 6.2.1. While I was at it I upgraded\n> the code to use full ANSI style prototypes and changed the order of\n> arguments to connect. 
Later versions are fixes and enhancements to that.\n> The latest version of PyGreSQL works with Python 1.5.2 and PostgreSQL 6.5.\n> \n> Important changes from PyGreSQL 2.4 to PyGreSQL 3.0:\n> - Remove strlen() call from pglarge_write() and get size from object.\n> ([email protected])\n> - Add a little more error checking to the quote function in the wrapper\n> - Add extra checking in _quote function\n> - Wrap query in pg.py for debugging\n> - Add DB-API 2.0 support to pgmodule.c ([email protected])\n> - Add DB-API 2.0 wrapper pgdb.py ([email protected])\n> - Correct keyword clash (temp) in tutorial\n> - Clean up layout of tutorial\n> - Return NULL values as None ([email protected]) (WARNING: This\n> will cause backwards compatibility issues.)\n> - Change None to NULL in insert and update\n> - Change hash-bang lines to use /usr/bin/env\n> - Clearing date should be blank (NULL) not TODAY\n> - Quote backslashes in strings in _quote ([email protected])\n> - Expanded and clarified build instructions ([email protected])\n> - Make code thread safe ([email protected])\n> - Add README.distutils ([email protected] & [email protected])\n> - Many fixes by [email protected], [email protected], [email protected]\n> and others to get the final version ready to release.\n> \n> Important changes from PyGreSQL 2.3 to PyGreSQL 2.4:\n> - Insert returns None if the user doesn't have select permissions\n> on the table. It can (and does) happen that one has insert but\n> not select permissions on a table.\n> - Added ntuples() method to query object ([email protected])\n> - Corrected a bug related to getresult() and the money type\n> - Corrected a bug related to negative money amounts\n> - Allow update based on primary key if munged oid not available and\n> table has a primary key\n> - Add many __doc__ strings. 
([email protected])\n> - Get method works with views if key specified\n> \n> Important changes from PyGreSQL 2.2 to PyGreSQL 2.3:\n> - connect.host returns \"localhost\" when connected to Unix socket\n> ([email protected])\n> - Use PyArg_ParseTupleAndKeywords in connect() ([email protected])\n> - fixes and cleanups ([email protected])\n> - Fixed memory leak in dictresult() ([email protected])\n> - Deprecated pgext.py - functionality now in pg.py\n> - More cleanups to the tutorial\n> - Added fileno() method - [email protected] (Mikhail Terekhov)\n> - added money type to quoting function\n> - Compiles cleanly with more warnings turned on\n> - Returns PostgreSQL error message on error\n> - Init accepts keywords (Jarkko Torppa)\n> - Convenience functions can be overridden (Jarkko Torppa)\n> - added close() method\n> \n> Important changes from PyGreSQL 2.1 to PyGreSQL 2.2:\n> - Added user and password support thanks to Ng Pheng Siong <[email protected]>\n> - Insert queries return the inserted oid\n> - Add new pg wrapper (C module renamed to _pg)\n> - Wrapped database connection in a class.\n> - Cleaned up some of the tutorial. (More work needed.)\n> - Added version and __version__. 
Thanks to [email protected] for\n> the suggestion.\n> \n> Important changes from PyGreSQL 2.0 to PyGreSQL 2.1:\n> - return fields as proper Python objects for field type\n> - Cleaned up pgext.py\n> - Added dictresult method\n> \n> Important changes from Pygres95 1.0b to PyGreSQL 2.0:\n> - Updated code for PostgreSQL 6.2.1 and Python 1.5.\n> - Reformatted code and converted to ANSI .\n> - Changed name to PyGreSQL (from PyGres95.)\n> - Changed order of arguments to connect function.\n> - Created new type pgqueryobject and moved certain methods to it.\n> - Added a print function for pgqueryobject\n> - Various code changes - mostly stylistic.\n> \n> For more information about each package, please have a look to their\n> web pages:\n> - Python : http://www.python.org/\n> - PostgreSQL : http://www.PostgreSQL.org/\n> - PyGreSQL : http://www.druid.net/pygresql/\n> \n> \n> -- \n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 1 Oct 2000 23:25:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ANNOUNCE] Announce: Release of PyGreSQL version 3.0" }, { "msg_contents": "Hi!\nWhere can I find the documentation of Postgres's compiler and optimizer?\nwho has anything about this subject?\nRicardo\[email protected]\n\n", "msg_date": "Thu, 5 Oct 2000 08:50:03 +0500 (GMT)", "msg_from": "Ricardo Timaran <[email protected]>", "msg_from_op": false, "msg_subject": "Documentation about compiler Postgres" }, { "msg_contents": "Thus spake Bruce Momjian\n> I have installed this in the current source tree, ready for 7.1.\n> \n> I have installed > \n> > Announce: Release of PyGreSQL version 3.0\n\nWhen is 7.1 being locked down? I may be releasing 3.1 with a few small\nfixes and changes very soon.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 5 Oct 2000 08:13:17 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Re: [ANNOUNCE] Announce: Release of PyGreSQL version 3.0" }, { "msg_contents": "[email protected] (D'Arcy J.M. Cain) writes:\n> When is 7.1 being locked down? I may be releasing 3.1 with a few small\n> fixes and changes very soon.\n\nYou've probably got about 2 weeks before beta starts. 
Bug fixes are\naccepted during beta freeze, of course --- just no new-feature\ndevelopment.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Oct 2000 09:59:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [ANNOUNCE] Announce: Release of PyGreSQL version 3.0 " }, { "msg_contents": "November 1.\n\n> Thus spake Bruce Momjian\n> > I have installed this in the current source tree, ready for 7.1.\n> > \n> > I have installed > \n> > > Announce: Release of PyGreSQL version 3.0\n> \n> When is 7.1 being locked down? I may be releasing 3.1 with a few small\n> fixes and changes very soon.\n> \n> -- \n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 5 Oct 2000 10:28:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [ANNOUNCE] Announce: Release of PyGreSQL version 3.0" }, { "msg_contents": "On Thu, 5 Oct 2000, Tom Lane wrote:\n\n> [email protected] (D'Arcy J.M. Cain) writes:\n> > When is 7.1 being locked down? I may be releasing 3.1 with a few small\n> > fixes and changes very soon.\n> \n> You've probably got about 2 weeks before beta starts. Bug fixes are\n> accepted during beta freeze, of course --- just no new-feature\n> development.\n\nhow are we dealing with third party software like this though? Stuff like\nPyGreSQL and PGAccess should be \"at the authors discretion\", no? 
As they\ndon't interfere with the core functionality and build of the system?\n\n\n", "msg_date": "Sat, 7 Oct 2000 21:13:47 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [ANNOUNCE] Announce: Release of PyGreSQL version\n 3.0" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Thu, 5 Oct 2000, Tom Lane wrote:\n>> [email protected] (D'Arcy J.M. Cain) writes:\n>>>> When is 7.1 being locked down? I may be releasing 3.1 with a few small\n>>>> fixes and changes very soon.\n>> \n>> You've probably got about 2 weeks before beta starts. Bug fixes are\n>> accepted during beta freeze, of course --- just no new-feature\n>> development.\n\n> how are we dealing with third party software like this though? Stuff like\n> PyGreSQL and PGAccess should be \"at the authors discretion\", no? As they\n> don't interfere with the core functionality and build of the system?\n\nWell, a third party author always has the option to release his code\nseparately on whatever timeline seems good to him. But I think that for\nthird-party code included in the distribution, the same standards ought\nto apply as for the Postgres code itself: we don't want people sticking\nalpha-quality code into a Postgres release tarball, whether it's core\nfunctionality or not. It's not as if \"no new features for a month\" is\na particularly onerous standard to meet ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 07 Oct 2000 23:38:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [ANNOUNCE] Announce: Release of PyGreSQL version 3.0 " }, { "msg_contents": "On Sat, 7 Oct 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > On Thu, 5 Oct 2000, Tom Lane wrote:\n> >> [email protected] (D'Arcy J.M. Cain) writes:\n> >>>> When is 7.1 being locked down? 
I may be releasing 3.1 with a few small\n> >>>> fixes and changes very soon.\n> >> \n> >> You've probably got about 2 weeks before beta starts. Bug fixes are\n> >> accepted during beta freeze, of course --- just no new-feature\n> >> development.\n> \n> > how are we dealing with third party software like this though? Stuff like\n> > PyGreSQL and PGAccess should be \"at the authors discretion\", no? As they\n> > don't interfere with the core functionality and build of the system?\n> \n> Well, a third party author always has the option to release his code\n> separately on whatever timeline seems good to him. But I think that for\n> third-party code included in the distribution, the same standards ought\n> to apply as for the Postgres code itself: we don't want people sticking\n> alpha-quality code into a Postgres release tarball, whether it's core\n> functionality or not. It's not as if \"no new features for a month\" is\n> a particularly onerous standard to meet ;-)\n\nagreed about the alpha quality, but if someone like D'Arcy or Constantin\nreleases a new version of their code, is there any reason to hold off on\nbringing that in? Haven't we done that in the past with pgaccess as it\nis? I seem to recall Bruce bringing in a new release of PgAccess close to\nthe release, but am not 100% certain of this ...\n\n\n", "msg_date": "Sat, 7 Oct 2000 23:55:09 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [ANNOUNCE] Announce: Release of PyGreSQL version\n 3.0" }, { "msg_contents": "> > Well, a third party author always has the option to release his code\n> > separately on whatever timeline seems good to him. But I think that for\n> > third-party code included in the distribution, the same standards ought\n> > to apply as for the Postgres code itself: we don't want people sticking\n> > alpha-quality code into a Postgres release tarball, whether it's core\n> > functionality or not. 
It's not as if \"no new features for a month\" is\n> > a particularly onerous standard to meet ;-)\n> \n> agreed about the alpha quality, but if someone like D'Arcy or Constantin\n> releases a new version of their code, is there any reason to hold off on\n> bringing that in? Haven't we done that in the past with pgaccess as it\n> is? I seem to recall Bruce bringing in a new release of PgAccess close to\n> the release, but am not 100% certain of this ...\n\nYes, that is true. I don't have a problem with 3rd party stuff that\ndoes not install by default.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 8 Oct 2000 00:02:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [ANNOUNCE] Announce: Release of PyGreSQL version 3.0" }, { "msg_contents": "On Thu, 5 Oct 2000, D'Arcy J.M. Cain wrote:\n\n> Thus spake Bruce Momjian\n> > I have installed this in the current source tree, ready for 7.1.\n> > \n> > I have installed > \n> > > Announce: Release of PyGreSQL version 3.0\n> \n> When is 7.1 being locked down? I may be releasing 3.1 with a few small\n> fixes and changes very soon.\n\nyou should be safe until mid-december or so ... as PyGreSQL doesn't affect\nthe build of PostgreSQL in anyway that I'm aware of, there is no reason\nwhy you should be constrained to our beta cycle ... \n\n\n", "msg_date": "Mon, 9 Oct 2000 20:14:29 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Re: [ANNOUNCE] Announce: Release of PyGreSQL version\n 3.0" }, { "msg_contents": "> Thus spake Bruce Momjian\n> > I have installed this in the current source tree, ready for 7.1.\n> > \n> > I have installed > \n> > > Announce: Release of PyGreSQL version 3.0\n> \n> When is 7.1 being locked down? 
I may be releasing 3.1 with a few small\n> fixes and changes very soon.\n\nYou have until November 1, at least.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 9 Oct 2000 19:16:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Re: [ANNOUNCE] Announce: Release of PyGreSQL version\n 3.0" }, { "msg_contents": "The Hermit Hacker writes:\n\n> > > > Announce: Release of PyGreSQL version 3.0\n> > \n> > When is 7.1 being locked down? I may be releasing 3.1 with a few small\n> > fixes and changes very soon.\n> \n> you should be safe until mid-december or so ... as PyGreSQL doesn't affect\n> the build of PostgreSQL in anyway that I'm aware of, there is no reason\n> why you should be constrained to our beta cycle ... \n\nIf you configure --with-python, then it will, so it would be advantageous\nto get it in with/before beta.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 10 Oct 2000 17:52:01 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Re: [ANNOUNCE] Announce: Release of PyGreSQL version\n 3.0" }, { "msg_contents": "On Thu, 5 Oct 2000, Ricardo Timaran wrote:\n\n> Hi!\n> Where can I find the documentation of Postgres's compiler and optimizer?\n> who has anything about this subject?\n> Ricardo\n> [email protected]\n> \n> \n\n", "msg_date": "Wed, 18 Oct 2000 16:36:50 +0500 (GMT)", "msg_from": "Timaran Ricardo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation about compiler Postgres" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Hiroshi Inoue [mailto:[email protected]]\n> > -----Original Message-----\n> > From: [email protected] [mailto:[email protected]]On\n> > Behalf Of Andrew McMillan\n> >\n> > Hi,\n> >\n> > I'm trying to convert an application from MS SQL / ASP / IIS to\n> > PostgreSQL / PHP / Apache. I am having trouble getting efficient\n> > queries on one of my main tables, which tends to have some fairly large\n> > records in it. Currently there are around 20000 records, and it looks\n> > like they average around 500 bytes from the VACUUM ANALYZE statistics\n> > below.\n> >\n> > I don't really want any query on this table to return more than about 20\n> > records, so it seems to me that indexed access should be the answer, but\n> > I am having some problems with indexes containing BOOLEAN types.\n> >\n> > I can't see any reason why BOOL shouldn't work in an index, and in other\n> > systems I have commonly used them as the first component of an index,\n> > which is what I want to do here.\n> >\n> > Also, I can't see why the estimator should see a difference between\n> > \"WHERE head1\" and \"WHERE head1=TRUE\".\n> >\n> >\n> > newsroom=# explain SELECT DISTINCT story.story_id, written, released,\n> > title, precis, author, head1 FROM story WHERE head1 ORDER BY written\n>\n> Please add head1 to ORDER BY clause i.e. ORDER BY head1,written.\n>\n\nSorry,it wouldn't help unless there's an index e.g. 
on (head1,written,\nstory_id, released, title, precis, author).\nHowever isn't (story_id) a primary key ?\nIf so,couldn't you change your query as follows ?\n\nSELECT story.story_id, written, released, title, precis, author, head1\nFROM story WHERE head1=TRUE ORDER BY head1, written DESC\nLIMIT 15.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Wed, 31 May 2000 12:04:41 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Using BOOL in indexes" }, { "msg_contents": "Hiroshi Inoue wrote:\n> Hiroshi Inoue wrote:\n> > Andrew McMillan wrote:\n> > >\n> > > Hi,\n> > >\n> > > I'm trying to convert an application from MS SQL / ASP / IIS to\n> > > PostgreSQL / PHP / Apache. I am having trouble getting efficient\n> > > queries on one of my main tables, which tends to have some fairly large\n> > > records in it. Currently there are around 20000 records, and it looks\n> > > like they average around 500 bytes from the VACUUM ANALYZE statistics\n> > > below.\n> > >\n> > > I don't really want any query on this table to return more than about 20\n> > > records, so it seems to me that indexed access should be the answer, but\n> > > I am having some problems with indexes containing BOOLEAN types.\n> > >\n> > > I can't see any reason why BOOL shouldn't work in an index, and in other\n> > > systems I have commonly used them as the first component of an index,\n> > > which is what I want to do here.\n> > >\n> > > Also, I can't see why the estimator should see a difference between\n> > > \"WHERE head1\" and \"WHERE head1=TRUE\".\n> > >\n> > >\n> > > newsroom=# explain SELECT DISTINCT story.story_id, written, released,\n> > > title, precis, author, head1 FROM story WHERE head1 ORDER BY written\n> >\n> > Please add head1 to ORDER BY clause i.e. ORDER BY head1,written.\n> >\n> \n> Sorry,it wouldn't help unless there's an index e.g. 
on (head1,written,\n> story_id, released, title, precis, author).\n> However isn't (story_id) a primary key ?\n> If so,couldn't you change your query as follows ?\n> \n> SELECT story.story_id, written, released, title, precis, author, head1\n> FROM story WHERE head1=TRUE ORDER BY head1, written DESC\n> LIMIT 15.\n\nThanks Hiroshi,\n\nI already have such an index, but as you can see below, it is still not\nused:\n\nnewsroom=# explain SELECT story.story_id, written, released, title,\nprecis, author, head1 FROM story WHERE head1=TRUE ORDER BY head1,\nwritten DESC LIMIT 15;\nNOTICE: QUERY PLAN:\n\nSort (cost=2669.76..2669.76 rows=14007 width=49)\n -> Seq Scan on story (cost=0.00..1467.46 rows=14007 width=49)\n\nEXPLAIN\nnewsroom=# \\d story_sk4\n Index \"story_sk4\"\n Attribute | Type \n-----------+-----------\n head1 | boolean\n written | timestamp\nbtree\n\nRegards,\n\t\t\t\t\tAndrew.\n\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: [email protected]\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n", "msg_date": "Wed, 31 May 2000 16:01:15 +1200", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using BOOL in indexes" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> \n> Hiroshi Inoue wrote:\n> > Hiroshi Inoue wrote:\n> > > Andrew McMillan wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > \n> > Sorry,it wouldn't help unless there's an index e.g. 
on (head1,written,\n> > story_id, released, title, precis, author).\n> > However isn't (story_id) a primary key ?\n> > If so,couldn't you change your query as follows ?\n> > \n> > SELECT story.story_id, written, released, title, precis, author, head1\n> > FROM story WHERE head1=TRUE ORDER BY head1, written DESC\n> > LIMIT 15.\n> \n> Thanks Hiroshi,\n> \n> I already have such an index, but as you can see below, it is still not\n> used:\n> \n> newsroom=# explain SELECT story.story_id, written, released, title,\n> precis, author, head1 FROM story WHERE head1=TRUE ORDER BY head1,\n> written DESC LIMIT 15;\n\nOops,please add DESC to head1 also i.e ORDER BY head1 DESC,written DESC.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Wed, 31 May 2000 13:21:32 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Using BOOL in indexes" } ]
[ { "msg_contents": "\n> Hmmm - thinking about that it doesn't sound bad if we allways\n> create a secondary relation at CREATE TABLE time, but NOT the\n> index for it. And at VACUUM time we create the index if it\n> doesn't exist AND there is external stored data.\n\nSeems we are trying to reduce the dependency on vacuum in other\nareas (e.g. overwriting smgr). \nI would prefer explicit syntax to enable toast (create table and alter\ntable).\n\nAndreas\n", "msg_date": "Wed, 31 May 2000 09:28:51 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Applying TOAST to CURRENT" } ]
[ { "msg_contents": "this is a pretty minor thing, but i just dropped a table that is acted\non by a rule declared as part of another table and i'm wondering if this\nis the expected (or more importantly the desired) behavior. basically\nthe rule deletes all of the rows of table two with same id as was\ndeleted in table one. when i drop table two, it lets me do it without\nany notice of there being a rule that affects this table & then when i\ntry to do a delete on table one, it gives me an error. i'm not sure how\nother databases handle this, but it seems to me that i should have at\nleast been warned that there is a dependency there when i dropped the\ntable if not being disallowed from dropping the table altogether until i\ndrop the rule. \n\njeff\n", "msg_date": "Wed, 31 May 2000 14:47:37 -0500", "msg_from": "Jeff Hoffmann <[email protected]>", "msg_from_op": true, "msg_subject": "bug or feature? (regarding rules)" } ]
[ { "msg_contents": "OK, I figured out how to get a log of all changes from 7.0 to the branch\nsplit:\n\n\tcvs log -r REL7_0 -r REL7_0_PATCHES . \n\nNow, I need a list of log entries just in the REL7_0_PATCHES branch made\nafter the branch was split. I tried:\n\n\tcvs log -d '>2000-05-08 00:00:00 GMT' -r REL7_0_PATCHES .\n\nbut I got entries like:\n\t\n\tdate: 2000/05/29 05:44:24; author: tgl; state: Exp; lines: +3 -27\n\tGenerated header files parse.h and fmgroids.h are now copied into the\n\tsrc/include tree, so that -I backend is no longer necessary anywhere.\n\tAlso, clean up some bit rot in contrib tree.\n\nwhich is only in the current tree. Any ideas, folks? I feel like a\npretty big idiot here that I can not figure out how to do it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 31 May 2000 16:46:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "more cvs problems" } ]
[ { "msg_contents": "It seems that access methods nominally have an \"owner\", but that owner is\nnowhere else referenced. Since there is no user interface for adding\naccess methods anyway, would there be any problems with removing that\nfield?\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 1 Jun 2000 01:51:42 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "pg_am.amowner" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> It seems that access methods nominally have an \"owner\", but that owner is\n> nowhere else referenced. Since there is no user interface for adding\n> access methods anyway, would there be any problems with removing that\n> field?\n\nI can't think of a reason not to remove it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 31 May 2000 20:01:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_am.amowner" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> It seems that access methods nominally have an \"owner\", but that owner is\n> nowhere else referenced. Since there is no user interface for adding\n> access methods anyway, would there be any problems with removing that\n> field?\n\nHmm ... offhand I'm having a hard time seeing that it would make sense\nto associate protection checks with an access method. The only use\nI can see for the owner field is to control who could delete an access\nmethod --- and I don't have much problem with saying \"only the\nsuperuser\". It's even harder to believe that we'd really want non-\nsuperusers installing access methods.\n\nBut the other side of the coin is what harm is it doing? 
Surely you're\nnot worried about the space occupied by the column ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 31 May 2000 22:26:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_am.amowner " }, { "msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > It seems that access methods nominally have an \"owner\", but that owner is\n> > nowhere else referenced. Since there is no user interface for adding\n> > access methods anyway, would there be any problems with removing that\n> > field?\n> \n> Hmm ... offhand I'm having a hard time seeing that it would make sense\n> to associate protection checks with an access method. The only use\n> I can see for the owner field is to control who could delete an access\n> method --- and I don't have much problem with saying \"only the\n> superuser\". It's even harder to believe that we'd really want non-\n> superusers installing access methods.\n> \n> But the other side of the coin is what harm is it doing? Surely you're\n> not worried about the space occupied by the column ;-)\n\nSeems our system catalogs are confusing enough. Any trimming is\nhelpful, no?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 31 May 2000 22:36:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_am.amowner" }, { "msg_contents": "Tom Lane writes:\n\n> But the other side of the coin is what harm is it doing?\n\nWell, I'm going to have to change it from int32 to oid but I might as well\nremove it with about the same amount of keystrokes and the same effect. 
:)\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 3 Jun 2000 01:47:30 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_am.amowner " } ]
[ { "msg_contents": "> On Wed, May 31, 2000 at 04:46:01PM -0400, Bruce Momjian wrote:\n> > OK, I figured out how to get a log of all changes from 7.0 to the branch\n> > split:\n> > \n> > \tcvs log -r REL7_0 -r REL7_0_PATCHES . \n> > \n> > Now, I need a list of log entries just in the REL7_0_PATCHES branch made\n> > after the branch was split. I tried:\n> > \n> > \tcvs log -d '>2000-05-08 00:00:00 GMT' -r REL7_0_PATCHES .\n> \n> Hmm, it looks to me like space is significant: the options take their\n> value up to the next space, so (according to the book 'Open Source Development \n> with CVS' found at red-bean.com/cvsbook), something like this\n> should work:\n> \n> cvs log -d'>2000-05-08 00:00:00 GMT' -rREL7_0_PATCHES .\n\nThis does not work. It only gets changes since REL7_0PATCHES was\ncreated.\n\n> \n> Actually, the description of the cvs log options specifically mentions\n> that spaces are not allowed.\n\nOK, thanks, I got it working with:\n\n\tcvs log -d'2000-05-08 00:00:00 GMT<2000-05-29 00:00:00 GMT'\n\tcvs log -d'>2000-05-29 00:00:00 GMT' -rREL7_0_PATCHES\n\nThe first gets all change up to the branch, the second gets stuff after\nthe branch. I am working on 7.0.1 release packaging now. Sorry for the\ndelay. I am just CVS messed up.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 31 May 2000 21:28:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: more cvs problems" } ]
[ { "msg_contents": "Greetings,\n\nI have a question about pg6.4.2, I know it is old but upgrading is not an \noption at this point in time (not my decision). :(\n\nEvery night I run the following:\n\n<sql to drop all indexes>\nvacuum;\n<sql to create all indexes>\nvacuum analyze;\n\nThe problem I am having is that somewhere in that process records are being \nlost. There are two tables that have a one-to-one relationship and records \nare always missing from one of the tables, the appnotes:\n\ncreate table applicants (\n app_id int4\n .\n .\n .\n);\n\ncreate table appnotes (\n note_id int4,\n personalnotes text,\n technotes text,\n relonote text,\n follownote text,\n profile text\n);\n\nThe rest of the applicant data is stored in the applicant table. I don't \nreally know why the text fields were broken out into their own table, but \nthey are, and for every record in the applicant table there is supposed to \nbe a record in the appnotes table.\n\nI added the following query before and after the normal nightly sql and \nthis is what I got:\n\n-----------\nselect a.app_id from applicants as a where a.app_id not in\n(select n.note_id from appnotes as n where n.note_id=a.app_id);\napp_id\n------\n(0 rows)\n\n<sql to drop all indexes>\nvacuum;\n<sql to create all indexes>\nvacuum analyze;\n\nselect a.app_id from applicants as a where a.app_id not in\n(select n.note_id from appnotes as n where n.note_id=a.app_id);\napp_id\n------\n27555\n26446\n27556\n1734\n26502\n26246\n(6 rows)\n\n------------\n\nWhat happened? Did vacuum eat them or something? The records are always \njust missing out of the appnotes table.\n\nAny insight would be greatly appreciated.\n\nThank you,\nMatthew\n\n", "msg_date": "Wed, 31 May 2000 23:29:14 -0400", "msg_from": "Matthew Hagerty <[email protected]>", "msg_from_op": true, "msg_subject": "pg6.4.2 eating records..." 
}, { "msg_contents": "Matthew Hagerty <[email protected]> writes:\n> I have a question about pg6.4.2, I know it is old but upgrading is not an \n> option at this point in time (not my decision). :(\n\nTry to persuade your PHB to reconsider ;-)\n\n> Every night I run the following:\n\n> <sql to drop all indexes>\n> vacuum;\n> <sql to create all indexes>\n> vacuum analyze;\n\nThis is a little bizarre, to say the least. It should be\n\n> <sql to drop all indexes>\n> vacuum analyze;\n> <sql to create all indexes>\n\nThere's no point in running two vacuums, and there's even less point\nin dropping/recreating indexes around the vacuum only to proceed to\nrun another vacuum with the indexes in place.\n\n> select a.app_id from applicants as a where a.app_id not in\n> (select n.note_id from appnotes as n where n.note_id=a.app_id);\n> app_id\n> ------\n> 27555\n> 26446\n> 27556\n> 1734\n> 26502\n> 26246\n> (6 rows)\n\nUgh. It would be interesting to see EXPLAIN of this query run just\nbefore and just after the vacuum sequence. If it's failing just\nafter the nightly vacuum, what makes it start working again by the\ntime of the next one?\n\nIn all honesty, you are not likely to attract a whole lot of interest\nin fixing 6.4.* bugs at this late date. My own interest will only\nextend as far as making sure the bug is not still there in 7.0...\n\n> What happened? Did vacuum eat them or something? The records are always \n> just missing out of the appnotes table.\n\nMy guess is the records are still there, but due to some bug the\nspecific query you are using fails to find them. Postgres has many\nfaults, but losing data completely isn't usually one of them.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Jun 2000 00:51:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg6.4.2 eating records... 
" }, { "msg_contents": "At 12:51 AM 6/1/00 -0400, Tom Lane wrote:\n>Matthew Hagerty <[email protected]> writes:\n> > I have a question about pg6.4.2, I know it is old but upgrading is not an\n> > option at this point in time (not my decision). :(\n>\n>Try to persuade your PHB to reconsider ;-)\n\nI am, believe me, I am! :)\n\n> > Every night I run the following:\n>\n> > <sql to drop all indexes>\n> > vacuum;\n> > <sql to create all indexes>\n> > vacuum analyze;\n>\n>This is a little bizarre, to say the least. It should be\n> > <sql to drop all indexes>\n> > vacuum analyze;\n> > <sql to create all indexes>\n>\n>There's no point in running two vacuums, and there's even less point\n>in dropping/recreating indexes around the vacuum only to proceed to\n>run another vacuum with the indexes in place.\n\nWell, I do it that way based on your feedback, Tom. ;) You said once that \nyou should drop the indexes prior to running vacuum, then another time you \nsaid vacuum analyze should be run with indexes in place. So I do both. Is \nthis bad?\n\n\n> > select a.app_id from applicants as a where a.app_id not in\n> > (select n.note_id from appnotes as n where n.note_id=a.app_id);\n> > app_id\n> > ------\n> > 27555\n> > 26446\n> > 27556\n> > 1734\n> > 26502\n> > 26246\n> > (6 rows)\n>\n>Ugh. It would be interesting to see EXPLAIN of this query run just\n>before and just after the vacuum sequence. If it's failing just\n>after the nightly vacuum, what makes it start working again by the\n>time of the next one?\n\nActually it does not fix itself, I add new records everyday to the appnotes \ntable so the application does not break. I know I know.\n\n\n>In all honesty, you are not likely to attract a whole lot of interest\n>in fixing 6.4.* bugs at this late date. 
My own interest will only\n>extend as far as making sure the bug is not still there in 7.0...\n\nI guess I'm not really looking for a fix, I was just wondering if this was \na known problem with 6.4.2 and/or if there was maybe a patch that fixed it \nor something. I need to tell my project manager something intelligent so \nhe can communicate to the client and get them to spend the money to have \nthe database upgraded to 7.0.\n\n> > What happened? Did vacuum eat them or something? The records are always\n> > just missing out of the appnotes table.\n>\n>My guess is the records are still there, but due to some bug the\n>specific query you are using fails to find them. Postgres has many\n>faults, but losing data completely isn't usually one of them.\n>\n> regards, tom lane\n\nThanks Tom!\n\nMatthew\n\n", "msg_date": "Thu, 01 Jun 2000 01:41:25 -0400", "msg_from": "Matthew Hagerty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg6.4.2 eating records... " }, { "msg_contents": "Matthew Hagerty <[email protected]> writes:\n>>>> <sql to drop all indexes>\n>>>> vacuum;\n>>>> <sql to create all indexes>\n>>>> vacuum analyze;\n>> \n>> This is a little bizarre, to say the least. It should be\n>>>> <sql to drop all indexes>\n>>>> vacuum analyze;\n>>>> <sql to create all indexes>\n>> \n>> There's no point in running two vacuums, and there's even less point\n>> in dropping/recreating indexes around the vacuum only to proceed to\n>> run another vacuum with the indexes in place.\n\n> Well, I do it that way based on your feedback, Tom. ;) You said once that \n> you should drop the indexes prior to running vacuum, then another time you \n> said vacuum analyze should be run with indexes in place.\n\nI did? That must have been long ago and far away, because I am now well\naware that vacuum analyze doesn't do anything special with indexes...\n\n> So I do both. 
Is this bad?\n\nWell, other than causing wear and tear on your disk drives, it probably\nisn't harmful as long as you've got the cycles to spare. But it's\ncertainly a waste of time.\n\n>> In all honesty, you are not likely to attract a whole lot of interest\n>> in fixing 6.4.* bugs at this late date. My own interest will only\n>> extend as far as making sure the bug is not still there in 7.0...\n\n> I guess I'm not really looking for a fix, I was just wondering if this was \n> a known problem with 6.4.2 and/or if there was maybe a patch that fixed it \n> or something.\n\nDunno. We've fixed a heckuva lot of bugs since 6.4, but I don't have\nenough information to tell if this is one of them. If it remains\nunfixed, we'll sure do our best to fix it ... but we'd really like to\nsee a report against 7.0 before we spend a lot of effort investigating.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Jun 2000 02:36:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg6.4.2 eating records... " } ]
[ { "msg_contents": "I've been doing some poking at the full-text indexing code in\n/contrib/fulltextindex to try to get it to work with non-ASCII locales \n(among other things), but I'm having a bit of trouble trying to figure \nout how to properly parse non-ASCII strings from inside the fti() \ntrigger function (which is written in C).\n\nMy problem is this:\n\nI want to aggregate text in multiple languages in a single full-text index\nmuch like the current structure used by the current fti() function. In order\nto correctly parse the strings, however, I've got to know what locale\nthey're written in/for (otherwise, isalpha() thinks that characters such as\nthe Hungarian letter u\" -- that's a 'u' with a double acute accent -- aren't\nvery alphabetic.)\n\nMy initial thinking (which could certainly be very wrong) is that the\neasiest way to get around this would be to allow client apps to set their\nLC_ALL environment variables, and then to have the new fti() function use\nthat locale while doing string manipulation.\n\nBut the way I'm doing things, it doesn't appear that the LC_ALL environment\nvariable is available. (Maybe it was never meant to be ... but I'm not a \nvery skilled C programmer, and I don't know the first thing about the SPI \ninterface, so please forgive me if I'm asking why the sun doesn't rise in \nthe west more often ;-)).\n\nHere's what's happening:\n\n\tbash# LC_ALL=hu_HU\n\tbash# export LC_ALL\n\tbash# psql test\n\tWelcome to psql, the PostgreSQL interactive terminal.\n\n\tType: \\copyright for distribution terms\n\t \\h for help with SQL commands\n\t \\? 
for help on internal slash commands\n\t \\g or terminate with semicolon to execute query\n\t \\q to quit\n\n\ttest=# INSERT INTO ttxt (t1) values ('FELELŐSSÉGŰ');\n\tINSERT 513377 1\n\ttest=#select * from ttxt_fti;\n\t string | id \n\t--------+--------\n\t felel | 513377\n\t ss | 513377\n\t(2 rows)\n\nWhich isn't quite what I'm looking for ;-).\n\nInside the C source of fti(), I added a call to getenv(\"LC_ALL\") to make\nsure that LC_ALL really isn't set:\n\n locale = getenv(\"LC_ALL\");\n elog(NOTICE,\"Locale is '%s'\\n\",locale);\n\nAnd sure enough, it outputs:\n\n\tNOTICE: Locale is '(null)'\n\nIf, on the other hand, I do:\n\n\tsetlocale(\"LC_ALL\",\"hu_HU\")\n\ninside fti(), everything works out perfectly:\n\n\ttest=# INSERT INTO ttxt (t1) values ('FELELŐSSÉGŰ');\n\tINSERT 513410 1\n\ttest=# select * from ttxt_fti;\n\t string | id \n\t-------------+--------\n\t felelősségű | 513410\n\t(1 row)\n\n\nAny ideas?\n\nCheers,\nCharlie\n\nP.S. I only subscribe to the hackers digest, so please CC me with your \nreplies... Thanks!\n\n", "msg_date": "Wed, 31 May 2000 22:34:36 -0500", "msg_from": "Charlie Hornberger <[email protected]>", "msg_from_op": true, "msg_subject": "full-text indexing, locales, triggers, SPI & more fun" }, { "msg_contents": "\nOn Wed, 31 May 2000, Charlie Hornberger wrote:\n\n> I want to aggregate text in multiple languages in a single full-text index\n> much like the current structure used by the current fti() function. 
In order\n> to correctly parse the strings, however, I've got to know what locale\n> they're written in/for (otherwise, isalpha() thinks that characters such as\n> the Hungarian letter u\" -- that's a 'u' with a double acute accent -- aren't\n> very alphabetic.)\n> \n> My initial thinking (which could certainly be very wrong) is that the\n> easiest way to get around this would be to allow client apps to set their\n> LC_ALL environment variables, and then to have the new fti() function use\n> that locale while doing string manipulation.\n> \n> But the way I'm doing things, it doesn't appear that the LC_ALL environment\n> variable is available. (Maybe it was never meant to be ... but I'm not a \n> very skilled C programmer, and I don't know the first thing about the SPI \n> interface, so please forgive me if I'm asking why the sun doesn't rise in \n> the west more often ;-)).\n\nPostgreSQL sets the following locale categories in main() (if you compile\nit with locale support):\n\n#ifdef USE_LOCALE\n    setlocale(LC_CTYPE, \"\"); \n    setlocale(LC_COLLATE, \"\");\n    setlocale(LC_MONETARY, \"\");\n#endif\n\n If you need ctype.h's functions in your routines, one solution is to set\nthe LANG environment variable:\n\t# LANG=Czech\t\t(for me)\n\t# start_postmaster\n\n It works very well, and you do not need any other setting. \n\n IMHO, using setlocale(LC_ALL, ...) is very hard on the backend, because,\nfor example, all float data will be broken.\n\n If you (still :-) need full locale support, see pg_locale.c in the\nPostgreSQL sources in utils/adt, and the usage of those routines in\nformatting.c (to_char()), which uses the full locale for numbers. \n\nBut don't forget - you must always restore everything to the state from\nbefore setlocale(LC_ALL, ...). \n\t\n\t\t\t\t\tKarel\n\n", "msg_date": "Fri, 2 Jun 2000 12:08:20 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: full-text indexing, locales, triggers, SPI & more fun" } ]
[ { "msg_contents": "I have prepared the REL7_0_PATCHES tree for 7.0.1. Here are the changes.\n\n---------------------------------------------------------------------------\n\n\nRelease 7.0.1\n\nThis is basically a cleanup release for 7.0.1\n\nMigration to v7.0.1\n\nA dump/restore is not required for those running 7.0.\n\nChanges\n-------\nFix many CLUSTER failures (Tom)\nAllow ALTER TABLE RENAME works on indexes (Tom)\nFix plpgsql to handle datetime->timestamp and timespan->interval (Bruce)\nNew configure --with-setproctitle switch to use setproctitle() (Marc, Bruce)\nFix the off by one errors in ResultSet from 6.5.3, and more.\njdbc ResultSet fixes (Joseph Shraibman)\noptimizer tunings (Tom)\nFix create user for pgaccess\nFix for UNLISTEN failure\nIRIX fixes (David Kaelbling)\nQNX fixes (Andreas Kardos)\nReduce COPY IN lock level (Tom)\nChange libpqeasy to use PQconnectdb() style parameters (Bruce)\nFix pg_dump to handle OID indexes (Tom)\nFix small memory leak (Tom)\nSolaris fix for createdb/dropdb\nFix for non-blocking connections (Alfred Perlstein)\nFix improper recovery after RENAME TABLE failures (Tom)\nCopy pg_ident.conf.sample into /lib directory in install (Bruce)\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Thu, 1 Jun 2000 01:15:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "7.0.1 is ready" }, { "msg_contents": "\nwill wrap her up tomorrow and announce availability ... \n\nOn Thu, 1 Jun 2000, Bruce Momjian wrote:\n\n> I have prepared the REL7_0_PATCHES tree for 7.0.1. 
Here are the changes.\n> \n> ---------------------------------------------------------------------------\n> \n> \n> Release 7.0.1\n> \n> This is basically a cleanup release for 7.0.1\n> \n> Migration to v7.0.1\n> \n> A dump/restore is not required for those running 7.0.\n> \n> Changes\n> -------\n> Fix many CLUSTER failures (Tom)\n> Allow ALTER TABLE RENAME works on indexes (Tom)\n> Fix plpgsql to handle datetime->timestamp and timespan->interval (Bruce)\n> New configure --with-setproctitle switch to use setproctitle() (Marc, Bruce)\n> Fix the off by one errors in ResultSet from 6.5.3, and more.\n> jdbc ResultSet fixes (Joseph Shraibman)\n> optimizer tunings (Tom)\n> Fix create user for pgaccess\n> Fix for UNLISTEN failure\n> IRIX fixes (David Kaelbling)\n> QNX fixes (Andreas Kardos)\n> Reduce COPY IN lock level (Tom)\n> Change libpqeasy to use PQconnectdb() style parameters (Bruce)\n> Fix pg_dump to handle OID indexes (Tom)\n> Fix small memory leak (Tom)\n> Solaris fix for createdb/dropdb\n> Fix for non-blocking connections (Alfred Perlstein)\n> Fix improper recovery after RENAME TABLE failures (Tom)\n> Copy pg_ident.conf.sample into /lib directory in install (Bruce)\n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 1 Jun 2000 02:24:37 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.1 is ready" }, { "msg_contents": "> I have prepared the REL7_0_PATCHES tree for 7.0.1. 
Here are the changes.\n> \n> ---------------------------------------------------------------------------\n> Solaris fix for createdb/dropdb\n\ndone by me.\n\nCould you add followings please?\n\nAdd SJIS UDC (NEC selection IBM kanji) support (Eiji Tokuya)\nFix too long syslog message (Tatsuo)\n--\nTatsuo Ishii\n", "msg_date": "Thu, 01 Jun 2000 14:48:04 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.1 is ready" }, { "msg_contents": "OK, can someone test it to make sure it is OK? Thanks.\n\n\n> \n> will wrap her up tomorrow and announce availability ... \n> \n> On Thu, 1 Jun 2000, Bruce Momjian wrote:\n> \n> > I have prepared the REL7_0_PATCHES tree for 7.0.1. Here are the changes.\n> > \n> > ---------------------------------------------------------------------------\n> > \n> > \n> > Release 7.0.1\n> > \n> > This is basically a cleanup release for 7.0.1\n> > \n> > Migration to v7.0.1\n> > \n> > A dump/restore is not required for those running 7.0.\n> > \n> > Changes\n> > -------\n> > Fix many CLUSTER failures (Tom)\n> > Allow ALTER TABLE RENAME works on indexes (Tom)\n> > Fix plpgsql to handle datetime->timestamp and timespan->interval (Bruce)\n> > New configure --with-setproctitle switch to use setproctitle() (Marc, Bruce)\n> > Fix the off by one errors in ResultSet from 6.5.3, and more.\n> > jdbc ResultSet fixes (Joseph Shraibman)\n> > optimizer tunings (Tom)\n> > Fix create user for pgaccess\n> > Fix for UNLISTEN failure\n> > IRIX fixes (David Kaelbling)\n> > QNX fixes (Andreas Kardos)\n> > Reduce COPY IN lock level (Tom)\n> > Change libpqeasy to use PQconnectdb() style parameters (Bruce)\n> > Fix pg_dump to handle OID indexes (Tom)\n> > Fix small memory leak (Tom)\n> > Solaris fix for createdb/dropdb\n> > Fix for non-blocking connections (Alfred Perlstein)\n> > Fix improper recovery after RENAME TABLE failures (Tom)\n> > Copy pg_ident.conf.sample into /lib directory in install (Bruce)\n> > \n> > \n> > -- \n> > 
Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> > \n> > \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 1 Jun 2000 01:59:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0.1 is ready" }, { "msg_contents": "Done.\n\n> > I have prepared the REL7_0_PATCHES tree for 7.0.1. Here are the changes.\n> > \n> > ---------------------------------------------------------------------------\n> > Solaris fix for createdb/dropdb\n> \n> done by me.\n> \n> Could you add followings please?\n> \n> Add SJIS UDC (NEC selection IBM kanji) support (Eiji Tokuya)\n> Fix too long syslog message (Tatsuo)\n> --\n> Tatsuo Ishii\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 1 Jun 2000 02:01:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0.1 is ready" }, { "msg_contents": "On Thu, Jun 01, 2000 at 01:59:04AM -0400, Bruce Momjian wrote:\n> OK, can someone test it to make sure it is OK? Thanks.\n> > > ... \n> > > Changes\n> > > -------\n\nI hope the changes to ecpg are included to:\n\ninclude ecpg version 2.7.1\ninclude libecpg version 3.1.1\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! 
Use PostgreSQL!\n", "msg_date": "Fri, 2 Jun 2000 10:35:36 +0200", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.1 is ready" }, { "msg_contents": "> On Thu, Jun 01, 2000 at 01:59:04AM -0400, Bruce Momjian wrote:\n> > OK, can someone test it to make sure it is OK? Thanks.\n> > > > ... \n> > > > Changes\n> > > > -------\n> \n> I hope the changes to ecpg are included to:\n> \n> include ecpg version 2.7.1\n> include libecpg version 3.1.1\n\nBecause these aren't specific items and because ecpg has its own\nchangelog file, I usually only mention that ecpg has changed without\ngiving details.\n\nI will add a mention of ecpg changes.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 2 Jun 2000 11:06:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0.1 is ready" }, { "msg_contents": "On Fri, Jun 02, 2000 at 11:06:00AM -0400, Bruce Momjian wrote:\n> Because these aren't specific items and because ecpg has its own\n> changelog file, I usually only mention that ecpg has changed without\n> giving details.\n> \n> I will add a mention of ecpg changes.\n\nThat's not really needed. I just wanted to make sure the changes are in. \n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! 
Use PostgreSQL!\n", "msg_date": "Fri, 2 Jun 2000 21:13:52 +0200", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.1 is ready" }, { "msg_contents": "> On Fri, Jun 02, 2000 at 11:06:00AM -0400, Bruce Momjian wrote:\n> > Because these aren't specific items and because ecpg has its own\n> > changelog file, I usually only mention that ecpg has changed without\n> > giving details.\n> > \n> > I will add a mention of ecpg changes.\n> \n> That's not really needed. I just wanted to make sure the changes are in. \n\nUh, how do I know if they are in?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 2 Jun 2000 15:24:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0.1 is ready" } ]
[ { "msg_contents": "Bruce, some more additions:\n\nJDBC ResultSet.getTimestamp() fix (Gregory Krasnow & Floyd Marinescu)\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council. \n", "msg_date": "Thu, 1 Jun 2000 07:54:01 +0100 ", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: 7.0.1 is ready" }, { "msg_contents": "Peter, I do not see these patches applied in the REL7_0_PATCHES tree. \nIf you went to send me a patch, I can apply it into the tree.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> Bruce, some more additions:\n> \n> JDBC ResultSet.getTimestamp() fix (Gregory Krasnow & Floyd Marinescu)\n> \n> Peter\n> \n> -- \n> Peter Mount\n> Enterprise Support\n> Maidstone Borough Council\n> Any views stated are my own, and not those of Maidstone Borough Council. \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 1 Jun 2000 12:29:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.1 is ready" }, { "msg_contents": "Got it. I will put it under 7.0.1 and it will appear in 7.0.2. Sorry I\nwas confused and did not put it in the first time.\n\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> Bruce, some more additions:\n> \n> JDBC ResultSet.getTimestamp() fix (Gregory Krasnow & Floyd Marinescu)\n> \n> Peter\n> \n> -- \n> Peter Mount\n> Enterprise Support\n> Maidstone Borough Council\n> Any views stated are my own, and not those of Maidstone Borough Council. \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 2 Jun 2000 10:55:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.1 is ready" } ]
[ { "msg_contents": "\ncan someone check over the tarballs I made for v7.0.1 that are available\non the ftp site? \n\nunder pub/dev, there are a few 7.0.1b1 tarballs ... if I hear nothing back\non these by 1pm my time (AST), I'm going to do a proper release ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 1 Jun 2000 09:45:33 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Quick trial bundle ..." }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> can someone check over the tarballs I made for v7.0.1 that are available\n> on the ftp site? \n\nShouldn't the top-level directory name inside the tarball be\npostgresql-7.0.1, not just pgsql?\n\nOtherwise the full tarball looks good. Didn't check the others.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Jun 2000 11:06:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quick trial bundle ... " }, { "msg_contents": "\nd'oh, knew I forgot something ... building a new one now, but if that is\nall that appears wrong, as long as this tar ball builds iwth the proper\ntop level, I'll release as is ...\n\nOn Thu, 1 Jun 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > can someone check over the tarballs I made for v7.0.1 that are available\n> > on the ftp site? \n> \n> Shouldn't the top-level directory name inside the tarball be\n> postgresql-7.0.1, not just pgsql?\n> \n> Otherwise the full tarball looks good. Didn't check the others.\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 1 Jun 2000 13:45:39 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Quick trial bundle ... " } ]
[ { "msg_contents": "The definition for \"ProcessConfigFile()\" in guc-file.l does not match the\nprototype in guc.h. The following patch corrects that.\n\n--------------------------8< CUT HERE >8--------------------------------\n*** src/backend/utils/misc/guc-file.l.orig\tWed May 31 16:46:43 2000\n--- src/backend/utils/misc/guc-file.l\tWed May 31 16:47:08 2000\n***************\n*** 124,130 ****\n * values will be changed.\n */\n void\n! ProcessConfigFile(unsigned int context)\n {\n \tint token, parse_state;\n \tchar *opt_name, *opt_value;\n--- 124,130 ----\n * values will be changed.\n */\n void\n! ProcessConfigFile(GucContext context)\n {\n \tint token, parse_state;\n \tchar *opt_name, *opt_value;\n--------------------------8< CUT HERE >8--------------------------------\n", "msg_date": "Thu, 01 Jun 2000 12:01:28 -0400", "msg_from": "\"Billy G. Allie\" <[email protected]>", "msg_from_op": true, "msg_subject": "Problem compiling guc-file.l in current CVS sources." }, { "msg_contents": "Applied.\n\n> The definition for \"ProcessConfigFile()\" in guc-file.l does not match the\n> prototype in guc.h. The following patch corrects that.\n> \n> --------------------------8< CUT HERE >8--------------------------------\n> *** src/backend/utils/misc/guc-file.l.orig\tWed May 31 16:46:43 2000\n> --- src/backend/utils/misc/guc-file.l\tWed May 31 16:47:08 2000\n> ***************\n> *** 124,130 ****\n> * values will be changed.\n> */\n> void\n> ! ProcessConfigFile(unsigned int context)\n> {\n> \tint token, parse_state;\n> \tchar *opt_name, *opt_value;\n> --- 124,130 ----\n> * values will be changed.\n> */\n> void\n> ! ProcessConfigFile(GucContext context)\n> {\n> \tint token, parse_state;\n> \tchar *opt_name, *opt_value;\n> --------------------------8< CUT HERE >8--------------------------------\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 1 Jun 2000 12:46:37 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem compiling guc-file.l in current CVS sources." } ]
[ { "msg_contents": "Hi,\n\n has anybody used libpq from a CORBA server? I've got a server\n(omniORB3) and try to write data to the database from the server. libpq\nseems to receive incorrect messages from the backend (\"Backend message\ntype 0x50 arrived while idle\"). As this is intermittent I'm wondering\nwhether libpq is picking up messages intended for the CORBA server\nrather than for itself.\n\nHas anybody else done this before and knows whether this works or not?\nThere isn't an alternative to using libpq, is there?\n\nThanks,\n\nAdriaan\n\n", "msg_date": "Thu, 01 Jun 2000 19:04:09 +0300", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": true, "msg_subject": "CORBA / libpq" }, { "msg_contents": "Sounds like you are doing something seriously wrong. (Such as connecting\nto a wrong machine or a wrong port, confusing hell out of libpq). \n\nlibpq opens a new tcp connection to server, and it is impossible for it to\npick up corba traffic....\n\n-alex\n\n\nOn Thu, 1 Jun 2000, Adriaan Joubert wrote:\n\n> Hi,\n> \n> has anybody used libpq from a CORBA server? I've got a server\n> (omniORB3) and try to write data to the database from the server. libpq\n> seems to receive incorrect messages from the backend (\"Backend message\n> type 0x50 arrived while idle\"). As this is intermittent I'm wondering\n> whether libpq is picking up messages intended for the CORBA server\n> rather than for itself.\n> \n> Has anybody else done this before and knows whether this works or not?\n> There isn't an alternative to using libpq, is there?\n> \n> Thanks,\n> \n> Adriaan\n> \n> \n\n", "msg_date": "Thu, 1 Jun 2000 13:03:57 -0400 (EDT)", "msg_from": "Alex Pilosov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CORBA / libpq" } ]
[ { "msg_contents": "Thomas Lockhart wrote:\n\n> > > > I modified the current ODBC driver for\n> > > > * referential integrity error reporting,\n> > > > * SELECT in transactions and\n> > > > * disabling autocommit.\n> > > We are starting to think about organizing additional ODBC testing\n> > Yes, sure. I know that this code (which was sent to Thomas) needs\n> > further check. Have you had time to think about it?\n>\n> Sorry to ask this: can you please re-re-send the patch to me? I've\n> upgraded my machine and at the moment my backup disk is unreadable :(\n>\n> re-TIA\n>\n> - Thomas\n\n\nDoes this anything to do with a problem I'm having:\n\nWhen selecting data (using ODBC) from within a transaction block with\nV7.0 I get NO data back :( The same query performed outside the block\nworks fine !!\nThis same code works ok with 6.5.x with no problems.\n\nAny Clues ?\n\nTIA,\n Frank.\n\n\n\n\n\n", "msg_date": "Thu, 01 Jun 2000 12:36:30 -0400", "msg_from": "frank <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: ODBC patch" } ]
[ { "msg_contents": "Has the problem resulting from length(pg_proc.prosrc) > 2700\" been fixed?\n\nRegards,\nEd Loehr\n", "msg_date": "Thu, 01 Jun 2000 11:54:28 -0500", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": true, "msg_subject": "2700-byte prosrc limit fixed?" }, { "msg_contents": "Yes. Of course the BLCKSZ limit is still there :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Jun 2000 13:18:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2700-byte prosrc limit fixed? " }, { "msg_contents": "Ed Loehr wrote:\n> Has the problem resulting from length(pg_proc.prosrc) > 2700\" been fixed?\n\n Yes. There was an obsolete index on that column that has been\n removed in 7.0.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Fri, 2 Jun 2000 10:15:09 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: 2700-byte prosrc limit fixed?" } ]
[ { "msg_contents": "Hi all,\n\nI regularly do a \"cvs update\" and compile and test PostgreSQL.\n\nRecently, since about 1 week, I've had a nasty problem.\n\nDoing an \"initdb\" seems to suck up all available memory and almost\nkills the system, before dying itself with a SEGV.\n\nThe problem postgress process is:-\n\n /usr/local/pgsql/bin/postgres -boot -x -C -F -D/usr/local/pgsql/data -d \ntemplate1\n \nThe system becomes VERY unresponsive when this postgres process\nstarts running, so difficult to attach to with gdb. \n\nI'm stuck for a clue as to how to debug this.\n\nIs anyone else seeing this problem recently?\n\nIs it just a Solaris problem?\n(Solaris 2.6 on SPARCstation 5)\n\nIs it just me? :-(\n\nHelp,\n\nKeith.\n\n", "msg_date": "Thu, 1 Jun 2000 21:31:17 +0100 (BST)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Problems with recent CVS versions and Solaris." }, { "msg_contents": "Keith Parks <[email protected]> writes:\n> Recently, since about 1 week, I've had a nasty problem.\n> Doing an \"initdb\" seems to suck up all available memory and almost\n> kills the system, before dying itself with a SEGV.\n\nHmm --- no such problem noted here, and I've been doing lots of initdbs...\n\nIt must be somewhat platform-specific. See if you can get a coredump\nand backtrace.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Jun 2000 17:48:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with recent CVS versions and Solaris. " } ]
[ { "msg_contents": "Dear Friends,\n\nlast months I started to learn script programming a bit more.\nSince we work much with Postgres, I decided to write PGBrowse, an interactive\nPostgreSQL query program mainly for console browsing, written fully\nin bash script (just 5K). Don't expect too much. I find it fast and useful\nbut it needs many improvements to become a real tool. Based on dialog\nand some Unix utils. It can be found on\nhttp://www.math.u-szeged.hu/~kovzol/PGBrowse/pgbrowse\n\nRegards,\nZoltan\n\n Kov\\'acs, Zolt\\'an\n [email protected]\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz\n\n", "msg_date": "Thu, 1 Jun 2000 23:17:19 +0200 (CEST)", "msg_from": "Kovacs Zoltan <[email protected]>", "msg_from_op": true, "msg_subject": "announcing pgbrowse" } ]
[ { "msg_contents": " I have used MySQL to do several database related things in the past\nusing PHP. In the past I have used a field \"row_id\" as a unique number\n(within that specific table) as a reference to a specific row. This\n\"row_id\" field was automatically placed in a table when it was created\nin MySQL and the next unique number was placed in the field when\nautomatically during every new insert. This made things easy for\nwriting applications in PHP. When I switch to Postgres I noticed that\nthere was a OID. I believe that this \"object identifier\" is similar to\nthe \"row_id\" in MySQL but I am unable to access it for an given row.\nPHP has a function which can get the last OID for the last \"Insert\"\nissued, however, this won't help me accomplish the same things I was\nable to accomplish using \"row_id\" in MySQL. I have read the\ndocumetation and have not found a real good description of OID but have\nfound commands that can add a unique sequence column which could\naccomplish what I need. However, I need (or want) the unique sequence\ncolumn to maintain itself, without calling a function to fill in that\nfield during an insert. I am sure there is an easy way to accomplish\nthis and I am overlooking the solution. Could someone suggest what they\nhave done? Thanks for any response.\n\n\n Jacques Huard\n jacques@_no_spam_intuineer.com\n\n\n", "msg_date": "Thu, 01 Jun 2000 22:29:16 GMT", "msg_from": "Jacques Huard <[email protected]>", "msg_from_op": true, "msg_subject": "OID question" }, { "msg_contents": "Jacques Huard wrote:\n> \n> ...I need (or want) the unique sequence\n> column to maintain itself, without calling a function to fill in that\n> field during an insert...\n\nTry using the SERIAL type.\n\n\thttp://www.postgresql.org/docs/faq-english.html#4.16.1\n\nRegards,\nEd Loehr\n", "msg_date": "Fri, 02 Jun 2000 09:49:02 -0500", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OID question" } ]
[ { "msg_contents": "Oops, mailed it to myself instead of the list!\n\nIt's been a long day...\n\n\n------------- Begin Forwarded Message -------------\n\nDate: Thu, 1 Jun 2000 23:31:01 +0100 (BST)\nFrom: Keith Parks <[email protected]>\nSubject: Re: [HACKERS] Problems with recent CVS versions and Solaris.\nTo: [email protected]\nMIME-Version: 1.0\n\nI've managed to get a backtrace, attached, thanks to Ross J. Reedstrom's\nexcellent example from the archives, also attached.\n\nI'm not sure whether the stack frame shown is corrupt, it seems to just\nloop over and over again. (I got fed up after 400+ frames)\n\nThe final few frames show us asking for more memory, the point at\nwhich things seem to go out of control.\n\n#0 0xef5d33b8 in _brk_unlocked ()\n#1 0xef5ce2f8 in _sbrk_unlocked ()\n#2 0xef5ce26c in sbrk ()\n#3 0xef585bb0 in _morecore ()\n#4 0xef58549c in _malloc_unlocked ()\n#5 0xef5852b4 in malloc ()\n#6 0x139198 in AllocSetAlloc (set=0x1bea10, size=4032) at aset.c:285\n#7 0x139ea8 in GlobalMemoryAlloc (this=0x1bea08, size=4008) at mcxt.c:419\n#8 0x1399ec in MemoryContextAlloc (context=0x1bea08, size=4008) at mcxt.c:224\n#9 0x12c700 in InitSysCache (relname=0x180f40 \"pg_proc\", \n iname=0x180f08 \"pg_proc_oid_index\", id=18, nkeys=1, key=0x19a2f0, \n iScanfuncP=0x6e1c8 <ProcedureOidIndexScan>) at catcache.c:705\n#10 0x1312d8 in SearchSysCacheTuple (cacheId=18, key1=184, key2=0, key3=0, \n key4=0) at syscache.c:509\n\nIs this any help?\n\nI'm no expert in gdb, but I can follow instructions. 
;-)\n\nThanks,\nKeith.\n\n\nKeith Parks <[email protected]>\n> \n> Hi all,\n> \n> I regularly do a \"cvs update\" and compile and test PostgreSQL.\n> \n> Recently, since about 1 week, I've had a nasty problem.\n> \n> Doing an \"initdb\" seems to suck up all available memory and almost\n> kills the system, before dying itself with a SEGV.\n> \n> The problem postgress process is:-\n> \n> /usr/local/pgsql/bin/postgres -boot -x -C -F -D/usr/local/pgsql/data -d \n> template1\n> \n> The system becomes VERY unresponsive when this postgres process\n> starts running, so difficult to attach to with gdb. \n> \n> I'm stuck for a clue as to how to debug this.\n> \n> Is anyone else seeing this problem recently?\n> \n> Is it just a Solaris problem?\n> (Solaris 2.6 on SPARCstation 5)\n> \n> Is it just me? :-(\n> \n> Help,\n> \n> Keith.\n> \n\n------------- End Forwarded Message -------------", "msg_date": "Thu, 1 Jun 2000 23:32:13 +0100 (BST)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with recent CVS versions and Solaris." }, { "msg_contents": "Keith Parks <[email protected]> writes:\n> I've managed to get a backtrace, attached, thanks to Ross J. Reedstrom's\n> excellent example from the archives, also attached.\n\n> I'm not sure whether the stack frame shown is corrupt, it seems to just\n> loop over and over again. (I got fed up after 400+ frames)\n\nWhat we've got here is the syscache trying to set up for a search of\ncache 18, which I believe is the pg_proc-indexed-on-OID cache.\nFor that it needs the OID comparison function, \"oideq\" (OID 184).\nIt's asking the funcmgr for oideq ... and funcmgr is turning around\nand asking the syscache for the pg_proc entry with OID 184. Ooops.\n\nI thought there was an interlock in there to report a useful message if\na syscache got called recursively like this. Have to look at why it's\nnot working. 
However, I guess your real problem is that the funcmgr is\nfailing to find proc OID 184 in its own table of built-in functions.\nThe reason this isn't a recursion under normal circumstances is that the\ncomparison functions the syscaches need are all supposed to be hardwired\ninto fmgr.\n\nMy bet is that there is something snafu'd in your generation of\nfmgrtab.c from pg_proc.h via Gen_fmgrtab.sh, such that your table of\nbuiltin functions is either empty or corrupt.\n\nBefore wasting any additional time on it I'd recommend a make distclean,\ncvs update, configure and rebuild from scratch to see if the problem\npersists. I changed the Gen_fmgrtab.sh setup last week as part of the\nfirst round of fmgr checkins, and I wouldn't be surprised to find that\nyou've just gotten burnt by out-of-sync files or some such (eg, a local\nfile that needs to be rebuilt but is timestamped a bit newer than the\ncvs-supplied files it depends on).\n\nIf you still see the problem with a virgin build, take a look at\nsrc/backend/utils/Gen_fmgrtab.sh and its output\nsrc/backend/utils/fmgrtab.c to see if you can figure out what's\nwrong. Could be that I introduced some kind of portability problem\ninto Gen_fmgrtab.sh ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Jun 2000 19:22:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with recent CVS versions and Solaris. " } ]
[ { "msg_contents": "Tom, since you check pg_statsistic in the optimizer for the most common\nvalue, should we downgrade the pg_attribute.attdisbursion value when we\nknow the value is not the most common value? Are you doing that\nalready?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 1 Jun 2000 19:42:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "disbursion" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, since you check pg_statsistic in the optimizer for the most common\n> value, should we downgrade the pg_attribute.attdisbursion value when we\n> know the value is not the most common value? Are you doing that\n> already?\n\nYeah, we already do. In fact the last tweak in that area was to reduce\nthe ratio we multiply by in just that situation. It's a poor substitute\nfor actually having better stats of course...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Jun 2000 19:48:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disbursion " } ]
[ { "msg_contents": "Tom,\n\nYou ain't arf clever.\n\nRunning Gen_fmgrtab.sh with a \"set -x\" shows:-\n\nconst FmgrBuiltin fmgr_builtins[] = {\n+ awk { printf (\" { %d, \\\"%s\\\", %d, %s, %s, %s },\\n\"), \\\n $1, $(NF-1), $9, \\\n ($8 == \"t\") ? \"true\" : \"false\", \\\n ($4 == \"11\") ? \"true\" : \"false\", \\\n $(NF-1) } fmgr.raw \nawk: syntax error near line 3\nawk: illegal statement near line 3\n+ cat \n /* dummy entry is easier than getting rid of comma after last real one */\n { 0, NULL, 0, false, false, (PGFunction) NULL }\n};\n\n/* Note fmgr_nbuiltins excludes the dummy entry */\nconst int fmgr_nbuiltins = (sizeof(fmgr_builtins) / sizeof(FmgrBuiltin)) - 1;\n\nLooks like the problem is that, Solaris's awk is \"old\" awk.\n\nIf I change the awk to nawk I get valid output.\n\nI'm just about to start the clean build process with this change.\n\nOnce it's started I'm off to bed. Will check in the morning.\n\nThanks for your trouble, we just need a \"portable\" fix now.\n\nThanks,\nKeith. \n\nTom Lane <[email protected]>\n> \n> Keith Parks <[email protected]> writes:\n> > I've managed to get a backtrace, attached, thanks to Ross J. Reedstrom's\n> > excellent example from the archives, also attached.\n> \n> > I'm not sure whether the stack frame shown is corrupt, it seems to just\n> > loop over and over again. (I got fed up after 400+ frames)\n> \n> What we've got here is the syscache trying to set up for a search of\n> cache 18, which I believe is the pg_proc-indexed-on-OID cache.\n> For that it needs the OID comparison function, \"oideq\" (OID 184).\n> It's asking the funcmgr for oideq ... and funcmgr is turning around\n> and asking the syscache for the pg_proc entry with OID 184. 
Ooops.\n> \n<snip>\n> My bet is that there is something snafu'd in your generation of\n> fmgrtab.c from pg_proc.h via Gen_fmgrtab.sh, such that your table of\n> builtin functions is either empty or corrupt.\n> \n<snip>\n> \n> If you still see the problem with a virgin build, take a look at\n> src/backend/utils/Gen_fmgrtab.sh and its output\n> src/backend/utils/fmgrtab.c to see if you can figure out what's\n> wrong. Could be that I introduced some kind of portability problem\n> into Gen_fmgrtab.sh ...\n> \n> \t\t\tregards, tom lane\n\n", "msg_date": "Fri, 2 Jun 2000 00:44:04 +0100 (BST)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with recent CVS versions and Solaris. " }, { "msg_contents": "Keith Parks <[email protected]> writes:\n> Running Gen_fmgrtab.sh with a \"set -x\" shows:-\n\n> const FmgrBuiltin fmgr_builtins[] = {\n> + awk { printf (\" { %d, \\\"%s\\\", %d, %s, %s, %s },\\n\"), \\\n> $1, $(NF-1), $9, \\\n> ($8 == \"t\") ? \"true\" : \"false\", \\\n> ($4 == \"11\") ? \"true\" : \"false\", \\\n> $(NF-1) } fmgr.raw \n> awk: syntax error near line 3\n> awk: illegal statement near line 3\n\nUgh. I think that the former version of the script didn't use\nconditional expressions (a ? b : c). Perhaps old versions of\nawk don't have those? If so we can probably work around it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Jun 2000 19:53:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with recent CVS versions and Solaris. " }, { "msg_contents": "> Ugh. I think that the former version of the script didn't use\n> conditional expressions (a ? b : c). 
Perhaps old versions of\n> awk don't have those?\n\nIndeed, the GNU awk manual says so very clearly :-(\n\nKeith, I've committed a new version of Gen_fmgrtab.sh.in;\nwould you check that it works on your copy of awk?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Jun 2000 22:01:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with recent CVS versions and Solaris. " }, { "msg_contents": "Tom Lane writes:\n\n> Ugh. I think that the former version of the script didn't use\n> conditional expressions (a ? b : c). Perhaps old versions of\n> awk don't have those? If so we can probably work around it...\n\nWhile you're at it, you should use AC_PROG_AWK to potentially find the\nmost modern and fastest awk on the system. Also, it seems that script has\nreally little to no checking of exit statuses. A segfault during initdb is\na really obscure place to find out about awk syntax errors.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 3 Jun 2000 01:47:12 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with recent CVS versions and Solaris. " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> While you're at it, you should use AC_PROG_AWK to potentially find the\n> most modern and fastest awk on the system. Also, it seems that script has\n> really little to no checking of exit statuses.\n\nTrue. Wanna fix it? I'm not planning to touch it again soon...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Jun 2000 22:50:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with recent CVS versions and Solaris. " } ]
[ { "msg_contents": "Also, Tom, should we preload the disbursion buckets with most common\nvalue from the previous run on the assumption we will get a better\nvalue?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 1 Jun 2000 19:45:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "disbursion again" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Also, Tom, should we preload the disbursion buckets with most common\n> value from the previous run on the assumption we will get a better\n> value?\n\nHmm, I don't see why that'd make it better. It'd probably bias us\nin favor of continuing to report the same value from run to run,\nwhich might have some benefit. The thing is that if no value in\nthe table is especially common, the existing algorithm will give a\nnearly random selection that might not be the most common value or\neven very close to it.\n\nI'm not really excited about spending effort on marginal tweaks\nof the current method, however. What we really need is more stats\n(and more reliable stats) than we have, and that's going to require\nsome work.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Jun 2000 20:52:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disbursion again " } ]
[ { "msg_contents": "Hi,\n\nI have a PostgreSQL 6.5.3 database server working flawlessly under Linux\nMandrake 7.0\n\nHowever, when I try to use the pg_dump utility, I get this error:\n\n#pg_dump myDatabase > backup.db\n\nNOTICE: get_groname: group 87 not found\ngetTables(): SELECT failed. Explanation from backend: 'pqReadData() --\nbackend \nclosed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\n\n\nThis database has been working perfectly for many weeks, but pg_dump\nrefuses to work.\n\nAny ideas?\n\nBest Regards,\n\nJorge.\n\n_____________________________________________\nFree email with personality! Over 200 domains!\nhttp://www.MyOwnEmail.com\n\n", "msg_date": "Thu, 1 Jun 2000 19:19:33 -0600", "msg_from": "\"Jorge Alvarez\" <[email protected]>", "msg_from_op": true, "msg_subject": "Pg_Dump strange error" } ]
[ { "msg_contents": "I have an application in Java I use to insert records \ninto postgreSQL base. Java shows no errors, but\nrecords can't write into base.In pgsqrever.log \nI found entry :pg_recvbuf : unexpected EOF on client connection.\n\nThanks for any help.\nAdam\n", "msg_date": "Fri, 2 Jun 2000 11:48:36 +0200", "msg_from": "\"Adam Walczykiewicz\" <[email protected]>", "msg_from_op": true, "msg_subject": "pg_recvbuf : unexpected EOF on client connection." } ]
[ { "msg_contents": "> > http://www.math.u-szeged.hu/~kovzol/PGBrowse/pgbrowse\n> Your link is invalid !\nOh, yes! I forgot to copy the script into the right directory...\nIt should be OK now.\n\nThanks!\n\nZoltan\n\n Kov\\'acs, Zolt\\'an\n [email protected]\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz\n\n", "msg_date": "Fri, 2 Jun 2000 12:06:58 +0200 (CEST)", "msg_from": "Kovacs Zoltan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: announcing pgbrowse" } ]
[ { "msg_contents": "Thanks Tom,\n\nThat's fixed it.\n\nIt's a shame when you have to \"dumb-down\" your AWK programming\nto suit the lowest common standard :-(\n\nThanks again,\nKeith.\n\n\nTom Lane <[email protected]>\n> \n> > Ugh. I think that the former version of the script didn't use\n> > conditional expressions (a ? b : c). Perhaps old versions of\n> > awk don't have those?\n> \n> Indeed, the GNU awk manual says so very clearly :-(\n> \n> Keith, I've committed a new version of Gen_fmgrtab.sh.in;\n> would you check that it works on your copy of awk?\n> \n> \t\t\tregards, tom lane\n\n", "msg_date": "Fri, 2 Jun 2000 13:16:33 +0100 (BST)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with recent CVS versions and Solaris. " } ]
[ { "msg_contents": "Hi all,\n\nI've had a table for years to keep radius connections to our NAS.\n\nNot havin inet type, I typed ip_adress as text. Now, I'em trying to change\nthis column type to no avail.\n\nselect inet(ip_addr)... : cannot cast type to inet\n\nHow can I change this column??\n\nTIA\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Fri, 2 Jun 2000 18:27:47 +0200 (MET DST)", "msg_from": "Olivier PRENANT <[email protected]>", "msg_from_op": true, "msg_subject": "Inet type how to?" }, { "msg_contents": "You cannot change column type without dropping/recreating the table.\n\nOn Fri, 2 Jun 2000, Olivier PRENANT wrote:\n\n> Hi all,\n> \n> I've had a table for years to keep radius connections to our NAS.\n> \n> Not havin inet type, I typed ip_adress as text. Now, I'em trying to change\n> this column type to no avail.\n> \n> select inet(ip_addr)... : cannot cast type to inet\n> \n> How can I change this column??\n> \n> TIA\n> \n> \n\n", "msg_date": "Fri, 2 Jun 2000 13:41:54 -0400 (EDT)", "msg_from": "Alex Pilosov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inet type how to?" }, { "msg_contents": "Yes, I know that; but even when I reload it, copy complains it can't\nconvert 194.250.190.185 ( for example) to inet!\n\nRegards\nOn Fri, 2 Jun 2000, Alex Pilosov wrote:\n\n> You cannot change column type without dropping/recreating the table.\n> \n> On Fri, 2 Jun 2000, Olivier PRENANT wrote:\n> \n> > Hi all,\n> > \n> > I've had a table for years to keep radius connections to our NAS.\n> > \n> > Not havin inet type, I typed ip_adress as text. Now, I'em trying to change\n> > this column type to no avail.\n> > \n> > select inet(ip_addr)... 
: cannot cast type to inet\n> > \n> > How can I change this column??\n> > \n> > TIA\n> > \n> > \n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Fri, 2 Jun 2000 19:48:49 +0200 (MET DST)", "msg_from": "Olivier PRENANT <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inet type how to?" }, { "msg_contents": "It works for me for \\copy (which uses inserts)\n\nBut yes, there is no way to convert text to inet, which is a pain. When it\nannoys me sufficiently, I think I'll write text_inet wrapping inet_in...Or\njust wait for someone else to do it (TODO, bruce? ;)\n\nOn Fri, 2 Jun 2000, Olivier PRENANT wrote:\n\n> Yes, I know that; but even when I reload it, copy complains it can't\n> convert 194.250.190.185 ( for example) to inet!\n> \n> Regards\n> On Fri, 2 Jun 2000, Alex Pilosov wrote:\n> \n> > You cannot change column type without dropping/recreating the table.\n> > \n> > On Fri, 2 Jun 2000, Olivier PRENANT wrote:\n> > \n> > > Hi all,\n> > > \n> > > I've had a table for years to keep radius connections to our NAS.\n> > > \n> > > Not havin inet type, I typed ip_adress as text. Now, I'em trying to change\n> > > this column type to no avail.\n> > > \n> > > select inet(ip_addr)... : cannot cast type to inet\n> > > \n> > > How can I change this column??\n> > > \n> > > TIA\n> > > \n> > > \n> > \n> \n> \n\n", "msg_date": "Fri, 2 Jun 2000 16:07:49 -0400 (EDT)", "msg_from": "Alex Pilosov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inet type how to?" }, { "msg_contents": "> It works for me for \\copy (which uses inserts)\n> \n> But yes, there is no way to convert text to inet, which is a pain. 
When it\n> annoys me sufficiently, I think I'll write text_inet wrapping inet_in...Or\n> just wait for someone else to do it (TODO, bruce? ;)\n> \n\nWe don't? What about some double-casting option?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 2 Jun 2000 16:32:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inet type how to?" }, { "msg_contents": "None works. I tried. There is no type that can be cast to inet. And\ninet_in has a different calling sequence, and you can't really use that.\n\n-alex\n\nOn Fri, 2 Jun 2000, Bruce Momjian wrote:\n\n> > It works for me for \\copy (which uses inserts)\n> > \n> > But yes, there is no way to convert text to inet, which is a pain. When it\n> > annoys me sufficiently, I think I'll write text_inet wrapping inet_in...Or\n> > just wait for someone else to do it (TODO, bruce? ;)\n> > \n> \n> We don't? What about some double-casting option?\n> \n> \n\n", "msg_date": "Fri, 2 Jun 2000 16:36:43 -0400 (EDT)", "msg_from": "Alex Pilosov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inet type how to?" }, { "msg_contents": "> None works. I tried. There is no type that can be cast to inet. And\n> inet_in has a different calling sequence, and you can't really use that.\n\nAdded to TODO.\n\n\n> \n> -alex\n> \n> On Fri, 2 Jun 2000, Bruce Momjian wrote:\n> \n> > > It works for me for \\copy (which uses inserts)\n> > > \n> > > But yes, there is no way to convert text to inet, which is a pain. When it\n> > > annoys me sufficiently, I think I'll write text_inet wrapping inet_in...Or\n> > > just wait for someone else to do it (TODO, bruce? ;)\n> > > \n> > \n> > We don't? 
What about some double-casting option?\n> > \n> > \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 2 Jun 2000 16:38:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inet type how to?" }, { "msg_contents": "You're right... No way; However despite what I said befor it works.. until\ntext is ''; maybe this could be casted as NULL??\n\nRegards,\nOn Fri, 2 Jun 2000, Alex Pilosov wrote:\n\n> None works. I tried. There is no type that can be cast to inet. And\n> inet_in has a different calling sequence, and you can't really use that.\n> \n> -alex\n> \n> On Fri, 2 Jun 2000, Bruce Momjian wrote:\n> \n> > > It works for me for \\copy (which uses inserts)\n> > > \n> > > But yes, there is no way to convert text to inet, which is a pain. When it\n> > > annoys me sufficiently, I think I'll write text_inet wrapping inet_in...Or\n> > > just wait for someone else to do it (TODO, bruce? ;)\n> > > \n> > \n> > We don't? What about some double-casting option?\n> > \n> > \n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Fri, 2 Jun 2000 22:40:02 +0200 (MET DST)", "msg_from": "Olivier PRENANT <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inet type how to?" }, { "msg_contents": ">> None works. I tried. There is no type that can be cast to inet. 
And\n>> inet_in has a different calling sequence, and you can't really use that.\n\n> Added to TODO.\n\nIf we had \"C string\" (or something like that) as a genuine type in the\ntype system, then it would be possible/reasonable for the parser to\nunderstand\n\ttextvariable::cstring::inet\nas a request to invoke text_out followed by inet_in.\n\nI think it'd be a real bad idea to invoke such conversions silently,\nsince then we'd essentially have no type system at all (you could get\nfrom anything to anything else via cstring, so how can the thing check\nfor errors?). But it'd be awful darn handy to be able to invoke the\ntype i/o routines explicitly...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Jun 2000 18:24:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inet type how to? " }, { "msg_contents": "On Fri, 2 Jun 2000, Tom Lane wrote:\n\n> >> None works. I tried. There is no type that can be cast to inet. And\n> >> inet_in has a different calling sequence, and you can't really use that.\n> \n> > Added to TODO.\n> \n> If we had \"C string\" (or something like that) as a genuine type in the\n> type system, then it would be possible/reasonable for the parser to\n> understand\n> \ttextvariable::cstring::inet\n> as a request to invoke text_out followed by inet_in.\n> \n> I think it'd be a real bad idea to invoke such conversions silently,\n> since then we'd essentially have no type system at all (you could get\n> from anything to anything else via cstring, so how can the thing check\n> for errors?). But it'd be awful darn handy to be able to invoke the\n> type i/o routines explicitly...\n\nI think its a great idea. Is it just a matter of altering the catalog? Or\napparently we'd need postgres engine to know that cstring is a special\ntype and instead of looking for conversion routines it should use xxx_in\nand yyy_out\n\nI like it in any case. 
Last ditch do-not-use-unless-you-sure-its-what you\nwant thingy...\n\n-alex\n\n", "msg_date": "Fri, 2 Jun 2000 18:31:37 -0400 (EDT)", "msg_from": "Alex Pilosov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inet type how to? " } ]
[ { "msg_contents": "\n06-01-2000 - PostgreSQL v7.0.1 Released \n\nThe PostgreSQL Global Development Group is proud to announce the release\nof PostgreSQL v7.0.1. This is essentially a cleanup of v7.0. A dump/restore\nis not required if you're moving from v7.0. If you're migrating from a\nrelease earlier than v7.0 a dump/restore will be necessary.\n\nThe release is available on ftp.postgresql.org, at:\n\n\tftp.postgresql.org/pub/source/v7.0.1\n\nAs well as our various mirror affiliates ...\n\n \nChanges \n\n Fix many CLUSTER failures (Tom) \n Allow ALTER TABLE RENAME works on indexes (Tom) \n Fix plpgsql to handle datetime->timestamp and \n timespan->interval (Bruce) \n New configure --with-setproctitle switch to use setproctitle() (Marc, Bruce) \n Fix the off by one errors in ResultSet from 6.5.3, and more. \n jdbc ResultSet fixes (Joseph Shraibman) \n optimizer tunings (Tom) \n Fix create user for pgaccess \n Fix for UNLISTEN failure \n IRIX fixes (David Kaelbling) \n QNX fixes (Andreas Kardos) \n Reduce COPY IN lock level (Tom) \n Change libpqeasy to use PQconnectdb() style parameters (Bruce) \n Fix pg_dump to handle OID indexes (Tom) \n Fix small memory leak (Tom) \n Solaris fix for createdb/dropdb \n Fix for non-blocking connections (Alfred Perlstein) \n Fix improper recovery after RENAME TABLE failures (Tom) \n Copy pg_ident.conf.sample into /lib directory in install (Bruce) \n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 2 Jun 2000 13:49:19 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL v7.0.1 Released" }, { "msg_contents": "> 06-01-2000 - PostgreSQL v7.0.1 Released \n> \n> The PostgreSQL Global Development Group is proud to announce the release\n> of PostgreSQL v7.0.1. This is essentially a cleanup of v7.0. 
A dump/restore\n> is not required if you're moving from v7.0. If you're migrating from a\n> release earlier than v7.0 a dump/restore will be necessary.\n> \n> The release is available on ftp.postgresql.org, at:\n> \n> \tftp.postgresql.org/pub/source/v7.0.1\n> \n> As well as our various mirror affiliates ...\n\nI am a little bit confused about this release. It seems pre-formatted\nhtml docs and man files are not included in\nftp.postgresql.org/pub/source/v7.0.1/postgresql.7.0.1.tar.gz \n\ncd /usr/local/src/pgsql/7.0.1/postgresql-7.0.1/doc\nmake install\nmake all\nmake[1]: Entering directory `/usr/local/src/pgsql/7.0.1/postgresql-7.0.1/doc'\nmake[1]: *** No rule to make target `admin', needed by `all'. Stop.\nmake[1]: Leaving directory `/usr/local/src/pgsql/7.0.1/postgresql-7.0.1/doc'\nmake: *** [install] Error 2\n\nSo I grabed postgresql.7.0.1.docs.tar.gz and looked into the contents,\nbut I couldn't find pre-formatted docs. Am I missing something?\n\nAlso, bellow is different from the release note included in the tar ball?\n\n> Changes \n> \n> Fix many CLUSTER failures (Tom) \n> Allow ALTER TABLE RENAME works on indexes (Tom) \n> Fix plpgsql to handle datetime->timestamp and \n> timespan->interval (Bruce) \n> New configure --with-setproctitle switch to use setproctitle() (Marc, Bruce) \n> Fix the off by one errors in ResultSet from 6.5.3, and more. \n> jdbc ResultSet fixes (Joseph Shraibman) \n> optimizer tunings (Tom) \n> Fix create user for pgaccess \n> Fix for UNLISTEN failure \n> IRIX fixes (David Kaelbling) \n> QNX fixes (Andreas Kardos) \n> Reduce COPY IN lock level (Tom) \n> Change libpqeasy to use PQconnectdb() style parameters (Bruce) \n> Fix pg_dump to handle OID indexes (Tom) \n> Fix small memory leak (Tom) \n> Solaris fix for createdb/dropdb \n> Fix for non-blocking connections (Alfred Perlstein) \n> Fix improper recovery after RENAME TABLE failures (Tom) \n> Copy pg_ident.conf.sample into /lib directory in install (Bruce) \n> \n> \n> Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\nBTW, why\nthe tar ball is named postgresql.7.0.1.tar.gz, rather than\npostgresql-7.0.1.tar.gz? \n\nThis may confuse users becasue following descriptions are in INSTALL:\n\n> gunzip postgresql-7.0.1.tar.gz\n> tar -xf postgresql-7.0.1.tar\n", "msg_date": "Sat, 03 Jun 2000 17:38:27 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL v7.0.1 Released" }, { "msg_contents": "\nd'oh ... my build script got toasted awhile back, and I had to re-write it\nall ... failing to include the formatted docs, it appears :( my fault ...\n\n\n\nOn Sat, 3 Jun 2000, Tatsuo Ishii wrote:\n\n> > 06-01-2000 - PostgreSQL v7.0.1 Released \n> > \n> > The PostgreSQL Global Development Group is proud to announce the release\n> > of PostgreSQL v7.0.1. This is essentially acleanup of v7.0. A dump/restore\n> > is not required if you're moving from v7.0. If you're migrating from a\n> > release earlier than v7.0 a dump/restore will be necessary.\n> > \n> > The release is available on ftp.postgresql.org, at:\n> > \n> > \tftp.postgresql.org/pub/source/v7.0.1\n> > \n> > As well as our various mirror affiliates ...\n> \n> I am a little bit confused about this release. It seems pre-formatted\n> html docs and man files are not included in\n> ftp.postgresql.org/pub/source/v7.0.1/postgresql.7.0.1.tar.gz \n> \n> cd /usr/local/src/pgsql/7.0.1/postgresql-7.0.1/doc\n> make install\n> make all\n> make[1]: Entering directory `/usr/local/src/pgsql/7.0.1/postgresql-7.0.1/doc'\n> make[1]: *** No rule to make target `admin', needed by `all'. Stop.\n> make[1]: Leaving directory `/usr/local/src/pgsql/7.0.1/postgresql-7.0.1/doc'\n> make: *** [install] Error 2\n> \n> So I grabed postgresql.7.0.1.docs.tar.gz and looked into the contents,\n> but I couldn't find pre-formatted docs. 
Am I missing something?\n> \n> Also, bellow is different from the release note included in the tar ball?\n> \n> > Changes \n> > \n> > Fix many CLUSTER failures (Tom) \n> > Allow ALTER TABLE RENAME works on indexes (Tom) \n> > Fix plpgsql to handle datetime->timestamp and \n> > timespan->interval (Bruce) \n> > New configure --with-setproctitle switch to use setproctitle() (Marc, Bruce) \n> > Fix the off by one errors in ResultSet from 6.5.3, and more. \n> > jdbc ResultSet fixes (Joseph Shraibman) \n> > optimizer tunings (Tom) \n> > Fix create user for pgaccess \n> > Fix for UNLISTEN failure \n> > IRIX fixes (David Kaelbling) \n> > QNX fixes (Andreas Kardos) \n> > Reduce COPY IN lock level (Tom) \n> > Change libpqeasy to use PQconnectdb() style parameters (Bruce) \n> > Fix pg_dump to handle OID indexes (Tom) \n> > Fix small memory leak (Tom) \n> > Solaris fix for createdb/dropdb \n> > Fix for non-blocking connections (Alfred Perlstein) \n> > Fix improper recovery after RENAME TABLE failures (Tom) \n> > Copy pg_ident.conf.sample into /lib directory in install (Bruce) \n> > \n> > \n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org \n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> BTW, why\n> the tar ball is named postgresql.7.0.1.tar.gz, rather than\n> postgresql-7.0.1.tar.gz? \n> \n> This may confuse users becasue following descriptions are in INSTALL:\n> \n> > gunzip postgresql-7.0.1.tar.gz\n> > tar -xf postgresql-7.0.1.tar\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 3 Jun 2000 17:43:36 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] PostgreSQL v7.0.1 Released" } ]
[ { "msg_contents": "Hi,\n\nCan you tell me trim() spec, please ? (This problem has been \ndiscussed in pgsql-jp ML. )\n\nIn trim(trailing 'abc' from '123cbabc') function, 'abc' means\n~'[abc]'. \n\npgbash> select trim(trailing 'abc' from '123cbabc');\nrtrim\n-----\n 123 <==== it is not \"123cb\"!!\n(1 row)\n\n\nIn current trim() function, MULTIBYTE string is broken.\n\npgbash> select trim(trailing '0x8842' from '0xB1428842');\n --~~ ~~--~~\nrtrim\n-----\n 0xB1 <==== MULTIBYTE string broken (This is a bug.)\n(1 row)\n\n\nIf trim(trailing 'abc' from '123cbabc') returns \"123cb\", current \ntrim() spec is broken. However, the spec that 'abc' means ~'[abc]' \nis ugly. It seems that this ugly spec isn't used for any kind of\nfunctions argument and SQL expression except for trim().\n\nHow do you think about the trim() spec ?\n\n--\nRegards,\nSAKAIDA Masaaki -- Osaka, Japan\n\n\n", "msg_date": "Sat, 03 Jun 2000 12:28:56 +0900", "msg_from": "SAKAIDA Masaaki <[email protected]>", "msg_from_op": true, "msg_subject": "trim() spec" }, { "msg_contents": "> Can you tell me trim() spec, please ? (This problem has been\n> discussed in pgsql-jp ML. )\n> In trim(trailing 'abc' from '123cbabc') function, 'abc' means\n> ~'[abc]'.\n> If trim(trailing 'abc' from '123cbabc') returns \"123cb\", current\n> trim() spec is broken. However, the spec that 'abc' means ~'[abc]'\n> is ugly. It seems that this ugly spec isn't used for any kind of\n> functions argument and SQL expression except for trim().\n> How do you think about the trim() spec ?\n\nafaict, the SQL92 spec for trim() requires a single character as the\nfirst argument; allowing a character string is a Postgres extension. 
On\nthe surface, istm that this extension is in the spirit of the SQL92\nspec, in that it allows trimming several possible characters.\n\nI'm not sure if SQL3/SQL99 has anything extra to say on this.\n\nposition() and substring() seem to be able to do what you want;\n\n select substring('123ab' for position('ab' in '123ab')-1);\n\ngives '123', while\n\n select substring('123ab' for position('d' in '123ab')-1);\n\ngives '123ab', which seems to be the behavior you might be suggesting\nfor trim().\n\n - Tom\n", "msg_date": "Tue, 13 Jun 2000 01:35:57 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: trim() spec" }, { "msg_contents": "Can someone comment on this?\n\n\n> Hi,\n> \n> Can you tell me trim() spec, please ? (This problem has been \n> discussed in pgsql-jp ML. )\n> \n> In trim(trailing 'abc' from '123cbabc') function, 'abc' means\n> ~'[abc]'. \n> \n> pgbash> select trim(trailing 'abc' from '123cbabc');\n> rtrim\n> -----\n> 123 <==== it is not \"123cb\"!!\n> (1 row)\n> \n> \n> In current trim() function, MULTIBYTE string is broken.\n> \n> pgbash> select trim(trailing '0x8842' from '0xB1428842');\n> --~~ ~~--~~\n> rtrim\n> -----\n> 0xB1 <==== MULTIBYTE string broken (This is a bug.)\n> (1 row)\n> \n> \n> If trim(trailing 'abc' from '123cbabc') returns \"123cb\", current \n> trim() spec is broken. However, the spec that 'abc' means ~'[abc]' \n> is ugly. It seems that this ugly spec isn't used for any kind of\n> functions argument and SQL expression except for trim().\n> \n> How do you think about the trim() spec ?\n> \n> --\n> Regards,\n> SAKAIDA Masaaki -- Osaka, Japan\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jun 2000 04:04:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: trim() spec" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> If trim(trailing 'abc' from '123cbabc') returns \"123cb\", current\n>> trim() spec is broken. However, the spec that 'abc' means ~'[abc]'\n>> is ugly. It seems that this ugly spec isn't used for any kind of\n>> functions argument and SQL expression except for trim().\n\n> afaict, the SQL92 spec for trim() requires a single character as the\n> first argument; allowing a character string is a Postgres extension. On\n> the surface, istm that this extension is in the spirit of the SQL92\n> spec, in that it allows trimming several possible characters.\n\nMySQL's crashme list has some useful information about this: they\nindicate whether an implementation considers a multi-char TRIM argument\nto be a set (our way) or a substring (MySQL does it that way, for one).\nSo there's precedent for both sides.\n\nGiven that our trim() code claims to exist for Oracle compatibility,\nI'd have assumed that its handling of multi-char arguments followed\nOracle. But the crashme list doesn't show Oracle as supporting either\nsemantics. Can someone with access to Oracle check this?\n\n> I'm not sure if SQL3/SQL99 has anything extra to say on this.\n\nThe 1994 draft specifies just a single trim character, same as SQL92.\nHaven't gotten around to grabbing the 99 draft yet...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jun 2000 10:45:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: trim() spec " }, { "msg_contents": "On Tue, Jun 13, 2000 at 10:45:07AM -0400, Tom Lane wrote:\n> Thomas Lockhart <[email protected]> writes:\n> >> If trim(trailing 'abc' from '123cbabc') returns \"123cb\", current\n> >> trim() spec is broken. 
However, the spec that 'abc' means ~'[abc]'\n> >> is ugly. It seems that this ugly spec isn't used for any kind of\n> >> functions argument and SQL expression except for trim().\n> \n> > afaict, the SQL92 spec for trim() requires a single character as the\n> > first argument; allowing a character string is a Postgres extension. On\n> > the surface, istm that this extension is in the spirit of the SQL92\n> > spec, in that it allows trimming several possible characters.\n> \n> MySQL's crashme list has some useful information about this: they\n> indicate whether an implementation considers a multi-char TRIM argument\n> to be a set (our way) or a substring (MySQL does it that way, for one).\n> So there's precedent for both sides.\n> \n> Given that our trim() code claims to exist for Oracle compatibility,\n> I'd have assumed that its handling of multi-char arguments followed\n> Oracle. But the crashme list doesn't show Oracle as supporting either\n> semantics. Can someone with access to Oracle check this?\n\nOracle 8i gives you an error if you give a multi-character argument\nto TRIM. So anything that worked with Oracle would work the same with\nus.\n\nRichard\n", "msg_date": "Tue, 13 Jun 2000 16:25:50 +0100", "msg_from": "Richard Poole <[email protected]>", "msg_from_op": false, "msg_subject": "Re: trim() spec" } ]
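[Editorial note: the two competing trim() semantics discussed in this thread can be made concrete with a small sketch. This is plain Python for brevity, not PostgreSQL source; the function names are illustrative only.]

```python
def rtrim_set(s, chars):
    # Postgres-style: 'abc' names a character set; strip trailing a's, b's, c's
    i = len(s)
    while i > 0 and s[i - 1] in chars:
        i -= 1
    return s[:i]

def rtrim_substring(s, suffix):
    # MySQL-style: 'abc' is a unit; strip trailing copies of the whole string
    while suffix and s.endswith(suffix):
        s = s[:-len(suffix)]
    return s

print(rtrim_set('123cbabc', 'abc'))        # -> 123   (set semantics)
print(rtrim_substring('123cbabc', 'abc'))  # -> 123cb (substring semantics)
```

Note that Python strings are character-based, so this sketch sidesteps the MULTIBYTE breakage reported above; that bug arises when the set-membership test is applied byte-by-byte to a multibyte encoding.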
[ { "msg_contents": "I have committed new warning code to alert users who auto-create\nrelations without knowing it.\n\nThe new code does not throw a warning for:\n\n\tSELECT pg_language.*;\n\nbut does throw one for:\n\n\tSELECT pg_language.* FROM pg_class\n\nThe code issues the warning if it auto-creates a range table entry, and\nthere is already a range table entry identified as coming from a FROM\nclause. Correlated subqueries should not be a problem because they are\nnot auto-created.\n\nThe regression tests run fine, except for:\n\t\n\tSELECT *\n\t INTO TABLE tmp1\n\t FROM tmp\n\t WHERE onek.unique1 < 2;\n\tNOTICE: Adding missing FROM-clause entry for table onek\n\tDROP TABLE tmp1;\n\tSELECT *\n\t INTO TABLE tmp1\n\t FROM tmp\n\t WHERE onek2.unique1 < 2;\n\tNOTICE: Adding missing FROM-clause entry for table onek2\n\nSeems those warnings are justified. In fact, I am not even sure what\nthese queries are trying to do. I have modified the expected files so\nthey now expect to see the warnings.\n\nA bigger question is whether we should issue ERROR for these queries. \nIf they have already used a FROM clause, why would they have other\nrelations not specified there?\n\nIf people have other suggestions about this, I would be glad to modify\nthe code. A new function warnAutoRange() does the checking.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 3 Jun 2000 00:40:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "New warning code for missing FROM relations" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have committed new warning code to alert users who auto-create\n> relations without knowing it.\n> The code issues the warning if it auto-creates a range table entry, and\n> there is already a range table entry identified as coming from a FROM\n> clause. Correlated subqueries should not be a problem because they are\n> not auto-created.\n\nI still prefer the suggestion I made before: complain only if the\nimplicit FROM entry is for a table already present in the rangelist\n(under a different alias, obviously). The fact that that choice\nwould not break any existing regression tests seems relevant...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 03 Jun 2000 03:34:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New warning code for missing FROM relations " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I have committed new warning code to alert users who auto-create\n> > relations without knowing it.\n> > The code issues the warning if it auto-creates a range table entry, and\n> > there is already a range table entry identified as coming from a FROM\n> > clause. Correlated subqueries should not be a problem because they are\n> > not auto-created.\n> \n> I still prefer the suggestion I made before: complain only if the\n> implicit FROM entry is for a table already present in the rangelist\n> (under a different alias, obviously). The fact that that choice\n> would not break any existing regression tests seems relevant...\n\nBut it seems mine is going to complain if they forget one in a FROM\nclause, which sort of makes sense to me. I can do your suggestion, but\nthis makes more sense. 
Can we get some other votes?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 3 Jun 2000 10:00:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New warning code for missing FROM relations" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I still prefer the suggestion I made before: complain only if the\n>> implicit FROM entry is for a table already present in the rangelist\n>> (under a different alias, obviously). The fact that that choice\n>> would not break any existing regression tests seems relevant...\n\n> But it seems mine is going to complain if they forget one in a FROM\n> clause, which sort of makes sense to me.\n\nSeems like the real question is what is the goal of having the warning.\nAre we (a) trying to nag people into writing their queries in an\nSQL-compliant way, or are we (b) trying to warn about probable mistakes\nwhile still considering implicit FROM entries as a fully supported\nPostgres feature?\n\nIf the goal is (a) then your way is better, but I like mine if the goal\nis (b). Seems like some discussion is needed here about just what we\nwant to accomplish.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 03 Jun 2000 12:44:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New warning code for missing FROM relations " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> I still prefer the suggestion I made before: complain only if the\n> >> implicit FROM entry is for a table already present in the rangelist\n> >> (under a different alias, obviously). 
The fact that that choice\n> >> would not break any existing regression tests seems relevant...\n> \n> > But it seems mine is going to complain if they forget one in a FROM\n> > clause, which sort of makes sense to me.\n> \n> Seems like the real question is what is the goal of having the warning.\n> Are we (a) trying to nag people into writing their queries in an\n> SQL-compliant way, or are we (b) trying to warn about probable mistakes\n> while still considering implicit FROM entries as a fully supported\n> Postgres feature?\n> \n> If the goal is (a) then your way is better, but I like mine if the goal\n> is (b). Seems like some discussion is needed here about just what we\n> want to accomplish.\n\nI agree the goal is (b). However, I can not imagine a query with a FROM\nclause that would ever want to use auto-creation of range entries.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 3 Jun 2000 13:53:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New warning code for missing FROM relations" }, { "msg_contents": "At 00:40 3/06/00 -0400, Bruce Momjian wrote:\n>\n>The regression tests run fine, except for:\n>\t\n>\tSELECT *\n>\t INTO TABLE tmp1\n>\t FROM tmp\n>\t WHERE onek.unique1 < 2;\n>\tNOTICE: Adding missing FROM-clause entry for table onek\n\nPersonally I would prefer this to generate an error, eg:\n\n Table 'onek' referenced in the WHERE clause is not in the FROM clause.\n\nIs it worth adding yet another setting, eg. set sql92=strict, which would\ndisallow such flagrant breaches of the standard? Maybe it could even be set\nas the default in template1? 
I understand that breaking legacy code is a\nbad idea, so the warning is a good step, but I would prefer an error if I\never write such a statement.\n\nOther DBs I've worked with issue warnings for several versions before\nchanging default behaviour, so perhaps in version 8.0, the above code could\nproduce an error by default (unless 'set sql92=relaxed' was specified).\n\nP.S. Given that 'set sql92=xxx' may imply that we are enforcing compliance,\nwhich is unlikely, maybe it should be 'set implied-tables' or something\nmore specific and meaningful. I have no idea how the 'set' command works,\nbut to avoid a whole lot of new variables, maybe 'set\nsql-compliance-options=no-implied-tables, no-something-else,\nallow-another-thing' would allow for expansion...\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 04 Jun 2000 12:35:30 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New warning code for missing FROM relations" }, { "msg_contents": "> At 00:40 3/06/00 -0400, Bruce Momjian wrote:\n> >\n> >The regression tests run fine, except for:\n> >\t\n> >\tSELECT *\n> >\t INTO TABLE tmp1\n> >\t FROM tmp\n> >\t WHERE onek.unique1 < 2;\n> >\tNOTICE: Adding missing FROM-clause entry for table onek\n> \n> Personally I would prefer this to generate an error, eg:\n> \n> Table 'onek' referenced in the WHERE clause is not in the FROM clause.\n\nYes, that was one of my stated options, return an error.\n\n> Is it worth adding yet another setting, eg. set sql92=strict, which would\n> disallow such flagrant breaches of the standard? 
Maybe it could even be set\n> as the default in template1? I understand that breaking legacy code is a\n> bad idea, so the warning is a good step, but I would prefer an error if I\n> ever write such a statement.\n\nI am concerned about overloading the SET command. Seems we should just\nagree on a behavior.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 3 Jun 2000 22:58:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New warning code for missing FROM relations" }, { "msg_contents": ">\n>> Is is worth adding yet another setting, eg. set sql92=strict, which would\n>> disallow such flagrant breaches of the standard? Maybe it could even be set\n>> as the default in template1? I understand that breaking legacy code is a\n>> bad idea, so the warning is a good step, but I would prefer an error if I\n>> ever write such a statement.\n>\n>I am concerned about overloading the SET command. Seems we should just\n>agree on a behavior.\n>\n\nThen make it an option at build time.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 04 Jun 2000 13:23:59 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New warning code for missing FROM relations" }, { "msg_contents": "> > Bruce Momjian <[email protected]> writes:\n> > > I have committed new warning code to alert users who auto-create\n> > > relations without knowing it.\n> > > The code issues the warning if it auto-creates a range table entry, and\n> > > there is already a range table entry identified as coming from a FROM\n> > > clause. Correlated subqueries should not be a problem because they are\n> > > not auto-created.\n> > \n> > I still prefer the suggestion I made before: complain only if the\n> > implicit FROM entry is for a table already present in the rangelist\n> > (under a different alias, obviously). The fact that that choice\n> > would not break any existing regression tests seems relevant...\n> \n> But it seems mine is going to complain if they forget one in a FROM\n> clause, which sort of makes sense to me. I can do your suggestion, but\n> this makes more sense. Can we get some other votes?\n\nI like it the way you did it. Personally I would even throw an error,\nbut that would probably be too strict. \n\nI would change the regressiontest to add onek to the from clause, \nand not make it throw the warning. 
\nImho this example is only good to demonstrate how you can \nmisuse a feature.\n\nThere are good examples for using it, but all of those that I can think of \ndon't have a from clause.\n\nAndreas\n\n", "msg_date": "Sun, 4 Jun 2000 14:57:50 +0200", "msg_from": "\"Zeugswetter Andreas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New warning code for missing FROM relations" }, { "msg_contents": "> > Bruce Momjian <[email protected]> writes:\n> > >> I still prefer the suggestion I made before: complain only if the\n> > >> implicit FROM entry is for a table already present in the rangelist\n> > >> (under a different alias, obviously). The fact that that choice\n> > >> would not break any existing regression tests seems relevant...\n> > \n> > > But it seems mine is going to complain if they forget one in a FROM\n> > > clause, which sort of makes sense to me.\n> > \n> > Seems like the real question is what is the goal of having the warning.\n> > Are we (a) trying to nag people into writing their queries in an\n> > SQL-compliant way, or are we (b) trying to warn about probable mistakes\n> > while still considering implicit FROM entries as a fully supported\n> > Postgres feature?\n> > \n> > If the goal is (a) then your way is better, but I like mine if the goal\n> > is (b). Seems like some discussion is needed here about just what we\n> > want to accomplish.\n> \n> I agree the goal is (b). 
However, I can not imagine a query with a FROM\n> clause that would ever want to use auto-creation of range entries.\n\nhow about:\n\ndelete from taba where a=tabb.a; \n\nI think the implicit auto-creation should only be disallowed/warned in \nselect statements that have a from clause, not update and delete.\n\nAndreas\n\n", "msg_date": "Sun, 4 Jun 2000 15:06:14 +0200", "msg_from": "\"Zeugswetter Andreas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New warning code for missing FROM relations" }, { "msg_contents": "> > > If the goal is (a) then your way is better, but I like mine if the goal\n> > > is (b). Seems like some discussion is needed here about just what we\n> > > want to accomplish.\n> > \n> > I agree the goal is (b). However, I can not imagine a query with a FROM\n> > clause that would ever want to use auto-creation of range entries.\n> \n> how about:\n> \n> delete from taba where a=tabb.a; \n> \n> I think the implicit auto-creation should only be disallowed/warned in \n> select statements that have a from clause, not update and delete.\n\nI meant a SELECT FROM clause. A DELETE FROM is not the same, and does\nnot mark the entry as inFromCl. I should have been more specific.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 4 Jun 2000 13:36:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New warning code for missing FROM relations" }, { "msg_contents": "> I like it the way you did it. Personally I would even throw an error,\n> but that would probably be too strict. \n\nYes, I figured the same.\n\n> \n> I would change the regressiontest to add onek to the from clause, \n> and not make it throw the warning. 
\n> Imho this example is only good to demonstrate how you can \n> misuse a feature.\n\nOK.\n\n> \n> There are good examples for using it, but all of those that I can think of \n> don't have a from clause.\n\nYes, that was my logic.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 4 Jun 2000 13:48:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New warning code for missing FROM relations" }, { "msg_contents": "Philip Warner writes:\n\n> >\tSELECT *\n> >\t INTO TABLE tmp1\n> >\t FROM tmp\n> >\t WHERE onek.unique1 < 2;\n> >\tNOTICE: Adding missing FROM-clause entry for table onek\n\n> Is it worth adding yet another setting, eg. set sql92=strict, which\n> would disallow such flagrant breaches of the standard?\n\nSQL provides for a facility called the SQL Flagger, which is supposed to do\nexactly that. This might sound like an interesting idea but in order for\nit to be useful you'd have to maintain it across the board, which sounds\nlike a major headache.\n\nThe irony in the given example is that the SELECT INTO command isn't in\nthe standard in the first place so you'd have to create all sorts of\ndouble standards. Certain things would be \"extensions\", certain things\nwould be \"misuse\". And for all it's worth, we have no idea which is which.\n\nIf you want to throw warnings about \"probable\" coding errors and the\nlike one *must* be able to switch them off. Either something is right,\nthen you shut up. Or it's wrong, then you throw an error. 
Or you're not\nsure, then you better leave it up to the user.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 5 Jun 2000 02:21:14 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New warning code for missing FROM relations" } ]
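[Editorial note: the two warning policies debated in this thread differ only in their trigger condition. A hypothetical sketch follows; this is not the actual warnAutoRange() code, and the policy names and range-table representation are made up for illustration.]

```python
def should_warn(implicit_table, range_table, policy):
    """range_table entries: (table_name, alias, came_from_FROM_clause)."""
    if policy == 'warn-if-FROM-present':       # behavior as committed (Bruce)
        # warn whenever an implicit entry is added and an explicit FROM exists
        return any(in_from for _, _, in_from in range_table)
    if policy == 'warn-if-duplicate-table':    # Tom's alternative
        # warn only if the same table is already listed under another alias
        return any(name == implicit_table for name, _, _ in range_table)
    raise ValueError(policy)

# The regression-test case: "... FROM tmp WHERE onek.unique1 < 2"
rt = [('tmp', 'tmp', True)]
print(should_warn('onek', rt, 'warn-if-FROM-present'))     # True
print(should_warn('onek', rt, 'warn-if-duplicate-table'))  # False
```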
[ { "msg_contents": "------- Blind-Carbon-Copy\n\nTo: [email protected]\nSubject: Re: Industrial-Strength Logging \nIn-reply-to: <[email protected]> \nDate: Sat, 03 Jun 2000 22:59:34 +1000\nMessage-ID: <[email protected]>\nFrom: Giles Lean <[email protected]>\n\n\nOn Sat, 3 Jun 2000 01:48:33 +0200 (CEST) Peter Eisentraut wrote:\n\n> Yeah, let's have another logging discussion... :)\n\nMmm, seems popular. There was a mention on -ports and -general a\ncouple of weeks ago, and here we are (were) on -patches. I'm moving\nthis discussion to -hackers (hope that's a good choice) since that is\nwhere Tim Holloway's proposals were discussed late last year.\n\nA start point I found in the archives for Tim's proposal is:\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/1999-10/msg00747.html\n\nI'm not proposing anything that advanced. In particular, I'm not\ndiscussing the -content- of log messages at all. For now it would be\nnice to see the logging mechanism improved; changing or improving the\ncontent can be another project.\n\nI don't discuss the current logging implementation except to note that\nthe backend postgres processes' logging depends on whether the process\nis running under postmaster or not, has a controlling terminal or not,\nwhether a -o option was provided, and whether postgres was compiled to\nuse syslog. Maybe that functionality can be simplified a bit ... 
;-)\n\nOne more thing I don't discuss is how the debug log level is set.\nCertainly something more sophisticated and dynamically variable than\nthe current command line method would be nice, but that too can be a\ndiscussion for another day; it isn't much related to -how- the error\nmessages are tucked away.\n\nTypical logging methods\n=======================\n\n(a)(i) write to standard error with redirection to a file\n\n Pro:\n - what the code (mostly) does currently\n - very easy to set up, just redirect standard error at startup\n - efficient and low overhead\n\n Con:\n - can't rotate log files\n - problematic when there is an I/O error or the filesystem the log\n file is on fills up\n\n(a)(ii) write to standard error, with standard error piped to another\n process\n\n Pro:\n - administrator chooses between (i) and (ii) and can change this\n via shutdown and restart, no recompilation needed\n - no code changes to backend programs\n - clean separation of functionality\n - choice of backend logging programs\n o Bernstein's logtools\n o Apache's rotatelogs\n o swatch\n o logsurfer\n o ...\n\n Con:\n - backend can block if the logging process is not reading log\n messages fast enough (can the backends generate enough data for\n this to be a problem in practice?)\n - reliability of message logging is dependent on the log\n process\n - log messages can be lost if the log process aborts, or is not\n started (solution: make postmaster responsible for starting and\n restarting the log process)\n\n(b) write to named log file(s)\n\n One way to allow rotation of log files is for the backend\n processes to know what log files they write to, and to have them\n open them directly without shell redirection. There is some\n support for this with the postgres -o option, but no support\n for rotating these files that I have seen so far.\n\n In the simplest case, the backend processes open the log file\n when they start and close it when they exit. 
This allows rotation\n of the log file by moving it and waiting for all the currently\n running backend processes to finish.\n\n Pro:\n - relatively simple code change\n - still efficient and low overhead\n\n Con:\n - backend processes can run for some time, and postmaster runs\n indefinitely, so at least postmaster needs to know about log\n file rotation\n - doesn't help much for I/O errors or full filesystem\n\n To address these limitations some applications open their log file\n for each message and then close it afterward:\n\n Pro:\n - nothing holds the log file open for long\n - still efficient and low overhead for the actual writing the log\n file\n\n Con:\n - all error logging has to be via a log routine. This would be\n elog(), but there is some use of fprintf(stderr, ...) around the\n place that would want to be changed\n\n - there will be some efficiency hit for the open() and close()\n calls. This won't be -too- bad since the operating system's\n inode cache (or local equivalent) should contain an entry for\n the log file, but it is still two more system calls.\n\n Another way to handle rotation with long running processes is to\n signal them to re-open their log file, like syslogd is managed:\n\n Pro:\n - it's a solution\n\n Con:\n - more code in the backend processes\n - more communication with the backend processes\n - more complication\n\n(c) log via some logging facility such as syslogd\n\n Pro:\n - people know syslogd\n - syslogd \"looks like\" the right answer\n\n Con:\n - messages are sent to it typically via UDP, so it's easy for them\n to get lost\n - syslogd is not coded robustly, and can hang, stop listening to\n input sources, stop writing to output files, and put wrong\n timestamps on messages\n - using non-UDP transport to syslogd is non-portable (some systems\n allow Unix domain sockets, some named pipes, some neither)\n - syslogd can be hard to secure (e.g. 
to stop it listening on\n particular network interfaces)\n - do you know what your syslog(3) routine does when it gets a\n write(2) error?\n - it is not supported by all (any?) vendors to have syslogd write\n to a pipe, so on-the-fly processing of error messages is not\n possible; they have to be written to the filesystem\n - Unix specific(?)\n\n(d) log into the database\n\n Pro:\n - we've got a nice tool, let's use it\n\n Con:\n - chicken-and-egg problem of logging serious/fatal errors\n - infinite recursion problem of logging messages that cause more errors\n - error messages are predominantly text messages, very susceptible\n to processing with perl, grep and friends. It isn't crystal\n clear that having them inside the database helps a lot\n - using the database is friendly to a DBA, but quite possibly not\n to the system administrator who very possibly knows no SQL but\n who (on a Unix system, at least) has many text log files to look\n after\n - postmaster doesn't access tables according to the previous\n discussion\n\nRecommendations\n===============\n\n From here on it's definitely all IMHO! Your mileage may vary.\n\nTo restate the methods I discussed:\n\n (a)(i) standard error to file\n (ii) standard error piped to a process\n (b) named log file(s)\n (c) syslogd\n (d) database\n\nI would recommend (a)(ii), with (a)(i) available for anyone who wants\nit. (Someone who has high load 9-5 but who can shut down daily might\nbe happy writing directly to a log file, for example.)\n\nUsing the database is too complex/fragile, using syslogd fails too\noften, and having every process involved manage log files as in (b)\nseems like more work than is necessary unless/until logging via a\nsingle process turns out to be a bottleneck.\n\nThere has been a suggestion to use the Apache \"rotatelogs\" program to\ndo the logging. 
That program is perhaps a little -too- simplistic:\n\n- no timestamp option\n- no way to make ad-hoc requests for log file rotation\n- write errors cause it to exit (... oops!)\n- no fsync option\n\nOn the other hand the program is only 100 lines. Writing a robust\nversion isn't rocket science, and if postmaster is taught to start it,\nthen pg_ctl looks like a natural place to control log file rotation\nand possibly one day log levels as well.\n\nRegards,\n\nGiles\n\nP.S. Yes, I've some coding time to offer. I'll wait and see what is\nliked before I do anything though. ;-)\n\n\n\n\n\n------- End of Blind-Carbon-Copy\n", "msg_date": "Sat, 03 Jun 2000 22:59:34 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Industrial-Strength Logging " }, { "msg_contents": "Giles Lean <[email protected]> writes:\n>> Yeah, let's have another logging discussion... :)\n>\n> [ good summary of different approaches: ]\n> (a)(i) standard error to file\n> (ii) standard error piped to a process\n> (b) named log file(s)\n> (c) syslogd\n> (d) database\n> I would recommend (a)(ii), with (a)(i) available for anyone who wants\n> it. (Someone who has high load 9-5 but who can shut down daily might\n> be happy writing directly to a log file, for example.)\n\nYou mentioned the issue of trying to deal with out-of-disk-space errors\nfor the log file, but there is another kind of resource exhaustion\nproblem that should also be taken into account. Namely, inability to\nopen the log file due to EMFILE (no kernel filetable slots left) errors.\nThis is fresh in my mind because I just finished making some fixes to\nmake Postgres more robust in the full-filetable scenario. It's quite\neasy for a Postgres installation to run the kernel out of filetable\nslots if the admin has set a large MaxBackends limit without increasing\nthe kernel's NFILE parameter enough to cope. 
So this isn't a very\nfarfetched scenario, and we ought to take care that our logging\nmechanism doesn't break down when it happens.\n\nYou mentioned that case (b) has a popular variant of opening and closing\nthe logfile for each message. I think this would be the most prone to\nEMFILE failures, since the backends wouldn't normally be holding the\nlogfile open. In the other cases the logfile or log pipe is held open\ncontinually by each backend so there's no risk at that point. Of\ncourse, the downstream logging daemon in cases (a)(ii) and (c) might\nsuffer EMFILE at the time that it's trying to rotate to a new logfile.\nI doubt we can expect that syslogd has a good strategy for coping with\nthis :-(. If the daemon is of our own making, the first thought that\ncomes to mind is to hold the previous logfile open until after we\nsuccessfully open the new one. If we get a failure on opening the new\nfile, we just keep logging into the old one, while periodically trying\nto rotate again.\n\nThe recovery strategy for individual backends faced with EMFILE failures\nis to close inessential files until the open() request succeeds. (There\nare normally plenty of inessential open files, since most backend I/O\ngoes through VFDs managed by fd.c, and any of those that are physically\nopen can be closed at need.) If we use case (b) then a backend that\nfinds itself unable to open a log file could try to recover that way.\nHowever there are two problems with it: one, we might be unable to log\nstartup failures under EMFILE conditions (since there might well be no\nopen VFDs in a newly-started backend, especially if the system is in\nfiletable trouble), and two, there's some risk of circularity problems\nif fd.c is itself trying to write a log message and has to be called\nback by elog.c.\n\nCase (d), logging to a database table, would be OK in the face of EMFILE\nduring normal operation, but again I worry about the prospect of being\nunable to log startup failures. 
(Actually, there's a more serious\nproblem with it for startup failures: a backend cannot be expected to do\ndatabase writes until it's pretty fully up to speed. Between that and\nthe fact the postmaster can't write to tables either, I think we can\nreject case (d) for our purposes.)\n\nSo from this point of view, it again seems that case (a)(i) or (a)(ii)\nis the best alternative, so long as the logging daemon is coded not to\ngive up its handle for an old log file until it's successfully acquired\na new one.\n\n\nSeems like the next step should be for someone to take a close look at\nthe several available log-daemon packages and see which of them looks\nlike the best bet for our purposes. (I assume there's no good reason\nto roll our own from scratch...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 03 Jun 2000 12:36:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Industrial-Strength Logging " }, { "msg_contents": "\nOn Sat, 03 Jun 2000 12:36:58 -0400 Tom Lane wrote:\n\n> You mentioned the issue of trying to deal with out-of-disk-space errors\n> for the log file, but there is another kind of resource exhaustion\n> problem that should also be taken into account. Namely, inability to\n> open the log file due to EMFILE (no kernel filetable slots left)\n> errors.\n\ns/EMFILE/ENFILE/, but yes.\n\n> I doubt we can expect that syslogd has a good strategy for coping with\n> this :-(.\n\n\t\tif ((f->f_file = open(p, O_WRONLY|O_APPEND, 0)) < 0) {\n\t\t\tf->f_type = F_UNUSED;\n\t\t\tlogerror(p);\n\n'nuff said.\n\n> If the daemon is of our own making, the first thought that\n> comes to mind is to hold the previous logfile open until after we\n> successfully open the new one. 
If we get a failure on opening the new\n> file, we just keep logging into the old one, while periodically trying\n> to rotate again.\n\nCosts an extra file descriptor, but it's only one and it's temporary.\nI can't see anything better to do.\n\n> Seems like the next step should be for someone to take a close look at\n> the several available log-daemon packages and see which of them looks\n> like the best bet for our purposes. (I assume there's no good reason\n> to roll our own from scratch...)\n\nSuggestions anyone? I like Bernstein's tools, but they're not freely\ndistributable enough to be integrated into Postgres. The Apache\nprogram would be an OK starting place ... if 100 lines of not quite\nright code is better than a blank page.\n\nThe other tools I mentioned -- swatch and logsurfer -- are really\nanalysis programs, and while someone might use them for on-the-fly\ndata reduction they're not really what we want as a default. Anyone?\n\nIt wouldn't hurt to discuss the requirements a little bit more here,\ntoo. There is a compile time option to postgres to add timestamps.\nShould the log program do that instead? Should the log writing\nprogram have any responsibility for filtering?\n\nBernstein's \"multilog\" can do this sort of thing, although I don't\nexpect the interface to be to many peoples' liking:\n\n http://cr.yp.to/daemontools/multilog.html\n\nRegards,\n\nGiles\n\nP.S. I sent two copies of the message that Tom was replying to to\n-hackers; I saw none come back. Very peculiar, since the one was sent\nfrom an address subscribed to pgsql-loophole and one from an address\nactually subscribed to the list.\n", "msg_date": "Sun, 04 Jun 2000 09:08:28 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Industrial-Strength Logging " } ]
[ { "msg_contents": "Hi,\n\nI just posted a message to the interfaces list about how this is causing\nproblems in th JDBC driver, and I'm wondering if there's a reason why\nthe EncodeDateTime function creates a different format string depending\non whether the date has milliseconds. Would it break anything if it\nalways returned:\n\nyyyy-mm-dd hh:mm:ss.SSzzz\n\neven if the SS will be zero, and even if the time will is null: \n\"00:00:00.00\" (midnight)?\n\nAlso, why are there only two digits of precision on the milliseconds? \nshouldn't there be three?\n\n\t-Nissim\n", "msg_date": "Sat, 03 Jun 2000 14:15:15 -0400", "msg_from": "Nissim <[email protected]>", "msg_from_op": true, "msg_subject": "Variable formatting of datetime with DateStyle=ISO " }, { "msg_contents": "Nissim <[email protected]> writes:\n> I just posted a message to the interfaces list about how this is causing\n> problems in th JDBC driver, and I'm wondering if there's a reason why\n> the EncodeDateTime function creates a different format string depending\n> on whether the date has milliseconds. Would it break anything if it\n> always returned:\n\n> yyyy-mm-dd hh:mm:ss.SSzzz\n\nYes: all the applications that never store fractional seconds, and are\nnot expecting to find fractions in their returned results. I think the\nexisting behavior is a reasonable compromise, and puts the burden of\nextra complexity where it belongs: on the apps that are using\nfractional-second timestamps.\n\n> Also, why are there only two digits of precision on the milliseconds? \n> shouldn't there be three?\n\nThe system doesn't actually store \"milliseconds\". Timestamp is a\nfloating-point format internally, and so the true resolution is variable\ndepending on how far away you are from time zero. Over a 100-year range\nthe available resolution would be more like microseconds.\n\nHaving said that, 2 fraction digits does seem like a pretty arbitrary\nchoice. 
Thomas Lockhart might know why it was done that way, but he's\ngone for vacation and won't be back for a week or so...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 04 Jun 2000 15:14:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Variable formatting of datetime with DateStyle=ISO " }, { "msg_contents": "> Having said that, 2 fraction digits does seem like a pretty arbitrary\n> choice. Thomas Lockhart might know why it was done that way, but he's\n> gone for vacation and won't be back for a week or so...\n\nYup. Arbitrary compromise between precision and readability...\n\n - Thomas\n", "msg_date": "Tue, 13 Jun 2000 01:49:28 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Variable formatting of datetime with DateStyle=ISO" } ]
[ { "msg_contents": "Does *anyone* get commit messages from some mailing list? The system\nclaims I'm subscribed to that list but I'm not getting any mail. No one is\nlistening at -owner either.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 4 Jun 2000 00:57:53 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Commit messages list" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Does *anyone* get commit messages from some mailing list? The system\n> claims I'm subscribed to that list but I'm not getting any mail. No one is\n> listening at -owner either.\n\nHmm. Works for me ... mine claim to be coming through\[email protected]\nand it seems to be run by the same majordomo as the rest of the lists.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 04 Jun 2000 14:22:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commit messages list " }, { "msg_contents": "\nokay, just checked the list, and you are set for getting a digest of\ncommitter messages ... the config for it looks okay, except that there was\na 'time' setting of 18-23 (hrs between which it is allowed to create a\ndigest) ... I just changed it to always instead, not sure if that will\nhelp or not, but its a thought ...\n\nI get the individual messages, as does Tom, which appears to be flowing\nproperly ...\n\n On Sun, 4 Jun 2000, Peter Eisentraut wrote:\n\n> Does *anyone* get commit messages from some mailing list? The system\n> claims I'm subscribed to that list but I'm not getting any mail. No one is\n> listening at -owner either.\n> \n> -- \n> Peter Eisentraut Sernanders väg 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 5 Jun 2000 11:12:07 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commit messages list" }, { "msg_contents": "The Hermit Hacker writes:\n\n> okay, just checked the list, and you are set for getting a digest of\n> committer messages ... the config for it looks okay, except that there was\n> a 'time' setting of 18-23 (hrs between which it is allowed to create a\n> digest) ... I just changed it to always instead, not sure if that will\n> help or not, but its a thought ...\n\nUnfortunately, nothing so far. :( As a parallel thought, how about making\nthe list of commit messages available on the web site, as a text file?\n(Perhaps rotate every week.)\n\n> I get the individual messages, as does Tom, which appears to be flowing\n> properly ...\n\nOkay, so how do I unset digest?\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 6 Jun 2000 18:13:58 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Commit messages list" } ]
[ { "msg_contents": "Well, in attempting to build RPMs for 7.0.1, I have unearthed a 26\nmegabyte problem. 26 megabytes is the difference in the size of an\nunpacked 7.0.1 source tree versus 7.0. 7.0.1 is 26 MB _smaller_. \nWhat's missing?\n\nA diff of a du on each tree shows that the unpacked docs are missing in\n7.0.1. This causes a make all in the doc tree to fail (which was\nreported earlier this week on-list) -- which make all is required by the\nRPMs for the documentation build. However, that is only 7 MB of the 26\nmissing MB.\n\nSo, I do a diff of find -print on each tree..... I also do a wc. The wc\nresults are revealing -- 7.0's tree has 4,172 files. 7.0.1's tree has\n2,562 files. Hmmm.... The PostScript docs are missing in 7.0.1, guys.\nExcept for internals.ps, that is.\n\nHmmm... Where is the rest of the space? Well, there are some *.o and\n*.so files laying around in the 7.0 tree. No, I haven't done a build\nhere -- but, there is a complete build here -- or, it _looks_ like a\ncomplete build. 7.0.1 doesn't have any *.o's -- which is good!\n\nSo, the *.o and *.so's (refint.so and autoinc.so) account for the\nmajority of the missing 26MB. But, the docs are still missing...and I\nsee no note about docs in HISTORY, so I'm assuming that them being\nmissing is not intentional.\n\nI may have to make a patch from 7.0 to 7.0.1 for the non-doc portion,\nand patch a 7.0 tree (minus the *.o's), and generate a tree to put into\nany 7.0.1 RPMs I might generate. Until then (or 7.0.1-and-a-half is\nreleased), there will be no new RPMs, as I really don't have any\nintention of building doc-less RPMs.... :-(.\n\nSorry I didn't catch this sooner, Marc. I should have followed through\non my hunch that I should build RPMs during the prerelease period, after\nyou announced here that the 7.0.1 release candidate was available --\nregrettably, I did not do that build. 
RPM-building will show problems\nother builds won't.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 04 Jun 2000 00:49:01 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "7.0.1 Problems." }, { "msg_contents": "* Lamar Owen <[email protected]> [000603 21:56] wrote:\n> Well, in attempting to build RPMs for 7.0.1, I have unearthed a 26\n> megabyte problem. 26 megabytes is the difference in the size of an\n> unpacked 7.0.1 source tree versus 7.0. 7.0.1 is 26 MB _smaller_. \n> What's missing?\n\nI asked about this recently, as you noticed the docs are missing, you'll\nneed to use docbook (jade or whatever) to generate them yourself or\nyou can download the docs and whatnot already built from the download\nsite.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Sat, 3 Jun 2000 23:44:42 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.1 Problems." }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> [ 7.0.1 missing docs, but 7.0 contains .o files it shouldn't have ]\n\nI think Marc needs to automate his tarfile-building process a little\nmore thoroughly ... we keep having these release-to-release\ndiscrepancies about just what's in the tar, how it's named, etc.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 04 Jun 2000 14:41:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.1 Problems. " }, { "msg_contents": "On Sun, 4 Jun 2000, Tom Lane wrote:\n\n> Lamar Owen <[email protected]> writes:\n> > [ 7.0.1 missing docs, but 7.0 contains .o files it shouldn't have ]\n> \n> I think Marc needs to automate his tarfile-building process a little\n> more thoroughly ... 
we keep having these release-to-release\n> discrepancies about just what's in the tar, how it's named, etc.\n\nBut then again Marc himself just said something like that yesterday,\na few hours before Lamar's comment.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sun, 4 Jun 2000 16:30:34 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.1 Problems. " }, { "msg_contents": "Vince Vielhaber wrote:\n> \n> On Sun, 4 Jun 2000, Tom Lane wrote:\n> > Lamar Owen <[email protected]> writes:\n> > > [ 7.0.1 missing docs, but 7.0 contains .o files it shouldn't have ]\n\n> > I think Marc needs to automate his tarfile-building process a little\n> > more thoroughly ... we keep having these release-to-release\n> > discrepancies about just what's in the tar, how it's named, etc.\n \n> But then again Marc himself just said something like that yesterday,\n> a few hours before Lamar's comment.\n\nYeah, he mentioned that he had to rebuild his script due to a lossage.\nPossibly related. He certainly has my sympathies. Thankfully RPM\nbuilding is relatively easy to automate (it's designed that way from the\nground up, though).\n\nIf I need to work around the lossage (with a docs tar for the RPMset or\nsomething similar generated from the 7.0 tarball) I can do that -- I\njust wanted to bring it to the list's attention.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 04 Jun 2000 16:40:51 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0.1 Problems." 
}, { "msg_contents": "Alfred Perlstein wrote:\n> \n> * Lamar Owen <[email protected]> [000603 21:56] wrote:\n> > Well, in attempting to build RPMs for 7.0.1, I have unearthed a 26\n> > megabyte problem. 26 megabytes is the difference in the size of an\n> > unpacked 7.0.1 source tree versus 7.0. 7.0.1 is 26 MB _smaller_.\n> > What's missing?\n\n> I asked about this recently, as you noticed the docs are missing, you'll\n> need to use docbook (jade or whatever) to generate them yourself or\n> you can download the docs and whatnot already built from the download\n> site.\n\nUnless there have been changes, I figured I'd take what I needed from\nthe 7.0 tarball, unless a 7.0.1 re-release or 7.0.2 release is planned\nsoon. I just want to see where the direction is going to be before doing\nthat.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 04 Jun 2000 16:43:59 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0.1 Problems." }, { "msg_contents": "* Lamar Owen <[email protected]> [000604 13:57] wrote:\n> Alfred Perlstein wrote:\n> > \n> > * Lamar Owen <[email protected]> [000603 21:56] wrote:\n> > > Well, in attempting to build RPMs for 7.0.1, I have unearthed a 26\n> > > megabyte problem. 26 megabytes is the difference in the size of an\n> > > unpacked 7.0.1 source tree versus 7.0. 7.0.1 is 26 MB _smaller_.\n> > > What's missing?\n> \n> > I asked about this recently, as you noticed the docs are missing, you'll\n> > need to use docbook (jade or whatever) to generate them yourself or\n> > you can download the docs and whatnot already built from the download\n> > site.\n> \n> Unless there have been changes, I figured I'd take what I needed from\n> the 7.0 tarball, unless a 7.0.1 re-release or 7.0.2 release is planned\n> soon. 
I just want to see where the direction is going to be before doing\n> that.\n\nI wouldn't, shipping outdated docs is a nice way of shooting your\nusers in the feet.\n\n-Alfred\n", "msg_date": "Sun, 4 Jun 2000 14:17:36 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.1 Problems." }, { "msg_contents": "On Sun, 04 Jun 2000, The Hermit Hacker wrote:\n> On Sun, 4 Jun 2000, Alfred Perlstein wrote:\n> > * Lamar Owen <[email protected]> [000604 13:57] wrote:\n> > > Unless there have been changes, I figured I'd take what I needed from\n> > > the 7.0 tarball, unless a 7.0.1 re-release or 7.0.2 release is planned\n> > > soon. I just want to see where the direction is going to be before doing\n> > > that.\n \n> > I wouldn't, shipping outdated docs is a nice way of shooting your\n> > users in the feet.\n \n> actually, were there much changes to the docs themselves since the release\n> of v7.0? the tar files I can find on the site are dated May 8th ...\n\nThe major changes deal with the changed CVSROOT. There are other changes --\na diff -uNr between postgresql-7.0/doc/src and postgresql-7.0.1/doc/src is 100K\nor so, with the majority of it being $Header differences.\n\nI will wait to package 7.0.1 RPMs until a direction is set by Steering on this\nissue, or a week passes. If a week passes without a set direction, I'm going\nto package 7.0.1 RPMs with the 7.0 PostScript docs, unless I get my own\njade/DocBook system up and running building the docs here first. (RedHat 6.2\nships with a jade/DocBook SGML toolset -- I just have to learn how to use it in\nthis context.)\n\nYour call.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 4 Jun 2000 21:26:31 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0.1 Problems." 
}, { "msg_contents": "On Sun, 4 Jun 2000, Alfred Perlstein wrote:\n\n> * Lamar Owen <[email protected]> [000604 13:57] wrote:\n> > Alfred Perlstein wrote:\n> > > \n> > > * Lamar Owen <[email protected]> [000603 21:56] wrote:\n> > > > Well, in attempting to build RPMs for 7.0.1, I have unearthed a 26\n> > > > megabyte problem. 26 megabytes is the difference in the size of an\n> > > > unpacked 7.0.1 source tree versus 7.0. 7.0.1 is 26 MB _smaller_.\n> > > > What's missing?\n> > \n> > > I asked about this recently, as you noticed the docs are missing, you'll\n> > > need to use docbook (jade or whatever) to generate them yourself or\n> > > you can download the docs and whatnot already built from the download\n> > > site.\n> > \n> > Unless there have been changes, I figured I'd take what I needed from\n> > the 7.0 tarball, unless a 7.0.1 re-release or 7.0.2 release is planned\n> > soon. I just want to see where the direction is going to be before doing\n> > that.\n> \n> I wouldn't, shipping outdated docs is a nice way of shooting your\n> users in the feet.\n\nactually, were there much changes to the docs themselves since the release\nof v7.0? the tar files I can find on the site are dated May 8th ...\n\n\n", "msg_date": "Sun, 4 Jun 2000 22:51:35 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.1 Problems." }, { "msg_contents": "On Sun, 04 Jun 2000, The Hermit Hacker wrote:\n> these problems fixed and call it v7.0.2 ... 
I'm not going to bother\n> announcing this, cause there are no changes other then cleaning up the\n> packaging, but don't want to confuse ppl by just changing the contents of\n> hte existing tar files ...\n\nWell, I hate to say it, but, in keeping with the spirit of 6.4.1->6.4.2, a\nshort note to announce and general might be in order.\n \n> If anyone feels like looking it over and telling me if I'm missing\n> anythign else, the 'generation script' is in\n> ftp://ftp.postgresql.org/www/supported-bin/mk-release (as is\n> mk-snapshot) ... please feel free to suggest any changes ...\n\nAdd a copy of the *.ps.gz docs after the copy of the tarred html docs. I\n_think_ that's all that's missing.... tar one up, and I'll try an RPM build\ntomorrow morning. I won't guarantee that an RPM build will catch all problems,\nbut it will catch most. Not a bad script, BTW.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 4 Jun 2000 22:15:40 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0.1 Problems." }, { "msg_contents": "On Sun, 04 Jun 2000, Lamar Owen wrote:\n> On Sun, 04 Jun 2000, The Hermit Hacker wrote:\n> > these problems fixed and call it v7.0.2 ... I'm not going to bother\n> > announcing this, cause there are no changes other then cleaning up the\n> > packaging, but don't want to confuse ppl by just changing the contents of\n> > hte existing tar files ...\n \n> Well, I hate to say it, but, in keeping with the spirit of 6.4.1->6.4.2, a\n> short note to announce and general might be in order.\n\nWell, after writing that, and then re-reading it, I do want to say that I\nunderstand that this is different from the 6.4.1 mispackage -- as it was a\npackage of the then CURRENT branch versus the stable branch, and was thus a\nmore serious problem than the present one. I was not intending to make this\none to be bigger than it is -- it is indeed a minor packaging error, not of the\nmagnitude of 6.4.1. 
HOWEVER, the spirit of 6.4.1-6.4.2 is to at least make a\nbrief note available. And I'm glad the 6.4.1 problem has not been repeated.\n\nMy apologies.\n\nAs an RPM packager, I have it a little easier than you do, Marc -- I can just\nrelease a minor update to the same version. Problems with 7.0-1? No problem --\nhere's 7.0-2. Save version of the program -- different release of the package.\n\n --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 4 Jun 2000 22:40:33 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0.1 Problems." }, { "msg_contents": "\nokay, since I've obviously screwed up naming conventions on ppl (.7.0.1 vs\n-7.0.1) and missed the docs, I'm just going to do a 're-release' with\nthese problems fixed and call it v7.0.2 ... I'm not going to bother\nannouncing this, cause there are no changes other then cleaning up the\npackaging, but don't want to confuse ppl by just changing the contents of\nhte existing tar files ...\n\nIf anyone feels like looking it over and telling me if I'm missing\nanythign else, the 'generation script' is in\nftp://ftp.postgresql.org/www/supported-bin/mk-release (as is\nmk-snapshot) ... please feel free to suggest any changes ...\n\n\n\nOn Sun, 4 Jun 2000, Lamar Owen wrote:\n\n> On Sun, 04 Jun 2000, The Hermit Hacker wrote:\n> > On Sun, 4 Jun 2000, Alfred Perlstein wrote:\n> > > * Lamar Owen <[email protected]> [000604 13:57] wrote:\n> > > > Unless there have been changes, I figured I'd take what I needed from\n> > > > the 7.0 tarball, unless a 7.0.1 re-release or 7.0.2 release is planned\n> > > > soon. I just want to see where the direction is going to be before doing\n> > > > that.\n> \n> > > I wouldn't, shipping outdated docs is a nice way of shooting your\n> > > users in the feet.\n> \n> > actually, were there much changes to the docs themselves since the release\n> > of v7.0? 
the tar files I can find on the site are dated May 8th ...\n> \n> The major changes deal with the changed CVSROOT. There are other changes --\n> a diff -uNr between postgresql-7.0/doc/src and postgresql-7.0.1/doc/src is 100K\n> or so, with the majority of it being $Header differences.\n> \n> I will wait to package 7.0.1 RPMs until a direction is set by Steering on this\n> issue, or a week passes. If a week passes without a set direction, I'm going\n> to package 7.0.1 RPMs with the 7.0 PostScript docs, unless I get my own\n> jade/DocBook system up and running building the docs here first. (RedHat 6.2\n> ships with a jade/DocBook SGML toolset -- I just have to learn how to use it in\n> this context.)\n> \n> Your call.\n> \n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 4 Jun 2000 23:50:41 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.1 Problems." }, { "msg_contents": "On Sun, 04 Jun 2000, The Hermit Hacker wrote:\n> On Sun, 4 Jun 2000, Lamar Owen wrote:\n \n> > Add a copy of the *.ps.gz docs after the copy of the tarred html docs. I\n> > _think_ that's all that's missing.... tar one up, and I'll try an RPM build\n> > tomorrow morning. I won't guarantee that an RPM build will catch all problems,\n> > but it will catch most. Not a bad script, BTW.\n \n> I believe it was decided already that the .ps.gz files would be available\n> through the web, but not as part of the tar files themselves ... same with\n> the other formats (A1?) that thomas had worked on ...\n\nOk, then I guess I'll drop them from the RPM, unless they are requested by\npopular demand, in which case I can build either a separate package for them,\nor I can incorporate them as separate source files.... 
It'll depend upon\nresponse to RPMs without the PostScript files. That saves a little space, too!\n Of course, I'll include a pointer to them in my README.rpm.\n\nIt is now coming back to me about the discussion on that issue -- but, I then\nfound them in the 7.0 tarball, and misunderstood that they were still going to\nbe included. Just a minor change, no biggie.\n\nOk, when 7.0.2 is ready, I'll run a trial build....\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 4 Jun 2000 23:11:14 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0.1 Problems." }, { "msg_contents": "On Sun, 4 Jun 2000, Lamar Owen wrote:\n\n> On Sun, 04 Jun 2000, The Hermit Hacker wrote:\n> > these problems fixed and call it v7.0.2 ... I'm not going to bother\n> > announcing this, cause there are no changes other then cleaning up the\n> > packaging, but don't want to confuse ppl by just changing the contents of\n> > hte existing tar files ...\n> \n> Well, I hate to say it, but, in keeping with the spirit of 6.4.1->6.4.2, a\n> short note to announce and general might be in order.\n> \n> > If anyone feels like looking it over and telling me if I'm missing\n> > anythign else, the 'generation script' is in\n> > ftp://ftp.postgresql.org/www/supported-bin/mk-release (as is\n> > mk-snapshot) ... please feel free to suggest any changes ...\n> \n> Add a copy of the *.ps.gz docs after the copy of the tarred html docs. I\n> _think_ that's all that's missing.... tar one up, and I'll try an RPM build\n> tomorrow morning. I won't guarantee that an RPM build will catch all problems,\n> but it will catch most. Not a bad script, BTW.\n\nI believe it was decided already that the .ps.gz files would be available\nthrough the web, but not as part of the tar files themselves ... same with\nthe other formats (A1?) 
that thomas had worked on ...\n\n\n", "msg_date": "Mon, 5 Jun 2000 00:50:02 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.1 Problems." }, { "msg_contents": "> \n> okay, since I've obviously screwed up naming conventions on ppl (.7.0.1 vs\n> -7.0.1) and missed the docs, I'm just going to do a 're-release' with\n> these problems fixed and call it v7.0.2 ... I'm not going to bother\n> announcing this, cause there are no changes other then cleaning up the\n> packaging, but don't want to confuse ppl by just changing the contents of\n> hte existing tar files ...\n> \n> If anyone feels like looking it over and telling me if I'm missing\n> anythign else, the 'generation script' is in\n> ftp://ftp.postgresql.org/www/supported-bin/mk-release (as is\n> mk-snapshot) ... please feel free to suggest any changes ...\n\nWe will need to add release note changes, and change the install files\nand other branding to mark it as 7.0.2. Let me know if you want me to\ndo it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 4 Jun 2000 23:51:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.1 Problems." }, { "msg_contents": "On Sun, 4 Jun 2000, Bruce Momjian wrote:\n\n> > \n> > okay, since I've obviously screwed up naming conventions on ppl (.7.0.1 vs\n> > -7.0.1) and missed the docs, I'm just going to do a 're-release' with\n> > these problems fixed and call it v7.0.2 ... 
I'm not going to bother\n> > announcing this, cause there are no changes other then cleaning up the\n> > packaging, but don't want to confuse ppl by just changing the contents of\n> > hte existing tar files ...\n> > \n> > If anyone feels like looking it over and telling me if I'm missing\n> > anythign else, the 'generation script' is in\n> > ftp://ftp.postgresql.org/www/supported-bin/mk-release (as is\n> > mk-snapshot) ... please feel free to suggest any changes ...\n> \n> We will need to add release note changes, and change the install files\n> and other branding to mark it as 7.0.2. Let me know if you want me to\n> do it.\n\nokay, let's do this ...\n\nLamar, please test the ones that are up there now ... I haven't announced\nit, and Vince hopefully didn't yet? :)\n\nBruce, can you please do the appropriate branding? \n\nOnce Bruce is done, and Lamar has reported it tested, I will create a\n'final tar ball' ...\n\nIf anyone else wants to take a look and comment, please do ... \n\n\n", "msg_date": "Mon, 5 Jun 2000 01:29:34 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.1 Problems." }, { "msg_contents": "On Mon, 5 Jun 2000, The Hermit Hacker wrote:\n\n> okay, let's do this ...\n> \n> Lamar, please test the ones that are up there now ... I haven't announced\n> it, and Vince hopefully didn't yet? :)\n\nNope, haven't yet. 
I'll wait for a go-ahead.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 5 Jun 2000 06:03:10 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.1 Problems." }, { "msg_contents": "> > do it.\n> \n> okay, let's do this ...\n> \n> Lamar, please test the ones that are up there now ... I haven't announced\n> it, and Vince hopefully didn't yet? :)\n> \n> Bruce, can you please do the appropriate branding? \n\nOK, done.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Jun 2000 13:07:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.1 Problems." }, { "msg_contents": "The Hermit Hacker wrote:\n> Lamar, please test the ones that are up there now ... I haven't announced\n> it, and Vince hopefully didn't yet? :)\n\n> Once Bruce is done, and Lamar has reported it tested, I will create a\n> 'final tar ball' ...\n\nBuilt correctly. :-)\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 05 Jun 2000 14:31:15 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0.1 Problems." }, { "msg_contents": "> I believe it was decided already that the .ps.gz files would be \n> available through the web, but not as part of the tar files themselves \n> ... same with the other formats (A1?) that thomas had worked on ...\n\n?? 
Hmmph. I don't recall that, and am a bit unhappy that the docs are\nnow considered optional. Just because some of the developers don't use a\nparticular format is no reason to hide them from others...\n\n - Thomas\n", "msg_date": "Tue, 13 Jun 2000 01:55:45 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.1 Problems." }, { "msg_contents": "On Tue, 13 Jun 2000, Thomas Lockhart wrote:\n\n> > I believe it was decided already that the .ps.gz files would be \n> > available through the web, but not as part of the tar files themselves \n> > ... same with the other formats (A1?) that thomas had worked on ...\n> \n> ?? Hmmph. I don't recall that, and am a bit unhappy that the docs are\n> now considered optional. Just because some of the developers don't use a\n> particular format is no reason to hide them from others...\n\nWho is hiding what? *All*, and as many, formats should be easily\navailable through the web site ... why should the distribution contain\n>2Meg worth of extra docs? \n\nIts not something that I'm going to pull hair out over ... if ppl want\nthem as part of the general distribution to download, I'll add it back\ninto the release generation script ...\n\n", "msg_date": "Tue, 13 Jun 2000 09:51:33 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.1 Problems." } ]
[ { "msg_contents": "On Sat, Jun 03, 2000 at 05:22:56PM +0200, Louis-David Mitterrand wrote:\n> When creating a child (through CREATE TABLE ... INHERIT (parent)) it\n> seems the child gets all of the parent's contraints _except_ its PRIMARY\n> KEY. Is this normal? Should I add a PRIMARY KEY(id) statement each time\n> I create an inherited table?\n\nFollowing up to my previous message, I found that one can't explicitely\nadd a PRIMARY KEY on child table referencing a field on the parent\ntable, for instance:\n\n CREATE TABLE auction (\n id SERIAL PRIMARY KEY,\n title text,\n ... etc...\n );\n\nthen \n\n CREATE TABLE auction_dvd (\n zone int4,\n PRIMARY KEY(\"id\")\n ) inherits(\"auction\");\n\ndoesn't work:\n ERROR: CREATE TABLE: column 'id' named in key does not exist\n\nBut the aution_dvd table doesn't inherit the auction table's PRIMARY\nKEY, so I can insert duplicates.\n\nSolutions:\n\n1) don't use PRIMARY KEY, use UNIQUE NOT NULL (which will be inherited?)\nbut the I lose the index,\n\n2) use the OID field, but it's deprecated by PG developers?\n\nWhat would be the best solution?\n\nTIA\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.fr\n\nVeni, Vidi, VISA.\n", "msg_date": "Sun, 4 Jun 2000 08:52:10 +0200", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "child table doesn't inherit PRIMARY KEY?" }, { "msg_contents": "Louis-David Mitterrand wrote:\n> \n> Solutions:\n> \n> 1) don't use PRIMARY KEY, use UNIQUE NOT NULL (which will be inherited?)\n> but the I lose the index,\n\nAFAIK the UNIQUE constraint is implemented in PostgreSQL by creating \nan unique index on that field \n\n----------\nHannu\n", "msg_date": "Sun, 04 Jun 2000 10:42:57 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: child table doesn't inherit PRIMARY KEY?" } ]
[ { "msg_contents": "Hello all,\n\nI just upgraded from 7.0 to 7.0.1 and have the following in the log:\n/home/postgres/bin/postmaster child[18147]: starting with (/home/postgres/bin/postgres -d2 -B 64 -F -v131072 -p webmailstation )\nFindExec: found \"/home/postgres/bin/postgres\" using argv[0]\nstarted: host=localhost user=webmailstation database=webmailstation\nInitPostgres\nFATAL 1: heapgettup: failed ReadBuffer\nproc_exit(0)\nshmem_exit(0)\nexit(0)\n\nWhy can this happen?\n\nSU,\nDenis.\n", "msg_date": "Sun, 4 Jun 2000 17:06:05 +0700", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": true, "msg_subject": "Problem upgrading from 7.0 to 7.0.1" } ]
[ { "msg_contents": "Issuing the followin SELECT crashes PG 7.0:\n\nauction=# SELECT a.id,a.title,a.id,(select CASE WHEN a.stopdate < 'now' THEN 'closed' ELSE 'open' end) as status,to_char(a.time,'DD-MM HH24:MI'),b.price FROM auction* a, bid b WHERE a.id = b.auctionid AND b.login = 'mito2';\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!# \\q\n\nApparently PG doesn't like the (SELECT CASE ... ) statement, until I\nadded it everything went well. \n\nHere are the table descriptions:\n\n Table \"auction\"\n Attribute | Type | Modifier \n--------------+-----------+--------------------------------------------------\n id | integer | not null default nextval('auction_id_seq'::text)\n login | text | not null\n startdate | timestamp | not null default now()\n stopdate | timestamp | not null\n description | text | not null\n startprice | float8 | not null\n reserveprice | float8 | \n category | text | not null\n imageurl | text | \n title | text | not null\n quantity | integer | default 1\n time | timestamp | not null default now()\n option | bigint | \nIndex: auction_pkey\nConstraints: (quantity > 0)\n (imageurl ~ '^http://'::text)\n (stopdate > startdate)\n ((reserveprice ISNULL) OR (reserveprice > startprice))\n\n Table \"bid\"\n Attribute | Type | Modifier \n-----------+-----------+------------------------\n login | text | not null\n auctionid | integer | not null\n price | float8 | not null\n quantity | integer | not null default 1\n time | timestamp | not null default now()\nIndex: bid_pkey\n\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.fr\n\nWARNING TO ALL PERSONNEL: Firings will continue until morale improves. 
\n", "msg_date": "Sun, 4 Jun 2000 17:21:49 +0200", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "PG 7.0 crash on SELECT" }, { "msg_contents": "Louis-David Mitterrand <[email protected]> writes:\n> Issuing the followin SELECT crashes PG 7.0:\n> auction=# SELECT a.id,a.title,a.id,(select CASE WHEN a.stopdate < 'now' THEN 'closed' ELSE 'open' end) as status,to_char(a.time,'DD-MM HH24:MI'),b.price FROM auction* a, bid b WHERE a.id = b.auctionid AND b.login = 'mito2';\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> !# \\q\n\n> Apparently PG doesn't like the (SELECT CASE ... ) statement, until I\n> added it everything went well. \n\nThe crash certainly is a bug, but you could get around it for now\nby not using an unnecessary sub-SELECT. Why not just\n\t...,a.id,(CASE WHEN ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 04 Jun 2000 14:57:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 7.0 crash on SELECT " } ]
[ { "msg_contents": "Who's maintaining the ODBC configure and build stuff?\n\nThere's a grave issue with ODBC linking in all kinds of makefiles,\ntemplates, header files, etc. from the main Postgres configure process.\nThat creates problems for both sides: The main tree couldn't care less\nwhat ODBC needs tested and defined since it claims to have its own\nconfigure script.\n\nI don't have a problem with ODBC having its own configure script, if\nthat's what's desired, but then it needs to be used unconditionally.\nRunning the standalone and the integrated show at the same time doesn't\nwork.\n\nI'd be more than happy to help fixing these issues but I just want to ask\nfirst who knows anything about it. One particular question is where the\nauthoritative source is: is it the PostgreSQL CVS or is it this\npsqlodbc-x.y.z.tar.gz file, wherever that came from?\n\n(Incidentally, I think the ODBC thing is simple enough to just use\nAutomake on it and be done with it. Then we don't need any templates or\nport makefiles or whatever.)\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 5 Jun 2000 02:20:58 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "ODBC configure" }, { "msg_contents": "> I don't have a problem with ODBC having its own configure script, if\n> that's what's desired, but then it needs to be used unconditionally.\n> Running the standalone and the integrated show at the same time doesn't\n> work.\n> \n> I'd be more than happy to help fixing these issues but I just want to ask\n> first who knows anything about it. One particular question is where the\n\n\tNikolaidis, Byron in Baltimore, Maryland, USA\n\t([email protected]) rewrote and maintains\n\tthe ODBC interface for Windows. 
\n\n> authoritative source is: is it the PostgreSQL CVS or is it this\n> psqlodbc-x.y.z.tar.gz file, wherever that came from?\n\nOur CVS tree.\n\n> \n> (Incidentally, I think the ODBC thing is simple enough to just use\n> Automake on it and be done with it. Then we don't need any templates or\n> port makefiles or whatever.)\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 4 Jun 2000 21:36:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ODBC configure" }, { "msg_contents": "I wrote:\n\n> I don't have a problem with ODBC having its own configure script, if\n> that's what's desired, but then it needs to be used unconditionally.\n> Running the standalone and the integrated show at the same time doesn't\n> work.\n\nFurther investigation shows that the standalone build is completely broken\nand apparently no longer maintained. Thus, is there any point in trying to\nkeep it?\n\n> (Incidentally, I think the ODBC thing is simple enough to just use\n> Automake on it and be done with it. Then we don't need any templates or\n> port makefiles or whatever.)\n\nFWIW, I just tried that conversion and it's really very nice (after\nfixing some of shared-lib-on-linux-only funny business). 
So in case you\nwanna keep the separate build after all...\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 6 Jun 2000 18:08:16 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ODBC configure" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I wrote:\n>> I don't have a problem with ODBC having its own configure script, if\n>> that's what's desired, but then it needs to be used unconditionally.\n>> Running the standalone and the integrated show at the same time doesn't\n>> work.\n\n> Further investigation shows that the standalone build is completely broken\n> and apparently no longer maintained. Thus, is there any point in trying to\n> keep it?\n\nAFAICS the only situation where a separate build of ODBC is really\nuseful is to build a Windows executable of the ODBC driver --- but\nof course the autoconf stuff doesn't support that anyway.\n\nFor Unix purposes I'd be in favor of making ODBC just another interface\nthat's built as part of the main build ... but let's see what Lockhart\nhas to say about it. I think he was responsible for setting it up this\nway in the first place, so maybe he's got a good reason for it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Jun 2000 13:34:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: ODBC configure " }, { "msg_contents": "> Who's maintaining the ODBC configure and build stuff?\n\nMe, a little bit. I haven't changed anything in quite a while.\n\n> There's a grave issue with ODBC linking in all kinds of makefiles,\n> templates, header files, etc. from the main Postgres configure\n> process.\n\nOnly for a \"standalone\" build afaik. 
An \"integrated build\" uses all\nfeatures from the main tree.\n\n> That creates problems for both sides: The main tree couldn't care less\n> what ODBC needs tested and defined since it claims to have its own\n> configure script.\n> I don't have a problem with ODBC having its own configure script, if\n> that's what's desired, but then it needs to be used unconditionally.\n> Running the standalone and the integrated show at the same time \n> doesn't work.\n\nWell, standalone and integrated are two separate features. We shouldn't\nstrip one at the expense of the other, but it may be that the standalone\nversion is completely unused? If so, we worked at this feature for no\ngood purpose, and it should be ignored until someone finds a real need\nfor it imho.\n\n - Thomas\n", "msg_date": "Tue, 13 Jun 2000 02:03:57 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ODBC configure" }, { "msg_contents": "On Tue, 13 Jun 2000, Thomas Lockhart wrote:\n\n> Well, standalone and integrated are two separate features. We shouldn't\n> strip one at the expense of the other, but it may be that the standalone\n> version is completely unused? If so, we worked at this feature for no\n> good purpose, and it should be ignored until someone finds a real need\n> for it imho.\n\nThe standalone build is broken at least for 7.0. I see that at some point\nyou tried to get the top-level configure to invoke odbc's configure\nrecursively to have a \"distribution in the distribution\". That is\ndefinitely the right way to do it if you want to keep the dichotomy.\nSomehow it seems that it didn't work for you, but at least it does work\nfor me. 
Maybe we should re-enable that?\n\nBtw., do you recall what the separate packaging was intended for?\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 13 Jun 2000 15:16:49 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ODBC configure" }, { "msg_contents": "> The standalone build is broken at least for 7.0. I see that at some \n> point you tried to get the top-level configure to invoke odbc's \n> configure recursively to have a \"distribution in the distribution\". \n> That is definitely the right way to do it if you want to keep the \n> dichotomy. Somehow it seems that it didn't work for you, but at least \n> it does work for me. Maybe we should re-enable that?\n\nYour call. But afaik the \"distro in the distro\" technique does not allow\nfor a separate package, without requiring a lot of duplication in the\nconfigure script at both levels.\n\n> Btw., do you recall what the separate packaging was intended for?\n\nThe separate packaging was intended to address the point that, for some\napps, the *only* package needed to access a remote Postgres database was\nthe ODBC driver itself. So we wanted a way to build and install it\nwithout requiring any other parts of Postgres to be installed.\n\nAt the time, I didn't have a *strong* feeling that this was essential,\nbut others suggested that it was.\n\nFrankly, istm that the same kind of person who would want to build a\nstandalone ODBC driver might also be running a system for which RPMs or\nsome other packaging system is available, so perhaps we should back away\nfrom the standalone capability on Unix systems. 
The small grey area in\nusers' knowledge and capabilities between \"can build and install from\nthe source distro\" and \"needs a pre-built package\" may be more trouble\nthan it is worth.\n\n - Thomas\n", "msg_date": "Tue, 13 Jun 2000 13:39:43 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ODBC configure" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Btw., do you recall what the separate packaging was intended for?\n\n> The separate packaging was intended to address the point that, for some\n> apps, the *only* package needed to access a remote Postgres database was\n> the ODBC driver itself. So we wanted a way to build and install it\n> without requiring any other parts of Postgres to be installed.\n\nI don't have a strong feeling that this is particularly essential for\nODBC alone. It's been suggested more than once that it might be nice\nto be able to build clients without building the server --- but that\nwould cover libpq, psql, etc, not just the ODBC interface. (Bruce,\nshouldn't there be a TODO item for that?)\n\nIt seems clear that the standalone ODBC config is suffering bit-rot,\nand that that will be its normal state of existence given that no one\ntakes care to make it track changes to the top-level configure files.\nSo I'm in favor of ripping it out.\n\nI would like to see us offer a client-only build procedure someday,\nbut I don't feel much urgency about it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jun 2000 11:01:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ODBC configure " } ]
[ { "msg_contents": "Hi all,\n\nPGPORT seems to be ignored in current cvs.\nThe following code in postmaster.c has little meaning because\nPostPortName is already set twice (in its declaration and\nProcessConfigFile()).\n\n\tif (PostPortName == 0)\n\t\tPostPortName = pq_getport();\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Mon, 5 Jun 2000 10:52:58 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "PGPORT ignored ?" }, { "msg_contents": "Hiroshi Inoue writes:\n\n> PGPORT seems to be ignored in current cvs.\n\nWell observed. Fixed now.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Tue, 6 Jun 2000 18:10:34 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGPORT ignored ?" } ]
[ { "msg_contents": "\njust upgraded to sendmail 8.10.1 for anti-spam features ... making sure I\nhaven't affected the lists ...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 5 Jun 2000 01:03:57 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "testing something ..." } ]
[ { "msg_contents": "\n\na test, ignore ...\n\n", "msg_date": "Mon, 5 Jun 2000 01:21:16 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": true, "msg_subject": "a test" } ]
[ { "msg_contents": "have people out there had equal success with both\nworkstation and server nt 4.0 ?\n\nalso what have success rates with 2000 been ?\n\njeff\n\n\n", "msg_date": "Mon, 5 Jun 2000 02:34:55 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": true, "msg_subject": "pg on NT" }, { "msg_contents": "Jeff MacDonald wrote:\n\n> have people out there had equal success with both\n> workstation and server nt 4.0 ?\n>\n> also what have success rates with 2000 been ?\n\nHi Jeff,\n\nI run PostgreSQL 7.0 with Cygwin-b20.1 on Windows NT 4.0,\nit works fine so far. I also built the binary, it's available at:\n\nftp://ftp.postgresql.org/pub/binary/v7.0/NT/postgresql-7.0-nt-binaries.tar.gz\n\n> jeff\n\n- Kevin\n\n", "msg_date": "Mon, 05 Jun 2000 23:12:58 +0800", "msg_from": "Kevin Lo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg on NT" } ]
[ { "msg_contents": "\njust ignore ...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 5 Jun 2000 02:36:39 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "testing ..." } ]
[ { "msg_contents": "\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 5 Jun 2000 02:38:13 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "test 2 ..." } ]
[ { "msg_contents": "\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 5 Jun 2000 03:00:58 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "ignore ..." } ]
[ { "msg_contents": "\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 5 Jun 2000 03:06:46 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "test, again ..." } ]
[ { "msg_contents": "\nleave a lot to be desired ... \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 5 Jun 2000 03:15:22 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "anti-relay features ..." } ]
[ { "msg_contents": "\n> > >\tSELECT *\n> > >\t INTO TABLE tmp1\n> > >\t FROM tmp\n> > >\t WHERE onek.unique1 < 2;\n> > >\tNOTICE: Adding missing FROM-clause entry for table onek\n> \n> > Is is worth adding yet another setting, eg. set sql92=strict, which\n> > would disallow such flagrant breaches of the standard?\n> \n> SQL provides for facility called the SQL Flagger, which is \n> supposed to do\n> exactly that. This might sound like an interesting idea but \n> in order for\n> it to be useful you'd have to maintain it across the board, \n> which sounds\n> like a major head ache.\n> \n> The irony in the given example is that the SELECT INTO \n> command isn't in\n> the standard in the first place so you'd have to create all sorts of\n> double standards. Certain things would be \"extensions\", certain things\n> would be \"misuse\". And for all it's worth, we have no idea \n> which is which.\n> \n> If you want to throw about warnings about \"probable\" coding \n> errors and the\n> like one *must* be able to switch them off. Either something is right,\n> then you shut up. Or it's wrong, then you throw an error. Or \n> you're not\n> sure, then you better leave it up to the user.\n\nYes, only Bruce and I are of the opinion that it *is* an Error, and I guess \nwe want some consensus.\nThe notice is imho of the sort: notice this syntax is going to be disallowed\nsoon.\n\nAndreas\n", "msg_date": "Mon, 5 Jun 2000 10:02:55 +0200 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: New warning code for missing FROM relations" }, { "msg_contents": "> > The irony in the given example is that the SELECT INTO \n> > command isn't in\n> > the standard in the first place so you'd have to create all sorts of\n> > double standards. Certain things would be \"extensions\", certain things\n> > would be \"misuse\". 
And for all it's worth, we have no idea \n> > which is which.\n> > \n> > If you want to throw about warnings about \"probable\" coding \n> > errors and the\n> > like one *must* be able to switch them off. Either something is right,\n> > then you shut up. Or it's wrong, then you throw an error. Or \n> > you're not\n> > sure, then you better leave it up to the user.\n> \n> Yes, only Bruce and I are of the opinion that it *is* an Error, and I guess \n> we want some consensus.\n> The notice is imho of the sort: notice this syntax is going to be disallowed\n> soon.\n\nI see the notice as \"Hey, you probably did something you didn't want to do\".\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Jun 2000 06:25:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: New warning code for missing FROM relations" }, { "msg_contents": "Bruce Momjian writes:\n\n> I see the notice as \"Hey, you probably did something you didn't want\n> to do\".\n\nAgain, \"probably\" means you're not sure, so you leave it up to the user to\nturn it on or off.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 6 Jun 2000 18:08:04 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: New warning code for missing FROM relations" }, { "msg_contents": "Zeugswetter Andreas SB writes:\n\n> Yes, only Bruce and I are of the opinion that it *is* an Error, and I\n> guess we want some consensus.\n\nI agree that it is an error.\n\n> The notice is imho of the sort: notice this syntax is going to be\n> disallowed soon.\n\nIf you can guarantee that each user will only see the notice once, then\nokay. 
:)\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Tue, 6 Jun 2000 18:09:00 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: New warning code for missing FROM relations" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Zeugswetter Andreas SB writes:\n> \n> > Yes, only Bruce and I are of the opinion that it *is* an Error, and I\n> > guess we want some consensus.\n> \n> I agree that it is an error.\n> \n> > The notice is imho of the sort: notice this syntax is going to be\n> > disallowed soon.\n> \n> If you can guarantee that each user will only see the notice once, then\n> okay. :)\n\nThere is no sense that this is a warning about the syntax changing at\nsome point. It is to warn queries that are probably broken.\n\nSeems if they already have a FROM clause, there is no purpose for some\ntables being in the FROM clause, and others being auto-created. In\nother cases, it issues no warning.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Jun 2000 12:13:31 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: New warning code for missing FROM relations" } ]
[ { "msg_contents": "\n> -----Original Message-----\n> From: Jeff MacDonald [mailto:[email protected]]\n> Sent: 05 June 2000 05:35\n> To: [email protected]\n> Subject: [HACKERS] pg on NT\n> \n> \n> have people out there had equal success with both\n> workstation and server nt 4.0 ?\n> \n> also what have success rates with 2000 been ?\n> \n> jeff\n> \n\nI'm running my own build of v7.0.0 on Windows 2000 Professional (Dell\nInspiron Laptop, PIII 500, 128Mb) with no problems at all if I manually start\nthe Postmaster from a bash shell. All my attempts to autostart (via scripts\nand tools like srvany) the postmaster have failed miserably to date. It's\nnot exactly quick though...\n\nRegards, \n \nDave. \n \n-- \nDisclaimer: the above is the author's personal opinion and is not the\nopinion or policy of his employer or of the little green men that have been\nfollowing him all day.\nhttp://www.vale-housing.co.uk/ - http://www.pgadmin.freeserve.co.uk/\n", "msg_date": "Mon, 5 Jun 2000 08:44:24 -0000 ", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "RE: pg on NT" } ]
[ { "msg_contents": "In looking at the INSTALL/install.sgml files, I see that there are no\ninstructions for removing the /data directory after the backup, so \ninitdb will succeed. Should that be suggested after the backup is\nperformed? If not, initdb will fail. Also, I have to add something to\nthe initdb step to tell 7.* users they don't need initdb. \n\nCan someone confirm my thinking on this?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Jun 2000 06:58:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "INSTALL/install.sgml file" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> In looking at the INSTALL/install.sgml files, I see that there are no\n> instructions for removing the /data directory after the backup, so \n> initdb will succeed. Should that be suggested after the backup is\n> performed? If not, initdb will fail. Also, I have to add something to\n> the initdb step to tell 7.* users they don't need initdb.\n\nWhat? It says \"move the old directories out of the way\" at the bottom\nof step 6.\n\nPossibly that should be promoted into a whole separate step, rather than\nbeing just an afterthought to killing the postmaster.\n\nThis step and step 11 should also mention pg_upgrade as a possible\nalternative to doing a full reload. 
(But encourage people to make\nthe backup anyway ;-).)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Jun 2000 11:37:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] INSTALL/install.sgml file " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > In looking at the INSTALL/install.sgml files, I see that there are no\n> > instructions for removing the /data directory after the backup, so \n> > initdb will succeed. Should that be suggested after the backup is\n> > performed? If not, initdb will fail. Also, I have to add something to\n> > the initdb step to tell 7.* users they don't need initdb.\n> \n> What? It says \"move the old directories out of the way\" at the bottom\n> of step 6.\n> \n\nOh, I see that now. The trick was that people upgrading from 7.0 or\n7.0.1 do not need to do pg_dumpall, nor move the old directory out of\nthe way, nor do an initdb, nor reload from pg_dumpall.\n\nI put a note about who should run pg_dumpall (6.5.* or earlier), and\nthen later I mention that \"If you did pg_dumpall...\" move the old\ndirectory out of the way, do initdb, and reload. Seems it is OK now. \nThanks.\n\n> Possibly that should be promoted into a whole separate step, rather than\n> being just an afterthought to killing the postmaster.\n> \n> This step and step 11 should also mention pg_upgrade as a possible\n> alternative to doing a full reload. (But encourage people to make\n> the backup anyway ;-).)\n\nI got that into step 5:\n\n Rather than using pg_dumpall, pg_upgrade can often be used.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Jun 2000 13:05:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] INSTALL/install.sgml file" }, { "msg_contents": "Bruce Momjian writes:\n\n> > This step and step 11 should also mention pg_upgrade as a possible\n> > alternative to doing a full reload. (But encourage people to make\n> > the backup anyway ;-).)\n> \n> I got that into step 5:\n> \n> Rather than using pg_dumpall, pg_upgrade can often be used.\n\nI think the installation instructions should say \"If you are upgrading\nfrom an existing installation, read the Administrator's Guide for backing\nand restoring your data.\" The said guide contains a chapter discussing\nthese issues in detail (or at least it contains a chapter on it and we\nshould add some detail :). This sort of thing can't be replaced by three\nsentences.\n\nJust the other day I griped about the fact that the installation\ninstructions are in fact a chapter of the administrator's guide, which\nbreaks the internal and external organization and the flow of information\nof both the installation instructions and the administrator's guide.\n\nCan anyone see that concern?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 6 Jun 2000 23:58:45 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] INSTALL/install.sgml file" }, { "msg_contents": "> I think the installation instructions should say \"If you are upgrading\n> from an existing installation, read the Administrator's Guide for backing\n> and restoring your data.\" The said guide contains a chapter discussing\n> these issues in detail (or at least it contains a chapter on it and we\n> should add some detail :). 
This sort of thing can't be replaced by three\nsentences.\n\nJust the other day I griped about the fact that the installation\ninstructions are in fact a chapter of the administrator's guide, which\nbreaks the internal and external organization and the flow of information\nof both the installation instructions and the administrator's guide.\n\nCan anyone see that concern?\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 6 Jun 2000 23:58:45 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] INSTALL/install.sgml file" }, { "msg_contents": "> I think the installation instructions should say \"If you are upgrading\n> from an existing installation, read the Administrator's Guide for backing\n> and restoring your data.\" The said guide contains a chapter discussing\n> these issues in detail (or at least it contains a chapter on it and we\n> should add some detail :). 
Schizophrenia Bad.\n\nIMHO, there's really a line between:\n* Installation Guide: how to get from sources to binaries (roughly)\n* Administration Guide: how to get from binaries to running system\n* Users (and other) Guide: how to make use of the running system\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 8 Jun 2000 19:02:49 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] INSTALL/install.sgml file" }, { "msg_contents": "> We can refer people, but how? Do you write \"please read more about this in\n> the Administrator's Guide\"? Then somebody who's reading the installation\n> instructions as part of the administrator's guide will think \"Well, duh,\n> thanks a lot\". Or you make a proper DocBook hyperlink which comes out\n> something like \"read more about this in `Backing up and Restoring'\". Then\n> somebody who reads the flat text file will say \"Gee, and where exactly is\n> that?\". Schizophrenia Bad.\n> \n> IMHO, there's really a line between:\n> * Installation Guide: how to get from sources to binaries (roughly)\n> * Administration Guide: how to get from binaries to running system\n> * Users (and other) Guide: how to make use of the running system\n\nGood questions. No good answers I can think of.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Jun 2000 13:11:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] INSTALL/install.sgml file" } ]
[ { "msg_contents": "At 10:02 5/06/00 +0200, Zeugswetter Andreas SB wrote:\n>\n>> > >\tSELECT *\n>> > >\t INTO TABLE tmp1\n>> > >\t FROM tmp\n>> > >\t WHERE onek.unique1 < 2;\n>> > >\tNOTICE: Adding missing FROM-clause entry for table onek\n>> \n>\n>Yes, only Bruce and I are of the opinion that it *is* an Error, and I guess \n>we want some consensus.\n>The notice is imho of the sort: notice this syntax is going to be disallowed\n>soon.\n>\n\nFWIW, I think it's an error too. For me, this situation is *far* more\nlikely to result from typos than intention, and I want to be told when it\nhappens. I also want to be prevented from doing it.\n\nI take it that there is no chance of a compile-time or runtime option to\ndisallow this kind of thing in all cases?\n\n \n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 05 Jun 2000 21:57:32 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: New warning code for missing FROM relations" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> I take it that there is no chance of a compile-time or runtime option to\n> disallow this kind of thing in all cases?\n\nDefine \"all cases\" ... 
also, just how strict do you want to be?\nDisallow use of *any* Postgres feature that is not in SQL92 (that\nwould include all user datatypes and functions, for example)?\n\nPostgres was never designed as an SQL compatibility checker, and\nI doubt that it'd be cost-effective to try to turn it into one.\nA standalone tool might be a better approach.\n\nPerhaps there is someone out there who wants this feature badly\nenough to do the legwork, but I'm not him ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Jun 2000 11:44:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: New warning code for missing FROM relations " }, { "msg_contents": "> Philip Warner <[email protected]> writes:\n> > I take it that there is no chance of a compile-time or runtime option to\n> > disallow this kind of thing in all cases?\n> \n> Define \"all cases\" ... also, just how strict do you want to be?\n> Disallow use of *any* Postgres feature that is not in SQL92 (that\n> would include all user datatypes and functions, for example)?\n> \n> Postgres was never designed as an SQL compatibility checker, and\n> I doubt that it'd be cost-effective to try to turn it into one.\n> A standalone tool might be a better approach.\n> \n> Perhaps there is someone out there who wants this feature badly\n> enough to do the legwork, but I'm not him ;-)\n\nAgreed. Seems the current warning level as configured is in the middle\nof people's expectations on this issue.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Jun 2000 13:06:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: New warning code for missing FROM relations" }, { "msg_contents": "At 11:44 5/06/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> I take it that there is no chance of a compile-time or runtime option to\n>> disallow this kind of thing in all cases?\n>\n>Define \"all cases\" ... also, just how strict do you want to be?\n>Disallow use of *any* Postgres feature that is not in SQL92 (that\n>would include all user datatypes and functions, for example)?\n\nSorry, I should have been a bit more specific! I would like some kind of\noption to disable all cases of adding tables to FROM clauses by\nimplication. The main issue I have with this feature is that it is more likely\nto be used (by me) by accident (as a result of a typo), and consequently\nintroduce very strange results.\n\nI am unaware (yet? ;-}) of any other non-SQL features in PostgreSQL that\nare likely to cause me the same level of concern. Adding tables to a query\nseems very dangerous: some people might, for example, expect an automatic\nnatural join on primary/foreign keys if you add a table.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-500 83 82 81 | _________ \\\nFax: +61-500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 06 Jun 2000 14:41:35 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: New warning code for missing FROM relations " } ]
[ { "msg_contents": "Hello-\n\nI've noticed a difference in the behavior between how v6.5.3 and v7.0.0\nhandle NULL assignments for char arrays (see below for the simple\nexample). Maybe I should be using '{}' to nullify a char array instead of\nNULL? Either way, it would seem to me that the behavior with v7.0.0\nshould be the same as v6.5.3. And, as you can see below, the response\nfrom v7.0.0 is to terminate the connection to the db backend... which\nseems severe.\n\nI am running a variety of i686 linux systems (RedHat 6.2 with home rolled\nv2.2.15 kernels), using src compiled Pg. I have been able to duplicate\nthis behavior on a number of machines.\n\n-Jon\n\nPS: This is a re-send of a message I posted 5 days ago (~1 June) which\nappears to not have made it to the list, probably due to something I\ndid. If you received this message before, my apologies.\n\n------------------------\nv6.5.3:\ntemplate1=> create table a (id char[2]);\nCREATE\ntemplate1=> insert into a (id) values (NULL);\nINSERT 112733 1\n\n------------------------\nv7.0.0:\ntemplate1=# create table a (id char[2]);\nCREATE\ntemplate1=# insert into a (id) values (NULL);\npqReadData() -- backend closed the channel unexpectedly.\n\tThis probably means the backend terminated abnormally\n\tbefore or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Failed.\n\n\n-**-*-*---*-*---*-*---*-----*-*-----*---*-*---*-----*-----*-*-----*---\n Jon Lapham, PhD\n Centro Nacional de Ressonancia Magnetica Nuclear de Macromoleculas\n Universidade Federal do Rio de Janeiro (UFRJ) - Brasil\n email: [email protected] \n***-*--*----*-------*------------*--------------------*---------------\n", "msg_date": "Mon, 5 Jun 2000 11:25:55 -0300 (BRT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Problems with char array set to NULL" }, { "msg_contents": "[email protected] writes:\n> v7.0.0:\n> template1=# create table a (id char[2]);\n> CREATE\n> template1=# insert into a (id) values (NULL);\n> pqReadData() -- backend closed the channel unexpectedly.\n\nThis is fixed in 7.0.1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Jun 2000 12:03:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with char array set to NULL " }, { "msg_contents": "Tom Lane wrote:\n> \n> [email protected] writes:\n> > v7.0.0:\n> > template1=# create table a (id char[2]);\n> > CREATE\n> > template1=# insert into a (id) values (NULL);\n> > pqReadData() -- backend closed the channel unexpectedly.\n>\n\nJust curious, but does postgreSQL support array _elements_ set to NULL.\n\n\nhannu=# insert into arr values ('{1,2,3}');\nINSERT 310985 1\nhannu=# insert into arr values ('{1,NULL,3}');\nERROR: pg_atoi: error in \"NULL\": can't parse \"NULL\"\n\nIf not, should it ?\n\n From my limited knowledge of the storage structure I'd suppose it does \nnot and that it would require changing the array machinery to do so.\n\nMaybe this question is more appropriate for [email protected]\nbut for some strange reason my subscription confirmation was rejected for\nall new pgsql-hackers-xxx lists with a message like this\n\n-----8<-------------------8<-------------------8<-------------------8<--------------\n\n>>>> accept\nToken for command:\n [email protected] pgsql-hackers-smgr Hannu 
Krosing\n<[email protected]>\nissued at: Wed May 24 15:04:23 2000 GMT\nfrom sessionid: b9ac0349622ef0032295b46f5510bad9\nwas accepted with these results:\n\n**** The following was not successfully added to pgsql-hackers-smgr:\n Hannu Krosing <[email protected]>\n\n1 valid command processed; its status is indeterminate.\n\n-----8<-------------------8<-------------------8<-------------------8<--------------\n\nHannu\n", "msg_date": "Wed, 07 Jun 2000 12:11:39 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with char array set to NULL" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Just curious, but does postgreSQL support array _elements_ set to NULL.\n\nNo. This is (or should be) on the TODO list.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Jun 2000 10:12:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with char array set to NULL " }, { "msg_contents": "> Hannu Krosing <[email protected]> writes:\n> > Just curious, but does postgreSQL support array _elements_ set to NULL.\n> \n> No. This is (or should be) on the TODO list.\n\nAdded.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jun 2000 15:55:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with char array set to NULL" } ]
[ { "msg_contents": "\n\nWe now have different answers for the same question:\n\n(DAY OF WEEK)\n\ntest=# select date_part('dow', now());\n date_part\n-----------\n 1\n(1 row)\n\ntest=# select to_char(now(), 'D');\n to_char\n---------\n 2\n(1 row)\n\n\nFor to_char() I use the POSIX definition of 'tm', where the week starts on Sunday.\n \nIs it right? (Excuse me, I searched the archive, but without much success...).\n\nOr will we support both styles?\n\n\t\t\t\t\t\tKarel\n\n\n\n", "msg_date": "Mon, 5 Jun 2000 22:52:51 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "day of week" }, { "msg_contents": "At 10:52 PM 6/5/00 +0200, Karel Zak wrote:\n>\n>\n>We now have different answers for the same question:\n>\n>(DAY OF WEEK)\n>\n>test=# select date_part('dow', now());\n> date_part\n>-----------\n> 1\n>(1 row)\n>\n>test=# select to_char(now(), 'D');\n> to_char\n>---------\n> 2\n>(1 row)\n>\n>\n>For to_char() I use the POSIX definition of 'tm', where the week starts on Sunday.\n> \n>Is it right? (Excuse me, I searched the archive, but without much success...).\n>\n>Or will we support both styles?\n\nto_char() gives the same answer as Oracle, as it is supposed to\nand as you intended it to.\n\nI personally don't find it all that disconcerting that the two give\ndifferent answers. Change the old PG way and lots of old code\nis likely to break. Change to_char() and the desired compatibility\nwith Oracle breaks.\n\nI think it boils down to needing good documentation ???\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 05 Jun 2000 14:03:46 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: day of week" }, { "msg_contents": "\n> >For to_char() I use the POSIX definition of 'tm', where the week starts on Sunday.\n> > \n> >Is it right? 
(Excuse me, I searched the archive, but without much success...).\n> >\n> >Or will we support both styles?\n> \n> to_char() gives the same answer as Oracle, as it is supposed to\n> and as you intended it to.\n> \n> I personally don't find it all that disconcerting that the two give\n> different answers. Change the old PG way and lots of old code\n> is likely to break. Change to_char() and the desired compatibility\n> with Oracle breaks.\n\n It is a Solomonic answer :-), but very well, I agree. \n\n> I think it boils down to needing good documentation ???\n\n OK, I will add it to to_char() and to date_[ trunc | part ].\n\n\nI'm just now working on 'week' support for date_trunc().\n\ndate_part() says that Monday is the first day, to_char() that it is the second\nday --- and what will date_trunc() say? Which date does a week start on, a\n'monday' or a 'sunday' date?\n\n Comments?\n\n(I vote for 'sunday' as the first day.)\n \n\t\t\t\t\tKarel\n\n\n\n", "msg_date": "Mon, 5 Jun 2000 23:34:29 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: day of week" }, { "msg_contents": "Don Baccus <[email protected]> writes:\n> At 10:52 PM 6/5/00 +0200, Karel Zak wrote:\n>> For to_char() I use the POSIX definition of 'tm', where the week starts on Sunday.\n>> \n>> Is it right? (Excuse me, I searched the archive, but without much success...).\n>> \n>> Or will we support both styles?\n\n> to_char() gives the same answer as Oracle, as it is supposed to\n> and as you intended it to.\n\nI don't think we should change to_char(), but it might make sense\nto create a SET variable that controls the start-of-week day for\ndate_part(); or just have several variants of 'dow' for different\nstart-of-week days. 
Different applications might reasonably want\ndifferent answers depending on what conventions they have to deal\nwith outside of Postgres.\n\nThomas Lockhart is usually our lead guy on datetime-related issues.\nLet's see what he thinks when he gets back from vacation (he's gone\ntill next week IIRC).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Jun 2000 20:30:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: day of week " }, { "msg_contents": "Karel Zak writes:\n\n> date_part() says that Monday is the first day, to_char() that it is\n> the second day --- and what will date_trunc() say? Which date does a\n> week start on, a 'monday' or a 'sunday' date?\n\nIn a perfect world, the answer would be locale dependent.\n\nIn many implementations Sunday is the first day of the week but counting\nstarts with 0, so you still get Monday as \"1\".\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 6 Jun 2000 18:07:52 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: day of week" }, { "msg_contents": "\nOn Tue, 6 Jun 2000, Peter Eisentraut wrote:\n\n> Karel Zak writes:\n> \n> > date_part() says that Monday is the first day, to_char() that it is\n> > the second day --- and what will date_trunc() say? Which date does a\n> > week start on, a 'monday' or a 'sunday' date?\n> \n> In a perfect world, the answer would be locale dependent.\n\n Hmm, I'm not sure about locale dependence --- if anyone uses 'dow' in\nsome calculation he needs control over this number. 
For me, Tom's idea with SET is better.\n\n> In many implementations Sunday is the first day of the week but counting\n> starts with 0, so you still get Monday as \"1\".\n\n\nAll this is a little messy; for example, week-of-year:\n\n\nFirst day of year:\n=================\n\nselect to_char('2000-01-01'::timestamp, 'WW Day D');\n to_char\n----------------\n 00 Saturday 7 <----- '00' --- here I have a bug\n\n\nOracle (8.0.5):\n~~~~~~~~~~~~~~\n\nSVRMGR> select to_char( to_date('31-Dec-1999', 'DD-MON-YYYY'), 'WW Day D')\nfrom dual;\nTO_CHAR(TO_DAT\n--------------\n53 Friday 6\n\nSVRMGR> select to_char( to_date('01-Jan-2000', 'DD-MON-YYYY'), 'WW Day D')\nfrom dual;\nTO_CHAR(TO_DAT\n--------------\n01 Saturday 7\n\n\nOracle always starts the first week directly on Jan-01, but counts day-of-week\ncorrectly... It is pretty dirty, but it is probably done in libc's mktime().\n\n\nWell, we will have both versions in PG:\n\noracle's to_char: \t\n\t* week-start is a sunday \n\t* first week starts on Jan-01, but day-of-week is counted continuously\n\nPG date_part/trunc: \t\n\t* week-start is monday\n\t* first week is the first full week in the new year (really?) \n\n\n\t\t\t\t\tKarel\n\n", "msg_date": "Tue, 6 Jun 2000 19:06:00 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: day of week" }, { "msg_contents": "Karel Zak writes:\n\n> Oracle always starts the first week directly on Jan-01, but counts day-of-week\n> correctly... It is pretty dirty, but it is probably done in libc's mktime().\n\nThe first week of the year is most certainly not (always) the week with\nJan-01 in it. My understanding is that it's the first week where the\nThursday is in the new year, but I might be mistaken. Here in Sweden much\nof the calendaring is done based on the week of the year concept, so I'm\npretty sure that there's some sort of standard on this. And sure enough,\n
And sure enough,\nthis year started on a Saturday, but according to the calendars that hang\naround here the first week of the year started on the 3rd of January.\n\n\n> Well, we will in PG both version:\n> \n> oracle's to_char: \t\n> \t* week-start is a sunday \n> \t* first week start on Jan-01, but day-of-week is count continual\n> \n> PG date_part/trunc: \t\n> \t* week-start in monday\n> \t* first week is a first full week in new year (really?) \n\nThe worst thing we could do is having an inconsistency here. Having a\nconfiguration option or two that applies to both sounds better.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n\n", "msg_date": "Wed, 7 Jun 2000 18:35:49 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: day of week" }, { "msg_contents": "\nOn Wed, 7 Jun 2000, Peter Eisentraut wrote:\n\n> Karel Zak writes:\n> \n> > The Oracle always directly set first week on Jan-01, but day-of-week count\n> > correct... It is pretty dirty, but it is a probably set in libc's mktime().\n> \n> The first week of the year is most certainly not (always) the week with\n> Jan-01 in it. My understanding is that it's the first week where the\n> Thursday is in the new year, but I might be mistaken. Here in Sweden much\n> of the calendaring is done based on the week of the year concept, so I'm\n> pretty sure that there's some sort of standard on this. And sure enough,\n> this year started on a Saturday, but according to the calendars that hang\n> around here the first week of the year started on the 3rd of January.\n\n You probably right. I belive that Thomas say more about it...\n\n> > \n> > oracle's to_char: \t\n> > \t* week-start is a sunday \n> > \t* first week start on Jan-01, but day-of-week is count continual\n> > \n> > PG date_part/trunc: \t\n> > \t* week-start in monday\n> > \t* first week is a first full week in new year (really?) 
\n> \n> The worst thing we could do is having an inconsistency here. Having a\n> configuration option or two that applies to both sounds better.\n\n Yes, but Oracle \"porters\" probably need the Oracle pseudo \ncalculation..\n\n For PG date_part/trunc, a SET (or anything like this) will be good.\n\n\t\t\t\t\t\t\tKarel\n\n", "msg_date": "Wed, 7 Jun 2000 18:56:42 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: day of week" }, { "msg_contents": "> You are probably right. I believe that Thomas will say more about it...\n\nto_char() is compatible with Oracle. date_part() is compatible with\nIngres (or should be). I've got the docs somewhere, but presumably I\nlooked at them when implementing this in the first place. Maybe not;\nwhat I have is compatible with Unix date formatting.\n\n> For PG date_part/trunc, a SET (or anything like this) will be good.\n\nLet's decide what these functions are for; in this case they are each\ncribbed from an existing database product, and should be compatible with\nthose products imho.\n\nbtw, the \"week of year\" issue is quite a bit more complex; it is defined\nin ISO-8601 and it does not correspond directly to a \"Jan 1\" point in\nthe calendar.\n\n - Thomas\n", "msg_date": "Tue, 13 Jun 2000 02:27:37 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: day of week" } ]
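The competing day- and week-numbering conventions debated in this last thread can be cross-checked outside the database. The sketch below is not PostgreSQL code; it is an illustration using Python's standard datetime module, whose isocalendar() implements the ISO 8601 week rule that Peter describes (week 1 is the week containing the first Thursday of the year). It reproduces the numbers quoted in the thread for 2000-01-01:

```python
import datetime

d = datetime.date(2000, 1, 1)            # Jan 1, 2000 -- a Saturday

# POSIX struct tm (tm_wday): Sunday=0 .. Saturday=6.
# Python's weekday() is Monday=0, so shift by one.
tm_wday = (d.weekday() + 1) % 7
print(tm_wday)                           # 6

# Oracle-style to_char 'D' (default territory): Sunday=1 .. Saturday=7,
# matching the "01 Saturday 7" Oracle output quoted above.
print(tm_wday + 1)                       # 7

# Oracle's 'WW' simply counts 7-day blocks starting from Jan 1:
print((d.timetuple().tm_yday - 1) // 7 + 1)                            # 1
print((datetime.date(1999, 12, 31).timetuple().tm_yday - 1) // 7 + 1)  # 53

# ISO 8601 (what the Swedish wall calendars follow): Monday=1 .. Sunday=7,
# and week 1 is the week containing the first Thursday of the year.
iso_year, iso_week, iso_dow = d.isocalendar()
print(iso_year, iso_week, iso_dow)       # 1999 52 6 -- Jan 1 still belongs to 1999

# The first ISO week of 2000 indeed starts on Monday, January 3 -- exactly
# as observed on the calendars in the thread.
iso_year, iso_week, iso_dow = datetime.date(2000, 1, 3).isocalendar()
print(iso_year, iso_week, iso_dow)       # 2000 1 1
```

Note that the ISO "first Thursday" rule is equivalent to saying that week 1 is the week containing January 4, which is why a year can have either 52 or 53 ISO weeks.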