threads |
---|
[
{
"msg_contents": "GIS style, but I don't (yet) use the geometric types. That project's\nbeen on hold for a while.\n\nSaying that though, another project I have - the scheduler I'm working\non for the Tass project (www.tass-survey.org) - will use postgresql for\nkeeping track of areas of the sky observed, but it's still in the\nplanning phase at the moment.\n\nSome of the code for the first project (the Java bit that handles the\nmap display) is on my web site, but it's not that stable, and doesn't\nhave any postgresql bits in it yet.\n\nPossibly when I get 7.0's JDBC driver out of the way, I'll return to it.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: The Hermit Hacker [mailto:[email protected]]\nSent: Sunday, December 05, 1999 10:29 PM\nTo: Thomas Lockhart\nCc: Robert Aldridge; Postgres Hackers List\nSubject: Re: [HACKERS] Re: Geometric Data Type in PostgreSQL\n\n\nOn Fri, 3 Dec 1999, Thomas Lockhart wrote:\n\n> > I'm a geographic information systems (GIS) professional and a (home)\n> > Linux user. After reading the documentation for the Geometric data\n> > types in PostgreSQL, I'm excited about the possibilities. Are you\n> > aware of any projects where the geometric data types in PostgreSQL\nare\n> > being used as the basis of a GIS or mapping package?\n> \n> Not specifically, though I do know that folks have used it to do\n> GIS-like things (e.g. given a location on the earth surface, identify\n> satellite tracks which are visible).\n\nIsn't Peter Mount using PostgreSQL & JDBC for a GIS project? \n\n\nMarc G. Fournier ICQ#7615664 IRC Nick:\nScrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org \n\n\n************\n",
"msg_date": "Tue, 7 Dec 1999 07:32:41 -0000 ",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Re: Geometric Data Type in PostgreSQL"
}
] |
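To illustrate the kind of GIS-like query mentioned in this thread ("given a location, identify which observed areas cover it") using only the built-in geometric types: the table, columns, and values below are hypothetical, and the "@" ("contained in or on") operator is used as described in the geometric-operator documentation. This is a sketch of the idea only, not code from either project discussed above.

    -- hypothetical table of observed sky fields stored as boxes
    CREATE TABLE observation (field_id int4, area box);
    INSERT INTO observation VALUES (1, '((10,20),(12,22))');
    INSERT INTO observation VALUES (2, '((30,40),(31,41))');

    -- which observed fields contain a given point of interest?
    SELECT field_id FROM observation WHERE point '(11,21)' @ area;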
[
{
"msg_contents": "A 6.6 release won't bother me, as I've not committed any of the 7.0\nchanges to JDBC yet. If there is a release before 7.0, nothing for JDBC\nwill change.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: The Hermit Hacker [mailto:[email protected]]\nSent: Monday, December 06, 1999 11:21 PM\nTo: Bruce Momjian\nCc: PostgreSQL-development\nSubject: Re: [HACKERS] When is 7.0 going Beta?\n\n\n\n\nWhat do we have now for a v6.6? I'm not against, just wondering if we\nhave enough to warrant a v6.6, that's all...\n\nOn Mon, 6 Dec 1999, Bruce Momjian wrote:\n\n> > I am concerned about a May release. That puts us at almost a year\nfrom\n> > the last major release in mid-June. That is too long. Seems like\nwe\n> > should have some release around February.\n> \n> Let's list the 7.0 items:\n> \n> \t Foreign Keys - Jan\n> \t WAL - Vadim\n> \t Function args - Tom\n> \t System indexes - Bruce\n> \t Date/Time types - Thomas\n> \t Optimizer - Tom\n> \t\n> \t Outer Joins - Thomas?\n> \t Long Tuples - ?\n> \n> None of these are done, except for the system indexes, and that is a\n> small item. It seems everyone wants a grand 7.0, but that is months\n> away.\n> \n> I propose we go into beta on 6.6 Jan 1, with final release Feb 1. We\n> certainly have enough for a 6.6 release.\n> \n> I recommend this so the 6.5.* enhancements are accessible to users\nnow,\n> rather than waiting another severel months while we add the above\nfancy\n> features.\n> \n> Also, I have never been a big fan of huge, fancy releases because they\n> take too long to become stable. Better for us to release what we have\n> now and work out those kinks.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick:\nScrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org \n\n\n************\n",
"msg_date": "Tue, 7 Dec 1999 08:46:16 -0000 ",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] When is 7.0 going Beta?"
}
] |
[
{
"msg_contents": "\n> Actually, Oracle has been moving *away* from this...more \n> recent versions\n> of Oracle recommend using the Operating System file systems, since, in\n> most cases, the Operating System does a better job, and its \n> too difficult\n> to have Oracle itself optimize internal for all the different variants\n> that it supports....\n\nActually Oracle has features that only work with raw/io, e.g. parallel\nserver.\nOnce you know how to handle the raw devs they are a lot more convenient\nthan flat files. If you have a 100 Gb + DB raw dev is the only way to go. \n\nAndreas\n",
"msg_date": "Tue, 7 Dec 1999 10:47:39 +0100 ",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] RAW I/O device"
},
{
"msg_contents": ">\n> Actually Oracle has features that only work with raw/io, e.g. parallel\n> server.\n> Once you know how to handle the raw devs they are a lot more convenient\n> than flat files. If you have a 100 Gb + DB raw dev is the only way to go.\n>\n\n I've seen a 800+ GB Oracle database using filesystem and\n performing well. But that needs a GOOD filesystem, not\n something like ext2.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 7 Dec 1999 14:37:28 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] RAW I/O device"
}
] |
[
{
"msg_contents": "\n> > Actually, Oracle has been moving *away* from this...more \n> recent versions\n> > of Oracle recommend using the Operating System file \n> systems, since, in\n\nThis is untrue. They need the raw devices for their big DB's\n(Parallel Server)\n\n> > most cases, the Operating System does a better job, and its \n> too difficult\n> > to have Oracle itself optimize internal for all the \n\nThis is especially true for Oracle, since the don't have very \nintelligent IO optimizations.\n\nGood DB sided IO optimization can outperform any OS easily.\nEspecially once you reach the RAM limits of your system.\nwrite cache, read ahead, indexed read ahead, \nincreased block size (up to 256k) .....\n\n> different variants\n> > that it supports....\n\nThere is not really that much difference.\n\nAndreas\n",
"msg_date": "Tue, 7 Dec 1999 11:42:35 +0100 ",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] RAW I/O device"
}
] |
[
{
"msg_contents": "\n> > new way for a faster and better database engine. I know (and agree) \n> > that it not is priority for next year(s?). But it is \n> interesting, and\n> > is prabably good remember it during development, and not \n> write (in future)\n> > features which close this good way. \n> \n> I would be very surprised to see any significant change in raw vs.\n> filesystem i/o on modern file systems, \n\nBeleive me, the difference is substantial. \nWhen you test this you will typically need DB's, that \nare larger than your OS file cache. \nSecond you need to add the memory, that is used by the OS to \ncache the DB files to the DB buffer cache when testing raw devs.\nTypically you will also use async IO.\n\nOne other advantage is, that extending/creating a big raw device\nis way faster (takes subseconds for 2 Gb) than creating a big file\n(takes minutes). This is especially a pain during restores.\nRestores to raw devices (including device/file creation) \nare typically only little slower than the backup. \n\nAndreas\n",
"msg_date": "Tue, 7 Dec 1999 12:06:08 +0100 ",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] RAW I/O device"
}
] |
[
{
"msg_contents": "www.postgresql.org doesn't speak about any port so I imagine nothing\nexist for now on that platform ; however you can release ODBC accesses.\n\nFabian\nhttp://www.geocities.com/lonestar_teklords\n",
"msg_date": "Tue, 7 Dec 1999 13:53:39 +0100 ",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "RE: [GENERAL] Postgresql in win9x"
}
] |
[
{
"msg_contents": "Hello,\n\ni am a novice postgresql user (and i like it very much!).\ni found out, that the ref.int. doesn't work via a constraining \ncreate table definition...\n\nnow my question: has anyone writen a 'compiler' that translates\nri-constraints into triggers and PL/pgSQL procedures?\nif so, please tell me where i can find it. if not, tell me about!\nperhaps i will write a 'compiler' to do that work automatically.\n\nthanks for your help!\n\nMatthias Oestreicher\n\nmailto:[email protected]\nor\nmailto:[email protected]\n\n",
"msg_date": "Tue, 7 Dec 1999 14:56:36 +0100 ",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Referential integrity"
}
] |
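For readers wondering what triggers generated by such a 'compiler' might look like: the following is a minimal hand-written sketch of the idea, with hypothetical parent(id) and child(parent_id) tables. It assumes PL/pgSQL has been installed in the database (e.g. with createlang) and uses the old-style trigger conventions of that era (a function returning opaque); it covers only the INSERT/UPDATE side on the referencing table, not ON DELETE behaviour on the referenced table.

    -- hypothetical tables: parent(id int4 primary key), child(parent_id int4)
    CREATE FUNCTION child_parent_id_check() RETURNS opaque AS '
    DECLARE
        dummy int4;
    BEGIN
        -- look for a matching parent row; FOUND is set by the SELECT INTO
        SELECT id INTO dummy FROM parent WHERE id = NEW.parent_id;
        IF NOT FOUND THEN
            RAISE EXCEPTION ''parent_id % has no matching row in parent'', NEW.parent_id;
        END IF;
        RETURN NEW;
    END;
    ' LANGUAGE 'plpgsql';

    CREATE TRIGGER child_parent_id_ri BEFORE INSERT OR UPDATE ON child
        FOR EACH ROW EXECUTE PROCEDURE child_parent_id_check();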
[
{
"msg_contents": "Hi All,\n\nIs there any reason for not allowing table aliases in\ndelete statements?\n\nI was trying to delete duplicates from an ascend log\ndatabase when I hit the following \"parse\" error.\n\n(Perhaps I shouldn't be using a correlated subquery!!)\n\nSimplified example follows.....\n\n\nemkxp01=> create table deltest ( sessionid int, respdate datetime );\nCREATE\nemkxp01=> insert into deltest values ( 1, now() );\nINSERT 58395 1\nemkxp01=> insert into deltest values ( 1, now() );\nINSERT 58396 1\nemkxp01=> insert into deltest values ( 2, now() );\nINSERT 58397 1\nemkxp01=> insert into deltest values ( 2, now() );\nINSERT 58398 1\nemkxp01=> select * from deltest s1 where s1.respdate not in ( select \nmin(s2.respdate) from deltest s2 where s1.sessionid = s2.sessionid);\n sessionid | respdate \n-----------+------------------------------\n 1 | Tue 07 Dec 22:32:08 1999 GMT\n 2 | Tue 07 Dec 22:32:19 1999 GMT\n(2 rows)\n\nemkxp01=> select * from deltest; \n sessionid | respdate \n-----------+------------------------------\n 1 | Tue 07 Dec 22:32:01 1999 GMT\n 1 | Tue 07 Dec 22:32:08 1999 GMT\n 2 | Tue 07 Dec 22:32:14 1999 GMT\n 2 | Tue 07 Dec 22:32:19 1999 GMT\n(4 rows)\n\nemkxp01=> delete from deltest s1 where s1.respdate not in ( select \nmin(s2.respdate) from deltest s2 where s1.sessionid = s2.sessionid);\nERROR: parser: parse error at or near \"s1\"\nemkxp01=> \n\n",
"msg_date": "Tue, 7 Dec 1999 22:38:08 +0000 (GMT)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Table aliases in delete statements?"
},
{
"msg_contents": "> emkxp01=> delete from deltest s1 where s1.respdate not in ( select \n> min(s2.respdate) from deltest s2 where s1.sessionid = s2.sessionid);\n> ERROR: parser: parse error at or near \"s1\"\n> emkxp01=> \n\nDon't use s1. Just refer to native deltest in the subquery. That\nshould reference the outer table.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 7 Dec 1999 19:23:55 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Table aliases in delete statements?"
},
{
"msg_contents": "Keith Parks <[email protected]> writes:\n> Is there any reason for not allowing table aliases in\n> delete statements?\n\nNot much, I suppose, but it's not in SQL92:\n\n <delete statement: searched> ::=\n DELETE FROM <table name>\n [ WHERE <search condition> ]\n\nThe expansion of <table name> doesn't mention anything about aliases.\n\nAs Bruce points out in another followup, there's no real need for\nan alias for the target table; if you have sub-selects that need\nindependent references to the target, you can always alias *them*.\nThe same goes for INSERT and UPDATE, which also take unadorned\n<table name> as the target table specification.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Dec 1999 01:27:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Table aliases in delete statements? "
},
{
"msg_contents": "Then <[email protected]> spoke up and said:\n> Keith Parks <[email protected]> writes:\n> > Is there any reason for not allowing table aliases in\n> > delete statements?\n> \n> As Bruce points out in another followup, there's no real need for\n> an alias for the target table; if you have sub-selects that need\n> independent references to the target, you can always alias *them*.\n> The same goes for INSERT and UPDATE, which also take unadorned\n> <table name> as the target table specification.\n\nUnless your query is going to be long enough to run into query length\nlimits, aliases are not your friends. Standard SQL they may be, but\naliases always end up obscuring queries to those who come along after\nyou. \n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================",
"msg_date": "8 Dec 1999 09:00:23 -0500",
"msg_from": "Brian E Gallew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Table aliases in delete statements? "
}
] |
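To make the advice above concrete: the DELETE target can stay unaliased and be referenced by its plain table name from the subquery, with aliases only on the subquery's own tables. A minimal sketch reusing the deltest example from this thread, assuming respdate distinguishes the duplicates within each sessionid (an EXISTS form equivalent to the NOT IN/min() query):

    -- keep only the earliest respdate per sessionid; no alias on the DELETE target
    DELETE FROM deltest
     WHERE EXISTS (SELECT 1
                     FROM deltest s2
                    WHERE s2.sessionid = deltest.sessionid
                      AND s2.respdate < deltest.respdate);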
[
{
"msg_contents": "\n>Bruce Momjian <[email protected]>\n\n>\n>> emkxp01=> delete from deltest s1 where s1.respdate not in ( select \n>> min(s2.respdate) from deltest s2 where s1.sessionid = s2.sessionid);\n>> ERROR: parser: parse error at or near \"s1\"\n>> emkxp01=> \n>\n>Don't use s1. Just refer to native deltest in the subquery. That\n>should reference the outer table.\n\nThat doesn't seem to work as 3 rows are deleted and not just the\ntwo duplicates.\n\nemkxp01=> delete from deltest where respdate not in ( select min(s2.respdate) \nfrom deltest s2 where sessionid = s2.sessionid);\nDELETE 3\nemkxp01=> select * from deltest;\n sessionid | respdate \n-----------+------------------------------\n 1 | Tue 07 Dec 22:32:01 1999 GMT\n(1 row)\n\nemkxp01=> \n\nKeith.\n\n",
"msg_date": "Wed, 8 Dec 1999 00:48:12 +0000 (GMT)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Table aliases in delete statements?"
},
{
"msg_contents": "> >Don't use s1. Just refer to native deltest in the subquery. That\n> >should reference the outer table.\n> \n> That doesn't seem to work as 3 rows are deleted and not just the\n> two duplicates.\n> \n> emkxp01=> delete from deltest where respdate not in ( select min(s2.respdate) \n> from deltest s2 where sessionid = s2.sessionid);\n> DELETE 3\n> emkxp01=> select * from deltest;\n> sessionid | respdate \n> -----------+------------------------------\n> 1 | Tue 07 Dec 22:32:01 1999 GMT\n> (1 row)\n\nNo. Use:\n\nemkxp01=> delete from deltest where respdate not in ( select min(s2.respdate) \nfrom deltest s2 where deltest.sessionid = s2.sessionid);\n ^^^^^^^\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 7 Dec 1999 20:05:18 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Table aliases in delete statements?"
}
] |
[
{
"msg_contents": "Keith Parks <[email protected]>\n>>Bruce Momjian <[email protected]>\n>\n>>\n>>> emkxp01=> delete from deltest s1 where s1.respdate not in ( select \n>>> min(s2.respdate) from deltest s2 where s1.sessionid = s2.sessionid);\n>>> ERROR: parser: parse error at or near \"s1\"\n>>> emkxp01=> \n>>\n>>Don't use s1. Just refer to native deltest in the subquery. That\n>>should reference the outer table.\n>\n>That doesn't seem to work as 3 rows are deleted and not just the\n>two duplicates.\n>\n>emkxp01=> delete from deltest where respdate not in ( select min(s2.respdate) \n>from deltest s2 where sessionid = s2.sessionid);\n>DELETE 3\n>emkxp01=> select * from deltest;\n> sessionid | respdate \n>-----------+------------------------------\n> 1 | Tue 07 Dec 22:32:01 1999 GMT\n>(1 row)\n>\n>emkxp01=> \n\nOoops sorry, it does work if I use the tablename.colname syntax.\n\nemkxp01=> delete from deltest where respdate not in ( select min(s2.respdate) \nfrom deltest s2 where deltest.sessionid = s2.sessionid);\nDELETE 2\nemkxp01=> select * from deltest; \n sessionid | respdate \n-----------+------------------------------\n 1 | Tue 07 Dec 22:32:01 1999 GMT\n 2 | Wed 08 Dec 00:48:59 1999 GMT\n(2 rows)\n\nemkxp01=> \n\n",
"msg_date": "Wed, 8 Dec 1999 00:52:03 +0000 (GMT)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Table aliases in delete statements?"
}
] |
[
{
"msg_contents": "I seem to run into a serious problem. With 6.5.x + FreeBSD 3.2, I get\na core under heavy load (16 or more concurrent users). FreeBSD 2.2.x\nseems more stable but soon or later same thing happens. Examing a\ncore, I found it segfaulted in hash_search(). It was not possible to\nget more info having a -g compiled backend becasue it did not fail if -\ng was given. It is likely that random memory corruptions occured. It\nis also reported by a user that he often sees:\n\n\tNOTICE: LockReplace: xid table corrupted\n\nNote that these problems never happen on Linux (even running 128 users\nare ok on Linux). Only FreeBSD is suffered as far as I can see(not\nsure about other *BSD). Increasing shmem or semaphore never helps.\n\nHow to reproduce the problem:\n\n1) obtain pgbench source from\nftp.sra.co.jp/pub/cmd/postgres/pgbench/pgbench-1.1.tar.gz\n\n2) unpack the archive and run configure\n\n3) edit the first line in Makefile\n\nPOSTGRESHOME = /usr/local/pgsql\n\n4) make. you will get an executable \"pgbench\" there.\n\n5) make a fresh DB (suppose it is \"test\")\n\n6) initialize DB\n\npgbench -i test\n\nthis will take for a while\n\n7) run the test\n\npgbench -n -c numeber_of_concurrent users test\n\nI see problems with numeber_of_concurrent users ~ 16 or more.\n\nHere are my shmem settings:\n\nshminfo:\n shmmax: 41943041 (max shared memory segment size)\n shmmin: 1 (min shared memory segment size)\n shmmni: 32 (max number of shared memory identifiers)\n shmseg: 8 (max shared memory segments per process)\n shmall: 10240 (max amount of shared memory in pages)\n\nseminfo:\n semmap: 40 (# of entries in semaphore map)\n semmni: 32 (# of semaphore identifiers)\n semmns: 256 (# of semaphores in system)\n semmnu: 30 (# of undo structures in system)\n semmsl: 256 (max # of semaphores per id)\n semopm: 100 (max # of operations per semop call)\n semume: 10 (max # of undo entries per process)\n semusz: 92 (size in bytes of undo structure)\n semvmx: 32767 (semaphore maximum value)\n semaem: 16384 (adjust on exit max value)\n\nAny thoughts?\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 08 Dec 1999 17:26:14 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "FreeBSD problem under heavy load"
},
{
"msg_contents": "Note that same phenomen happens on current too.\n--\nTatsuo Ishii\n\n> I seem to run into a serious problem. With 6.5.x + FreeBSD 3.2, I get\n> a core under heavy load (16 or more concurrent users). FreeBSD 2.2.x\n> seems more stable but soon or later same thing happens. Examing a\n> core, I found it segfaulted in hash_search(). It was not possible to\n> get more info having a -g compiled backend becasue it did not fail if -\n> g was given. It is likely that random memory corruptions occured. It\n> is also reported by a user that he often sees:\n> \n> \tNOTICE: LockReplace: xid table corrupted\n> \n> Note that these problems never happen on Linux (even running 128 users\n> are ok on Linux). Only FreeBSD is suffered as far as I can see(not\n> sure about other *BSD). Increasing shmem or semaphore never helps.\n> \n> How to reproduce the problem:\n> \n> 1) obtain pgbench source from\n> ftp.sra.co.jp/pub/cmd/postgres/pgbench/pgbench-1.1.tar.gz\n> \n> 2) unpack the archive and run configure\n> \n> 3) edit the first line in Makefile\n> \n> POSTGRESHOME = /usr/local/pgsql\n> \n> 4) make. you will get an executable \"pgbench\" there.\n> \n> 5) make a fresh DB (suppose it is \"test\")\n> \n> 6) initialize DB\n> \n> pgbench -i test\n> \n> this will take for a while\n> \n> 7) run the test\n> \n> pgbench -n -c numeber_of_concurrent users test\n> \n> I see problems with numeber_of_concurrent users ~ 16 or more.\n> \n> Here are my shmem settings:\n> \n> shminfo:\n> shmmax: 41943041 (max shared memory segment size)\n> shmmin: 1 (min shared memory segment size)\n> shmmni: 32 (max number of shared memory identifiers)\n> shmseg: 8 (max shared memory segments per process)\n> shmall: 10240 (max amount of shared memory in pages)\n> \n> seminfo:\n> semmap: 40 (# of entries in semaphore map)\n> semmni: 32 (# of semaphore identifiers)\n> semmns: 256 (# of semaphores in system)\n> semmnu: 30 (# of undo structures in system)\n> semmsl: 256 (max # of semaphores per id)\n> semopm: 100 (max # of operations per semop call)\n> semume: 10 (max # of undo entries per process)\n> semusz: 92 (size in bytes of undo structure)\n> semvmx: 32767 (semaphore maximum value)\n> semaem: 16384 (adjust on exit max value)\n> \n> Any thoughts?\n> --\n> Tatsuo Ishii\n> \n> ************\n",
"msg_date": "Wed, 08 Dec 1999 18:17:05 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] FreeBSD problem under heavy load"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> I seem to run into a serious problem. With 6.5.x + FreeBSD 3.2, I get\n> a core under heavy load (16 or more concurrent users). FreeBSD 2.2.x\n> seems more stable but soon or later same thing happens. Examing a\n> core, I found it segfaulted in hash_search().\n\nI've been looking into this without much success. I cannot reproduce it\nhere under HPUX --- I ran pgbench for several hours without seeing any\nproblem. I also made another pass over the dynahash.c code looking for\nportability bugs, but didn't find anything that looked promising. (The\ncode is ugly and fragile, but AFAICT it will work under existing usage\npatterns.) It's quite possible the problem is elsewhere and dynahash is\njust on the receiving end of a memory clobber ... but if so, we have\nvery little to go on in guessing where to look.\n\nCan anyone else reproduce the problem? Does anything show up in the\npostmaster log at or just before the crash?\n\n\t\t\tregards, tom lane\n\nPS: pgbench's configure fails on HPUX, because HP's compiler doesn't\nlike whitespace before #include. I modified configure.in like this:\n\nAC_TRY_LINK([#include <sys/time.h>\n#include <sys/resource.h>],\n\t [struct rlimit rlim;\n",
"msg_date": "Thu, 09 Dec 1999 22:45:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] FreeBSD problem under heavy load "
},
{
"msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> > I seem to run into a serious problem. With 6.5.x + FreeBSD 3.2, I get\n> > a core under heavy load (16 or more concurrent users). FreeBSD 2.2.x\n> > seems more stable but soon or later same thing happens. Examing a\n> > core, I found it segfaulted in hash_search().\n> \n> I've been looking into this without much success. I cannot reproduce it\n> here under HPUX --- I ran pgbench for several hours without seeing any\n> problem. I also made another pass over the dynahash.c code looking for\n> portability bugs, but didn't find anything that looked promising. (The\n> code is ugly and fragile, but AFAICT it will work under existing usage\n> patterns.) It's quite possible the problem is elsewhere and dynahash is\n> just on the receiving end of a memory clobber ... but if so, we have\n> very little to go on in guessing where to look.\n> \n> Can anyone else reproduce the problem? Does anything show up in the\n> postmaster log at or just before the crash?\n> \n> \t\t\tregards, tom lane\n\nI think I got it. in storage/lmgr/lock.c:WaitOnLock:\n\n\tchar\t\told_status[64],\n\t\t\t\tnew_status[64];\n\t\t:\n\t\t:\n\n\tstrcpy(old_status, PS_STATUS);\n\tstrcpy(new_status, PS_STATUS);\n\tstrcat(new_status, \" waiting\");\n\tPS_SET_STATUS(new_status);\n\t\t:\n\t\t:\n\tPS_SET_STATUS(old_status);\n\nThe current status string is copied into old_status, then the pointer\nto it is set to a gloable variable ps_status by PS_SET_STATUS\nmacro. Unfortunately old_status is on the stack, so once WaitOnLock\nreturns, ps_status would point to a garbage. In the subsequent call to\nWaitOnLock,\n\n\tstrcpy(old_status, PS_STATUS);\n\nwill copy garbage string into old_status. So if the string is longer\nthan 63, the stack would be broken. Note that this would not happen on\nLinux due to the difference of the definition of the macro. See\ninclude/utils/ps_status.h for more details.\n\nAlso, I don't understand why:\n\n\tstrcpy(new_status, PS_STATUS);\n\tstrcat(new_status, \" waiting\");\n\tPS_SET_STATUS(new_status);\n\nis necessary. Just:\n\n\tPS_SET_STATUS(\"waiting\");\n\nshould be enough. After doing some tests on my FreeBSD and Linux box,\nI will commit fixes to both current and 6.5 source tree.\n\n> PS: pgbench's configure fails on HPUX, because HP's compiler doesn't\n> like whitespace before #include. I modified configure.in like this:\n> \n> AC_TRY_LINK([#include <sys/time.h>\n> #include <sys/resource.h>],\n> \t [struct rlimit rlim;\n\nThanks. I will incorporate your fix.\n\nBTW, I think pgbench is usefull to detect this kind of problems. Can I\nput it into contrib or somewhere?\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 10 Dec 1999 14:56:58 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] FreeBSD problem under heavy load "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> The current status string is copied into old_status, then the pointer\n> to it is set to a gloable variable ps_status by PS_SET_STATUS\n> macro. Unfortunately old_status is on the stack, so once WaitOnLock\n> returns, ps_status would point to a garbage. In the subsequent call to\n> WaitOnLock,\n\n> \tstrcpy(old_status, PS_STATUS);\n\n> will copy garbage string into old_status. So if the string is longer\n> than 63, the stack would be broken. Note that this would not happen on\n> Linux due to the difference of the definition of the macro. See\n> include/utils/ps_status.h for more details.\n\nUgh. It wouldn't happen on HPUX either, because the PS_STATUS stuff\nall compiles as no-ops here. So that's why I couldn't see it.\n\nYou didn't say what you had in mind to fix this, but I think the safest\napproach would be to reserve an area to copy the PS_SET_STATUS string\ninto on *all* systems. Otherwise we'll just get bitten by this kind of\nbug again in future.\n\n> BTW, I think pgbench is usefull to detect this kind of problems. Can I\n> put it into contrib or somewhere?\n\nSounds like a good idea to me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Dec 1999 01:58:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] FreeBSD problem under heavy load "
},
{
"msg_contents": "> You didn't say what you had in mind to fix this, but I think the safest\n> approach would be to reserve an area to copy the PS_SET_STATUS string\n> into on *all* systems. Otherwise we'll just get bitten by this kind of\n> bug again in future.\n\nDone for current.\n\n> > BTW, I think pgbench is usefull to detect this kind of problems. Can I\n> > put it into contrib or somewhere?\n> \n> Sounds like a good idea to me.\n\nWill commit into contrib.\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 10 Dec 1999 19:30:57 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] FreeBSD problem under heavy load "
},
{
"msg_contents": "On 1999-12-10, Tatsuo Ishii mentioned:\n\n> BTW, I think pgbench is usefull to detect this kind of problems. Can I\n> put it into contrib or somewhere?\n\nUnder src/test there already is a bench subdirectory which I'm not sure\nwhat it is for, but pgbench might have good company there.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sat, 11 Dec 1999 03:01:29 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] FreeBSD problem under heavy load "
}
] |
[
{
"msg_contents": "\nNever mind. Fixed. I had forgotten to add a line to pg_amproc.h. So\nobvious, hard to imagine how I could have missed it... :-)\n\n\n> > Frank Cusack wrote:\n> > > \n> > > Solaris 2.6/sparc; postgres 6.5.1\n> > > \n> > > dns=> create table test (zone int4, net cidr, unique(zone, net));\n> > > NOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_zone_key' for table 'test'\n> > > CREATE\n> > > dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> > > INSERT 21750 1\n> > > dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> > > INSERT 21751 1\n> > > dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> > > INSERT 21752 1\n> > > dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> > > ERROR: Cannot insert a duplicate key into a unique index\n> > \n> > Yes, I reproduced this (Solaris 2.5/sparc). \n> > Seems like CIDR problem(??!):\n> \n> I see a more serious problem in the current source tree:\n> \t\n> \ttest=> create table test (zone int4, net cidr, unique(zone, net));\n> \tNOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_zone_key' for table 'test'\n> \tCREATE\n> \ttest=> insert into test (zone, net) values (1, '1.2.3/24');\n> \tERROR: fmgr_info: function 0: cache lookup failed\n> \n> Seems something is broken with CIDR, but not INET:\n> \t\n> \ttest=> create table test2 (x inet unique(x)); \n> \tERROR: parser: parse error at or near \"(\"\n> \ttest=> create table test2 (x inet, unique(x));\n> \tNOTICE: CREATE TABLE/UNIQUE will create implicit index 'test2_x_key' for table 'test2'\n> \tCREATE\n> \ttest=> insert into test2 values ('1.2.3.4/24');\n> \tINSERT 19180 1\n> \ttest=> create table test3 (x cidr, unique(x));\n> \tNOTICE: CREATE TABLE/UNIQUE will create implicit index 'test3_x_key' for table 'test3'\n> \tCREATE\n> \ttest=> insert into test3 values ('1.2.3.4/24');\n> \tERROR: fmgr_info: function 0: cache lookup failed\n> \n> The problem appears to be in _bt_mkscankey() and index_getprocid().\n> \n> Any ideas?\n> \n> Backtrace shows:\n> \n> ---------------------------------------------------------------------------\n> \n> #0 elog (lev=-1, fmt=0x817848e \"fmgr_info: function %u: cache lookup failed\")\n> at elog.c:94\n> #1 0x8135a47 in fmgr_info (procedureId=0, finfo=0x830a060) at fmgr.c:225\n> #2 0x80643f9 in ScanKeyEntryInitialize (entry=0x830a058, flags=0, \n> attributeNumber=2, procedure=0, argument=137404148) at scankey.c:65\n> #3 0x8083e70 in _bt_mkscankey (rel=0x8312230, itup=0x8309ee8) at nbtutils.c:56\n> #4 0x8079989 in _bt_doinsert (rel=0x8312230, btitem=0x8309ee8, \n> index_is_unique=1 '\\001', heapRel=0x82dfd38) at nbtinsert.c:52\n> #5 0x807eabe in btinsert (rel=0x8312230, datum=0x8309b28, \n> nulls=0x830a020 \" \", ht_ctid=0x8309e2c, heapRel=0x82dfd38) at nbtree.c:358\n> #6 0x81358d8 in fmgr_c (finfo=0x80476e8, values=0x80476f8, \n> isNull=0x80476df \"\") at fmgr.c:146\n> #7 0x8135c25 in fmgr (procedureId=331) at fmgr.c:336\n> #8 0x8073c6d in index_insert (relation=0x8312230, datum=0x8309b28, \n> nulls=0x830a020 \" \", heap_t_ctid=0x8309e2c, heapRel=0x82dfd38)\n> at indexam.c:211\n> #9 0x80ae3d9 in ExecInsertIndexTuples (slot=0x8309bf8, tupleid=0x8309e2c, \n> estate=0x8309950, is_update=0) at execUtils.c:1206\n> #10 0x80aa77e in ExecAppend (slot=0x8309bf8, tupleid=0x0, estate=0x8309950)\n> at execMain.c:1178\n> #11 0x80aa60e in ExecutePlan (estate=0x8309950, plan=0x83098b0, \n> operation=CMD_INSERT, offsetTuples=0, numberTuples=0, \n> direction=ForwardScanDirection, destfunc=0x817cdc4) at 
execMain.c:1024\n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 8 Dec 1999 06:36:23 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUGS] uniqueness not always correct"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Never mind. Fixed. I had forgotten to add a line to pg_amproc.h. So\n> obvious, hard to imagine how I could have missed it... :-)\n\n>> ERROR: fmgr_info: function 0: cache lookup failed\n\nThis seems a mighty unhelpful error message for a missing pg_amproc\nentry. Perhaps whatever code is doing the amproc lookup ought to be\nchecking for a failure and issuing a more specific message? I haven't\nlooked to see whether that's practical or not...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Dec 1999 10:49:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] uniqueness not always correct "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Never mind. Fixed. I had forgotten to add a line to pg_amproc.h. So\n> > obvious, hard to imagine how I could have missed it... :-)\n> \n> >> ERROR: fmgr_info: function 0: cache lookup failed\n> \n> This seems a mighty unhelpful error message for a missing pg_amproc\n> entry. Perhaps whatever code is doing the amproc lookup ought to be\n> checking for a failure and issuing a more specific message? I haven't\n> looked to see whether that's practical or not...\n\nNot practical. The lookup in fmgr is far away from the rd_strategy load\nfailure.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 8 Dec 1999 22:03:00 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [BUGS] uniqueness not always correct"
}
] |
[
{
"msg_contents": "Please note that SQLweb is a free public interface to SQL datatbases and\nwill work with postgresql. An E-Commerce Interface, just released, will\nbe valuable to PostgreSQL users who want to clear credit cards on-line.\n\nPlease feel free to add it to your list of 3rd party tools. SQLweb is\nan HTML interface to PostgreSQL, for making database web applications.\n\nSQLweb can be downloaded free from http://www.sqlweb.com\n\nRegards,\n\nDon Schindhelm\nSQLweb Technologies\nApplied Information Technologies, Inc\n410-203-1999\n\n",
"msg_date": "Wed, 08 Dec 1999 10:24:06 -0500",
"msg_from": "Don Schindhelm <[email protected]>",
"msg_from_op": true,
"msg_subject": "Free SQLweb interface to postgresql w/E-Commerce capabilities"
}
] |
[
{
"msg_contents": "Hi,\n\nI was able to crash postgres 6.5.3 when I did an 'alter user' command. \nAfter I started a debugger I found the problem in the timezone handling of \ndatetime (my Linux box lost its timezone information, that's how the \nproblem occurred).\n\nOnly 7 bytes are reserved for the timezone, without checking for boundaries.\n\nAttached is a patch that fixes this problem and emits a NOTICE if a \ntimezone is encountered that is longer than MAXTZLEN bytes, like this:\n\ntemplate1=# alter user postgres with password postgres;\nNOTICE: Invalid timezone 'Local time zone must be set--see zic manual page'\nNOTICE: Invalid timezone 'Local time zone must be set--see zic manual page'\nALTER USER\n\nI don't know whether the timezone should be reset to some predefined \nconstant (like \"GMT\") if an error like this occurs. This patch at least \ndirects the user in a general direction that something is wrong with his setup.\n\nCheers,\n\nJeroen",
"msg_date": "Wed, 08 Dec 1999 17:56:21 +0100",
"msg_from": "Jeroen van Vianen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Small timezone bug fixed"
},
{
"msg_contents": "\nApplied.\n\n\n> Hi,\n> \n> I was able to crash postgres 6.5.3 when I did an 'alter user' command. \n> After I started a debugger I found the problem in the timezone handling of \n> datetime (my Linux box lost its timezone information, that's how the \n> problem occurred).\n> \n> Only 7 bytes are reserved for the timezone, without checking for boundaries.\n> \n> Attached is a patch that fixes this problem and emits a NOTICE if a \n> timezone is encountered that is longer than MAXTZLEN bytes, like this:\n> \n> template1=# alter user postgres with password postgres;\n> NOTICE: Invalid timezone 'Local time zone must be set--see zic manual page'\n> NOTICE: Invalid timezone 'Local time zone must be set--see zic manual page'\n> ALTER USER\n> \n> I don't know whether the timezone should be reset to some predefined \n> constant (like \"GMT\") if an error like this occurs. This patch at least \n> directs the user in a general direction that something is wrong with his setup.\n> \n> Cheers,\n> \n> Jeroen \n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 9 Dec 1999 00:02:13 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Small timezone bug fixed"
}
] |
[
{
"msg_contents": "Hi,\n\nEver since I began working with Postgres, I have had one little irritating\nproblem with psql. It may be that I am mis-using this program; if so, my\nsuggestion is not helpful, however, if others have encountered this problem,\nperhaps the developers can look at a fix for 7.0?\n\nWhen I develop a new DB schema using psql, I usually first create a file, say\n\"mySchema.sql\". I then \"createdb\" the database, start up psql, and use the\ncommand \"\\i mySchema.sql\" to load in my new schema. There will be, needless to\nsay, several errors. These fall nicely below the offending line and I can look\nat fixing them. I drop the DB, re-edit my SQL file and re-do the \"\\i\" command.\n\nSometimes, however, rather than using the \"\\i\" command, I would like to simply\nload my schema directly into psql and capture the output on STDOUT (ie \"psql <\nmySchema.sql >& myOutput\"). The problem that arises is that the errors and\nnotices all come out on STDERR. I am not sure this is the right choice. Because\nof the lack of synchronization between STDOUT and STDERR, it becomes impossible\nto associate an SQL statement with either a CREATE or an ERROR message. The\noption, \"-e\", is supposed to echo the query, but it doesn't help.\n\nWhile I can see wanting to separate STDERR and STDOUT when one uses psql to run\nan SQL query against a DB from within a shell script, it makes it much more\ndifficult when developing, and if I were to run several SQL queries into psql,\nexactly the same association problem would occur.\n\nPerhaps a combination of the function \"isatty()\" plus the -e flag would work? So\nthat if STDOUT \"isatty()\" then echo errors to STDOUT, otherwise send them to\nSTDERR. And if the -e flag is set, echo the queries to STDERR, so the\ncorrelation between ERROR, CREATE, etc and SQL could be made.\n\nJust my $0.02.\n\nMark\n\nPS I only recently learned of the setting of the PAGER environment variable to\nmake it so I needn't scroll back up 400 lines to find my errors; perhaps this\ncould be made more prominent in the documentation as it would be a big help.\nThen again, perhaps I should completely re-read the docs to see if this is\nmentioned; I haven't done that for several releases now.\n\n--\nMark Dalphin email: [email protected]\nMail Stop: 29-2-A phone: +1-805-447-4951 (work)\nOne Amgen Center Drive +1-805-375-0680 (home)\nThousand Oaks, CA 91320 fax: +1-805-499-9955 (work)\n\n\n\n",
"msg_date": "Wed, 08 Dec 1999 10:03:46 -0800",
"msg_from": "Mark Dalphin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Suggested \"minor\" change to psql"
},
{
"msg_contents": "\n\n\tSomebody can help me?\n\n\tI need to know which is the maximum size of the database in\nPostgresql and how many records I can keeps into it?\n\n\t\tTnaks!\n\nSonia Sanchez Diaz\n\tUNAM_FCA_CIFCA_Admon.Red\n\te-mail: [email protected]\n\n\n",
"msg_date": "Wed, 8 Dec 1999 12:27:44 -0600 (CST)",
"msg_from": "Sanchez Diaz Sonia <[email protected]>",
"msg_from_op": false,
"msg_subject": "Size of database"
},
{
"msg_contents": " When I develop a new DB schema using psql, I usually first create a file, say\n \"mySchema.sql\". I then \"createdb\" the database, start up psql, and use the\n command \"\\i mySchema.sql\" to load in my new schema. There will be, needless to\n say, several errors. These fall nicely below the offending line and I can look\n at fixing them. I drop the DB, re-edit my SQL file and re-do the \"\\i\" command.\n\nI think the new stuff allows separating or merging different output\n\"channels\" so that psql can be run in the different ways you wish.\n\nHowever, this does raise another issue that might make debugging\nscripts run through psql easier. I have found that emacs compile\nbuffer semantics are extremely useful for debugging source code, and\nsuggest that error messages from psql follow something similar (at\nleast as an option) to aid in script debugging. The output of\ncompiler error messages generally gives a filename:linenumber prefix\nto the message; emacs can parse that an put you exactly at the correct\npoint for fixing the error.\n\nIf psql would also output messages in the same form, i.e.,\n\n filename:linenumber: error message\n\nthen scripts run in emacs compile buffer (easily done either directly\nor with make) could be rapidly debugged using the normal mechanisms\navailable for \"normal\" source code debugging.\n\nI realize that everyone does not use emacs, but I can't see how\nincluding that information would be detrimental to anyone. It gives\nmore information useful for anyone debugging scripts.\n\nCheers,\nBrook\n",
"msg_date": "Wed, 8 Dec 1999 11:54:26 -0700 (MST)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Suggested \"minor\" change to psql"
},
{
"msg_contents": "What's wrong with pgsql -d xxxx -c '\\i myschema' > databaseload.logfile ?\n\nSeems to work OK for me. \n\nYou can always use the 2>&1 syntax to redirect STDERR to STDOUT as well.\n\nYours,\nMoray\n\n\n\n\n\n\n",
"msg_date": "Wed, 8 Dec 1999 19:06:15 +0000 (GMT)",
"msg_from": "Moray McConnachie <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Suggested \"minor\" change to psql"
},
{
"msg_contents": "Look at:\n\nhttp://www.postgresql.org/docs/faq-english.html#4.6\n\nDaniel Stolk\n\nSanchez Diaz Sonia wrote:\n> \n> Somebody can help me?\n> \n> I need to know which is the maximum size of the database in\n> Postgresql and how many records I can keeps into it?\n> \n> Tnaks!\n> \n> Sonia Sanchez Diaz\n> UNAM_FCA_CIFCA_Admon.Red\n> e-mail: [email protected]\n> \n> ************\n",
"msg_date": "Wed, 08 Dec 1999 11:22:54 -0800",
"msg_from": "Daniel Stolk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Size of database"
},
{
"msg_contents": "Mark Dalphin wrote:\n\n> Sometimes, however, rather than using the \"\\i\" command, I would like to simply\n> load my schema directly into psql and capture the output on STDOUT (ie \"psql <\n> mySchema.sql >& myOutput\"). The problem that arises is that the errors and\n> notices all come out on STDERR. I am not sure this is the right choice. Because\n> of the lack of synchronization between STDOUT and STDERR, it becomes impossible\n> to associate an SQL statement with either a CREATE or an ERROR message. The\n> option, \"-e\", is supposed to echo the query, but it doesn't help.\n\nI have experienced this problem as well. It is a bit of a pain. I would love to\nhear how others are handling this. I have one partial workaround.\n\n % psql -d test -f createdb.sql 2>&1 | less\n\nFor whatever reason, the above seems to keep the msgs fairly synchronized (at least\non Redhat 6.0), making it useful for visual inspection of short loads.\nUnfortunately, that approach far exceeds my patience for my situation. I'm\nfrequently recreating 150 tables and redoing ~1400 INSERTs via psql with input\nscripts. That takes about 4 minutes on a dual PII 450 and generates ~15K lines of\noutput (~500 PAGER pages @30 lines/page). Instead, I pipe STDERR/STDOUT to a file,\nand then grep the file for 'INSERT 0 0', 'ERROR', and other problem signs. I've\ngotten pretty good at matching up the error msgs with the problem by interspersing\njudiciously comments and queries, but it's still a pain.\n\nIt'd be nice to be able to get all psql msgs sync'ed on either STDERR or STDOUT.\n\nCheers.\nEd\n\n\n\n\n",
"msg_date": "Wed, 08 Dec 1999 13:25:14 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Suggested \"minor\" change to psql"
},
{
"msg_contents": "On 1999-12-08, Mark Dalphin mentioned:\n\n> Sometimes, however, rather than using the \"\\i\" command, I would like to simply\n> load my schema directly into psql and capture the output on STDOUT (ie \"psql <\n> mySchema.sql >& myOutput\"). The problem that arises is that the errors and\n> notices all come out on STDERR. I am not sure this is the right choice. Because\n> of the lack of synchronization between STDOUT and STDERR, it becomes impossible\n> to associate an SQL statement with either a CREATE or an ERROR message. The\n> option, \"-e\", is supposed to echo the query, but it doesn't help.\n\nYou might be glad to hear that I've been addressing these issues. The way\nit currently looks is that everything that is related to backend traffic\n(query results, INSERT xxx, notices, errors) will all go to the same\nstream (the \\o one) in the order they arrive. I think this is what\neveryone wanted. If you are running interactively, it doesn't make a\ndifference anyway, but in a automated script you'll rarely have the need\nto have the errors without the commands that caused them.\n\nThe only thing that will keep going to stderr are fatal notices from psql\nitself. The only thing that always goes to stdout is psql internal\nmessages (\"Turned on expanded mode.\").\n\nOne additional feature that's coming up, which you might like, is the\npossibility to stop such a psql script after the first error it\nencounters.\n\n> While I can see wanting to separate STDERR and STDOUT when one uses psql to run\n> an SQL query against a DB from within a shell script, it makes it much more\n> difficult when developing, and if I were to run several SQL queries into psql,\n> exactly the same association problem would occur.\n\nYou can check the return code and decide what to do with the output that\nway.\n\n> Perhaps a combination of the function \"isatty()\" plus the -e flag would work? So\n> that if STDOUT \"isatty()\" then echo errors to STDOUT, otherwise send them to\n> STDERR. And if the -e flag is set, echo the queries to STDERR, so the\n> correlation between ERROR, CREATE, etc and SQL could be made.\n\nThere are already about 4 or 5 different output sources and 2 or 3 states\ncontrolling them; I'm hesitant to adding more confusion, especially\nsubtle things.\n\nAlso, the meaning of the -e flag has been adjusted. In interactive mode it\ndoesn't do anything, in script mode it prints every line as it reads it.\nIf you don't give it, you don't see the code of your script. That is more\nlike a regular shell.\n\n> PS I only recently learned of the setting of the PAGER environment variable to\n> make it so I needn't scroll back up 400 lines to find my errors; perhaps this\n> could be made more prominent in the documentation as it would be a big help.\n\nThat part has been changed, because the purpose of the PAGER environment\nvariable in general is not to toggle the use of the pager in psql. There\nis now an internal switch.\n\n> Then again, perhaps I should completely re-read the docs to see if this is\n> mentioned; I haven't done that for several releases now.\n\nWell, I rewrote the complete manual, so you're in for a great work of\nliterature. :)\n\n\nWhen will you be able to reach this promised land? You could start by\nflaming the hackers list about a 6.6 release in Feb/Mar ... ;)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Fri, 10 Dec 1999 02:28:41 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Suggested \"minor\" change to psql"
}
] |
[
{
"msg_contents": "Hi,\n\nis anyone interested in, or actually working on advanced issues\nwith PostgreSQL, like:\n\n- recursive queries and transitive closure optimization\n\n- more sophisticated representation incomplete information\n (different null values)\n\n- probabilistic relations\n\n- temporal (preferrably bi-temporal) relations\n\nI am planning to use PostgreSQL for medical information and medical\nknowledge, so I run into all these problems. I wonder whether it\nmakes sense to tweak on PostgreSQL for these matters or whether a\nmore generic approach (where the client does all the advanced stuff)\nis more realistic.\n\nany ideas appreciated,\n-Gunther Schadow\n\n-- \nGunther_Schadow-------------------------------http://aurora.rg.iupui.edu\nRegenstrief Institute for Health Care\n1050 Wishard Blvd., Indianapolis IN 46202, Phone: (317) 630 7960\[email protected]#include <usual/disclaimer>",
"msg_date": "Wed, 08 Dec 1999 15:15:19 -0500",
"msg_from": "Gunther Schadow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Advanced projects ... anyone interested?"
},
{
"msg_contents": "On Wed, Dec 08, 1999 at 03:15:19PM -0500, Gunther Schadow wrote:\n> is anyone interested in, or actually working on advanced issues\n> with PostgreSQL, like:\n\nYes and no.\n\n> - recursive queries and transitive closure optimization\n\nYup, that's the topic I'm interested in.\n\n> I am planning to use PostgreSQL for medical information and medical\n> knowledge, so I run into all these problems. I wonder whether it\n> makes sense to tweak on PostgreSQL for these matters or whether a\n> more generic approach (where the client does all the advanced stuff)\n> is more realistic.\n\nIMO the backend should be expanded to handle this stuff. In fact I had\nactual plans to add the recursive stuff but have not even found enough time\nto dig into the source code. \n\nFor those who don't know, I made my Ph.D. in deductive database systems\nwhich is mostly about recursive queries and transitive closures.\n\nMichael\n\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Thu, 9 Dec 1999 08:43:51 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Advanced projects ... anyone interested?"
}
] |
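For context on what "recursive queries and transitive closure" means in SQL terms: later versions of the SQL standard (and much later PostgreSQL releases) express this with WITH RECURSIVE, which the backend discussed in this thread did not yet support. A minimal sketch over a hypothetical edges(parent, child) table, computing the full transitive closure; UNION (rather than UNION ALL) discards repeated rows so the recursion terminates even on cyclic data:

    WITH RECURSIVE closure(ancestor, descendant) AS (
        SELECT parent, child FROM edges
      UNION
        SELECT c.ancestor, e.child
          FROM closure c, edges e
         WHERE e.parent = c.descendant
    )
    SELECT * FROM closure;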
[
{
"msg_contents": "\nBrian E Gallew <[email protected]>\n>Then <[email protected]> spoke up and said:\n>> Keith Parks <[email protected]> writes:\n>> > Is there any reason for not allowing table aliases in\n>> > delete statements?\n>> \n>> As Bruce points out in another followup, there's no real need for\n>> an alias for the target table; if you have sub-selects that need\n>> independent references to the target, you can always alias *them*.\n>> The same goes for INSERT and UPDATE, which also take unadorned\n>> <table name> as the target table specification.\n>\n>Unless your query is going to be long enough to run into query length\n>limits, aliases are not your friends. Standard SQL they may be, but\n>aliases always end up obscuring queries to those who come along after\n>you. \n\nThe problem is that it's difficult to refer to the same table twice\nin a single query without using aliases.\n\nThe trap I fell into was thinking I had to alias both references to\nthe table.\n\nI'd be interested in seeing alternative solutions to the duplicate\nremoval problem.\n\nKeith.\n\n",
"msg_date": "Wed, 8 Dec 1999 22:32:15 +0000 (GMT)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Table aliases in delete statements? "
},
{
"msg_contents": "> >Unless your query is going to be long enough to run into query length\n> >limits, aliases are not your friends. Standard SQL they may be, but\n> >aliases always end up obscuring queries to those who come along after\n> >you. \n> \n> The problem is that it's difficult to refer to the same table twice\n> in a single query without using aliases.\n> \n> The trap I fell into was thinking I had to alias both references to\n> the table.\n> \n> I'd be interested in seeing alternative solutions to the duplicate\n> removal problem.\n\nYes, that is tricky in that there is an aliased version and a non-alias\nversion of the same table.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 8 Dec 1999 22:20:14 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Table aliases in delete statements?"
}
] |
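Following Tom's point that only the sub-select needs an alias, a sketch of the duplicate-removal DELETE Keith is after; the table and column names are invented for the example.

-- Delete every row that has a duplicate with a lower oid; the target table
-- keeps its plain name, only the inner reference gets the alias.
DELETE FROM dups
 WHERE EXISTS (SELECT 1
                 FROM dups d
                WHERE d.col1 = dups.col1
                  AND d.col2 = dups.col2
                  AND d.oid < dups.oid);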
[
{
"msg_contents": "Hello,\n\nI've a problem and i am looking for help?\n\nI have a table with a field varchar(5), filed with right alligned numbers.\nOrdering was fine, just like we expected compared with nummeric.\n\nStarting from RedHat version 6.1 the ordening seems to remove the leading\nblanco's, what was not for use. I try different versions of postgres, as\nthere are 6.5.2-1, 6.5.3-1, 6.5.3-2\nI also try to change varchar in char, and remove the index on the varchar\nfield but nothing helps.\n\nGoing back to redhat 6.0 (and try with the same postgres versions) ordering\nbecomes fine again.\n\n\nAny idea??\nFrans\n\n\n",
"msg_date": "Wed, 08 Dec 1999 23:46:58 +0100",
"msg_from": "Frans Van Elsacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql 6.5.3-2 for redhat 6.1"
},
{
"msg_contents": "On Wed, 08 Dec 1999, Frans Van Elsacker wrote:\n> I have a table with a field varchar(5), filed with right alligned numbers.\n> Ordering was fine, just like we expected compared with nummeric.\n> \n> Starting from RedHat version 6.1 the ordening seems to remove the leading\n> blanco's, what was not for use. I try different versions of postgres, as\n> there are 6.5.2-1, 6.5.3-1, 6.5.3-2\n> I also try to change varchar in char, and remove the index on the varchar\n> field but nothing helps.\n\nAs I e-mailed to you before, I cannot reproduce this behaviour on my\ninstallation of RedHat 6.1. If you can provide a session transcript for the\ncreate, some inserts, and a select, then I might be able to help. What is your\nlocale set to, out of curiousity?\n\nIf I do the following:\nCREATE TABLE BLANK (column1 varchar(5));\nINSERT INTO BLANK (column1) VALUES (' 12');\nINSERT INTO BLANK (column1) VALUES (' 212');\nINSERT INTO BLANK (column1) VALUES (' 3212');\nINSERT INTO BLANK (column1) VALUES ('3212 ');\n\nthen:\nSELECT * FROM BLANK;\n\nproduces:\ncolumn1\n-------\n 12\n 212\n 3212\n3212 \n(4 rows)\n\nJust like it's supposed to.\n\nPostgreSQL 6.5.3-2nl on RedHat 6.1.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 8 Dec 1999 21:45:19 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgresql 6.5.3-2 for redhat 6.1"
}
] |
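A sketch of the ordering test behind the report, using Lamar's BLANK table from the message above; the difference Frans describes is the kind of thing a locale change can cause, which is presumably why Lamar asks about it -- that reading is an assumption, not something the thread settles.

SELECT column1 FROM BLANK ORDER BY column1;

-- With LC_COLLATE=C the rows come back in byte order, so the right-aligned
-- values sort like numbers; with a locale such as en_US the leading blanks
-- may carry little or no collation weight and '3212 ' can sort next to ' 3212'.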
[
{
"msg_contents": "\nI am trying to make an index on three columns (text, int2, date)\non a table with 193 million records. I believe I am finding that\nthe pg_sorttemp files reach 2GB before the index finishes. \nThe backend finishes the index but it's clearly missing tuples.\n\nDoes this make sense to you folks?\n\n--Martin\n\nP.S. My text field only contains a single char and I realize that\nI was foolish not use a char(1) instead of varchar . . . \n\n",
"msg_date": "Wed, 8 Dec 1999 23:06:01 -0500",
"msg_from": "Martin Weinberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_sorttemp hits 2GB during index construction"
},
{
"msg_contents": "Martin Weinberg <[email protected]> writes:\n> I am trying to make an index on three columns (text, int2, date)\n> on a table with 193 million records. I believe I am finding that\n> the pg_sorttemp files reach 2GB before the index finishes. \n\n2GB/193million is only about 10 (bytes per index tuple), and your\nindex tuples obviously will need more than 10 bytes apiece, so\nyeah, you can't do that in 6.5.*. It'd lose even without the fact\nthat sorts in 6.5.* require more space than the actual data volume.\n\nOne possible workaround is to define the indexes while the table\nis empty and then fill the table. You could probably have not only\na coffee break but a full-course meal while the data is loading,\nbut at least it'd work.\n\n> The backend finishes the index but it's clearly missing tuples.\n\nYeah :-(. The 6.5 sort code fails to notice write errors on the temp\nfiles, so lost tuples would be the likely result of file overflow.\n\nThese problems are fixed in current sources, but I dunno if you\nwant to run bleeding-edge development code just to get work done...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Dec 1999 01:20:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_sorttemp hits 2GB during index construction "
}
] |
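A sketch of the workaround Tom suggests: build the three-column index while the table is empty, then load, so the external sort (and its pg_sorttemp files) never happens. Table, column, and file names are illustrative; the column types follow Martin's description.

CREATE TABLE obs (flag char(1), field int2, obsdate date);
CREATE INDEX obs_key ON obs (flag, field, obsdate);

-- Load afterwards; each tuple is added to the index as it arrives --
-- slow, as Tom warns, but with no 2GB temp file.
COPY obs FROM '/tmp/obs.data';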
[
{
"msg_contents": "Ok, I just had a brainstorm for those who want to get a pooling-capable\npostmaster up and running.\n\nDo the following:\n\n1.)\tGrab the GPL'd AOLserver 3.0 from aolserver.lcs.mit.edu\n2.)\tStrip out the webserver stuff.\n3.)\tWrite a communications module for AOLserver that implements the\nPostgreSQL FE-BE protocol. \n4.)\tThat module then redirects said communications to a backend\nunder the pool.\n5.)\t Of course, the real postmaster has to run on a different port, as the\nAOLserver database driver still will need to initiate connections through\npostmaster.\n6.)\tA few other minor issues will need to be handled, such as\nmulti-database enabling the AOLserver pool mechanism.\n\nAdvantages: AOLserver is multithreaded -- this thing could be made quite fast,\nwith low latency.\n\nDisadvantages: Adds yet another layer of communication, unless some really\ncreative coding can be done, thus, post-connect throughput is likely to suffer.\nAOLserver is GPL'd, so code lifted from it could not be integrated into the main\nPostgreSQL tree.\n\nFor those who think they need pooled connections and are not already running\nAOLServer...\n\n--\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Wed, 8 Dec 1999 23:37:37 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Pooled postgresql backends"
}
] |
[
{
"msg_contents": "[Hackers: Can anyone comment on my idea on how point 1 below could be\ndone, or not if thats the case? Thanks, Peter]\n\n-----Original Message-----\nFrom: Assaf Arkin [mailto:[email protected]]\nSent: Wednesday, December 08, 1999 7:41 PM\nTo: Peter Mount\nCc: [email protected]\nSubject: Re: [INTERFACES] Transaction support in 6.5.3/JDBC\n\n\n> PM: JDBC based code should never issue begin/commit/rollback commands,\n> and should use the similarly named methods in Connection. This is\n> because a JDBC driver could be issuing these statements internally,\nand\n> it would be confused. With our driver, you could currently get away\nwith\n> it, but it's not guaranteed to stay that way.\n\nInside a transaction, the application should not even use\ncommit/rollback on the JDBC connection, only through the transaction\nmonitor API. This is easy to solve, I simply return a ClientConnection\nwrapper that prevents that. But someone can still send a commit/rollback\nstatement directly through the JDBC driver.\n\nWhat I'm more afraid of is some operation that will cause a\ncommit/rollback to occur, e.g. a failed update, a trigger or stored\nprocedure.\n\nPM: This is tricky. Some JDBC drivers do parse the SQL before sending to\nthe backend, but we dont - mainly because its faster to let the\nbackend's parser do the job, and also it keeps our size down. The latter\naffects applet users more than anything else.\n\nPM: I suppose we could add a check for the simplest cases, ie sql\ncontaining just \"begin\" \"commit\" \"rollback\" etc, but it won't catch all\npossible cases.\n\n> PM: Hmmm, in theory if a transaction is in a dead state (ie: an SQL\n> statement failed, so anything else is ignored until the rollback),\nthere\n> should be a message in the notify queue. Our JDBC driver keeps these\nin\n> the warnings queue, so you could read them prior to calling commit()\n> yourself.\n\nThanks I'll try to look that out.\n\n\nI've minimized all the special requirements I need from the driver to\nthree methods calls:\n\n1. enbleSQLTransactions -- prevents a commit/rollback from being\nexecuted directly in SQL; you can never be too careful ;-)\n\nPM: I wonder if we can get this functionality in the backend's parser -\nie, for the API interfaces, they can set a variable on startup that\ndisables begin, commit and rollback, then when they need to use them, it\ncan then either temporarily clear the variable, or use a prefix that\nforces the statement to work?\n\nPM: enableSQLTransactions can then act immediately above this\nfunctionality.\n\n2. prepare -- should return false if the transaction is read-only, true\nif it will commit, throw an exception if it will rollback\n\n3. isCriticalError -- should tell me if a critical error occured in the\nconnection and the connection is no longer useable\n\nHow do I detect no. 3? Is there are certain range of error codes, should\nI just look at certain PSQLExceptions as being critial (e.g. all I/O\nrelated errors)?\n\nPM: Don't rely on the text returned from PSQLException to be in English.\nWe are pretty unique in that the driver will return an error message in\nthe language defined by the locale of the client (also depends on if we\nhave translated the errors into that language). What I could to is add a\nmethod to PSQLException that returns the original id of the Exception,\nand another to return the arguments supplied. 
That may make your code\nmore portable.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n",
"msg_date": "Thu, 9 Dec 1999 07:28:11 -0000 ",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [INTERFACES] Transaction support in 6.5.3/JDBC"
},
{
"msg_contents": "> PM: I wonder if we can get this functionality in the backend's parser -\n> ie, for the API interfaces, they can set a variable on startup that\n> disables begin, commit and rollback, then when they need to use them, it\n> can then either temporarily clear the variable, or use a prefix that\n> forces the statement to work?\n> \n> PM: enableSQLTransactions can then act immediately above this\n> functionality.\n\nThat's what I was hoping for.\n\n\n> 2. prepare -- should return false if the transaction is read-only, true\n> if it will commit, throw an exception if it will rollback\n\nWorks fine now that I've added a check for *ABORT STATUS*.\n\n\n> 3. isCriticalError -- should tell me if a critical error occured in the\n> connection and the connection is no longer useable\n> \n> How do I detect no. 3? Is there are certain range of error codes, should\n> I just look at certain PSQLExceptions as being critial (e.g. all I/O\n> related errors)?\n> \n> PM: Don't rely on the text returned from PSQLException to be in English.\n> We are pretty unique in that the driver will return an error message in\n> the language defined by the locale of the client (also depends on if we\n> have translated the errors into that language). What I could to is add a\n> method to PSQLException that returns the original id of the Exception,\n> and another to return the arguments supplied. That may make your code\n> more portable.\n\nI'm not looking into the messages, I know their language dependent. I\neven added two or three new error messages, but only in English.\n\nI'm looking for either specific error codes, range of error codes, or\nsome class extending PSQLException that will just indicate that this\nconnection is no longer useful. For example, if an I/O error occurs,\nthere's no ReadyForQuery reply, there's garbled response, etc.\n\narkin\n\n> \n> Peter\n> \n> --\n> Peter Mount\n> Enterprise Support\n> Maidstone Borough Council\n> Any views stated are my own, and not those of Maidstone Borough Council.\n> \n> ************\n\n-- \n____________________________________________________________\nAssaf Arkin [email protected]\nCTO http://www.exoffice.com\nExoffice, The ExoLab Company tel: (650) 259-9796\n",
"msg_date": "Thu, 09 Dec 1999 10:42:43 -0800",
"msg_from": "Assaf Arkin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Transaction support in 6.5.3/JDBC"
}
] |
[
{
"msg_contents": "There's a small connection pool manager class for JDBC in the\nsrc/interfaces/jdbc/example/corba directory. Ok, it doesn't implement\nthe FE-BE protocol, but does hand out already open Connections.\n\nIt's simple, but it works.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Lamar Owen [mailto:[email protected]]\nSent: Thursday, December 09, 1999 4:38 AM\nTo: [email protected]\nSubject: [HACKERS] Pooled postgresql backends\n\n\nOk, I just had a brainstorm for those who want to get a pooling-capable\npostmaster up and running.\n\nDo the following:\n\n1.)\tGrab the GPL'd AOLserver 3.0 from aolserver.lcs.mit.edu\n2.)\tStrip out the webserver stuff.\n3.)\tWrite a communications module for AOLserver that implements the\nPostgreSQL FE-BE protocol. \n4.)\tThat module then redirects said communications to a backend\nunder the pool.\n5.)\t Of course, the real postmaster has to run on a different port,\nas the\nAOLserver database driver still will need to initiate connections\nthrough\npostmaster.\n6.)\tA few other minor issues will need to be handled, such as\nmulti-database enabling the AOLserver pool mechanism.\n\nAdvantages: AOLserver is multithreaded -- this thing could be made quite\nfast,\nwith low latency.\n\nDisadvantages: Adds yet another layer of communication, unless some\nreally\ncreative coding can be done, thus, post-connect throughput is likely to\nsuffer.\nAOLserver is GPL'd, so code lifted from it could not be integrated into\nthe main\nPostgreSQL tree.\n\nFor those who think they need pooled connections and are not already\nrunning\nAOLServer...\n\n--\nLamar Owen\nWGCR Internet Radio\n\n************\n",
"msg_date": "Thu, 9 Dec 1999 07:46:00 -0000 ",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Pooled postgresql backends"
}
] |
[
{
"msg_contents": "At 01:40 PM 12/27/99 +0100, Karel Zak - Zakkr wrote:\n\n>\tnot use cache - hmm.. but I like fast routines (my current\n>\tto_char() implementation is faster (20-50%) than current \n>\tdate_part()).\n\nWhile fast routines are nice indeed, isn't it true in practice\nthat to_char() times will be swamped by the amount of time to\nparse, plan, and execute a query in most cases?\n\nTrivial cases like \"select to_char('now'::datetime,...)\" can't in\ngeneral be cached anyway, since 'now' is always changing...\n\nYour caching code needs to guarantee that it can't leak memory\nin any circumstance. In environments where database servers\nrun 24/7 that's far more important than minor increases in\nspeed.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 09 Dec 1999 03:17:33 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] memory dilemma"
},
{
"msg_contents": "At 02:26 PM 12/27/99 +0100, Karel Zak - Zakkr wrote:\n\n>Sorry, but it is not good argument. If any routine (in the query path) \n>spend time is not interesting write (other) fast routine? No, we must \n>try rewrite this slowly part to faster version.\n>\n>*Very* simpl test over 10000 rows:\n>\n>$ time psql test -c \"select date_part('second', d)\n>from dtest;\" -o /dev/null\n>\n>real 0m0.504s\n>user 0m0.100s\n>sys 0m0.000s\n>\n>$ time psql test -c \"select to_char(d, 'SI') from\n>dtest;\" -o /dev/null\n>\n>real 0m0.288s\n>user 0m0.100s\n>sys 0m0.000s\n\nThis would seem to be a great argument to investigate why\ndate_part's so much slower. However, it says nothing about\nthe times saving of caching vs. not caching.\n\nA more interesting comparison, more germane to the point under\ndiscussion, would be:\n\ntime psql test -c \"select d from dtest;\"\n\nIn other words, how much overhead does \"to_char\" add? That's what\nyou need to look at if you want to measure whether or not caching's\nworth it. Caching the parse of the format string will save a \npercentage of the to_char overhead, but a test like the above\nwill at least help you get a handle on how much overhead the\nformat string parse adds.\n\n>> Your caching code needs to guarantee that it can't leak memory\n>> in any circumstance. In environments where database servers\n>> run 24/7 that's far more important than minor increases in\n>> speed.\n\n>Yes, I agree - robus SQL is more importent, but always say \"speed is not\n>interesting, we can robus only\" is way to robus-snail-SQL. \n\nWhich, of course, isn't what I said...after all, I've spent most of\nmy adult life writing highly optimizing compilers. I merely asked\nif a typical query wouldn't swamp any savings that caching the\nparse of a format string might yield.\n\n>I want nice-robus-fast-SQL :-)\n\nSure, but given the great disparity between \"date_part\" and your\ninitial \"to_char\" implementation, more people might see a more\nsignificant speed-up if you spent time speeding up \"date_part\"...\nof course, if caching greatly speeds up queries using \"to_char\"\nthen it's probably worth doing, but at least try to measure first\nbefore adding the complication.\n\nAt least, that's how I tend to work...\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 09 Dec 1999 07:57:28 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] memory dilemma"
},
{
"msg_contents": "At 11:28 AM 12/28/99 +0100, Karel Zak - Zakkr wrote:\n>Thank for all suggestion. I finaly use in to_char() cache via static buffer,\n>and if format-picture will bigger than this buffer, to_char will work as\n>without cache. This solution eliminate memory leak - this solution is used\n>in current datetime routines. It is good compromise.\n\nSeems simple and safe, yes. My objection was really due to my concern\nover memory leaks. The \"to_char()\" function will be a great help to\nthose porting over Oracle applications to Postgres.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 10 Dec 1999 03:17:05 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] memory dilemma "
},
{
"msg_contents": "\nHi,\n\n\nI have dilemma with PG's memory. I'am finishing with to_char()\nimplementation and I try use internal cache buffer in this routines. \nThis cache is used if a to_char() format-picture (which going to the \ninternal to_char parser) is equal as previous and to_char's parser is \nskiped. It is very good, because speed rise (20%).\n\nA problem is how implement this cache:\n\n\tvia palloc - It is standard in PG, but it is problem, because \n\tmemory contents is not persisten across transactions. And \n\tI don't know how check when memory is free (lose) and a routine \n\tmust reallocs memory again (if transaction finish PG memory\n\tmanagement not zeroizing (reset) memory and any \"if( buffer )\" \n\tstill affects as good memory). \n\n\tvia malloc - (Now used). It is good, because buffer is persistent.\n\tThis variant is (example) use in regexp utils in PG now. \n\tBut is it nice? \t\n\n\tvia a static buffer - but how long? Terrible. Or set any default \n\tsize for this buffer, and if format-picture will bigger - use\n\tpallocated memory and not use cache buffer. (It is my favourite \n\tvariant.) \n\n\tnot use cache - hmm.. but I like fast routines (my current\n\tto_char() implementation is faster (20-50%) than current \n\tdate_part()).\n\t \n\nAny idea? Please.\t\n\n\t\t\t\t\t\tKarel\n\nPS. IMHO - add to PostgreSQL any *across transactions persistent* \n memory managemet?\n\n----------------------------------------------------------------------\nKarel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nDocs: http://docs.linux.cz (big docs archive)\t\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n-----------------------------------------------------------------------\n\n",
"msg_date": "Mon, 27 Dec 1999 13:40:49 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "memory dilemma"
},
{
"msg_contents": "\nOn Thu, 9 Dec 1999, Don Baccus wrote:\n\n> At 01:40 PM 12/27/99 +0100, Karel Zak - Zakkr wrote:\n> \n> >\tnot use cache - hmm.. but I like fast routines (my current\n> >\tto_char() implementation is faster (20-50%) than current \n> >\tdate_part()).\n> \n> While fast routines are nice indeed, isn't it true in practice\n> that to_char() times will be swamped by the amount of time to\n> parse, plan, and execute a query in most cases?\n\nSorry, but it is not good argument. If any routine (in the query path) \nspend time is not interesting write (other) fast routine? No, we must \ntry rewrite this slowly part to faster version.\n\n*Very* simpl test over 10000 rows:\n\n$ time psql test -c \"select date_part('second', d)\nfrom dtest;\" -o /dev/null\n\nreal 0m0.504s\nuser 0m0.100s\nsys 0m0.000s\n\n$ time psql test -c \"select to_char(d, 'SI') from\ndtest;\" -o /dev/null\n\nreal 0m0.288s\nuser 0m0.100s\nsys 0m0.000s\n\n\n> Trivial cases like \"select to_char('now'::datetime,...)\" can't in\n> general be cached anyway, since 'now' is always changing...\n\nNo, you not understend me. I want cached 'format-picture':\n\nrun 10000 x\t\t\n\tselect to_char(datetime, 'HH24:MI:SI FMMonth YYYY'); \n\nyes, 'datetime' can always changing, but 'HH24:MI:SI FMMonth YYYY' not, \nand this format-picture must be always parsed. It is terrible always call\nto_char() parser, if I can use cache for it.\n\n> Your caching code needs to guarantee that it can't leak memory\n> in any circumstance. In environments where database servers\n> run 24/7 that's far more important than minor increases in\n> speed.\n\n\nYes, I agree - robus SQL is more importent, but always say \"speed is not\ninteresting, we can robus only\" is way to robus-snail-SQL. \n\nI want nice-robus-fast-SQL :-)\n \n\t\t\t\t\t\tKarel\n\n",
"msg_date": "Mon, 27 Dec 1999 14:26:31 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] memory dilemma"
},
{
"msg_contents": "Karel Zak - Zakkr <[email protected]> writes:\n> \tnot use cache - hmm.. but I like fast routines (my current\n> \tto_char() implementation is faster (20-50%) than current \n> \tdate_part()).\n\nI like that one. Anything else is a potential memory leak, and I really\nfind it hard to believe that the speed of to_char() itself is going to\nbe a critical factor in a real-world application. You have client-to-\nbackend communication, parsing, planning, I/O, etc that are all going\nto swamp out the cost of a single function.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 27 Dec 1999 10:35:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] memory dilemma "
},
{
"msg_contents": "Karel Zak - Zakkr <[email protected]> writes:\n> *Very* simpl test over 10000 rows:\n\n> $ time psql test -c \"select date_part('second', d)\n> from dtest;\" -o /dev/null\n\n> real 0m0.504s\n> user 0m0.100s\n> sys 0m0.000s\n\n> $ time psql test -c \"select to_char(d, 'SI') from\n> dtest;\" -o /dev/null\n\n> real 0m0.288s\n> user 0m0.100s\n> sys 0m0.000s\n\nThat isn't necessarily an impressive demonstration --- what is the data\ntype of your \"d\" column? Four of the six variants of date_part() are\nimplemented as SQL functions, which naturally adds a lot of overhead...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 27 Dec 1999 11:09:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] memory dilemma "
},
{
"msg_contents": "\n\nOn Mon, 27 Dec 1999, Tom Lane wrote:\n\n> That isn't necessarily an impressive demonstration --- what is the data\n> type of your \"d\" column? Four of the six variants of date_part() are\n> implemented as SQL functions, which naturally adds a lot of overhead...\n\n\nSorry. I better describe problem now. \n\n The test-table 'tab':\n\n\tCRAETE TABLE tab (d datetime);\n\n The 'tab' contain _random_ datetime values (generate via my program\nrand_datetime - it is in PG's contrib/dateformat/test). In this table \nis 10000 rows.\n\nTest:\n\ntime psql test -c \"select d from tab;\" -o /dev/null\n\nreal 0m0.530s\nuser 0m0.060s\nsys 0m0.020s\n\ntime psql test -c \"select date_part('second', d) from tab;\" -o /dev/null\n\nreal 0m0.494s\nuser 0m0.060s\nsys 0m0.030s\n\ntime psql test -c \"select to_char(d, 'SS') from tab;\" -o /dev/null\n\nreal 0m0.368s\nuser 0m0.080s\nsys 0m0.000s\n\n(to_char() is a little slowly now (than in previous test), because I rewrite \nany parts)\n \t\nThis comparison is *not* show cache effect. This test show (probably) better\nsearching and datetime part extraction in to_char().\n\n\nCache has effect for long and complicated 'format-picture' in to_char().\n\nWith cache (Cache has implement via malloc/free.) :\n~~~~~~~~~~\ntime psql test -c \"select to_char(d, 'HH12:MI:SS YYYY FMMonth Day') from\ntab;\" -o /dev/null\n\nreal 0m0.545s\nuser 0m0.060s\nsys 0m0.010s\n\nWithout cache:\n~~~~~~~~~~~~~\ntime psql test -c \"select to_char(d, 'HH12:MI:SS YYYY FMMonth Day') from\ntab;\" -o /dev/null\n\nreal 0m0.638s\nuser 0m0.060s\nsys 0m0.010s\n \n\nHmm.. my internal to_char() parser is very fast (0.100s for 10000 \ncalls only) :-))\n\n\nThank for all suggestion. I finaly use in to_char() cache via static buffer,\nand if format-picture will bigger than this buffer, to_char will work as\nwithout cache. This solution eliminate memory leak - this solution is used\nin current datetime routines. It is good compromise.\n\nI plan in future make small changes in datetime routines. The to_char is\nprobably fastly, because it use better search algorithm (has a simple index \nfor scanned array). The date_part() will fast too :-)\n\n-\n \nA last (PG's novice) question - how problem appear if PG is compilate with \n(gcc) -O3 optimalization? Or why is not used in PG 'inline' function\ndeclaration? \n\n\t\t\t\t\t\t\tKarel\n\n \n\n",
"msg_date": "Tue, 28 Dec 1999 11:28:22 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] memory dilemma "
}
] |
[
{
"msg_contents": "Given a table definition like:\n\ncreate table foo (i integer check (i > 0));\n\nI noticed the following works in Oracle but fails in Postgres:\n\ninsert into foo values(null);\n\nI was curious about what the standard might say, and had been\nmeaning to buy Date's book for some time, so broke down and\ndid so.\n\nAccording to Date, a check contraint should fail if the expression\nevaluates to false. It appears that Postgres only passes the\ncheck constraint if it evaluates to true. In three-valued logic,\nthese statements aren't equivalent. He has a paragraph about\nnulls and check contraints in chapter 14, I believe, and his\nexplanation makes it clear that Oracle is right, Postgres wrong.\n\nIt's easy to fix by adding a check for null to the constraint,\nand afterwards the SQL still works with Oracle, but it's still\na bug...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 09 Dec 1999 08:00:50 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "check contraints incorrectly reject \"null\""
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> According to Date, a check contraint should fail if the expression\n> evaluates to false.\n\nAnd SQL92 says:\n\n A table check constraint is satisfied if and only if the specified\n <search condition> is not false for any row of a table.\n ^^^^^^^^^\n\nso they agree: a constraint that yields NULL should be considered\nto pass. A tad nonintuitive, but who am I to argue...\n\nI have fixed several bugs recently having to do with incorrect\nevaluation of three-state boolean logic. I'll take care of this one.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Dec 1999 19:36:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] check contraints incorrectly reject \"null\" "
}
] |
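An illustration of the behaviour agreed on above, plus the explicit-null workaround Don mentions; once the fix Tom describes is in, the plain CHECK should accept NULL as well.

-- Under SQL92 the condition only has to be 'not false', so NULL passes.
CREATE TABLE foo (i integer CHECK (i > 0));
INSERT INTO foo VALUES (NULL);   -- should be accepted
INSERT INTO foo VALUES (-1);     -- rejected

-- Workaround that behaves the same on Oracle and on unpatched Postgres:
CREATE TABLE foo2 (i integer CHECK (i > 0 OR i IS NULL));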
[
{
"msg_contents": "I need someone to enlighten me!\n\n Have this setup\n\n create table t1 (a int4 primary key);\n create table t2 (b int4 references t1 match full\n on delete restrict\n on update restrict);\n\n Now I use two sessions:\n\n (S1) insert into t1 values (1);\n (S1) begin;\n (S1) delete from t1 where a = 1;\n\n (S2) insert into t2 values (1);\n (S2) -- Session is now blocked\n\n (S1) commit;\n\n (S2) -- Bails out with the correct violation message.\n\n Now the other way round:\n\n (S1) insert into t1 values (1);\n (S1) begin;\n (S1) insert into t2 values (1);\n\n (S2) delete from t1 where a = 1;\n (S2) -- Session is now blocked\n\n (S1) commit;\n\n (S2) -- Session continues without error\n\n The interesting thing is, that in both cases the trigger\n procs use a\n\n SELECT oid FROM ... FOR UPDATE ...\n\n In the first case, where the primary key has been deleted\n first, the triggers SELECT does not find the deleted row\n anymore. But in the second case, the freshly inserted\n referencing row doesn't show up.\n\n Why are the visibilities different between INSERTED and\n DELETED tuples?\n\n I tried to acquire an exclusive table lock before beginning\n the scan, to increment the command counter at various\n different places, but nothing helped so far. The inserted row\n is invisible for this trigger invocation. The next command\n in the transaction can see it, but that's too late.\n\n What state must be changed by the trigger to make it visible?\n\n What confuses me totally is the fact, that S2 does block\n already at the attempt to delete from t1, not down in the\n trigger. This is because S1 executed a SELECT FOR UPDATE due\n to the insertion check trigger on t2. So S2 has no active\n scans or the like on the FK table at the time S2 blocks. I\n think it's a general bug in the visibility code - no?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 9 Dec 1999 20:43:17 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Weired FK problem"
},
{
"msg_contents": "> Why are the visibilities different between INSERTED and\n> DELETED tuples?\n\n There's something weired going on. As far as I read the code\n in tqual.c, all changes done by transactions that started\n before and committed after my own transaction should be\n invisible.\n\n In the case that works now (PK deleted while FK is inserted),\n HeapTupleSatisfiesSnapshot() tells, that the PK tuple is\n still alive. But then it should be locked (for update), the\n process blocks, and when the deleter commits it somehow\n magically doesn't make it into the SPI return set.\n\n Anyway, this visibility mechanism can never work with\n referential integrity constraints.\n\n At least the RI trigger procedures need some way to override\n this snapshot qualification temporary, so the check's will\n see what's committed, regardless who did it and when -\n committed is committed, basta.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 9 Dec 1999 22:08:55 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Weired FK problem"
},
{
"msg_contents": "> > Why are the visibilities different between INSERTED and\n> > DELETED tuples?\n> \n> There's something weired going on. As far as I read the code\n> in tqual.c, all changes done by transactions that started\n> before and committed after my own transaction should be\n> invisible.\n> \n> In the case that works now (PK deleted while FK is inserted),\n> HeapTupleSatisfiesSnapshot() tells, that the PK tuple is\n> still alive. But then it should be locked (for update), the\n> process blocks, and when the deleter commits it somehow\n> magically doesn't make it into the SPI return set.\n> \n> Anyway, this visibility mechanism can never work with\n> referential integrity constraints.\n> \n> At least the RI trigger procedures need some way to override\n> this snapshot qualification temporary, so the check's will\n> see what's committed, regardless who did it and when -\n> committed is committed, basta.\n\nI stared at your first e-mail for quite some time, and couldn't figure\nout what was happening. This second e-mail clears it up. The code:\n\n (S1) insert into t1 values (1);\n (S1) begin;\n (S1) insert into t2 values (1);\n\n (S2) delete from t1 where a = 1;\n (S2) -- Session is now blocked\n\n (S1) commit;\n\nWhen S1 does the INSERT and commit, it sees the row still in T1, so the\ncommit works. When the commit completes, the delete is performed.\n\nMy guess is that the T1 delete by S2 started before the S1 committed,\nand that is why it doesn't see the actual insert from S1.\n\nMaybe we can talk on IRC about this. It looks like a tough issue, and I\ndon't understand most of it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 9 Dec 1999 16:58:02 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Weired FK problem"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Jan Wieck\n> \n> > Why are the visibilities different between INSERTED and\n> > DELETED tuples?\n> \n> There's something weired going on. As far as I read the code\n> in tqual.c, all changes done by transactions that started\n> before and committed after my own transaction should be\n> invisible.\n> \n> In the case that works now (PK deleted while FK is inserted),\n> HeapTupleSatisfiesSnapshot() tells, that the PK tuple is\n> still alive. But then it should be locked (for update), the\n> process blocks, and when the deleter commits it somehow\n> magically doesn't make it into the SPI return set.\n> \n> Anyway, this visibility mechanism can never work with\n> referential integrity constraints.\n> \n> At least the RI trigger procedures need some way to override\n> this snapshot qualification temporary, so the check's will\n> see what's committed, regardless who did it and when -\n> committed is committed, basta.\n>\n\nThere's no user level method which allows to see being inserted\ntuples of other backends now.\nAs Vadim suggested before in a discussion between you,\nSnapshotDirty is needed to see uncommitted tuples of other\nbackends.\nIIRC,duplicate index check for unique indexes is a unique case\nthat uses this dirty read technique currently. \n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Fri, 10 Dec 1999 08:52:11 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Weired FK problem"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n\n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Jan Wieck\n> >\n> > At least the RI trigger procedures need some way to override\n> > this snapshot qualification temporary, so the check's will\n> > see what's committed, regardless who did it and when -\n> > committed is committed, basta.\n> >\n>\n> There's no user level method which allows to see being inserted\n> tuples of other backends now.\n> As Vadim suggested before in a discussion between you,\n> SnapshotDirty is needed to see uncommitted tuples of other\n> backends.\n> IIRC,duplicate index check for unique indexes is a unique case\n> that uses this dirty read technique currently.\n\n Thanks - yes that was some issue at the time I totally\n underestimated the entire complexity and (silly as I am)\n thought RI could be implemented with rules.\n\n Anyway, the locking, RI triggers do internally by doing all\n their internal SELECT's with FOR UPDATE, seems to help much.\n Actually I'm playing with another global bool, that the\n triggers set. It simply causes HeapTupleSatisfiesSnapshot()\n to forward the check into HeapTupleSatisfiesNow(). It is\n reset on every transaction start and after any AFTER ROW\n trigger call. So far it seems to do the job perfectly.\n\n What I found out so far is this: The only problem, the\n locking wasn't able to catch, is the case, where an IMMEDIATE\n RESTRICT trigger successfully checked, that no references\n exist, while another transaction was inserting exactly that\n and still saw the PK alive. Looking up with snapshot NOW does\n the trick, because it sees anything committed, and the\n locking guarantees that this lookup is delayed until the\n other ones transaction ended.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 10 Dec 1999 01:27:41 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Weired FK problem"
},
{
"msg_contents": "Looks like it works. I just tried a related item:\n\n> Now the other way round:\n> \n> (S1) insert into t1 values (1);\n> (S1) begin;\n\n> (S2) delete from t1 where a = 1;\n> (S1) insert into t2 values (1);\n\nI swapped the above two items, and the INSERT properly failed the\ncontraint.\n\n> \n> (S2) -- Session is now blocked\n> \n> (S1) commit;\n> \n> (S2) -- Session continues without error\n\nI was a little unsure how trigger visibility was going to handle cases\nwhere the constraint failure happened after the other transaction\nstarted, but it seems to work fine.\n\nIt is only the trigger that has full visibility, not the statements in\nthe query, right?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 14:17:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Weired FK problem"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n>\n> Looks like it works. I just tried a related item:\n>\n> > Now the other way round:\n> >\n> > (S1) insert into t1 values (1);\n> > (S1) begin;\n>\n> > (S2) delete from t1 where a = 1;\n> > (S1) insert into t2 values (1);\n>\n> I swapped the above two items, and the INSERT properly failed the\n> contraint.\n>\n> >\n> > (S2) -- Session is now blocked\n> >\n> > (S1) commit;\n> >\n> > (S2) -- Session continues without error\n>\n> I was a little unsure how trigger visibility was going to handle cases\n> where the constraint failure happened after the other transaction\n> started, but it seems to work fine.\n\n I already committed the visibility overriding by RI triggers\n for time qualification. Maybe you're seeing the results of\n this little hack.\n\n> It is only the trigger that has full visibility, not the statements in\n> the query, right?\n\n That's the behaviour I wanted to get from it. RI triggers\n need to see what's committed and what their own transaction\n did so far. That's HeapTupleSatisfiesNow().\n\n Since they lock everything they access, they simply force the\n old (pre MVCC) behaviour - wait if something is actually in\n use until the other transaction ends. No snapshots, no pain.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 10 Dec 1999 20:41:50 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Weired FK problem"
},
{
"msg_contents": "> That's the behaviour I wanted to get from it. RI triggers\n> need to see what's committed and what their own transaction\n> did so far. That's HeapTupleSatisfiesNow().\n> \n> Since they lock everything they access, they simply force the\n> old (pre MVCC) behaviour - wait if something is actually in\n> use until the other transaction ends. No snapshots, no pain.\n\nSounds good.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 17:23:38 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Weired FK problem"
}
] |
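A sketch of the locking described in the thread, using Jan's t1/t2 tables: the insertion-check trigger reads the referenced key FOR UPDATE, so a concurrent DELETE on t1 blocks until the inserting transaction ends, and with the HeapTupleSatisfiesNow() override its RESTRICT check then sees the committed reference. The explicit SELECT below stands in for what the trigger does internally.

-- Session 1
BEGIN;
SELECT oid FROM t1 WHERE a = 1 FOR UPDATE;  -- lock the referenced key
INSERT INTO t2 VALUES (1);
COMMIT;

-- Session 2, started in between: the DELETE blocks on the row lock, and
-- after the COMMIT its RESTRICT trigger sees the new reference and errors out.
DELETE FROM t1 WHERE a = 1;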
[
{
"msg_contents": "Hello,\n\nI'll donate some (read all freely available) of my spare time to\nimplementing tuple\nchaining. It looks like this feature is most wanted and it would be a\npity to hold this until post 7.0. Personally I don't need it, yet ...\nBut I will definitely find a use for it once available ;-) And it looks\nlike a good start for hacking on pgsql.\n\nI already dived into the depth of pgsql's page and tuple structures and\nit looks like it is possible. But before I start coding I would like to\nhear some more experienced opinions on how to implement it.\n\nDid you alread discuss technical matters about the implementation? How\ncan I get in touch with it? (Simply browse the mailing list archives?)\n\nHere's a layout how I imagine the work:\n\nWhat is needed:\n- lay out a tuple continuation structure\n- put tuple into multiple chunks when pages are considered, reconcile\nwhen\n loaded from disk\n (how to continue a tuple - need a structure)\n how is a tuple (read page item) addressed? ItemPointerData\n I imagine to store a continuation address as the last bytes of the\ntuple unless it\n fits into one page.\n I need to mark large tuples (how, just one flag in tuple)\n How to tell a maximum possible size last block from a continued \n (which carries a pointer to the next one at its end)? \n Or don't care: make item continued and put last 6(?) bytes into a new\nblock\n- note that the continued tuples are not referenced directly (vacuum?)\n mark them as used. I hope vacuum operates on a tuple basis and has no\nconcept of\n pages\n- I guess that the tuple pointer points into page memory, if multiple\npages \n are concatenated for a tuple, these pages must not reside in memory\nbut\n the full tuple's memory must be allocated (from a memory similar to\npages)\n (shared mem?)\n- should be possible for memory only pages \n see PageGetPageSize but od_pagesize is 16bit!\n Reuse another variable? Another type of page? (32bit od_pagesize)\n \nVery fascinated by this large beast of ancient code to explore\n Christof\n\nPS: I think the documentation on page layout is far outdated (or points\ninto the future since it speaks about ItemContinuationData structures.)\nShould I update it?\nThe table doesn't match actual structure components. At least I don't\nunderstand what it's about. The source code mentions a different page\nlayout.\n\nPPS: Do not pity me, I have ten+ years of coding experience in C.\n\nPPPS: Could someone in few words tell me what an access method is (a\ntuple is an access method, log pages are another?)\n\n",
"msg_date": "Thu, 09 Dec 1999 23:13:51 +0100",
"msg_from": "Christof Petig <[email protected]>",
"msg_from_op": true,
"msg_subject": "Volunteer: Large Tuples / Tuple chaining"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]]On Behalf Of Christof Petig\n> \n> Hello,\n> \n> I'll donate some (read all freely available) of my spare time to\n> implementing tuple\n> chaining. It looks like this feature is most wanted and it would be a\n> pity to hold this until post 7.0. Personally I don't need it, yet ...\n> But I will definitely find a use for it once available ;-) And it looks\n> like a good start for hacking on pgsql.\n> \n> I already dived into the depth of pgsql's page and tuple structures and\n> it looks like it is possible. But before I start coding I would like to\n> hear some more experienced opinions on how to implement it.\n>\n\nWill you put a long tuple into a long logical page(continued multiple\nphisical(?) pages) ?\nI'm suspicious about the way that allows non-page-formatted page.\n\nAnyway it would need a big change around bufmgr/smgr etc.\nCould someone estimate the influence/danger before going forward ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n \n\n",
"msg_date": "Sat, 11 Dec 1999 00:33:36 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Volunteer: Large Tuples / Tuple chaining"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> Will you put a long tuple into a long logical page(continued multiple\n> phisical(?) pages) ?\n> I'm suspicious about the way that allows non-page-formatted page.\n> \n> Anyway it would need a big change around bufmgr/smgr etc.\n> Could someone estimate the influence/danger before going forward ?\n> \n\nI planned to use as many of PostgreSQL data structures unaltered as\npossible. Storing one Tuple in multiple Items should not pose too much\ndanger on bufmgr and smgr unless they access tuple internals. (I didn't\ncheck that yet). This would mean that on disk Items do no longer\ncorrespond to Tuples. (Some of them might form one tuple).\n\nI dropped the plan of Unformatted pages very soon. But the issue of\ntuple in-memory-storage remains (I don't know the internals of\nallocating/freeing, yet).\n\nChristof\n\n\n",
"msg_date": "Mon, 13 Dec 1999 22:59:27 +0100",
"msg_from": "Christof Petig <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Volunteer: Large Tuples / Tuple chaining"
},
{
"msg_contents": "\nThanks. Seems like Jan is going to be doing this.\n\n\n> Hello,\n> \n> I'll donate some (read all freely available) of my spare time to\n> implementing tuple\n> chaining. It looks like this feature is most wanted and it would be a\n> pity to hold this until post 7.0. Personally I don't need it, yet ...\n> But I will definitely find a use for it once available ;-) And it looks\n> like a good start for hacking on pgsql.\n> \n> I already dived into the depth of pgsql's page and tuple structures and\n> it looks like it is possible. But before I start coding I would like to\n> hear some more experienced opinions on how to implement it.\n> \n> Did you alread discuss technical matters about the implementation? How\n> can I get in touch with it? (Simply browse the mailing list archives?)\n> \n> Here's a layout how I imagine the work:\n> \n> What is needed:\n> - lay out a tuple continuation structure\n> - put tuple into multiple chunks when pages are considered, reconcile\n> when\n> loaded from disk\n> (how to continue a tuple - need a structure)\n> how is a tuple (read page item) addressed? ItemPointerData\n> I imagine to store a continuation address as the last bytes of the\n> tuple unless it\n> fits into one page.\n> I need to mark large tuples (how, just one flag in tuple)\n> How to tell a maximum possible size last block from a continued \n> (which carries a pointer to the next one at its end)? \n> Or don't care: make item continued and put last 6(?) bytes into a new\n> block\n> - note that the continued tuples are not referenced directly (vacuum?)\n> mark them as used. I hope vacuum operates on a tuple basis and has no\n> concept of\n> pages\n> - I guess that the tuple pointer points into page memory, if multiple\n> pages \n> are concatenated for a tuple, these pages must not reside in memory\n> but\n> the full tuple's memory must be allocated (from a memory similar to\n> pages)\n> (shared mem?)\n> - should be possible for memory only pages \n> see PageGetPageSize but od_pagesize is 16bit!\n> Reuse another variable? Another type of page? (32bit od_pagesize)\n> \n> Very fascinated by this large beast of ancient code to explore\n> Christof\n> \n> PS: I think the documentation on page layout is far outdated (or points\n> into the future since it speaks about ItemContinuationData structures.)\n> Should I update it?\n> The table doesn't match actual structure components. At least I don't\n> understand what it's about. The source code mentions a different page\n> layout.\n> \n> PPS: Do not pity me, I have ten+ years of coding experience in C.\n> \n> PPPS: Could someone in few words tell me what an access method is (a\n> tuple is an access method, log pages are another?)\n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Dec 1999 20:59:38 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Volunteer: Large Tuples / Tuple chaining"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On Behalf Of\n> Christof Petig\n> \n> Hiroshi Inoue wrote:\n> > \n> > Will you put a long tuple into a long logical page(continued multiple\n> > phisical(?) pages) ?\n> > I'm suspicious about the way that allows non-page-formatted page.\n> > \n> > Anyway it would need a big change around bufmgr/smgr etc.\n> > Could someone estimate the influence/danger before going forward ?\n> > \n> \n> I planned to use as many of PostgreSQL data structures unaltered as\n> possible. Storing one Tuple in multiple Items should not pose too much\n> danger on bufmgr and smgr unless they access tuple internals. (I didn't\n> check that yet). This would mean that on disk Items do no longer\n> correspond to Tuples. (Some of them might form one tuple).\n>\n\nHmm,we have discussed about LONG.\nChange by LONG is transparent to users and would resolve\nthe big tuple problem mostly.\nI'm suspicious that tuple chaining is worth the work now.\n\nAt least a consensus is needed before going,I think.\nBad design would only introduce a confusion.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n",
"msg_date": "Tue, 14 Dec 1999 17:58:57 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Volunteer: Large Tuples / Tuple chaining"
},
{
"msg_contents": "> > I planned to use as many of PostgreSQL data structures unaltered as\n> > possible. Storing one Tuple in multiple Items should not pose too much\n> > danger on bufmgr and smgr unless they access tuple internals. (I didn't\n> > check that yet). This would mean that on disk Items do no longer\n> > correspond to Tuples. (Some of them might form one tuple).\n> >\n> \n> Hmm,we have discussed about LONG.\n> Change by LONG is transparent to users and would resolve\n> the big tuple problem mostly.\n> I'm suspicious that tuple chaining is worth the work now.\n> \n> At least a consensus is needed before going,I think.\n> Bad design would only introduce a confusion.\n\nAgreed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 14 Dec 1999 11:25:40 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Volunteer: Large Tuples / Tuple chaining"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> > > I planned to use as many of PostgreSQL data structures unaltered as\n> > > possible. Storing one Tuple in multiple Items should not pose too much\n> > > danger on bufmgr and smgr unless they access tuple internals. (I didn't\n> > > check that yet). This would mean that on disk Items do no longer\n> > > correspond to Tuples. (Some of them might form one tuple).\n> > >\n> >\n> > Hmm,we have discussed about LONG.\n> > Change by LONG is transparent to users and would resolve\n> > the big tuple problem mostly.\n> > I'm suspicious that tuple chaining is worth the work now.\n> >\n> > At least a consensus is needed before going,I think.\n> > Bad design would only introduce a confusion.\n>\n> Agreed.\n\nMe too.\n\n I think that only a combination of LONG attributes and split\n tuples will be a complete solution.\n\n What I'm worried about is to make the segments of a large\n tuple specialized things in the main table. The reliability\n of Vacuum is one of the most important things for any system\n in production. While the general operation of vacuum seems to\n be well known, it's requirements for atomicy of some actions\n appears to be lesser. The more chunks a tuple consists of,\n the more possible an abort of vacuum in the middle of their\n moving becomes. So keeping the links of chained tuples fail\n safe intact is IMHO an issue, a little underestimated in this\n discussion.\n\n Maybe we can split tuples in another way, must think about it\n for another hour - 'til later.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 14 Dec 1999 19:45:11 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Volunteer: Large Tuples / Tuple chaining"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> I think that only a combination of LONG attributes and split\n> tuples will be a complete solution.\n\nIf we can do a good job with long attributes, I really think we\nwill not need to have split tuples too.\n\nYou'd be able to put perhaps 400 LONG attributes into an 8K tuple,\nmore than that if they are float8 or int or bool attributes.\nIf someone needs tables with even more columns than that, they\ncould bump BLCKSZ up to 32K and quadruple the number of columns.\n\nHow many people are really going to be bumping into that limit?\nIs it worth the work and reliability risk to support long tuples\nfor a few applications that are about three sigmas out on the bell\ncurve? I doubt it.\n\nI think the effort this would take would be *much* more profitably\nspent on tuning the LONG-attribute support. If we can make that\nfast and robust, we will have very few complaints.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Dec 1999 15:24:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Volunteer: Large Tuples / Tuple chaining "
},
{
"msg_contents": "Tom Lane wrote:\n\n> [email protected] (Jan Wieck) writes:\n> > I think that only a combination of LONG attributes and split\n> > tuples will be a complete solution.\n>\n> If we can do a good job with long attributes, I really think we\n> will not need to have split tuples too.\n\n I really hope so, because there will be very severe problems\n coming up with a real tuple split at arbitrary cut points\n that can occur somewhere in the middle of an attribute.\n Arbitrary cut points are the only way to support single\n values over BLKSIZE.\n\n Just to tell one problem, the scan key tests during\n heap_getnext() are handed down into heapgettup() and\n performed with HeapTupleSatisfies, a macro using the in\n buffer tuple here. IIRC it was turned into a macro in one of\n our last releases for performance reasons.\n\n If now faced with a tuple living in multiple pages, these\n checks will need to reconstruct the tuple in memory, to\n concatenate the attributes well again.\n\n This now needs to lock multiple buffers at once during\n heapgettup(), where I'm not sure if they must all stay with\n the bumped refcount when returning the tuple or not. So\n ReleaseBuffer() might need to be changed into something,\n where the HeapTuple remembers all the buffers that where\n locked for it.\n\n Also this separate ReleaseBuffer() reminds me, that there are\n some places in the backend that assume a tuple returned by\n heap AM allways is in a buffer! But that can't be true any\n more, because a buffer allways has BLKSIZE.\n\n> I think the effort this would take would be *much* more profitably\n> spent on tuning the LONG-attribute support. If we can make that\n> fast and robust, we will have very few complaints.\n\n *MUCH*!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 14 Dec 1999 22:32:52 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Volunteer: Large Tuples / Tuple chaining"
},
{
"msg_contents": "\n> -----Original Message-----\n> From: Jan Wieck [mailto:[email protected]]\n> Sent: Wednesday, December 15, 1999 3:45 AM\n> \n> Bruce Momjian wrote:\n> \n> > > > I planned to use as many of PostgreSQL data structures unaltered as\n> > > > possible. Storing one Tuple in multiple Items should not \n> pose too much\n> > > > danger on bufmgr and smgr unless they access tuple \n> internals. (I didn't\n> > > > check that yet). This would mean that on disk Items do no longer\n> > > > correspond to Tuples. (Some of them might form one tuple).\n> > > >\n> > >\n> > > Hmm,we have discussed about LONG.\n> > > Change by LONG is transparent to users and would resolve\n> > > the big tuple problem mostly.\n> > > I'm suspicious that tuple chaining is worth the work now.\n> > >\n> > > At least a consensus is needed before going,I think.\n> > > Bad design would only introduce a confusion.\n> >\n> > Agreed.\n> \n> Me too.\n> \n> I think that only a combination of LONG attributes and split\n> tuples will be a complete solution.\n> \n> What I'm worried about is to make the segments of a large\n> tuple specialized things in the main table. The reliability\n> of Vacuum is one of the most important things for any system\n> in production. While the general operation of vacuum seems to\n> be well known, it's requirements for atomicy of some actions\n> appears to be lesser. The more chunks a tuple consists of,\n> the more possible an abort of vacuum in the middle of their\n> moving becomes. So keeping the links of chained tuples fail\n> safe intact is IMHO an issue, a little underestimated in this\n> discussion.\n>\n\nThere exists another related problem.\nVacuum could hardly move big tuples if some tuples of each page\nlive long. Though we have to move a long tuple at once,there won't\nbe so many clean pages.\n\nProbably vacuum couldn't move even a 8K tuple in some cases.\nThe problem is already there,more or less.\nBut it seems very difficult to solve this problem without giving up\nto preserve consistency in case of a crash. \n\nRegards.\n \nHiroshi Inoue\[email protected]\n",
"msg_date": "Wed, 15 Dec 1999 11:43:51 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Volunteer: Large Tuples / Tuple chaining"
},
{
"msg_contents": "Remember, chaining tuples had all sorts of performance, vacuum, code\nhandling, and UPDATE problems. They buy us very little, and almost\nnothing if we have LONG tables.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 14 Dec 1999 21:52:14 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Volunteer: Large Tuples / Tuple chaining"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Remember, chaining tuples had all sorts of performance, vacuum, code\n> handling, and UPDATE problems. They buy us very little, and almost\n> nothing if we have LONG tables.\n> \n\nI had already contacted Jan in private Email. Since we share country,\nnative language and time zone, this is even the most comfortable way.\n\nI agree with the concerns you mailed and will (most likely) start\nhelping Jan to implement LONG. As I had seen your LONG discussion\n_after_ my original post, this had been a strange coincidence. But I had\nbeen following it with interest.\n\n Christof\n",
"msg_date": "Wed, 15 Dec 1999 09:27:39 +0100",
"msg_from": "Christof Petig <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Volunteer: Large Tuples / Tuple chaining"
}
] |
[
{
"msg_contents": "http://www.postgresql.org/ shows hub.org's homepage, not postgres's,\nand none of my bookmarked links to other pgsql pages work. I think\nsomeone messed up the virtual-host redirection tables there...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Dec 1999 23:11:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql web site busted?"
},
{
"msg_contents": "On Thu, 9 Dec 1999, Tom Lane wrote:\n\n> http://www.postgresql.org/ shows hub.org's homepage, not postgres's,\n> and none of my bookmarked links to other pgsql pages work. I think\n> someone messed up the virtual-host redirection tables there...\n\nmoved to new ip's without putting in a record for the old...fixed now, or,\nat least, should be...let me know...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 10 Dec 1999 01:30:45 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgsql web site busted?"
}
] |
[
{
"msg_contents": "There have been some people who have said they want a 6.6 release with\nbeta to start on February 1. They are Tom Lane, Thomas Lockhart, and\nmyself. Jan and Peter Eisentraut have said they will be ready on that\ndate.\n\nSeems foreign key ability would be enough to justify a 6.6.\n\nComments?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 00:44:55 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "6.6 release"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> There have been some people who have said they want a 6.6 release with\n> beta to start on February 1. They are Tom Lane, Thomas Lockhart, and\n> myself. Jan and Peter Eisentraut have said they will be ready on that\n> date.\n>\n> Seems foreign key ability would be enough to justify a 6.6.\n>\n> Comments?\n\n>From a user's perspective, that would be great. Our application is composed\nof over 130 C++ class objects (its about 100K lines of C++) and the move to\n6.5 meant:\n\n1) A change throughout the code to lock tables appropriately to support the\nrefint.c code (which itself doesn't work for cascading updates) under MVCC\n\n2) Keep using 6.4 which isn't all that hot for concurrent access, or\n\n3) Wait for referential integrity...and pray the race condition isn't\ntriggered under 6.5 for tables being altered.\n\nDue to the nature of our application, and the number of people actually\nupdating and deleting base tables whose keys would require a cascading\ndelete/update, we choose #3...... :-)\n\nMike Mascari\n\n\n",
"msg_date": "Fri, 10 Dec 1999 01:37:30 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Seems foreign key ability would be enough to justify a 6.6.\n\nEven without foreign keys, we have enough bugfixes in place to justify\na 6.6 release, I think. If Jan can get some amount of foreign key\nsupport working before Feb, that'd be a nice bonus --- but it's not\nreally necessary.\n\nThe way I see it, we should push what we have out the door, and then\nsettle in for a long slog on 7.0. We need to do WAL, querytree\nredesign, long tuples, function manager changeover, date/time type\nunification, and probably a couple other things that I don't remember\nat this time of night. These are all appropriate for \"7.0\" because\nthey are big items and/or will involve some loss of backward\ncompatibility. Before we start in on that stuff, it'd be good to\nconsolidate the gains we already have. Almost every day I find myself\nsaying to someone \"that's fixed in current sources\". 7.0 is still\na long way away, so we ought to get the existing improvements out\nto our users.\n\n(In short, Bruce persuaded me: we ought to do a 6.6 cycle.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Dec 1999 01:37:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release "
},
{
"msg_contents": "On Fri, 10 Dec 1999, Bruce Momjian wrote:\n\n> There have been some people who have said they want a 6.6 release with\n> beta to start on February 1. They are Tom Lane, Thomas Lockhart, and\n> myself. Jan and Peter Eisentraut have said they will be ready on that\n> date.\n> \n> Seems foreign key ability would be enough to justify a 6.6.\n> \n> Comments?\n\nSo we'd be looking at Beta on Feb 1st, with a release around Apr 1st, and\nbeta for 7 being around June 1st, with 7 release for Sept 1st?\n\nIMHO, 7 is waiting for Vadim/WAL...we're doing a 6.6 due to him being\nindisposed until Mar/Apr, correct?\n\nJust want to get this clarified, that's all :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 10 Dec 1999 02:55:18 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "On Fri, 10 Dec 1999, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > Seems foreign key ability would be enough to justify a 6.6.\n> \n> Even without foreign keys, we have enough bugfixes in place to justify\n> a 6.6 release, I think. If Jan can get some amount of foreign key\n> support working before Feb, that'd be a nice bonus --- but it's not\n> really necessary.\n> \n> The way I see it, we should push what we have out the door, and then\n> settle in for a long slog on 7.0. We need to do WAL, querytree\n> redesign, long tuples, function manager changeover, date/time type\n> unification, and probably a couple other things that I don't remember\n> at this time of night. These are all appropriate for \"7.0\" because\n> they are big items and/or will involve some loss of backward\n> compatibility. Before we start in on that stuff, it'd be good to\n> consolidate the gains we already have. Almost every day I find myself\n> saying to someone \"that's fixed in current sources\". 7.0 is still\n> a long way away, so we ought to get the existing improvements out\n> to our users.\n\nWait, now I'm confused...so between 6.6 and 7, we're talking another year\nanyway? *raised eyebrow* Just curious about your 'long slog' above :)\n\nHere's a question...should we beta on Feb 1st but make it 7.0? If we are\ngoing to be looking for a \"long slog\" for 7, why not \"freeze\" things on\nFeb 1st as v7, and start working on v8 with WAL, long tuples, etc, etc...\n\nLike, what point do we call things a major release? In a sense, MVCC\nprobably should have been considered a large enough overhaul to warrant\n7.0, no?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 10 Dec 1999 03:08:50 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release "
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> Wait, now I'm confused...so between 6.6 and 7, we're talking another year\n> anyway? *raised eyebrow* Just curious about your 'long slog' above :)\n> \n> Here's a question...should we beta on Feb 1st but make it 7.0? If we are\n> going to be looking for a \"long slog\" for 7, why not \"freeze\" things on\n> Feb 1st as v7, and start working on v8 with WAL, long tuples, etc, etc...\n> \n> Like, what point do we call things a major release? In a sense, MVCC\n> probably should have been considered a large enough overhaul to warrant\n> 7.0, no?\n\nSo, may be call next after 6.6 release just 6.7 ? -:)\nDoes it so matter - v7, v8? I would be happy with 6.7 -:)\n\nVadim\n",
"msg_date": "Fri, 10 Dec 1999 15:04:46 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> 7.0 is still a long way away, so we ought to get the existing\n>> improvements out to our users.\n\n> Wait, now I'm confused...so between 6.6 and 7, we're talking another year\n> anyway? *raised eyebrow* Just curious about your 'long slog' above :)\n\nI hope not a year ... but I could easily believe we have three to six\nmonths of development ahead, if 7.0 is to contain all the stuff I\nmentioned.\n\n> Here's a question...should we beta on Feb 1st but make it 7.0? If we are\n> going to be looking for a \"long slog\" for 7, why not \"freeze\" things on\n> Feb 1st as v7, and start working on v8 with WAL, long tuples, etc, etc...\n> Like, what point do we call things a major release? In a sense, MVCC\n> probably should have been considered a large enough overhaul to warrant\n> 7.0, no?\n\nMaybe so. What's in a name, anyway? But I think we've established a\nprecedent that it takes a really significant jump to bump the front\nnumber. If we didn't call MVCC 7.0, the stuff we currently have\nready-to-go doesn't seem to justify it either. I think what we have\nin current sources is a nice maintenance update, or maybe a little more\nthan that if Jan has a good chunk of foreign-key stuff working. It's\nworth getting it out to users --- but it doesn't feel like a \"7.0\"\nto me.\n\nOTOH, we've already changed the version ID in current sources, and\nchanging it back might not be worth the trouble of arguing ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Dec 1999 03:06:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release "
},
{
"msg_contents": "On Fri, 10 Dec 1999, The Hermit Hacker wrote:\n\n> Here's a question...should we beta on Feb 1st but make it 7.0? If we are\n> going to be looking for a \"long slog\" for 7, why not \"freeze\" things on\n> Feb 1st as v7, and start working on v8 with WAL, long tuples, etc, etc...\n> \n> Like, what point do we call things a major release? In a sense, MVCC\n> probably should have been considered a large enough overhaul to warrant\n> 7.0, no?\n\nI thought Marc decided[1] last year to drop the minor.minor version\nnumbers. IOW, there would be no 6.6.1, 6.6.2, etc. Make the upcoming\nrelease 7.0 and take care of any minor glitches in it as 7.1, 7.2 and\nwhen WAL and the other stuff is ready - or as it's ready - release 8.0\nand fix any glitches as 8.1, etc. Currently every minor release is really\na major one, so why not just mark it as such and not worry about it?\n\nVince.\n\n[1] Or did you do that on inn-workers and not here? It was about the same\ntime FreeBSD dropped the major.minor.minor for the major.minor numbering.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 10 Dec 1999 06:38:48 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > Seems foreign key ability would be enough to justify a 6.6.\n>\n> Even without foreign keys, we have enough bugfixes in place to justify\n> a 6.6 release, I think. If Jan can get some amount of foreign key\n> support working before Feb, that'd be a nice bonus --- but it's not\n> really necessary.\n\n As far as I see it now, I can get the FK stuff with MATCH\n FULL ready by February first. Must be enough.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 10 Dec 1999 13:38:49 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "On Fri, 10 Dec 1999, Tom Lane wrote:\n\n> Maybe so. What's in a name, anyway? But I think we've established a\n> precedent that it takes a really significant jump to bump the front\n\nActually, we've never set a precedent...v6.0 was so named more because\nv1.10 just sounded like such a small number compared to the overall age of\nthe software...\n\n> OTOH, we've already changed the version ID in current sources, and\n> changing it back might not be worth the trouble of arguing ;-)\n\nOkay, I can agree with that one :)\n\nPeter brought up a good argument over on his side too...make the Feb1st\none 7, and we'll make the post-WAL stuff 8.0 ...\n\nJust as a note, I'm not 100% certain how this generally works in \"real\nlife\", but, in some circumstances, I've seen it happen where the major\ngets bumped a significant number of changes have gone into everything\nsince the last major bump...I think we have achieved that at least one\nrelease back...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 10 Dec 1999 08:42:41 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release "
},
{
"msg_contents": "On Fri, 10 Dec 1999, Vince Vielhaber wrote:\n\n> On Fri, 10 Dec 1999, The Hermit Hacker wrote:\n> \n> > Here's a question...should we beta on Feb 1st but make it 7.0? If we are\n> > going to be looking for a \"long slog\" for 7, why not \"freeze\" things on\n> > Feb 1st as v7, and start working on v8 with WAL, long tuples, etc, etc...\n> > \n> > Like, what point do we call things a major release? In a sense, MVCC\n> > probably should have been considered a large enough overhaul to warrant\n> > 7.0, no?\n> \n> I thought Marc decided[1] last year to drop the minor.minor version\n> numbers. IOW, there would be no 6.6.1, 6.6.2, etc. Make the upcoming\n> release 7.0 and take care of any minor glitches in it as 7.1, 7.2 and\n> when WAL and the other stuff is ready - or as it's ready - release 8.0\n> and fix any glitches as 8.1, etc. Currently every minor release is really\n> a major one, so why not just mark it as such and not worry about it?\n> \n> Vince.\n> \n> [1] Or did you do that on inn-workers and not here? It was about the same\n> time FreeBSD dropped the major.minor.minor for the major.minor numbering.\n\nWould have been here...\n\nThe problem, as I see it, is that the FreeBSD camp is more \"strict\" in how\nit does their source tree...there is a development tree (X.y), and a\nstable tree (X-1.y)...if something is back-patchable to X-1.y from X.y, it\ngets done (ie. bug fixes, security fixes or even feature changes *as long\nas* they don't change the API...\n\nWe're about 50% there, but not completely...this last release (6.5) has\nbeen fantastic...ppl have been back-patching to the 6.5 tree, providing us\nwiht interim releases, but not to the level that we can build a 6.6 off\nthat tree...\n\nwhen we do up Release 7, which I'd like to make this one, I'd *love* to\nmake this a whole-hog thing...tag/branch things as REL_7, no minor\nnumber...then its up to the developers to decide whether something is\nback-patchable (like they've been doing up until now) with a periodic\nrelease put out while Release 8 is being worked on.\n\nIt slows down the rush of getting a full release out while allowign ppl\naccess to the debug'd advances in the upcoming release...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 10 Dec 1999 08:56:30 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release "
},
{
"msg_contents": "Thus spake Jan Wieck\n> As far as I see it now, I can get the FK stuff with MATCH\n> FULL ready by February first. Must be enough.\n\nAny chance of getting the FK semantics into the parser right away even\nthough it is ignored? As soon as it is there we can start modifying\nour CREATE TABLE scripts in preparation for when the underlying code\nis there.\n\nHmm. Sounds like an argument I had with Jolly once over PKs. :-)\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 10 Dec 1999 08:58:42 -0500 (EST)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n>>>> Seems foreign key ability would be enough to justify a 6.6.\n>> \n>> Even without foreign keys, we have enough bugfixes in place to justify\n>> a 6.6 release, I think.\n\n> As far as I see it now, I can get the FK stuff with MATCH\n> FULL ready by February first. Must be enough.\n\nIf we need another feature to \"justify\" a release, I think I just\nfigured out how to do \"COUNT(DISTINCT x)\" with only maybe a day's work.\nWatch this space...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Dec 1999 09:59:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release "
},
{
"msg_contents": "Marc G. Fournier wrote:\n\n> when we do up Release 7, which I'd like to make this one, I'd *love* to\n> make this a whole-hog thing...tag/branch things as REL_7, no minor\n> number...then its up to the developers to decide whether something is\n> back-patchable (like they've been doing up until now) with a periodic\n> release put out while Release 8 is being worked on.\n\n I would really appreceate that. Maybe we need to go ahead in\n this manner and make more use of CVS branching.\n\n We have long standing TODO items, which require co work of\n multiple developers, affect alot of the code and will take a\n long time to implement. Tuple split, fmgr redesign, parsetree\n overhaul to name some.\n\n Especially the fact that noone can do them alone IMHO\n requires to have a separate branch, where the sources can\n stay broken for some time. For example if we change the\n parsetree representation, we first change the parser and look\n at the printed output's until it fits. Then work on the\n planner to get them running and parallel enhance the rewriter\n to integrate it again. During this time, the parser will\n generate things that may make the entire system unusable, so\n any other development would get stuck.\n\n I don't think that all problems could be tackled at once. My\n idea is to analyze one of these problems in depth, then\n branch off and have the developers, required to get this item\n done, doing it separated there. The final result will be a\n patch based on an older release, that requires some manual\n work to get it merged into the current tree, of course. The\n benefit would be, that this long term development would not\n be interfered by CURRENT improvements, nor will it delay any\n subsequent releasing of funny, neat things.\n\n Just an idea.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 10 Dec 1999 16:03:27 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Fri, 10 Dec 1999, Vince Vielhaber wrote:\n>> I thought Marc decided[1] last year to drop the minor.minor version\n>> numbers. IOW, there would be no 6.6.1, 6.6.2, etc. Make the upcoming\n>> release 7.0 and take care of any minor glitches in it as 7.1, 7.2 and\n>> when WAL and the other stuff is ready - or as it's ready - release 8.0\n>> and fix any glitches as 8.1, etc. Currently every minor release is really\n>> a major one, so why not just mark it as such and not worry about it?\n\n> when we do up Release 7, which I'd like to make this one, I'd *love* to\n> make this a whole-hog thing...tag/branch things as REL_7, no minor\n> number...\n\nYeah, I was thinking that if we were to call this 7.0 and have plans\nfor going to 8.0 as soon as WAL &etc are done, then we'd basically be\ndropping one level of version number --- no need for a third number\nif major revs are that close together. That's OK with me as long as\nwe all understand that it's a change in naming practices. There are\nthings we'd need to change to make it work. For example, PG_VERSION\nwould need to record only the top version number: 7.0 and 7.1 would be\nexpected to have compatible databases, not incompatible ones.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Dec 1999 10:06:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release "
},
{
"msg_contents": ">\n> Thus spake Jan Wieck\n> > As far as I see it now, I can get the FK stuff with MATCH\n> > FULL ready by February first. Must be enough.\n>\n> Any chance of getting the FK semantics into the parser right away even\n> though it is ignored? As soon as it is there we can start modifying\n> our CREATE TABLE scripts in preparation for when the underlying code\n> is there.\n>\n> Hmm. Sounds like an argument I had with Jolly once over PKs. :-)\n\n The current source tree only lacks the parsers part to\n specify\n\n INITIALLY DEFERRED|IMMEDIATE\n [ NOT ] DEFERRABLE\n\n in a columns REFERENCES clause. They are fully supported in a\n tables CONSTRAINT clause.\n\n All the functionality for MATCH FULL is there too already.\n Though, it's not well tested up to now, but that's not your\n problem I assume.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 10 Dec 1999 16:11:56 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "> So we'd be looking at Beta on Feb 1st, with a release around Apr 1st, and\n> beta for 7 being around June 1st, with 7 release for Sept 1st?\n\nI don't see why we couldn't plan on a Mar 1 final, with the assumption\nthat the beta will take one month. It may take longer, but it may not.\n\n> \n> IMHO, 7 is waiting for Vadim/WAL...we're doing a 6.6 due to him being\n> indisposed until Mar/Apr, correct?\n\nNot really. We have some big items open, but they are not very far\nalong, except WAL, and because he can't finish for a while, it makes\nsense to release what we have done for the past six months.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 11:21:52 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "> I thought Marc decided[1] last year to drop the minor.minor version\n> numbers. IOW, there would be no 6.6.1, 6.6.2, etc. Make the upcoming\n> release 7.0 and take care of any minor glitches in it as 7.1, 7.2 and\n> when WAL and the other stuff is ready - or as it's ready - release 8.0\n> and fix any glitches as 8.1, etc. Currently every minor release is really\n> a major one, so why not just mark it as such and not worry about it?\n> \n> Vince.\n> \n> [1] Or did you do that on inn-workers and not here? It was about the same\n> time FreeBSD dropped the major.minor.minor for the major.minor numbering.\n\nI don't think it was here. I never heard about it.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 11:38:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "> Tom Lane wrote:\n> \n> > Bruce Momjian <[email protected]> writes:\n> > > Seems foreign key ability would be enough to justify a 6.6.\n> >\n> > Even without foreign keys, we have enough bugfixes in place to justify\n> > a 6.6 release, I think. If Jan can get some amount of foreign key\n> > support working before Feb, that'd be a nice bonus --- but it's not\n> > really necessary.\n> \n> As far as I see it now, I can get the FK stuff with MATCH\n> FULL ready by February first. Must be enough.\n\nForeign key is quite complicated. It will take them a while even to ask\nfor more than that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 11:39:27 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "> Yeah, I was thinking that if we were to call this 7.0 and have plans\n> for going to 8.0 as soon as WAL &etc are done, then we'd basically be\n> dropping one level of version number --- no need for a third number\n> if major revs are that close together. That's OK with me as long as\n> we all understand that it's a change in naming practices. There are\n> things we'd need to change to make it work. For example, PG_VERSION\n> would need to record only the top version number: 7.0 and 7.1 would be\n> expected to have compatible databases, not incompatible ones.\n\nMakes sense in that our 6.4->6.5 release is really a major release for\nother people, but if we go to the new naming, we are going to get > 10\nvery soon, and we will start looking like GNU Emacs at version 19 or 20.\n\nWe are guilty of our own success in making such big releases.\n\nI vote we keep it the same. Our users already know every release is a\nmajor one, and very high release numbers > 10 look kind of strange to\nme.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 11:46:38 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "On Fri, 10 Dec 1999, Bruce Momjian wrote:\n\n> Makes sense in that our 6.4->6.5 release is really a major release for\n> other people, but if we go to the new naming, we are going to get > 10\n> very soon, and we will start looking like GNU Emacs at version 19 or 20.\n\nThe other problem is that if we keep going with 6.5->6.6->6.x, we're gonna\nhit 6.10, etc...looks funnier, IMHO...and, unless something major comes\nalong after WAL and all that, never go beyond 7? :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 10 Dec 1999 12:56:19 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "> The other problem is that if we keep going with 6.5->6.6->6.x, we're gonna\n> hit 6.10, etc...looks funnier, IMHO...and, unless something major comes\n> along after WAL and all that, never go beyond 7? :)\n\nv8.0: Corba, or XML, or one of those IBM standard protocol things for\nfe/be\nv9.0: multiple database access\nv10.0: distributed databases\nv11.0: features released as M$Postgres-1.0 after M$ owns every ISP\nscrappy could use, cuts off access, and takes over the sources\n\n;) :)))\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 10 Dec 1999 17:26:59 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "\"D'Arcy J.M. Cain\" wrote:\n> \n> Thus spake Jan Wieck\n> > As far as I see it now, I can get the FK stuff with MATCH\n> > FULL ready by February first. Must be enough.\n> \n> Any chance of getting the FK semantics into the parser right away even\n> though it is ignored?\n\nWe do have foreign key syntax in parser\n\nhannu=> create table foreign_tab(\nhannu-> f int,\nhannu-> foreign key(f) references primary_tab (i)\nhannu-> );\nNOTICE: CREATE TABLE/FOREIGN KEY clause ignored; not yet implemented\n\nWhat do you mean by semantics here ? \nShould it check that the primary table and field(s) exist ?\n\n------------------\nHannu\n",
"msg_date": "Sat, 11 Dec 1999 11:28:02 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "Thus spake Hannu Krosing\n> \"D'Arcy J.M. Cain\" wrote:\n> > Any chance of getting the FK semantics into the parser right away even\n> > though it is ignored?\n> \n> We do have foreign key syntax in parser\n> \n> hannu=> create table foreign_tab(\n> hannu-> f int,\n> hannu-> foreign key(f) references primary_tab (i)\n> hannu-> );\n> NOTICE: CREATE TABLE/FOREIGN KEY clause ignored; not yet implemented\n> \n> What do you mean by semantics here ? \n> Should it check that the primary table and field(s) exist ?\n\nNope. That's exactly what I meant. I didn't realize that it was already\nthere. Sorry for the confusion.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 11 Dec 1999 07:48:43 -0500 (EST)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": ">\n> Thus spake Hannu Krosing\n> > \"D'Arcy J.M. Cain\" wrote:\n> > > Any chance of getting the FK semantics into the parser right away even\n> > > though it is ignored?\n> >\n> > We do have foreign key syntax in parser\n> >\n> > hannu=> create table foreign_tab(\n> > hannu-> f int,\n> > hannu-> foreign key(f) references primary_tab (i)\n> > hannu-> );\n> > NOTICE: CREATE TABLE/FOREIGN KEY clause ignored; not yet implemented\n> >\n> > What do you mean by semantics here ?\n> > Should it check that the primary table and field(s) exist ?\n>\n> Nope. That's exactly what I meant. I didn't realize that it was already\n> there. Sorry for the confusion.\n\nCaution D'Arcy,\n\n the FOREIGN KEY syntax that's in 6.5 is a little incomplete.\n Doesn't allow match type and constraint attribute\n specification (deferrability and initial deferred state).\n Especially the match type is required, because in 7.0 only\n MATCH FULL will be implemented, not the <unspecified>\n default.\n\n As I said in another post, the constraint attr spec isn't\n possible in column constraint right now in 7.0, but we're\n working on it. Should be ready in a few days.\n\n\nJan\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 11 Dec 1999 14:39:02 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
}
] |
[
{
"msg_contents": "The recent QNX patches have broken current sources. I think\nmaybe the patches were incomplete or were not applied fully.\nEvery makefile now has $(LD) $(LDREL) in place of $(LD) -r,\nwhich would be cool if only LDREL were defined as -r someplace.\nBut it ain't defined anywhere. Major lossage ensues.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Dec 1999 01:25:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "OK, what's this LDREL all about?"
},
{
"msg_contents": "> The recent QNX patches have broken current sources. I think\n> maybe the patches were incomplete or were not applied fully.\n> Every makefile now has $(LD) $(LDREL) in place of $(LD) -r,\n> which would be cool if only LDREL were defined as -r someplace.\n> But it ain't defined anywhere. Major lossage ensues.\n\nFixed. CVS update.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 01:34:58 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] OK, what's this LDREL all about?"
},
{
"msg_contents": "On 1999-12-10, Tom Lane mentioned:\n\n> The recent QNX patches have broken current sources. I think\n> maybe the patches were incomplete or were not applied fully.\n> Every makefile now has $(LD) $(LDREL) in place of $(LD) -r,\n> which would be cool if only LDREL were defined as -r someplace.\n> But it ain't defined anywhere. Major lossage ensues.\n\nISTM, the proper way to do this sort of thing would be to add it to\nLDFLAGS in Makefile.global.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sat, 11 Dec 1999 03:01:43 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] OK, what's this LDREL all about?"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> On 1999-12-10, Tom Lane mentioned:\n> \n> > The recent QNX patches have broken current sources. I think\n> > maybe the patches were incomplete or were not applied fully.\n> > Every makefile now has $(LD) $(LDREL) in place of $(LD) -r,\n> > which would be cool if only LDREL were defined as -r someplace.\n> > But it ain't defined anywhere. Major lossage ensues.\n> \n> ISTM, the proper way to do this sort of thing would be to add it to\n> LDFLAGS in Makefile.global.\n\nWe have an LDFLAGS, but this for using ld to generate SUBSYS.o. Not to\nbe used in normal ld linking use.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 21:44:52 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] OK, what's this LDREL all about?"
}
] |
[
{
"msg_contents": "\n\n-----Original Message-----\nFrom: Assaf Arkin [mailto:[email protected]]\nSent: Thursday, December 09, 1999 6:43 PM\nTo: Peter Mount\nCc: [email protected]; '[email protected]'\nSubject: Re: [INTERFACES] Transaction support in 6.5.3/JDBC\n\n[snip]\n\n> 3. isCriticalError -- should tell me if a critical error occured in\nthe\n> connection and the connection is no longer useable\n> \n> How do I detect no. 3? Is there are certain range of error codes,\nshould\n> I just look at certain PSQLExceptions as being critial (e.g. all I/O\n> related errors)?\n> \n> PM: Don't rely on the text returned from PSQLException to be in\nEnglish.\n> We are pretty unique in that the driver will return an error message\nin\n> the language defined by the locale of the client (also depends on if\nwe\n> have translated the errors into that language). What I could to is add\na\n> method to PSQLException that returns the original id of the Exception,\n> and another to return the arguments supplied. That may make your code\n> more portable.\n\nI'm not looking into the messages, I know their language dependent. I\neven added two or three new error messages, but only in English.\n\nI'm looking for either specific error codes, range of error codes, or\nsome class extending PSQLException that will just indicate that this\nconnection is no longer useful. For example, if an I/O error occurs,\nthere's no ReadyForQuery reply, there's garbled response, etc.\n\narkin\n\nPM: There are not error codes available. Also, there's nothing extending\nPSQLException (yet), but there's no reason not to extend it.\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n",
"msg_date": "Fri, 10 Dec 1999 07:27:20 -0000",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [INTERFACES] Transaction support in 6.5.3/JDBC"
}
] |
[
{
"msg_contents": "\n-----Original Message-----\nFrom: Assaf Arkin [mailto:[email protected]]\nSent: Thursday, December 09, 1999 6:52 PM\nTo: Peter Mount\nSubject: Re: [INTERFACES] Transaction support in 6.5.3/JDBC\n\n\nThe version I have sends requests protocol 1.0, once I changed that to\nprotocol 2.0, I got the pid/key. I also got the read-for-query response,\nso that works fine for detecting when the BE is unable to process\nfurther requests (for whatever reason).\n\nPM: Eeek, this will break, as current sources already do this (I still\ndon't know why 6.5.3 didn't go out with those patches).\n\nI've implemented the setTransactionIsolation method, that also works\nfine. According to the specs only two levels are supported read\ncommitted and serializable, and serializable is mistakingly spelled\n\"SERIALIZED\".\n\nPM: Already done, should have been in 6.5.3 :-(\n\nI couldn't find any indication that PostgreSQL supports read-only\ntransactions, so either setReadOnly should throw an not-supported\nexception, or getReadOnly should always return false. The current\nbehavior is to use a boolean which is not reflected in the DB.\n\n\nI have another question, that is how can we synchronize our code bases.\n\nFor the minor changes I did to the JDBC layer I will simply send you the\nmodified sources.\n\nPM: Currently its best to send direct to me, rather than to the patches\nlist. This is because the changes I'm making for 7.0 is very extensive,\nand it would be safer for me to apply them manually. Also, it seems that\n6.5.3 didn't pick up a lot of the patches I committed (but are in the\nCVS). The protocol version is definitely one, as was some of the\ntransaction stuff.\n\nFor the JDBC 2.0 standard extensions stuff, I'm using a very generic\nimplementation that was developed independently of (but tested with)\nPostgreSQL. The exact same code base exists in a different package\n(txm.jdbc.xa) and can be put on top of any JDBC driver. It works better,\nthough, if the JDBC driver implements the TwoPhaseConnection interface.\n\nRight now, anytime a change happens to txm.jdbc.xa I simply copy the\nfiles and change the package to postgresql.xa. I would like to see these\nfiles distributed as part of the PostgreSQL driver, but I also don't\nwant to get a conflict where updates to one code base are not reflected\nin the other.\n\nPM: What copyright do they have? I'm cc'ing the hackers list, as it's\none area we have to be careful of, and the others have more\nexperience/feelings on this subject.\n\nPeter\n\n> PM: In theory 6.5.3 should be requesting the current protocol (I\n> remember the patch being submitted to me as part of another fix).\n> However, I haven't (yet) had chance to look at it yet - hence not\n> knowing about the security key. The bit I want to add is for JDBC2,\n> ResultSet would by default use a cursor, and if it's closed while a\nread\n> is in effect, it would send cancel to the backend.\n> \n> Peter\n> \n> regards, tom lane\n> \n> ************\n> \n> ************\n\n-- \n____________________________________________________________\nAssaf Arkin [email protected]\nCTO http://www.exoffice.com\nExoffice, The ExoLab Company tel: (650) 259-9796\n",
"msg_date": "Fri, 10 Dec 1999 07:34:27 -0000",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [INTERFACES] Transaction support in 6.5.3/JDBC"
}
] |
[
{
"msg_contents": "It looks like it may be an idea now, as for some reason, some parts of\nthe 6.5.3 JDBC driver isn't in 6.5.3?\n\nWe had a similar problem with 6.5.2, so before 6.5.3 was released, I\nchecked CVS to make sure the changes were there, and they were. It's\njust that I've seen several references recently (the most recent from\nAssaf this morning) about the protocol version being 1.0. The 6.5.3\ndriver was the first version not to use 1.0! eek.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]]\nSent: Friday, December 10, 1999 5:45 AM\nTo: PostgreSQL-development\nSubject: [HACKERS] 6.6 release\n\n\nThere have been some people who have said they want a 6.6 release with\nbeta to start on February 1. They are Tom Lane, Thomas Lockhart, and\nmyself. Jan and Peter Eisentraut have said they will be ready on that\ndate.\n\nSeems foreign key ability would be enough to justify a 6.6.\n\nComments?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n\n************\n",
"msg_date": "Fri, 10 Dec 1999 07:43:07 -0000",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] 6.6 release"
},
{
"msg_contents": "Peter Mount <[email protected]> writes:\n> It looks like it may be an idea now, as for some reason, some parts of\n> the 6.5.3 JDBC driver isn't in 6.5.3?\n> We had a similar problem with 6.5.2, so before 6.5.3 was released, I\n> checked CVS to make sure the changes were there, and they were.\n\nThey may be in the tip, but are they in the REL6_5_PATCHES branch?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Dec 1999 03:11:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release "
}
] |
[
{
"msg_contents": "I'm also confused. So far, I've been working on the premise that the\nnext release would be 7.0 because of the probably major additions\nexpected, and that I'm hitting the JDBC driver hard to get as much of\nthe 2.0 spec complete as is possible.\n\nI think, if the other changes are going to be that long, the version for\nbeta on Feb 1st should be 7.0, and have WAL (and others) for 8.0.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: The Hermit Hacker [mailto:[email protected]]\nSent: Friday, December 10, 1999 7:09 AM\nTo: Tom Lane\nCc: Bruce Momjian; PostgreSQL-development\nSubject: Re: [HACKERS] 6.6 release \n\n[snipped toms comments]\n\nWait, now I'm confused...so between 6.6 and 7, we're talking another\nyear\nanyway? *raised eyebrow* Just curious about your 'long slog' above :)\n\nHere's a question...should we beta on Feb 1st but make it 7.0? If we\nare\ngoing to be looking for a \"long slog\" for 7, why not \"freeze\" things on\nFeb 1st as v7, and start working on v8 with WAL, long tuples, etc,\netc...\n\nLike, what point do we call things a major release? In a sense, MVCC\nprobably should have been considered a large enough overhaul to warrant\n7.0, no?\n\nMarc G. Fournier ICQ#7615664 IRC Nick:\nScrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org \n\n\n************\n",
"msg_date": "Fri, 10 Dec 1999 07:53:05 -0000",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] 6.6 release "
},
{
"msg_contents": "Peter Mount <[email protected]> writes:\n> I'm also confused. So far, I've been working on the premise that the\n> next release would be 7.0 because of the probably major additions\n> expected, and that I'm hitting the JDBC driver hard to get as much of\n> the 2.0 spec complete as is possible.\n\nThat was what I was thinking also, until yesterday. I think that the\nproposal on the table is simply to consolidate/debug what we've already\ndone and push it out the door. If you've still got substantial work\nleft to finish JDBC 2.0, then it'd be better left for the next release.\n\nI know I have a lot of little loose ends dangling on stuff that's\nalready \"done\", and a long list of nitty little bugs to fix, so it\nmakes sense to me to spend some time in fix-bugs-and-make-a-release\nmode before going back into long-haul-feature-development mode.\nNow, if other people don't have that feeling, maybe the idea of\na near-term release isn't so hot after all.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Dec 1999 03:18:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release "
},
{
"msg_contents": "On 1999-12-10, Tom Lane mentioned:\n\n> I know I have a lot of little loose ends dangling on stuff that's\n> already \"done\", and a long list of nitty little bugs to fix, so it\n> makes sense to me to spend some time in fix-bugs-and-make-a-release\n> mode before going back into long-haul-feature-development mode.\n> Now, if other people don't have that feeling, maybe the idea of\n> a near-term release isn't so hot after all.\n\nI do have that feeling. That's better than tying up the loose ends and\nfixing the nitty little bugs half a year from now when you have no clue\nwhere that list went ...\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sat, 11 Dec 1999 03:01:22 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release "
}
] |
[
{
"msg_contents": "I thought they were, but it's possible as I don't really know CVS that\nwell.\n\nPeter\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Friday, December 10, 1999 8:11 AM\nTo: Peter Mount\nCc: PostgreSQL-development\nSubject: Re: [HACKERS] 6.6 release \n\n\nPeter Mount <[email protected]> writes:\n> It looks like it may be an idea now, as for some reason, some parts of\n> the 6.5.3 JDBC driver isn't in 6.5.3?\n> We had a similar problem with 6.5.2, so before 6.5.3 was released, I\n> checked CVS to make sure the changes were there, and they were.\n\nThey may be in the tip, but are they in the REL6_5_PATCHES branch?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Dec 1999 08:12:57 -0000",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] 6.6 release "
}
] |
[
{
"msg_contents": "Ok, if we go for a 6.6, then we will need to make sure the current\nsources for JDBC are included in it (The stuff I have for 7.0 I've kept\nseparate).\n\nI'll keep on plodding along with a \"7.0\" version of the driver, but I\nwon't commit anything until either 6.6 is out, or we decide that 7.0\nwould be imminent.\n\nPeter\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Friday, December 10, 1999 8:19 AM\nTo: Peter Mount\nCc: 'The Hermit Hacker'; Bruce Momjian; PostgreSQL-development\nSubject: Re: [HACKERS] 6.6 release \n\n\nPeter Mount <[email protected]> writes:\n> I'm also confused. So far, I've been working on the premise that the\n> next release would be 7.0 because of the probably major additions\n> expected, and that I'm hitting the JDBC driver hard to get as much of\n> the 2.0 spec complete as is possible.\n\nThat was what I was thinking also, until yesterday. I think that the\nproposal on the table is simply to consolidate/debug what we've already\ndone and push it out the door. If you've still got substantial work\nleft to finish JDBC 2.0, then it'd be better left for the next release.\n\nI know I have a lot of little loose ends dangling on stuff that's\nalready \"done\", and a long list of nitty little bugs to fix, so it\nmakes sense to me to spend some time in fix-bugs-and-make-a-release\nmode before going back into long-haul-feature-development mode.\nNow, if other people don't have that feeling, maybe the idea of\na near-term release isn't so hot after all.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Dec 1999 08:30:52 -0000",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] 6.6 release "
},
{
"msg_contents": "> > I'm also confused. So far, I've been working on the premise that the\n> > next release would be 7.0 because of the probably major additions\n> > expected, and that I'm hitting the JDBC driver hard to get as much of\n> > the 2.0 spec complete as is possible.\n\nOK, now *I'm* confused too! Peter, what in your stuff *requires* a\nversion renumbering to 7.0? The proposal was that we consolidate\nchanges in the backend server for a 6.6 release. Why does JDBC need to\nwait for a \"7.0\" in the version number to support the 2.0 spec?\n\n> That was what I was thinking also, until yesterday. I think that the\n> proposal on the table is simply to consolidate/debug what we've already\n> done and push it out the door. If you've still got substantial work\n> left to finish JDBC 2.0, then it'd be better left for the next release.\n\nRight.\n\n> I know I have a lot of little loose ends dangling on stuff that's\n> already \"done\", and a long list of nitty little bugs to fix, so it\n> makes sense to me to spend some time in fix-bugs-and-make-a-release\n> mode before going back into long-haul-feature-development mode.\n> Now, if other people don't have that feeling, maybe the idea of\n> a near-term release isn't so hot after all.\n\nYes I've got that feeling too!! :)\n\nMarc, I'd like to understand why we are pushing 7.0 for this \"release\nwhere we are\" release. We've (perhaps) got FK support, and a rewritten\npsql, and lots of bug fixes, and maybe \"join syntax\" but not outer\njoins. If we release as 7.0, then I'll force the date/time\nreunification into this release, since it is a pretty big change to\nthe backend tables (I've been waiting quite a while already for the\nmajor rev jump to do this).\n\nBut we won't have WAL, outer joins, rewritten query tree, etc etc so\nwhy are we pushing the major rev jump now? imho rewriting the query\ntree, which affects the parser, planner, optimizer, and perhaps\nexecutor, is as invasive as we'll get; that and WAL should trigger\n7.0. \n\nbtw, I'm not really happy with the prospect/suggestion of going from\n7.0 to 8.0 in a short time period; one of things I'm most satisfied\nwith in our development is that we have significant minor releases and\nthat we haven't succumbed to the \"major rev only\" marketing driven\nploys of the big guys...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 10 Dec 1999 15:43:31 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "> Marc, I'd like to understand why we are pushing 7.0 for this \"release\n> where we are\" release. We've (perhaps) got FK support, and a rewritten\n> psql, and lots of bug fixes, and maybe \"join syntax\" but not outer\n> joins. If we release as 7.0, then I'll force the date/time\n> reunification into this release, since it is a pretty big change to\n> the backend tables (I've been waiting quite a while already for the\n> major rev jump to do this).\n\nOne issue is that while we all want WAL and new query structure and\nstuff like that, we don't have end users asking for this repeatedly. \nWhat we do have them asking for is foreign keys.\n\nThe major issue seems to be that the 7.0 release is going to have major\nincompatibilities for prior releases in the area of date types, and\nstuff like that. With all we are doing, I am not sure that is even\ngoing to work because we can't synchonize all the incompatibility stuff\nfor one release.\n\nMaybe we just call it 7.0, and have some more incompatibility stuff in\n7.1. Seems waiting for some .0 release is not going to work, unless we\nscrap the Feb 1 beta and just wait for all new stuff to be finished, but\nthat seems worse than having a 7.1 that contains some incompatiblities.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 11:51:46 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "On Fri, 10 Dec 1999, Thomas Lockhart wrote:\n\n> Marc, I'd like to understand why we are pushing 7.0 for this \"release\n> where we are\" release. We've (perhaps) got FK support, and a rewritten\n> psql, and lots of bug fixes, and maybe \"join syntax\" but not outer\n> joins. If we release as 7.0, then I'll force the date/time\n> reunification into this release, since it is a pretty big change to\n> the backend tables (I've been waiting quite a while already for the\n> major rev jump to do this).\n> \n> But we won't have WAL, outer joins, rewritten query tree, etc etc so\n> why are we pushing the major rev jump now? imho rewriting the query\n> tree, which affects the parser, planner, optimizer, and perhaps\n> executor, is as invasive as we'll get; that and WAL should trigger\n> 7.0. \n> \n> btw, I'm not really happy with the prospect/suggestion of going from\n> 7.0 to 8.0 in a short time period; one of things I'm most satisfied\n> with in our development is that we have significant minor releases and\n> that we haven't succumbed to the \"major rev only\" marketing driven\n> ploys of the big guys...\n\nFreeBSD (my role model, always has been) has two trees right now...4.0,\nwhich is the development tree (ie. what I'm proposing as our 8.0), and,\ncurrently, 3.3 for their stable tree. Anything new and wonderful goes\ninto 4.0...anything deemed \"safe\" gets back patched to 3.x and\nperiodically released.\n\nThe idea is that anyone can throw anything (within reason) into the 8.0\ntree while we still have a stable branch to work on and make releases\non...so any \"safe features\" can be back-patched to 7.x. \n\nDamn damn damn...I can never explain these things right. The 7.x would,\n*at all times* maintain database compatibility with any 7.x release...I\ncould cvsup down the newest source, build and install it, without any risk\nto my current databases...but still get access to a newer feature\nset. After a few months of development, like now, we freeze the 7.x\nbranch and do up a release (7.1) that packages things up.\n\nFor instance, if you look at Hub, its running 3.4-RC right now...FreeBSD\njust did a 'freeze' for a 3.4 release, and because Hub has its kernel\nupdated periodically through cvsup, the 'uname -a' output changes with...I\nbasically keep up with the latest *stable* version of FreeBSD on Hub, but\nmy home machine, using the same mechanism, runs 4.0-CURRENT, a totally\ndevelopmental/experimental version...\n\nI think the project has gotten to such a size, and such a number of\ndevelopers, that this is feasible to do...we'd still have our major\nreleases, but only have minor, not minor.minor releases...\n\nInstead of v6.5.1 after a month of v6.5 being released, we'd have released\nv6.6 as being the more current stable version...its just taking things one\nstep further then what we've done recently with the release of v6.5.3...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 10 Dec 1999 12:54:48 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> One issue is that while we all want WAL and new query structure and\n> stuff like that, we don't have end users asking for this repeatedly.\n> What we do have them asking for is foreign keys.\n>\n> The major issue seems to be that the 7.0 release is going to have major\n> incompatibilities for prior releases in the area of date types, and\n> stuff like that. With all we are doing, I am not sure that is even\n> going to work because we can't synchonize all the incompatibility stuff\n> for one release.\n>\n> Maybe we just call it 7.0, and have some more incompatibility stuff in\n> 7.1. Seems waiting for some .0 release is not going to work, unless we\n> scrap the Feb 1 beta and just wait for all new stuff to be finished, but\n> that seems worse than having a 7.1 that contains some incompatiblities.\n\nNow that you say it,\n\n not just maybe, definitely call it 7.0!\n\n As said on the phone, the deferred trigger queue required for\n the FOREIGN KEY stuff delays all AFTER ROW trigger for\n execution at least past the entire statement.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 10 Dec 1999 18:10:20 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "> I think the project has gotten to such a size, and such a number of\n> developers, that this is feasible to do...we'd still have our major\n> releases, but only have minor, not minor.minor releases...\n\nHmm. Pretty sure I don't agree that we have enough developers to\nhandle this...\n\n> Instead of v6.5.1 after a month of v6.5 being released, we'd have released\n> v6.6 as being the more current stable version...its just taking things one\n> step further then what we've done recently with the release of v6.5.3...\n\nOK, I *think* I understand your suggestion. If that is the way the\nproject goes, OK, but I'm not happy about it, really. If we had been\ndoing this scheme since v6.0, we would have gone from v6.0 to v11.3 in\n2.5-3 years, with (from my saved tarballs and the release notes):\n\nv6.0 (6.0 series)\nv7.0 (6.1 series)\nv7.1\nv8.0 (6.2 series)\nv8.1\nv9.0 (6.3 series)\nv9.1\nv9.2\nv10.0 (v6.4 series)\nv10.1\nv10.2\nv11.0 (6.5 series)\nv11.1\nv11.2\nv11.3\n\nOh, btw, virtually no minor release has new features (since they all\npreserve DB contents and structure), just fixes for code breakage.\n\nI'd like to put dates on the releases, to point out that in several\ninstances we went from vX.0 to vX.1 in two to four weeks :(\n\nActually, this is the slippery road to name and version escalation: we\nshould have released \"PostgreSQL+\", \"PostgreSQL Pro\", \"PostgreSQL\nDevelopers Edition\", \"PostgreSQL++\", \"PostgreSQL II\", \"PostgreSQL\nPro+\", etc by now ;)\n\nThat way, we can have a v2.0 of a bunch of products, and people will\nthink we're doing real development without ever checking that we are.\nWorks for other folks, but I don't see what it buys us.\n\nOK, I've had a bit of fun with this, and I'll shut up now (well,\nmaybe), but I don't think that escalating our version numbering fixes\nproblems, and just means that we have a \"R10\" (a la \"Y2K\") problem\nsooner rather than later.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 10 Dec 1999 17:21:14 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "> > I think the project has gotten to such a size, and such a number of\n> > developers, that this is feasible to do...we'd still have our major\n> > releases, but only have minor, not minor.minor releases...\n> \n> Hmm. Pretty sure I don't agree that we have enough developers to\n> handle this...\n\nAgreed.\n\n> OK, I've had a bit of fun with this, and I'll shut up now (well,\n> maybe), but I don't think that escalating our version numbering fixes\n> problems, and just means that we have a \"R10\" (a la \"Y2K\") problem\n> sooner rather than later.\n\nAgreed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 12:21:24 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "\nIncompatibilities from one release to the next *has* to bump the major\nversion...a minor number should be a *minor* upgrade, plain and simple...\n\nOn Fri, 10 Dec 1999, Bruce Momjian wrote:\n\n> > Marc, I'd like to understand why we are pushing 7.0 for this \"release\n> > where we are\" release. We've (perhaps) got FK support, and a rewritten\n> > psql, and lots of bug fixes, and maybe \"join syntax\" but not outer\n> > joins. If we release as 7.0, then I'll force the date/time\n> > reunification into this release, since it is a pretty big change to\n> > the backend tables (I've been waiting quite a while already for the\n> > major rev jump to do this).\n> \n> One issue is that while we all want WAL and new query structure and\n> stuff like that, we don't have end users asking for this repeatedly. \n> What we do have them asking for is foreign keys.\n> \n> The major issue seems to be that the 7.0 release is going to have major\n> incompatibilities for prior releases in the area of date types, and\n> stuff like that. With all we are doing, I am not sure that is even\n> going to work because we can't synchonize all the incompatibility stuff\n> for one release.\n> \n> Maybe we just call it 7.0, and have some more incompatibility stuff in\n> 7.1. Seems waiting for some .0 release is not going to work, unless we\n> scrap the Feb 1 beta and just wait for all new stuff to be finished, but\n> that seems worse than having a 7.1 that contains some incompatiblities.\n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 10 Dec 1999 13:22:10 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "> Incompatibilities from one release to the next *has* to bump the major\n> version...a minor number should be a *minor* upgrade, plain and simple...\n\nFine. But I'm happy with \"minor\" Postgres improvements counting as\n\"major\" for other packages. We're doing a better job then lots of\ncommercial companies in improving the product; I'd hate to try\nmatching some of their pathetic release bumps in our version numbering\nsince by that standard we should be *skipping* some of the whole\nnumbers.\n\nLets see, \n\nSolaris 2.7 == SunOS 5.5 (or is it 5.4?) == Solaris 7\nJDK1.2 == Java1.2 == Java 2\nWin98 != Win98 Rel2 != Win98 Rel2 Hotfix x != ...\n\nYuck.\n\nimo the *only* reason we are tempted to do more major releases is that\nwe are too lazy/understaffed/sensible (you pick it) to support\nmultiple db formats for our compiled code. Other commercial DBs don't\nrelease often, and they don't include big improvements, but they *do*\ninclude support for multiple db formats/schemas in their product, so\nyou aren't forced into an initdb for each release. Instead they\ninclude klugy workaround code to allow reading older formats with the\nnewer version.\n\nGood things are being said about us, and people are noticing that the\nproduct has improved from v6.0 to v6.5. We don't need to be at v11.0\nto get noticed; in fact it may look a little silly...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 10 Dec 1999 17:41:01 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "On 1999-12-10, The Hermit Hacker mentioned:\n\n> Damn damn damn...I can never explain these things right. The 7.x would,\n> *at all times* maintain database compatibility with any 7.x release...I\n> could cvsup down the newest source, build and install it, without any risk\n> to my current databases...but still get access to a newer feature\n\nIn general, I like that concept, but I don't see that happening. With\nevery third patch \"requiring initdb\" you would potentially stall certain\nareas of development indefinitely with your requirement. Unless we dream\nup some way to dynamically adjust outdated system catalogues ...\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sat, 11 Dec 1999 03:00:57 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "On 1999-12-10, Bruce Momjian mentioned:\n\n> Maybe we just call it 7.0, and have some more incompatibility stuff in\n> 7.1. Seems waiting for some .0 release is not going to work, unless we\n> scrap the Feb 1 beta and just wait for all new stuff to be finished, but\n> that seems worse than having a 7.1 that contains some incompatiblities.\n\nWhat kind of incompatibilities are we talking about here really? Is there\nanything that can't be resolved via\n* big warning signs\n* pg_dump or (to be created) friends\n* supporting the old stuff for a while as well\n* automated conversion of the things using the old stuff\n* informative documents outlining the reason of the change and how to\ncope with it?\n\nThings change all the time, that's a fact of life.\n\nIf foreign keys get done this is definitely the greatest thing in the\nworld for the end user, so 7.0 is a good name. \n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sat, 11 Dec 1999 03:01:09 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "\nIncompatibilities is a simple concept..if it requires an initdb, its\nincompatible, period...if a pg_dump has to be performed, its\nincompatible...if it requires me to do more then a 'make install' and a\nrestart of the server, its incompatible...if it requires me to recompile\nany of my binaries, its incompatible...\n\nNot all changes that are made require changes to system tables...\n\nOn Sat, 11 Dec 1999, Peter Eisentraut wrote:\n\n> On 1999-12-10, Bruce Momjian mentioned:\n> \n> > Maybe we just call it 7.0, and have some more incompatibility stuff in\n> > 7.1. Seems waiting for some .0 release is not going to work, unless we\n> > scrap the Feb 1 beta and just wait for all new stuff to be finished, but\n> > that seems worse than having a 7.1 that contains some incompatiblities.\n> \n> What kind of incompatibilities are we talking about here really? Is there\n> anything that can't be resolved via\n> * big warning signs\n> * pg_dump or (to be created) friends\n> * supporting the old stuff for a while as well\n> * automated conversion of the things using the old stuff\n> * informative documents outlining the reason of the change and how to\n> cope with it?\n> \n> Things change all the time, that's a fact of life.\n> \n> If foreign keys get done this is definitely the greatest thing in the\n> world for the end user, so 7.0 is a good name. \n> \n> -- \n> Peter Eisentraut Sernanders v�g 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 11 Dec 1999 15:06:25 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> btw, I'm not really happy with the prospect/suggestion of going from\n> 7.0 to 8.0 in a short time period; one of things I'm most satisfied\n ^^^^^^^^^^\n> with in our development is that we have significant minor releases and\n> that we haven't succumbed to the \"major rev only\" marketing driven\n> ploys of the big guys...\n\nI agreed! I propose to name the next release as 6.6 \nand the \"WAL\" release as 7.0 or 6.7, but not 8.0...\n\nVadim\n",
"msg_date": "Sun, 12 Dec 1999 18:55:38 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "Vadim Mikheev wrote:\n> \n> Thomas Lockhart wrote:\n> >\n> > btw, I'm not really happy with the prospect/suggestion of going from\n> > 7.0 to 8.0 in a short time period; one of things I'm most satisfied\n> ^^^^^^^^^^\n> > with in our development is that we have significant minor releases and\n> > that we haven't succumbed to the \"major rev only\" marketing driven\n> > ploys of the big guys...\n> \n> I agreed! I propose to name the next release as 6.6\n ^^^\n or 7.0\n\n> and the \"WAL\" release as 7.0 or 6.7, but not 8.0...\n ^^^\n and 7.1\n\nVadim\n",
"msg_date": "Sun, 12 Dec 1999 19:22:09 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "\n7.0 and 7.1 I could live with...\n\n\nOn Sun, 12 Dec 1999, Vadim Mikheev wrote:\n\n> Vadim Mikheev wrote:\n> > \n> > Thomas Lockhart wrote:\n> > >\n> > > btw, I'm not really happy with the prospect/suggestion of going from\n> > > 7.0 to 8.0 in a short time period; one of things I'm most satisfied\n> > ^^^^^^^^^^\n> > > with in our development is that we have significant minor releases and\n> > > that we haven't succumbed to the \"major rev only\" marketing driven\n> > > ploys of the big guys...\n> > \n> > I agreed! I propose to name the next release as 6.6\n> ^^^\n> or 7.0\n> \n> > and the \"WAL\" release as 7.0 or 6.7, but not 8.0...\n> ^^^\n> and 7.1\n> \n> Vadim\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 12 Dec 1999 14:01:20 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n>> I agreed! I propose to name the next release as 6.6\n> ^^^\n> or 7.0\n>> and the \"WAL\" release as 7.0 or 6.7, but not 8.0...\n> ^^^\n> and 7.1\n\n7.0 and 7.1 seem like the worst choice of names to me. We are not\nplanning any major new features for the Feb release (except for whatever\npart of foreign key support Jan has working by then). There will be\nsome major new features for the release-after-that: WAL, some kind of\nanswer for the long-tuple problem, etc. etc. So it'd be very confusing\nto users to call this one a \"major\" version bump, when it will have less\nnew stuff in it than the \"minor\" version bumps before and after.\n\nI could live with 7.0 and then 8.0, if we were going to switch to\ntwo-part instead of three-part version numbering. But I agree with\nThomas that I'd rather stick to the convention we have been using.\nIf we are going to be consistent with the way we have named prior\nreleases, it seems to me that there is no choice: the Feb release\nis 6.6, and the one after it will be 7.0 (or maybe even 6.7).\n\nI also would rather do it that way because I think the idea is to\nwrap up *what we have now* and get it out. If we call the Feb release\n7.0, then Thomas will want to cram in date/time type consolidation work\nthat (AFAIK) he hasn't even started on, and there'll be great temptation\nto try to squeeze in other half-baked stuff in order to try to justify\ncalling this a major version bump. That's completely at odds with what\nI thought the proposal of a near-term release was all about.\n\nBasically, if people insist that the next release should be called 7.0,\nI'd be inclined to forget about a near-term release and go back to\nPlan A: keep working on it until we have enough stuff done to justify\ncalling it 7.0.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Dec 1999 13:28:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release "
},
{
"msg_contents": "> Vadim Mikheev <[email protected]> writes:\n> >> I agreed! I propose to name the next release as 6.6\n> > ^^^\n> > or 7.0\n> >> and the \"WAL\" release as 7.0 or 6.7, but not 8.0...\n> > ^^^\n> > and 7.1\n> \n> 7.0 and 7.1 seem like the worst choice of names to me. We are not\n> planning any major new features for the Feb release (except for whatever\n> part of foreign key support Jan has working by then). There will be\n> some major new features for the release-after-that: WAL, some kind of\n> answer for the long-tuple problem, etc. etc. So it'd be very confusing\n> to users to call this one a \"major\" version bump, when it will have less\n> new stuff in it than the \"minor\" version bumps before and after.\n> \n> I could live with 7.0 and then 8.0, if we were going to switch to\n> two-part instead of three-part version numbering. But I agree with\n> Thomas that I'd rather stick to the convention we have been using.\n> If we are going to be consistent with the way we have named prior\n> releases, it seems to me that there is no choice: the Feb release\n> is 6.6, and the one after it will be 7.0 (or maybe even 6.7).\n> \n> I also would rather do it that way because I think the idea is to\n> wrap up *what we have now* and get it out. If we call the Feb release\n> 7.0, then Thomas will want to cram in date/time type consolidation work\n> that (AFAIK) he hasn't even started on, and there'll be great temptation\n> to try to squeeze in other half-baked stuff in order to try to justify\n> calling this a major version bump. That's completely at odds with what\n> I thought the proposal of a near-term release was all about.\n> \n> Basically, if people insist that the next release should be called 7.0,\n> I'd be inclined to forget about a near-term release and go back to\n> Plan A: keep working on it until we have enough stuff done to justify\n> calling it 7.0.\n\nLet's look at the 7.0 features list:\n\n Foreign Keys - Jan\n WAL - Vadim\n Function args - Tom\n System indexes - Bruce\n Date/Time types - Thomas\n Optimizer - Tom\n\n Outer Joins - Thomas?\n Long Tuples - ?\n\nWe have foreign keys and long tuples in Feb 1. Jan says on�long tuples:\n\n I thought about the huge size variable text type a little\n more. And I think I could get the following implementation\n to work reliable for our upcoming release.\n\nThe more we explore long tuples, it seems easier than expected. \nChaining tuples was going to be hard. The new way is more efficient, and\neasier.\n\nI assume Thomas may do the date/time for Feb 1 because it mostly\nremoving old types, I think.\n\nSo, we will not have WAL for Feb 1, but people are clammoring for\nforeign keys and long tuples. I think 7.0 is good for Feb 1. We can add\nWAL in 7.1.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Dec 1999 15:59:27 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "\nOkay, this whole thread could continue going back and forth for the next 6\nmonths and we may as well wait for WAL :)\n\nIt is agreed that Feb 1st is the beta date...it will not include WAL, but\nwill be numbered v7.0, with v7.1 going BETA as soon as Vadim feels\nprepared with the WAL code...\n\nAltho I would personally like to get rid of the major.minor.minor\nnumbering scheme, and just have it major.minor, the arguments against vs\nfor outweigh, so we'll stick with what we've always had in that regard...\n\nOn Feb 1st, the CVS repository will be branched, like we did on the last\nrelease, so that we can beta/debug 7.0 *without* interfering with\ndevelopment on 7.1. This has proven to work quite well with v6.5.x, as\nfar as I'm concerned...since, once we go beta, there are to be no new\nfeatures, only bug fixes, this shouldn't affect anyone, eh? :)\n\n\n\nOn Sun, 12 Dec 1999, Bruce Momjian wrote:\n\n> > Vadim Mikheev <[email protected]> writes:\n> > >> I agreed! I propose to name the next release as 6.6\n> > > ^^^\n> > > or 7.0\n> > >> and the \"WAL\" release as 7.0 or 6.7, but not 8.0...\n> > > ^^^\n> > > and 7.1\n> > \n> > 7.0 and 7.1 seem like the worst choice of names to me. We are not\n> > planning any major new features for the Feb release (except for whatever\n> > part of foreign key support Jan has working by then). There will be\n> > some major new features for the release-after-that: WAL, some kind of\n> > answer for the long-tuple problem, etc. etc. So it'd be very confusing\n> > to users to call this one a \"major\" version bump, when it will have less\n> > new stuff in it than the \"minor\" version bumps before and after.\n> > \n> > I could live with 7.0 and then 8.0, if we were going to switch to\n> > two-part instead of three-part version numbering. But I agree with\n> > Thomas that I'd rather stick to the convention we have been using.\n> > If we are going to be consistent with the way we have named prior\n> > releases, it seems to me that there is no choice: the Feb release\n> > is 6.6, and the one after it will be 7.0 (or maybe even 6.7).\n> > \n> > I also would rather do it that way because I think the idea is to\n> > wrap up *what we have now* and get it out. If we call the Feb release\n> > 7.0, then Thomas will want to cram in date/time type consolidation work\n> > that (AFAIK) he hasn't even started on, and there'll be great temptation\n> > to try to squeeze in other half-baked stuff in order to try to justify\n> > calling this a major version bump. That's completely at odds with what\n> > I thought the proposal of a near-term release was all about.\n> > \n> > Basically, if people insist that the next release should be called 7.0,\n> > I'd be inclined to forget about a near-term release and go back to\n> > Plan A: keep working on it until we have enough stuff done to justify\n> > calling it 7.0.\n> \n> Let's look at the 7.0 features list:\n> \n> Foreign Keys - Jan\n> WAL - Vadim\n> Function args - Tom\n> System indexes - Bruce\n> Date/Time types - Thomas\n> Optimizer - Tom\n> \n> Outer Joins - Thomas?\n> Long Tuples - ?\n> \n> We have foreign keys and long tuples in Feb 1. Jan says on�long tuples:\n> \n> I thought about the huge size variable text type a little\n> more. And I think I could get the following implementation\n> to work reliable for our upcoming release.\n> \n> The more we explore long tuples, it seems easier than expected. \n> Chaining tuples was going to be hard. 
The new way is more efficient, and\n> easier.\n> \n> I assume Thomas may do the date/time for Feb 1 because it mostly\n> removing old types, I think.\n> \n> So, we will not have WAL for Feb 1, but people are clammoring for\n> foreign keys and long tuples. I think 7.0 is good for Feb 1. We can add\n> WAL in 7.1.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 12 Dec 1999 18:46:51 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> It is agreed that Feb 1st is the beta date...it will not include WAL, but\n> will be numbered v7.0, with v7.1 going BETA as soon as Vadim feels\n> prepared with the WAL code...\n\nOK, it's decided. Let's quit arguing.\n\n> On Feb 1st, the CVS repository will be branched, like we did on the last\n> release, so that we can beta/debug 7.0 *without* interfering with\n> development on 7.1. This has proven to work quite well with v6.5.x,\n\nActually, I thought what worked well was to postpone the branch as long\nas possible. Double-patching is no fun...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Dec 1999 17:50:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release "
},
{
"msg_contents": "> The Hermit Hacker <[email protected]> writes:\n> > It is agreed that Feb 1st is the beta date...it will not include WAL, but\n> > will be numbered v7.0, with v7.1 going BETA as soon as Vadim feels\n> > prepared with the WAL code...\n> \n> OK, it's decided. Let's quit arguing.\n> \n> > On Feb 1st, the CVS repository will be branched, like we did on the last\n> > release, so that we can beta/debug 7.0 *without* interfering with\n> > development on 7.1. This has proven to work quite well with v6.5.x,\n> \n> Actually, I thought what worked well was to postpone the branch as long\n> as possible. Double-patching is no fun...\n\nDitto. Look at the 6_5 branch and you will see it was done far into the\n6.5 release, not at the 6.5.0 release. I don't want to continue\nmentioning this for every release.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Dec 1999 17:54:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "On Sun, 12 Dec 1999, Bruce Momjian wrote:\n\n> > The Hermit Hacker <[email protected]> writes:\n> > > It is agreed that Feb 1st is the beta date...it will not include WAL, but\n> > > will be numbered v7.0, with v7.1 going BETA as soon as Vadim feels\n> > > prepared with the WAL code...\n> > \n> > OK, it's decided. Let's quit arguing.\n> > \n> > > On Feb 1st, the CVS repository will be branched, like we did on the last\n> > > release, so that we can beta/debug 7.0 *without* interfering with\n> > > development on 7.1. This has proven to work quite well with v6.5.x,\n> > \n> > Actually, I thought what worked well was to postpone the branch as long\n> > as possible. Double-patching is no fun...\n> \n> Ditto. Look at the 6_5 branch and you will see it was done far into the\n> 6.5 release, not at the 6.5.0 release. I don't want to continue\n> mentioning this for every release.\n\nThe branch should be created on release, not after the release, else the\nbranch is useless...sorry, think I said something wrong\noriginally...didn't mean 'on beta', meant 'on release'...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 12 Dec 1999 19:41:36 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "> > Ditto. Look at the 6_5 branch and you will see it was done far into the\n> > 6.5 release, not at the 6.5.0 release. I don't want to continue\n> > mentioning this for every release.\n> \n> The branch should be created on release, not after the release, else the\n> branch is useless...sorry, think I said something wrong\n> originally...didn't mean 'on beta', meant 'on release'...\n\nNo.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Dec 1999 20:56:30 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "> > The Hermit Hacker <[email protected]> writes:\n> > > It is agreed that Feb 1st is the beta date...it will not include WAL, but\n> > > will be numbered v7.0, with v7.1 going BETA as soon as Vadim feels\n> > > prepared with the WAL code...\n> > \n> > OK, it's decided. Let's quit arguing.\n> > \n> > > On Feb 1st, the CVS repository will be branched, like we did on the last\n> > > release, so that we can beta/debug 7.0 *without* interfering with\n> > > development on 7.1. This has proven to work quite well with v6.5.x,\n> > \n> > Actually, I thought what worked well was to postpone the branch as long\n> > as possible. Double-patching is no fun...\n> \n> Ditto. Look at the 6_5 branch and you will see it was done far into the\n> 6.5 release, not at the 6.5.0 release. I don't want to continue\n> mentioning this for every release.\n\n6_5 branch was after 6.5.1 release.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Dec 1999 21:22:11 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>>>> Ditto. Look at the 6_5 branch and you will see it was done far into the\n>>>> 6.5 release, not at the 6.5.0 release. I don't want to continue\n>>>> mentioning this for every release.\n>> \n>> The branch should be created on release, not after the release, else the\n>> branch is useless...sorry, think I said something wrong\n>> originally...didn't mean 'on beta', meant 'on release'...\n\n> No.\n\nBruce is right: we should delay making a branch for REL7_0 patches\nuntil people are ready to start committing new features for 7.1.\nI'm guessing that would be a month or two after formal release of 7.0.\nWe did that after the 6.5 release (IIRC, we didn't make the branch\nuntil around the time of 6.5.2) and I thought it worked just great;\nsaved a lot of double-patching.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Dec 1999 21:44:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release "
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> Good things are being said about us, and people are noticing that the\n> product has improved from v6.0 to v6.5. We don't need to be at v11.0\n> to get noticed; in fact it may look a little silly...\n\nAgreed again! I would be happy with 6.X up to 6.9 -:)\n\nVadim\n",
"msg_date": "Tue, 14 Dec 1999 10:12:41 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "On Tue, 14 Dec 1999, Vadim Mikheev wrote:\n\n> Thomas Lockhart wrote:\n> > \n> > Good things are being said about us, and people are noticing that the\n> > product has improved from v6.0 to v6.5. We don't need to be at v11.0\n> > to get noticed; in fact it may look a little silly...\n> \n> Agreed again! I would be happy with 6.X up to 6.9 -:)\n\nAt an ~2year development cycle for each major, it would take us ~6 years\nto attain v10...I think we are safe :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 13 Dec 1999 23:24:05 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "On Fri, 10 Dec 1999, Thomas Lockhart wrote:\n\n> imo the *only* reason we are tempted to do more major releases is that\n> we are too lazy/understaffed/sensible (you pick it) to support\n> multiple db formats for our compiled code. Other commercial DBs don't\n> release often, and they don't include big improvements, but they *do*\n> include support for multiple db formats/schemas in their product, so\n> you aren't forced into an initdb for each release. Instead they\n> include klugy workaround code to allow reading older formats with the\n> newer version.\n\nThen why don't we come up with something to autoconvert the user's\ndatabases without having to dump/initdb/reload? Or is that just\nnot feasable (impossible's an answer I'd find hard to believe, but\nmore trouble than it's worth is understandable).\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 14 Dec 1999 05:45:17 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "> On Fri, 10 Dec 1999, Thomas Lockhart wrote:\n> \n> > imo the *only* reason we are tempted to do more major releases is that\n> > we are too lazy/understaffed/sensible (you pick it) to support\n> > multiple db formats for our compiled code. Other commercial DBs don't\n> > release often, and they don't include big improvements, but they *do*\n> > include support for multiple db formats/schemas in their product, so\n> > you aren't forced into an initdb for each release. Instead they\n> > include klugy workaround code to allow reading older formats with the\n> > newer version.\n> \n> Then why don't we come up with something to autoconvert the user's\n> databases without having to dump/initdb/reload? Or is that just\n> not feasable (impossible's an answer I'd find hard to believe, but\n> more trouble than it's worth is understandable).\n\nSystem table changes often make that difficult. pg_upgrade does most of\nwhat we want by keeping the disk tables and allowing initdb. If we\ndon't change the on-disk structure of user tables, pg_upgrade allows\nquick upgrades. Not sure 7.0 will allow the use of pg_upgrade. 6.5 did\nnot because the on-disk table structure changed with MVCC.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 14 Dec 1999 11:27:47 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "Is there a writeup of the version numbering and CVS branch scheme in any of the\nFAQs?\n\nIf not, there should be.\n\nSteve\n\n\nBruce Momjian wrote:\n\n> > The Hermit Hacker <[email protected]> writes:\n> > > It is agreed that Feb 1st is the beta date...it will not include WAL, but\n> > > will be numbered v7.0, with v7.1 going BETA as soon as Vadim feels\n> > > prepared with the WAL code...\n> >\n> > OK, it's decided. Let's quit arguing.\n> >\n> > > On Feb 1st, the CVS repository will be branched, like we did on the last\n> > > release, so that we can beta/debug 7.0 *without* interfering with\n> > > development on 7.1. This has proven to work quite well with v6.5.x,\n> >\n> > Actually, I thought what worked well was to postpone the branch as long\n> > as possible. Double-patching is no fun...\n>\n> Ditto. Look at the 6_5 branch and you will see it was done far into the\n> 6.5 release, not at the 6.5.0 release. I don't want to continue\n> mentioning this for every release.\n>\n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ************\n\n",
"msg_date": "Thu, 30 Dec 1999 05:10:32 -0800",
"msg_from": "Stephen Birch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
},
{
"msg_contents": "> Is there a writeup of the version numbering and CVS branch scheme\n> in any of the FAQs?\n\nThere is no item because there isn't a standard yet.\n\n--\n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 30 Dec 1999 12:03:26 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
}
] |
[
{
"msg_contents": "Hi\n\nJust noticed that limit is ignored when using a select to insert\ninto a table.\n\nEg. insert into mytable (f1, f2) select f1, f2 from myothertable limit 10;\n\nselects all records from myothertable.\n\nUsing the select with limit on it's own works fine.\n\nVersion 6.5.2 on RH6\n\n--------\nRegards\nTheo\n",
"msg_date": "Fri, 10 Dec 1999 12:57:17 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "insert using select with limit"
},
{
"msg_contents": "Theo Kramer <[email protected]> writes:\n> Just noticed that limit is ignored when using a select to insert\n> into a table.\n> Eg. insert into mytable (f1, f2) select f1, f2 from myothertable limit 10;\n> selects all records from myothertable.\n\nUgh, you're right. Not sure if this will be easily fixable or not.\nWorst case, the fix might have to wait for the long-planned query tree\nredesign.\n\nOr it might be a one-liner. Will look into it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Dec 1999 10:41:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] insert using select with limit "
}
] |
[
{
"msg_contents": "At 09:14 PM 12/27/99 -0500, Marc G. Fournier wrote:\n>\n>For those working on INNER/OUTER Joins...any comments? :)\n\nI'm not working on them (or on Postgres at all, other than steadily\nplowing through the code to familiarize myself with it) but I'm\nalways willing to comment...\n\n>\n>> JOIN statement? I take it that this is different then:\n>>\n>> SELECT a.field1, b.field2 from table1 a, table2 b where a.key = b.key\n>\n>ANSI92 supports the far better readable JOIN statement:\n>\n>\n>select a.field1, b.field2\n> from table1 a\n> join table2 b on\n> a.key = b.key\n\nHe's right that they are different, but they give the same result.\n\nWearing my compiler-writer's hat, something like:\n\nselect a.field1, b.field2 from table1 a, table2 b where a.key=b.key\n\nsays \"cross join table1 and table2, then return only those rows \nwhere a.key=b.key\"\n\nin other words, it's not (strictly speaking) an inner join.\n\nHowever...the rows returned by this are the same as the rows\nreturned by an inner join. One could look at the traditional \nimplementation as an inner join as being an OPTIMIZATION of \nthis query. It qualifies as an optimization in the sense that\nit's certainly far faster for the vast majority of such queries!\n\n>From my reading of the standard (or Date's review of it), this\nis really how the standard defines things, i.e. an inner join\nare explicitly given in the \"from\" clause.\n\n>\n>\n>Left outer joins are now easy to:\n>\n>select a.field1, b.field2\n> from table1 a\n> left outer join table2 b on\n> a.key = b.key\n>\n>\n>It generally parses and optimizes faster too. For MS SQL Server I've seen\n>improvements of up to 75% percent: execution time was the same, but the plan\n>was calculated much faster.\n\nThis is a bit surprising to me. One source might be the fact that outer\njoins aren't associative (SQL for smarties gives examples), so outer joins\nappearing in the \"from\" clause may simply force left-to-right execution\nwhich reduces the number of cases a plan optimizer (whatever Sybase/SQL server\nuses) must consider.\n\nOr it may be that SQL server just executes ALL joins, inner or outer,\nexplicitly listed in the \"from\" clause in left-to-right order under\nthe assumption that the programmer knows best. I kinda doubt that,\nthough. If true, it would certainly simplify plan optimization, there\nwouldn't be any other than deciding what kind of join and which indices\nto use for each one (as opposed to figuring out that plus which order\nof execution).\n\n>From my reading of the work done on joins thus far for Postgres, the\nplan optimizer will be fed essentially the same information whether\nan inner join is listed in the \"from\" clause or derived from the\n\"where\" clause, so I wouldn't expect to see such speed ups. The\nnon-associativity of outer joins might impose an ordering on \ninner joins mixed in, though (I haven't thought through the cases,\nagain I'm just reading Postgres code and Date's book on the standard,\nI wrote my first SQL query less than a year ago and am still very\nmuch a novice at all this).\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 10 Dec 1999 03:08:52 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] RE: What database i can use? (fwd)"
},
{
"msg_contents": "\nFor those working on INNER/OUTER Joins...any comments? :)\n\nMarc G. Fournier [email protected]\nSystems Administrator @ hub.org \nscrappy@{postgresql|isc}.org ICQ#7615664\n\n---------- Forwarded message ----------\nDate: Mon, 27 Dec 1999 10:36:52 +0100\nFrom: Berend de Boer <[email protected]>\nTo: 'Marc G. Fournier' <[email protected]>\nCc: [email protected]\nSubject: RE: What database i can use?\n\n> JOIN statement? I take it that this is different then:\n>\n> SELECT a.field1, b.field2 from table1 a, table2 b where a.key = b.key\n\nANSI92 supports the far better readable JOIN statement:\n\n\nselect a.field1, b.field2\n from table1 a\n join table2 b on\n a.key = b.key\n\n\nLeft outer joins are now easy to:\n\nselect a.field1, b.field2\n from table1 a\n left outer join table2 b on\n a.key = b.key\n\n\nIt generally parses and optimizes faster too. For MS SQL Server I've seen\nimprovements of up to 75% percent: execution time was the same, but the plan\nwas calculated much faster.\n\nGroetjes,\n\nBerend. (-:\n\n\n",
"msg_date": "Mon, 27 Dec 1999 21:14:08 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: What database i can use? (fwd)"
},
{
"msg_contents": "> For those working on INNER/OUTER Joins...any comments? :)\n> > JOIN statement? I take it that this is different then:\n> > SELECT a.field1, b.field2 from table1 a, table2 b where a.key = b.key\n> ANSI92 supports the far better readable JOIN statement:\n> select a.field1, b.field2\n> from table1 a\n> join table2 b on\n> a.key = b.key\n\nDon't know why one would consider this better or more readable;\ndepends on your past lives I guess...\n\nSQL92 outer joins use this syntax, but other DBs (claiming SQL92\ncompliance, btw; they usually only meet the lowest defined level of\ncompliance) use a different syntax with no ill effects. We are\nimplementing the SQL92 syntax.\n\n> It generally parses and optimizes faster too. For MS SQL Server I've seen\n> improvements of up to 75% percent: execution time was the same, but the plan\n> was calculated much faster.\n\nI would guess that any speedup would be an indication of a bad\noptimizer, which apparently skips work when given the \"join syntax\".\nIf the statements are equivalent, then one would hope that the\nparser/optimizer would consider the same set of plans to satisfy it.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 28 Dec 1999 06:37:39 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: What database i can use? (fwd)"
}
] |
[
{
"msg_contents": "I'd like to set up a system where every employee can log into an intranet\nserver and enter the time he/she spend on each of the projects. At the end\nof the month I'd like to create a list of time per project from this data.\n\nMy idea was to use PostgreSQL as backend (of course) and a web front-end.\n\nDoes anyone have a similar system running? Or any ideas concerning how to\nset this up and what software to use?\n\nThanks in advance.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Fri, 10 Dec 1999 14:27:51 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "question"
},
{
"msg_contents": "On Fri, 10 Dec 1999, Michael Meskes wrote:\n\n> I'd like to set up a system where every employee can log into an intranet\n> server and enter the time he/she spend on each of the projects. At the end\n> of the month I'd like to create a list of time per project from this data.\n> \n> My idea was to use PostgreSQL as backend (of course) and a web front-end.\n> \n> Does anyone have a similar system running? Or any ideas concerning how to\n> set this up and what software to use?\n\nIt would seem rather trivial in PHP to do that. I've done a number of\ndatabase routines with PostgreSQL and PHP and most of 'em end up as less\nthan a page of code (including blank lines).\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 10 Dec 1999 11:06:43 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] question"
},
{
"msg_contents": "Vince Vielhaber wrote:\n \n> On Fri, 10 Dec 1999, Michael Meskes wrote:\n \n> > I'd like to set up a system where every employee can log into an intranet\n> > server and enter the time he/she spend on each of the projects. At the end\n> > of the month I'd like to create a list of time per project from this data.\n \n> It would seem rather trivial in PHP to do that. I've done a number of\n> database routines with PostgreSQL and PHP and most of 'em end up as less\n> than a page of code (including blank lines).\n\nTry onShore TimeSheet, which uses PostgreSQL -- www.onshoretimesheet.org\n\n--\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Fri, 10 Dec 1999 14:11:26 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] question"
},
{
"msg_contents": "On Fri, Dec 10, 1999 at 11:06:43AM -0500, Vince Vielhaber wrote:\n> It would seem rather trivial in PHP to do that. I've done a number of\n> database routines with PostgreSQL and PHP and most of 'em end up as less\n> than a page of code (including blank lines).\n\nThat's the kind of answer I expected. But I still hope I can get along with\nless programming on my part. :-)\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Fri, 10 Dec 1999 20:55:34 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] question"
},
{
"msg_contents": "\nOn 10-Dec-99 Michael Meskes wrote:\n> On Fri, Dec 10, 1999 at 11:06:43AM -0500, Vince Vielhaber wrote:\n>> It would seem rather trivial in PHP to do that. I've done a number of\n>> database routines with PostgreSQL and PHP and most of 'em end up as less\n>> than a page of code (including blank lines).\n> \n> That's the kind of answer I expected. But I still hope I can get along with\n> less programming on my part. :-)\n\nRight, but the reason I suggested doing it that way is something as small\nas this is usually quicker to do it yourself than installing a package.\nAlot of the packages that should be simple (and probably are) end up taking\n3-4 hours to figure out, install and setup and may not fit the bill vs an \nhour or so knocking something out that would be exactly what you want.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Fri, 10 Dec 1999 15:40:39 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] question"
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> Try onShore TimeSheet, which uses PostgreSQL -- www.onshoretimesheet.org\n\nIt's doubly amusing that you suggest this, since the author, like\nMichael himself, is a Debian developer. :-)\n\nMike.\n",
"msg_date": "10 Dec 1999 16:00:08 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] question"
}
] |
[
{
"msg_contents": "Well this is my first post to this list, so be gentle ;-). With it I hope\nwe will be able to do most group amin, without going into psql. It has\nthe following syntax:\n\nusage: pg_group options -- dB group [user ...] || [table ...]\nwhere options is one of:\n -c create group\n -d delete group\n -a add user(s) to group\n -r remove user(s) to group\n +g give group access to tables\n -g revoke group access to tables\n -p privlages to grant/revoke. This is only used with the +g and -g\noptions.\n -- end of switches.\n\n examples:\n pg_group -c -- guestbook grp_gstbook_usr nobody tux\n pg_group -a -- guestbook grp_gstbook_usr webuser\n pg_group -d -- guestbook grp_gstbook_usr\n pg_group -r -- guestbook grp_gstbook_usr nobody\n pg_group +g -p \"insert,select\" -- guestbook grp_gstbook_usr gstbook\n pg_group -g -p \"insert\" -- guestbook grp_gstbook_usr gstbook\n\nI have attched the pg_group script src (it's in tcl), I hope It will make\nit and it was not a bad thing to do so, if not you can get it from:\nhttp://www.lowcountry.com/~jscottb/pg_group.tar.gz\n\nIf you have any questions, comments, patches or total rewrites email me\nat: [email protected]\n\nscott",
"msg_date": "Fri, 10 Dec 1999 09:49:15 -0500 (EST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "First draft of pg_group admin tool."
}
] |
[
{
"msg_contents": "\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Friday, December 10, 1999 3:07 PM\nTo: The Hermit Hacker\nCc: Vince Vielhaber; Bruce Momjian; PostgreSQL-development\nSubject: Re: [HACKERS] 6.6 release \n\nYeah, I was thinking that if we were to call this 7.0 and have plans\nfor going to 8.0 as soon as WAL &etc are done, then we'd basically be\ndropping one level of version number --- no need for a third number\nif major revs are that close together. That's OK with me as long as\nwe all understand that it's a change in naming practices. There are\nthings we'd need to change to make it work. For example, PG_VERSION\nwould need to record only the top version number: 7.0 and 7.1 would be\nexpected to have compatible databases, not incompatible ones.\n\nPM: Actually, JDBC only has room for a single Major/Minor pair in it's\napi, so it could actually help by having differing version numbers\nbetween releases (JDBC wise).\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n",
"msg_date": "Fri, 10 Dec 1999 15:19:33 -0000",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] 6.6 release "
}
] |
[
{
"msg_contents": "\nI'm using the postgresql-6.5.3-1 rpm from PostgreSQLs website on\nredhat 6.0\n\nSome time between yesterday and today postgres developed the habit of\nlisting all may table twice when I do \\d or \\dS from psql. Right now\nthis is only annoying, but does this mean there is a system corruption\nI need to fix?\n\nI've destroyed all my databases other than template1 and vacuumed in\ntemplate1. My next thought is to reinstall, but I'd rather not if I\ndon't have too.\n\n-- \nKarl DeBisschop <[email protected]>\n617.832.0332 (Fax: 617.956.2696)\n\nInformation Please - your source for FREE online reference\nhttp://www.infoplease.com - Your Ultimate Fact Finder\nhttp://kids.infoplease.com - The Great Homework Helper\n\nNetsaint Development\nhttp://netsaintplug.sourceforge.net\n",
"msg_date": "Fri, 10 Dec 1999 12:02:46 -0500",
"msg_from": "Karl DeBisschop <[email protected]>",
"msg_from_op": true,
"msg_subject": "\\d shows all my tables twice"
},
{
"msg_contents": "\nYou have duplicate entries in pg_shadow/pg_user table.\n\n\n> \n> I'm using the postgresql-6.5.3-1 rpm from PostgreSQLs website on\n> redhat 6.0\n> \n> Some time between yesterday and today postgres developed the habit of\n> listing all may table twice when I do \\d or \\dS from psql. Right now\n> this is only annoying, but does this mean there is a system corruption\n> I need to fix?\n> \n> I've destroyed all my databases other than template1 and vacuumed in\n> template1. My next thought is to reinstall, but I'd rather not if I\n> don't have too.\n> \n> -- \n> Karl DeBisschop <[email protected]>\n> 617.832.0332 (Fax: 617.956.2696)\n> \n> Information Please - your source for FREE online reference\n> http://www.infoplease.com - Your Ultimate Fact Finder\n> http://kids.infoplease.com - The Great Homework Helper\n> \n> Netsaint Development\n> http://netsaintplug.sourceforge.net\n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 12:15:13 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] \\d shows all my tables twice"
},
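A quick way to confirm the duplicate-entry diagnosis above is to query the user catalog directly. This is only a sketch against the 6.5-era pg_shadow catalog (run it as the database superuser); nothing beyond the usename column is assumed:

    -- List user names that appear more than once in pg_shadow.
    SELECT usename, count(*)
    FROM pg_shadow
    GROUP BY usename
    HAVING count(*) > 1;

Any name reported here will make psql's \d output repeat each table once per duplicate row, which is exactly the symptom reported in this thread; removing the surplus rows by hand (carefully, as superuser) clears it.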
{
"msg_contents": "\n> From: Bruce Momjian <[email protected]>\n>\n> You have duplicate entries in pg_shadow/pg_user table.\n>\n> > \n> > I'm using the postgresql-6.5.3-1 rpm from PostgreSQLs website on\n> > redhat 6.0\n> > \n> > Some time between yesterday and today postgres developed the habit of\n> > listing all may table twice when I do \\d or \\dS from psql. Right now\n> > this is only annoying, but does this mean there is a system corruption\n> > I need to fix?\n> > \n> > I've destroyed all my databases other than template1 and vacuumed in\n> > template1. My next thought is to reinstall, but I'd rather not if I\n> > don't have too.\n> > \n> > -- \n> > Karl DeBisschop <[email protected]>\n\nDead on right. Duplicates for postgres itself and the two admins.\nThanks.\n\nBy the way, you guys are great. I really appreciate the work you do.\nAnd I wanted to say the the RPM packaging was very well done in my\nopinion - it installed like a dream, and the init script did a perfect\njob of saving me the effort of manually initializing the DBMS. Kudos\nto all involved.\n\n-- \nKarl DeBisschop <[email protected]>\n617.832.0332 (Fax: 617.956.2696)\n\nInformation Please - your source for FREE online reference\nhttp://www.infoplease.com - Your Ultimate Fact Finder\nhttp://kids.infoplease.com - The Great Homework Helper\n\nNetsaint Plugins Development\nhttp://netsaintplug.sourceforge.net\n",
"msg_date": "Fri, 10 Dec 1999 12:25:06 -0500",
"msg_from": "Karl DeBisschop <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] \\d shows all my tables twice"
},
{
"msg_contents": "> Dead on right. Duplicates for postgres itself and the two admins.\n> Thanks.\n> \n> By the way, you guys are great. I really appreciate the work you do.\n> And I wanted to say the the RPM packaging was very well done in my\n> opinion - it installed like a dream, and the init script did a perfect\n> job of saving me the effort of manually initializing the DBMS. Kudos\n> to all involved.\n\nNext release will not allow this problem to happen.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 12:47:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] \\d shows all my tables twice"
},
{
"msg_contents": "I second Karl DeBisschop's comments below. It took me hours to get\npostgresql working on an HP/UX system, compared with minutes on Linux.\nI'm using Linux as our main database server now.\n\nBy the way, I know about using pg_dump to backup the database and I do\nthat. Is there a good way to maintain a second identical copy of the\ndatabase on another machine? Will simply copying the dump over and\nrestoring it with psql do the trick? Would I need to delete an old copy\nof the same database first? We have a somewhat slow Internet connection\nto our Linux system's location and it would be nice to have an alternate\nsite with the same data.\n\n--\nStephen Walton, Professor of Physics and Astronomy,\nCalifornia State University, Northridge\[email protected]\n\nOn Fri, 10 Dec 1999, Karl DeBisschop wrote:\n\n> \n> By the way, you guys are great. I really appreciate the work you do.\n> And I wanted to say the the RPM packaging was very well done in my\n> opinion - it installed like a dream, and the init script did a perfect\n> job of saving me the effort of manually initializing the DBMS. Kudos\n> to all involved.\n\n",
"msg_date": "Fri, 10 Dec 1999 09:52:47 -0800 (PST)",
"msg_from": "Stephen Walton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] \\d shows all my tables twice"
},
{
"msg_contents": "\n> By the way, I know about using pg_dump to backup the database and I do\n> that. Is there a good way to maintain a second identical copy of the\n> database on another machine? Will simply copying the dump over and\n> restoring it with psql do the trick? Would I need to delete an old copy\n> of the same database first? We have a somewhat slow Internet connection\n> to our Linux system's location and it would be nice to have an alternate\n> site with the same data.\n\nWe sometimes do:\n\n pg_dump -o -h <live> <table> | psql -h <mirror> <table>\n\n(Note that you will probably want -z as well if pre-6.5)\n\nThis generally works, but has a habit recreating the views as actual\ntables. Often you can live with this, and there may be a simple way\nto prevent it. I just haven't found one yet.\n\n-- \nKarl DeBisschop <[email protected]>\n617.832.0332 (Fax: 617.956.2696)\n\nInformation Please - your source for FREE online reference\nhttp://www.infoplease.com - Your Ultimate Fact Finder\nhttp://kids.infoplease.com - The Great Homework Helper\n\nNetsaint Plugins Development\nhttp://netsaintplug.sourceforge.net\n",
"msg_date": "Fri, 10 Dec 1999 13:09:18 -0500",
"msg_from": "Karl DeBisschop <[email protected]>",
"msg_from_op": true,
"msg_subject": "Mirroring a DB (was Re: [GENERAL] \\d shows all my tables twice)"
},
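One way to spot-check whether the views survived such a copy as real views (rather than arriving as plain tables, as described above) is to look for their ON SELECT rewrite rules on the mirror. The following is only a rough sketch against the pg_class/pg_rewrite catalogs of that era; the ev_type value is an assumption to verify on your own installation:

    -- Relations on the mirror that carry an ON SELECT (view) rule.
    SELECT c.relname
    FROM pg_class c, pg_rewrite r
    WHERE r.ev_class = c.oid
      AND r.ev_type = '1';  -- assumed to mark the SELECT event

A relation that should be a view but is missing from this list is one of the cases Karl warns about.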
{
"msg_contents": "On 1999-12-10, Karl DeBisschop mentioned:\n\n> pg_dump -o -h <live> <table> | psql -h <mirror> <table>\n> \n> This generally works, but has a habit recreating the views as actual\n> tables. Often you can live with this, and there may be a simple way\n> to prevent it. I just haven't found one yet.\n\nI view *is* a table, with a ON SELECT rule on it. So writing\n\nCREATE TABLE foo ( ... );\nCREATE RULE _RETfoo AS ON SELECT DO INSTEAD SELECT your_stuff_here;\n\nis equivalent to\n\nCREATE VIEW foo AS SELECT your_stuff_here;\n\nPerhaps it would be nicer if the dump contained the second version, but\nyou're not supposed to read these dumps (in case you didn't know :).\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sat, 11 Dec 1999 03:00:12 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mirroring a DB"
},
{
"msg_contents": "\n> From: Peter Eisentraut <[email protected]>\n>\n> On 1999-12-10, Karl DeBisschop mentioned:\n>\n> > pg_dump -o -h <live> <table> | psql -h <mirror> <table>\n> > \n> > This generally works, but has a habit recreating the views as actual\n> > tables. Often you can live with this, and there may be a simple way\n> > to prevent it. I just haven't found one yet.\n>\n> I view *is* a table, with a ON SELECT rule on it. So writing\n>\n> CREATE TABLE foo ( ... );\n> CREATE RULE _RETfoo AS ON SELECT DO INSTEAD SELECT your_stuff_here;\n>\n> is equivalent to\n>\n> CREATE VIEW foo AS SELECT your_stuff_here;\n>\n> Perhaps it would be nicer if the dump contained the second version, but\n> you're not supposed to read these dumps (in case you didn't know :).\n\nI was in fact aware of everything that you mentioned here. The only\npoint I was trying make, albeit not clearly, is that when executing\nthe above pipe, the create rule provided by pg_dump is often ambiguous.\n\nto use a real world example, this is the output from pg_dump for a\nview that we have:\n\nCREATE RULE \"_RETelement_types\" AS ON SELECT TO \"element_types\" DO INSTEAD SELECT \"ref\", \"fcat\", \"ecat\", \"oid\" AS \"ecat_oid\", \"ord\", \"emin\", \"emax\", \"rows\" FROM \"fcat\", \"ecat\" WHERE \"ref\" = \"fcat\";\n\nIn fact, it needs to be modified before it will parse to:\n\nCREATE RULE \"_RETelement_types\" AS ON SELECT TO \"element_types\" DO INSTEAD SELECT \"ref\", fcat.fcat, \"ecat\", ecat.oid AS \"ecat_oid\", \"ord\", \"emin\", \"emax\", \"rows\" FROM \"fcat\", \"ecat\" WHERE fcat.ref = ecat.fcat;\n\nSince the rules come at the end of the pg_dump, the transfer mostly\nworks. But I would not depend on it.\n\nNow I'm not sure if this is a bug, since I think there are choices of\nattribute names that will make the rule parse. But it might be a bug, and\ncertainly the questioner should be aware that there are common\ndatabase structures for which the above command can fail to correctly\ncreate the views.\n\nPlease forgive the sloppiness of my nomenclature if the this was not\nclear before. I had just assumed that this was a known issue, and\nthat a caution was justified.\n\n-- \nKarl DeBisschop <[email protected]>\n617.832.0332 (Fax: 617.956.2696)\n\nInformation Please - your source for FREE online reference\nhttp://www.infoplease.com - Your Ultimate Fact Finder\nhttp://kids.infoplease.com - The Great Homework Helper\n\nNetsaint Plugins Development\nhttp://netsaintplug.sourceforge.net\n",
"msg_date": "Sat, 11 Dec 1999 09:41:41 -0500",
"msg_from": "Karl DeBisschop <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Mirroring a DB"
},
{
"msg_contents": "Could the God of Rules please comment on this? It seems to be a deficiency\nin the get_rule_def (sp?) backend function. Perhaps to play it safe all\nattributes should be fully qualified, but that's probably not as easy as\nit sounds.\n\nOn Sat, 11 Dec 1999, Karl DeBisschop wrote:\n\n> to use a real world example, this is the output from pg_dump for a\n> view that we have:\n> \n> CREATE RULE \"_RETelement_types\" AS ON SELECT TO \"element_types\" DO\n> INSTEAD SELECT \"ref\", \"fcat\", \"ecat\", \"oid\" AS \"ecat_oid\", \"ord\",\n ^^^^^^ ^^^^^\n> \"emin\", \"emax\", \"rows\" FROM \"fcat\", \"ecat\" WHERE \"ref\" = \"fcat\";\n> \n> In fact, it needs to be modified before it will parse to:\n> \n> CREATE RULE \"_RETelement_types\" AS ON SELECT TO \"element_types\" DO\n> INSTEAD SELECT \"ref\", fcat.fcat, \"ecat\", ecat.oid AS \"ecat_oid\",\n ^^^^^^^^^ ^^^^^^^^\n> \"ord\", \"emin\", \"emax\", \"rows\" FROM \"fcat\", \"ecat\" WHERE fcat.ref =\n> ecat.fcat;\n\n[my highlightings]\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sat, 11 Dec 1999 15:51:44 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mirroring a DB"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Could the God of Rules please comment on this? It seems to be a deficiency\n> in the get_rule_def (sp?) backend function. Perhaps to play it safe all\n> attributes should be fully qualified, but that's probably not as easy as\n> it sounds.\n\nI'm not the god of rules, but I have messed with that code. Current\nsources will put table prefixes on every var in a rule if more than one\ntable appears in the rule's rangelist. I think this should be\nsufficient, but it's hard to tell from this incomplete example;\nare you actually complaining about some special case that arises when\na column has the same name as its table?\n\nIt would be nice to see the original view definition (plus enough table\ndefinitions to let us create the rule without guessing).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Dec 1999 13:22:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Mirroring a DB "
},
{
"msg_contents": "\n> I'm not the god of rules, but I have messed with that code. Current\n> sources will put table prefixes on every var in a rule if more than one\n> table appears in the rule's rangelist. I think this should be\n> sufficient, but it's hard to tell from this incomplete example;\n> are you actually complaining about some special case that arises when\n> a column has the same name as its table?\n\nAs far as I can see, the problem has nothing to do with whether the\ntable has the same name as the column. The problem arises when the\ntwo tables each have attributes with the same name. So for instance\nwhen t1 has an attribute (say \"foriegn_oid\") that joins to oid in t2,\nthe rule gets saved as just \"oid\" so when recreated, the parser can't\ndetermine which oid to join to.\n\n> It would be nice to see the original view definition (plus enough table\n> definitions to let us create the rule without guessing).\n\nSorry, I really didn't think this was an unknown issue, otherwise I\nwould have sent in a bug report with such details. I think the stuff\nbelow should cover it. If there's any more info that I can provide,\njust ask.\n\nKarl\n\n==============================================================================\n\ncreate view element_types as select fcat.ref,fcat.fcat,ecat.ecat,ecat.oid as ecat_oid,ecat.ord,ecat.emin,ecat.emax,ecat.rows from fcat,ecat where fcat.ref=ecat.fcat;\n\n------------------------------------------------------------------------------\n\nfeature=> \\d fcat \nTable = fcat\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| ref | int2 not null | 2 |\n| owner | int2 not null | 2 |\n| fcat | text not null | var |\n+----------------------------------+----------------------------------+-------+\nfeature=> \\d ecat\nTable = ecat\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| fcat | int2 not null | 2 |\n| ord | int2 not null | 2 |\n| emin | int2 not null | 2 |\n| emax | int2 | 2 |\n| rows | int2 not null | 2 |\n| ecat | text not null | var |\n+----------------------------------+----------------------------------+-------+\nIndex: zecat_sf\n\n",
"msg_date": "Sat, 11 Dec 1999 16:32:28 -0500",
"msg_from": "Karl DeBisschop <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Mirroring a DB"
},
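For readers without Karl's full schema at hand, the situation can be reproduced with a much smaller, purely hypothetical pair of tables; the only essential ingredient is that the two joined tables share a column name, so an unqualified reference in a dumped rule cannot be resolved:

    -- Hypothetical minimal case: both tables have a column named fcat.
    CREATE TABLE fcat (ref int2 NOT NULL, fcat text NOT NULL);
    CREATE TABLE ecat (fcat int2 NOT NULL, ecat text NOT NULL);

    CREATE VIEW element_types AS
        SELECT fcat.ref, fcat.fcat, ecat.ecat
        FROM fcat, ecat
        WHERE fcat.ref = ecat.fcat;

If the rule behind element_types is later dumped with a bare fcat, reloading it fails exactly as shown above, because the parser cannot tell which relation the name belongs to.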
{
"msg_contents": "\n> I'm not the god of rules, but I have messed with that code. Current\n> sources will put table prefixes on every var in a rule if more than one\n> table appears in the rule's rangelist. I think this should be\n> sufficient, but it's hard to tell from this incomplete example;\n> are you actually complaining about some special case that arises when\n> a column has the same name as its table?\n>\n> It would be nice to see the original view definition (plus enough table\n> definitions to let us create the rule without guessing).\n>\n>\t\t\t regards, tom lane\n\nI also looked back to double check versions. Unbeknownst to me, the\nsource database is 6.5.1 - the destination is 6.5.3\n\nVersion 6.5.3 seem to behave as you said, so I'm guessing that this\nfix occurred relatively recently and I was just unaware it had been\nfixed.\n\nKarl\n\n",
"msg_date": "Sat, 11 Dec 1999 16:45:44 -0500",
"msg_from": "Karl DeBisschop <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Mirroring a DB"
},
{
"msg_contents": "Karl DeBisschop <[email protected]> writes:\n>> Current\n>> sources will put table prefixes on every var in a rule if more than one\n>> table appears in the rule's rangelist. I think this should be\n>> sufficient, but it's hard to tell from this incomplete example;\n\n> Version 6.5.3 seem to behave as you said, so I'm guessing that this\n> fix occurred relatively recently and I was just unaware it had been\n> fixed.\n\nActually, 6.5.3 just unconditionally prefixes all vars in a decompiled\nrule, all the time. That was a quick-patch solution to the type of\nproblem you are complaining of. Current sources (6.6/7.0-to-be) try to\nbe smarter by only prefixing vars when there is possible ambiguity (ie,\nmore than one table in the rangelist). That's why I was concerned about\nthe details of your example --- I was wondering if this \"improvement\"\nmight fail under the right special case...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Dec 1999 01:03:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Mirroring a DB "
}
] |
[
{
"msg_contents": "Well this is my first post to this list, so be gentle ;-). I have\nwritten a small utility I use and wanted to get it out and better\ntested :-)\n\nWith it you should be able to do most group amin, without going into psql.\nIt has the following syntax:\n\nusage: pg_group options -- dB group [user ...] || [table ...]\nwhere options is one of:\n -c create group\n -d delete group\n -a add user(s) to group\n -r remove user(s) to group\n +g give group access to tables\n -g revoke group access to tables\n -p privlages to grant/revoke. This is only used with the +g and -g\n options.\n -- end of switches.\n\n examples:\n pg_group -c -- guestbook grp_gstbook_usr nobody tux\n pg_group -a -- guestbook grp_gstbook_usr webuser\n pg_group -d -- guestbook grp_gstbook_usr\n pg_group -r -- guestbook grp_gstbook_usr nobody\n pg_group +g -p \"insert,select\" -- guestbook grp_gstbook_usr gstbook\n pg_group -g -p \"insert\" -- guestbook grp_gstbook_usr gstbook\n\nyou can get it from: http://www.lowcountry.com/~jscottb/pg_group.tar.gz.\n \nIf you have any questions, comments, patches or total rewrites email me\nat: [email protected]\n\nscott\n\n\n\n",
"msg_date": "Fri, 10 Dec 1999 13:41:24 -0500 (EST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "First draft of pg_group admin tool."
}
] |
[
{
"msg_contents": "Well this is my first post to this list, so be gentle ;-). I have\nwritten a small utility I use and wanted to get it out and better\ntested :-)\n\nWith it you should be able to do most group amin, without going into psql. \nIt has the following syntax:\n\nusage: pg_group options -- dB group [user ...] || [table ...]\nwhere options is one of:\n -c create group\n -d delete group\n -a add user(s) to group\n -r remove user(s) to group\n +g give group access to tables\n -g revoke group access to tables\n -p privlages to grant/revoke. This is only used with the +g and -g\n options.\n -- end of switches.\n \n examples:\n pg_group -c -- guestbook grp_gstbook_usr nobody tux\n pg_group -a -- guestbook grp_gstbook_usr webuser\n pg_group -d -- guestbook grp_gstbook_usr\n pg_group -r -- guestbook grp_gstbook_usr nobody\n pg_group +g -p \"insert,select\" -- guestbook grp_gstbook_usr gstbook\n pg_group -g -p \"insert\" -- guestbook grp_gstbook_usr gstbook\n \nyou can get it from: http://www.lowcountry.com/~jscottb/pg_group.tar.gz\n\nIf you have any questions, comments, patches or total rewrites email me\nat: [email protected]\n \nscott\n\n",
"msg_date": "Fri, 10 Dec 1999 13:52:36 -0500 (EST)",
"msg_from": "Scott Beasley <[email protected]>",
"msg_from_op": true,
"msg_subject": "First draft of pg_group admin tool."
},
{
"msg_contents": "On 1999-12-10, Scott Beasley mentioned:\n\n> Well this is my first post to this list, so be gentle ;-). I have\n> written a small utility I use and wanted to get it out and better\n> tested :-)\n\nSo you might want to send the news to pgsql-general as well.\n\nThe proper fix for this would of course be a CREATE GROUP command, but no\none has been willing to work on that. Until then, more power to you. ;)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sat, 11 Dec 1999 03:00:31 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] First draft of pg_group admin tool."
}
] |
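Until a CREATE GROUP statement exists, a tool like the pg_group script above has to fall back on plain SQL: GRANT/REVOKE ... TO GROUP for the access switches, and direct catalog manipulation for creating the group itself. The sketch below uses the guestbook names from the examples; the pg_group column layout is an assumption (check \d pg_group on your installation), and the INSERT is shown commented out for that reason:

    -- What the +g / -g switches boil down to:
    GRANT INSERT, SELECT ON gstbook TO GROUP grp_gstbook_usr;
    REVOKE INSERT ON gstbook FROM GROUP grp_gstbook_usr;

    -- Creating the group has no dedicated command yet; a tool inserts into
    -- the catalog directly (column names assumed, verify before use):
    -- INSERT INTO pg_group (groname, grosysid, grolist)
    --     VALUES ('grp_gstbook_usr', 201, '{}');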
[
{
"msg_contents": "Peter,\n\n I just noticed that the new psql doesn't handle semicolon\n inside of unmatched parentheses correct any more. This is a\n requirement for defining multi action rules and was properly\n supported by v6.5.* psql.\n\n The CURRENT version submits the query buffer as soon, as it\n encounters the first semicolon outside of a string literal,\n and that is wrong according to the definition of CREATE RULE.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 11 Dec 1999 00:27:54 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Error in new psql"
},
{
"msg_contents": "> Peter,\n> \n> I just noticed that the new psql doesn't handle semicolon\n> inside of unmatched parentheses correct any more. This is a\n> requirement for defining multi action rules and was properly\n> supported by v6.5.* psql.\n> \n> The CURRENT version submits the query buffer as soon, as it\n> encounters the first semicolon outside of a string literal,\n> and that is wrong according to the definition of CREATE RULE.\n\nI assume you mean:\n\n\ttest=> select (;) \n\tERROR: parser: parse error at or near \")\"\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 19:13:53 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Error in new psql"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> > Peter,\n> >\n> > I just noticed that the new psql doesn't handle semicolon\n> > inside of unmatched parentheses correct any more. This is a\n> > requirement for defining multi action rules and was properly\n> > supported by v6.5.* psql.\n> >\n> > The CURRENT version submits the query buffer as soon, as it\n> > encounters the first semicolon outside of a string literal,\n> > and that is wrong according to the definition of CREATE RULE.\n>\n> I assume you mean:\n>\n> test=> select (;)\n> ERROR: parser: parse error at or near \")\"\n\nKinda,\n\n actually I meant\n\n CREATE RULE myrule AS ON DELETE TO mytable DO (\n DELETE FROM myothertab1 WHERE key = old.key;\n DELETE FROM myothertab2 WHERE key = old.key;\n );\n ERROR: parser: parse error at or near \"\"\n\n This is a possible syntax which (IIRC) got released with v6.4\n and is subject to the examples in the rule system\n documentation. The parser still accepts it, so breaking it\n due to changes in psql is an IMHO unacceptable backward\n incompatibility.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 11 Dec 1999 01:41:54 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Error in new psql"
},
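To try Jan's case end to end, the rule needs its tables; the definitions below are hypothetical additions, while the CREATE RULE itself is exactly the multi-action form under discussion. The point is that both embedded semicolons sit inside the parentheses, so psql must not send the still-incomplete buffer to the backend when it sees them:

    CREATE TABLE mytable     (key int4);
    CREATE TABLE myothertab1 (key int4);
    CREATE TABLE myothertab2 (key int4);

    CREATE RULE myrule AS ON DELETE TO mytable DO (
        DELETE FROM myothertab1 WHERE key = old.key;
        DELETE FROM myothertab2 WHERE key = old.key;
    );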
{
"msg_contents": "> > test=> select (;)\n> > ERROR: parser: parse error at or near \")\"\n> \n> Kinda,\n> \n> actually I meant\n> \n> CREATE RULE myrule AS ON DELETE TO mytable DO (\n> DELETE FROM myothertab1 WHERE key = old.key;\n> DELETE FROM myothertab2 WHERE key = old.key;\n> );\n> ERROR: parser: parse error at or near \"\"\n> \n> This is a possible syntax which (IIRC) got released with v6.4\n> and is subject to the examples in the rule system\n> documentation. The parser still accepts it, so breaking it\n> due to changes in psql is an IMHO unacceptable backward\n> incompatibility.\n> \n\nYes, certainly this will be fixed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 19:57:27 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Error in new psql"
},
{
"msg_contents": "> > I assume you mean:\n> >\n> > test=> select (;)\n> > ERROR: parser: parse error at or near \")\"\n> \n> Kinda,\n> \n> actually I meant\n> \n> CREATE RULE myrule AS ON DELETE TO mytable DO (\n> DELETE FROM myothertab1 WHERE key = old.key;\n> DELETE FROM myothertab2 WHERE key = old.key;\n> );\n> ERROR: parser: parse error at or near \"\"\n> \n> This is a possible syntax which (IIRC) got released with v6.4\n> and is subject to the examples in the rule system\n> documentation. The parser still accepts it, so breaking it\n> due to changes in psql is an IMHO unacceptable backward\n> incompatibility.\n\nOK, I fixed it. Just one addition test in an _if_ statement.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 20:01:08 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Error in new psql"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> > actually I meant\n> >\n> > CREATE RULE myrule AS ON DELETE TO mytable DO (\n> > DELETE FROM myothertab1 WHERE key = old.key;\n> > DELETE FROM myothertab2 WHERE key = old.key;\n> > );\n> > ERROR: parser: parse error at or near \"\"\n>\n> OK, I fixed it. Just one addition test in an _if_ statement.\n\n Thank you.\n\n You remember, that it's not the first time multiple action\n rules have been broken? The other one was due to the\n EXCEPT/INTERCEPT patch.\n\n I added a check to the rules regression test after that, to\n ensure it never happens again. Unfortunately, Peter's\n enforcement to use old psql for regression prevented it from\n showing up.\n\n Don't misunderstand this as some whining about it. It is a\n very important issue. It shows that the changes made to psql\n can cause backward incompatibilities by themself.\n\n AFAIK, the proposed procedure to activate the new psql was to\n run the regression test with an old psql, if it's O.K. run it\n again with the new one and replace all expected output files.\n THIS IS INADEQUATE according to the results seen in this\n case.\n\n Don't know if anyone would feel comfortable with it, but at\n least, the postmaster log must be checked to show up exactly\n the same too. The only alternative would be to check every\n old/expected to new/results manually (what's really a whole\n lot of damned stupid work).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 11 Dec 1999 02:16:25 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Error in new psql"
},
{
"msg_contents": "On 1999-12-11, Jan Wieck mentioned:\n\n> I just noticed that the new psql doesn't handle semicolon\n> inside of unmatched parentheses correct any more. This is a\n> requirement for defining multi action rules and was properly\n> supported by v6.5.* psql.\n\nAah, I knew that there must have been a reason for this parentheses\ncounting. Patch attached. Backslash-escaping semicolons works as well, by\nthe way.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden",
"msg_date": "Sat, 11 Dec 1999 03:00:46 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Error in new psql"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> On 1999-12-11, Jan Wieck mentioned:\n> \n> > I just noticed that the new psql doesn't handle semicolon\n> > inside of unmatched parentheses correct any more. This is a\n> > requirement for defining multi action rules and was properly\n> > supported by v6.5.* psql.\n> \n> Aah, I knew that there must have been a reason for this parentheses\n> counting. Patch attached. Backslash-escaping semicolons works as well, by\n> the way.\n\nThis is the same as the patch I did. Thanks.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 21:42:19 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Error in new psql"
},
{
"msg_contents": "On 1999-12-11, Jan Wieck mentioned:\n\n> I added a check to the rules regression test after that, to\n> ensure it never happens again. Unfortunately, Peter's\n> enforcement to use old psql for regression prevented it from\n> showing up.\n\nTo be completely honest, I was just waiting to see what this was good\nfor. As you have seen (or not), it was more or less disabled but still\nthere.\n\nRegarding the regression tests, before any more of this stuff gets thrown\naround, how do you regenerate the output? Easily? Do it now. As far as I'm\nconcerned, psql is finished. Anything else will be bug-fixing.\n\nI'm planning on some sort of beta somewhere around Feb 1st with release on\nFeb 29th (to prove Y2K compliancy). If we don't come up with a name by\nthen, we can always start naming it after Norse gods.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sat, 11 Dec 1999 03:45:14 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Error in new psql"
},
{
"msg_contents": "On 1999-12-10, Bruce Momjian mentioned:\n\n> I assume you mean:\n> \n> \ttest=> select (;) \n> \tERROR: parser: parse error at or near \")\"\n\nThat was actually a different bug, which must have slipped in on the\nlatest update. Please use the attached patch. This overlaps with the one\nsent in a few minutes ago, but I think you'll easily figure out what's\ngoing on. Just a few lines to delete.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sat, 11 Dec 1999 03:45:24 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Error in new psql"
},
{
"msg_contents": "> Don't know if anyone would feel comfortable with it, but at\n> least, the postmaster log must be checked to show up exactly\n> the same too. The only alternative would be to check every\n> old/expected to new/results manually (what's really a whole\n> lot of damned stupid work).\n\nI've done a whole lot of dsw before, and will get to it sometime\nunless someone does it first...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 11 Dec 1999 03:13:47 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Error in new psql"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> On 1999-12-11, Jan Wieck mentioned:\n>\n> > I added a check to the rules regression test after that, to\n> > ensure it never happens again. Unfortunately, Peter's\n> > enforcement to use old psql for regression prevented it from\n> > showing up.\n>\n> To be completely honest, I was just waiting to see what this was good\n> for. As you have seen (or not), it was more or less disabled but still\n> there.\n\n Maybe it sounded the like, but I really did not wanted to\n citicize your work. It was a great job and IMHO a big leap\n forward in user friendliness of psql. I expect all this tab-\n completion and help stuff to be highly appreceated and\n honored. Let me be the first to explicitly say CONGRATS.\n\n What I just wanted to point out is, that such a little,\n subtle change in psql's input preprocessing could distort an\n existing feature. In this case, it's totally clear to me\n that is was only disabled and still there. But I only\n stumbled over it because I tried to create a multi action\n rule by hand to evaluate some comment I was writing on a\n list. Without that, the proposed procedure (I outlined) to\n update expected output would have broken the \"rules\"\n regression test and stamped the broken results into expected.\n So it probably wouldn't have been noticed until after\n release.\n\n And who can guarantee that this kind of flaw cannot happen\n anywhere else? There are many, very old regression tests.\n Some of them go back to the roots, Postgres 4.2, and I'm not\n sure anyone ever looked at the expected results lately, if\n they are really what SHOULD be expected. The tenk data for\n example is something where even I don't know where it was\n coming from, and I already joined the Postgres community with\n release 4.2 back in 1994.\n\n All this IMHO isn't really subject to your personal\n responsibility. The interface of our interactive shell\n needed the now happened polishing for some time. Instead I\n wanted the backend developers to handle this major change in\n psql, which is a core utility of the regression suite, not as\n lax as past changes to it might have been. That's all.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 11 Dec 1999 04:17:36 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "psql & regression (was: Error in new psql)"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> On 1999-12-10, Bruce Momjian mentioned:\n> \n> > I assume you mean:\n> > \n> > \ttest=> select (;) \n> > \tERROR: parser: parse error at or near \")\"\n> \n> That was actually a different bug, which must have slipped in on the\n> latest update. Please use the attached patch. This overlaps with the one\n> sent in a few minutes ago, but I think you'll easily figure out what's\n> going on. Just a few lines to delete.\n\nI don't see any patch attached to this message.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 22:37:08 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Error in new psql"
},
{
"msg_contents": "> And who can guarantee that this kind of flaw cannot happen\n> anywhere else? There are many, very old regression tests.\n> Some of them go back to the roots, Postgres 4.2, and I'm not\n> sure anyone ever looked at the expected results lately, if\n> they are really what SHOULD be expected. The tenk data for\n> example is something where even I don't know where it was\n> coming from, and I already joined the Postgres community with\n> release 4.2 back in 1994.\n\nThomas is the regression man, and has checked the output to see that\nit was expected in the past. I assume he will regenerate it soon.\n\nA good point is that he can use the old psql to see any changes/breakage\nin the backend code, but can _not_ use the new psql to check because the\noutput is different. That is a good point, and I think the one Jan was\nmaking.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Dec 1999 22:40:52 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql & regression (was: Error in new psql)"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> > And who can guarantee that this kind of flaw cannot happen\n> > anywhere else? There are many, very old regression tests.\n> > Some of them go back to the roots, Postgres 4.2, and I'm not\n> > sure anyone ever looked at the expected results lately, if\n> > they are really what SHOULD be expected. The tenk data for\n> > example is something where even I don't know where it was\n> > coming from, and I already joined the Postgres community with\n> > release 4.2 back in 1994.\n>\n> Thomas is the regression man, and has checked the output to see that\n> it was expected in the past. I assume he will regenerate it soon.\n\n Oh yeah, I've seen his response with great pleasure. I did\n not knew that there's really someone taking care for\n breakage->expected glitches.\n\n> A good point is that he can use the old psql to see any changes/breakage\n> in the backend code, but can _not_ use the new psql to check because the\n> output is different. That is a good point, and I think the one Jan was\n> making.\n\n Yes. The verification, if the new expected output is correct,\n needs one or more eyes (and AFAIK Thomas has good ones - he's\n one of a fistful who notice mistakes in my statements even if\n they are between the lines :-)).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 11 Dec 1999 04:58:09 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: psql & regression (was: Error in new psql)"
},
{
"msg_contents": "Posted this a few days ago on pgsql-general and deja, with no response, so\nhoping hackers might help...\n\n Anyone know what this error is or how to prevent it? Seems to\n usually show up on large queries...\n\n \"ExecInitIndexScan: both left and right op's are rel-vars\"\n\n I've seen it before, but can't recall a solution and couldn't find\n one in archives/deja...\n\n Thanks in advance...\n\n Ed\n\n pgsql 6.5.2, redhat 6.0 (2.2.5-15smp).\n\n\n",
"msg_date": "Mon, 13 Dec 1999 12:55:21 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "[HACKERS] \"ExecInitIndexScan: both left and right...\" meaning?"
},
{
"msg_contents": "Ed Loehr <[email protected]> writes:\n> Anyone know what this error is or how to prevent it? Seems to\n> usually show up on large queries...\n> \"ExecInitIndexScan: both left and right op's are rel-vars\"\n\nSounds like you've found a bug. How about a specific example of\na query that causes this?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Dec 1999 17:24:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] \"ExecInitIndexScan: both left and right...\" meaning? "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Ed Loehr <[email protected]> writes:\n> > Anyone know what this error is or how to prevent it? Seems to\n> > usually show up on large queries...\n> > \"ExecInitIndexScan: both left and right op's are rel-vars\"\n>\n> Sounds like you've found a bug. How about a specific example of\n> a query that causes this?\n\nUnfortunately, this is the simplest example I have to offer. The\nfollowing query succeeds numerous times before going into a continuous\nfailure mode due to the error above. Vacuuming the DB fixes the\nproblem temporarily \"for a while\".\n\nSELECT sum( cet.default_budget_per_unit * cahrn.hr_count )\nFROM contract_activity_hr_need cahrn, contract_expense_type cet,\n contract_activity_type_expense_type catet,\n contract_activity_type cat, activity pa\nWHERE -- lame attempt at making this easy on the eye...\n cet.contract_id = 1 AND catet.contract_id = 1 AND\n cahrn.contract_id = 1 AND pa.contract_id = 1 AND\n cat.contract_id = 1 AND cet.expense_unit_id = 6 AND\n pa.activity_state_id <> 5 AND\n pa.activity_state_id <> 4 AND\n (pa.billable = 0 OR cahrn.billable = 0) AND\n catet.expense_type_id = cet.expense_type_id AND\n catet.activity_type_id = cat.activity_type_id AND\n cahrn.contract_activity_type_id = cat.id AND\n pa.activity_type_id = cat.activity_type_id;\n\nWithout including the rather lengthy schema definition for the 5\ntables involved, let me clarify the data types of the example by\nsaying that every single column in the query above is of type INTEGER\nexcept for cet.default_budget_per_unit in the SELECT clause, which is\nof type FLOAT8. Note that all columns above ending in 'XXX_id' are\nforeign keys referencing the 'id' column of the 'XXX' table, which is\ndeclared as type SERIAL. Note also that every table has a couple of\nbook-keeping columns ('creation_time' and 'record_status'). For\nexample, cet.contract_id is an INTEGER value acting as a foreign key\nto the 'contract' table:\n\nCREATE TABLE contract (\n id SERIAL, -- pkey, ref'd as fkey 'contract_id'\n ...\n creation_time DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,\n record_status INTEGER NOT NULL DEFAULT 1\n);\n\nCREATE TABLE contract_expense_type (\n id SERIAL,\n contract_id INTEGER NOT NULL, -- fkey to contract table\n ...\n creation_time DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,\n record_status INTEGER NOT NULL DEFAULT 1\n);\n\nOne might suspect the size of my tuples might be a factor. I believe\nmy **largest** rowsize in any table is 152 bytes, though I'm not sure\nhow VARCHARs are sized (all my varchar values are considerably less\nthan 256 bytes, and rarely are there more than 2 of these in a table).\n\nI think the error comes from line 862 of\n.../src/backend/executor/nodeIndexscan.c, though it's possible it may\nhave come at times from line 925 of the same file (a similar error msg\ndiffering only by an apostrophe).\n\nOther current configuration details:\n\n Pgsql configured with: ./configure --prefix=/usr/local/pgsql\n-with-odbc\n\n PG: PostgreSQL 6.5.2 on i686-pc-linux-gnu, compiled by gcc\negcs-2.91.66\n OS: RH6.1 Linux XXX 2.2.12-20smp #1 SMP Mon Sep 27 10:34:45 EDT\n1999 i686 unknown,\n HW: dual P3 600Mhz w/1Gb RAM and 3 UW 9Gb SCSI drives in software\nRAID.\n SW: Apache 1.3.9 with mod_ssl 2.4.9, mod_perl 1.21, DBI 1.13,\nDBD/Pg 0.92\n\nI've also seen this problem on RH6.0, Pg6.5.2, Linux2.2.12-15,\n512MbRAM, dual450MhzP3, NoRAID, mod_ssl 2.4.5...\n\nAny help would be greatly appreciated. 
I can code around this, of\ncourse, but it'd be nice...\n\nCheers,\nEd Loehr\n\n",
"msg_date": "Wed, 15 Dec 1999 18:14:57 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] \"ExecInitIndexScan: both left and right...\" meaning?"
},
{
"msg_contents": "Ed Loehr <[email protected]> writes:\n>> Sounds like you've found a bug. How about a specific example of\n>> a query that causes this?\n\n> Unfortunately, this is the simplest example I have to offer. The\n> following query succeeds numerous times before going into a continuous\n> failure mode due to the error above. Vacuuming the DB fixes the\n> problem temporarily \"for a while\".\n\nOh my, *that's* interesting. I have no idea what could be causing that.\nThe error message you're getting suggests that the planner is generating\nan incorrect plan tree for the query, which I'd believe soon enough,\nbut I don't understand why the behavior would change over time.\nA VACUUM could change the planner's results by altering the stored\nstatistics for the tables --- but if you're not vacuuming, the plan\nshould be the same every time.\n\nDoes the EXPLAIN output showing the query plan change from when it's\nworking to when it's not? What would really be helpful is to see the\nEXPLAIN VERBOSE output in both states (preferably, the pretty-printed\nversion that gets put in the postmaster log file, not the compressed\nversion that gets sent to the client).\n\nAlso, what indexes do you have on these tables?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Dec 1999 23:03:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] \"ExecInitIndexScan: both left and right...\" meaning? "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Ed Loehr <[email protected]> writes:\n> > ... query succeeds numerous times before going into a continuous\n> > failure mode due to the error above. Vacuuming the DB fixes the\n> > problem \"for a while\".\n>\n> Oh my, *that's* interesting. I have no idea what could be causing that.\n> The error message you're getting suggests that the planner is generating\n> an incorrect plan tree for the query, which I'd believe soon enough,\n> but I don't understand why the behavior would change over time.\n> A VACUUM could change the planner's results by altering the stored\n> statistics for the tables --- but if you're not vacuuming, the plan\n> should be the same every time.\n\nNo intermediate vacuuming is occurring, AFAIK (though I'm trying to figure\nout how to trigger vacuuming on this error). Speculating, does the genetic\nalgorithm twiddle any of the planner's stats? I ask because I know some\nof my other queries involve 6 or more tables, and I seem to recall that was\na trigger point for genetic algorithms to kick in with default settings.\nI am running with defaults.\n\n> Does the EXPLAIN output showing the query plan change from when it's\n> working to when it's not? What would really be helpful is to see the\n> EXPLAIN VERBOSE output in both states (preferably, the pretty-printed\n> version that gets put in the postmaster log file, not the compressed\n> version that gets sent to the client).\n\nI will attempt to capture EXPLAIN output for the problem situation.\n\n> Also, what indexes do you have on these tables?\n\nI have single-column indices on most every foreign key field (ie,\ncontract_id), some unique and some not, and on every primary key field\n(i.e., 'id' in the 'contract' table). I have a few multi-column indices.\nThe only types I use in the entire database are INTEGER, SERIAL, FLOAT8,\nDATETIME, and VARCHAR, and I have indices involving on all of these types\nat one point or another. I also have a few of what I'd call \"overlapping\"\nindices, i.e.,\n\n create table mytable (\n id serial,\n dog_id integer,\n cat_id integer,\n ...\n );\n create index mytable_dog_idx on mytable(dog_id);\n create index mytable_cat_idx on mytable(cat_id);\n create index mytable_dogcat_idx on mytable(dog_id,cat_id);\n\n...thinking these indices would allow the fastest lookups from 3 different\nangles (at the cost of slower inserts, of course). Not sure my intuition\nhere corresponds directly with the technical reality...\n\nYour question also reminds me of a scenario I'd wondered about:\n\n create table mytable (\n id serial,\n ...\n primary key (id)\n );\n create unique index mytable_id on mytable(id);\n\nThe primary key designation implicitly creates a unique index\n('mytable_id_pkey', is it?). What happens if I inadvertently create\nanother unique index on the same field (other than being worthless,\nredundant, and a needless performance hit)? I believe I have this\nsituation in some cases as a result of adding the 'primary key' designation\nlater, and hadn't gotten around to cleaning it up. Does that smell like a\nrat? Any other ideas?\n\nCheers,\nEd Loehr\n\n",
"msg_date": "Thu, 16 Dec 1999 00:35:27 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] \"ExecInitIndexScan: both left and right...\" meaning?"
},
{
"msg_contents": "Ed Loehr <[email protected]> writes:\n> Tom Lane wrote:\n>> Oh my, *that's* interesting. I have no idea what could be causing that.\n\n> Speculating, does the genetic algorithm twiddle any of the planner's\n> stats?\n\nNo, or at least no more than regular planning does. Let's say it's not\n*supposed* to. When dealing with a hard-to-characterize bug, it's wise\nnot to rule anything out...\n\n> I ask because I know some of my other queries involve 6 or\n> more tables, and I seem to recall that was a trigger point for genetic\n> algorithms to kick in with default settings.\n\nI think the default is 11 tables in 6.5.*. At least I get\n\nplay=> show geqo;\nNOTICE: GEQO is ON beginning with 11 relations\nSHOW VARIABLE\n\n> create index mytable_dog_idx on mytable(dog_id);\n> create index mytable_cat_idx on mytable(cat_id);\n> create index mytable_dogcat_idx on mytable(dog_id,cat_id);\n\n> ...thinking these indices would allow the fastest lookups from 3 different\n> angles (at the cost of slower inserts, of course). Not sure my intuition\n> here corresponds directly with the technical reality...\n\nI doubt the 2-column index earns its keep given that you have another\nindex on the front column. A multicolumn index is a pretty specialized\nbeast, so I don't recommend creating one unless you have a very specific\nheavily-used query in mind. (Of course, if you're making a multicol\nUNIQUE index to enforce uniqueness of a multicol primary key, that's\na different matter entirely. But if you're just fishing for performance\nimprovements, you're probably fishing in the wrong place.)\n\n> Your question also reminds me of a scenario I'd wondered about:\n> create table mytable (\n> id serial,\n> ...\n> primary key (id)\n> );\n> create unique index mytable_id on mytable(id);\n\n> The primary key designation implicitly creates a unique index\n> ('mytable_id_pkey', is it?).\n\nYes, I think so.\n\n> What happens if I inadvertently create\n> another unique index on the same field (other than being worthless,\n> redundant, and a needless performance hit)?\n\nAFAIK it should work, but as you say it's a useless performance hit.\n\nIt's barely conceivable that there's a bug lurking in there, since\nit's a very-seldom-exercised case. But having lots of (nonidentical)\nindexes on one table is very well exercised, and it's tough to see\nwhy it would matter if two of them happened to have identical\nparameters.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Dec 1999 02:04:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] \"ExecInitIndexScan: both left and right...\" meaning? "
},
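A rough way to see whether a table has picked up such a redundant index on a 6.5-era system is to list everything attached to it through pg_class and pg_index, much as psql's \d does; the table name below is hypothetical:

    -- Names of all indexes defined on 'mytable'.
    SELECT i.relname AS index_name
    FROM pg_class t, pg_class i, pg_index x
    WHERE t.relname = 'mytable'
      AND x.indrelid = t.oid
      AND x.indexrelid = i.oid;

If the implicitly created primary-key index shows up next to a hand-made unique index on the same column, dropping the hand-made one (DROP INDEX mytable_id) removes the duplicate work at insert time without losing the uniqueness guarantee.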
{
"msg_contents": "Tom Lane wrote:\n> \n> Ed Loehr <[email protected]> writes:\n> > create index mytable_dog_idx on mytable(dog_id);\n> > create index mytable_cat_idx on mytable(cat_id);\n> > create index mytable_dogcat_idx on mytable(dog_id,cat_id);\n> \n> > ...thinking these indices would allow the fastest lookups from 3 different\n> > angles (at the cost of slower inserts, of course). Not sure my intuition\n> > here corresponds directly with the technical reality...\n> \n> I doubt the 2-column index earns its keep given that you have another\n> index on the front column. A multicolumn index is a pretty specialized\n> beast, so I don't recommend creating one unless you have a very specific\n> heavily-used query in mind. (Of course, if you're making a multicol\n> UNIQUE index to enforce uniqueness of a multicol primary key, that's\n> a different matter entirely. But if you're just fishing for performance\n> improvements, you're probably fishing in the wrong place.)\n\nActually I think that the first (dog_id) is worthless in this situation as\n(dog_id,cat_id) can be used instead of it.\n\nI vaguely remember that Hiroshi posted a patch some time ago that fixed \nthe plan to use more then only the first column of multi-column index \nif possible. \n\nThe first column of a multi-column index has always been used afaik.\n\n------------------------\nHannu\n",
"msg_date": "Thu, 16 Dec 1999 12:01:56 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] \"ExecInitIndexScan: both left and right...\" meaning?"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Does the EXPLAIN output showing the query plan change from when it's\n> working to when it's not? What would really be helpful is to see the\n> EXPLAIN VERBOSE output in both states (preferably, the pretty-printed\n> version that gets put in the postmaster log file, not the compressed\n> version that gets sent to the client).\n\nYes, the query plan changes between working state and non-working state.\nVaccum triggers the change. Other things may also, I'm not sure yet. Here\nare the failing and successful query plans, respectively...\n\nQUERY PLAN: (failed due to ExecInitIndexScan left/right rel op error)\n\nAggregate (cost=10.05 rows=1 width=48)\n -> Nested Loop (cost=10.05 rows=1 width=48)\n -> Nested Loop (cost=8.05 rows=1 width=36)\n -> Nested Loop (cost=6.05 rows=1 width=24)\n -> Nested Loop (cost=4.05 rows=1 width=16)\n -> Index Scan using activity_cid on activity pa (cost=2.05 rows=1 width=8)\n -> Index Scan using contract_activity_type_pkey on contract_activity_type cat (cost=2.00 rows=2 width=8)\n -> Index Scan using contract_activity_type_exp_pkey on contract_activity_type_expense_ catet (cost=2.00 rows=2 width=8)\n -> Index Scan using contract_expense_type_pkey on contract_expense_type cet (cost=2.00 rows=1 width=12)\n -> Index Scan using contract_activity_hr_need_pkey on contract_activity_hr_need cahrn (cost=2.00 rows=2 width=12)\n\nVACUUM\n\nQUERY PLAN: (successful query after vacuuming)\n\nAggregate (cost=9.58 rows=1 width=48)\n -> Nested Loop (cost=9.58 rows=1 width=48)\n -> Nested Loop (cost=7.58 rows=1 width=36)\n -> Nested Loop (cost=5.53 rows=1 width=28)\n -> Nested Loop (cost=3.53 rows=1 width=16)\n -> Seq Scan on contract_activity_type cat (cost=1.53 rows=1 width=8)\n -> Index Scan using contract_activity_type_exp_pkey on contract_activity_type_expense_ catet (cost=2.00 rows=2 width=8)\n -> Index Scan using contract_expense_type_pkey on contract_expense_type cet (cost=2.00 rows=1 width=12)\n -> Index Scan using activity_cid on activity pa (cost=2.05 rows=1 width=8)\n -> Index Scan using contract_activity_hr_need_pkey on contract_activity_hr_need cahrn (cost=2.00 rows=2 width=12)\n\nOther ideas?\n\nCheers,\nEd Loehr\n\n",
"msg_date": "Fri, 17 Dec 1999 16:44:28 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] \"ExecInitIndexScan: both left and right...\" meaning?"
},
{
"msg_contents": "Ed Loehr <[email protected]> writes:\n> Yes, the query plan changes between working state and non-working state.\n> Vaccum triggers the change. Other things may also, I'm not sure yet. Here\n> are the failing and successful query plans, respectively...\n\nMmmm ... I suspected it had something to do with indexscan on the inner\nside of a nestloop (the optimizer has some strange hacks for that).\nLooks like I was right. Could I trouble you for the EXPLAIN VERBOSE\noutput, rather than just EXPLAIN? (Preferably, the pretty-printed form\nthat gets dumped into the postmaster log, not the unreadable form that\npsql shows.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Dec 1999 18:11:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] \"ExecInitIndexScan: both left and right...\" meaning? "
}
] |
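For anyone reproducing this report, collecting what Tom asks for amounts to running the statement under EXPLAIN VERBOSE both before and after the state change; on 6.5 the indented, pretty-printed plan tree goes to the postmaster's log rather than to psql, so the postmaster output needs to be captured to a file. A minimal hypothetical walk-through (substitute the real query and table):

    EXPLAIN VERBOSE SELECT count(*) FROM contract_expense_type;  -- while the error occurs
    VACUUM ANALYZE contract_expense_type;                        -- the step that flips the plan
    EXPLAIN VERBOSE SELECT count(*) FROM contract_expense_type;  -- the working plan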
[
{
"msg_contents": "It's rude, it's crude, but it's finally here!\n\nThe mods for the logging subsystem have been posted to pgsql-patches\nfor your amusement and edification.\n\nOr whatever.\n\nAt the moment it does nothing but start itself up, channel a message\nevery time a backend starts, and leave some slop hanging around the\nsystem when you shutdown, but I hope that its potential will shine through anyway.\n\n regards,\n\n Tim Holloway\n",
"msg_date": "Fri, 10 Dec 1999 21:32:49 -0500",
"msg_from": "Tim Holloway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Industrial-Strength Logging"
}
] |
[
{
"msg_contents": "\n I thought about the huge size variable text type a little\n more. And I think I could get the following implementation\n to work reliable for our upcoming release.\n\n For any relation, having one or more LONG data type\n attributes, another relation (named pg_<something>) is\n created, accessible only to superusers (and internal access\n routines). All LONG data items are stored as a reference\n into that relation, split up automatically so the chunks fit\n into the installation specific tuple limit size. Items are\n added/updated/removed totally transparent.\n\n It would not be indexable (jesus no!) and using it in a WHERE\n clause will be expensive. But who ever uses a WHERE on a not\n indexable (possibly containing megabytes per item) data type\n is a silly fool who should get what he wanted, poor response\n times.\n\n I'd like to name it LONG, like Oracle's 2G max. data type.\n Even if I intend to restrict the data size to some megabytes\n for now. All the data must still be processable in memory,\n and there might be multiple instances of one item in memory\n at the same time. So a real 2G datatype is impossible with\n this kind of approach. But isn't a 64MB #define'd limit\n enough for now? This would possibly still blow away many\n installations due to limited memory and/or swap space. And we\n can adjust that #define in 2001 (an address space odyssey),\n when 64bit hardware and plenty of GB real memory is the low\n end standard *1).\n\n I already thought that the 8K default BLKSIZE is a little out\n of date for today's hardware standards. Two weeks ago I\n bought a PC for my kids. It's a 433MHz Celeron, 64MB ram, 6GB\n disk - costs about $500 (exactly DM 999,-- at Media Markt).\n With the actual on disk cache <-> memory and cache <->\n surface transfer rates, the 8K size seems a little archaic to\n me.\n\n Thus, if we can get a LONG data type in 7.0, and maybe adjust\n the default BLKSIZE to something more up to date, wouldn't\n the long tuple item get away silently?\n\n Should I go ahead on this or not?\n\n\nJan\n\n*1) Or will it be TB/PB?\n\n I fear to estimate, because it's only a short time ago, that\n a 4G hard disk was high-end. Today, IBM offers a 3.5'' disk\n with 72G formatted capacity and 64M is the lowest end of real\n memory, so where's the limit?\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 11 Dec 1999 06:33:06 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "LONG"
},
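The splitting step Jan describes above — breaking one LONG value into chunks that fit the installation's tuple size limit, keyed by (rowid, rowattno, chunk_seq) — might look roughly like the sketch below. It is not code from the PostgreSQL source; LONG_CHUNK_SIZE, store_long_chunk() and the other names are assumptions made purely for illustration.

    /*
     * Rough sketch of splitting one LONG value into chunks, as described
     * in the proposal above.  All names here are hypothetical.
     */
    #define LONG_CHUNK_SIZE 7000    /* assumed: comfortably below the 8K tuple limit */

    typedef unsigned int Oid;       /* stand-in for the backend's Oid */

    /* assumed helper: inserts one (rowid, rowattno, chunk_seq, chunk) row */
    extern void store_long_chunk(Oid rowid, int rowattno, int chunk_seq,
                                 const char *chunk, int chunklen);

    static void
    store_long_value(Oid rowid, int rowattno, const char *data, int datasize)
    {
        int offset = 0;
        int seq = 0;

        while (offset < datasize)
        {
            int len = datasize - offset;

            if (len > LONG_CHUNK_SIZE)
                len = LONG_CHUNK_SIZE;

            /* each chunk becomes its own row in the long-value relation */
            store_long_chunk(rowid, rowattno, seq++, data + offset, len);
            offset += len;
        }
    }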
{
"msg_contents": "> \n> I thought about the huge size variable text type a little\n> more. And I think I could get the following implementation\n> to work reliable for our upcoming release.\n> \n> For any relation, having one or more LONG data type\n> attributes, another relation (named pg_<something>) is\n> created, accessible only to superusers (and internal access\n> routines). All LONG data items are stored as a reference\n> into that relation, split up automatically so the chunks fit\n> into the installation specific tuple limit size. Items are\n> added/updated/removed totally transparent.\n\nShould we use large objects for this, and beef them up. Seems that\nwould be a good way. I have considered putting them in a hash\nbucket/directory tree for faster access to lots of large objects.\n\nThere is a lot to say about storing long tuples outside the tables\nbecause long tuples fill cache buffers and make short fields longer to\naccess.\n\n> \n> It would not be indexable (jesus no!) and using it in a WHERE\n> clause will be expensive. But who ever uses a WHERE on a not\n> indexable (possibly containing megabytes per item) data type\n> is a silly fool who should get what he wanted, poor response\n> times.\n\nGood restriction.\n\n> I'd like to name it LONG, like Oracle's 2G max. data type.\n> Even if I intend to restrict the data size to some megabytes\n> for now. All the data must still be processable in memory,\n> and there might be multiple instances of one item in memory\n> at the same time. So a real 2G datatype is impossible with\n> this kind of approach. But isn't a 64MB #define'd limit\n> enough for now? This would possibly still blow away many\n> installations due to limited memory and/or swap space. And we\n> can adjust that #define in 2001 (an address space odyssey),\n> when 64bit hardware and plenty of GB real memory is the low\n> end standard *1).\n> \n> I already thought that the 8K default BLKSIZE is a little out\n> of date for today's hardware standards. Two weeks ago I\n> bought a PC for my kids. It's a 433MHz Celeron, 64MB ram, 6GB\n> disk - costs about $500 (exactly DM 999,-- at Media Markt).\n> With the actual on disk cache <-> memory and cache <->\n> surface transfer rates, the 8K size seems a little archaic to\n> me.\n\nWe use 8K blocks because that is the base size for most file systems. \nWhen we fsync an 8k buffer, the assumption is that that buffer is\nwritten in a single write to the disk. Larger buffers would be spread\nover the disk, making a single fsync() impossible to be atomic, I think.\n\nAlso, larger buffers take more cache space per buffer, makeing the\nbuffer cache more corse holding fewer buffers.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 11 Dec 1999 01:03:05 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> Should we use large objects for this, and beef them up. Seems that\n> would be a good way. I have considered putting them in a hash\n> bucket/directory tree for faster access to lots of large objects.\n>\n> There is a lot to say about storing long tuples outside the tables\n> because long tuples fill cache buffers and make short fields longer to\n> access.\n\n I thought to use a regular table. Of course, it will eat\n buffers, but managing external files or even large objects\n for it IMHO isn't that simple, if you take transaction\n commit/abort and MVCC problematic into account too. And IMHO\n this is something that must be covered, because I meant to\n create a DATATYPE that can be used as a replacement for TEXT\n if that's too small, so it must behave as a regular datatype,\n without any restrictions WRT beeing able to rollback etc.\n\n Using LO or external files would need much more testing, than\n creating one other shadow table (plus an index for it) at\n CREATE TABLE. This table would automatically have all the\n concurrency, MVCC and visibility stuff stable. And it would\n automatically split into multiple files if growing very\n large, be vacuumed, ...\n\n Let me do it this way for 7.0, and then lets collect some\n feedback and own experience with it. For 8.0 we can discuss\n again, if doing it the hard way would be worth the efford.\n\n> We use 8K blocks because that is the base size for most file systems.\n> When we fsync an 8k buffer, the assumption is that that buffer is\n> written in a single write to the disk. Larger buffers would be spread\n> over the disk, making a single fsync() impossible to be atomic, I think.\n>\n> Also, larger buffers take more cache space per buffer, makeing the\n> buffer cache more corse holding fewer buffers.\n\n Maybe something to play with a little.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 11 Dec 1999 13:48:19 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "At 01:48 PM 12/11/99 +0100, Jan Wieck wrote:\n\n> I thought to use a regular table. Of course, it will eat\n> buffers, but managing external files or even large objects\n> for it IMHO isn't that simple, if you take transaction\n> commit/abort and MVCC problematic into account too. And IMHO\n> this is something that must be covered, because I meant to\n> create a DATATYPE that can be used as a replacement for TEXT\n> if that's too small, so it must behave as a regular datatype,\n> without any restrictions WRT beeing able to rollback etc.\n\nYes, please, this is what (some of, at least) the world wants.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sat, 11 Dec 1999 07:09:28 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> > Should we use large objects for this, and beef them up. Seems that\n> > would be a good way. I have considered putting them in a hash\n> > bucket/directory tree for faster access to lots of large objects.\n> >\n> > There is a lot to say about storing long tuples outside the tables\n> > because long tuples fill cache buffers and make short fields longer to\n> > access.\n> \n> I thought to use a regular table. Of course, it will eat\n> buffers, but managing external files or even large objects\n> for it IMHO isn't that simple, if you take transaction\n> commit/abort and MVCC problematic into account too. And IMHO\n> this is something that must be covered, because I meant to\n> create a DATATYPE that can be used as a replacement for TEXT\n> if that's too small, so it must behave as a regular datatype,\n> without any restrictions WRT beeing able to rollback etc.\n\n\nOK, I have thought about your idea, and I like it very much. In fact,\nit borders on genius.\n\nOur/my original idea was to chain tuple in the main table. That has\nsome disadvantages:\n\n\tMore complex tuple handling of chained tuples\n\tRequires more tuple storage overhead for housekeeping of chaining data\n\tSequential scan of table has to read those large fields\n\tVacuum has to keep the tuples chained as they are moved\n\t\nYour system would be:\n\n\tCREATE TABLE pg_long (\n\t\trefoid\tOID,\n\t\tattno\tint2,\n\t\tline\tint4,\n\t\tattdata\tVARCHAR(8000);\n\n\tCREATE INDEX pg_long_idx ON pg_long (refoid, attno, line);\n\nYou keep the long data out of the table. When updating the tuple, you\nmark the pg_long tuples as superceeded with the transaction id, and just\nkeep going. No need to do anything special. Vacuum will remove\nsuperceeded tuples automatically while processing pg_long if the\ntransaction was committed.\n\nThe pg_long_idx index will allow rapid access to tuple long data.\n\nThis approach seems better than tuple chaining because it uses our\nexisting code more efficiently. You keep long data out of the main\ntable, and allow use of existing tools to access the long data. \n\nIn fact, you may decide to just extent varchar() and text to allow use\nof long tuples. Set the varlena VARLEN field to some special value like\n-1, and when you see that, you go to pg_long to get the data. Seems\nvery easy. You could get fancy and keep data in the table in most\ncases, but if the tuple length exceeds 8k, go to all the varlena fields\nand start moving data into pg_long. That way, a table with three 4k\ncolumns could be stored without the user even knowing pg_long is\ninvolved, but for shorter tuples, they are stored in the main table.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 11 Dec 1999 10:20:50 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
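Bruce's 12-byte in-place reference (-1|oid|attno) could be pictured roughly as the struct below. This is only a sketch of the idea as stated at this point in the thread, not backend code; the struct name, field types and padding are assumptions, and later messages add more fields to the reference.

    /*
     * Illustration of the proposed in-tuple reference: a varlena whose
     * length word holds a special marker, followed by the owning row's
     * oid and the attribute number.  Purely a sketch of the idea above.
     */
    typedef unsigned int Oid;
    typedef short int16;
    typedef int int32;

    #define LONG_REF_MARKER (-1)    /* proposed special VARLEN value */

    typedef struct LongRef
    {
        int32 vl_len;               /* set to LONG_REF_MARKER */
        Oid   rowoid;               /* oid of the row owning the long value */
        int16 attno;                /* attribute number within that row */
    } LongRef;                      /* about 12 bytes with typical padding */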
{
"msg_contents": "> I thought to use a regular table. Of course, it will eat\n> buffers, but managing external files or even large objects\n> for it IMHO isn't that simple, if you take transaction\n> commit/abort and MVCC problematic into account too. And IMHO\n> this is something that must be covered, because I meant to\n> create a DATATYPE that can be used as a replacement for TEXT\n> if that's too small, so it must behave as a regular datatype,\n> without any restrictions WRT beeing able to rollback etc.\n\nIn fact, you could get fancy and allow an update of a non-pg_long using\ncolumn to not change pg_long at all. Just keep the same value in the\ncolumn. If the transaction fails or succeeds, the pg_long is the same\nfor that tuple. Of course, because an update is a delete and then an\ninsert, that may be hard to do. For very long fields, it would be a win\nfor UPDATE. You certainly couldn't do that with chained tuples.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 11 Dec 1999 10:38:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "On Sat, 11 Dec 1999, Bruce Momjian wrote:\n\n> In fact, you could get fancy and allow an update of a non-pg_long using\n> column to not change pg_long at all. Just keep the same value in the\n> column. If the transaction fails or succeeds, the pg_long is the same\n> for that tuple. Of course, because an update is a delete and then an\n> insert, that may be hard to do. For very long fields, it would be a win\n> for UPDATE. You certainly couldn't do that with chained tuples.\n\nWhile this is great and all, what will happen when long tuples finally get\ndone? Will you remove this, or keep it, or just make LONG and TEXT\nequivalent? I fear that elaborate structures will be put in place here\nthat might perhaps only be of use for one release cycle.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sat, 11 Dec 1999 17:14:40 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "I wrote:\n\n> Bruce Momjian wrote:\n>\n> > Should we use large objects for this, and beef them up. Seems that\n> > would be a good way. I have considered putting them in a hash\n> > bucket/directory tree for faster access to lots of large objects.\n> >\n> > There is a lot to say about storing long tuples outside the tables\n> > because long tuples fill cache buffers and make short fields longer to\n> > access.\n>\n> I thought to use a regular table. Of course, it will eat\n> buffers ...\n\n When looking at my actual implementation concept, I'm not\n sure if it will win or loose compared against text itself!\n Amazing, but I think it could win already on relatively small\n text sizes (1-2K is IMHO small compared to what this type\n could store).\n\n Well, the implementation details. I really would like some\n little comments to verify it's really complete before\n starting.\n\n - A new field \"rellongrelid\" type Oid is added to pg_class.\n It contains the Oid of the long-value relation or the\n invalid Oid for those who have no LONG attributes.\n\n - At CREATE TABLE, a long value relation named\n \"_LONG<tablename>\" is created for those tables who need it.\n And of course dropped and truncated appropriate. The schema\n of this table is\n\n rowid Oid, -- oid of our main data row\n rowattno int2, -- the attribute number in main data\n chunk_seq int4, -- the part number of this data chunk\n chunk text -- the content of this data chunk\n\n There is a unique index defined on (rowid, rowattno).\n\n - The new data type is of variable size with the following\n header:\n\n typedef struct LongData {\n int32 varsize;\n int32 datasize;\n Oid longrelid;\n Oid rowid;\n int16 rowattno;\n } LongData;\n\n The types input function is very simple. Allocate\n sizeof(LongData) + strlen(input), set varsize to it,\n datasize to strlen(input), and the rest to invalid and 0.\n Then copy the input after the struct.\n\n The types output function determines on the longrelid, what\n to do. If it's invalid, just output the bytes stored after\n the struct (it must be a datum that resulted from an input\n operation. If longrelid isn't invalid, it does an index\n scan on that relation, fetching all tuples that match rowid\n and attno. Since it knows the datasize, it doesn't need\n them in the correct order, it can put them at the right\n places into the allocated return buffer by their chunk_seq.\n\n - For now (until we have enough experience to judge) I think\n it would be better to forbid ALTER TABLE when LONG\n attributes are involved. Sure, must be implemented\n finally, but IMHO not on the first evaluation attempt.\n\nNow how the data goes in and out of the longrel.\n\n - On heap_insert(), we look for non NULL LONG attributes in\n the tuple. If there could be any can simply be seen by\n looking at the rellongrelid in rd_rel. We fetch the value\n either from the memory after LongData or by using the type\n output function (for fetching it from the relation where it\n is!). Then we simply break it up into single chunks and\n store them with our tuples information. Now we need to do\n something tricky - to shrink the main data tuple size, we\n form a new heap tuple with the datums of the original one.\n But we replace all LongData items we stored by faked ones,\n where the varsize is sizeof(LongData) and all the other\n information is setup appropriate. 
We append that faked\n tuple instead, copy the resulting information into the\n original tuples header and throw it away.\n\n This is a point, where I'm not totally sure. Could it\n possibly be better or required to copy the entire faked\n tuple over the one we should have stored? It could never\n need more space, so that wouldn't be a problem.\n\n - On heap_replace(), we check all LONG attributes if they are\n NULL of if the information in longrelid, rowid and rowattno\n doesn't match our rellongrelid, tupleid, and attno. In that\n case this attribute might have an old content in the\n longrel, which we need to delete first.\n\n The rest of the operation is exactly like for\n heap_insert(), except all the attributes information did\n match - then it's our own OLD value that wasn't changed. So\n we can simply skip it - the existing data is still valid.\n\n - heap_delete() is so simple that I don't explain it.\n\n Now I hear you asking \"how could this overhead be a win?\" :-)\n\n That's easy to explain. As long as you don't use a LONG\n column in the WHERE clause, when will the data be fetched? At\n the time it's finally clear that it's needed. That's when a\n result tuple is sent to the client (type output) or when a\n tuple resulting from INSERT ... SELECT should be stored.\n\n Thus, all the tuples moving around in the execution tree,\n getting joined together, abused by sorts and aggregates and\n filtered out again, allways contain the small LongData\n struct, not the data itself. Wheren't there recently reports\n about too expansive sorts due to their huge size?\n\n Another bonus would be this: What happens on an UPDATE to a\n table having LONG attributes? If the attribute is not\n modified, the OLD LongData will be found in the targetlist,\n and we'll not waste any space by storing the same information\n again. IIRC that one was one of the biggest concerns about\n storing huge data in tuples, but it disappeared without\n leaving a trace - funny eh?\n\n It is so simple, that I fear I made some mistake somewhere.\n But where?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 11 Dec 1999 17:21:28 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Last thoughts about LONG"
},
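The output-side reassembly described above — an index scan on (rowid, rowattno), with each chunk copied to the offset implied by its chunk_seq — might look roughly like this sketch. fetch_next_chunk(), the LongChunk record and the fixed chunk size are hypothetical placeholders; the real function would use the backend's index access routines and memory allocator instead.

    /*
     * Sketch of rebuilding a LONG value from its chunks.  Chunks may be
     * returned in any order; each is copied to the position implied by
     * its chunk_seq.  fetch_next_chunk() stands in for the index scan
     * on the long-value relation described above.
     */
    #include <stdlib.h>
    #include <string.h>

    #define LONG_CHUNK_SIZE 7000    /* assumed fixed chunk payload size */

    typedef unsigned int Oid;       /* stand-in for the backend's Oid */

    typedef struct LongChunk
    {
        int   chunk_seq;            /* part number of this data chunk */
        int   length;               /* bytes stored in this chunk */
        char *data;                 /* chunk contents */
    } LongChunk;

    /* assumed helper: next chunk matching (rowid, rowattno), NULL when done */
    extern LongChunk *fetch_next_chunk(Oid rowid, int rowattno);

    static char *
    long_reassemble(Oid rowid, int rowattno, int datasize)
    {
        char      *result = malloc(datasize + 1);
        LongChunk *chunk;

        if (result == NULL)
            return NULL;

        while ((chunk = fetch_next_chunk(rowid, rowattno)) != NULL)
            memcpy(result + (size_t) chunk->chunk_seq * LONG_CHUNK_SIZE,
                   chunk->data, chunk->length);

        result[datasize] = '\0';
        return result;
    }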
{
"msg_contents": "Bruce Momjian wrote:\n\n> In fact, you may decide to just extent varchar() and text to allow use\n> of long tuples. Set the varlena VARLEN field to some special value like\n> -1, and when you see that, you go to pg_long to get the data. Seems\n> very easy. You could get fancy and keep data in the table in most\n> cases, but if the tuple length exceeds 8k, go to all the varlena fields\n> and start moving data into pg_long. That way, a table with three 4k\n> columns could be stored without the user even knowing pg_long is\n> involved, but for shorter tuples, they are stored in the main table.\n\n So you realized most of my explanations yourself while I\n wrote the last mail. :-)\n\n No, I don't intend to change anything on the existing data\n types. Where should be the limit on which to decide to store\n a datum in pg_long? Based on the datums size? On the tuple\n size and attribute order, take one by one until the tuple\n became small enough to fit?\n\n Maybe we make this mechanism so general that it is\n automatically applied to ALL varsize attributes? We'll end up\n with on big pg_long where 90+% of the databases content will\n be stored.\n\n But as soon as an attribute stored there is used in a WHERE\n or is subject to be joined, you'll see why not (as said, this\n type will NOT be enabled for indexing). The operation will\n probably fallback to a seq-scan on the main table and then\n the attribute must be fetched from pg_long with an index scan\n on every single compare etc. - no, no, no.\n\n And it will not be one single pg_long table. Instead it will\n be a separate table per table, that contains one or more LONG\n attributes. IIRC, the TRUNCATE functionality was implemented\n exactly to QUICKLY be able to whipe out the data from huge\n relations AND get the disk space back. In the case of a\n central pg_long, TRUNCATE would have to scan pg_long to mark\n the tuples for deletion and vacuum must be run to really get\n back the space. And a vacuum on this central pg_long would\n probably take longer than the old DELETE, VACUUM of the now\n truncated table itself. Again no, no, no.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 11 Dec 1999 17:45:53 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "On Sat, 11 Dec 1999, Jan Wieck wrote:\n\n> Well, the implementation details. I really would like some\n> little comments to verify it's really complete before\n> starting.\n\nBefore I start the nagging, please be aware that I'm not as smart as I\nthink I am. Long datatypes of some sort are clearly necessary -- more\npower to you.\n\n> - A new field \"rellongrelid\" type Oid is added to pg_class.\n> It contains the Oid of the long-value relation or the\n> invalid Oid for those who have no LONG attributes.\n\nI have a mixed feeling about all these \"sparse\" fields everywhere. Doing\nit completely formally, this seems to be a one-to-many relation, so you\nshould put the referencing field into the pg_long table or whatever\nstructure you use, pointing the other way around. This is probably slower,\nbut it's cleaner. As I mentioned earlier, this whole arrangement will\n(hopefully) not be needed for all too long, and then we wouldn't want to\nbe stuck with it.\n\n> - At CREATE TABLE, a long value relation named\n> \"_LONG<tablename>\" is created for those tables who need it.\n\nPlease don't forget, this would require changes to pg_dump and psql. Also,\nthe COPY command might not be able to get away without changes, either.\n\nIn general, it wouldn't surprise me if some sections of the code would go\nnuts about the news of tuples longer than BLCKSZ coming along. (Where\n\"nuts\" is either 'truncation' or 'segfault'.)\n\nI guess what I'm really saying is that I'd be totally in awe of you if you\ncould get all of this (and RI) done by Feb 1st. Good luck.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sat, 11 Dec 1999 17:55:24 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Last thoughts about LONG"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> On Sat, 11 Dec 1999, Bruce Momjian wrote:\n>\n> > In fact, you could get fancy and allow an update of a non-pg_long using\n> > column to not change pg_long at all. Just keep the same value in the\n> > column. If the transaction fails or succeeds, the pg_long is the same\n> > for that tuple. Of course, because an update is a delete and then an\n> > insert, that may be hard to do. For very long fields, it would be a win\n> > for UPDATE. You certainly couldn't do that with chained tuples.\n>\n> While this is great and all, what will happen when long tuples finally get\n> done? Will you remove this, or keep it, or just make LONG and TEXT\n> equivalent? I fear that elaborate structures will be put in place here\n> that might perhaps only be of use for one release cycle.\n\n With the actual design explained, I don't think we aren't\n that much in need for long tuples any more, that we should\n introduce all the problems of chaninig tuples into the\n vacuum, bufmgr, heapam, hio etc. etc. code.\n\n The rare cases, where someone really needs larger tuples and\n not beeing able to use the proposed LONG data type can be\n tackled by increasing BLKSIZE for this specific installation.\n\n Isn't there a FAQ entry about \"tuple size too big\" pointing\n to BLKSIZE? Haven't checked, but if it is, could that be the\n reason why we get lesser request on this item?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 11 Dec 1999 18:04:04 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": ">> I thought about the huge size variable text type a little\n>> more. And I think I could get the following implementation\n>> to work reliable for our upcoming release.\n>> \n>> For any relation, having one or more LONG data type\n>> attributes, another relation (named pg_<something>) is\n>> created, accessible only to superusers (and internal access\n>> routines). All LONG data items are stored as a reference\n>> into that relation, split up automatically so the chunks fit\n>> into the installation specific tuple limit size. Items are\n>> added/updated/removed totally transparent.\n\n> Should we use large objects for this, and beef them up. Seems that\n> would be a good way.\n\nYes, I think what Jan is describing *is* a large object, with the\nslight change that he wants to put multiple objects into the same\nbehind-the-scenes relation. (That'd be a good change for regular\nlarge objects as well ... it'd cut down the umpteen-thousand-files\nproblem.)\n\nThe two principal tricky areas would be (1) synchronization ---\nhaving one hidden relation per primary relation might solve the\nproblems there, but I'm not sure about it; and (2) VACUUM.\n\nBut I don't really see why this would be either easier to do or\nmore reliable than storing multiple segments of a tuple in the\nprimary relation itself. And I don't much care for\ninstitutionalizing a hack like a special \"LONG\" datatype.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Dec 1999 13:13:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG "
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> Before I start the nagging, please be aware that I'm not as smart as I\n> think I am. Long datatypes of some sort are clearly necessary -- more\n> power to you.\n\n So be it. It forces me to think it over again and points to\n sections, I might have forgotten so far. Also, it happend\n more than one time to me, that writing a totally OBVIOUS\n answer triggerd a better solution in my brain (dunno what's\n wrong with that brain, but sometimes it needs to be shaken\n well before use). Thus, any of your notes can help, and that\n counts!\n\n>\n> > - A new field \"rellongrelid\" type Oid is added to pg_class.\n> > It contains the Oid of the long-value relation or the\n> > invalid Oid for those who have no LONG attributes.\n>\n> I have a mixed feeling about all these \"sparse\" fields everywhere. Doing\n> it completely formally, this seems to be a one-to-many relation, so you\n> should put the referencing field into the pg_long table or whatever\n> structure you use, pointing the other way around. This is probably slower,\n> but it's cleaner. As I mentioned earlier, this whole arrangement will\n> (hopefully) not be needed for all too long, and then we wouldn't want to\n> be stuck with it.\n\n It's 4 bytes per RELATION in pg_class. As a side effect, the\n information will be available at NO COST immediately after\n heap_open() and in every place, where a relation is accessed.\n So it is the best place to put it.\n\n>\n> > - At CREATE TABLE, a long value relation named\n> > \"_LONG<tablename>\" is created for those tables who need it.\n>\n> Please don't forget, this would require changes to pg_dump and psql. Also,\n> the COPY command might not be able to get away without changes, either.\n\n Oh yes, thanks. That was a point I forgot!\n\n Psql must not list tables that begin with \"_LONG\" on the \\d\n request. Anything else should IMHO be transparent.\n\n Pg_dump either uses a SELECT to build a script that INSERT's\n the data via SQL, or uses COPY. In the SELECT/INSERT case, my\n implementation would again be totally transparent and not\n noticed by pg_dump, only that it must IGNORE \"_LONG*\"\n relations and be aware that really big tuples can be sent,\n but that's more a libpq question I think (what I already\n checked because the view/rule/PL combo I created to\n demonstrate a >128K tuple was done through psql). AFAIK,\n pg_dump doesn't use a binary COPY, and looking at the code\n tells me that this is transparent too (due to use of type\n specific input/output function there).\n\n All pg_dump would have to do is to ignore \"_LONG*\" relations\n too.\n\n The real problem is COPY. In the case of a COPY BINARY it\n outputs the data portion of the fetched tuples directly. But\n these will only contain the LongData headers, not the data\n itself.\n\n So at that point, COPY has to do the reverse process of\n heap_insert(). Rebuild a faked tuple where all the not NULL\n LONG values are placed in the representation, they would have\n after type input. Not a big deal, must only be done with the\n same care as the changes in heapam not to leave unfreed,\n leaked memory around.\n\n> In general, it wouldn't surprise me if some sections of the code would go\n> nuts about the news of tuples longer than BLCKSZ coming along. (Where\n> \"nuts\" is either 'truncation' or 'segfault'.)\n\n The place, where the size of a heap tuple only is checked\n (and where the \"tuple size too big\" message is coming from)\n is in hio.c, right before it is copied into the block. 
Up to\n then, a tuple is NOT explicitly limited to any size.\n\n So I would be glad to see crashes coming up from this change\n (not after release - during BETA of course). It would help us\n to get another existing bug out of the code.\n\n> I guess what I'm really saying is that I'd be totally in awe of you if you\n> could get all of this (and RI) done by Feb 1st. Good luck.\n\n Thank's for the flowers, but \"awe\" is far too much - sorry.\n\n During the years I had my hands on nearly every part of the\n code involved in this. So I'm not a newbe in creating data\n types, utility commands or doing syscat changes. The LONG\n type I described will be the work of two or three nights.\n\n I already intended to tackle the long tuples next. Missing\n was the idea how to AVOID it simply. And I had this idea just\n while answering a question about storing big text files in\n the database in the [SQL] list - that woke me up.\n\n In contrast to the RI stuff, this time I don't expect any\n bugs, because there are absolutely no side effects I noticed\n so far. On the RI stuff, we discussed for weeks (if not\n months) about tuple visibility during concurrent transactions\n and I finally ran into exactly these problems anyway.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 11 Dec 1999 19:29:59 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Last thoughts about LONG"
},
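The COPY-side expansion Jan mentions — rebuilding a tuple whose LONG attributes carry the full values instead of the small LongData references — might look roughly like the loop below. The Datum stand-in, islong_reference() and expand_long() are hypothetical; the only point is that each non-NULL LONG datum gets swapped for its expanded form before the tuple is written out.

    /*
     * Sketch of expanding LONG references before COPY BINARY output,
     * as described above.  Types and helpers are illustrative only.
     */
    #include <stdbool.h>

    typedef unsigned int Oid;
    typedef char *Datum;            /* simplified stand-in for the backend's Datum */

    /* assumed helpers */
    extern bool  islong_reference(Datum value);   /* longrelid valid in the header? */
    extern Datum expand_long(Datum value);        /* fetch and rebuild the full value */

    static void
    expand_long_attributes(Datum *values, bool *isnull, int natts,
                           const bool *att_is_long)
    {
        int i;

        for (i = 0; i < natts; i++)
        {
            /* reverse of what heap_insert() did: put the real value back */
            if (att_is_long[i] && !isnull[i] && islong_reference(values[i]))
                values[i] = expand_long(values[i]);
        }
    }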
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> The rare cases, where someone really needs larger tuples and\n> not beeing able to use the proposed LONG data type can be\n> tackled by increasing BLKSIZE for this specific installation.\n\nThis would be a more convincing argument if we supported BLCKSZ\ngreater than 32K, but we don't.\n\nI think we've speculated about having a compilation flag that gets\nthrown to change page offsets from shorts to longs, thereby allowing\nlarger page sizes. But as Bruce was just pointing out, all of the\ncode depends in a fundamental way on the assumption that writing a\npage is an atomic action. The larger the page size, the more likely\nthat you'll see broken tables caused by partial page writes. So\nallowing BLCKSZ large enough to accomodate any tuple wouldn't be a\nvery good answer.\n\nI think the proposed LONG type is a hack, and I'd rather see us solve\nthe problem correctly. ISTM that allowing a tuple to be divided into\n\"primary\" and \"continuation\" tuples, all stored in the same relation\nfile, would be a much more general answer and not significantly harder\nto implement than a LONG datatype as Jan is describing it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Dec 1999 13:32:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG "
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> Another bonus would be this: What happens on an UPDATE to a\n> table having LONG attributes? If the attribute is not\n> modified, the OLD LongData will be found in the targetlist,\n> and we'll not waste any space by storing the same information\n> again.\n\nWon't work. If you do that, you have several generations of the\n\"primary\" tuple pointing at the same item in the \"secondary\" table.\nThere is room in the multiple primary tuples to keep track of their\ncommitted/uncommitted status, but there won't be enough room to\nkeep track in the secondary table.\n\nI think this can only work if there are exactly as many generations\nof the LONG chunks in the secondary table as there are of the primary\ntuple in the main table, and all of them have the same transaction\nidentification info stored in them as the corresponding copies of\nthe primary tuple have.\n\nAmong other things, this means that an update or delete *must* scan\nthrough the tuple, find all the LONG fields, and go over to the\nsecondary table to mark all the LONG chunks as deleted by the current\nxact, just the same as the primary tuple gets marked. This puts a\nconsiderable crimp in your claim that it'd be more efficient than\na multiple-tuple-segment approach.\n\nOf course, this could be worked around if the secondary table did *not*\nuse standard access methods (it could be more like an index, and rely on\nthe primary table for all xact status info). But that makes it look\neven less like a clean data-type-based solution...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Dec 1999 13:52:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Last thoughts about LONG "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I guess what I'm really saying is that I'd be totally in awe of you if you\n> could get all of this (and RI) done by Feb 1st. Good luck.\n\nWhen Jan said this was for 7.0, I assumed he meant the release *after*\nthe Feb 1st one ... whatever it ends up being called. I don't believe\nit's possible or reasonable to get this done by Feb 1, either.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Dec 1999 13:55:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Last thoughts about LONG "
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> Peter Eisentraut wrote:\n> \n> > Please don't forget, this would require changes to pg_dump and psql. Also,\n> > the COPY command might not be able to get away without changes, either.\n> \n> Oh yes, thanks. That was a point I forgot!\n> \n> Psql must not list tables that begin with \"_LONG\" on the \\d\n> request. Anything else should IMHO be transparent.\n> \n\nIf this is the main concern then start them with \"pg_L_\" and they will be \nignored by the current implementation as well.\n\nBut of corse they will surface ad \\dS , which may or may not be a good thing\nas it makes it possible to list them without changing psql.\n\n-----------\nHannu\n",
"msg_date": "Sat, 11 Dec 1999 21:22:29 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Last thoughts about LONG"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> - At CREATE TABLE, a long value relation named\n> \"_LONG<tablename>\" is created for those tables who need it.\n> And of course dropped and truncated appropriate. The schema\n> of this table is\n> \n> rowid Oid, -- oid of our main data row\n> rowattno int2, -- the attribute number in main data\n> chunk_seq int4, -- the part number of this data chunk\n> chunk text -- the content of this data chunk\n> \n> There is a unique index defined on (rowid, rowattno).\n>\n\nIf you plan to use the same LONGs for multiple versions you will probably \nneed a refcount int4 too\n\n--------------------\nHannu\n",
"msg_date": "Sat, 11 Dec 1999 21:29:15 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Last thoughts about LONG"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \n> But I don't really see why this would be either easier to do or\n> more reliable than storing multiple segments of a tuple in the\n> primary relation itself. And I don't much care for\n> institutionalizing a hack like a special \"LONG\" datatype.\n\nAFAIK the \"hack\" is similar to what Oracle does.\n\nAt least this is my impression from some descriptions, and it also \nseems reasonable thing to do in general as we dont want to read in\n500K tuples (and then sort them) just to join on int fields and filter\nout on boolean and count(n) < 3.\n\nThe description referred above is about Oracle's habit to return LONG* \nfields as open file descriptions ready for reading when doing FETCH 1 \nand as already read-in \"strings\" when fetching more than 1 tuple.\n\n--------------------\nHannu\n",
"msg_date": "Sat, 11 Dec 1999 21:39:05 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> I think the proposed LONG type is a hack, and I'd rather see us solve\n> the problem correctly. ISTM that allowing a tuple to be divided into\n> \"primary\" and \"continuation\" tuples, all stored in the same relation\n> file, would be a much more general answer and not significantly harder\n> to implement than a LONG datatype as Jan is describing it.\n\nActually they seem to be two _different_ problems - \n\n1) we may need bigger tuples for several reasons (I would also suggest \nmaking index tuples twice as long as data tuples to escape the problem \nof indexing text fields above 4K (2K?)\n\n2) the LOB support should be advanced to a state where one could reasonably \nuse them for storing more than a few LOBs without making everything else to \ncrawl, even on filesystems that don't use indexes on filenames (like ext2)\n\nAfter achieving 2) support could be added for on-demand migrating of LONG \ntypes to LOBs\n\nI guess that Jans suggestion is just a quick hack for avoiding fixing LOBs.\n\n-----------------------\nHannu\n",
"msg_date": "Sat, 11 Dec 1999 21:48:34 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "Tom Lane wrote:\n\n> [email protected] (Jan Wieck) writes:\n> > Another bonus would be this: What happens on an UPDATE to a\n> > table having LONG attributes? If the attribute is not\n> > modified, the OLD LongData will be found in the targetlist,\n> > and we'll not waste any space by storing the same information\n> > again.\n>\n> Won't work. If you do that, you have several generations of the\n> \"primary\" tuple pointing at the same item in the \"secondary\" table.\n> There is room in the multiple primary tuples to keep track of their\n> committed/uncommitted status, but there won't be enough room to\n> keep track in the secondary table.\n\n A really critical point, to think about in depth. And another\n point I could have stumbled over.\n\n But it would work anyway.\n\n I assumed up to now, that even under MVCC, and even if\n reading dirty, there could be at max one single transaction\n modifying one and the same tuple - no? Ignore all the rest\n and forget all my comments if your answer is no. But please\n tell me how something like RI should ever work RELIABLE in\n such an environment. In fact, in that case I would\n immediately stop all my efford in FOREIGN KEY, because it\n would be a dead end street - so I assume your answer is yes.\n\n My concept, using regular heap access inside of heap access\n to act on \"secondary\" table, means to stamp the same current\n xact as for \"primary\" table into xmax of old, and into xmin\n of new tuples for the \"secondary\" table. And it means that\n this operation appears to be atomic if living in a locking\n environment.\n\n The only thing I DON'T wanted to do is to stamp xmax and\n create new instances in \"secondary\" table, if no update is\n done to the value of the old LONG attribute. Any UPDATE\n modifying the LONG value, and INSERT/DELETE of course will\n stamp this information and/or create new instances. So the\n only thing (because the only difference) to worry about are\n unstamped and uncreated instances in \"secondary\" table -\n right?\n\n Since INSERT/DELETE allways act synchronous to the \"primary\"\n table, and and UPDATE modifying the LONG too, the only thing\n left to worry about is an UPDATE without updating the LONG.\n\n In this scenario, a \"secondary\" tuple of a not updated\n \"primary\" LONG will have an older, but surely committed,\n xmin. And it's xmax will be either infinite, or aborted. So\n it is visible - no other chance. And that's good, because at\n the time beeing, the updater of the \"primary\" tuple does a\n NOOP on the \"secondary\". And this (extended) part of the\n \"primaries\" tuple information is absolutely unaffected,\n regardless if it's transaction will commit or rollback.\n\n Well, your concern is again valid. This concept MIGHT need to\n force a NON-MVCC locking scheme for \"secondary\" tables. But\n as far as I learned from the RI stuff, that isn't a problem\n and therefore current Jackpot value to be added to Vadim's\n account.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 11 Dec 1999 22:02:43 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Last thoughts about LONG"
},
{
"msg_contents": "> On Sat, 11 Dec 1999, Bruce Momjian wrote:\n> \n> > In fact, you could get fancy and allow an update of a non-pg_long using\n> > column to not change pg_long at all. Just keep the same value in the\n> > column. If the transaction fails or succeeds, the pg_long is the same\n> > for that tuple. Of course, because an update is a delete and then an\n> > insert, that may be hard to do. For very long fields, it would be a win\n> > for UPDATE. You certainly couldn't do that with chained tuples.\n> \n> While this is great and all, what will happen when long tuples finally get\n> done? Will you remove this, or keep it, or just make LONG and TEXT\n> equivalent? I fear that elaborate structures will be put in place here\n> that might perhaps only be of use for one release cycle.\n\nI think the idea is that Jan's idea is better than chaining tuples.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 11 Dec 1999 16:24:08 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> > While this is great and all, what will happen when long tuples finally get\n> > done? Will you remove this, or keep it, or just make LONG and TEXT\n> > equivalent? I fear that elaborate structures will be put in place here\n> > that might perhaps only be of use for one release cycle.\n>\n> I think the idea is that Jan's idea is better than chaining tuples.\n\n Just as Tom already pointed out, it cannot completely replace\n tuple chaining because of the atomicy assumption of single\n fsync(2) operation in current code. Due to this, we cannot\n get around the cases LONG will leave open by simply raising\n BLKSIZE, we instead need to tackle that anyways.\n\n But I believe LONG would still be something worth the efford.\n It will lower the pressure on chained tuples, giving us more\n time to build a really good solution, and I think LONG can\n survive tuple chaining and live in coexistance with it. As\n said in my last mail, I still believe that not touching LONG\n values at UPDATE can avoid storing the same huge value again.\n And that's a benefit, tuple chaining will never give us.\n\n Remember: If your only tool is a hammer, anything MUST look\n like a nail. So why not provide a richer set of tools?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 11 Dec 1999 23:36:22 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "Hannu Krosing wrote:\n\n> Tom Lane wrote:\n> >\n> >\n> > But I don't really see why this would be either easier to do or\n> > more reliable than storing multiple segments of a tuple in the\n> > primary relation itself. And I don't much care for\n> > institutionalizing a hack like a special \"LONG\" datatype.\n>\n> AFAIK the \"hack\" is similar to what Oracle does.\n>\n> At least this is my impression from some descriptions, and it also\n> seems reasonable thing to do in general as we dont want to read in\n> 500K tuples (and then sort them) just to join on int fields and filter\n> out on boolean and count(n) < 3.\n\n Even if this is a side effect I haven't seen at the\n beginning, it would be one of the best side effect's I've\n ever seen. A really tempting one that's worth to try it\n anyway.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 12 Dec 1999 00:05:37 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> > > While this is great and all, what will happen when long tuples finally get\n> > > done? Will you remove this, or keep it, or just make LONG and TEXT\n> > > equivalent? I fear that elaborate structures will be put in place here\n> > > that might perhaps only be of use for one release cycle.\n> >\n> > I think the idea is that Jan's idea is better than chaining tuples.\n> \n> Just as Tom already pointed out, it cannot completely replace\n> tuple chaining because of the atomicy assumption of single\n> fsync(2) operation in current code. Due to this, we cannot\n> get around the cases LONG will leave open by simply raising\n> BLKSIZE, we instead need to tackle that anyways.\n\nActually, in looking at the fsync() system call, it does write the\nentire file descriptor before marking the transaction as complete, so\nthere is no hard reason not to raise it, but because the OS has to do\ntwo reads to get 16k, I think we are better keeping 8k as our base block\nsize.\n\nJan's idea is not to chain tuples, but to keep tuples at 8k, and instead\nchain out individual fields into 8k tuple chunks, as needed. This seems\nlike it makes much more sense. It uses the database to recreate the\nchains.\n\nLet me mention a few things. First, I would like to avoid a LONG data\ntype if possible. Seems a new data type is just going to make things\nmore confusing for users.\n\nMy ideas is a much more limited one than Jan's. It is to have a special\n-1 varlena length when the data is chained on the long relation. I\nwould do:\n\n\n\t-1|oid|attno\n\nin 12 bytes. That way, you can pass this around as long as you want,\nand just expand it in the varlena textout and compare routines when you\nneed the value. That prevents the tuples from changing size while being\nprocessed. As far as I remember, there is no need to see the data in\nthe tuple except in the type comparison/output routines.\n\nNow it would be nice if we could set the varlena length to 12, it's\nactual length, and then just somehow know that the varlena of 12 was a\nlong data entry. Our current varlena has a maximum length of 64k. I\nwonder if we should grab a high bit of that to trigger long. I think we\nmay be able to do that, and just do a AND mask to remove the bit to see\nthe length. We don't need the high bit because our varlena's can't be\nover 32k. We can modify VARSIZE to strip it off, and make another\nmacro like ISLONG to check for that high bit.\n\nSeems this could be done with little code.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 11 Dec 1999 18:25:12 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
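The high-bit trick Bruce suggests would amount to something like the macros below. These are not the backend's actual definitions — the flag bit, the VARSIZE_RAW spelling and the ISLONG name are assumptions made to illustrate the suggestion; as Jan notes further down, existing user-defined types that interpret vl_len directly would have to be checked before stealing a bit from it.

    /*
     * Sketch of the "grab a high bit of the varlena length word" idea
     * discussed above.  Illustrative only, not the real definitions.
     */
    typedef int int32;

    struct varlena
    {
        int32 vl_len;
        char  vl_dat[1];
    };

    #define VARLONG_FLAG      0x40000000          /* assumed flag bit */

    #define VARSIZE_RAW(PTR)  (((struct varlena *) (PTR))->vl_len)
    #define ISLONG(PTR)       ((VARSIZE_RAW(PTR) & VARLONG_FLAG) != 0)
    #define VARSIZE(PTR)      (VARSIZE_RAW(PTR) & ~VARLONG_FLAG)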
{
"msg_contents": "> > At least this is my impression from some descriptions, and it also\n> > seems reasonable thing to do in general as we dont want to read in\n> > 500K tuples (and then sort them) just to join on int fields and filter\n> > out on boolean and count(n) < 3.\n> \n> Even if this is a side effect I haven't seen at the\n> beginning, it would be one of the best side effect's I've\n> ever seen. A really tempting one that's worth to try it\n> anyway.\n> \n\nOr make struct varlena vl_len a 15-bit field, and make islong a 1-bit\nfield. I don't remember if using & manually or bit fields is faster.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 11 Dec 1999 18:28:07 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "> Maybe we make this mechanism so general that it is\n> automatically applied to ALL varsize attributes? We'll end up\n> with on big pg_long where 90+% of the databases content will\n> be stored.\n\nIf most joins, comparisons are done on the 10% in the main table, so\nmuch the better.\n\n> \n> But as soon as an attribute stored there is used in a WHERE\n> or is subject to be joined, you'll see why not (as said, this\n> type will NOT be enabled for indexing). The operation will\n> probably fallback to a seq-scan on the main table and then\n> the attribute must be fetched from pg_long with an index scan\n> on every single compare etc. - no, no, no.\n\nLet's fact it. Most long tuples are store/retrieve, not ordered on or\nused in WHERE clauses. Moving them out of the main table speeds up\nthings. It also prevents expansion of rows that never end up in the\nresult set.\n\nIn your system, a sequential scan of the table will pull in all this\nstuff because you are going to expand the tuple. That could be very\ncostly. In my system, the expansion only happens on output if they LONG\nfield does not appear in the WHERE or ORDER BY clauses.\n\nAlso, my idea was to auto-enable longs for all varlena types, so short\nvalues stay in the table, while longer chained ones that take up lots of\nspace and are expensive to expand are retrieved only when needed.\n\nI see this as much better than chained tuples.\n\n\n> \n> And it will not be one single pg_long table. Instead it will\n> be a separate table per table, that contains one or more LONG\n> attributes. IIRC, the TRUNCATE functionality was implemented\n> exactly to QUICKLY be able to whipe out the data from huge\n> relations AND get the disk space back. In the case of a\n> central pg_long, TRUNCATE would have to scan pg_long to mark\n> the tuples for deletion and vacuum must be run to really get\n> back the space. And a vacuum on this central pg_long would\n> probably take longer than the old DELETE, VACUUM of the now\n> truncated table itself. Again no, no, no.\n> \n\nI guess a separate pg_long_ per table would be good.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 11 Dec 1999 18:54:58 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "> Maybe we make this mechanism so general that it is\n> automatically applied to ALL varsize attributes? We'll end up\n> with on big pg_long where 90+% of the databases content will\n> be stored.\n> \n> But as soon as an attribute stored there is used in a WHERE\n> or is subject to be joined, you'll see why not (as said, this\n> type will NOT be enabled for indexing). The operation will\n> probably fallback to a seq-scan on the main table and then\n> the attribute must be fetched from pg_long with an index scan\n> on every single compare etc. - no, no, no.\n\nA field value over 8k is not going to be something you join on,\nrestrict, or order by in most cases. It is going to be some long\nnarrative or field that is just for output to the user, usually not used\nto process the query.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 11 Dec 1999 19:01:49 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "Bruce Momjian wrote (in several messages):\n\n> Actually, in looking at the fsync() system call, it does write the\n> entire file descriptor before marking the transaction as complete, so\n> there is no hard reason not to raise it, but because the OS has to do\n> two reads to get 16k, I think we are better keeping 8k as our base block\n> size.\n\n Agreed. Let's stay with the 8K default.\n\n> -1|oid|attno\n\n Actually I think you need two more informations to move it\n around independently. As you agreed somewhere else (on my\n TRUNCATE issue), it would be better to keep the long values\n in a per table expansion relation. Thus, you need the Oid of\n that too at least. Also, it would be good to know the size of\n the data before fetching it, so you need that to.\n\n But that's not the important issue, there's also an (IMHO\n dangerous) assumption on it, see below.\n\n> Now it would be nice if we could set the varlena length to 12, it's\n> actual length, and then just somehow know that the varlena of 12 was a\n> long data entry. Our current varlena has a maximum length of 64k.\n>\n> Or make struct varlena vl_len a 15-bit field, and make islong a 1-bit\n> field. I don't remember if using & manually or bit fields is faster.\n\n I don't see vl_len as a 15-bit field. In the current sources\n (in postgres.h), it is an int32. And I'm sure that not any\n code is aware that some magic bit's in it contain a special\n meaning. At least the types I added recently (numeric and\n lztext) aren't. Nor am I sure, a variable length Datum is\n never duplicated somewhere, just by using the information\n from vl_len, with or without using the macro. Thus we would\n have to visit alot of code to make sure this new variable\n length Datum can be passed around as you like.\n\n And the IMHO most counting drawback is, that existing user\n type definitions treat the first 32 bits in a variable length\n data type just as I interpreted the meaning up to now. So we\n could occationally break more than we are aware of.\n\n> In your system, a sequential scan of the table will pull in all this\n> stuff because you are going to expand the tuple. That could be very\n> costly. In my system, the expansion only happens on output if they LONG\n> field does not appear in the WHERE or ORDER BY clauses.\n\nIn my system, it would do exactly as in your's, because they are mostly the\nsame. The modification done to the tuple in heap_insert() and heap_replace(),\njust before the call to RelationPutHeapTupleAtEnd(), makes each\nLONG Datum of varsize 20. Just that the first 32 bits don't contain any\nmagic information.\n\n> > Maybe we make this mechanism so general that it is\n> > automatically applied to ALL varsize attributes? We'll end up\n> > with on big pg_long where 90+% of the databases content will\n> > be stored.\n>\n> If most joins, comparisons are done on the 10% in the main table, so\n> much the better.\n\n Yes, but how would you want to judge which varsize value to\n put onto the \"secondary\" relation, and which one to keep in\n the \"primary\" table for fast comparisions?\n\n I think you forgot one little detail. In our model, you can\n only move around the Datum's extended information around as\n is. It will never be expanded in place, so it must be fetched\n (index scan) again at any place, the value itself is\n required.\n\n The installed base currently uses varsize attributes with\n indices on them to condition, sort and group on them. 
Now\n pushing such a field into \"secondary\" occationally will cause\n a substantial loss of performance.\n\n So again, how do you determine which of the attributes is a\n candidate to push into \"secondary\"? It is a such generic\n approach, that I cannot imagine any fail safe method.\n\n I'd better like to have another LONG data type, that enables\n me to store huge string into but where I exactly know what I\n can't do with, than having some automatic detection process\n that I cannot force to do what I want. It happened just to\n often to me, that these \"user friendly better knowing what I\n might want\" systems got me by the ball's. I'm a real\n programmer, so there's allway a way out for me, but what\n shoud a real user do?\n\n> Let's fact it. Most long tuples are store/retrieve, not ordered on or\n> used in WHERE clauses. Moving them out of the main table speeds up\n> things. It also prevents expansion of rows that never end up in the\n> result set.\n\n Having a tuple consisting of 30+ attributes, where 20 of them\n are varsize ones (CHAR, VARCHAR, NUMERIC etc.), what makes it\n a long tuple? Yes, I'm repeating this question once again,\n because we're talking about a \"one must fit all cases\" here.\n\n> stuff because you are going to expand the tuple. That could be very\n> costly. In my system, the expansion only happens on output if they LONG\n> field does not appear in the WHERE or ORDER BY clauses.\n\n No I won't. As explained, I would return a tuple as is, just\n with the LONG reference information. It will only, but then\n allways again, be expanded if needed to compare, store again\n or beeing output to the client. This \"allways again\" is one\n of my drawbacks against your \"treating all varsize pushable\"\n concept. In one of my early projects, I had to manage a\n microVax for a year, and I love systems that can be fine\n tuned since then, really! Auto detection is a nice feature,\n but if that failes and you don't have any override option,\n you're hosed.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 12 Dec 1999 02:33:08 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Jesus, what have I done (was: LONG)"
},
{
"msg_contents": "OK, I think I can take your ideas and polish this into a killer feature,\nso I will keep going on this discussion.\n\n\n> Bruce Momjian wrote (in several messages):\n> \n> > Actually, in looking at the fsync() system call, it does write the\n> > entire file descriptor before marking the transaction as complete, so\n> > there is no hard reason not to raise it, but because the OS has to do\n> > two reads to get 16k, I think we are better keeping 8k as our base block\n> > size.\n> \n> Agreed. Let's stay with the 8K default.\n\nOK. I am worried about performance problems with increasing this for\nnon-large tuples. That is why I was liking to keep 8k. We are never\ngoing to be able to configure 8MB tuples, so I figured 8k was good\nenough.\n\n\n> \n> > -1|oid|attno\n> \n> Actually I think you need two more informations to move it\n> around independently. As you agreed somewhere else (on my\n> TRUNCATE issue), it would be better to keep the long values\n> in a per table expansion relation. Thus, you need the Oid of\n> that too at least. Also, it would be good to know the size of\n> the data before fetching it, so you need that to.\n\nYes, I see your point that you don't know the relation oid in those adt\nroutintes. Yes, you would need the oid too. New structure would be:\n\n\t1-bit long flag|31-bit length|long relid|tuple oid|attno\n\n\n> > Now it would be nice if we could set the varlena length to 12, it's\n> > actual length, and then just somehow know that the varlena of 12 was a\n> > long data entry. Our current varlena has a maximum length of 64k.\n> >\n> > Or make struct varlena vl_len a 15-bit field, and make islong a 1-bit\n> > field. I don't remember if using & manually or bit fields is faster.\n> \n> I don't see vl_len as a 15-bit field. In the current sources\n> (in postgres.h), it is an int32. And I'm sure that not any\n\nSorry, 32-bit field. I thought 16-bit because there is no need for\nvalues >8k for length. Seems we have >16 unused bits in the length.\n\n\n> code is aware that some magic bit's in it contain a special\n> meaning. At least the types I added recently (numeric and\n> lztext) aren't. Nor am I sure, a variable length Datum is\n> never duplicated somewhere, just by using the information\n> from vl_len, with or without using the macro. Thus we would\n> have to visit alot of code to make sure this new variable\n> length Datum can be passed around as you like.\n\nI just checked vl_len is used only in varlena.c inv_api.c and in the\nVARSIZE define. I make sure of that several releases ago, so they all\nuse the macro.\n\n\n> \n> And the IMHO most counting drawback is, that existing user\n> type definitions treat the first 32 bits in a variable length\n> data type just as I interpreted the meaning up to now. So we\n> could occationally break more than we are aware of.\n\nOK, the solution is that we never pass back this type with the long bit\nset. We always expand it on return to user applications.\n\nWe can restrict type expansion to only certain data types. Not all\nvarlena types have to be expanded.\n\n> \n> > In your system, a sequential scan of the table will pull in all this\n> > stuff because you are going to expand the tuple. That could be very\n> > costly. In my system, the expansion only happens on output if they LONG\n> > field does not appear in the WHERE or ORDER BY clauses.\n> \n> In my system, it would do exactly as in your's, because they are mostly the\n> same. 
The modification done to the tuple in heap_insert() and heap_replace(),\n> just before the call to RelationPutHeapTupleAtEnd(), makes each\n> LONG Datum of varsize 20. Just that the first 32 bits don't contain any\n> magic information.\n\nOK. I just want to get this working in a seamless way with our existing\ntypes.\n\n> \n> > > Maybe we make this mechanism so general that it is\n> > > automatically applied to ALL varsize attributes? We'll end up\n> > > with on big pg_long where 90+% of the databases content will\n> > > be stored.\n> >\n> > If most joins, comparisons are done on the 10% in the main table, so\n> > much the better.\n> \n> Yes, but how would you want to judge which varsize value to\n> put onto the \"secondary\" relation, and which one to keep in\n> the \"primary\" table for fast comparisions?\n\nThere is only one place in heap_insert that checks for tuple size and\nreturns an error if it exceeds block size. I recommend when we exceed\nthat we scan the tuple, and find the largest varlena type that is\nsupported for long relations, and set the long bit and copy the data\ninto the long table. Keep going until the tuple is small enough, and if\nnot, throw an error on tuple size exceeded. Also, prevent indexed\ncolumns from being made long.\n\n> \n> I think you forgot one little detail. In our model, you can\n> only move around the Datum's extended information around as\n> is. It will never be expanded in place, so it must be fetched\n> (index scan) again at any place, the value itself is\n> required.\n\nYes, I agree, but in most cases it will only be expanded to return to\nthe user application because long fields, as used above only when needed,\nare usually not used in WHERE or ORDER BY. If only a few values exceed\nthe 8k limit, those would have to be retrieved to meet the WHERE or\nORDER BY. If many are long, it would be a lot of lookups, but I think\nthis solution would be the best for most uses.\n\n> \n> The installed base currently uses varsize attributes with\n> indices on them to condition, sort and group on them. Now\n> pushing such a field into \"secondary\" occationally will cause\n> a substantial loss of performance.\n\nReally? Do people really group/order by on >8k values often? I question\nthis.\n\n> \n> So again, how do you determine which of the attributes is a\n> candidate to push into \"secondary\"? It is a such generic\n> approach, that I cannot imagine any fail safe method.\n\nOutlined above in heap_insert(). Seems it would be a small loop.\n\n> \n> I'd better like to have another LONG data type, that enables\n> me to store huge string into but where I exactly know what I\n> can't do with, than having some automatic detection process\n> that I cannot force to do what I want. It happened just to\n> often to me, that these \"user friendly better knowing what I\n> might want\" systems got me by the ball's. I'm a real\n> programmer, so there's allway a way out for me, but what\n> shoud a real user do?\n\nAutomatic is better, I think. We already have too many character types,\nand another one is going to be confusing. Also, if you have data that\nis mostly under 8k, but a few are over, how do you store that? Make\nthem all LONG and have the overhead for each row? This seems like a\nsituation many people are in. Also, by making it automatic, we can\nchange the implementation later without having to re-teach people how to\nstore long tuples.\n\n> \n> > Let's fact it. Most long tuples are store/retrieve, not ordered on or\n> > used in WHERE clauses. 
Moving them out of the main table speeds up\n> > things. It also prevents expansion of rows that never end up in the\n> > result set.\n> \n> Having a tuple consisting of 30+ attributes, where 20 of them\n> are varsize ones (CHAR, VARCHAR, NUMERIC etc.), what makes it\n> a long tuple? Yes, I'm repeating this question once again,\n> because we're talking about a \"one must fit all cases\" here.\n\nAgain, scan tuple and move to long table until tuple fits.\n\n> \n> > stuff because you are going to expand the tuple. That could be very\n> > costly. In my system, the expansion only happens on output if they LONG\n> > field does not appear in the WHERE or ORDER BY clauses.\n> \n> No I won't. As explained, I would return a tuple as is, just\n> with the LONG reference information. It will only, but then\n> allways again, be expanded if needed to compare, store again\n> or beeing output to the client. This \"allways again\" is one\n> of my drawbacks against your \"treating all varsize pushable\"\n> concept. In one of my early projects, I had to manage a\n> microVax for a year, and I love systems that can be fine\n> tuned since then, really! Auto detection is a nice feature,\n> but if that failes and you don't have any override option,\n> you're hosed.\n\nI am confused here. With my code, you only have to:\n\n\tadd code to write/read from long tables\n\tadd code to expand long values in varlen access routines\n\tadd code to heap_insert() to move data to long tables\n\tadd code to heap_delete() to invalidate long tuples\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Dec 1999 00:39:31 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Jesus, what have I done (was: LONG)"
},
{
"msg_contents": "> around independently. As you agreed somewhere else (on my\n> TRUNCATE issue), it would be better to keep the long values\n> in a per table expansion relation. Thus, you need the Oid of\n> that too at least. Also, it would be good to know the size of\n> the data before fetching it, so you need that to.\n> \n\nYes, I guess you could store the size, but the length is known by\nlooking at the long relation. We already have an index to get them in\norder, so there is no need to load them in random order.\n\n\n> The installed base currently uses varsize attributes with\n> indices on them to condition, sort and group on them. Now\n> pushing such a field into \"secondary\" occationally will cause\n> a substantial loss of performance.\n\nWe could allow indexes on long values by storing only 4k of the value. \nIf there is no other index value with a matching 4k value, the index of\n4k length is fine. If no, you fail the insert with an error.\n\n\n> I'd better like to have another LONG data type, that enables\n> me to store huge string into but where I exactly know what I\n> can't do with, than having some automatic detection process\n> that I cannot force to do what I want. It happened just to\n> often to me, that these \"user friendly better knowing what I\n> might want\" systems got me by the ball's. I'm a real\n> programmer, so there's allway a way out for me, but what\n> shoud a real user do?\n\nAutomatic allows small values to be inline, and long values to be moved\nto long tables in the same column. This is a nice feature. It\nmaximizes performance and capabilities. I can't imagine why someone\nwould want a LONG column if they can have a column that does both inline\nand long automatically and efficiently.\n\n> No I won't. As explained, I would return a tuple as is, just\n> with the LONG reference information. It will only, but then\n> allways again, be expanded if needed to compare, store again\n> or beeing output to the client. This \"allways again\" is one\n> of my drawbacks against your \"treating all varsize pushable\"\n> concept. In one of my early projects, I had to manage a\n> microVax for a year, and I love systems that can be fine\n> tuned since then, really! Auto detection is a nice feature,\n> but if that failes and you don't have any override option,\n> you're hosed.\n\nSo you expand it when you need it? That's fine. We can do that, except\nif you are accessing a real in-buffer tuple, and I am not sure you are\ngoing to know that at the time in all routines. By looking up each time\nit is needed and not changing the tuple, you make changes to the system\nminimal. And in my system, you have long entries only when the data\nrequires it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Dec 1999 00:53:27 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Jesus, what have I done (was: LONG)"
},
{
"msg_contents": "> > No I won't. As explained, I would return a tuple as is, just\n> > with the LONG reference information. It will only, but then\n> > allways again, be expanded if needed to compare, store again\n> > or beeing output to the client. This \"allways again\" is one\n> > of my drawbacks against your \"treating all varsize pushable\"\n> > concept. In one of my early projects, I had to manage a\n> > microVax for a year, and I love systems that can be fine\n> > tuned since then, really! Auto detection is a nice feature,\n> > but if that failes and you don't have any override option,\n> > you're hosed.\n>\n> I am confused here. With my code, you only have to:\n>\n> add code to write/read from long tables\n> add code to expand long values in varlen access routines\n> add code to heap_insert() to move data to long tables\n> add code to heap_delete() to invalidate long tuples\n\n Add code to expand long values in varlen access routines,\n you're joking - no?\n\n How many functions are there, called via the fmgr with a\n Datum as argument, and only knowing by themself (and a system\n catalog) that they receive a variable length attribute?\n\n So you would better do the fetching in the fmgr. Then again,\n there are many places in the code (and possibly in user\n extensions too), that call builtin functions like textout()\n directly, passing it the Datum they got from somewhere.\n\n I can understand why you would like to automatically pull out\n varsize values as needed. But I see really a bunch of\n problems coming with it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 12 Dec 1999 12:04:25 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: Jesus, what have I done (was: LONG)"
},
{
"msg_contents": "> > I am confused here. With my code, you only have to:\n> >\n> > add code to write/read from long tables\n> > add code to expand long values in varlen access routines\n> > add code to heap_insert() to move data to long tables\n> > add code to heap_delete() to invalidate long tuples\n> \n> Add code to expand long values in varlen access routines,\n> you're joking - no?\n\nNo, I am not joking. Why not expand them there? If we look at textout,\nit returns a character string for the text field. Why not do the lookup\nof long there and return a very long value?\n\nIf we look at texteq, we expand any long values into a palloc'ed area\nand do the compare. Here, I can see the advantage of knowing the length\nof the long string.\n\n> \n> How many functions are there, called via the fmgr with a\n> Datum as argument, and only knowing by themself (and a system\n> catalog) that they receive a variable length attribute?\n> \n> So you would better do the fetching in the fmgr. Then again,\n> there are many places in the code (and possibly in user\n> extensions too), that call builtin functions like textout()\n> directly, passing it the Datum they got from somewhere.\n\nI see what you are suggesting, that we expand in fmgr, but we don't know\nthe arg types in there, do we? I was suggesting we create an\nexpand_long() function that takes a long varlena and returns the long\nvalue in palloc'ed memory, and sprinkle the calls in varlena.c and\nvarchar.c, etc.\n\nIf you prefer to expand the tuple itself, you can do that, but I think\ndoing it only when needed is easier because of in-buffer tuples that you\nhave to process without modification.\n\n> \n> I can understand why you would like to automatically pull out\n> varsize values as needed. But I see really a bunch of\n> problems coming with it.\n\nThese are the only comments you have? Does that mean the other things I\nsaid are OK, or that you are humoring me?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Dec 1999 09:42:50 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Jesus, what have I done (was: LONG)"
},
{
"msg_contents": "> > add code to write/read from long tables\n> > add code to expand long values in varlen access routines\n> > add code to heap_insert() to move data to long tables\n> > add code to heap_delete() to invalidate long tuples\n> \n> Add code to expand long values in varlen access routines,\n> you're joking - no?\n> \n> How many functions are there, called via the fmgr with a\n> Datum as argument, and only knowing by themself (and a system\n> catalog) that they receive a variable length attribute?\n> \n> So you would better do the fetching in the fmgr. Then again,\n> there are many places in the code (and possibly in user\n> extensions too), that call builtin functions like textout()\n> directly, passing it the Datum they got from somewhere.\n\n\nYou may be able to expand the in-tuple copy if you had a bit on the\ntuple that said long fields exist, and do a heap_tuplecopy() only in\nthose cases.\n\nYou also could cache recently lookuped expand_long value so repeated\ncalls could return the value without reconstructing the long value.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Dec 1999 09:56:16 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Jesus, what have I done (was: LONG)"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Also, my idea was to auto-enable longs for all varlena types, so short\n> values stay in the table, while longer chained ones that take up lots of\n> space and are expensive to expand are retrieved only when needed.\n\nI missed most of yesterday's discussion (was off fighting a different\nfire...). This morning in the shower I had a brilliant idea, which\nI now see Bruce has beaten me to ;-)\n\nThe idea of doing tuple splitting by pushing \"long\" fields out of line,\nrather than just cutting up the tuple at arbitrary points, is clearly\na win for the reasons Bruce and Jan point out. But I like Bruce's\napproach (automatically do it for any overly-long varlena attribute)\nmuch better than Jan's (invent a special LONG datatype). A special\ndatatype is bad for several reasons:\n* it forces users to kluge up their database schemas;\n* inevitably, users will pick the wrong columns to make LONG (it's\n a truism that programmers seldom guess right about what parts of\n their programs consume the most resources; users would need a\n \"profiler\" to make the right decisions);\n* it doesn't solve the problems for arrays, which desperately need it;\n* we'd need to add a whole bunch of operations on the special datatype;\n\nI could live with all of those limitations if a \"clean\" datatype-based\nsolution were possible, ie, all the special code is in the datatype\nfunctions. But we already know that that's not possible --- there would\nhave to be special hacks for the LONG datatype in other places. So I\nthink we ought to handle the problem as part of the tuple access\nmachinery, not as a special datatype.\n\nI think that the right place to implement this is in heapam, and that\nit should go more or less like this:\n\n1. While writing out a tuple, if the total tuple size is \"too big\"\n(threshold would be some fraction of BLCKSZ, yet to be chosen),\nthen the tuple manager would go through the tuple to find the longest\nvarlena attribute, and convert same into an out-of-line attribute.\nRepeat if necessary until tuple size fits within threshold.\n\n2. While reading a tuple, fastgetattr() automatically fetches the\nout-of-line value if it sees the requested attribute is out-of-line.\n(I'd be inclined to mark out-of-line attributes in the same way that\nNULL attributes are marked: one bit in the tuple header shows if any\nout-of-line attrs are present, and if so there is a bitmap to show\nwhich ones are out-of-line. We could also use Bruce's idea of\ncommandeering the high-order bit of the varlena length word, but\nI think that's a much uglier and more fragile solution.)\n\nI think that these two changes would handle 99% of the problem.\nVACUUM would still need work, but most normal access to tuples would\njust work automatically, because all access to varlena fields must go\nthrough fastgetattr().\n\nAn as-yet-unsolved issue is how to avoid memory leaks of out-of-line\nvalues after they have been read in by fastgetattr(). However, I think\nthat's going to be a nasty problem with Jan's approach as well. 
The\nbest answer might be to solve this in combination with addressing the\nproblem of leakage of temporary results during expression evaluation,\nsay by adding some kind of reference-count convention to all varlena\nvalues.\n\nBTW, I don't see any really good reason to keep the out-of-line values\nin a separate physical file (relation) as Jan originally proposed.\nWhy not keep them in the same file, but mark them as being something\ndifferent than a normal tuple? Sequential scans would have to know to\nskip over them (big deal), and VACUUM would have to handle them\nproperly, but I think VACUUM is going to have to have special code to\nsupport this feature no matter what. If we do make them a new primitive\nkind-of-a-tuple on disk, we could sidestep the problem of marking all\nthe out-of-line values associated with a tuple when the tuple is\noutdated by a transaction. The out-of-line values wouldn't have\ntransaction IDs in them at all; they'd just be labeled with the CTID\nand/or OID of the primary tuple they belong to. VACUUM would consult\nthat tuple to determine whether to keep or discard an out-of-line value.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Dec 1999 13:02:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> A field value over 8k is not going to be something you join on,\n> restrict, or order by in most cases. It is going to be some long\n> narrative or field that is just for output to the user, usually not used\n> to process the query.\n\nNot necessarily. The classic example in my mind is a text field that\nthe user will want to do LIKE or regexp matching searches on. When\nhe does so (and only when he does so), we'd have no choice but to pull\nin the out-of-line value for each tuple in order to check the WHERE\nclause. But we'd have to do that no matter how you slice the problem.\n\nI think the case that is actually worth thinking about is where some\nvalues of the column are long and some are not so long. We should avoid\na solution that imposes out-of-line storage on *every* tuple even when\nthe particular tuple isn't large enough to cause a problem.\n\nI believe all of the proposals made so far have the ability to keep a\nshort value in-line, but the data-type-based approach has a significant\ndisadvantage: the decision has to be made by a data-type-specific\nroutine that wouldn't have information about the rest of the tuple that\nthe data will end up in. So it would have to err on the side of caution\nand put anything more than a fairly short value out-of-line. If the\ndecision is made by the tuple storage routine, then it can examine the\nwhole tuple and make a more nearly optimal choice about what to put\nout-of-line.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Dec 1999 13:10:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG "
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > If most joins, comparisons are done on the 10% in the main table, so\n> > > much the better.\n> >\n> > Yes, but how would you want to judge which varsize value to\n> > put onto the \"secondary\" relation, and which one to keep in\n> > the \"primary\" table for fast comparisions?\n> \n> There is only one place in heap_insert that checks for tuple size and\n> returns an error if it exceeds block size. I recommend when we exceed\n> that we scan the tuple, and find the largest varlena type that is\n> supported for long relations, and set the long bit and copy the data\n> into the long table. Keep going until the tuple is small enough, and if\n> not, throw an error on tuple size exceeded. Also, prevent indexed\n> columns from being made long.\n\nAnd prevent indexes from being created later if fields in some recorde \nare made long ?\n\nOr would it be enogh here to give out a warning ?\n\nOr should one try to re-pack these tuples ?\n\nOr, for tables that have mosty 10-char fields bu an occasional 10K field \nwe could possibly approach the indexes as currently proposed for tables, \ni.e. make the index's data part point to the same LONG relation ?\n\nThe latter would probably open another can of worms.\n\n---------\nHannu\n",
"msg_date": "Sun, 12 Dec 1999 21:55:36 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Jesus, what have I done (was: LONG)"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > Also, my idea was to auto-enable longs for all varlena types, so short\n> > values stay in the table, while longer chained ones that take up lots of\n> > space and are expensive to expand are retrieved only when needed.\n>\n> I missed most of yesterday's discussion (was off fighting a different\n> fire...). This morning in the shower I had a brilliant idea, which\n> I now see Bruce has beaten me to ;-)\n>\n> The idea of doing tuple splitting by pushing \"long\" fields out of line,\n> rather than just cutting up the tuple at arbitrary points, is clearly\n> a win for the reasons Bruce and Jan point out. But I like Bruce's\n> approach (automatically do it for any overly-long varlena attribute)\n> much better than Jan's (invent a special LONG datatype). A special\n> datatype is bad for several reasons:\n> * it forces users to kluge up their database schemas;\n> * inevitably, users will pick the wrong columns to make LONG (it's\n> a truism that programmers seldom guess right about what parts of\n> their programs consume the most resources; users would need a\n> \"profiler\" to make the right decisions);\n> * it doesn't solve the problems for arrays, which desperately need it;\n> * we'd need to add a whole bunch of operations on the special datatype;\n\nO.K.,\n\n you two got me now.\n\n>\n> I think that the right place to implement this is in heapam, and that\n> it should go more or less like this:\n>\n> 1. While writing out a tuple, if the total tuple size is \"too big\"\n> (threshold would be some fraction of BLCKSZ, yet to be chosen),\n> then the tuple manager would go through the tuple to find the longest\n> varlena attribute, and convert same into an out-of-line attribute.\n> Repeat if necessary until tuple size fits within threshold.\n\n Yepp. But it does NOT mangle up the tuple handed to it in\n place. The flat values in the tuple are sometimes used AFTER\n heap_insert() and heap_update(), for example for\n index_insert. So that might break other places.\n\n> 2. While reading a tuple, fastgetattr() automatically fetches the\n> out-of-line value if it sees the requested attribute is out-of-line.\n> (I'd be inclined to mark out-of-line attributes in the same way that\n> NULL attributes are marked: one bit in the tuple header shows if any\n> out-of-line attrs are present, and if so there is a bitmap to show\n> which ones are out-of-line. We could also use Bruce's idea of\n> commandeering the high-order bit of the varlena length word, but\n> I think that's a much uglier and more fragile solution.)\n>\n> I think that these two changes would handle 99% of the problem.\n> VACUUM would still need work, but most normal access to tuples would\n> just work automatically, because all access to varlena fields must go\n> through fastgetattr().\n\n And I like Bruce's idea with the high order bit of vl_len.\n This is IMHO the only chance, to tell on UPDATE if the value\n wasn't changed.\n\n To detect that an UPDATE did not touch the out of line value,\n you need the complete long reference information in the\n RESULT tuple. The executor must not expand the value while\n building them up already.\n\n But Tom is right, there is a visibility problem I haven't\n seen before. It is that when fetching the out of line\n attribute (for example in the type output function) is done\n later than fetching the reference information. 
Then a\n transaction reading dirty or committed might see wrong\n content, or worse, see different contents at different\n fetches.\n\n The solution I see is to give any out of line datum another\n Oid that is part of its header and stamped into the\n reference data. That way, the long attribute lookup can use\n SnapshotAny using this Oid, there can only be one that\n exists, so SnapshotAny is safe here and forces that only the\n visibility of the master tuple in the main table counts at\n all.\n\n Since this Values Oid is known in the Values reference of the\n tuple, we only need two indices on the out of line data. One\n on this Oid, one on the referencing row's oid|attrno|seq to\n be fast in heap_delete() and heap_update().\n\n> An as-yet-unsolved issue is how to avoid memory leaks of out-of-line\n> values after they have been read in by fastgetattr(). However, I think\n> that's going to be a nasty problem with Jan's approach as well. The\n> best answer might be to solve this in combination with addressing the\n> problem of leakage of temporary results during expression evaluation,\n> say by adding some kind of reference-count convention to all varlena\n> values.\n\n At the point we decide to move an attribute out of the tuple,\n we make a lookup in an array consisting of type Oid's. Thus,\n we have plenty of time to add one datatype after another and\n enable them separately for long processing, but get the ones\n enabled ASAP (next release) out of the door.\n\n As Bruce suggested, we implement a central function that\n fetches back the long value. This is used in all the type\n specific functions in adt. Now that we have an Oid\n identifier per single value, it's easy to implement a cache\n there that can manage an LRU table of the last fetched values\n and cache smaller ones for fast access.\n\n It's the responsibility of the types' adt functions to free the\n returned (old VARLENA looking) memory. Since we enable the\n types one-by-one, there's no need to hurry on this.\n\n> BTW, I don't see any really good reason to keep the out-of-line values\n> in a separate physical file (relation) as Jan originally proposed.\n> Why not keep them in the same file, but mark them as being something\n> different than a normal tuple? Sequential scans would have to know to\n> skip over them (big deal), and VACUUM would have to handle them\n\n The one I see is that a sequential scan would not benefit\n from this, it still has to read the entire relation, even if\n looking only at small, fixed size items in the tuple. It will be\n a big win for count(*). And with the mentioned value cache\n for relatively small (yet to define what that is) values,\n there will be very little overhead in a sort, if the tuples\n in it are sorted by an attribute where some long values\n occasionally appear.\n\n> properly, but I think VACUUM is going to have to have special code to\n> support this feature no matter what. If we do make them a new primitive\n> kind-of-a-tuple on disk, we could sidestep the problem of marking all\n> the out-of-line values associated with a tuple when the tuple is\n> outdated by a transaction. The out-of-line values wouldn't have\n> transaction IDs in them at all; they'd just be labeled with the CTID\n> and/or OID of the primary tuple they belong to. VACUUM would consult\n> that tuple to determine whether to keep or discard an out-of-line value.\n\n AFAIK, VACUUM consults single attributes of a tuple only to\n produce the statistical information for them on ANALYZE.\n Well, statistical information for columns containing LONG\n values isn't good for the WHERE clause (I think we all agree\n on that). So it doesn't matter if this information isn't\n totally accurate, or if VACUUM counts them but uses only the\n first couple of bytes for the min/max etc. info.\n\n Also, the new long data relations should IMHO have their own\n relkind, so VACUUM can easily detect them. This I think is\n required, so VACUUM can place an exclusive lock on the main\n table first before starting to vacuum the long values (which\n can be done as is since it is in fact a normal relation -\n just not visible to the user). This should avoid race\n conditions as explained above on the visibility problem.\n\n I'll start to play around with this approach for a while,\n using lztext as the test candidate (with custom compression\n parameters that force uncompressed storage). When I have some\n reasonable result ready to look at, I'll send a patch here,\n so we can continue the discussion while looking at some test\n implementation.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 12 Dec 1999 21:45:56 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "> > I am confused here. With my code, you only have to:\n> >\n> > add code to write/read from long tables\n> > add code to expand long values in varlen access routines\n> > add code to heap_insert() to move data to long tables\n> > add code to heap_delete() to invalidate long tuples\n> \n> Add code to expand long values in varlen access routines,\n> you're joking - no?\n\nHere is a patch to textout() that allows it to handle long tuples. It\nchecks the long bit, and calls the proper expansion function, and\npfree()'s it on exit.\n\nIt is a minimal amount of code that could be added to all the varlena\naccess routines. I would be glad to do it.\n\nBy doing it there, we expand only when we access the varlena value, not\non every tuple.\n\n---------------------------------------------------------------------------\n\n\n*** varlena.c\tSun Nov 7 18:08:24 1999\n--- varlena.c.new\tSun Dec 12 15:49:35 1999\n***************\n*** 176,181 ****\n--- 176,182 ----\n {\n \tint\t\t\tlen;\n \tchar\t *result;\n+ \tbool\t islong = false;\n \n \tif (vlena == NULL)\n \t{\n***************\n*** 184,189 ****\n--- 185,197 ----\n \t\tresult[1] = '\\0';\n \t\treturn result;\n \t}\n+ \n+ \tif (VARISLONG(vlena)) /* checks long bit */\n+ \t{\n+ \t\tvlena = expand_long(vlena); /* returns palloc long */\n+ \t\tislong = true;\n+ \t}\n+ \n \tlen = VARSIZE(vlena) - VARHDRSZ;\n \tresult = (char *) palloc(len + 1);\n \tmemmove(result, VARDATA(vlena), len);\n***************\n*** 192,197 ****\n--- 200,208 ----\n #ifdef CYR_RECODE\n \tconvertstr(result, len, 1);\n #endif\n+ \n+ \tif (islong)\n+ \t\tpfree(vlena);\n \n \treturn result;\n }\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Dec 1999 16:11:17 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Jesus, what have I done (was: LONG)"
},
{
"msg_contents": "> 2. While reading a tuple, fastgetattr() automatically fetches the\n> out-of-line value if it sees the requested attribute is out-of-line.\n> (I'd be inclined to mark out-of-line attributes in the same way that\n> NULL attributes are marked: one bit in the tuple header shows if any\n> out-of-line attrs are present, and if so there is a bitmap to show\n> which ones are out-of-line. We could also use Bruce's idea of\n> commandeering the high-order bit of the varlena length word, but\n> I think that's a much uglier and more fragile solution.)\n\nNot sure if fastgetattr() is the place for this. I thought the varlena\naccess routines themselves would work. It is nice and clean to do it in\nfastgetattr, but how do you know to pfree it? I suppose if you kept the\nhigh bit set, you could try cleaning up, but where?\n\nMy idea was to expand the out-of-line varlena, and unset the 'long' bit.\n\n\tlong-bit|length|reloid|tupleoid|attno|longlen\n\nUnexpanded would be:\n\n\t1|20|10032|23123|5|20000\n\nunexpanded is:\n\n\t0|20000|data\n\n\n> \n> I think that these two changes would handle 99% of the problem.\n> VACUUM would still need work, but most normal access to tuples would\n> just work automatically, because all access to varlena fields must go\n> through fastgetattr().\n> \n> An as-yet-unsolved issue is how to avoid memory leaks of out-of-line\n> values after they have been read in by fastgetattr(). However, I think\n> that's going to be a nasty problem with Jan's approach as well. The\n> best answer might be to solve this in combination with addressing the\n> problem of leakage of temporary results during expression evaluation,\n> say by adding some kind of reference-count convention to all varlena\n> values.\n\nThat's why I was going to do the expansion only in the varlena access\nroutines. Patch already posted.\n\n> \n> BTW, I don't see any really good reason to keep the out-of-line values\n> in a separate physical file (relation) as Jan originally proposed.\n> Why not keep them in the same file, but mark them as being something\n> different than a normal tuple? Sequential scans would have to know to\n> skip over them (big deal), and VACUUM would have to handle them\n> properly, but I think VACUUM is going to have to have special code to\n> support this feature no matter what. If we do make them a new primitive\n> kind-of-a-tuple on disk, we could sidestep the problem of marking all\n> the out-of-line values associated with a tuple when the tuple is\n> outdated by a transaction. The out-of-line values wouldn't have\n> transaction IDs in them at all; they'd just be labeled with the CTID\n> and/or OID of the primary tuple they belong to. VACUUM would consult\n> that tuple to determine whether to keep or discard an out-of-line value.\n\nI disagree. By moving to another table, we don't have non-standard\ntuples in the main table. We can create normal tuples in the long*\ntable, of identical format, and access them just like normal tuples. \nHaving special long tuples in the main table that don't follow the\nformat of the other tuples it a certain mess. The long* tables also\nmove the long data out of the main table so it is not accessed in\nsequential scans. Why keep them in the main table?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Dec 1999 16:44:27 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> I disagree. By moving to another table, we don't have non-standard\n> tuples in the main table. We can create normal tuples in the long*\n> table, of identical format, and access them just like normal tuples.\n> Having special long tuples in the main table that don't follow the\n> format of the other tuples it a certain mess. The long* tables also\n> move the long data out of the main table so it is not accessed in\n> sequential scans. Why keep them in the main table?\n\n More ugly and complicated (especially for VACUUM) seems to\n me, the we need an index on these nonstandard tuples, that\n doesn't see the standard ones, while the regular indices\n ignore the new long tuples. At least if we want to delay\n reading of long values until they're explicitly requested.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 12 Dec 1999 22:54:35 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> > I disagree. By moving to another table, we don't have non-standard\n> > tuples in the main table. We can create normal tuples in the long*\n> > table, of identical format, and access them just like normal tuples.\n> > Having special long tuples in the main table that don't follow the\n> > format of the other tuples it a certain mess. The long* tables also\n> > move the long data out of the main table so it is not accessed in\n> > sequential scans. Why keep them in the main table?\n> \n> More ugly and complicated (especially for VACUUM) seems to\n> me, the we need an index on these nonstandard tuples, that\n> doesn't see the standard ones, while the regular indices\n> ignore the new long tuples. At least if we want to delay\n> reading of long values until they're explicitly requested.\n> \n\nYes, good point. No reason to create non-standard tuples if you can\navoid it. And a separate table has performance advantages, especially\nbecause the long tuples are by definition long and take up lots of\nblocks.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Dec 1999 17:04:54 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "I'm working on bullet-proofing AOLserver's postgres driver.\n\nI've fixed a bunch of weaknesses, but am stumped by the\nfollowing...\n\nAOLserver's a multithreaded server, and libpq's database\nconnection routines aren't threadsafe. It turns out the\nenvironment in which the driver lives doesn't allow me\nto ensure that only one thread executes a PQsetdb at a\ntime, at least without resorting to the specific operating\nsystem's mutexes and cond primitives. The server provides\na nice portable interface for such things but they're\nnot available to database drivers because in general the\nserver's not interested in having database drivers do such\nthings.\n\nThat's not a problem for this group, but I'm curious. People\nhave been using this driver for years, and some use it \nheavily (Lamar Owen, for one). Despite the thread unsafeness\nof PQsetdb et al, I've never seen a failure in this environment\nand I've never heard of folks experiencing such a failure.\n\nSo my question's simple - what exactly makes PQsetdb et al\nthread unsafe? I'm asking in order to attempt to get a handle\non just how vulnerable the routines are when two threads attempt\nto open a database connection simultaneously.\n\nThe other question's simple, too - are the implications predictable,\ni.e. will (for instance) one of the attemps simply crash or\nfail when two or more threads attempt to make a connection? Or\nam I looking at something more evil, like silent building of a\nconnection messed up in some subtle way? \n\nI suspect the answer to the last question is that the result of\ndoing this is unpredictable, but thought I'd ask.\n\nAOLserver supports external drivers called by a proxy with a \nseparate process provided for each database connection, but\nthere are unfortunate performance implications with this\napproach. It's designed explicitly for dbs with no threadsafe\nC API. This includes Sybase, and in my testing the internal\nPostgres driver can feed bytes to the server about three times\nas fast as the external driver written for Sybase, so you\ncan see why I'm reluctant to rewrite the Postgres driver simply\nbecause building a connection's not threadsafe. After all,\nunless a backend crashes they only happen when the server's\nfirst fired up. And people aren't seeing problems.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 12 Dec 1999 14:18:47 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "libpq questions...when threads collide"
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> Despite the thread unsafeness\n> of PQsetdb et al, I've never seen a failure in this environment\n> and I've never heard of folks experiencing such a failure.\n\nThe *only* thing that's actually thread-unsafe, AFAIR, is\nPQconnectdb's use of a global array for connection parameters.\nPQsetdb/setdbLogin are thread-safe; so just use them instead.\n\nAt least that was true before the async-connection code got added.\nI haven't looked at that to see if it introduces any problems.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Dec 1999 17:41:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq questions...when threads collide "
},
{
"msg_contents": "At 05:41 PM 12/12/99 -0500, Tom Lane wrote:\n>Don Baccus <[email protected]> writes:\n>> Despite the thread unsafeness\n>> of PQsetdb et al, I've never seen a failure in this environment\n>> and I've never heard of folks experiencing such a failure.\n>\n>The *only* thing that's actually thread-unsafe, AFAIR, is\n>PQconnectdb's use of a global array for connection parameters.\n>PQsetdb/setdbLogin are thread-safe; so just use them instead.\n\nCool! I am using setdbLogin but the documentation sez they,\ntoo, aren't threadsafe...maybe this should be changed? This\nis great news.\n\n>At least that was true before the async-connection code got added.\n>I haven't looked at that to see if it introduces any problems.\n\nFor the moment, I'm happy to believe that it hasn't, it makes my\nimmediate future much simpler if I do so...\n\nAlso, the documentation describes two routines, PQoidStatus and\nPQoidValue, but the libpq source seem to only define PQoidStatus.\n\n(some user asked for a routine to feed back the oid of an insert,\nso I looked into it while simultaneously suggesting he study\n\"sequence\" and its associated \"nextval\" and \"currval\" functions\nand ponder on why it's really a bad idea to related tables by\nstoring oids rather than generated keys)\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 12 Dec 1999 14:53:13 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq questions...when threads collide "
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> Cool! I am using setdbLogin but the documentation sez they,\n> too, aren't threadsafe...maybe this should be changed?\n\nI guess so. Submit a patch...\n\n> Also, the documentation describes two routines, PQoidStatus and\n> PQoidValue, but the libpq source seem to only define PQoidStatus.\n\nPQoidValue is new in current sources --- you must be looking at\ncurrent-snapshot docs, rather than what was released with 6.5.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Dec 1999 17:58:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq questions...when threads collide "
},
{
"msg_contents": "At 05:58 PM 12/12/99 -0500, Tom Lane wrote:\n\n>PQoidValue is new in current sources --- you must be looking at\n>current-snapshot docs, rather than what was released with 6.5.\n\nI'm using the docs at www.postgresql.org, which I assumed would\nbe matched to the current release.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 12 Dec 1999 15:01:19 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq questions...when threads collide "
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n>> PQoidValue is new in current sources --- you must be looking at\n>> current-snapshot docs, rather than what was released with 6.5.\n\n> I'm using the docs at www.postgresql.org, which I assumed would\n> be matched to the current release.\n\nI believe the on-line manual is a nightly snapshot. This is awfully\nhandy for developers but not so good for ordinary users. What we\nshould probably do is have both the snapshot and the last release's\ndocs on the website ... clearly marked ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Dec 1999 18:11:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq questions...when threads collide "
},
{
"msg_contents": "\nOn 12-Dec-99 Tom Lane wrote:\n> Don Baccus <[email protected]> writes:\n>>> PQoidValue is new in current sources --- you must be looking at\n>>> current-snapshot docs, rather than what was released with 6.5.\n> \n>> I'm using the docs at www.postgresql.org, which I assumed would\n>> be matched to the current release.\n> \n> I believe the on-line manual is a nightly snapshot. This is awfully\n> handy for developers but not so good for ordinary users. What we\n> should probably do is have both the snapshot and the last release's\n> docs on the website ... clearly marked ;-)\n\nLast I looked the docs for every particular version were included with \nthe tarball. No matter how clearly you mark anything it still won't be\nseen and someone will complain. Either we should keep the current docs\nor the release docs online - not both.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Sun, 12 Dec 1999 18:27:04 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq questions...when threads collide"
},
{
"msg_contents": "At 06:27 PM 12/12/99 -0500, Vince Vielhaber wrote:\n\n>Last I looked the docs for every particular version were included with \n>the tarball. No matter how clearly you mark anything it still won't be\n>seen and someone will complain.\n\nI'm not complaining, I'm explaining where I found the definition of\nPQoidValue. And, yes, I know the docs are in the tarball. As it\nhappens I have a permanent, high-speed internet connection and find\nit convenient to use the docs at postgres.org. If that makes me an\nidiot in your book I could care less.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 12 Dec 1999 15:37:12 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq questions...when threads collide"
},
{
"msg_contents": "\nOn 12-Dec-99 Don Baccus wrote:\n> At 06:27 PM 12/12/99 -0500, Vince Vielhaber wrote:\n> \n>>Last I looked the docs for every particular version were included with \n>>the tarball. No matter how clearly you mark anything it still won't be\n>>seen and someone will complain.\n> \n> I'm not complaining, I'm explaining where I found the definition of\n> PQoidValue. And, yes, I know the docs are in the tarball. As it\n> happens I have a permanent, high-speed internet connection and find\n> it convenient to use the docs at postgres.org. If that makes me an\n> idiot in your book I could care less.\n\nNow where the hell did I call you an idiot? Tom said we should have\ncurrent and release docs online clearly marked. Reread my reply. Are\nyou volunteering to be that special \"someone\"?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Sun, 12 Dec 1999 18:47:43 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq questions...when threads collide"
},
{
"msg_contents": "There are so many mails for me to follow about this issue. \nFor example,what's the conclusion about the following ?\nPlease teach me.\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> \n> \n> BTW, I don't see any really good reason to keep the out-of-line values\n> in a separate physical file (relation) as Jan originally proposed.\n> Why not keep them in the same file, but mark them as being something\n> different than a normal tuple? Sequential scans would have to know to\n> skip over them (big deal), and VACUUM would have to handle them\n> properly, but I think VACUUM is going to have to have special code to\n> support this feature no matter what. If we do make them a new primitive\n> kind-of-a-tuple on disk, we could sidestep the problem of marking all\n> the out-of-line values associated with a tuple when the tuple is\n> outdated by a transaction. The out-of-line values wouldn't have\n> transaction IDs in them at all; they'd just be labeled with the CTID\n\nWhat is wong if out-of-line values have their own XIDs ?\nIf an out-of-line is newer than corresponding row in \"primary\" table\nit's bad but could it occur ?\nBecause (rowid) of \"secondary\" table references \"primary\" table(oid)\non delete cascade,XID_MAXs of them would be synchronized.\n \nWhy is CTID needed ? Is it necessary to know \"primary\" tuples from\nout-of-lines values ? \n\n> and/or OID of the primary tuple they belong to. VACUUM would consult\n> that tuple to determine whether to keep or discard an out-of-line value.\n>\n\nWhat is wrong with separate VACUUM ?\nVACUUM never changes OIDs and XIDs(after MVCC).\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Mon, 13 Dec 1999 10:27:10 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] LONG "
},
{
"msg_contents": "Vince Vielhaber <[email protected]> writes:\n> Either we should keep the current docs\n> or the release docs online - not both.\n\nI disagree, because they serve different audiences. The snapshot docs\nare very useful to developers, particularly those of us who don't have\nSGML tools installed but still want to know whether the docs we\ncommitted recently look right or not ;-). Meanwhile, current-release\ndocuments are clearly the right thing to provide for ordinary users.\n\nI think a reasonable choice would be to provide current-release docs\nas the most readily accessible set of docs on the website, and to put\nthe snapshot docs somewhere less obvious where only developers would\nnormally go (preferably, accessed off a page that is clearly about\ndevelopment sources).\n\nIf I can't have both, I'd reluctantly say that the release docs are\nthe right ones to have on the website.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Dec 1999 21:01:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq questions...when threads collide "
},
{
"msg_contents": "> The solution I see is to give any out of line datum another\n> Oid, that is part of it's header and stamped into the\n> reference data. That way, the long attribute lookup can use\n> SnapshotAny using this Oid, there can only be one that\n> exists, so SnapshotAny is safe here and forces that only the\n> visibility of the master tuple in the main table counts at\n> all.\n\nThis is a great idea. Get rid of my use of the attribute number. Make\nthe varlena long value be:\n\n\tlong-bit|length|longrelid|longoid|longlen\n\nNo need for attno in there anymore.\n\nHaving a separate oid for the long value is great. You can then have\nmultiple versions of the long attribute in the long table and can\ncontrol when updating a tuple.\n\nI liked Hiroshi's idea of allowing long values in an index by just\npointing to the long table. Seems that would work too. varlena access\nroutines make that possible.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Dec 1999 21:38:15 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> There are so many mails for me to follow about this issue. \n> For example,what's the conclusion about the following ?\n\nI don't think it's concluded yet...\n\n> Why is CTID needed ? Is it necessary to know \"primary\" tuples from\n> out-of-lines values ? \n\nIt seems to me that the primary tuple should store CTIDs of the\nout-of-line segment(s) it's using. That way, we need no index at\nall on the expansion relation, which would clearly be a win.\n\nMy thought was that if the expansion tuples stored CTIDs of their\nprimary tuples, then it would be practical to have VACUUM consult\nthe primary tuples' xact status while vacuuming the expansion.\nThat way, we'd have no need to update expansion tuples when changing\nxact status of primary tuples. But I think Jan has something else\nin mind for that.\n\nIt would be a little tricky to write out a tuple plus its expansion\ntuples and have them all know each others' CTIDs; the CTIDs would\nhave to be assigned before anything got written. And VACUUM would\nneed a little extra logic to update these things. But those are\nvery localized and IMHO solvable problems, and I think the performance\nadvantages would be significant...\n\n> What is wrong with separate VACUUM ?\n> VACUUM never changes OIDs and XIDs(after MVCC).\n\nI believe VACUUM does assign its own XID to tuples that it moves,\nso that a crash during VACUUM doesn't corrupt the table by leaving\nmultiple apparently-valid copies of a tuple. We'd have to figure out\nhow to accomplish the same result for expansion tuples.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Dec 1999 22:00:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG "
},
{
"msg_contents": ">\n> > The solution I see is to give any out of line datum another\n> > Oid, that is part of it's header and stamped into the\n> > reference data. That way, the long attribute lookup can use\n> > SnapshotAny using this Oid, there can only be one that\n> > exists, so SnapshotAny is safe here and forces that only the\n> > visibility of the master tuple in the main table counts at\n> > all.\n>\n> This is a great idea. Get rid of my use of the attribute number. Make\n> the varlena long value be:\n>\n> long-bit|length|longrelid|longoid|longlen\n>\n> No need for attno in there anymore.\n\n I still need it to explicitly remove one long value on\n update, while the other one is untouched. Otherwise I would\n have to drop all long values for the row together and\n reinsert all new ones.\n\n> Having a separate oid for the long value is great. You can then have\n> multiple versions of the long attribute in the long table and can\n> control when updating a tuple.\n>\n> I liked Hiroshi's idea of allowing long values in an index by just\n> pointing to the long table. Seems that would work too. varlena access\n> routines make that possible.\n\n Maybe possible, but not that good IMHO. Would cause another\n index scan from inside index scan to get at the value. An we\n all agree that indexing huge values isn't that a good thing\n at all.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 13 Dec 1999 04:01:55 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n>> I liked Hiroshi's idea of allowing long values in an index by just\n>> pointing to the long table. Seems that would work too. varlena access\n>> routines make that possible.\n\n> Maybe possible, but not that good IMHO. Would cause another\n> index scan from inside index scan to get at the value. An we\n> all agree that indexing huge values isn't that a good thing\n> at all.\n\nWell, no, you shouldn't make indexes on fields that are usually big.\nBut it'd be awfully nice if the system could cope with indexing fields\nthat just had a long value once in a while. Right now, our answer is\nto refuse to let you insert a long value into an indexed field; I don't\nthink that's very satisfactory.\n\nWhat do you think of my idea of not using any index on the expansion\ntable at all, but instead having the primary tuple reference the\nexpansion tuples via their CTIDs? More work at VACUUM time, for sure,\nbut a lot less work elsewhere.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Dec 1999 22:23:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG "
},
{
"msg_contents": "> \"Hiroshi Inoue\" <[email protected]> writes:\n> > There are so many mails for me to follow about this issue. \n> > For example,what's the conclusion about the following ?\n> \n> I don't think it's concluded yet...\n> \n> > Why is CTID needed ? Is it necessary to know \"primary\" tuples from\n> > out-of-lines values ? \n> \n> It seems to me that the primary tuple should store CTIDs of the\n> out-of-line segment(s) it's using. That way, we need no index at\n> all on the expansion relation, which would clearly be a win.\n\nThat could be bad. Vacuum moving expired entries in long_ tables would\nneed to update the ctids in the primary relation, which would be a mess.\nAlso, I can see an 16MB relation using 8k of stored ctids. Entries over\n16MB would be overflow, causing problems. I think an index and\ntradition access will be just fine.\n\n> \n> My thought was that if the expansion tuples stored CTIDs of their\n> primary tuples, then it would be practical to have VACUUM consult\n> the primary tuples' xact status while vacuuming the expansion.\n> That way, we'd have no need to update expansion tuples when changing\n> xact status of primary tuples. But I think Jan has something else\n> in mind for that.\n\nThen you need to have a way to point back to the primary table from the\nlong_ table. Doesn't seem worth it.\n\nAlso, I am questioning the use of compressed for long tuples. I often\ndon't want some compression happening behind the scenes.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Dec 1999 22:25:54 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Jan Wieck\n> \n> >\n> > Having a separate oid for the long value is great. You can then have\n> > multiple versions of the long attribute in the long table and can\n> > control when updating a tuple.\n> >\n> > I liked Hiroshi's idea of allowing long values in an index by just\n> > pointing to the long table. Seems that would work too. varlena access\n> > routines make that possible.\n> \n> Maybe possible, but not that good IMHO. Would cause another\n> index scan from inside index scan to get at the value. An we\n> all agree that indexing huge values isn't that a good thing\n> at all.\n>\n\nWhat I need is an unqiue index (rowid,rowattno,chunk_seq) on\n\"secondary\" table.\nIs it different from your orginal idea ?\nI don't need any index on primary table.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Mon, 13 Dec 1999 12:56:08 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] LONG"
},
{
"msg_contents": "> >\n> > > The solution I see is to give any out of line datum another\n> > > Oid, that is part of it's header and stamped into the\n> > > reference data. That way, the long attribute lookup can use\n> > > SnapshotAny using this Oid, there can only be one that\n> > > exists, so SnapshotAny is safe here and forces that only the\n> > > visibility of the master tuple in the main table counts at\n> > > all.\n> >\n> > This is a great idea. Get rid of my use of the attribute number. Make\n> > the varlena long value be:\n> >\n> > long-bit|length|longrelid|longoid|longlen\n> >\n> > No need for attno in there anymore.\n> \n> I still need it to explicitly remove one long value on\n> update, while the other one is untouched. Otherwise I would\n> have to drop all long values for the row together and\n> reinsert all new ones.\n\nI am suggesting the longoid is not the oid of the primary or long*\ntable, but a unque id we assigned just to number all parts of the long*\ntuple. I thought that's what your oid was for.\n\n> \n> > Having a separate oid for the long value is great. You can then have\n> > multiple versions of the long attribute in the long table and can\n> > control when updating a tuple.\n> >\n> > I liked Hiroshi's idea of allowing long values in an index by just\n> > pointing to the long table. Seems that would work too. varlena access\n> > routines make that possible.\n> \n> Maybe possible, but not that good IMHO. Would cause another\n> index scan from inside index scan to get at the value. An we\n> all agree that indexing huge values isn't that a good thing\n> at all.\n\nMay as well. I can't think of a better solution for indexing when you\nhave long values. I don't think we want long* versions of indexes.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Dec 1999 23:12:19 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: Monday, December 13, 1999 12:00 PM\n> To: Hiroshi Inoue\n> Cc: Bruce Momjian; Jan Wieck; [email protected]\n> Subject: Re: [HACKERS] LONG \n> \n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > There are so many mails for me to follow about this issue. \n> > For example,what's the conclusion about the following ?\n> \n> I don't think it's concluded yet...\n> \n> > Why is CTID needed ? Is it necessary to know \"primary\" tuples from\n> > out-of-lines values ? \n> \n> It seems to me that the primary tuple should store CTIDs of the\n> out-of-line segment(s) it's using. That way, we need no index at\n> all on the expansion relation, which would clearly be a win.\n> \n> My thought was that if the expansion tuples stored CTIDs of their\n> primary tuples, then it would be practical to have VACUUM consult\n> the primary tuples' xact status while vacuuming the expansion.\n> That way, we'd have no need to update expansion tuples when changing\n> xact status of primary tuples. But I think Jan has something else\n> in mind for that.\n> \n> It would be a little tricky to write out a tuple plus its expansion\n> tuples and have them all know each others' CTIDs; the CTIDs would\n> have to be assigned before anything got written. And VACUUM would\n> need a little extra logic to update these things. But those are\n> very localized and IMHO solvable problems, and I think the performance\n> advantages would be significant...\n>\n\nIf CTIDs are needed it isn't worth the work,I think.\nI don't understand why the reference \"secondary\" to \"primary\" is\nneeded. As far as I see,VACUUM doesn't need the reference. \n \n> > What is wrong with separate VACUUM ?\n> > VACUUM never changes OIDs and XIDs(after MVCC).\n> \n> I believe VACUUM does assign its own XID to tuples that it moves,\n\nAFAIK,vacuum never changes XIDs because MVCC doesn't allow\nit. Vadim changed to preverve XIDs between VACUUM before MVCC.\nVadim used CommandId instead to see whether VACUUM succeeded\nor not. \n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Mon, 13 Dec 1999 13:16:50 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] LONG "
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Bruce Momjian\n> \n> > >\n> > > > The solution I see is to give any out of line datum another\n> > > > Oid, that is part of it's header and stamped into the\n> > > > reference data. That way, the long attribute lookup can use\n> > > > SnapshotAny using this Oid, there can only be one that\n> > > > exists, so SnapshotAny is safe here and forces that only the\n> > > > visibility of the master tuple in the main table counts at\n> > > > all.\n> > >\n> > > This is a great idea. Get rid of my use of the attribute \n> number. Make\n> > > the varlena long value be:\n> > >\n> > > long-bit|length|longrelid|longoid|longlen\n> > >\n> > > No need for attno in there anymore.\n> > \n> > I still need it to explicitly remove one long value on\n> > update, while the other one is untouched. Otherwise I would\n> > have to drop all long values for the row together and\n> > reinsert all new ones.\n> \n> I am suggesting the longoid is not the oid of the primary or long*\n> table, but a unque id we assigned just to number all parts of the long*\n> tuple. I thought that's what your oid was for.\n>\n\nUnfortunately I couldn't follow this issue correctly. \nIs the format of long value relation different from Jan's original now ?\n\n - At CREATE TABLE, a long value relation named\n \"_LONG<tablename>\" is created for those tables who need it.\n And of course dropped and truncated appropriate. The schema\n of this table is\n\n rowid Oid, -- oid of our main data row\n rowattno int2, -- the attribute number in main data\n chunk_seq int4, -- the part number of this data chunk\n chunk text -- the content of this data chunk\n \nI thought that there's an unique index (rowid,rowattno,chunk_seq).\nSeems we could even update partially(specified chunk_seq only)\nwithout problem.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Mon, 13 Dec 1999 14:19:27 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] LONG"
},
{
"msg_contents": "> > I am suggesting the longoid is not the oid of the primary or long*\n> > table, but a unque id we assigned just to number all parts of the long*\n> > tuple. I thought that's what your oid was for.\n> >\n> \n> Unfortunately I couldn't follow this issue correctly. \n> Is the format of long value relation different from Jan's original now ?\n> \n> - At CREATE TABLE, a long value relation named\n> \"_LONG<tablename>\" is created for those tables who need it.\n> And of course dropped and truncated appropriate. The schema\n> of this table is\n> \n> rowid Oid, -- oid of our main data row\n\nI am suggesting a unique oid just to store this long value. The new oid\ngets stored in the primary table, and on every row of the long* table.\n\n\n> rowattno int2, -- the attribute number in main data\n\nNot needed anymore.\n\n> chunk_seq int4, -- the part number of this data chunk\n> chunk text -- the content of this data chunk\n\nYes.\n\n> \n> I thought that there's an unique index (rowid,rowattno,chunk_seq).\n\nIndex on longoid only. No need index on longoid and chunk_seq because\nyou don't need the rows returned in order.\n\n\n> Seems we could even update partially(specified chunk_seq only)\n> without problem.\n\nThat could be done, but seems too rare because the new data would have\nto be the same length. Doesn't seem worth�it, though others may\ndisagree. \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Dec 1999 00:59:21 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> > > No need for attno in there anymore.\n> >\n> > I still need it to explicitly remove one long value on\n> > update, while the other one is untouched. Otherwise I would\n> > have to drop all long values for the row together and\n> > reinsert all new ones.\n>\n> I am suggesting the longoid is not the oid of the primary or long*\n> table, but a unque id we assigned just to number all parts of the long*\n> tuple. I thought that's what your oid was for.\n\n It's not even an Oid of any existing tuple, just an\n identifier to quickly find all the chunks of one LONG value\n by (non-unique) index.\n\n My idea is this now:\n\n The schema of the expansion relation is\n\n value_id Oid\n chunk_seq int32\n chunk_data text\n\n with a non unique index on value_id.\n\n We change heap_formtuple(), heap_copytuple() etc. not to\n allocate the entire thing in one palloc(). Instead the tuple\n portion itself is allocated separately and the current memory\n context remembered too in the HeapTuple struct (this is\n required below).\n\n The long value reference in a tuple is defined as:\n\n vl_len int32; /* high bit set, 32-bit = 18 */\n vl_datasize int32; /* real vl_len of long value */\n vl_valueid Oid; /* value_id in expansion relation */\n vl_relid Oid; /* Oid of \"expansion\" table */\n vl_rowid Oid; /* Oid of the row in \"primary\" table */\n vl_attno int16; /* attribute number in \"primary\" table */\n\n The tuple given to heap_update() (the most complex one) can\n now contain usual VARLENA values of the format\n\n high-bit=0|31-bit-size|data\n\n or if the value is the result of a scan eventually\n\n high-bit=1|31-bit=18|datasize|valueid|relid|rowid|attno\n\n Now there are a couple of different cases.\n\n 1. The value found is a plain VARLENA that must be moved\n off.\n\n To move it off a new Oid for value_id is obtained, the\n value itself stored in the expansion relation and the\n attribute in the tuple is replaced by the above structure\n with the values 1, 18, original VARSIZE(), value_id,\n \"expansion\" relid, \"primary\" tuples Oid and attno.\n\n 2. The value found is a long value reference that has our\n own \"expansion\" relid and the correct rowid and attno.\n This would be the result of an UPDATE without touching\n this long value.\n\n Nothing to be done.\n\n 3. The value found is a long value reference of another\n attribute, row or relation and this attribute is enabled\n for move off.\n\n The long value is fetched from the expansion relation it\n is living in, and the same as for 1. is done with that\n value. There's space for optimization here, because we\n might have room to store the value plain. This can happen\n if the operation was an INSERT INTO t1 SELECT FROM t2,\n where t1 has few small plus one varsize attribute, while\n t2 has many, many long varsizes.\n\n 4. The value found is a long value reference of another\n attribute, row or relation and this attribute is disabled\n for move off (either per column or because our relation\n does not have an expansion relation at all).\n\n The long value is fetched from the expansion relation it\n is living in, and the reference in our tuple is replaced\n with this plain VARLENA.\n\n This in place replacement of values in the main tuple is the\n reason, why we have to make another allocation for the tuple\n data and remember the memory context where made. 
Due to the\n above process, the tuple data can expand, and we then need to\n change into that context and reallocate it.\n\n What heap_update() further must do is to examine the OLD\n tuple (that it already has grabbed by CTID for header\n modification) and delete all long values by their value_id,\n that aren't any longer present in the new tuple.\n\n The VARLENA arguments to type specific functions now can also\n have both formats. The macro\n\n #define VAR_GETPLAIN(arg) \\\n (VARLENA_ISLONG(arg) ? expand_long(arg) : (arg))\n\n can be used to get a pointer to an allways plain\n representation, and the macro\n\n #define VAR_FREEPLAIN(arg,userptr) \\\n if (arg != userptr) pfree(userptr);\n\n is to be used to tidy up before returning.\n\n In this scenario, a function like smaller(text,text) would\n look like\n\n text *\n smaller(text *t1, text *t2)\n {\n text *plain1 = VAR_GETPLAIN(t1);\n text *plain2 = VAR_GETPLAIN(t2);\n text *result;\n\n if ( /* whatever to compare plain1 and plain2 */ )\n result = t1;\n else\n result = t2;\n\n VAR_FREEPLAIN(t1,plain1);\n VAR_FREEPLAIN(t2,plain2);\n\n return result;\n }\n\n The LRU cache used in expand_long() will the again and again\n expansion become cheap enough. The benefit would be, that\n huge values resulting from table scans will be passed around\n in the system (in and out of sorting, grouping etc.) until\n they are modified or really stored/output.\n\n And the LONG index stuff should be covered here already (free\n lunch)! Index_insert() MUST allways be called after\n heap_insert()/heap_update(), because it needs the there\n assigned CTID. So at that time, the moved off attributes are\n replaced in the tuple data by the references. These will be\n stored instead of the values that originally where in the\n tuple. Should also work with hash indices, as long as the\n hashing functions use VAR_GETPLAIN as well.\n\n If we want to use auto compression too, no problem. We code\n this into another bit of the first 32-bit vl_len. The\n question if to call expand_long() changes now to \"is one of\n these set\". This way, we can store both, compressed and\n uncompressed into both, \"primary\" tuple or \"expansion\"\n relation. expand_long() will take care for it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 13 Dec 1999 07:27:06 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] LONG"
},
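The expansion relation Jan describes in the message above reduces to a very small piece of DDL. The sketch below is purely illustrative: the table and index names are invented here, and the real implementation would create these objects through internal catalog calls at CREATE TABLE time rather than through SQL.

    -- hypothetical expansion relation for a table "t1", per the schema above
    CREATE TABLE _long_t1 (
        value_id    oid,     -- identifier stamped into the long value reference
        chunk_seq   int4,    -- ordering of the chunks of one long value
        chunk_data  text     -- the chunk itself
    );
    -- non-unique index: all chunks of one value are found by value_id alone
    CREATE INDEX _long_t1_value_id ON _long_t1 (value_id);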
{
"msg_contents": "> We change heap_formtuple(), heap_copytuple() etc. not to\n> allocate the entire thing in one palloc(). Instead the tuple\n> portion itself is allocated separately and the current memory\n> context remembered too in the HeapTuple struct (this is\n> required below).\n\nUhh,\n\n just realized that the usual pfree(htup) will not work\n anymore. But shouldn't that already have been something like\n heap_freetuple(htup)?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 13 Dec 1999 07:46:01 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n>\n> > > I am suggesting the longoid is not the oid of the primary or long*\n> > > table, but a unque id we assigned just to number all parts of\n> the long*\n> > > tuple. I thought that's what your oid was for.\n> > >\n> > Unfortunately I couldn't follow this issue correctly.\n> > Is the format of long value relation different from Jan's original now ?\n> >\n> > - At CREATE TABLE, a long value relation named\n> > \"_LONG<tablename>\" is created for those tables who need it.\n> > And of course dropped and truncated appropriate. The schema\n> > of this table is\n> >\n> > rowid Oid, -- oid of our main data row\n>\n> I am suggesting a unique oid just to store this long value. The new oid\n> gets stored in the primary table, and on every row of the long* table.\n>\n\nHmm,we could delete long values easily using rowid in case of\nheap_delete() .......\n\n>\n> > Seems we could even update partially(specified chunk_seq only)\n> > without problem.\n>\n> That could be done, but seems too rare because the new data would have\n> to be the same length. Doesn't seem worth���it, though others may\n> disagree.\n>\n\nFirst,I wanted to emphasize that we don't have to update any long value\ntuples if we don't update long values. It's a special case of partial\nupdate.\n\nSecond,large object has an feature like this. If we would replace large\nobject by LONG,isn't it needed ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Mon, 13 Dec 1999 18:42:01 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] LONG"
},
{
"msg_contents": "On Sun, 12 Dec 1999, Tom Lane wrote:\n\n> Vince Vielhaber <[email protected]> writes:\n> > Either we should keep the current docs\n> > or the release docs online - not both.\n> \n> I disagree, because they serve different audiences. The snapshot docs\n> are very useful to developers, particularly those of us who don't have\n> SGML tools installed but still want to know whether the docs we\n> committed recently look right or not ;-). Meanwhile, current-release\n> documents are clearly the right thing to provide for ordinary users.\n\nUm, you mean you commit docs before you know whether they even \"compile\"?\nAs I see it, if you want to edit the docs, you should test them with your\nown SGML tools. With recent sgmltools packages, this is not so hard. At\nleast the patch applicator hopefully does this.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 13 Dec 1999 12:03:27 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq questions...when threads collide "
},
{
"msg_contents": "As I offered some time to work on tuple chaining this thread clearly\ntouches the same area.\n\nThe idea of transparantly moving big attributes into a seperate table\nclearly has its benefits as long as normal operations need not to touch\nthese long values. I (too) see this as a great deal. And the fact that\nit happens transparently (not visible to user) is the best about it.\n\nBut AFAICS tuple chaining shouldn't be such a big deal, it should be\nabout three days of work. (It'll definitely take longer for me, since I\nhave to understand pgsql's internals first.): Split the tuple into\nmultiple Items on disk storage, concatenate them on read in. Then make\nvacuum ignore continued items when not dealing with the whole tuple. No\nneed to touch CID, XID etc. The most obvious disadvantage is possible\nfragmentation of tuples (unless handled in vacuum). Disk access\natomicity for tuples is a non issue for Linux people since Linux uses 1k\nblocks :-(\n\nStoring attributes seperately is the best solution once you exceed\n4*BLKSZ, tuple chaining addresses 1.1-3*BLKSZ most efficiently. (correct\nme if I'm wrong)\n\nLONG as a seperate type is IMHO just another concept you have to master\nbefore you can use a RDBMS efficiently. The less different concepts a\nuser needs to learn, the easier life is for him. Postgres already has a\nlot of data types to learn. \n\nWrapping lo in a user type sounds good to me.\n\nYours \n Christof\n\n\n",
"msg_date": "Mon, 13 Dec 1999 22:50:52 +0100",
"msg_from": "Christof Petig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "\nThis outline is perfect!\n\n\n> > I am suggesting the longoid is not the oid of the primary or long*\n> > table, but a unque id we assigned just to number all parts of the long*\n> > tuple. I thought that's what your oid was for.\n> \n> It's not even an Oid of any existing tuple, just an\n> identifier to quickly find all the chunks of one LONG value\n> by (non-unique) index.\n\nYes, I understood this and I think it is a great idea. It allows UPDATE\nto control whether it wants to replace the LONG value.\n\n\n> \n> My idea is this now:\n> \n> The schema of the expansion relation is\n> \n> value_id Oid\n> chunk_seq int32\n> chunk_data text\n> \n> with a non unique index on value_id.\n\nYes, exactly.\n\n> \n> We change heap_formtuple(), heap_copytuple() etc. not to\n> allocate the entire thing in one palloc(). Instead the tuple\n> portion itself is allocated separately and the current memory\n> context remembered too in the HeapTuple struct (this is\n> required below).\n\nI read the later part. I understand.\n\n> \n> The long value reference in a tuple is defined as:\n> \n> vl_len int32; /* high bit set, 32-bit = 18 */\n> vl_datasize int32; /* real vl_len of long value */\n> vl_valueid Oid; /* value_id in expansion relation */\n> vl_relid Oid; /* Oid of \"expansion\" table */\n> vl_rowid Oid; /* Oid of the row in \"primary\" table */\n> vl_attno int16; /* attribute number in \"primary\" table */\n\nI see you need vl_rowid and vl_attno so you don't accidentally reference\na LONG value twice. Good point. I hadn't thought of that.\n\n> \n> The tuple given to heap_update() (the most complex one) can\n> now contain usual VARLENA values of the format\n> \n> high-bit=0|31-bit-size|data\n> \n> or if the value is the result of a scan eventually\n> \n> high-bit=1|31-bit=18|datasize|valueid|relid|rowid|attno\n> \n> Now there are a couple of different cases.\n> \n> 1. The value found is a plain VARLENA that must be moved\n> off.\n> \n> To move it off a new Oid for value_id is obtained, the\n> value itself stored in the expansion relation and the\n> attribute in the tuple is replaced by the above structure\n> with the values 1, 18, original VARSIZE(), value_id,\n> \"expansion\" relid, \"primary\" tuples Oid and attno.\n> \n> 2. The value found is a long value reference that has our\n> own \"expansion\" relid and the correct rowid and attno.\n> This would be the result of an UPDATE without touching\n> this long value.\n> \n> Nothing to be done.\n> \n> 3. The value found is a long value reference of another\n> attribute, row or relation and this attribute is enabled\n> for move off.\n> \n> The long value is fetched from the expansion relation it\n> is living in, and the same as for 1. is done with that\n> value. There's space for optimization here, because we\n> might have room to store the value plain. This can happen\n> if the operation was an INSERT INTO t1 SELECT FROM t2,\n> where t1 has few small plus one varsize attribute, while\n> t2 has many, many long varsizes.\n> \n> 4. 
The value found is a long value reference of another\n> attribute, row or relation and this attribute is disabled\n> for move off (either per column or because our relation\n> does not have an expansion relation at all).\n> \n> The long value is fetched from the expansion relation it\n> is living in, and the reference in our tuple is replaced\n> with this plain VARLENA.\n\nYes.\n\n> \n> This in place replacement of values in the main tuple is the\n> reason, why we have to make another allocation for the tuple\n> data and remember the memory context where made. Due to the\n> above process, the tuple data can expand, and we then need to\n> change into that context and reallocate it.\n\n\nYes, got it.\n\n> \n> What heap_update() further must do is to examine the OLD\n> tuple (that it already has grabbed by CTID for header\n> modification) and delete all long values by their value_id,\n> that aren't any longer present in the new tuple.\n\nYes, makes vacuum run find on the LONG* relation.\n\n> \n> The VARLENA arguments to type specific functions now can also\n> have both formats. The macro\n> \n> #define VAR_GETPLAIN(arg) \\\n> (VARLENA_ISLONG(arg) ? expand_long(arg) : (arg))\n> \n> can be used to get a pointer to an allways plain\n> representation, and the macro\n> \n> #define VAR_FREEPLAIN(arg,userptr) \\\n> if (arg != userptr) pfree(userptr);\n> \n> is to be used to tidy up before returning.\n\nGot it.\n\n> \n> In this scenario, a function like smaller(text,text) would\n> look like\n> \n> text *\n> smaller(text *t1, text *t2)\n> {\n> text *plain1 = VAR_GETPLAIN(t1);\n> text *plain2 = VAR_GETPLAIN(t2);\n> text *result;\n> \n> if ( /* whatever to compare plain1 and plain2 */ )\n> result = t1;\n> else\n> result = t2;\n> \n> VAR_FREEPLAIN(t1,plain1);\n> VAR_FREEPLAIN(t2,plain2);\n> \n> return result;\n> }\n\nYes.\n\n> \n> The LRU cache used in expand_long() will the again and again\n> expansion become cheap enough. The benefit would be, that\n> huge values resulting from table scans will be passed around\n> in the system (in and out of sorting, grouping etc.) until\n> they are modified or really stored/output.\n\nYes.\n\n> \n> And the LONG index stuff should be covered here already (free\n> lunch)! Index_insert() MUST allways be called after\n> heap_insert()/heap_update(), because it needs the there\n> assigned CTID. So at that time, the moved off attributes are\n> replaced in the tuple data by the references. These will be\n> stored instead of the values that originally where in the\n> tuple. Should also work with hash indices, as long as the\n> hashing functions use VAR_GETPLAIN as well.\n\nI hoped this would be true. Great.\n\n> \n> If we want to use auto compression too, no problem. We code\n> this into another bit of the first 32-bit vl_len. The\n> question if to call expand_long() changes now to \"is one of\n> these set\". This way, we can store both, compressed and\n> uncompressed into both, \"primary\" tuple or \"expansion\"\n> relation. expand_long() will take care for it.\n\nPerfect. Sounds great.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Dec 1999 20:56:35 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LONG"
},
{
"msg_contents": "> > > Either we should keep the current docs\n> > > or the release docs online - not both.\n> > I disagree, because they serve different audiences. The snapshot docs\n> > are very useful to developers, particularly those of us who don't have\n> > SGML tools installed but still want to know whether the docs we\n> > committed recently look right or not ;-). Meanwhile, current-release\n> > documents are clearly the right thing to provide for ordinary users.\n\nVince, I'm with Tom on this one, having both would be great. The\n\"developer's only\" posting is a holdover from the first days when we\ncould generate docs on the Postgres machine, and I only had one place\non the web page I could put docs. But having the release docs posted\nfrom the \"Documentation\" page and the current tree docs posted either\nthere or on the \"Developers\" page would be great. I'm happy to\nredirect my nightly cron job to put the output somewhere other than\nwhere they are now.\n\n> Um, you mean you commit docs before you know whether they even \"compile\"?\n> As I see it, if you want to edit the docs, you should test them with your\n> own SGML tools. With recent sgmltools packages, this is not so hard. At\n> least the patch applicator hopefully does this.\n\nNo, testing doc output has never been a prerequisite for submitting\nand committing doc improvements/updates. If the submitted sgml code is\na bit wrong, the nightly cron job halts in the middle and the output\ntar files and web page copies don't get updated. I see the results in\nthe cron output I have sent to my home machine, and usually fix the\nproblem within a day or two (would be longer recently since I'm so\nbusy, but the scheme still is working...).\n\nThe important thing is getting the words updated in the docs, and\nrunning jade or the SGML-tools wrappers is still too much of a barrier\nif it were a prerequisite.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 14 Dec 1999 14:34:33 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq questions...when threads collide"
},
{
"msg_contents": "On Tue, 14 Dec 1999, Thomas Lockhart wrote:\n\n> > > > Either we should keep the current docs\n> > > > or the release docs online - not both.\n> > > I disagree, because they serve different audiences. The snapshot docs\n> > > are very useful to developers, particularly those of us who don't have\n> > > SGML tools installed but still want to know whether the docs we\n> > > committed recently look right or not ;-). Meanwhile, current-release\n> > > documents are clearly the right thing to provide for ordinary users.\n> \n> Vince, I'm with Tom on this one, having both would be great. The\n> \"developer's only\" posting is a holdover from the first days when we\n> could generate docs on the Postgres machine, and I only had one place\n> on the web page I could put docs. But having the release docs posted\n> from the \"Documentation\" page and the current tree docs posted either\n> there or on the \"Developers\" page would be great. I'm happy to\n> redirect my nightly cron job to put the output somewhere other than\n> where they are now.\n\nNo problem, I'll come up with a developer's section. I need to make it\nas obvious as possible or as obscure as possible to keep the webmaster\nmailbox from overflowing. I'll let you know 'cuze it'll also affect \nthe search engine. Hopefully in the next week, otherwise it won't happen\ntill the next century :)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 14 Dec 1999 09:37:23 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq questions...when threads collide"
}
] |
[
{
"msg_contents": "On Sat, 11 Dec 1999, Oleg Bartunov wrote:\n\n> I have a problem with pg_dump (6.5.3) if I use\n> create table foo (\n> a text default foo_function()\n> );\n> where foo_function() is my function.\n> pg_dump dumps create table first and create function\n> later. Obvioulsy restoring doesn't works and\n> I have to edit dump file. It's rather annoying.\n> Is it fixed in current tree ?\n\nWhat though if a function accesses a table? Which one goes first? Do we\nhave to maintain a network of dependencies in pg_dump? Eventually we'll\nprobably have to, with all the foreign key stuff coming up. Gloomy\nprospects.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sat, 11 Dec 1999 13:47:31 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] pg_dump primary keys"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> What though if a function accesses a table? Which one goes first? Do we\n> have to maintain a network of dependencies in pg_dump? Eventually we'll\n> probably have to, with all the foreign key stuff coming up. Gloomy\n> prospects.\n\n No need to worry about FOREIGN KEY stuff here. These\n functions are generic builtins not dumped at all.\n\n But need to worry about all other functions of all languages.\n They can be used in a table schema and OTOH their definition\n might need a relation to exist (could have tuple type as\n argument). Plus, for SQL language functions (only SQL, not\n PL/pgSQL or any other language) their body is checked at\n CREATE time for syntax, so relations they use are required.\n\n This can only be solved by your mentioned dependency network.\n\n BTW: All this was one reason to dump views as CREATE TABLE\n and later CREATE RULE. Because views likely contain\n functions.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n\n",
"msg_date": "Sat, 11 Dec 1999 14:28:19 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] pg_dump primary keys"
},
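The ordering problem can be reproduced in a few lines of SQL. This sketch reuses the table and function names from Oleg's report quoted above; the function body here is only a stand-in, since the original one was not shown.

    CREATE FUNCTION foo_function() RETURNS text
        AS 'SELECT ''x''::text' LANGUAGE 'sql';
    CREATE TABLE foo (a text DEFAULT foo_function());
    -- pg_dump 6.5.3 writes the CREATE TABLE before the CREATE FUNCTION,
    -- so restoring the dump fails until the file is edited by hand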
{
"msg_contents": "Peter Eisentraut wrote:\n>> What though if a function accesses a table? Which one goes first? Do we\n>> have to maintain a network of dependencies in pg_dump? Eventually we'll\n>> probably have to, with all the foreign key stuff coming up. Gloomy\n>> prospects.\n\nCouldn't we solve this by the simple expedient of dumping all the\nobjects in the database in OID order?\n\nExpecting pg_dump to parse function bodies to discover what\nrelations/types are mentioned doesn't look appetizing at all...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Dec 1999 12:58:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] pg_dump primary keys "
},
{
"msg_contents": "On 1999-12-11, Tom Lane mentioned:\n\n> Peter Eisentraut wrote:\n> >> What though if a function accesses a table? Which one goes first? Do we\n> >> have to maintain a network of dependencies in pg_dump? Eventually we'll\n> >> probably have to, with all the foreign key stuff coming up. Gloomy\n> >> prospects.\n> \n> Couldn't we solve this by the simple expedient of dumping all the\n> objects in the database in OID order?\n\nWow, great idea! That might actually solve all (well, most) pg_dump\nrelated problems once and for all. Of course how you get all objects in\nthe database in oid order is to be determined.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sun, 12 Dec 1999 03:06:49 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] pg_dump primary keys "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> Couldn't we solve this by the simple expedient of dumping all the\n>> objects in the database in OID order?\n\n> Wow, great idea! That might actually solve all (well, most) pg_dump\n> related problems once and for all. Of course how you get all objects in\n> the database in oid order is to be determined.\n\nI think it would take some restructuring in pg_dump: instead of\nprocessing each type of database object separately, it would have to\ngrab some info (at least the OIDs and types) for all the different\nobjects in the DB, then sort this info by OID, and finally get the\ndetails and produce the output for each object in OID order.\n\nThis would still fail in some pathological cases involving ALTER --- for\nexample, make a table, later create a new datatype, and then ALTER TABLE\nADD COLUMN of that datatype. So the next refinement would be to examine\ndependencies and do a topological sort rather than a simple sort by OID.\nWe'd still have to restructure pg_dump as above, though, and \"examining\ndependencies\" is not exactly trivial for function bodies in unknown PL\nlanguages...\n\nIf we had ALTER FUNCTION, which we don't but should, I think it would\nactually be possible to create circular dependencies for which there is\n*no* dump order that will work :-(. So I'm not sure it's worth the\ntrouble to add dependency extraction and a topological sort algorithm\nto pg_dump rather than just sorting by OID. Dumping in OID order will\nsolve 99% of the problem with a fraction of the work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Dec 1999 01:52:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] pg_dump primary keys "
},
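For concreteness, the raw material for an OID-ordered dump is already available from the system catalogs. A rough sketch of the kind of queries pg_dump could issue follows; filtering out built-in objects and the client-side merge and sort are left out here.

    SELECT oid, relname FROM pg_class WHERE relkind = 'r';   -- tables
    SELECT oid, typname FROM pg_type;                        -- types
    SELECT oid, proname FROM pg_proc;                        -- functions
    -- merge the three result sets on the client and emit the
    -- CREATE statements in ascending oid order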
{
"msg_contents": "Added to TODO list.\n\n> Peter Eisentraut wrote:\n> >> What though if a function accesses a table? Which one goes first? Do we\n> >> have to maintain a network of dependencies in pg_dump? Eventually we'll\n> >> probably have to, with all the foreign key stuff coming up. Gloomy\n> >> prospects.\n> \n> Couldn't we solve this by the simple expedient of dumping all the\n> objects in the database in OID order?\n> \n> Expecting pg_dump to parse function bodies to discover what\n> relations/types are mentioned doesn't look appetizing at all...\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Jun 2000 13:57:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] pg_dump primary keys"
}
] |
[
{
"msg_contents": "Hi,\n\nit seems to me that it's not quite clear whether pgsql makes\na consistent difference between byte and char, and if so, if \nthere is any way to store a small-sized array of bytes without\ngoing right to a BLOB. If you interface pgsql with Java/JDBC \nthe support of UNICODE (16 bit per char) is quite essential to\navoid surprises.\n\nA related question is whether we could support some more \nstandard names for data types (e.g., BIGINT, SMALLINT, etc.)\nBut I'm not sure there is really any standard. I would be\nwilling to work a little on these data types but I'd need\nsomeone to hint me on who else is doing stuff and, if possible,\nwhere to look first (and what known mistakes to avoid.)\n\nregards\n-Gunther\n\n\n-- \nGunther_Schadow-------------------------------http://aurora.rg.iupui.edu\nRegenstrief Institute for Health Care\n1050 Wishard Blvd., Indianapolis IN 46202, Phone: (317) 630 7960\[email protected]#include <usual/disclaimer>",
"msg_date": "Sat, 11 Dec 1999 12:33:10 -0500",
"msg_from": "Gunther Schadow <[email protected]>",
"msg_from_op": true,
"msg_subject": "UNICODE characters vs. BINARY"
},
{
"msg_contents": "> A related question is whether we could support some more\n> standard names for data types (e.g., BIGINT, SMALLINT, etc.)\n> But I'm not sure there is really any standard. I would be\n> willing to work a little on these data types but I'd need\n> someone to hint me on who else is doing stuff and, if possible,\n> where to look first (and what known mistakes to avoid.)\n\npostgres=> create table x (i smallint);\nCREATE\npostgres=> create table y (j bigint);\nERROR: Unable to locate type name 'bigint' in catalog\n\nafaik we support the type names defined in SQL92 (like smallint),\nhistorical names in Postgres, and some extensions. What more do we\nneed?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 14 Dec 1999 07:35:43 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] UNICODE characters vs. BINARY"
},
{
"msg_contents": "Thomas Lockhart wrote:\n\n> postgres=> create table x (i smallint);\n> CREATE\n> postgres=> create table y (j bigint);\n> ERROR: Unable to locate type name 'bigint' in catalog\n\nso BIGINT (as a synonym for INT8 is not supported). Is \nBIGINT not a standard SQL92 or de Facto?\n\nBTW: I have tried to make BIGINT a synonym of INT8 using\nCREATE TYPE with the parameters I've got from pg_type\nbut it would not work.\n\n> afaik we support the type names defined in SQL92 (like smallint),\n> historical names in Postgres, and some extensions. What more do we\n> need?\n\nI'm not entirely sure which types in pg_type are historical\nbut unsupported and which do work. For example: what is \n\"bytea\" ... I remember darkly that this was an array of bytes\nin original Postgres? But I may be mistaken. Why do I ask?\nBecause I see the need to store small byte sequences w/o \nhaving to deploy the large object inversion. For example\nif I want to store 128 bit UUIDs (or something similar with\n128 bits) I need this as a straight byte sequence, indexable\nof course -- not as a CHAR (since no character conversion should\noccur and these bytes are not printable), not as a BLOB.\n\nThen, how much do we guarrantee about PostgreSQL internal OIDs?\nWhat if I want to use OIDs directly in the context of a multiple\ndata bases. Is there any way to control assignment of OIDs so\nthat cooperation with other databases would be possible? \n\nthanks,\n-Gunther\n\n\nMy original question was: \n> > A related question is whether we could support some more\n> > standard names for data types (e.g., BIGINT, SMALLINT, etc.)\n> > But I'm not sure there is really any standard. I would be\n> > willing to work a little on these data types but I'd need\n> > someone to hint me on who else is doing stuff and, if possible,\n> > where to look first (and what known mistakes to avoid.)\n\n\n-- \nGunther_Schadow-------------------------------http://aurora.rg.iupui.edu\nRegenstrief Institute for Health Care\n1050 Wishard Blvd., Indianapolis IN 46202, Phone: (317) 630 7960\[email protected]#include <usual/disclaimer>",
"msg_date": "Tue, 14 Dec 1999 13:34:10 -0500",
"msg_from": "Gunther Schadow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] UNICODE characters vs. BINARY"
},
{
"msg_contents": "At 01:34 PM 12/14/99 -0500, Gunther Schadow wrote:\n>so BIGINT (as a synonym for INT8 is not supported). Is \n>BIGINT not a standard SQL92 or de Facto?\n\nI've got Date's book sitting here, and it says that integer\nand smallint are standard, with int being a standard\nabbreviation for integer. So apparently bigint is\na common additional type, not standard SQL92.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 14 Dec 1999 10:54:37 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] UNICODE characters vs. BINARY"
},
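As a practical footnote to the exchange above: until a BIGINT alias is accepted by the parser, spelling the type with its native name works. A trivial example, assuming the int8 type shipped with 6.5:

    CREATE TABLE y (j int8);    -- accepted, where "bigint" is rejected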
{
"msg_contents": "On 1999-12-14, Thomas Lockhart mentioned:\n\n> afaik we support the type names defined in SQL92 (like smallint),\n> historical names in Postgres, and some extensions. What more do we\n> need?\n\nWe need to move the standard names up in the docs and the historical ones\ndown. I guess what you're doing with the date/time types would also be a\ngood idea.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Wed, 15 Dec 1999 00:01:52 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] UNICODE characters vs. BINARY"
}
] |
[
{
"msg_contents": "Well,\n\ni am a german boy, and I want to start hacking! But I know nobody who can\nhelp me! Perhaps someone can help me!\n\n\n\n",
"msg_date": "Sat, 11 Dec 1999 19:53:19 +0100",
"msg_from": "\"Morfeus\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help"
}
] |
[
{
"msg_contents": "In my venturing into createdb/dropdb I came to that little artifact that\nallows you to create databases at alternate locations using environment\nvariables as part of the path (CREATE DATABASE elsewhere WITH LOCATION =\n'PGDATA2/foo').\n\nThe problem with this is that it doesn't work. It never could have, and\nGod help you if you try to use it anyway. And it's not only one isolated\nproblem.\n\nFirst off, the paths generated by the expansion and the ones generated by\ninitlocation are different, so the directory will never be found unless\nyou tweak it by hand. This can be fixed.\n\nWorse, however, is that the expanded path is stored in pg_database (at\nleast in theory, since you never get there) and once you try to reference\n(e.g., remove) the database, the same expansion routine will see the now\nabsolute path and refuses to allow it.\n\nWhat really gets me, though, is how this sort of scheme is supposed to\ncreate security in the first place. Perhaps I can create a path based on\nthe environment variable HOME, or maybe SHELL? Or how about this: you take\nan empty environment variable and specify VAR/usr/local/pgsql/lib as your\npath. Fun ensues! You can never completely control this stuff. ISTM, this\njust makes things worse and more complicated.\n\nHow could we still keep this feature but select another method of\nspecifying the list of allowed paths? The Unix file permissions should\nhelp, but that doesn't necessarily prevent anyone from frying your\nexisting databases, if you exercise a little imagination when specifying\nthe paths. How about a) something in some options file (lot of work), or\nb) some environment variable of the form PGSQL_ALTLOC=path:path:path?\n\nThis is of course barring the potential fact that parts of the code which\nI don't completely understand or which I haven't read yet prevent this\nwhole concept from working in other ways.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sat, 11 Dec 1999 23:04:10 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "createdb with alternate location"
},
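For readers who have not met this feature, the usage Peter is describing, and reporting as broken, looks roughly like this. The variable name and database name are placeholders taken from his message; per the report above, the sequence does not actually work as shipped.

    -- PGDATA2 must have been exported by the dbadmin before the postmaster
    -- was started, and the directory prepared with initlocation
    CREATE DATABASE elsewhere WITH LOCATION = 'PGDATA2/foo';
    -- referencing the database again is where it falls over: the stored,
    -- already-expanded path is rejected by the same expansion routine
    DROP DATABASE elsewhere;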
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> [ CREATE DATABASE WITH LOCATION shouldn't depend on environment vars ]\n\nI agree, this oughta be flushed. Is the expansion routine used in any\nother contexts where depending on an environment var *would* make sense?\n\n> What really gets me, though, is how this sort of scheme is supposed to\n> create security in the first place.\n\nI doubt security was foremost in the mind of whoever did that. Still,\nthe environment vars in question are those created by the dbadmin before\nstarting the postmaster; it's not like unprivileged users can affect\nthem. So I'd say it's just a chance to shoot yourself in the foot,\nnot a question of exposing yourself to enemy fire...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Dec 1999 01:13:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] createdb with alternate location "
},
{
"msg_contents": "\nOn Sat, 11 Dec 1999, Peter Eisentraut wrote:\n\n> In my venturing into createdb/dropdb I came to that little artifact that\n> allows you to create databases at alternate locations using environment\n> variables as part of the path (CREATE DATABASE elsewhere WITH LOCATION =\n> 'PGDATA2/foo').\n\nAnd what add to create-database management a TABLESPACE layout? It is\nstandard in any SQL (Oracle). It is good \"investment\" to future, because\non TABLESPACE tie storage management ..etc (see old \"Raw device\" thread\nin the hackers list). \n\n\t\t\t\t\t\t\tKarel\n\n----------------------------------------------------------------------\nKarel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nDocs: http://docs.linux.cz (big docs archive)\t\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n-----------------------------------------------------------------------\n\n",
"msg_date": "Mon, 13 Dec 1999 13:26:47 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] createdb with alternate location"
},
{
"msg_contents": "On 1999-12-13, Karel Zak - Zakkr mentioned:\n\n> On Sat, 11 Dec 1999, I wrote:\n> \n> > In my venturing into createdb/dropdb I came to that little artifact that\n> > allows you to create databases at alternate locations using environment\n> > variables as part of the path (CREATE DATABASE elsewhere WITH LOCATION =\n> > 'PGDATA2/foo').\n> \n> And what add to create-database management a TABLESPACE layout? It is\n> standard in any SQL (Oracle). It is good \"investment\" to future, because\n\nIt's not standard in _the_ SQL though.\n\n> on TABLESPACE tie storage management ..etc (see old \"Raw device\" thread\n> in the hackers list). \n\nI don't know how much use such a generalized version would be vs. the\noverhead involved. You can hang on to that thought though, if you like.\nI'd like to get this \"advertised\" feature working first, however.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n\n",
"msg_date": "Tue, 14 Dec 1999 00:27:43 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] createdb with alternate location"
}
] |
[
{
"msg_contents": "As I mentioned in passing a day or two ago, I've figured out how to\nsupport aggregates whose input is flagged DISTINCT without too much\npain. Basically it can all be done inside nodeAgg.c, once we teach\nthe parser to put the DISTINCT flag bit into Aggref querytree nodes.\n\n(a) If DISTINCT is not specified for a particular aggregate, then\nnodeAgg.c runs the aggregate's transition function(s) as each input\ntuple is presented, same as now.\n\n(b) If DISTINCT is specified, then nodeAgg.c evaluates the aggregate's\ninput expression at each input tuple, and passes the resulting datum\ninto a sort operation that it's started. (Now that tuplesort.c has\na fairly clean object-based interface, it will be easy to start up\na separate sort operation for each DISTINCT aggregate.)\n\n(c) At the end of the input table (or row group), nodeAgg.c does this\nfor each DISTINCT aggregate:\n * finish the pending sort operation;\n * scan the sort output, drop adjacent duplicate values (the code for\n this can be borrowed from nodeUnique), and run the aggregate's\n transition function(s) for each remaining value.\nFinally, the aggregate result values can be computed for all the\naggregates (both DISTINCT and regular), and then the output tuple\ncan be formed.\n\nThis is looking like a day's work at most, and considering how often\nit gets asked for, I think it's well worth doing.\n\nA limitation of this approach is that an explicit sort of the aggregate\ninput values will always be done, even when the input is or could be\ndelivered in the right order anyway. It is certainly *necessary* that\nnodeAgg.c be able to do internal sorts on-the-fly, in order to cope with\nmultiple DISTINCT aggregates, eg\n\tSELECT COUNT(DISTINCT foo), AVG(DISTINCT bar) FROM table;\nsince there is no way to scan the table in an order that's sorted for\nboth simultaneously. But in simpler cases it might be a win if the\noptimizer generated a plan that delivered the data in the right order\nand nodeAgg.c could be told to skip the internal sort for a DISTINCT\naggregate. I'm not going to worry about that now, but it's a possible\nfuture improvement.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Dec 1999 14:15:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Work plan: aggregate(DISTINCT ...)"
}
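A standalone sketch of steps (b) and (c) above for one DISTINCT aggregate -- not nodeAgg.c itself. The input array stands in for the scanned tuples, and the AVG transition state is simplified to an integer sum and count.

```c
/*
 * Sketch: sort the collected inputs, skip adjacent duplicates, and run
 * the transition function only on the remaining values.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct
{
	long		sum;
	long		count;
} avg_state;

static int
cmp_int(const void *a, const void *b)
{
	int			x = *(const int *) a;
	int			y = *(const int *) b;

	return (x > y) - (x < y);
}

static void
avg_transfn(avg_state *state, int value)
{
	state->sum += value;
	state->count++;
}

int
main(void)
{
	int			input[] = {4, 7, 4, 1, 7, 7, 2};	/* the "input tuples" */
	size_t		n = sizeof(input) / sizeof(input[0]);
	avg_state	state = {0, 0};
	size_t		i;

	/* step (b): the aggregate's input values were fed into a sort */
	qsort(input, n, sizeof(int), cmp_int);

	/* step (c): scan the sort output, drop adjacent duplicate values */
	for (i = 0; i < n; i++)
	{
		if (i > 0 && input[i] == input[i - 1])
			continue;			/* duplicate value, skip it */
		avg_transfn(&state, input[i]);
	}

	printf("count(DISTINCT x) = %ld\n", state.count);
	printf("avg(DISTINCT x)   = %.2f\n", (double) state.sum / state.count);
	return 0;
}
```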
] |
[
{
"msg_contents": "Well,\n\n first I want to summarize some details, to see if we all\n agree so far in the discussion.\n\n - The implementation should be generic for all variable size\n types, but can be enabled/disabled per type.\n\n - Large values are moved out of the main tuple until it fit's\n a yet to be defined size.\n\n - The moved off values are kept in another relation per\n table, using regular tuples where the value is split into\n chunks. The new \"expansion\" relations get another relkind,\n so they can be hidden from the user and the system can\n easily identify them as such.\n\n - The type specific functions call a central support function\n to get the usual VARLENA format, which is taken from a LRU\n cache or fetched from the extension relation. They are\n responsible for freeing the memory after they're done with\n the value. Some macro's should make it fairly simple to\n handle.\n\n I don't think it is a good idea to create the expansion\n relation all the time. Some keyword in CREATE TABLE, and/or\n another ALTER TABLE should do it instead, so the DB admin can\n activate the LONG feature on a per table base as needed. In\n the first implementation there will be no command to\n deactivate it again. Workaround is rename table and select\n into as usual.\n\n Also I would like to say that system relations cannot have\n expansion relations. At least not until we have enough\n experience with this stuff.\n\n Is that now what we initially want to give a try? If so, I\n would like to start soon to get the generic part ready ASAP.\n Others could then join in and contribute by adding LONG\n support for all the VARLENA data types we have.\n\n Would really be a big leap if we can get this finished for a\n reasonable number of VARLENA types by February 1.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 13 Dec 1999 01:19:14 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "generic LONG VARLENA"
},
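An illustrative sketch of the "expansion relation" idea summarized above: a long value is split into chunk tuples keyed by (value id, sequence number) and reassembled in order when the VARLENA form is needed. The struct layout, the function names, and the chunk size are invented for the example.

```c
/* Toy model only -- not the proposed backend code. */
#include <stdio.h>
#include <string.h>

#define CHUNK_SIZE 8			/* tiny, so the demo stays readable */

typedef struct
{
	long		value_id;		/* which long value the chunk belongs to */
	int			seq;			/* position of the chunk within the value */
	int			len;			/* bytes used in data[] */
	char		data[CHUNK_SIZE];
} expansion_tuple;

/* "store": break the value into chunk tuples, return the chunk count */
static int
store_long(long value_id, const char *val, int len,
		   expansion_tuple *out, int max_chunks)
{
	int			seq = 0,
				off = 0;

	while (off < len && seq < max_chunks)
	{
		int			piece = (len - off < CHUNK_SIZE) ? len - off : CHUNK_SIZE;

		out[seq].value_id = value_id;
		out[seq].seq = seq;
		out[seq].len = piece;
		memcpy(out[seq].data, val + off, piece);
		off += piece;
		seq++;
	}
	return seq;
}

/* "fetch": reassemble the chunks into one contiguous buffer */
static int
fetch_long(const expansion_tuple *chunks, int nchunks, char *out)
{
	int			i,
				off = 0;

	for (i = 0; i < nchunks; i++)
	{
		memcpy(out + off, chunks[i].data, chunks[i].len);
		off += chunks[i].len;
	}
	return off;
}

int
main(void)
{
	const char *text = "a value too long to keep inside the main tuple";
	expansion_tuple chunks[16];
	char		buf[256];
	int			n = store_long(42, text, (int) strlen(text), chunks, 16);
	int			len = fetch_long(chunks, n, buf);

	buf[len] = '\0';
	printf("%d chunks, reassembled: %s\n", n, buf);
	return 0;
}
```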
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> first I want to summarize some details, to see if we all\n> agree so far in the discussion.\n\nI snipped everything I agreed with ;-)\n\n> - The implementation should be generic for all variable size\n> types, but can be enabled/disabled per type.\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nPer-type control doesn't strike me as interesting or useful. If there\nneeds to be a control at all, which I doubt, per-table would be the\nway to go. But how many users will really say, \"Oh yes, I *want* the\nthing to fail if my tuple's too big!\"? I say: make it automatically\napply whenever needed, don't force users to think about it.\n\n> - The type specific functions call a central support function\n> to get the usual VARLENA format, which is taken from a LRU\n> cache or fetched from the extension relation. They are\n> responsible for freeing the memory after they're done with\n> the value.\n\nIf we are going to do this, we ought also think about solving the\ngeneric memory-leakage problem at the same time. No point in having\nto revisit all the same code later to deal with that issue.\n\n> I don't think it is a good idea to create the expansion\n> relation all the time. Some keyword in CREATE TABLE, and/or\n> another ALTER TABLE should do it instead, so the DB admin can\n> activate the LONG feature on a per table base as needed.\n\nI don't believe it. See above: people will complain that it's a bug\nthat the system doesn't handle their long data values. Saying \"oh, you\nhave to turn it on\" will not appease them. My objection is really the\nsame as for the specialized LONG datatype: I do *not* want people to\nhave to put nonstandard junk into their database schema declarations\nin order to activate this feature. I think it should Just Work and\nstay out of users' faces.\n\nCreating the expansion relation isn't that big a deal, but if you\ndon't want to do it always, why not do it on first use?\n\n> Also I would like to say that system relations cannot have\n> expansion relations. At least not until we have enough\n> experience with this stuff.\n\nI'd really, really, really like to have this work for rules, though.\nWhy shouldn't we allow it for system relations? Most of the critical\nones have fixed-width tuples anyway, so it won't matter for them.\n\nBTW, it strikes me we should drop the \"lztext\" special datatype, and\ninstead have compression automatically applied to any varlena that\nwe are contemplating putting out-of-line. (If we're really lucky,\nthat saves us having to put the value out-of-line!)\n\n> Is that now what we initially want to give a try? If so, I\n> would like to start soon to get the generic part ready ASAP.\n> Others could then join in and contribute by adding LONG\n> support for all the VARLENA data types we have.\n\nYes, if we don't do it inside fastgetattr then there's a lot of code\nthat will have to change.\n\n> Would really be a big leap if we can get this finished for a\n> reasonable number of VARLENA types by February 1.\n\nThe more I think about this the more I think that it's a bad, bad idea\nto try to have it ready by Feb 1. There's not really enough time to\nget it right and test it. I don't want to be putting out an unstable\nrelease, and that's what I'm afraid we'll have if we try to rush in\nsuch a major change as this. 
Particularly when we have nontrivial\namounts of unfinished business elsewhere that we shouldn't neglect.\n(Jan, do you really think you can make this happen *and* bring foreign\nkeys to a finished status before February? If you are going to leave\nstuff undone in foreign keys, I think you are making the wrong choice.)\n\nFurthermore, we can save ourselves some time if we tackle this change\nin combination with the fmgr revision and the memory-leak-elimination\nissue. We will be touching all the same per-data-type code for each\nof these issues, so why not touch it once instead of several times?\n\nIn short, I like this design but I think we should plan it for 7.1.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Dec 1999 21:23:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] generic LONG VARLENA "
},
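A toy sketch of the storage decision suggested in the message above: try compression first, and move the value out of line only if the compressed form is still too large. The threshold and the "compressor" are invented stand-ins, not numbers from the backend.

```c
#include <stdio.h>

#define TUPLE_LIMIT 32			/* invented threshold for the demo */

typedef enum
{
	STORE_PLAIN,
	STORE_COMPRESSED,
	STORE_OUT_OF_LINE
} storage_kind;

/* stand-in compressor: pretend we always save about half the bytes */
static unsigned long
toy_compressed_size(unsigned long len)
{
	return len / 2 + 1;
}

static storage_kind
choose_storage(unsigned long len)
{
	if (len <= TUPLE_LIMIT)
		return STORE_PLAIN;		/* fits as-is */
	if (toy_compressed_size(len) <= TUPLE_LIMIT)
		return STORE_COMPRESSED;	/* lucky: no out-of-line storage needed */
	return STORE_OUT_OF_LINE;	/* compress and/or move off to the expansion relation */
}

int
main(void)
{
	unsigned long sizes[] = {20, 50, 500};
	const char *names[] = {"plain", "compressed", "out-of-line"};
	int			i;

	for (i = 0; i < 3; i++)
		printf("%4lu bytes -> %s\n", sizes[i], names[choose_storage(sizes[i])]);
	return 0;
}
```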
{
"msg_contents": "Tom Lane wrote:\n\n> [email protected] (Jan Wieck) writes:\n>\n> Per-type control doesn't strike me as interesting or useful. If there\n> needs to be a control at all, which I doubt, per-table would be the\n\n Isn't intended to be a runtime configuration. Just a\n temporary feature to restrict the attributes that can be\n moved off to those types, where WE know that the adt\n functions are prepared for them. If we finally have all\n builtin types finished for LONG handling, it will be removed,\n making user defined types LONGable too.\n\n> > I don't think it is a good idea to create the expansion\n> > relation all the time. Some keyword in CREATE TABLE, and/or\n> > another ALTER TABLE should do it instead, so the DB admin can\n> > activate the LONG feature on a per table base as needed.\n>\n> I don't believe it. See above: people will complain that it's a bug\n> that the system doesn't handle their long data values. Saying \"oh, you\n> have to turn it on\" will not appease them. My objection is really the\n> same as for the specialized LONG datatype: I do *not* want people to\n> have to put nonstandard junk into their database schema declarations\n> in order to activate this feature. I think it should Just Work and\n> stay out of users' faces.\n>\n> Creating the expansion relation isn't that big a deal, but if you\n> don't want to do it always, why not do it on first use?\n\n So you want to do a heap_create_with_catalog() plus\n index_create()'s from inside the heap_insert() or\n heap_update(). Cannot be done from anywhere else, because\n that's the point where we recognize the need. I don't think\n that's a good idea. What would happen if\n\n Xact 1 needs expansion relation and creates it.\n Xact 2 needs expansion relation too and uses that one\n Xact 1 aborts\n Xact 2 commits\n\n Better to put out an explanative error message if tuple too\n big and no expansion relation exists, than dealing with\n trouble when autocreating it. If it later turns out that it\n can safely work as an automated process, we can do it in a\n subsequent release.\n\n> > Also I would like to say that system relations cannot have\n> > expansion relations. At least not until we have enough\n> > experience with this stuff.\n>\n> I'd really, really, really like to have this work for rules, though.\n> Why shouldn't we allow it for system relations? Most of the critical\n> ones have fixed-width tuples anyway, so it won't matter for them.\n\n Me too, and for function source text again.\n\n But this time, you include the syscache into the entire\n approach too.\n\n> BTW, it strikes me we should drop the \"lztext\" special datatype, and\n> instead have compression automatically applied to any varlena that\n> we are contemplating putting out-of-line. (If we're really lucky,\n> that saves us having to put the value out-of-line!)\n\n Nice idea, and should be technically easy since the\n compressor itself is separated from the lztext type. OTOH the\n user then will have no choice to prevent compression tries\n for performance reasons.\n\n So this feature again is something that IMHO should go into a\n configurable option.\n\n> > Is that now what we initially want to give a try? 
If so, I\n> > would like to start soon to get the generic part ready ASAP.\n> > Others could then join in and contribute by adding LONG\n> > support for all the VARLENA data types we have.\n>\n> Yes, if we don't do it inside fastgetattr then there's a lot of code\n> that will have to change.\n\n That's why I'd like a small number of types involved at\n first.\n\n And there we're back on the \"release what we have now\"\n discussion again. Some like to get new functionality out in\n a couple of smaller steps, than doing the big all-in-one\n roll. Some not. Seems we can never get a consensus on that\n :-(\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 13 Dec 1999 03:54:35 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] generic LONG VARLENA"
},
{
"msg_contents": "> [email protected] (Jan Wieck) writes:\n> > first I want to summarize some details, to see if we all\n> > agree so far in the discussion.\n> \n> I snipped everything I agreed with ;-)\n> \n> > - The implementation should be generic for all variable size\n> > types, but can be enabled/disabled per type.\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> \n> Per-type control doesn't strike me as interesting or useful. If there\n> needs to be a control at all, which I doubt, per-table would be the\n> way to go. But how many users will really say, \"Oh yes, I *want* the\n> thing to fail if my tuple's too big!\"? I say: make it automatically\n> apply whenever needed, don't force users to think about it.\n\nAgreed. Who wouldn't want it.\n\n> \n> > - The type specific functions call a central support function\n> > to get the usual VARLENA format, which is taken from a LRU\n> > cache or fetched from the extension relation. They are\n> > responsible for freeing the memory after they're done with\n> > the value.\n> \n> If we are going to do this, we ought also think about solving the\n> generic memory-leakage problem at the same time. No point in having\n> to revisit all the same code later to deal with that issue.\n\nI have a good fix for this. My patch suggested the varlena routine\npfree the pointer returned from expand_long(). No need for that. With\nan LRU cache, we can have the cache itself free the old values. This\nwould be a nice optimization. Just add the lines below:\n\n+ \n+ \tif (VARISLONG(vlena)) /* checks long bit */\n+ \t\tvlena = expand_long(vlena); /* returns palloc long */\n+ \n\nThere aren't any cases where the varlena access routines access more\nthan two varlena values at the same time. If the expansion cache is at\nleast two values, you can just expand it and return memory. When that\ncache entry is expired, the memory is freed. Wow, this makes the\nvarlena changes very compact. All the action is in expand_long().\n\nBasically, don't have the access routines free the memory, have the old\ncache entries be pfreed.\n\n> \n> > I don't think it is a good idea to create the expansion\n> > relation all the time. Some keyword in CREATE TABLE, and/or\n> > another ALTER TABLE should do it instead, so the DB admin can\n> > activate the LONG feature on a per table base as needed.\n> \n> I don't believe it. See above: people will complain that it's a bug\n> that the system doesn't handle their long data values. Saying \"oh, you\n> have to turn it on\" will not appease them. My objection is really the\n> same as for the specialized LONG datatype: I do *not* want people to\n> have to put nonstandard junk into their database schema declarations\n> in order to activate this feature. I think it should Just Work and\n> stay out of users' faces.\n\n> Creating the expansion relation isn't that big a deal, but if you\n> don't want to do it always, why not do it on first use?\n> \n\nYes, why not just create it the first time it is needed. Seems pretty\nsmall performance-wise.\n\n> > Also I would like to say that system relations cannot have\n> > expansion relations. At least not until we have enough\n> > experience with this stuff.\n> \n> I'd really, really, really like to have this work for rules, though.\n> Why shouldn't we allow it for system relations? Most of the critical\n> ones have fixed-width tuples anyway, so it won't matter for them.\n\nOh, that's a good point. 
Seems that is a big reason for expansion of\ntypes.\n\n> \n> BTW, it strikes me we should drop the \"lztext\" special datatype, and\n> instead have compression automatically applied to any varlena that\n> we are contemplating putting out-of-line. (If we're really lucky,\n> that saves us having to put the value out-of-line!)\n\nOoh, very smart. You would need another bit to say whether the varlena\nis compressed or now. If you take it from 4-byte header, we are down to\na 1 GB length limit. You could do all the compression/decompression in\nthe two expansion functions, though compressing and then not using the\nlong_ table would be a little tricky to code, but do-able. You would\ncompress, then if still too large, move to long table.\n\n\n\n> \n> > Is that now what we initially want to give a try? If so, I\n> > would like to start soon to get the generic part ready ASAP.\n> > Others could then join in and contribute by adding LONG\n> > support for all the VARLENA data types we have.\n> \n> Yes, if we don't do it inside fastgetattr then there's a lot of code\n> that will have to change.\n\nSee above. It looks like only a few lines per function now. If we do\nit in fastgetattr, is there less code to change? How do we pfree()?\n\n> \n> > Would really be a big leap if we can get this finished for a\n> > reasonable number of VARLENA types by February 1.\n> \n> The more I think about this the more I think that it's a bad, bad idea\n> to try to have it ready by Feb 1. There's not really enough time to\n> get it right and test it. I don't want to be putting out an unstable\n> release, and that's what I'm afraid we'll have if we try to rush in\n> such a major change as this. Particularly when we have nontrivial\n> amounts of unfinished business elsewhere that we shouldn't neglect.\n> (Jan, do you really think you can make this happen *and* bring foreign\n> keys to a finished status before February? If you are going to leave\n> stuff undone in foreign keys, I think you are making the wrong choice.)\n> \n> Furthermore, we can save ourselves some time if we tackle this change\n> in combination with the fmgr revision and the memory-leak-elimination\n> issue. We will be touching all the same per-data-type code for each\n> of these issues, so why not touch it once instead of several times?\n> \n> In short, I like this design but I think we should plan it for 7.1.\n\nNot sure on this one. Jan will have to comment.\n\nI am excited about the long data type. This is _the_ way to do long\ndata types. Have any of the commercial databases figured out this way\nto do it. I can't imagine a better system.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Dec 1999 21:54:41 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] generic LONG VARLENA"
},
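A toy model of the cache behaviour Bruce describes: callers obtain the expanded value from a small LRU cache and never free it themselves; the memory is released only when an entry is pushed out of the cache. The two-slot size, the fetch stub, and the function names (expand_long included) are invented for the illustration, not the proposed implementation.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CACHE_SLOTS 2

typedef struct
{
	long		value_id;		/* identifies the out-of-line value */
	char	   *expanded;		/* malloc'd expanded copy, freed on eviction */
} cache_slot;

static cache_slot cache[CACHE_SLOTS];

/* stand-in for reassembling the chunks from the expansion relation */
static char *
fetch_from_expansion_relation(long value_id)
{
	char	   *buf = malloc(64);

	sprintf(buf, "expanded value #%ld", value_id);
	return buf;
}

/* return the expanded form; the caller must NOT free the result */
static const char *
expand_long(long value_id)
{
	int			i;

	for (i = 0; i < CACHE_SLOTS; i++)
	{
		if (cache[i].expanded && cache[i].value_id == value_id)
		{
			cache_slot	hit = cache[i];

			for (; i > 0; i--)	/* promote the hit to the front */
				cache[i] = cache[i - 1];
			cache[0] = hit;
			return cache[0].expanded;
		}
	}

	/* miss: evict (and free) the least recently used slot */
	free(cache[CACHE_SLOTS - 1].expanded);
	for (i = CACHE_SLOTS - 1; i > 0; i--)
		cache[i] = cache[i - 1];
	cache[0].value_id = value_id;
	cache[0].expanded = fetch_from_expansion_relation(value_id);
	return cache[0].expanded;
}

int
main(void)
{
	printf("%s\n", expand_long(1));
	printf("%s\n", expand_long(2));
	printf("%s\n", expand_long(1));		/* still cached */
	printf("%s\n", expand_long(3));		/* this evicts and frees #2 */
	return 0;
}
```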
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> If we are going to do this, we ought also think about solving the\n>> generic memory-leakage problem at the same time. No point in having\n>> to revisit all the same code later to deal with that issue.\n\n> I have a good fix for this. My patch suggested the varlena routine\n> pfree the pointer returned from expand_long(). No need for that. With\n> an LRU cache, we can have the cache itself free the old values.\n\nOooh, that's a thought. Sort of like applying TupleTableSlot to\nindividual datum values.\n\n> There aren't any cases where the varlena access routines access more\n> than two varlena values at the same time.\n\nHuh? The standard operators on varlena types access at least three (two\ninputs and a result), and multi-argument functions could access more.\nAlso think about functions written in PLs: they could invoke a large\namount of computation, and would still expect to be able to access their\noriginal input arguments.\n\nI'd feel more comfortable with explicit reference counting. Perhaps\nwe could make an exception for function return values: the cache\nguarantees to hold onto a function return value for a little while\neven though no one is holding a refcount on it at the instant it's\nreturned. Functions (including PL functions) that want to access\nvarlena values across any significant amount of computation would\nhave to bump the refcount on those values somehow.\n\n> I am excited about the long data type. This is _the_ way to do long\n> data types. Have any of the commercial databases figured out this way\n> to do it. I can't imagine a better system.\n\nI think we are working on some really cool ideas here. But I *don't*\nthink we have a solid enough hold on all the details that we can expect\nto implement it and ship it out one-two-three. Thus my feeling that\nthis is for 7.1 not 7.0...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Dec 1999 22:17:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] generic LONG VARLENA "
},
{
"msg_contents": "Tom Lane wrote:\n\n> [email protected] (Jan Wieck) writes:\n> > Nice idea, and should be technically easy since the\n> > compressor itself is separated from the lztext type. OTOH the\n> > user then will have no choice to prevent compression tries\n> > for performance reasons.\n> > So this feature again is something that IMHO should go into a\n> > configurable option.\n>\n> Good point. You're right, there should be a per-datatype \"don't\n> bother to try to compress this type\" flag. (Is per-datatype the\n> right granularity?)\n\n Per column!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 13 Dec 1999 04:27:07 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] generic LONG VARLENA"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> Tom Lane wrote:\n>> Per-type control doesn't strike me as interesting or useful.\n\n> Isn't intended to be a runtime configuration. Just a\n> temporary feature to restrict the attributes that can be\n> moved off to those types, where WE know that the adt\n> functions are prepared for them.\n\nOh, I see. Yeah, if we wanted to make an interim release where only\nsome datatypes were ready for long values, that would be a necessary\nsafety measure. But I'd rather plan on just getting it done in one\nrelease.\n\n>> BTW, it strikes me we should drop the \"lztext\" special datatype, and\n>> instead have compression automatically applied to any varlena that\n>> we are contemplating putting out-of-line. (If we're really lucky,\n>> that saves us having to put the value out-of-line!)\n\n> Nice idea, and should be technically easy since the\n> compressor itself is separated from the lztext type. OTOH the\n> user then will have no choice to prevent compression tries\n> for performance reasons.\n> So this feature again is something that IMHO should go into a\n> configurable option.\n\nGood point. You're right, there should be a per-datatype \"don't\nbother to try to compress this type\" flag. (Is per-datatype the\nright granularity?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Dec 1999 22:31:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] generic LONG VARLENA "
},
{
"msg_contents": "Tom Lane asked:\n\n> (Jan, do you really think you can make this happen *and* bring foreign\n> keys to a finished status before February? If you are going to leave\n> stuff undone in foreign keys, I think you are making the wrong choice.)\n\n Except for the file buffering of the trigger event queue,\n FOREIGN KEY is completely implemented as I proposed, MATCH\n FULL.\n\n Thus I HAVE the time.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 13 Dec 1999 05:04:44 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] generic LONG VARLENA"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> If we are going to do this, we ought also think about solving the\n> >> generic memory-leakage problem at the same time. No point in having\n> >> to revisit all the same code later to deal with that issue.\n> \n> > I have a good fix for this. My patch suggested the varlena routine\n> > pfree the pointer returned from expand_long(). No need for that. With\n> > an LRU cache, we can have the cache itself free the old values.\n> \n> Oooh, that's a thought. Sort of like applying TupleTableSlot to\n> individual datum values.\n> \n> > There aren't any cases where the varlena access routines access more\n> > than two varlena values at the same time.\n> \n> Huh? The standard operators on varlena types access at least three (two\n> inputs and a result), and multi-argument functions could access more.\n> Also think about functions written in PLs: they could invoke a large\n> amount of computation, and would still expect to be able to access their\n> original input arguments.\n> \n> I'd feel more comfortable with explicit reference counting. Perhaps\n> we could make an exception for function return values: the cache\n> guarantees to hold onto a function return value for a little while\n> even though no one is holding a refcount on it at the instant it's\n> returned. Functions (including PL functions) that want to access\n> varlena values across any significant amount of computation would\n> have to bump the refcount on those values somehow.\n\nI just checked the code, and I don't see any places where a varlena is\nreturned that isn't palloc'ed inside the function, so the cache memory\nnever makes it out of the routines.\n\nHowever, I see any reference to VARDATA could be a problem because it\nassume the data is there, and not in the long* relations. I could\nprobably figure out which ones need expanding. They are mostly system\ntable accesses. The others go through adt or are output to the user.\n\n> \n> > I am excited about the long data type. This is _the_ way to do long\n> > data types. Have any of the commercial databases figured out this way\n> > to do it. I can't imagine a better system.\n> \n> I think we are working on some really cool ideas here. But I *don't*\n> think we have a solid enough hold on all the details that we can expect\n> to implement it and ship it out one-two-three. Thus my feeling that\n> this is for 7.1 not 7.0...\n\nWe have gotten pretty far in two days. This long tuple stuff is not as\ndifficult as foreign key because I can actually figure out what is\nhappening with the long types, while foreign key is a complete mystery\nto me.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Dec 1999 23:33:33 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] generic LONG VARLENA"
},
{
"msg_contents": "> Tom Lane asked:\n> \n> > (Jan, do you really think you can make this happen *and* bring foreign\n> > keys to a finished status before February? If you are going to leave\n> > stuff undone in foreign keys, I think you are making the wrong choice.)\n> \n> Except for the file buffering of the trigger event queue,\n> FOREIGN KEY is completely implemented as I proposed, MATCH\n> FULL.\n> \n> Thus I HAVE the time.\n\nWell, this is very good news. Jan, aren't you going to bed?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Dec 1999 23:34:11 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] generic LONG VARLENA"
},
{
"msg_contents": "> However, I see any reference to VARDATA could be a problem because it\n> assume the data is there, and not in the long* relations. I could\n> probably figure out which ones need expanding. They are mostly system\n> table accesses. The others go through adt or are output to the user.\n\nVARDATA looks tricky. Seems I may need that cache of values. In most\ncases, VARDATA values are used within the next few lines of code, just\nlike system cache tuples. If I need it for longer periods, I have to\npalloc it. Good thing most VARDATA values are used for brief periods.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Dec 1999 01:03:26 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] generic LONG VARLENA"
}
] |
[
{
"msg_contents": "Hi,\n\n actually the opr_sanity regression test complains about a\n proc update_pg_pwd(). It has a return type of Opaque and is\n declared as void return type in commands/user.c. It seems to\n be nowhere directly called, nor does it appear to be a\n trigger function.\n\n I wonder if it is properly defined. Shouldn't it return at\n least a valid type to be callable via SQL?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 13 Dec 1999 04:58:00 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "update_pg_pwd"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> actually the opr_sanity regression test complains about a\n> proc update_pg_pwd(). It has a return type of Opaque and is\n> declared as void return type in commands/user.c. It seems to\n> be nowhere directly called, nor does it appear to be a\n> trigger function.\n\nIt is a trigger function for pg_shadow updates, see PATCHES message\nfrom a day or two back.\n\n> I wonder if it is properly defined. Shouldn't it return at\n> least a valid type to be callable via SQL?\n\nopr_sanity is complaining because the declared return type is 0.\nI am not very happy about taking out opr_sanity's check on return types;\nperhaps I should lobby to have Opaque-valued trigger functions be\ndeclared with an actually valid return-type OID. What do you think?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Dec 1999 23:57:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] update_pg_pwd "
},
{
"msg_contents": "> It is a trigger function for pg_shadow updates, see PATCHES message\n> from a day or two back.\n>\n> > I wonder if it is properly defined. Shouldn't it return at\n> > least a valid type to be callable via SQL?\n>\n> opr_sanity is complaining because the declared return type is 0.\n> I am not very happy about taking out opr_sanity's check on return types;\n> perhaps I should lobby to have Opaque-valued trigger functions be\n> declared with an actually valid return-type OID. What do you think?\n\n Trigger functions should allways return at least a NULL\n pointer of type HeapTuple, not be declared void. From this I\n assume it's an AFTER ROW trigger,\n\n There are already some exceptions coded into the test. These\n are PL handlers. Since their real return value is HeapTuple,\n you would have to make this defined special type not\n selectable in another way. So why do you want?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 13 Dec 1999 07:34:30 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] update_pg_pwd"
},
{
"msg_contents": "On Sun, 12 Dec 1999, Tom Lane wrote:\n\n> > I wonder if it is properly defined. Shouldn't it return at\n> > least a valid type to be callable via SQL?\n> \n> opr_sanity is complaining because the declared return type is 0.\n> I am not very happy about taking out opr_sanity's check on return types;\n> perhaps I should lobby to have Opaque-valued trigger functions be\n> declared with an actually valid return-type OID. What do you think?\n\nPlease don't lose me here. Did I do something wrong? Isn't oid 0 used for\nopaque return types? What should an opaque function return in C? I don't\nsee a good reason from a practical point of view to disallow opaque\nfunctions as triggers, for this very reason, achieving none-database side\neffects. At least the create trigger command should say something if it\ndoesn't like it.\n\nIf you have to tailor functionality around the regression tests, this is\nnot the right direction. After all 0 is a valid oid in this context: it's\nopaque.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 13 Dec 1999 12:12:42 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] update_pg_pwd "
},
{
"msg_contents": "On Mon, 13 Dec 1999, Jan Wieck wrote:\n\n> Trigger functions should allways return at least a NULL\n> pointer of type HeapTuple, not be declared void. From this I\n> assume it's an AFTER ROW trigger,\n\nMust be after row, because it has to wait until the change is actually\nwritten to pg_shadow. Better would be an AFTER STATEMENT is assume.\n\n> There are already some exceptions coded into the test. These\n> are PL handlers. Since their real return value is HeapTuple,\n> you would have to make this defined special type not\n> selectable in another way. So why do you want?\n\nI'm not sure I'm following you, but why would a function that doesn't have\na useful return value return one?\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 13 Dec 1999 12:15:06 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] update_pg_pwd"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> On Mon, 13 Dec 1999, Jan Wieck wrote:\n>\n> I'm not sure I'm following you, but why would a function that doesn't have\n> a useful return value return one?\n\n AFTER ROW triggers indeed have no useful return value,\n because it is ignored for now. But IMHO they still should\n follow the trigger programming guidelines.\n\n That means, the declaration should read\n\n HeapTuple funcname(void);\n\n Then they should contain\n\n TriggerData *trigdata;\n ...\n trigdata = CurrentTriggerData;\n CurrentTriggerData = NULL;\n\n and if they do not want to manipulate the actual action, just\n to get informed that it happened, return\n\n trigdata->tg_trigtuple;\n\n I'll make these changes to update_pg_pwd(), now that I know\n for sure what it is.\n\n One last point though. The comment says it's using lower case\n name now to be callable from SQL, what it isn't because of\n it's Opaque return type in pg_proc.\n\n pgsql=> select update_pg_pwd();\n ERROR: typeidTypeRelid: Invalid type - oid = 0\n\n Is that a wanted (needed) capability or should I better\n change the comment to reflect it's real nature?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 13 Dec 1999 12:38:17 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] update_pg_pwd"
},
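Putting those guidelines together, an AFTER ROW trigger function skeleton would look roughly like this. Header names follow the 6.5-era source tree; treat it as a sketch of the calling convention Jan describes, not a drop-in backend patch.

```c
#include "postgres.h"
#include "commands/trigger.h"

HeapTuple
my_after_row_trigger(void)
{
	TriggerData *trigdata;

	if (!CurrentTriggerData)
		elog(ERROR, "my_after_row_trigger: not called as a trigger");

	/* grab and clear the global, per the trigger programming guidelines */
	trigdata = CurrentTriggerData;
	CurrentTriggerData = NULL;

	/*
	 * ... perform whatever side effect the trigger exists for ...
	 */

	/* not interested in changing the action, just observing it */
	return trigdata->tg_trigtuple;
}
```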
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On Sun, 12 Dec 1999, Tom Lane wrote:\n>> opr_sanity is complaining because the declared return type is 0.\n>> I am not very happy about taking out opr_sanity's check on return types;\n>> perhaps I should lobby to have Opaque-valued trigger functions be\n>> declared with an actually valid return-type OID. What do you think?\n\n> Please don't lose me here. Did I do something wrong?\n\nNo, I think you coded the function the way it's currently done. I'm\njust muttering that the way it's currently done needs rethinking.\n\n> If you have to tailor functionality around the regression tests, this is\n> not the right direction. After all 0 is a valid oid in this context: it's\n> opaque.\n\nThe thing I'm unhappy about is that \"0\" is being overloaded way too far\nas a function argument/result type in pg_proc. Currently it could mean:\n\t* unused position in proargtype array;\n\t* erroneous definition;\n\t* \"C string\" parameter to a type input function (but, for who\n\t knows what reason, C string outputs from type-output functions\n\t are represented differently);\n\t* user proc returning some kind of tuple;\n\t* user proc returning nothing in particular;\nand who knows what else. This is bogus. I've complained before that\nthere ought to be a specific OID value associated with \"C string\" and\nthat type input/output functions ought to be declared to take or return\nthat type ID, even though it wouldn't be a \"real\" type in the sense of\never appearing as a column type. The parser already has a similar\nconcept in its \"UNKNOWN\" type, which it uses for string constants that\nare of as-yet-undetermined type. UNKNOWN is real to the extent of\nhaving a specific OID.\n\nI'm thinking maybe we ought to invent another type OID (or two?) that\ncan be used for user functions that are declared OPAQUE. Triggers in\nparticular. That would allow more error checking, and it'd let me take\nout the kluges that presently exist in the opr_sanity regress test to\nkeep it from spitting up on things that are practically\nindistinguishable from genuine errors.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Dec 1999 10:44:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] update_pg_pwd "
},
{
"msg_contents": "I wrote:\n> The thing I'm unhappy about is that \"0\" is being overloaded way too far\n> as a function argument/result type in pg_proc. Currently it could mean:\n> \t* unused position in proargtype array;\n> \t* erroneous definition;\n> \t* \"C string\" parameter to a type input function (but, for who\n> \t knows what reason, C string outputs from type-output functions\n> \t are represented differently);\n> \t* user proc returning some kind of tuple;\n> \t* user proc returning nothing in particular;\n> and who knows what else.\n\nAlmost forgot:\n\t* function accepting any data type whatever\n(I think COUNT() is the only one at present).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Dec 1999 10:46:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] update_pg_pwd "
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> One last point though. The comment says it's using lower case\n> name now to be callable from SQL, what it isn't because of\n> it's Opaque return type in pg_proc.\n\n> pgsql=> select update_pg_pwd();\n> ERROR: typeidTypeRelid: Invalid type - oid = 0\n\n> Is that a wanted (needed) capability or should I better\n> change the comment to reflect it's real nature?\n\nWhat would you expect the SELECT to produce here? I think the error\nmessage is pretty poor, but I can't really see OPAQUE functions being\nallowed in expression contexts...\n\nI don't really like the description of these functions as returning\nsomething \"OPAQUE\", anyway, particularly when that is already being\n(mis) used for user-defined type input/output functions. I wish\nthey were declared as returning something like \"TUPLE\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Dec 1999 11:02:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] update_pg_pwd "
},
{
"msg_contents": "Tom Lane wrote:\n\n> [email protected] (Jan Wieck) writes:\n> > One last point though. The comment says it's using lower case\n> > name now to be callable from SQL, what it isn't because of\n> > it's Opaque return type in pg_proc.\n>\n> > pgsql=> select update_pg_pwd();\n> > ERROR: typeidTypeRelid: Invalid type - oid = 0\n>\n> > Is that a wanted (needed) capability or should I better\n> > change the comment to reflect it's real nature?\n>\n> What would you expect the SELECT to produce here? I think the error\n> message is pretty poor, but I can't really see OPAQUE functions being\n> allowed in expression contexts...\n\n Exactly that error message, you know as I know why it should\n happen. What I wanted to know is, if there could be a reason\n to make it callable via such a SELECT, to force it to do it's\n action, or if that's just an old comment that's wrong now.\n\n> I don't really like the description of these functions as returning\n> something \"OPAQUE\", anyway, particularly when that is already being\n> (mis) used for user-defined type input/output functions. I wish\n> they were declared as returning something like \"TUPLE\".\n\n Yes, that would clearly separate trigger proc's from\n functions. And for unused arguments I would suggest VOID.\n\n But I expect (hope), you want to do this all during the fmgr\n redesign, not right now, no?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 13 Dec 1999 17:54:39 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] update_pg_pwd"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n>> I don't really like the description of these functions as returning\n>> something \"OPAQUE\", anyway, particularly when that is already being\n>> (mis) used for user-defined type input/output functions. I wish\n>> they were declared as returning something like \"TUPLE\".\n\n> Yes, that would clearly separate trigger proc's from\n> functions. And for unused arguments I would suggest VOID.\n\n> But I expect (hope), you want to do this all during the fmgr\n> redesign, not right now, no?\n\nYes, this ought to go along with fmgr changes, probably. But I'm still\nunhappy about the idea of doing all these updates for long values to\nvarlena datatypes without doing the fmgr update at the same time.\n\nI have been thinking some more about the schedule issue, and I still\nthink it's foolhardy to try to do the long-values change by Feb 1.\nIf you recall, that date was set on the assumption that we were only\ngoing to clean up what we had before making the release, not insert\nmajor new features.\n\nIf people are really excited about getting this done for the next\nrelease, I propose that we forget all about Feb 1 and just say\n\"we'll release when this set of changes are done\". We ought to deal\nwith all of these issues together:\n\t* support long values for varlena datatypes;\n\t* eliminate memory leaks for pass-by-ref datatypes (both\n\t varlena and fixed-length);\n\t* rewrite fmgr interface to fix NULL and portability bugs.\nIf we don't do it that way, then not only will we ourselves be\nhaving to visit each datatype module multiple times, but we will be\nbreaking user-added functions in two successive releases. Users\nwill not be happy about that. We should change these coding rules\nthat affect user datatypes *once*, and get it right the first time.\n\nI'd personally prefer to see us put off all these issues till after\na Feb-1-beta release, but I fear I am fighting a losing battle there.\nLet's at least be sane enough to recognize that we don't know quite\nhow long this will take.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Dec 1999 13:02:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] update_pg_pwd "
},
{
"msg_contents": "On 1999-12-13, Jan Wieck mentioned:\n\n> I'll make these changes to update_pg_pwd(), now that I know\n> for sure what it is.\n> \n> One last point though. The comment says it's using lower case\n> name now to be callable from SQL, what it isn't because of\n> it's Opaque return type in pg_proc.\n> \n> pgsql=> select update_pg_pwd();\n> ERROR: typeidTypeRelid: Invalid type - oid = 0\n> \n> Is that a wanted (needed) capability or should I better\n> change the comment to reflect it's real nature?\n\nDo as you seem fit. I just copied that together from other places. What's\nimportant though, is that this function is also called other places, so if\nyou make it \"trigger fit\", then you ought to make it a wrapper around the\nreal one.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Tue, 14 Dec 1999 00:28:50 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] update_pg_pwd"
},
{
"msg_contents": "> [email protected] (Jan Wieck) writes:\n> >> I don't really like the description of these functions as returning\n> >> something \"OPAQUE\", anyway, particularly when that is already being\n> >> (mis) used for user-defined type input/output functions. I wish\n> >> they were declared as returning something like \"TUPLE\".\n> \n> > Yes, that would clearly separate trigger proc's from\n> > functions. And for unused arguments I would suggest VOID.\n> \n> > But I expect (hope), you want to do this all during the fmgr\n> > redesign, not right now, no?\n> \n> Yes, this ought to go along with fmgr changes, probably. But I'm still\n> unhappy about the idea of doing all these updates for long values to\n> varlena datatypes without doing the fmgr update at the same time.\n> \n> I have been thinking some more about the schedule issue, and I still\n> think it's foolhardy to try to do the long-values change by Feb 1.\n> If you recall, that date was set on the assumption that we were only\n> going to clean up what we had before making the release, not insert\n> major new features.\n\nThe scheme followed in previous releases was to put in features just\nbefore beta with little testing, because you can fix bugs in beta, but\nnot add new features. I know I did that trick a few times.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Dec 1999 19:00:49 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] update_pg_pwd"
}
] |
[
{
"msg_contents": "\n\n-----Original Message-----\nFrom: Thomas Lockhart [mailto:[email protected]]\nSent: Friday, December 10, 1999 3:44 PM\nTo: Peter Mount\nCc: 'Tom Lane'; 'The Hermit Hacker'; Bruce Momjian;\nPostgreSQL-development\nSubject: Re: [HACKERS] 6.6 release\n\n\n> > I'm also confused. So far, I've been working on the premise that the\n> > next release would be 7.0 because of the probably major additions\n> > expected, and that I'm hitting the JDBC driver hard to get as much\nof\n> > the 2.0 spec complete as is possible.\n\nOK, now *I'm* confused too! Peter, what in your stuff *requires* a\nversion renumbering to 7.0? The proposal was that we consolidate\nchanges in the backend server for a 6.6 release. Why does JDBC need to\nwait for a \"7.0\" in the version number to support the 2.0 spec?\n\nPM: Nothing yet, but it's possible that when I start on Arrays it will\nneed to work with the latest backend. Also, the version currently in CVS\n(originally intended for 6.5.3) is the first not to be backward\ncompatible with earlier backends, and this one may follow suit.\n\nPM: As for the 2.0 spec, we currently only touch the surface, and there\nmay be the possibility that I have to add some functionality in the\nbackend, esp. with PreparedStatement or CallableStatement.\n\n> That was what I was thinking also, until yesterday. I think that the\n> proposal on the table is simply to consolidate/debug what we've\nalready\n> done and push it out the door. If you've still got substantial work\n> left to finish JDBC 2.0, then it'd be better left for the next\nrelease.\n\nRight.\n\nPM: That's exactly what I'm planning. There is a lot of work to get it\nup to spec, so if we have a 6.6 in Feb. then I won't have it all done by\nthen.\n\n> I know I have a lot of little loose ends dangling on stuff that's\n> already \"done\", and a long list of nitty little bugs to fix, so it\n> makes sense to me to spend some time in fix-bugs-and-make-a-release\n> mode before going back into long-haul-feature-development mode.\n> Now, if other people don't have that feeling, maybe the idea of\n> a near-term release isn't so hot after all.\n\nYes I've got that feeling too!! :)\n\nPM: I'm thinking that now (after thinking about it over the weekend that\nis)\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n",
"msg_date": "Mon, 13 Dec 1999 08:07:20 -0000",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] 6.6 release"
},
{
"msg_contents": "Peter Mount wrote:\n> \n> PM: As for the 2.0 spec, we currently only touch the surface, and there\n> may be the possibility that I have to add some functionality in the\n> backend, esp. with PreparedStatement or CallableStatement.\n\nThat would be rally great if some kind of PreparedStatement support would \nappear in backend. Currently all frontends that need it (at least ODBC, JDBC,\npossibly others too) must fake it.\n\nAlso the protocol (or frontend) should be made smarter about how to insert \nBinary data, espacially in the light of the possibility that we will soon get\nsupport for LONG fields thanks to Jan.\n\nI would hate to construct an insert (or update) command by base64 encoding a\nlarge word file and then constructing an humongous string of it by appending\n\"insert into t(contents) values('\" and prepending \"');\"\n\nI would much prefer the intelligence for it to be in at least libpq if not \nin the protocol, so that I could use something like:\n\ns = prepare(\"insert into t(contents) values($1);\ns.bind(open(myfile).read(),'text');\ns.execute()\ns.bind(open(myotherfile).read(),'text');\ns.execute()\ns.close()\n\nThat would have the advantage of possibly not encoding the whole thing but \neven possibly compressing it for transfer in case of slow links.\n\n----------------------\nHannu\n",
"msg_date": "Mon, 13 Dec 1999 12:38:45 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 release"
}
] |
[
{
"msg_contents": "Warnung!!\n\n Auf dem Markt befindet sich das Buch \"Hacker�s Black Book\". Abbildungen,\n z.B. unter astalavista.com, erwecken den Anschein, da� es sich um ein\n solides, gebundenes, umfangreicheres Buch, mit fundierten Angaben zum Thema\n handelt. Dies ist nicht der Fall. F�r 30.-DM werden 39 Seiten, billigster\n Aufmachung, ohne wesentlichen Wert geliefert. Der Autor ist nicht\nerreichbar\n und reagiert auch nicht auf Aufforderungen sich zu melden. �berlegt Euch\n genau, ob Ihr Euch darauf einlasst.\n\n Bitte die Info weitergegeben!\n\n\n\n",
"msg_date": "Mon, 13 Dec 1999 10:02:37 +0100",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Warnung!"
}
] |
[
{
"msg_contents": "Warnung!!\n\n Auf dem Markt befindet sich das Buch \"Hacker�s Black Book\". Abbildungen,\n z.B. unter astalavista.com, erwecken den Anschein, da� es sich um ein\n solides, gebundenes, umfangreicheres Buch, mit fundierten Angaben zum Thema\n handelt. Dies ist nicht der Fall. F�r 30.-DM werden 39 Seiten, billigster\n Aufmachung, ohne wesentlichen Wert geliefert. Der Autor ist nicht\nerreichbar\n und reagiert auch nicht auf Aufforderungen sich zu melden. �berlegt Euch\n genau, ob Ihr Euch darauf einlasst.\n\n Bitte die Info weitergegeben!\n\n\n\n",
"msg_date": "Mon, 13 Dec 1999 10:38:27 +0100",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Warnung"
}
] |
[
{
"msg_contents": "I somehow remember the MONEY datatype has some problems and might be\nremoved. Now I didn�t follow this topic closely enough, but now I've\nencountered I could use it pretty well. Of course a DECIMAL datatype fits\nthe bill as good since I do not need the currency symbol in psql's output.\n\nBefore I set up my DB I'd like to know which type to prefer.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Mon, 13 Dec 1999 11:57:23 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Datatype MONEY"
},
{
"msg_contents": "Use DECIMAL/NUMERIC. Money is deprecated, slightly broken, and will\neventually disappear. There are some thoughts of re-implementing it on top\nof numeric as a collection of formatting functions in essence, so with\nnumeric (or decimal) you'll be fit for the future.\n\nOn Mon, 13 Dec 1999, Michael Meskes wrote:\n\n> I somehow remember the MONEY datatype has some problems and might be\n> removed. Now I didn�t follow this topic closely enough, but now I've\n> encountered I could use it pretty well. Of course a DECIMAL datatype fits\n> the bill as good since I do not need the currency symbol in psql's output.\n> \n> Before I set up my DB I'd like to know which type to prefer.\n> \n> Michael\n> \n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 13 Dec 1999 12:25:28 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Datatype MONEY"
},
{
"msg_contents": "\nOn Mon, 13 Dec 1999, Michael Meskes wrote:\n\n> I somehow remember the MONEY datatype has some problems and might be\n> removed. Now I didn�t follow this topic closely enough, but now I've\n> encountered I could use it pretty well. Of course a DECIMAL datatype fits\n> the bill as good since I do not need the currency symbol in psql's output.\n> \n> Before I set up my DB I'd like to know which type to prefer.\n> \n> Michael\n\n\n I have complete code for numbers formatting (to_char() compatible with\nOracle). It allow you add a currency symbol corresponding with current\nlocale ... and more features over basic datatypes (float4/8, int4/8).\n\nI send it to the PACHES list next week (probably). \n\n\nExample:\n\ntemplate1=> select float8_to_char(455.9 , 'L999D99') as price;\nprice\n---------\nKc 455,90\n(1 row)\n\n(It is with Czech currency symbol and decimal point (locales))\n\nIMHO is good use for money a float type.\n\n\t\t\t\t\t\t\tKarel\n\n",
"msg_date": "Mon, 13 Dec 1999 13:13:55 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Datatype MONEY"
},
{
"msg_contents": "Karel Zak - Zakkr wrote:\n\n> On Mon, 13 Dec 1999, Michael Meskes wrote:\n>\n> > I somehow remember the MONEY datatype has some problems and might be\n> > removed. Now I didn=B4t follow this topic closely enough, but now I've\n> > encountered I could use it pretty well. Of course a DECIMAL datatype fi=\n> ts\n> > the bill as good since I do not need the currency symbol in psql's outp=\n> ut.\n> >=20\n> > Before I set up my DB I'd like to know which type to prefer.\n> >=20\n> > Michael\n>\n> I have complete code for numbers formatting (to_char() compatible with\n> Oracle). It allow you add a currency symbol corresponding with current\n> locale ... and more features over basic datatypes (float4/8, int4/8).\n>\n> I send it to the PACHES list next week (probably). =20\n>\n>\n> Example:\n>\n> template1=3D> select float8_to_char(455.9 , 'L999D99') as price;\n> price\n> ---------\n> Kc 455,90\n> (1 row)\n>\n> (It is with Czech currency symbol and decimal point (locales))\n>\n> IMHO is good use for money a float type.\n\n In some countries (Germany at least) storage of financial\n booking information is not permitted to use floats. And you\n aren't allowed to use it for calculation of taxes etc.,\n instead you must use some datatype with a fixable number of\n digits after the decimal point.\n\n Thus, only our NUMERIC/DECIMAL type or int4/8 and using the\n 'V' (IIRC) format specifier in to_char() should be used.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 13 Dec 1999 13:34:54 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Datatype MONEY"
},
{
"msg_contents": "\nOn Mon, 13 Dec 1999, Jan Wieck wrote:\n\n> Karel Zak - Zakkr wrote:\n> >\n> > IMHO is good use for money a float type.\n> \n> In some countries (Germany at least) storage of financial\n> booking information is not permitted to use floats. And you\n> aren't allowed to use it for calculation of taxes etc.,\n> instead you must use some datatype with a fixable number of\n> digits after the decimal point.\n> \n> Thus, only our NUMERIC/DECIMAL type or int4/8 and using the\n> 'V' (IIRC) format specifier in to_char() should be used.\n\n Hmm, interesting.. but it is not problem for to_char(), it is problem \n(how number datetype choise) for users.\n\nTo_char() formatting numbers by course of format-picture (second arg.) \nonly - total all is user choise (how set format), and to_char() not check \nif country form allow to use fixet/notfixet digits after the decimal point \n(in locales is not information about it, or yes?). \n\nI take back my previous \"IMHO\". \n\nBut if you use to_char(444.555, '999.99'), output is always with two digits\nafter the decimal point and our country form is pleased ... I agree, it is \nonly output option, internaly is still problem if you will calculate with\nfloat. \n\nOr is other idea for to_char() money formatting and how datetype must be\nsupported (I plan float4/8 int4/8 now)?\n\n(note: 'V' format specifier is multiplier and return a value as 10^n).\n\n\t\t\t\t\t\t\tKarel\n\n\n \n\n",
"msg_date": "Mon, 13 Dec 1999 14:37:44 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Datatype MONEY"
},
{
"msg_contents": "Karel Zak - Zakkr wrote:\n>\n> On Mon, 13 Dec 1999, Jan Wieck wrote:\n>\n> > In some countries (Germany at least) storage of financial\n> > booking information is not permitted to use floats. And you\n>\n> Hmm, interesting.. but it is not problem for to_char(), it is problem\n> (how number datetype choise) for users.\n\n But it is subject for what would happen in the expression\n first if you have both, to_char(float8, text) and\n to_char(numeric, text) available and execute a query with\n to_char(444.55, '9999.99').\n\n If the parser could choose to read in the value as float8 and\n pass that to to_char(float8, text), the system would not be\n compliant to financial software requirements in Germany.\n\n> Or is other idea for to_char() money formatting and how datetype must be\n> supported (I plan float4/8 int4/8 now)?\n\n You should at least add NUMERIC to possible inputs. Otherwise\n there would be no chance than to convert it to float8,\n possibly loosing significant digits (and becoming not\n compliant as to above).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 13 Dec 1999 15:03:08 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Datatype MONEY"
},
{
"msg_contents": "\n\n\nOn Mon, 13 Dec 1999, Jan Wieck wrote:\n\n> Karel Zak - Zakkr wrote:\n> >\n> > On Mon, 13 Dec 1999, Jan Wieck wrote:\n> >\n> > > In some countries (Germany at least) storage of financial\n> > > booking information is not permitted to use floats. And you\n> >\n> > Hmm, interesting.. but it is not problem for to_char(), it is problem\n> > (how number datetype choise) for users.\n> \n> But it is subject for what would happen in the expression\n> first if you have both, to_char(float8, text) and\n> to_char(numeric, text) available and execute a query with\n> to_char(444.55, '9999.99').\n> \n> If the parser could choose to read in the value as float8 and\n> pass that to to_char(float8, text), the system would not be\n> compliant to financial software requirements in Germany.\n\n Hmm, it is very firm in Germany (or in EU?) if not allow to use float \nin financ. software, I must ask about it how is it in Czech. Thank for\ninteresting information :-)\n\n> > Or is other idea for to_char() money formatting and how datetype must be\n> > supported (I plan float4/8 int4/8 now)?\n> \n> You should at least add NUMERIC to possible inputs. Otherwise\n> there would be no chance than to convert it to float8,\n> possibly loosing significant digits (and becoming not\n> compliant as to above).\n> \n\n Well, on a datetype is depend only small part in to_char(), I try \nwrite to_char(numeric, text) version. But I must first explore \nNUMERIC datetupe... (documentation is quiet for this).\n\nThank Jan for suggestion.\n\n\t\t\t\t\t\t\tKarel\n\n\n\n",
"msg_date": "Mon, 13 Dec 1999 15:14:32 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Datatype MONEY"
},
{
"msg_contents": "On Mon, Dec 13, 1999 at 01:13:55PM +0100, Karel Zak - Zakkr wrote:\n> I have complete code for numbers formatting (to_char() compatible with\n> Oracle). It allow you add a currency symbol corresponding with current\n\nSounds good.\n\n> locale ... and more features over basic datatypes (float4/8, int4/8).\n\nNot about DECIMAL/NUMERIC? I don't like the idea of doing currecny\ncalculations with floats.\n\nBTW could anyone tell me how exactly NUMERIC is stored? Just for curiosity.\n\nMichael\n\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Mon, 13 Dec 1999 15:26:39 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Datatype MONEY"
},
{
"msg_contents": "> Well, on a datetype is depend only small part in to_char(), I try\n> write to_char(numeric, text) version. But I must first explore\n> NUMERIC datetupe... (documentation is quiet for this).\n\n NUMERIC's output function returns a null terminated string\n representation as usual. Possibly a dash (negative sign), one\n or more digits, optionally followed by a decimal point and\n one or more digits. And you could get it with an adjusted\n number of digits after the decimal point by doing\n\n text *numeric_to_char(Numeric num, format text)\n {\n char *numstr;\n int32 scale;\n\n ... /* calculate the wanted number of digits */\n ... /* after DP in scale depending on format */\n\n numstr = numeric_out(numeric_round(num, scale));\n }\n\n There will be \"scale\" number of digits after the DP, which is\n missing if scale is zero. The value will be correct rouded\n and/or zero padded at the end.\n\n Wouldn't that be enough for you?\n\n Well, you must work on the string only and cannot read it\n into a float internally, but with the above preprocessing, it\n should be fairly simple.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 13 Dec 1999 15:52:25 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Datatype MONEY"
},
{
"msg_contents": "Michael Meskes wrote:\n\n> On Mon, Dec 13, 1999 at 01:13:55PM +0100, Karel Zak - Zakkr wrote:\n> > I have complete code for numbers formatting (to_char() compatible with\n> > Oracle). It allow you add a currency symbol corresponding with current\n>\n> Sounds good.\n>\n> > locale ... and more features over basic datatypes (float4/8, int4/8).\n>\n> Not about DECIMAL/NUMERIC? I don't like the idea of doing currecny\n> calculations with floats.\n\n First it's a variable size datatype. There's some information\n about weight of first digit, precision, scale and sign.\n Following are all digits coded into nibbles (4-bit per\n digit).\n\n The weight tells which of the digits WRT to the decimal point\n the first nibble contains. Precision and scale tell how many\n digits at all and after DP to have. Leading and trailing zero\n digits are stripped off in the DB stored value with an\n adjusted weight, so a 5000000000000 value with a precision of\n 200 digits will only occupy one nibble when stored. A single\n 5 with a weight of 12.\n\n If I ever find the time (soonest 2001 I expect) I'll\n completely replace the digit storage by small integers and\n store the value in base 10000 instead of 10. Just for\n performance reasons.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 13 Dec 1999 16:13:29 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Datatype MONEY"
},
{
"msg_contents": "Michael Meskes <[email protected]> writes:\n> BTW could anyone tell me how exactly NUMERIC is stored? Just for curiosity.\n\nI believe it's a simple-minded BCD format, one decimal digit per byte.\n\nJan has been muttering about reimplementing it as radix-10000, storing\nfour decimal digits per short instead of one per byte; that'd reduce\nthe number of iterations in the inner calculation loops by 4x, without\nmaking the elementary steps noticeably more expensive on modern hardware...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Dec 1999 11:11:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Datatype MONEY "
},
{
"msg_contents": "Michael Meskes wrote:\n> \n> I somehow remember the MONEY datatype has some problems and might be\n> removed. Now I didn�t follow this topic closely enough, but now I've\n> encountered I could use it pretty well. Of course a DECIMAL datatype fits\n> the bill as good since I do not need the currency symbol in psql's output.\n> \n> Before I set up my DB I'd like to know which type to prefer.\n\nAFAIK the MONEY data type in SQL is a toy rather than a serious thing.\nIt makes a big deal out of locale-dependent currency symbols but that\nway lacks robustness: try the following game:\n\nlocale = INDIA (currency 1 RUPEE <= 1/40 US$)\n\nUPDATE bankAccounts SET balance='10000 Rs.' WHERE id='123'\n\nthen switch your locale to USA (currency 1 US$ >= 40 Rs.)\n\nSELECT balance FROM bankAccounts WHERE id='123'\n\n-> 10000 US$\n\nYou have just got your rupees converted at an exceptional exchange rate\nof 1:1!!!\n\nIn my opinion locale should not affect what gets stored in the data\nbase and local should not change the meaning of the data. So using\nthe locale for currency symbol naively can be problematic. What you\nneed to do to really support money in different currencies is keep\ntrack of your hourly exchange rates etc. Then store your data in\none currency as a DECIMAL or whatever. Alternatively, store the pair\n(value DECIMAL, currency CHAR(3)) in the data base, with currency\nbeing the ISO 3-letter code. Be aware of the difference in semantics!\n\nregards\n-Gunther\n\n-- \nGunther_Schadow-------------------------------http://aurora.rg.iupui.edu\nRegenstrief Institute for Health Care\n1050 Wishard Blvd., Indianapolis IN 46202, Phone: (317) 630 7960\[email protected]#include <usual/disclaimer>",
"msg_date": "Mon, 13 Dec 1999 11:54:04 -0500",
"msg_from": "Gunther Schadow <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Datatype MONEY"
},
{
"msg_contents": "> In my opinion locale should not affect what gets stored in the data\n> base and local should not change the meaning of the data. So using\n> the locale for currency symbol naively can be problematic. What you\n> need to do to really support money in different currencies is keep\n> track of your hourly exchange rates etc. Then store your data in\n> one currency as a DECIMAL or whatever. Alternatively, store the pair\n> (value DECIMAL, currency CHAR(3)) in the data base, with currency\n> being the ISO 3-letter code. Be aware of the difference in semantics!\n\n The latter is IMHO the better. If you have a foreign currency\n account, it's balance will not raise and fall as exchange\n rates change. That's what they are good for. Only at the\n time, you transfer money between different currency accounts,\n the actual exchange rate is used.\n\n Keeping track of hourly/dayly exchange rates is only good if\n you need reports for controlling purposes. There it's better\n to have anything converted into your inhouse currency. View's\n do a wonderful job here.\n\n BTW: The non-floating-point restriction does NOT apply to\n controlling systems, because they are management information\n systems and not subject to the Ministry of Finance, as the\n bookkeeping data is.\n\n For those who wonder: between 1980 and 1983 I learned, and\n until 1987 I worked as a bank clerk. That left some traces\n that sometimes are useful.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 13 Dec 1999 18:14:31 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Datatype MONEY"
},
{
"msg_contents": "\n\nOn Mon, 13 Dec 1999, Jan Wieck wrote:\n\n> text *numeric_to_char(Numeric num, format text)\n> {\n> char *numstr;\n> int32 scale;\n> \n> ... /* calculate the wanted number of digits */\n> ... /* after DP in scale depending on format */\n> \n> numstr = numeric_out(numeric_round(num, scale));\n> }\n> \n> There will be \"scale\" number of digits after the DP, which is\n> missing if scale is zero. The value will be correct rouded\n> and/or zero padded at the end.\n> \n> Wouldn't that be enough for you?\n\nMy answer :-)\n\ntest=> select numeric_to_char(545454.98, '\"der Preis: \"L999G999D99');\nnumeric_to_char\n------------------------\nder Preis: DM 545.454,98\n(1 row)\n\n> Well, you must work on the string only and cannot read it\n> into a float internally, but with the above preprocessing, it\n> should be fairly simple.\n\nYes, I good understend your previous letter(s). Formatting routine in\nto_char() is independent on datetype and for all use string. \n\n(IMHO numeric is very interesting type and not has 16-decimal limitation as\nfloat8, it is good, good, good... \n\nAgain Thank!\n\t\t\t\t\t\t\tKarel\n\n----------------------------------------------------------------------\nKarel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nDocs: http://docs.linux.cz (big docs archive)\t\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n-----------------------------------------------------------------------\n\n",
"msg_date": "Tue, 14 Dec 1999 15:54:10 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Datatype MONEY"
}
] |
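Both Jan and Tom mention above that NUMERIC's digit storage could be redone in base 10000, i.e. four decimal digits per short instead of one per nibble or byte. The helper below is only an illustration of that packing idea for a plain integer digit string; it is not the PostgreSQL numeric code, and the function name is invented.

    #include <stdint.h>
    #include <string.h>

    /* Pack a string of decimal digits (no sign, no decimal point) into
     * base-10000 "digits", four decimal digits per int16_t, most
     * significant group first.  The per-digit loops in add/mul/round then
     * run roughly a quarter as many iterations. */
    static int pack_base10000(const char *decimal, int16_t *out, int outmax)
    {
        int len = (int) strlen(decimal);
        int ndigits = (len + 3) / 4;
        int pos = len;

        if (ndigits > outmax)
            return -1;

        for (int d = ndigits - 1; d >= 0; d--)
        {
            int     start = (pos >= 4) ? pos - 4 : 0;
            int16_t val = 0;

            for (int i = start; i < pos; i++)
                val = (int16_t) (val * 10 + (decimal[i] - '0'));
            out[d] = val;
            pos = start;
        }
        return ndigits;
    }

For Jan's example value 5000000000000 this yields the base-10000 digits {5, 0, 0, 0}; with the weight kept in base-10000 units the trailing zero groups could again be stripped off, just as the current code strips zero decimal digits.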
[
{
"msg_contents": "Hello,\n\nI was wondering if there's something here I'm not seeing (or am unaware\nof) with respect to the use of LIKE in Microsoft Access over ODBC to\nPostgreSQL 6.5.0 using the 6.40.00.06 ODBC driver. I have the following\nquery:\n\nSELECT workorders.workorder, workorders.workorderno, equipment.assetno,\nequipment.controlno\nFROM workorders, equipment\nWHERE equipment.assetno LIKE '%214%' AND\nworkorders.equipment=equipment.equipment\nORDER BY workorders.workorder;\n\nWhen I just \"copy-and-paste\" this query into a psql session, everything\nworks fine (although if I recall correctly, 6.5.0 has a bug with\nsomething of the form LIKE '214%'). However, the following appears in\nthe trace log:\n\nMSACCESS fff8be35:fffa52c5 EXIT S\nQLExecDirect with return code 0 (SQL_SUCCESS)\n HSTMT 0x057d0b60\n UCHAR * 0x051c1828 [ -3]\n\"SELECT \"workorders\".\"workorder\",\"equipment\".\"equipment\" FROM\n\"workorders\",\"equipment\" WHERE\n((\"equipment\".\"assetno\" = '%214%' ) AND\n(\"workorders\".\"equipment\" = \"equipment\".\"equipment\" ) )\nORDER BY \"workorders\".\"workorder\" \\ 0\"\n SDWORD -3\n\nMSACCESS fff8be35:fffa52c5 ENTER SQLFetch\n HSTMT 0x057d0b60\n\nMSACCESS fff8be35:fffa52c5 EXIT SQLFetch with\nreturn code 100 (SQL_NO_DATA_FOUND)\n HSTMT 0x057d0b60\n\nSo I'm wondering who's turning my LIKE clause into an equality operator?\nIs this a Microsoft Access issue? Is there a method of pattern matching\navailable that wouldn't require me to use the direct pass-thru method?\n\nAny help would be greatly appreciated,\n\nMike Mascari\n\n\n",
"msg_date": "Mon, 13 Dec 1999 08:03:43 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is something wrong here?"
}
] |
[
{
"msg_contents": "\n> I am excited about the long data type. This is _the_ way to do long\n> data types. Have any of the commercial databases figured out this way\n> to do it. I can't imagine a better system.\n\nThe commercial db's usually make the dba decide on a per column basis \nwhether the value is stored inside the table or in an extra space \n(blobspace,lobspace ...). They all have propietary syntax for this.\n(I would probably like it configurabe, whith some reasonable default) \nIt is usually available for the text/byte and user defined datatypes. \nIn PostgreSQL the array types come to mind.\n\nWhat I think would be good is, if you could avoid the need for an index on \nthe _LARGE_.. table.\nMy Idea would be to store an xtid of the first lob page slot in the user\ntable,\nand have an xtid pointer to the next lob page slot in it, and so on.\nThat way you could avoid indices on the LARGE table.\nSnapshotAny() would also see the correct long, since an updated value would \nget a new xtid anyway. No need to use up an extra oid.\n\nSince lob's are typically large, the large overhead would be especially \npainful, so a different relkind with another pagelayout seems adequate. \n\nThe pointer would imho be:\n\nlongbit|length|largetableoid|xtid_of_first_lobpage|loblength\n\nJust some ideas\nAndreas\n",
"msg_date": "Mon, 13 Dec 1999 14:42:46 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] generic LONG VARLENA"
},
{
"msg_contents": "Andreas Zeugswetter wrote:\n\n> > I am excited about the long data type. This is _the_ way to do long\n> > data types. Have any of the commercial databases figured out this way\n> > to do it. I can't imagine a better system.\n>\n> The commercial db's usually make the dba decide on a per column basis\n> whether the value is stored inside the table or in an extra space\n> (blobspace,lobspace ...). They all have propietary syntax for this.\n> (I would probably like it configurabe, whith some reasonable default)\n> It is usually available for the text/byte and user defined datatypes.\n\n Must have been proprietary syntax. There are may places in\n SQL92 and SQL3 specs, where the words IMPLEMENTATION DEFINED\n appear. With so many possible differences between\n implementations within standard compliance, there must be\n differences in the language too.\n\n For the database schema, we cannot avoid proprietary syntax\n to use implementation specific features. We don't have\n tablespaces, extents etc., but if we ever implement something\n like that, should we be unable to customize it because there\n is no syntax defined in the standard?\n\n> In PostgreSQL the array types come to mind.\n\n There was a user request about \"tuple too big\" right today\n when storing a polygon.\n\n> What I think would be good is, if you could avoid the need for an index on\n> the _LARGE_.. table.\n> My Idea would be to store an xtid of the first lob page slot in the user\n> table,\n> and have an xtid pointer to the next lob page slot in it, and so on.\n> That way you could avoid indices on the LARGE table.\n> SnapshotAny() would also see the correct long, since an updated value would\n> get a new xtid anyway. No need to use up an extra oid.\n\n While I would like such an approach too, I don't want to do\n it really. It would require to treat the lob tuples\n different from regular ones in vacuum. It is one of the most\n important tools for a productional DB. One single broken\n xtid chain due to an aborted vacuum will corrupt your\n database. Better keep it working the same as for a regular\n table.\n\n> Since lob's are typically large, the large overhead would be especially\n> painful, so a different relkind with another pagelayout seems adequate.\n\n No, I think a single Oid index on a relation, where only\n usually large tuples are stored is a very small overhead.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 13 Dec 1999 15:34:37 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] generic LONG VARLENA"
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> \n> > I am excited about the long data type. This is _the_ way to do long\n> > data types. Have any of the commercial databases figured out this way\n> > to do it. I can't imagine a better system.\n> \n> The commercial db's usually make the dba decide on a per column basis \n> whether the value is stored inside the table or in an extra space \n> (blobspace,lobspace ...). They all have propietary syntax for this.\n> (I would probably like it configurabe, whith some reasonable default) \n> It is usually available for the text/byte and user defined datatypes. \n> In PostgreSQL the array types come to mind.\n> \n> What I think would be good is, if you could avoid the need for an index on \n> the _LARGE_.. table.\n> My Idea would be to store an xtid of the first lob page slot in the user\n> table,\n> and have an xtid pointer to the next lob page slot in it, and so on.\n> That way you could avoid indices on the LARGE table.\n> SnapshotAny() would also see the correct long, since an updated value would \n> get a new xtid anyway. No need to use up an extra oid.\n\nYou are getting the data in 8k chunks, so it shouldn't be bad. I think\nusing ctid is overly complex and makes vacuum fragile on that table. \nBetter to use the tools we have like indexing and the standard tuple\nlayout code. Custom solutions like ctid are better off only when we see\nseriouis performance problems and can't resolve them any other way.\n\nFor example, having an expanded tuple cache will give us great speed\nimprovements.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Dec 1999 17:52:08 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] generic LONG VARLENA"
}
] |
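Andreas's proposed in-tuple pointer (longbit|length|largetableoid|xtid_of_first_lobpage|loblength) can be pictured as a struct roughly like the one below. This is only a reading aid for his sketch; the field names and types are guesses, not code from any patch under discussion, and ItemPointerData is declared locally as a stand-in for the backend's tuple id type.

    #include <stdint.h>

    /* Stand-in for the backend's ItemPointerData: a block number split
     * into two 16-bit halves plus an offset within the page. */
    typedef struct
    {
        uint16_t bi_hi;
        uint16_t bi_lo;
        uint16_t posid;
    } ItemPointerData;

    /* Hypothetical reference, stored in the user tuple, to a long value
     * kept as a chain of slots in a separate "_LARGE_" relation. */
    typedef struct
    {
        uint32_t        vl_len;        /* varlena header; high bit = "long" flag */
        uint32_t        largerelid;    /* oid of the _LARGE_ relation            */
        ItemPointerData first_slot;    /* ctid of the first lob page slot        */
        uint32_t        loblength;     /* total size of the stored value         */
    } LongValueRef;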
[
{
"msg_contents": "\n> > Since lob's are typically large, the large overhead would \n> be especially\n> > painful, so a different relkind with another pagelayout \n> seems adequate.\n> \n> No, I think a single Oid index on a relation, where only\n> usually large tuples are stored is a very small overhead.\n\nWell actually it will be one tuple per ~8k, so more than the \noriginal tuple count, but I also meant the ~40 bytes per tuple\nin the datapage.\n\nBut your arguments sound convincing :-)\nAndreas\n",
"msg_date": "Mon, 13 Dec 1999 15:49:51 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] generic LONG VARLENA"
}
] |
[
{
"msg_contents": "I'm currently working on Create/Alter/Drop Group statements. I have all\nthe framework set up, I can create empty groups and drop them, and create\nall sorts of notices about unimplemented functionality.\n\nFirst, the syntax I had in mind:\n\nCREATE GROUP name [ WITH [ SYSID id ] [ USER name1, name2, ... ] ]\nALTER GROUP name WITH SYSID id /* changes sysid */\nALTER GROUP name ADD USER name1, name2, ...\nALTER GROUP name DROP USER name1, name2, ...\nDROP GROUP name\n\nPlease protest now or hold your peace for at least one release. :)\n\n\nHere's a tricky problem:\n=> insert into pg_group values ('one', 1, NULL);\n=> create group two;\nboth create groups of identical fashion. The create group uses\nheap_insert().\n\nNow I do\ntemplate1=> alter group one add user foo;\nNOTICE: Cannot add users to a group, yet.\nALTER USER\n/* OK */\ntemplate1=> alter group two add user foo;\nAlterGroup: Group \"two\" does not exist.\n/* Huh? */\n\nThis is caused by this statement:\nif (!HeapTupleIsValid(\n SearchSysCacheTuple(GRONAME, PointerGetDatum(stmt->name), 0, 0, 0))\n )\n{\n\theap_close(pg_group_rel, AccessExclusiveLock);\n\t\tUserAbortTransactionBlock();\n\t\telog(ERROR, \"AlterGroup: Group \\\"%s\\\" does not exist.\",\nstmt->name);\n\t}\n\n\nHowever:\n\ntemplate1=> select * from pg_group;\ngroname,grosysid,grolist\none,0,\ntwo,1,\n\nHowever however:\n\ntemplate1=> select * from pg_group where groname='two';\ngroname,grosysid,grolist\n(0 rows)\n\nBUT:\n\ntemplate1=> drop group two;\nDROP GROUP\ntemplate1=> drop group two;\nERROR: DropGroup: Group \"two\" does not exist.\nas expected.\n\nDropGroup does a normal heap_scan checking for the existence of the\ngroups. I'm not sure, is there some subtle binary off-by-one-bit problem,\nare the strings encoded differently, etc.?\n\nInterestingly, this similar statement works:\n\n\ttuple = SearchSysCacheTuple(SHADOWNAME, PointerGetDatum(stmt->user), 0, 0, 0);\n\tif (!HeapTupleIsValid(tuple))\n\t{\n\t\theap_close(pg_shadow_rel, AccessExclusiveLock);\n\t\tUserAbortTransactionBlock();\n\t\telog(ERROR, \"AlterUser: user \\\"%s\\\" does not exist\", stmt->user);\n\t}\n\nAlso, why can pg_group not be vacuumed? (pg_shadow can.) With all this\ntesting, mine is filling up.\n\nPerhaps related, but just out of curiosity: Why is pg_group a system\nrelatation (pg_class.relkind='s')? Only three other ones have this\nset: pg_variable, pg_log, and pg_xactlog.\n\nAny help will be appreciated. I'll be back when I need to figure out the\narrays. :)\n\n(Patch is included for those that don't have a clue what I'm talking about\nbut would like to find out anyway.)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden",
"msg_date": "Mon, 13 Dec 1999 21:48:16 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Create Group"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Now I do\n> template1=> alter group one add user foo;\n> NOTICE: Cannot add users to a group, yet.\n> ALTER USER\n> /* OK */\n> template1=> alter group two add user foo;\n> AlterGroup: Group \"two\" does not exist.\n> /* Huh? */\n\nI'll bet you forgot to update the indexes on pg_group. heap_insert is\nnot sufficient for a table that has indexes; you have to do a little\nnumber that typically looks like (this example from the COMMENT code):\n\n if (RelationGetForm(description)->relhasindex) {\n Relation idescs[Num_pg_description_indices];\n \n CatalogOpenIndices(Num_pg_description_indices, \n\t\t\t Name_pg_description_indices, idescs);\n CatalogIndexInsert(idescs, Num_pg_description_indices, description, \n\t\t\t desctuple);\n CatalogCloseIndices(Num_pg_description_indices, idescs);\n }\n\n> Also, why can pg_group not be vacuumed? (pg_shadow can.) With all this\n> testing, mine is filling up.\n\n> Perhaps related, but just out of curiosity: Why is pg_group a system\n> relatation (pg_class.relkind='s')?\n\nThat seems wrong, wrong, wrong --- and it probably explains why VACUUM\nwon't touch it. 's' is for special relations not system relations, and\npg_group is not special. I'm surprised it works at all...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Dec 1999 17:35:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Create Group "
},
{
"msg_contents": "On 1999-12-13, Tom Lane mentioned:\n\n> > Also, why can pg_group not be vacuumed? (pg_shadow can.) With all this\n> > testing, mine is filling up.\n> \n> > Perhaps related, but just out of curiosity: Why is pg_group a system\n> > relatation (pg_class.relkind='s')?\n> \n> That seems wrong, wrong, wrong --- and it probably explains why VACUUM\n> won't touch it. 's' is for special relations not system relations, and\n> pg_group is not special. I'm surprised it works at all...\n\nNOTICE: Vacuum: can not process index and certain system tables\n\nFeel free to change this sooner rather than later because it also throws\noff a few other things (e.g., psql and pg_dump probably). I couldn't even\nfind the place where this is specified in the catalogs. I really assume\nthis is an accident that has gone unnoticed because of the lack of usage.\n\nAfterthought: The last claim seems to be supported by code fragments such\nas this:\n\n#define Natts_pg_group 1\n#define Anum_pg_group_groname 1\n#define Anum_pg_group_grosysid 2\n#define Anum_pg_group_grolist 3\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n\n",
"msg_date": "Tue, 14 Dec 1999 20:26:01 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Create Group "
}
] |
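The symptom Peter reports (a freshly created group invisible to the GRONAME syscache but visible to a sequential scan) is exactly what Tom's snippet cures: heap_insert() alone does not maintain a catalog's indexes, and syscache lookups go through them. A rough sketch of the combined insert path is below; it only follows the pg_description idiom Tom quotes, and the Num_pg_group_indices / Name_pg_group_indices constants plus the local variable names are assumed to exist (or to be added) analogously, not taken from any existing patch.

    /* Fragment only, inside CREATE GROUP: insert the tuple, then keep the
     * catalog indexes (and therefore the syscache) in sync with it. */
    tuple = heap_formtuple(pg_group_dsc, new_record, new_record_nulls);
    heap_insert(pg_group_rel, tuple);

    if (RelationGetForm(pg_group_rel)->relhasindex)
    {
        Relation idescs[Num_pg_group_indices];

        CatalogOpenIndices(Num_pg_group_indices, Name_pg_group_indices, idescs);
        CatalogIndexInsert(idescs, Num_pg_group_indices, pg_group_rel, tuple);
        CatalogCloseIndices(Num_pg_group_indices, idescs);
    }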
[
{
"msg_contents": "We did discuss this. It seems there is circular dependency about\ndumping functions and tables, where some rely on the other. We\ndiscussed this, and the only fix we can think of is to dump the entries\nin creation order, using the oid as a guide.\n\nNot sure when we can implement this.\n\n\n> Hi Bruce,\n> \n> Sorry to bother you personally, but as you are keeper of the \"To Do\" list, I\n> thought I would check with you directly rather than clutter up the Postgresql\n> mail lists.\n> \n> Some time ago I submitted a bug report about PostgreSQL pg_dump. I would forward\n> you a copy of my e-mail if I could find one.\n> \n> The essence of the report was that the order of the dumped items from pg_dump\n> made a direct reload (without hand editing the dump) impossible.\n> \n> The case I stumbled on was something like:\n> \n> \n> > CREATE Function MyTimeStamp (what ever);\n> >\n> > CREATE TABLE MyTable (\n> > key int PRIMARY KEY,\n> > add_date timestamp DEFAULT MyTimeStamp()\n> > );\n> >\n> The problem is that pg_dump dumps the Functions after the Tables, so when\n> re-loading, the above table definition fails (it doesn't know about the function\n> MyTimeStamp() at the time of creation).\n> \n> There were no comments about my report at the time I made it, so I was concerned\n> that the HACKERs may have missed it. With a major release \"just now coming\", I\n> thought I should re-port the report.\n> \n> Hope this helps,\n> Mark\n> \n> --\n> Mark Dalphin email: [email protected]\n> Mail Stop: 29-2-A phone: +1-805-447-4951 (work)\n> One Amgen Center Drive +1-805-375-0680 (home)\n> Thousand Oaks, CA 91320 fax: +1-805-499-9955 (work)\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Dec 1999 18:02:54 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Followup to my bug report"
},
{
"msg_contents": "Hmmm, I thought I had tested for that. It seemed to me that functions were evaluated\nat run-time, so any reference in a function to a table would not be noticed until\nthat function was actually called (this is for PL/pgsql where I know that even stupid\nsyntax errors are not caught until run-time). The SQL parser, however, does check\nthat the function exists before one can create the table...\n\nThank you for following up on this.\nMark\n\nBruce Momjian wrote:\n\n> We did discuss this. It seems there is circular dependency about\n> dumping functions and tables, where some rely on the other. We\n> discussed this, and the only fix we can think of is to dump the entries\n> in creation order, using the oid as a guide.\n>\n> Not sure when we can implement this.\n>\n> > Hi Bruce,\n> >\n> > Sorry to bother you personally, but as you are keeper of the \"To Do\" list, I\n> > thought I would check with you directly rather than clutter up the Postgresql\n> > mail lists.\n> >\n> > Some time ago I submitted a bug report about PostgreSQL pg_dump. I would forward\n> > you a copy of my e-mail if I could find one.\n> >\n> > The essence of the report was that the order of the dumped items from pg_dump\n> > made a direct reload (without hand editing the dump) impossible.\n> >\n> > The case I stumbled on was something like:\n> >\n> >\n> > > CREATE Function MyTimeStamp (what ever);\n> > >\n> > > CREATE TABLE MyTable (\n> > > key int PRIMARY KEY,\n> > > add_date timestamp DEFAULT MyTimeStamp()\n> > > );\n> > >\n> > The problem is that pg_dump dumps the Functions after the Tables, so when\n> > re-loading, the above table definition fails (it doesn't know about the function\n> > MyTimeStamp() at the time of creation).\n> >\n> > There were no comments about my report at the time I made it, so I was concerned\n> > that the HACKERs may have missed it. With a major release \"just now coming\", I\n> > thought I should re-port the report.\n> >\n> > Hope this helps,\n> > Mark\n> >\n> > --\n> > Mark Dalphin email: [email protected]\n> > Mail Stop: 29-2-A phone: +1-805-447-4951 (work)\n> > One Amgen Center Drive +1-805-375-0680 (home)\n> > Thousand Oaks, CA 91320 fax: +1-805-499-9955 (work)\n> >\n> >\n> >\n> >\n>\n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n--\nMark Dalphin email: [email protected]\nMail Stop: 29-2-A phone: +1-805-447-4951 (work)\nOne Amgen Center Drive +1-805-375-0680 (home)\nThousand Oaks, CA 91320 fax: +1-805-499-9955 (work)\n\n\n\n",
"msg_date": "Mon, 13 Dec 1999 15:32:42 -0800",
"msg_from": "Mark Dalphin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Followup to my bug report"
}
] |
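The fix Bruce describes (dump the entries in creation order, using the oid as a guide) amounts to sorting everything dumpable by oid before emitting it. Purely as an illustration of the idea, and not as what pg_dump actually does, a client could collect tables and functions into one oid-ordered list along these lines; the column tests and the oid cutoff for "user objects" are guesses and differ between versions.

    /* Assumes an open libpq connection 'conn'. */
    PGresult *res = PQexec(conn,
        "SELECT oid, relname AS name, 'table' AS kind "
        "  FROM pg_class WHERE relkind = 'r' AND relname !~ '^pg_' "
        "UNION "
        "SELECT oid, proname AS name, 'function' AS kind "
        "  FROM pg_proc WHERE oid >= 20000 "   /* hypothetical user-object cutoff */
        "ORDER BY oid");

    for (int i = 0; i < PQntuples(res); i++)
        printf("dump %s \"%s\" (oid %s)\n",
               PQgetvalue(res, i, 2), PQgetvalue(res, i, 1), PQgetvalue(res, i, 0));
    PQclear(res);

Objects created earlier have smaller oids, so a function referenced in a table's DEFAULT clause comes out of this list before the table that uses it.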
[
{
"msg_contents": "Just going through the TODO list, at the risk of starting another\nheart-breaking discussion, did we not agree to _not_ do this:\n\n* rename 'createuser' to 'pg_createuser', and add 'pg_' to other commands\n\nAll the commands that need it (version, dump, id) already have this\nprefix, all the other ones have no provable name conflicts, so leave as\nis?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Tue, 14 Dec 1999 00:28:07 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_createuser"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> Just going through the TODO list, at the risk of starting another\n> heart-breaking discussion, did we not agree to _not_ do this:\n> \n> * rename 'createuser' to 'pg_createuser', and add 'pg_' to other commands\n> \n> All the commands that need it (version, dump, id) already have this\n> prefix, all the other ones have no provable name conflicts, so leave as\n> is?\n\nRemoved from TODO list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Dec 1999 19:12:08 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_createuser"
}
] |
[
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> In some countries (Germany at least) storage of financial\n> booking information is not permitted to use floats. And you\n> aren't allowed to use it for calculation of taxes etc.,\n> instead you must use some datatype with a fixable number of\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> digits after the decimal point.\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n*AND* with correct rounding behavior for the least significant digit\n(which may not be displayed, as with the U.S. \"mil\"--one tenth of a \ncent).\n\n\t-Michael Robinson\n\nP.S. I like the idea of a money type with an internal field for the ISO\ncurrency type. For all my current applications I have to break that out\nas a separate char(3) field (USD, HKD, JPY, RMB, etc.).\n\n",
"msg_date": "Tue, 14 Dec 1999 12:20:21 +0800 (CST)",
"msg_from": "Michael Robinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Datatype MONEY"
}
] |
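Several posts in these money threads converge on the same recipe: keep the amount in an exact representation and carry the ISO 4217 code alongside it, rather than relying on locale or on binary floating point. Below is a small, self-contained C illustration of that pairing (exact minor units plus a 3-letter code); it is an editorial example only, not code from PostgreSQL, and the type and function names are invented.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Money as exact minor units (cents, mils, ...) plus an ISO 4217 code.
     * Integer arithmetic keeps the least significant digit exact, which is
     * the property the bookkeeping rules discussed above require. */
    typedef struct
    {
        int64_t minor;        /* amount in minor units, e.g. 1234 = 12.34 */
        int     scale;        /* decimal digits represented by the minor unit */
        char    currency[4];  /* "USD", "DEM", "CZK", ... */
    } money_t;

    static int money_add(const money_t *a, const money_t *b, money_t *out)
    {
        /* Refuse to mix currencies implicitly; conversion needs an explicit
         * exchange rate at transfer time, as Jan points out. */
        if (strcmp(a->currency, b->currency) != 0 || a->scale != b->scale)
            return -1;
        out->minor = a->minor + b->minor;
        out->scale = a->scale;
        strcpy(out->currency, a->currency);
        return 0;
    }

    int main(void)
    {
        money_t x = { 1050, 2, "USD" }, y = { 995, 2, "USD" }, z;

        if (money_add(&x, &y, &z) == 0)     /* prints 20.45 USD (scale 2) */
            printf("%lld.%02lld %s\n", (long long) (z.minor / 100),
                   (long long) (z.minor % 100), z.currency);
        return 0;
    }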
[
{
"msg_contents": "I have found a few questionable codings. I'm not sure if it really\nhurts anything. Suggestions are welcome.\n\n1) in storage/lmgr/lock.c: LockShmemSize()\n\nsize += MAXALIGN(maxBackends * sizeof(PROC));\t\t/* each MyProc */\nsize += MAXALIGN(maxBackends * sizeof(LOCKMETHODCTL));\t\t/* each\n\nshouldn't be:\n\nsize += maxBackends * MAXALIGN(sizeof(PROC));\t\t/* each MyProc */\nsize += maxBackends * MAXALIGN(sizeof(LOCKMETHODCTL));\t\t/* each\n\n2) in utils/hash/dynahash.c:hash_search():\n\nAssert(saveState.currElem && !(saveState.currElem = 0));\n\nDoes anybody know what it is for?\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 14 Dec 1999 17:54:54 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Questionable codes"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> I have found a few questionable codings. I'm not sure if it really\n> hurts anything. Suggestions are welcome.\n\n> 1) in storage/lmgr/lock.c: LockShmemSize()\n\n> size += MAXALIGN(maxBackends * sizeof(PROC));\t\t/* each MyProc */\n> size += MAXALIGN(maxBackends * sizeof(LOCKMETHODCTL));\t\t/* each\n\n> shouldn't be:\n\n> size += maxBackends * MAXALIGN(sizeof(PROC));\t\t/* each MyProc */\n> size += maxBackends * MAXALIGN(sizeof(LOCKMETHODCTL));\t\t/* each\n\nProbably, but I'm not sure it really makes any difference. We add on\n10% or so slop after we've finished adding up all these numbers, anyway\n;-)\n\n> 2) in utils/hash/dynahash.c:hash_search():\n\n> Assert(saveState.currElem && !(saveState.currElem = 0));\n\n> Does anybody know what it is for?\n\nThat's part of that horribly ugly, non-reentrant HASH_REMOVE_SAVED\ninterface, isn't it? I have a to-do item to rip that code out and\nreplace it with a more reasonable design ... in the meantime, I don't\nthink it much matters whether the Assert could be tightened up ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Dec 1999 11:23:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Questionable codes "
},
{
"msg_contents": "> I have found a few questionable codings. I'm not sure if it really\n> hurts anything. Suggestions are welcome.\n> \n> 1) in storage/lmgr/lock.c: LockShmemSize()\n> \n> size += MAXALIGN(maxBackends * sizeof(PROC));\t\t/* each MyProc */\n> size += MAXALIGN(maxBackends * sizeof(LOCKMETHODCTL));\t\t/* each\n> \n> shouldn't be:\n> \n> size += maxBackends * MAXALIGN(sizeof(PROC));\t\t/* each MyProc */\n> size += maxBackends * MAXALIGN(sizeof(LOCKMETHODCTL));\t\t/* each\n\nYes, you are correct. The bottom one is better.\n\n> \n> 2) in utils/hash/dynahash.c:hash_search():\n> \n> Assert(saveState.currElem && !(saveState.currElem = 0));\n> \n> Does anybody know what it is for?\n\nNo idea.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 14 Dec 1999 11:25:10 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Questionable codes"
}
] |
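Tatsuo's first point is a pure arithmetic one: MAXALIGN over the whole product rounds only once, while each PROC entry in shared memory is aligned individually, so the per-entry rounding is what should be summed; Bruce confirms the second form is better. A tiny standalone illustration follows, with a local stand-in for the backend's MAXALIGN macro assuming an 8-byte maximum alignment and a made-up struct size.

    #include <stdio.h>

    /* Stand-in for the backend's MAXALIGN macro, assuming 8-byte alignment. */
    #define MAXALIGN(LEN) (((LEN) + 7) & ~((size_t) 7))

    int main(void)
    {
        size_t entry = 37;          /* pretend sizeof(PROC) == 37 */
        int    maxBackends = 64;

        /* Rounds once at the end: under-estimates the space needed when
         * every entry is separately aligned by the allocator. */
        size_t once = MAXALIGN(maxBackends * entry);

        /* Rounds each entry, matching how the entries are actually laid out. */
        size_t per_entry = (size_t) maxBackends * MAXALIGN(entry);

        printf("rounded once: %zu bytes, per entry: %zu bytes\n", once, per_entry);
        return 0;
    }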
[
{
"msg_contents": "\n> First, the syntax I had in mind:\n> \n> CREATE GROUP name [ WITH [ SYSID id ] [ USER name1, name2, ... ] ]\n> ALTER GROUP name WITH SYSID id /* changes sysid */\n> ALTER GROUP name ADD USER name1, name2, ...\n> ALTER GROUP name DROP USER name1, name2, ...\n> DROP GROUP name\n> \n> Please protest now or hold your peace for at least one release. :)\n>\n\nI think a group can be interpreted somehow like a priviledge.\nAs such the statement to add or remove a user from a group \nwould be a \"grant\" statement.\n\nThe standard mutters something about \"role\"s \n(again haven't looked it up) \nI don't like the word role instead of group, but maybe if there\nis a standard we should use it.\n\nInformix and Oracle use the keyword role for groups,\nand use grant/revoke to administer them.\n\nAndreas\n",
"msg_date": "Tue, 14 Dec 1999 10:39:29 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Create Group"
},
{
"msg_contents": "On Tue, 14 Dec 1999, Zeugswetter Andreas SB wrote:\n\n> > CREATE GROUP name [ WITH [ SYSID id ] [ USER name1, name2, ... ] ]\n> > ALTER GROUP name WITH SYSID id /* changes sysid */\n> > ALTER GROUP name ADD USER name1, name2, ...\n> > ALTER GROUP name DROP USER name1, name2, ...\n> > DROP GROUP name\n\n> I think a group can be interpreted somehow like a priviledge.\n> As such the statement to add or remove a user from a group \n> would be a \"grant\" statement.\n\nNot really, at least not in our context. A group is a collection\n(\"group\") of users which can collectively be granted privileges. For\nexample, you can do grant select on your_table to group staff (even right\nnow).\n\n> The standard mutters something about \"role\"s \n> (again haven't looked it up) \n> I don't like the word role instead of group, but maybe if there\n> is a standard we should use it.\n> \n> Informix and Oracle use the keyword role for groups,\n> and use grant/revoke to administer them.\n\nI suppose they have a slightly different underlying philosposhy then.\nPostgreSQL already uses \"group\" all over the place, this is just a logical\nextension which was missing.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 14 Dec 1999 12:12:33 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Create Group"
}
] |
[
{
"msg_contents": "\n> Hmm,we have discussed about LONG.\n> Change by LONG is transparent to users and would resolve\n> the big tuple problem mostly.\n> I'm suspicious that tuple chaining is worth the work now.\n\nAll commercial db's I know allow at least 32kb tuples,\nthey all do it with chaining, because they usually have a \nsmaller (often configurable) pagesize. \nImho it is definitely worth it.\n\nAndreas\n",
"msg_date": "Tue, 14 Dec 1999 11:05:05 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Volunteer: Large Tuples / Tuple chaining"
},
{
"msg_contents": "\n\nZeugswetter Andreas SB wrote:\n\n> > Hmm,we have discussed about LONG.\n> > Change by LONG is transparent to users and would resolve\n> > the big tuple problem mostly.\n> > I'm suspicious that tuple chaining is worth the work now.\n>\n> All commercial db's I know allow at least 32kb tuples,\n> they all do it with chaining, because they usually have a\n> smaller (often configurable) pagesize.\n> Imho it is definitely worth it.\n>\n\nThere would be few cases > 8K tuples after LONG was implemented.\nAnd tuple chaining is much difficult to implement than LONG.\nIf it is badly designed it would be a disaster.\nIs it still worth doing now ?\n\nAt least the design must be verified sufficiently before going.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 14 Dec 1999 19:50:23 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Volunteer: Large Tuples / Tuple chaining"
}
] |
[
{
"msg_contents": "\n> \n> > > CREATE GROUP name [ WITH [ SYSID id ] [ USER name1, name2, ... ] ]\n> > > ALTER GROUP name WITH SYSID id /* changes sysid */\n> > > ALTER GROUP name ADD USER name1, name2, ...\n> > > ALTER GROUP name DROP USER name1, name2, ...\n> > > DROP GROUP name\n> \n> > I think a group can be interpreted somehow like a priviledge.\n> > As such the statement to add or remove a user from a group \n> > would be a \"grant\" statement.\n> \n> Not really, at least not in our context. A group is a collection\n> (\"group\") of users which can collectively be granted privileges. For\n> example, you can do grant select on your_table to group staff \n> (even right\n> now).\n\nAt least Informix and Oracle see it that way (and call it role).\nThe functionality is the same.\n\nAndreas\n",
"msg_date": "Tue, 14 Dec 1999 12:16:24 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Create Group"
}
] |
[
{
"msg_contents": "> All I really wanted to do is fix TODO item\n> * database names with spaces fail\n> but that is already taken care of, they work fine. Please check it off.\n> Meanwhile, database names with single quotes in names don't work very well\n> at all, and because of shell quoting rules this can't be fixed, so I put\n> in error messages to that end.\n\nThat seems to be a bit heavy handed; why bother disallowing things in\nthe backend because some (small number of) shell-based tools have\ntrouble as clients? I'd prefer filtering that at the client end, and\nallowing capable clients to do whatever they please.\n\nThere is a related issue which afaik no one has addressed yet: the\npermissions ACLs are stored as a string with a format like\n\"accountname=permissions\" (doing this from memory, so the details may\nbe wrong) but with quoting allowed for table names and user names one\ncould embed an equals sign into an account or group name and muck with\npermissions. I haven't looked at the code in a long time, but was\nthinking about recoding ACLs as a two-field type to enforce an\nunambigous interpretation of the two fields. Interested??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 14 Dec 1999 14:21:35 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] createdb/dropdb fixes"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Meanwhile, database names with single quotes in names don't work very well\n>> at all, and because of shell quoting rules this can't be fixed, so I put\n>> in error messages to that end.\n\n> That seems to be a bit heavy handed; why bother disallowing things in\n> the backend because some (small number of) shell-based tools have\n> trouble as clients? I'd prefer filtering that at the client end, and\n> allowing capable clients to do whatever they please.\n\nNo, you're missing the point: the backend itself uses shell escapes\nfor some whole-database functions. IIRC, database creation is done with\nsomething like\n\tsystem(\"cp -r base/template1 base/newdb\");\nSo shell metacharacters in database names are Bad News. We need to\nput in a filter that will prevent appearances of / | ` etc in DB names.\nI assume that's what Peter was doing.\n\nI think we may have some bugs with metacharacters in table names (which\nbecome filenames) as well, but haven't really pushed on it.\n\n> thinking about recoding ACLs as a two-field type to enforce an\n> unambigous interpretation of the two fields. Interested??\n\nSeems like a good idea.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Dec 1999 11:45:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] createdb/dropdb fixes "
},
{
"msg_contents": "On 1999-12-14, Thomas Lockhart mentioned:\n\n> That seems to be a bit heavy handed; why bother disallowing things in\n> the backend because some (small number of) shell-based tools have\n> trouble as clients? I'd prefer filtering that at the client end, and\n\nIt's really about statements like this:\n\n\tsnprintf(buf, sizeof(buf), \"rm -rf '%s'\", path);\n\nThere is no way around disallowing single-quotes unless you double quote\nthe argument and be very careful with the escaping. Of course this\nparticular case might as well use unlink(), but there is a recursive copy\nof the template1 dir which would take a little more work (opendir(),\netc.). At that point we could lift that restriction.\n\n> permissions. I haven't looked at the code in a long time, but was\n> thinking about recoding ACLs as a two-field type to enforce an\n> unambigous interpretation of the two fields. Interested??\n\nI've been puzzled about this for a long time, is there a reason this is\nstored as an array at all? Why not use tuples like\n\taclperm\t\tchar?\n\taclrelation\toid\n\taclentity\toid\t/* user or group sysid */\n\taclisgroup\tbool\t/* is it a user or group? */\n\nAnd then it looks like this:\naclperm|aclrel|acluser|aclisgroup\n-------+------+-------+----------\nR |177777| 100|f\nW |177777| 100|f\nR |177777| 120|f\nR |188888| 5|t\n\nThat's much cleaner. GRANT and REVOKE would be reduced to simple\ninsert/delete equivalents. I'm not sure how the actual authentication code\nwould like that overheadwise, though.\n\nA related issue is pg_group, which I'm currently working on. Those arrays\nare killing me. A simple user/group associating relation would be much\nnicer.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Wed, 15 Dec 1999 00:01:39 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] createdb/dropdb fixes"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> It's really about statements like this:\n\n> \tsnprintf(buf, sizeof(buf), \"rm -rf '%s'\", path);\n\n> There is no way around disallowing single-quotes unless you double quote\n> the argument and be very careful with the escaping.\n\nYes. In fact, I'd argue for filtering the names more heavily than that;\njust to take a for-example, Bad Things would ensue if we accepted a\ndatabase name of \"..\".\n\nIt is easy to devise cases in which accepting leading \".\" or embedded \"/\"\nleads to disaster; if you think those are OK, allow me to destroy your\ninstallation for you ;-). I haven't yet thought of a way to cause\ntrouble with a back-quote in a DB name (given that single quotes are\ndisallowed) ... but I bet some enterprising hacker can find one.\n\nBeyond the bare minimum security issues, I also think we should take\npity on the poor dbadmin who may have to be looking at these\nsubdirectories or filenames. Is it really a good idea to allow carriage\nreturns or other control characters in file/directory names? Is it\neven a good idea to allow spaces? I don't think so. If we were not\nusing these names for Unix file/dir names then we could allow anything\nwe felt like --- but since we are using them that way, I think that the\nsafest path is to only allow things that are going to look like ordinary\nfile names when used in Unix shell commands. Otherwise there's still a\nbig chance of trouble if the dbadmin gets a little bit careless.\n\n> Of course this particular case might as well use unlink(),\n\nNot unless your system's unlink is much different from mine's...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Dec 1999 18:44:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] createdb/dropdb fixes "
},
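Given the shell commands the backend still issues for createdb/dropdb, the practical defense Peter and Tom are circling around is a whitelist check on the name before it ever reaches snprintf()/system(). A minimal sketch of such a filter is below; the exact character policy is the open question in this thread, so the allowed set and the length limit here are examples only, not a decision.

    #include <ctype.h>
    #include <string.h>

    /* Reject database names that could misbehave when interpolated into a
     * shell command or used as a directory name.  Example policy only. */
    static int db_name_is_safe(const char *name)
    {
        size_t len = strlen(name);

        if (len == 0 || len > 64)
            return 0;
        if (strcmp(name, ".") == 0 || strcmp(name, "..") == 0)
            return 0;

        for (size_t i = 0; i < len; i++)
        {
            unsigned char c = (unsigned char) name[i];

            /* No control characters, and none of the characters that are
             * special to the shell or to the filesystem layout. */
            if (c < 32 || strchr("'\"`$\\/~", c) != NULL)
                return 0;
        }
        return 1;
    }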
{
"msg_contents": "Here's another anomaly I've run across in porting the Ars Digita\nCommunity System web development toolkit from Oracle to Postgres:\n\nIn oracle, if we do:\n\nSQL> create table foo(i integer, j integer);\n\nTable created.\n\nthen select like this, we get no rows returned:\n\nSQL> select i, count(*) from foo group by i;\n\nno rows selected\n\nIn postgres, the same select on the same empty table yields:\n\ntest=> select i, count(*) from foo group by i;\ni|count\n-+-----\n | 0\n(1 row)\n\ntest=> \n\nWhich is correct? It's the count() causing the row to be output,\napparently PostgreSQL feels obligated to return at least one \nvalue for the aggragate and since there are no groups has to\ninvent one. I assume this is wrong, and that Oracle's right, but\nhaven't dug through Date's book to verify.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 14 Dec 1999 16:11:39 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Bug or feature? select, count(*), group by and empty tables"
},
{
"msg_contents": "At 04:11 PM 12/14/99 -0800, Don Baccus wrote:\n>Here's another anomaly I've run across in porting the Ars Digita\n>Community System web development toolkit from Oracle to Postgres:\n\n(always returning a row for a select count(*) ... group by query\n even if there aren't any groups)\n\nOK, I've gotten the latest sources with the bright idea of digging\naround, and in nodeAgg.c the routine ExecAgg.c has been somewhat\nrewritten, with comments that make it clear that this bug's already\nbeen fixed.\n\nI should build myself a latest version so I can filter out non-problems\nbefore reporting them, sorry...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 14 Dec 1999 16:32:55 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug or feature? select, count(*), group by and\n\tempty tables"
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> (always returning a row for a select count(*) ... group by query\n> even if there aren't any groups)\n\nYah: if you have aggregates and no GROUP, for empty input you should\nget one row out with \"default\" results (0 for COUNT, null for most other\naggregates). But for GROUP mode, no rows in should yield no rows out,\naggregates or no. It took a fair amount of arguing before everyone was\nconvinced that that is the correct interpretation of the spec ;-),\nwhich is why it's only been fixed recently.\n\n> OK, I've gotten the latest sources with the bright idea of digging\n> around, and in nodeAgg.c the routine ExecAgg.c has been somewhat\n> rewritten, with comments that make it clear that this bug's already\n> been fixed.\n> I should build myself a latest version so I can filter out non-problems\n> before reporting them, sorry...\n\nNot a problem. Bug reports on the latest release are fair game.\nIf it's already been fixed in current sources, whoever fixed it\nwill surely take pleasure in telling you so...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Dec 1999 00:52:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug or feature? select, count(*),\n\tgroup by and empty tables"
},
{
"msg_contents": "On 1999-12-14, Tom Lane mentioned:\n\n> Yes. In fact, I'd argue for filtering the names more heavily than that;\n> just to take a for-example, Bad Things would ensue if we accepted a\n> database name of \"..\".\n\nMy rm refuses to remove '..' and '.'.\n\n> It is easy to devise cases in which accepting leading \".\" or embedded \"/\"\n> leads to disaster; if you think those are OK, allow me to destroy your\n> installation for you ;-). I haven't yet thought of a way to cause\n> trouble with a back-quote in a DB name (given that single quotes are\n> disallowed) ... but I bet some enterprising hacker can find one.\n\nThe slash problem will disappear (I hope) when I fix that alternate\nlocation issue.\n\n> Beyond the bare minimum security issues, I also think we should take\n> pity on the poor dbadmin who may have to be looking at these\n> subdirectories or filenames. Is it really a good idea to allow carriage\n> returns or other control characters in file/directory names? Is it\n> even a good idea to allow spaces? I don't think so. If we were not\n\nSpaces why not? I use spaces all the time in filenames. But perhaps we\nshould make a definite list of things we won't allow, such as dots (.),\nslashes (/), tildes (~), etc., and everything below ASCII 32. But limiting\nit to a finite list of characters would really be a blow to people using\nother character sets.\n\n> > Of course this particular case might as well use unlink(),\n> \n> Not unless your system's unlink is much different from mine's...\n\nIs it just me or is your system intentionally designed to be different\nfrom anybody elses? :) Last time I checked rm does call unlink. Relying on\nsh-utils sort of commands has its own set of problems. For example in the\n6.5 source, run the postmaster on a terminal, revoke all permissions on a\ndatabase directory (chmod a-rwx testdb) and try to drop it. My rm will\nprompt on the postmaster terminal for verification while the whole thing\nhangs ...\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Fri, 17 Dec 1999 01:31:23 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] createdb/dropdb fixes "
}
] |
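The thread above argues for restricting database names to characters that stay harmless when the name is later used as a Unix file or directory name in shell commands. A minimal sketch of that kind of whitelist check is given here for illustration only; the function name and the exact character set are assumptions made by the editor, not code from the patch under discussion.

    #include <ctype.h>
    #include <string.h>

    /*
     * Sketch only: accept database names made of letters, digits and
     * underscores, and refuse "." and "..".  Anything else (slashes,
     * dots, quotes, back-quotes, spaces, control characters) is rejected
     * before the name can ever reach a shell command or the filesystem.
     */
    static int
    db_name_is_safe(const char *name)
    {
        const char *p;

        if (name == NULL || *name == '\0')
            return 0;                       /* empty name */
        if (strcmp(name, ".") == 0 || strcmp(name, "..") == 0)
            return 0;                       /* would escape the data directory */
        for (p = name; *p; p++)
        {
            if (!isalnum((unsigned char) *p) && *p != '_')
                return 0;
        }
        return 1;
    }

A check like this would run in createdb/dropdb processing before any path is built; how strict the whitelist should be (e.g. whether to allow spaces or non-ASCII characters) is exactly the open question in the thread.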
[
{
"msg_contents": "\nJust a quick question. Am trying to debug some problems with UdmSearch,\nand the following being pumped out of hte backend (have debug mode's\nturned on) don't look right...\n\nStartTransactionCommand^M\nquery: BEGIN WORK^M\nProcessUtility: BEGIN WORK^M\nCommitTransactionCommand^M\nStartTransactionCommand^M\nquery: INSERT INTO dict (url_id,word,intag) VALUES(810,'date',3)^M\nProcessQuery^M\nCommitTransactionCommand^M\nStartTransactionCommand^M\nquery: INSERT INTO dict (url_id,word,intag) VALUES(810,'support',3)^M\nProcessQuery^M\nCommitTransactionCommand^M\nStartTransactionCommand^M\nquery: INSERT INTO dict (url_id,word,intag) VALUES(810,'postgresql',1)^M\nProcessQuery^M\nCommitTransactionCommand^M\nStartTransactionCommand^M\nquery: INSERT INTO dict (url_id,word,intag) VALUES(810,'user',1)^M\nProcessQuery^M\nCommitTransactionCommand^M\nStartTransactionCommand^M\nquery: INSERT INTO dict (url_id,word,intag) VALUES(810,'s',1)^M\nProcessQuery^M\nCommitTransactionCommand^M\n\n\nIf they issue a 'BEGIN WORK', shouldn't it eliminate all the\n'{Commit,Start}TransactionCommand's?\n\nThe man page shows: begin [transaction|work], but doesn't tell the\ndifference between 'transaction' and 'work'...\n\nComments?\n\nThanks...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 14 Dec 1999 13:27:24 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Transactions ..."
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> and the following being pumped out of hte backend (have debug mode's\n\n> StartTransactionCommand^M\n> query: BEGIN WORK^M\n> ProcessUtility: BEGIN WORK^M\n> CommitTransactionCommand^M\n> StartTransactionCommand^M\n> query: INSERT INTO dict (url_id,word,intag) VALUES(810,'date',3)^M\n> ProcessQuery^M\n> CommitTransactionCommand^M\n> StartTransactionCommand^M\n\n> If they issue a 'BEGIN WORK', shouldn't it eliminate all the\n> '{Commit,Start}TransactionCommand's?\n\nNo. Those routines are still called, they just behave differently.\n\nYou're not the first one to be confused by those debug messages, IIRC.\nIt would probably make sense to rip those TPRINTFs out of postgres.c,\nwhich only knows that it's calling those 2 routines, and put TPRINTFs\ninto the routines themselves (in xact.c) that show what state transition\nis actually being taken...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Dec 1999 15:11:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Transactions ... "
}
] |
[
{
"msg_contents": "\nPretty much reproducable each time, and nothing other then that in the\nlogs...I can restart the process, let it run and after awhile, it does it\nagain...\n\nI'm trying to get the UdmSearch program in place to replace ht/Dig, and\nthis is from the program that is creating the databases:\n\nUdmSearch[47380]: Error: Error: 'pqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\n\nI have a pg_options file set at:\n\nverbose=2\nquery\nhostlookup\nshowportnumber\nsyslog=2\n\n\nbut all that appears to show up is:\n\nStartTransactionCommand\nquery: INSERT INTO dict (url_id,word,intag) VALUES(1248,'enough',1)\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nquery: INSERT INTO dict (url_id,word,intag) VALUES(1248,'information',1)\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nquery: INSERT INTO dict (url_id,word,intag) VALUES(1248,'require',1)\nProcessQuery\nproc_exit(0) [#0]\nshmem_exit(0) [#0]\nexit(0)\n\nNone production database right now, so its pretty much open game for\ntrying things with it...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 14 Dec 1999 14:13:53 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "[6.5.3] FATAL 1: my bits moved right off the end of the world!"
},
{
"msg_contents": "Isn't that what you get after a btree index has gotten corrupted?\n(Probably by trying to insert a too-large index entry?)\n\nI thought we'd put in a defense against oversize index entries,\nbut maybe it hasn't made it to the REL6_5 series...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Dec 1999 15:14:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [6.5.3] FATAL 1: my bits moved right off the end of the\n\tworld!"
},
{
"msg_contents": "On Tue, 14 Dec 1999, Tom Lane wrote:\n\n> Isn't that what you get after a btree index has gotten corrupted?\n> (Probably by trying to insert a too-large index entry?)\n> \n> I thought we'd put in a defense against oversize index entries,\n> but maybe it hasn't made it to the REL6_5 series...\n\nSuggestion on what to check for this? if it restart the process, it\nappears to resume from where it left off, so I'm guessing it isn't in the\napplication itself...?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 14 Dec 1999 16:52:50 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] [6.5.3] FATAL 1: my bits moved right off the end of\n\tthe world!"
}
] |
[
{
"msg_contents": "As threatened, here's the array inquisition. If have already found out how\narrays work in essence, but the could anybody tell me how I create a new\narray? I mean I could just allocate a chunk of memory and set all the\nflags and sizes myself, but that doesn't seem very clean.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Tue, 14 Dec 1999 20:26:33 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Arrays"
}
] |
[
{
"msg_contents": "[ I'm redirecting this to pg-hackers since it doesn't look like an\ninterfaces problem ... ]\n\nMatthew Hagerty <[email protected]> writes:\n> The app is written in PHP3-3.0.12 compiled as an Apache-1.3.6 module. The\n> OS is FreeBSD-3.1-Release with GCC-2.7.2.1 and a PostgreSQL-6.5.1 backend.\n\nYou should probably update to 6.5.3 for starters. I'm not all that\nhopeful that any of the bugfixes in 6.5.3 will fix this, but it'd be\npretty silly not to try it before investing a lot of work running down\nthe problem.\n\n> The app went online on August 30, 1999 and has run without incident until\n> yesterday. At about 10am Dec, 13th, 1999 one of the programmers noticed\n> that none of the forum messages would come up. I went to the console of\n> the server and saw this message about 10 or 15 times:\n\n> Dec 13 10:35:56 redbox /kernel: pid 13856 (postgres), uid 1002: exited on\n> signal 11 (core dumped)\n\n> A ps -xa revealed about 15 or so postgres processes! I did not think\n> postgres made any child processes?!?! So I stopped the web server and\n> killed the main postgres process which seemed to kill all the other\n> postgres processes. I then tried to restart postgres and got an error\n> message that was something like:\n\n> IpcSemaphore??? - Key=54321234 Max\n\nYou could probably have recovered from this with \"ipcclean\" instead of a\nreboot; it sounds like the postmaster failed to release the shared\nsemaphores before exiting. Which it should have, unless maybe you used\nkill -9 on it...\n\n> At 9:36am on the 14th it happened again. Again I was unable to recover the\n> data and had to rebuild the data directory. I did not delete the data\n> directory this time, I just moved it to another directory so I would have\n> it. I also have the core dumps. The only file I had to delete was the\n> pg_log in the data directory. What is this file? It had grown to 700Meg\n> in under 24 hours!! Also, the core dump for the main app grew from 2.7Meg\n> to over 80Meg while I was trying to dump the data.\n\nSure sounds like a corrupted-data problem. Can you use gdb on the\ncorefiles to get a backtrace of what they were doing?\n\n> My biggest hang-up is why all of a sudden?\n\nGood question. We'll probably know the answer when we find the problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Dec 1999 16:39:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Backend core dump, Please help, Urgent! "
},
{
"msg_contents": "> Sure sounds like a corrupted-data problem. Can you use gdb on the\n> corefiles to get a backtrace of what they were doing?\n> \n> > My biggest hang-up is why all of a sudden?\n> \n> Good question. We'll probably know the answer when we find the problem.\n\nBesides the problem Tom has pointed out its possibility, there is a\nknown problem with 6.5.x on FreeBSD. It would be rather important,\nsince it results in a core dump as well. The problem occurs while a\nbackend is waiting for acquiring a lock. Thus it tends to happen on\nrelatively heavy load (I observed the problem starting with 4\nconcurrent transactions). As far as I know, Linux does not have the\nproblem at all, but FreeBSD does. I'm not sure about other\nplatforms. Solaris seems to be not suffered.\n\nYou could try following patch. It was made for 6.5.3, but you could\napply it to 6.5.1 or 6.5.2 as well. Current has been already fixed\nwith more complex and long-term-aid solution. But I would prefer to\nminimize the impact to existing releases. Keeping that in mind, I have\nmade the patch the simplest.\n--\nTatsuo Ishii\n\n---------------------------- cut here -----------------------------\n*** postgresql-6.5.3/src/backend/storage/lmgr/lock.c~\tSat May 29 15:14:42 1999\n--- postgresql-6.5.3/src/backend/storage/lmgr/lock.c\tMon Dec 13 16:45:47 1999\n***************\n*** 940,946 ****\n {\n \tPROC_QUEUE *waitQueue = &(lock->waitProcs);\n \tLOCKMETHODTABLE *lockMethodTable = LockMethodTable[lockmethod];\n! \tchar\t\told_status[64],\n \t\t\t\tnew_status[64];\n \n \tAssert(lockmethod < NumLockMethods);\n--- 940,946 ----\n {\n \tPROC_QUEUE *waitQueue = &(lock->waitProcs);\n \tLOCKMETHODTABLE *lockMethodTable = LockMethodTable[lockmethod];\n! \tstatic char\t\told_status[64],\n \t\t\t\tnew_status[64];\n \n \tAssert(lockmethod < NumLockMethods);\n",
"msg_date": "Wed, 15 Dec 1999 20:43:33 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Backend core dump, Please help, Urgent! "
},
{
"msg_contents": "Thanks for the patch. I think I'm going to upgrade to FreeBSD-3.3 and \nPG-6.5.3 tonight. Will I still need the patch with 6.5.3? I'm also going \nto do a connection test on another offline server to see if it is indeed a \nload problem. I'll post the results if anyone is interested.\nThank you for the help, \nMatthew\n\n\nAt 08:43 PM 12/15/99 +0900, Tatsuo Ishii wrote:\n>> Sure sounds like a corrupted-data problem. Can you use gdb on the\n>> corefiles to get a backtrace of what they were doing?\n>> \n>> > My biggest hang-up is why all of a sudden?\n>> \n>> Good question. We'll probably know the answer when we find the problem.\n>\n>Besides the problem Tom has pointed out its possibility, there is a\n>known problem with 6.5.x on FreeBSD. It would be rather important,\n>since it results in a core dump as well. The problem occurs while a\n>backend is waiting for acquiring a lock. Thus it tends to happen on\n>relatively heavy load (I observed the problem starting with 4\n>concurrent transactions). As far as I know, Linux does not have the\n>problem at all, but FreeBSD does. I'm not sure about other\n>platforms. Solaris seems to be not suffered.\n>\n>You could try following patch. It was made for 6.5.3, but you could\n>apply it to 6.5.1 or 6.5.2 as well. Current has been already fixed\n>with more complex and long-term-aid solution. But I would prefer to\n>minimize the impact to existing releases. Keeping that in mind, I have\n>made the patch the simplest.\n>--\n>Tatsuo Ishii\n>\n>---------------------------- cut here -----------------------------\n>*** postgresql-6.5.3/src/backend/storage/lmgr/lock.c~\tSat May 29 15:14:42 1999\n>--- postgresql-6.5.3/src/backend/storage/lmgr/lock.c\tMon Dec 13 16:45:47 1999\n>***************\n>*** 940,946 ****\n> {\n> \tPROC_QUEUE *waitQueue = &(lock->waitProcs);\n> \tLOCKMETHODTABLE *lockMethodTable = LockMethodTable[lockmethod];\n>! \tchar\t\told_status[64],\n> \t\t\t\tnew_status[64];\n> \n> \tAssert(lockmethod < NumLockMethods);\n>--- 940,946 ----\n> {\n> \tPROC_QUEUE *waitQueue = &(lock->waitProcs);\n> \tLOCKMETHODTABLE *lockMethodTable = LockMethodTable[lockmethod];\n>! \tstatic char\t\told_status[64],\n> \t\t\t\tnew_status[64];\n> \n> \tAssert(lockmethod < NumLockMethods);\n\n",
"msg_date": "Wed, 15 Dec 1999 14:57:04 -0500",
"msg_from": "Matthew Hagerty <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Backend core dump, Please help, Urgent! "
}
] |
[
{
"msg_contents": "Greetings,\n\nI think Tom Lane forwarded this over from [INTERFACES] (thanks Tom!), but I\nthought I should post it since it is my problem and not Tom's.\n\nOriginal post as follows:\n-------------------------\n\nIf anyone could help me figure out what is going on with my PostgreSQL\nbackend I would greatly appreciate it!! I'll try to be brief and to the point.\n\nI work for a small company and we created an online app for another small\ncompany that has about 300 members who access the site. I think the record\nfor simultaneous logins is about 15, so the load is not really that great.\nThere are about 3000 to 5000 records added per month.\n\nThe app is written in PHP3-3.0.12 compiled as an Apache-1.3.6 module. The\nOS is FreeBSD-3.1-Release with GCC-2.7.2.1 and a PostgreSQL-6.5.1 backend.\nI start the postgres process at startup like this:\n\nsu postgres -c \"/usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data -i\n> /usr/local/pgsql/postgres.log 2>&1 &\"\n\nThe server is an Intel R440LX Motherboard with two P2/333, 128Meg ECC DIMM,\nand three 4.5G WD SCSI drives.\n\nThe primary database and main app code were designed and written in-house,\nhowever we do use a PHP3 program called Phorum to implement a message forum\nfor the users. The main app database and the phorum database are two\nseparate databases.\n\nThe app went online on August 30, 1999 and has run without incident until\nyesterday. At about 10am Dec, 13th, 1999 one of the programmers noticed\nthat none of the forum messages would come up. I went to the console of\nthe server and saw this message about 10 or 15 times:\n\nDec 13 10:35:56 redbox /kernel: pid 13856 (postgres), uid 1002: exited on\nsignal 11 (core dumped)\n\nA ps -xa revealed about 15 or so postgres processes! I did not think\npostgres made any child processes?!?! So I stopped the web server and\nkilled the main postgres process which seemed to kill all the other\npostgres processes. I then tried to restart postgres and got an error\nmessage that was something like:\n\nIpcSemaphore??? - Key=54321234 Max\n\nI could kick myself for not recording the exact message. Something to do\nwith shared memory I think. Never the less, postgres was not going to\nstart back up and I did not know what the error was telling me, so I had to\nreboot (uptime said 143 days).\n\nWhen the system came back up postgres started and I tried to check if there\nwas a post to the phorum database that may have caused the core dump. I\nexecuted 2 queries and then tried to query the main app database from\nanother terminal. The main app queries were not executing, so I did a ps\n-xa to see what processes were running and there were exactly 2 core dumped\nsig 11 postgres processes!! So I did another query on the phorum database\nand got a 3rd core dumped process!\n\nAt this point I killed all the postgres processes, restarted postgres and\ntried to do a dump on the main app database. pg_dump gave an error similar\nto this (I kick myself again):\n\nTuple 0:0 invalid, can't dump.\n\nSo, pg_dump was not going to give me a backup to that point, so I stopped\npostgres and issued:\n\n# rm -r data\n# initdb\n# createdb ipa\n# createdb phorum\n\nThen I used the previous day's backup for the main app, and just created\nthe table structure for the phourm since we do not backup that data.\nRestarted the postgres and the web server and all seemed fine... until today.\n\nAt 9:36am on the 14th it happened again. Again I was unable to recover the\ndata and had to rebuild the data directory. 
I did not delete the data\ndirectory this time, I just moved it to another directory so I would have\nit. I also have the core dumps. The only file I had to delete was the\npg_log in the data directory. What is this file? It had grown to 700Meg\nin under 24 hours!! Also, the core dump for the main app grew from 2.7Meg\nto over 80Meg while I was trying to dump the data.\n\nMy biggest hang-up is why all of a sudden? We literally did not change\nanything! The system was working fine since August. And now, after\ncreating new databases, it does it again in less than 24 hours! Also, is\nthere some reason why the log file created by postgres does not timestamp\nits entries?\n\nI will provide any table structures, core files, server logs, etc. if\nneeded. Anything that might give me an idea as to what is going on.\n\nThank you,\nMatthew\n\n\nMatthew Hagerty\nVenux Technology Group\[email protected]\n616.458.9800 \n",
"msg_date": "Tue, 14 Dec 1999 17:45:38 -0500",
"msg_from": "Matthew Hagerty <[email protected]>",
"msg_from_op": true,
"msg_subject": "Backend core dump, Please help, Urgent!"
}
] |
[
{
"msg_contents": "Greetings,\n\nThis follows a post I just made with the subject line: [Backend core dump,\nPlease help, Urgent!]\n\nWe just had the same thing occur on a completely different server! It\nhappened on our development server which is different hardware and\ndifferent version of FreeBSD (3.2-Release) than on our production server.\nHowever both are running pg-6.5.1. Both of these machines have been\nrunning for over 6 months without incident.\n\nOne of the programmers was in psql, he created this table:\n\nCREATE TABLE \"instacom\" ( \n\"submit_date\" date, \n\"instacom_id\" int4, \n\"message\" text); \nCREATE INDEX \"instacom_submit_date\" on \"instacom\" using btree \n\"submit_date\" \"date_ops\" );\n\nI'm not sure how much data he had in the table, but I'm sure it was not\nmore than a few records, but when he submitted a query on this table\n*only*, other tables queried just fine, the backend crashed and this\nmessage displayed on the console:\n\nDec 14 16:00:56 bluebox /kernel: pid 79923 (postgres), uid 1001: exited on\nsignal 11 \nDec 14 16:01:14 bluebox /kernel: pid 79925 (postgres), uid 1001: exited on \nsignal 11 \nDec 14 16:03:06 bluebox /kernel: pid 79940 (postgres), uid 1001: exited on \nsignal 10\n\nThe signal 10 kind of caught my eye since we had only seen signal 11s so\nfar. He restarted the server, delete the table, and recreated it like this:\n\nCREATE TABLE \"instacom\" ( \n\"submit_date\" date, \n\"instacom_id\" int4); \nCREATE INDEX \"instacom_submit_date\" on \"instacom\" using btree \n\"submit_date\" \"date_ops\" );\n\nThe backend has not yet crashed. We are in the process of recreating the\nold structure to see if we can reproduce the error. There are no error\nmessages in our postgres.log. I will gladly provide any log files, core\ndumps, make config changes, etc.\n\nAny insight would be greatly appreciated.\n\nThank you,\nMatthew Hagerty\n",
"msg_date": "Tue, 14 Dec 1999 18:11:50 -0500",
"msg_from": "Matthew Hagerty <[email protected]>",
"msg_from_op": true,
"msg_subject": "Backend core dump, different server!"
}
] |
[
{
"msg_contents": "Hi,\n\nI got seriuos problem this night with postgres which is running as \ndb backend to apache. I have cron job which vacuuming database\nevery hour and it worked for weeks without problem\n(well, there is problem with concurrent processes under high load,\nbut this night was very quiet)\n\nHere is the processes currently seen:\n\n 167 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd nature idle \n 168 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd nature idle \n 169 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd nature idle \n 170 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd nature idle \n 171 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd nature idle \n26578 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd nature idle \n29372 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd discovery idle\n\n\nfrom apache's error log:\n[Wed Dec 15 02:10:08 1999] [error] DBI->connect failed: connectDB() -- couldn't\n send startup packet: errno=32\nBroken pipe\n at /opt/perl5/lib/site_perl/5.005/Apache/DBI.pm line 138\n\n[Wed Dec 15 02:10:08 1999] [error] DBI->connect failed: pqReadData() -- backend \nclosed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\n at /opt/perl5/lib/site_perl/5.005/Apache/DBI.pm line 138\n\n[Wed Dec 15 02:10:43 1999] [error] DBI->connect failed: connectDB() -- connect()\n failed: No such file or directory\nIs the postmaster running at 'localhost' and accepting connections on Unix socke\nt '5432'?\n at /opt/perl5/lib/site_perl/5.005/Apache/DBI.pm line 138\n\n\nfortunately postmaster was started with debug option:\n\nStartTransactionCommand\nquery: SET client_encoding = 'KOI8'\nProcessUtility: SET client_encoding = 'KOI8'\nCommitTransactionCommand\npostmaster: StreamConnection: accept: Invalid argument\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading 6\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading 6\nFATAL 1: ReleaseLruFile: No opened files - no one can be closed\nproc_exit(0) [#0]\nshmem_exit(0) [#0]\nexit(0)\n\nAha,\n\n>From system log files I found probable explanation - file table overflow !\nCould this be a reason of postmaster dead and how to avoid this ?\n\nDec 15 01:47:27 zeus kernel: Unable to load interpreter\nDec 15 02:09:28 zeus xntpd[103]: kernel pll status change 89\nDec 15 02:10:08 zeus squid[133]: file_open: error opening file /d4/squid/cache/0\n2/69/00001692: (23) File table overflow \nDec 15 02:10:08 zeus squid[133]: storeSwapInStart: Failed for 'http://xyz.tvcom.\nru/99/12/100_2.jpg' \nDec 15 02:10:12 zeus kernel: Unable to load interpreter\nDec 15 03:26:16 zeus xntpd[103]: kernel pll status change 89\n\n\n\tRegards,\n\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 15 Dec 1999 11:15:38 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "postmaster dies (6.5.3)"
},
{
"msg_contents": "Thanks for the patch. I think I'm going to upgrade to FreeBSD-3.3 and\nPG-6.5.3 tonight. Will I still need the patch with 6.5.3? I'm also going\nto do a connection test on another offline server to see if it is indeed a\nload problem. I'll post the results if anyone is interested.\n\nThank you for the help,\nMatthew\n\nAt 11:15 AM 12/15/99 +0300, Oleg Bartunov wrote:\n>Hi,\n>\n>I got seriuos problem this night with postgres which is running as \n>db backend to apache. I have cron job which vacuuming database\n>every hour and it worked for weeks without problem\n>(well, there is problem with concurrent processes under high load,\n>but this night was very quiet)\n>\n>Here is the processes currently seen:\n>\n> 167 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd nature\nidle \n> 168 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd nature\nidle \n> 169 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd nature\nidle \n> 170 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd nature\nidle \n> 171 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd nature\nidle \n>26578 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd nature\nidle \n>29372 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd discovery \n>idle\n>\n>\n>from apache's error log:\n>[Wed Dec 15 02:10:08 1999] [error] DBI->connect failed: connectDB() -- \n>couldn't\n> send startup packet: errno=32\n>Broken pipe\n> at /opt/perl5/lib/site_perl/5.005/Apache/DBI.pm line 138\n>\n>[Wed Dec 15 02:10:08 1999] [error] DBI->connect failed: pqReadData() -- \n>backend \n>closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> at /opt/perl5/lib/site_perl/5.005/Apache/DBI.pm line 138\n>\n>[Wed Dec 15 02:10:43 1999] [error] DBI->connect failed: connectDB() -- \n>connect()\n> failed: No such file or directory\n>Is the postmaster running at 'localhost' and accepting connections on Unix \n>socke\n>t '5432'?\n> at /opt/perl5/lib/site_perl/5.005/Apache/DBI.pm line 138\n>\n>\n>fortunately postmaster was started with debug option:\n>\n>StartTransactionCommand\n>query: SET client_encoding = 'KOI8'\n>ProcessUtility: SET client_encoding = 'KOI8'\n>CommitTransactionCommand\n>postmaster: StreamConnection: accept: Invalid argument\n>/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading 6\n>/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading 6\n>FATAL 1: ReleaseLruFile: No opened files - no one can be closed\n>proc_exit(0) [#0]\n>shmem_exit(0) [#0]\n>exit(0)\n>\n>Aha,\n>\n>>From system log files I found probable explanation - file table overflow !\n>Could this be a reason of postmaster dead and how to avoid this ?\n>\n>Dec 15 01:47:27 zeus kernel: Unable to load interpreter\n>Dec 15 02:09:28 zeus xntpd[103]: kernel pll status change 89\n>Dec 15 02:10:08 zeus squid[133]: file_open: error opening file \n>/d4/squid/cache/0\n>2/69/00001692: (23) File table overflow \n>Dec 15 02:10:08 zeus squid[133]: storeSwapInStart: Failed for \n>'http://xyz.tvcom.\n>ru/99/12/100_2.jpg' \n>Dec 15 02:10:12 zeus kernel: Unable to load interpreter\n>Dec 15 03:26:16 zeus xntpd[103]: kernel pll status change 89\n>\n>\n>\tRegards,\n>\n>\t\tOleg\n>_____________________________________________________________\n>Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n>Sternberg Astronomical Institute, Moscow University (Russia)\n>Internet: [email protected], http://www.sai.msu.su/~megera/\n>phone: +007(095)939-16-83, 
+007(095)939-23-83\n>\n>\n>************\n\n",
"msg_date": "Wed, 15 Dec 1999 14:40:32 -0500",
"msg_from": "Matthew Hagerty <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postmaster dies (6.5.3)"
},
{
"msg_contents": "Umm, sorry, I hit reply to the wrong message...\n\nMatthew\n\nAt 02:40 PM 12/15/99 -0500, Matthew Hagerty wrote:\n>Thanks for the patch. I think I'm going to upgrade to FreeBSD-3.3 and\n>PG-6.5.3 tonight. Will I still need the patch with 6.5.3? I'm also going\n>to do a connection test on another offline server to see if it is indeed a\n>load problem. I'll post the results if anyone is interested.\n>\n>Thank you for the help,\n>Matthew\n>\n>At 11:15 AM 12/15/99 +0300, Oleg Bartunov wrote:\n>>Hi,\n>>\n>>I got seriuos problem this night with postgres which is running as \n>>db backend to apache. I have cron job which vacuuming database\n>>every hour and it worked for weeks without problem\n>>(well, there is problem with concurrent processes under high load,\n>>but this night was very quiet)\n>>\n>>Here is the processes currently seen:\n>>\n>> 167 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd nature\n>idle \n>> 168 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd nature\n>idle \n>> 169 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd nature\n>idle \n>> 170 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd nature\n>idle \n>> 171 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd nature\n>idle \n>>26578 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd nature\n>idle \n>>29372 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd discovery \n>>idle\n>>\n>>\n>>from apache's error log:\n>>[Wed Dec 15 02:10:08 1999] [error] DBI->connect failed: connectDB() -- \n>>couldn't\n>> send startup packet: errno=32\n>>Broken pipe\n>> at /opt/perl5/lib/site_perl/5.005/Apache/DBI.pm line 138\n>>\n>>[Wed Dec 15 02:10:08 1999] [error] DBI->connect failed: pqReadData() -- \n>>backend \n>>closed the channel unexpectedly.\n>> This probably means the backend terminated abnormally\n>> before or while processing the request.\n>> at /opt/perl5/lib/site_perl/5.005/Apache/DBI.pm line 138\n>>\n>>[Wed Dec 15 02:10:43 1999] [error] DBI->connect failed: connectDB() -- \n>>connect()\n>> failed: No such file or directory\n>>Is the postmaster running at 'localhost' and accepting connections on Unix \n>>socke\n>>t '5432'?\n>> at /opt/perl5/lib/site_perl/5.005/Apache/DBI.pm line 138\n>>\n>>\n>>fortunately postmaster was started with debug option:\n>>\n>>StartTransactionCommand\n>>query: SET client_encoding = 'KOI8'\n>>ProcessUtility: SET client_encoding = 'KOI8'\n>>CommitTransactionCommand\n>>postmaster: StreamConnection: accept: Invalid argument\n>>/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading 6\n>>/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading 6\n>>FATAL 1: ReleaseLruFile: No opened files - no one can be closed\n>>proc_exit(0) [#0]\n>>shmem_exit(0) [#0]\n>>exit(0)\n>>\n>>Aha,\n>>\n>>>From system log files I found probable explanation - file table overflow !\n>>Could this be a reason of postmaster dead and how to avoid this ?\n>>\n>>Dec 15 01:47:27 zeus kernel: Unable to load interpreter\n>>Dec 15 02:09:28 zeus xntpd[103]: kernel pll status change 89\n>>Dec 15 02:10:08 zeus squid[133]: file_open: error opening file \n>>/d4/squid/cache/0\n>>2/69/00001692: (23) File table overflow \n>>Dec 15 02:10:08 zeus squid[133]: storeSwapInStart: Failed for \n>>'http://xyz.tvcom.\n>>ru/99/12/100_2.jpg' \n>>Dec 15 02:10:12 zeus kernel: Unable to load interpreter\n>>Dec 15 03:26:16 zeus xntpd[103]: kernel pll status change 89\n>>\n>>\n>>\tRegards,\n>>\n>>\t\tOleg\n>>_____________________________________________________________\n>>Oleg Bartunov, sci.researcher, 
hostmaster of AstroNet,\n>>Sternberg Astronomical Institute, Moscow University (Russia)\n>>Internet: [email protected], http://www.sai.msu.su/~megera/\n>>phone: +007(095)939-16-83, +007(095)939-23-83\n>>\n>>\n>>************\n>\n>\n>************\n\n",
"msg_date": "Wed, 15 Dec 1999 14:58:22 -0500",
"msg_from": "Matthew Hagerty <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postmaster dies (6.5.3)"
}
] |
[
{
"msg_contents": "Can anyone comment on this? I don't know the answer, and I know he is\nwaiting for help. This originally appeared on the patches list.\n\n> \n> These patches to libpq allow a user to toggle the blocking nature of\n> the database connection.\n> \n> They also fix a problem where if the database's pipe was full it could\n> busy-loop attempting to flush the send buffer.\n> \n> It's assumed that callers to PQexec want blocking behavior and the \n> mode of the connection will be toggled while the query is being \n> executed via PQexec.\n> \n> A new field has been added to the PGconn structure to allow the database\n> to track the non-blocking nature of the connection without polling the\n> status of the socket via syscalls.\n> \n> When in non-blocking mode the library is careful to make sure that\n> it will send a complete command/line down the wire before allowing it.\n> \n> The case of EINTR in pqFlush() is caught and averted from making a\n> useless select() call.\n> \n> There is a problem though, some of the code (particularly \"fe-exec.c\" line 518)\n> may now get out of sync because:\n> \n> \tif (pqPutnchar(\"Q\", 1, conn) ||\n> \t\tpqPuts(query, conn) ||\n> \t\tpqFlush(conn))\n> \t{\n> \t\thandleSendFailure(conn);\n> \t\treturn 0;\n> \t}\n> \n> may send a 'Q' but be unable to send the query, I'm unsure if\n> handleSendFailure() is able to reliably deal with this. I may need\n> to work on reservations for the send buffer if not.\n> \n> Does anyone know? I'll be investigating meanwhile but I wanted\n> people to get a snapshot of what I was working on so I could get\n> some feedback if i'm going in the right direction.\n> \n> These patches need review. My apologies for not running it through\n> pgindent, but the patches supplied postgresql don't seem to apply\n> cleanly to FreeBSD's indent any longer and my indent was segfaulting.\n> \n> Hopefully I kept within the guidelines for acceptable changes.\n> \n> thanks,\n> -Alfred Perlstein - [[email protected]|[email protected]]\n> Wintelcom systems administrator and programmer\n> - http://www.wintelcom.net/ [[email protected]]\n> \n> \n> Index: fe-connect.c\n> ===================================================================\n> RCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-connect.c,v\n> retrieving revision 1.108\n> diff -u -u -r1.108 fe-connect.c\n> --- fe-connect.c\t1999/12/02 00:26:15\t1.108\n> +++ fe-connect.c\t1999/12/14 09:42:24\n> @@ -595,31 +595,6 @@\n> \treturn 0;\n> }\n> \n> -\n> -/* ----------\n> - * connectMakeNonblocking -\n> - * Make a connection non-blocking.\n> - * Returns 1 if successful, 0 if not.\n> - * ----------\n> - */\n> -static int\n> -connectMakeNonblocking(PGconn *conn)\n> -{\n> -#ifndef WIN32\n> -\tif (fcntl(conn->sock, F_SETFL, O_NONBLOCK) < 0)\n> -#else\n> -\tif (ioctlsocket(conn->sock, FIONBIO, &on) != 0)\n> -#endif\n> -\t{\n> -\t\tprintfPQExpBuffer(&conn->errorMessage,\n> -\t\t\t\t\t\t \"connectMakeNonblocking -- fcntl() failed: errno=%d\\n%s\\n\",\n> -\t\t\t\t\t\t errno, strerror(errno));\n> -\t\treturn 0;\n> -\t}\n> -\n> -\treturn 1;\n> -}\n> -\n> /* ----------\n> * connectNoDelay -\n> * Sets the TCP_NODELAY socket option.\n> @@ -792,7 +767,7 @@\n> \t * Ewan Mellor <[email protected]>.\n> \t * ---------- */\n> #if (!defined(WIN32) || defined(WIN32_NON_BLOCKING_CONNECTIONS)) && !defined(USE_SSL)\n> -\tif (!connectMakeNonblocking(conn))\n> +\tif (PQsetnonblocking(conn, TRUE) != 0)\n> \t\tgoto connect_errReturn;\n> #endif\t\n> \n> @@ -904,7 +879,7 @@\n> \t/* This makes the connection 
non-blocking, for all those cases which forced us\n> \t not to do it above. */\n> #if (defined(WIN32) && !defined(WIN32_NON_BLOCKING_CONNECTIONS)) || defined(USE_SSL)\n> -\tif (!connectMakeNonblocking(conn))\n> +\tif (PQsetnonblocking(conn, TRUE) != 0)\n> \t\tgoto connect_errReturn;\n> #endif\t\n> \n> @@ -1702,6 +1677,7 @@\n> \tconn->inBuffer = (char *) malloc(conn->inBufSize);\n> \tconn->outBufSize = 8 * 1024;\n> \tconn->outBuffer = (char *) malloc(conn->outBufSize);\n> +\tconn->nonblocking = FALSE;\n> \tinitPQExpBuffer(&conn->errorMessage);\n> \tinitPQExpBuffer(&conn->workBuffer);\n> \tif (conn->inBuffer == NULL ||\n> @@ -1811,6 +1787,7 @@\n> \tconn->lobjfuncs = NULL;\n> \tconn->inStart = conn->inCursor = conn->inEnd = 0;\n> \tconn->outCount = 0;\n> +\tconn->nonblocking = FALSE;\n> \n> }\n> \n> Index: fe-exec.c\n> ===================================================================\n> RCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-exec.c,v\n> retrieving revision 1.86\n> diff -u -u -r1.86 fe-exec.c\n> --- fe-exec.c\t1999/11/11 00:10:14\t1.86\n> +++ fe-exec.c\t1999/12/14 05:55:11\n> @@ -13,6 +13,7 @@\n> */\n> #include <errno.h>\n> #include <ctype.h>\n> +#include <fcntl.h>\n> \n> #include \"postgres.h\"\n> #include \"libpq-fe.h\"\n> @@ -24,7 +25,6 @@\n> #include <unistd.h>\n> #endif\n> \n> -\n> /* keep this in same order as ExecStatusType in libpq-fe.h */\n> const char *const pgresStatus[] = {\n> \t\"PGRES_EMPTY_QUERY\",\n> @@ -574,7 +574,15 @@\n> \t * we will NOT block waiting for more input.\n> \t */\n> \tif (pqReadData(conn) < 0)\n> +\t{\n> +\t\t/*\n> +\t\t * try to flush the send-queue otherwise we may never get a \n> +\t\t * resonce for something that may not have already been sent\n> +\t\t * because it's in our write buffer!\n> +\t\t */\n> +\t\tpqFlush(conn);\n> \t\treturn 0;\n> +\t}\n> \t/* Parsing of the data waits till later. */\n> \treturn 1;\n> }\n> @@ -1088,8 +1096,17 @@\n> {\n> \tPGresult *result;\n> \tPGresult *lastResult;\n> +\tbool\tsavedblocking;\n> \n> \t/*\n> +\t * we assume anyone calling PQexec wants blocking behaviour,\n> +\t * we force the blocking status of the connection to blocking\n> +\t * for the duration of this function and restore it on return\n> +\t */\n> +\tsavedblocking = PQisnonblocking(conn);\n> +\tPQsetnonblocking(conn, FALSE);\n> +\n> +\t/*\n> \t * Silently discard any prior query result that application didn't\n> \t * eat. 
This is probably poor design, but it's here for backward\n> \t * compatibility.\n> @@ -1102,14 +1119,15 @@\n> \t\t\tPQclear(result);\n> \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\"PQexec: you gotta get out of a COPY state yourself.\\n\");\n> -\t\t\treturn NULL;\n> +\t\t\t/* restore blocking status */\n> +\t\t\tgoto errout;\n> \t\t}\n> \t\tPQclear(result);\n> \t}\n> \n> \t/* OK to send the message */\n> \tif (!PQsendQuery(conn, query))\n> -\t\treturn NULL;\n> +\t\tgoto errout;\n> \n> \t/*\n> \t * For backwards compatibility, return the last result if there are\n> @@ -1142,7 +1160,13 @@\n> \t\t\tresult->resultStatus == PGRES_COPY_OUT)\n> \t\t\tbreak;\n> \t}\n> +\n> +\tPQsetnonblocking(conn, savedblocking);\n> \treturn lastResult;\n> +\n> +errout:\n> +\tPQsetnonblocking(conn, savedblocking);\n> +\treturn NULL;\n> }\n> \n> \n> @@ -1431,8 +1455,14 @@\n> \t\t\t \"PQendcopy() -- I don't think there's a copy in progress.\\n\");\n> \t\treturn 1;\n> \t}\n> +\n> +\t/* make sure no data is waiting to be sent */\n> +\tif (pqFlush(conn))\n> +\t\treturn (1);\n> \n> -\t(void) pqFlush(conn);\t\t/* make sure no data is waiting to be sent */\n> +\t/* non blocking connections may have to abort at this point. */\n> +\tif (PQisnonblocking(conn) && PQisBusy(conn))\n> +\t\treturn (1);\n> \n> \t/* Return to active duty */\n> \tconn->asyncStatus = PGASYNC_BUSY;\n> @@ -2025,4 +2055,72 @@\n> \t\treturn 1;\n> \telse\n> \t\treturn 0;\n> +}\n> +\n> +/* PQsetnonblocking:\n> +\t sets the PGconn's database connection non-blocking if the arg is TRUE\n> +\t or makes it non-blocking if the arg is FALSE, this will not protect\n> +\t you from PQexec(), you'll only be safe when using the non-blocking\n> +\t API\n> +\t Needs to be called only on a connected database connection.\n> +*/\n> +\n> +int\n> +PQsetnonblocking(PGconn *conn, int arg)\n> +{\n> +\tint\tfcntlarg;\n> +\n> +\targ = (arg == TRUE) ? 1 : 0;\n> +\tif (arg == conn->nonblocking)\n> +\t\treturn (0);\n> +\n> +#ifdef USE_SSL\n> +\tif (conn->ssl)\n> +\t{\n> +\t\tprintfPQExpBuffer(&conn->errorMessage,\n> +\t\t\t\"PQsetnonblocking() -- not supported when using SSL\\n\");\n> +\t\treturn (-1);\n> +\t}\n> +#endif /* USE_SSL */\n> +\n> +#ifndef WIN32\n> +\tfcntlarg = fcntl(conn->sock, F_GETFL, 0);\n> +\tif (fcntlarg == -1)\n> +\t\treturn (-1);\n> +\n> +\tif ((arg == TRUE && \n> +\t\tfcntl(conn->sock, F_SETFL, fcntlarg | O_NONBLOCK) == -1) ||\n> +\t\t(arg == FALSE &&\n> +\t\tfcntl(conn->sock, F_SETFL, fcntlarg & ~O_NONBLOCK) == -1)) \n> +#else\n> +\tfcntlarg = arg;\n> +\tif (ioctlsocket(conn->sock, FIONBIO, &fcntlarg) != 0)\n> +#endif\n> +\t{\n> +\t\tprintfPQExpBuffer(&conn->errorMessage,\n> +\t\t\t\"PQsetblocking() -- unable to set nonblocking status to %s\\n\",\n> +\t\t\targ == TRUE ? 
\"TRUE\" : \"FALSE\");\n> +\t\treturn (-1);\n> +\t}\n> +\n> +\tconn->nonblocking = arg;\n> +\treturn (0);\n> +}\n> +\n> +/* return the blocking status of the database connection, TRUE == nonblocking,\n> +\t FALSE == blocking\n> +*/\n> +int\n> +PQisnonblocking(PGconn *conn)\n> +{\n> +\n> +\treturn (conn->nonblocking);\n> +}\n> +\n> +/* try to force data out, really only useful for non-blocking users */\n> +int\n> +PQflush(PGconn *conn)\n> +{\n> +\n> +\treturn (pqFlush(conn));\n> }\n> Index: fe-misc.c\n> ===================================================================\n> RCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-misc.c,v\n> retrieving revision 1.33\n> diff -u -u -r1.33 fe-misc.c\n> --- fe-misc.c\t1999/11/30 03:08:19\t1.33\n> +++ fe-misc.c\t1999/12/14 08:21:09\n> @@ -86,6 +86,34 @@\n> {\n> \tsize_t avail = Max(conn->outBufSize - conn->outCount, 0);\n> \n> +\t/*\n> +\t * if we are non-blocking and the send queue is too full to buffer this\n> +\t * request then try to flush some and return an error \n> +\t */\n> +\tif (PQisnonblocking(conn) && nbytes > avail && pqFlush(conn))\n> +\t{\n> +\t\t/* \n> +\t\t * even if the flush failed we may still have written some\n> +\t\t * data, recalculate the size of the send-queue relative\n> +\t\t * to the amount we have to send, we may be able to queue it\n> +\t\t * afterall even though it's not sent to the database it's\n> +\t\t * ok, any routines that check the data coming from the\n> +\t\t * database better call pqFlush() anyway.\n> +\t\t */\n> +\t\tif (nbytes > Max(conn->outBufSize - conn->outCount, 0))\n> +\t\t{\n> +\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> +\t\t\t\t\"pqPutBytes -- pqFlush couldn't flush enough\"\n> +\t\t\t\t\" data: space available: %d, space needed %d\\n\",\n> +\t\t\t\tMax(conn->outBufSize - conn->outCount, 0), nbytes);\n> +\t\t\treturn EOF;\n> +\t\t}\n> +\t}\n> +\n> +\t/* \n> +\t * the non-blocking code above makes sure that this isn't true,\n> +\t * essentially this is no-op\n> +\t */\n> \twhile (nbytes > avail)\n> \t{\n> \t\tmemcpy(conn->outBuffer + conn->outCount, s, avail);\n> @@ -548,6 +576,14 @@\n> \t\treturn EOF;\n> \t}\n> \n> +\t/* \n> +\t * don't try to send zero data, allows us to use this function\n> +\t * without too much worry about overhead\n> +\t */\n> +\tif (len == 0)\n> +\t\treturn (0);\n> +\n> +\t/* while there's still data to send */\n> \twhile (len > 0)\n> \t{\n> \t\t/* Prevent being SIGPIPEd if backend has closed the connection. 
*/\n> @@ -556,6 +592,7 @@\n> #endif\n> \n> \t\tint sent;\n> +\n> #ifdef USE_SSL\n> \t\tif (conn->ssl) \n> \t\t sent = SSL_write(conn->ssl, ptr, len);\n> @@ -585,6 +622,8 @@\n> \t\t\t\tcase EWOULDBLOCK:\n> \t\t\t\t\tbreak;\n> #endif\n> +\t\t\t\tcase EINTR:\n> +\t\t\t\t\tcontinue;\n> \n> \t\t\t\tcase EPIPE:\n> #ifdef ECONNRESET\n> @@ -616,13 +655,31 @@\n> \t\t\tptr += sent;\n> \t\t\tlen -= sent;\n> \t\t}\n> +\n> \t\tif (len > 0)\n> \t\t{\n> \t\t\t/* We didn't send it all, wait till we can send more */\n> +\n> +\t\t\t/* \n> +\t\t\t * if the socket is in non-blocking mode we may need\n> +\t\t\t * to abort here \n> +\t\t\t */\n> +#ifdef USE_SSL\n> +\t\t\t/* can't do anything for our SSL users yet */\n> +\t\t\tif (conn->ssl == NULL)\n> +\t\t\t{\n> +#endif\n> +\t\t\t\tif (PQisnonblocking(conn))\n> +\t\t\t\t{\n> +\t\t\t\t\t/* shift the contents of the buffer */\n> +\t\t\t\t\tmemmove(conn->outBuffer, ptr, len);\n> +\t\t\t\t\tconn->outCount = len;\n> +\t\t\t\t\treturn EOF;\n> +\t\t\t\t}\n> +#ifdef USE_SSL\n> +\t\t\t}\n> +#endif\n> \n> -\t\t\t/* At first glance this looks as though it should block. I think\n> -\t\t\t * that it will be OK though, as long as the socket is\n> -\t\t\t * non-blocking. */\n> \t\t\tif (pqWait(FALSE, TRUE, conn))\n> \t\t\t\treturn EOF;\n> \t\t}\n> Index: libpq-fe.h\n> ===================================================================\n> RCS file: /home/pgcvs/pgsql/src/interfaces/libpq/libpq-fe.h,v\n> retrieving revision 1.53\n> diff -u -u -r1.53 libpq-fe.h\n> --- libpq-fe.h\t1999/11/30 03:08:19\t1.53\n> +++ libpq-fe.h\t1999/12/14 01:30:01\n> @@ -269,6 +269,13 @@\n> \textern int\tPQputnbytes(PGconn *conn, const char *buffer, int nbytes);\n> \textern int\tPQendcopy(PGconn *conn);\n> \n> +\t/* Set blocking/nonblocking connection to the backend */\n> +\textern int\tPQsetnonblocking(PGconn *conn, int arg);\n> +\textern int\tPQisnonblocking(PGconn *conn);\n> +\n> +\t/* Force the write buffer to be written (or at least try) */\n> +\textern int\tPQflush(PGconn *conn);\n> +\n> \t/*\n> \t * \"Fast path\" interface --- not really recommended for application\n> \t * use\n> Index: libpq-int.h\n> ===================================================================\n> RCS file: /home/pgcvs/pgsql/src/interfaces/libpq/libpq-int.h,v\n> retrieving revision 1.14\n> diff -u -u -r1.14 libpq-int.h\n> --- libpq-int.h\t1999/11/30 03:08:19\t1.14\n> +++ libpq-int.h\t1999/12/14 01:30:01\n> @@ -215,6 +215,9 @@\n> \tint\t\t\tinEnd;\t\t\t/* offset to first position after avail\n> \t\t\t\t\t\t\t\t * data */\n> \n> +\tint\t\t\tnonblocking;\t/* whether this connection is using a blocking\n> +\t\t\t\t\t\t\t\t * socket to the backend or not */\n> +\n> \t/* Buffer for data not yet sent to backend */\n> \tchar\t *outBuffer;\t\t/* currently allocated buffer */\n> \tint\t\t\toutBufSize;\t\t/* allocated size of buffer */\n> \n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 15 Dec 1999 08:51:37 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] non-blocking patches."
}
] |
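To make the interface proposed in the patch above concrete, here is a rough usage sketch of how a client might drive it, assuming the patch is applied as posted. PQsetnonblocking, PQisnonblocking and PQflush come from the patch itself; the other calls are the standard libpq asynchronous-query functions. Error handling is abbreviated, and a real caller would select() on the connection's socket instead of spinning; this is an editorial illustration, not part of the submitted patch.

    #include <stdio.h>
    #include "libpq-fe.h"

    /* Rough sketch: issue one query over a non-blocking connection. */
    static void
    run_query_nonblocking(PGconn *conn, const char *query)
    {
        PGresult   *res;

        if (PQsetnonblocking(conn, 1) != 0)     /* switch to non-blocking */
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return;
        }

        if (!PQsendQuery(conn, query))
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            PQsetnonblocking(conn, 0);
            return;
        }

        if (PQflush(conn) != 0)
        {
            /*
             * Not everything could be sent yet; a real caller would wait
             * for the socket to become writable and call PQflush() again
             * until it returns 0.
             */
        }

        /* wait for the reply; a real caller would select() instead of spinning */
        while (PQisBusy(conn))
        {
            if (!PQconsumeInput(conn))
            {
                fprintf(stderr, "%s", PQerrorMessage(conn));
                break;
            }
        }

        while ((res = PQgetResult(conn)) != NULL)
        {
            printf("status: %s\n", PQresStatus(PQresultStatus(res)));
            PQclear(res);
        }

        PQsetnonblocking(conn, 0);              /* restore blocking behaviour */
    }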
[
{
"msg_contents": "\nI think I asked this before but don't recall seeing an answer. Do we \nhave a logical AND? partial example:\n\nSELECT (( sum(case dict.word when 'enable' then 1 else 0 end) && sum(case\ndict.word when 'test' then 1 else 0 end))) FROM blahblahblah\n\nNote the && in the first line. I'm guessing this came from MySQL, it's\na query that UdmSearch created.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n128K ISDN - $24.95/mo or less; 56K Dialup - $17.95/mo or less www.pop4.net\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n\n",
"msg_date": "Wed, 15 Dec 1999 10:10:03 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
},
{
"msg_contents": "> I think I asked this before but don't recall seeing an answer. Do we\n> have a logical AND?\n\nUh, yes. It's called \"AND\" ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 15 Dec 1999 15:46:40 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] From: Vince Vielhaber <[email protected]>"
}
] |
[
{
"msg_contents": "I just spent some time trying to work out why PG_VERSION contained 6.6 rather\nthan 7.0 in my freshly initdb'd directory. End result: I don't understand\nwhy after doing a make in src/bin/pg_version, doing a make install recompiles\npg_version even though it was just made. This leads to the problem that I\ndo the first make as prlw1, then the make install as postgres. As make install\ninsists on relinking pg_version even though it is up to date, postgres tries\nto write pg_version to prlw1's src directory which fails, so it doesn't\ninstall. (I suspect that the general trend is to do everything as user\npostgres, but there must be something up with the Makefile..)\n\nAny thoughts to fix the build process?\n\nCheers,\n\nPatrick\n",
"msg_date": "Wed, 15 Dec 1999 15:56:45 +0000 (GMT)",
"msg_from": "\"Patrick Welche\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "initdb / pg_version"
},
{
"msg_contents": "\"Patrick Welche\" <[email protected]> writes:\n> I just spent some time trying to work out why PG_VERSION contained 6.6\n> rather than 7.0 in my freshly initdb'd directory. End result: I don't\n> understand why after doing a make in src/bin/pg_version, doing a make\n> install recompiles pg_version even though it was just made.\n\nYou know, I'd always assumed that it was done that way deliberately\nto put an up-to-date build date into pg_version ... but on looking\nat the code, pg_version doesn't know anything about its build date.\nIt just cares about the PG_VERSION string.\n\n> Any thoughts to fix the build process?\n\nThe dependency on a phony submake target is the problem;\nneed to put in real dependencies for version.o instead.\nMight be easier if version.c were removed from .../utils\nand put in bin/pg_version.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Dec 1999 12:21:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] initdb / pg_version "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> The dependency on a phony submake target is the problem;\n> need to put in real dependencies for version.o instead.\n> Might be easier if version.c were removed from .../utils\n> and put in bin/pg_version.\n\nAgreed, though not sure what is best. utils/version.c defines\n\n ValidatePgVersion\n SetPgVersion\n\nexported in include/version.h. The only source files which reference them\nare:\n\nbackend/postmaster/postmaster.c uses ValidatePgVersion\nbin/pg_version/pg_version.c uses SetPgVersion\nbackend/utils/init/postinit.c uses ValidatePgVersion\n\nNow, postmaster and postinit don't have the same problem as pg_version\nas they both link against SUBSYS.o rather than version.o. I also note that\nthere are lots of mentions of version.o in backend/Makefile that I don't\nfollow.\n\nCheers,\n\nPatrick\n",
"msg_date": "Wed, 15 Dec 1999 17:42:20 +0000 (GMT)",
"msg_from": "\"Patrick Welche\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] initdb / pg_version"
},
{
"msg_contents": "On Wed, 15 Dec 1999, Patrick Welche wrote:\n\n> Any thoughts to fix the build process?\n\nOh yeah. It's on my wish/todo list. But I just looked at some of those\nthings the other day and it looks like for a complete solution, much of\nthe makefiles will simply need to be scrapped and rewritten. So I don't\nexpect to bother with that soon.\n\nThe latest weirdness I experienced was in fact that the clean target\ncompiled half the source before deciding to delete it ...\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 16 Dec 1999 13:14:59 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] initdb / pg_version"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Oh yeah. It's on my wish/todo list. But I just looked at some of those\n> things the other day and it looks like for a complete solution, much of\n> the makefiles will simply need to be scrapped and rewritten.\n\nPerhaps GNU automake would give a good running start on the problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Dec 1999 09:25:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] initdb / pg_version "
},
{
"msg_contents": "On 1999-12-16, Tom Lane mentioned:\n\n> Peter Eisentraut <[email protected]> writes:\n> > Oh yeah. It's on my wish/todo list. But I just looked at some of those\n> > things the other day and it looks like for a complete solution, much of\n> > the makefiles will simply need to be scrapped and rewritten.\n> \n> Perhaps GNU automake would give a good running start on the problem.\n\nAah, I was going to suggest that but feared too many people being\nreluctant to adding more GNU stuff and learning another macro set, but now\nthat you said it it's fair game. :) I agree we should investigate that\nsometime.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Fri, 17 Dec 1999 01:32:13 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] initdb / pg_version "
}
] |
[
{
"msg_contents": "Thomas Lockhart <[email protected]> said: \n\n> > I think I asked this before but don't recall seeing an answer. Do we\n> > have a logical AND?\n> \n> Uh, yes. It's called \"AND\" ;)\n\nThat's what I was afraid of.\n\n ERROR: left-hand side of AND is type 'int4', not bool\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n128K ISDN - $24.95/mo or less; 56K Dialup - $17.95/mo or less www.pop4.net\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n\n",
"msg_date": "Wed, 15 Dec 1999 11:00:43 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] AND &&"
},
{
"msg_contents": "At 11:00 AM 12/15/99 -0500, Vince Vielhaber wrote:\n>Thomas Lockhart <[email protected]> said: \n>\n>> > I think I asked this before but don't recall seeing an answer. Do we\n>> > have a logical AND?\n>> \n>> Uh, yes. It's called \"AND\" ;)\n>\n>That's what I was afraid of.\n>\n> ERROR: left-hand side of AND is type 'int4', not bool\n\n>SELECT (( sum(case dict.word when 'enable' then 1 else 0 end) && sum(case\n>dict.word when 'test' then 1 else 0 end)))\n\ntry something like\n\nselect ((sum(case dict.work when 'enable' then 1 else 0 end) > 0 and\n sum(case dict.word when 'test' then 1 else 0 end) > 0))\n\nor perhaps rewrite the query to use \"exists\"??? That appears to be the\npoint of this snippet.\n\nApparently mySQL is misnamed. Perhaps it should be renamed myC-ishQL.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 15 Dec 1999 08:27:36 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] AND &&"
},
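Baccus's suggestion above can be written out as a complete statement; the dict table and the two words are only what the quoted fragment shows, so this is a sketch rather than the original poster's exact query:

    -- the comparisons turn the int4 sums into bool, so plain SQL AND applies
    SELECT sum(CASE dict.word WHEN 'enable' THEN 1 ELSE 0 END) > 0
       AND sum(CASE dict.word WHEN 'test' THEN 1 ELSE 0 END) > 0
      FROM dict;

The int4 sums are what trip up AND in the error quoted above; once each sum is compared against zero, both operands are boolean and the complaint goes away.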
{
"msg_contents": "Vince Vielhaber <[email protected]> writes:\n>> Uh, yes. It's called \"AND\" ;)\n\n> That's what I was afraid of.\n\n> ERROR: left-hand side of AND is type 'int4', not bool\n\n1 => 't'::bool, etc.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Dec 1999 12:22:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] AND && "
},
{
"msg_contents": "I wrote:\n> 1 => 't'::bool, etc.\n\nEr, ignore that ... I missed the sum() operator. Don Baccus has\nmore reasonable comments ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Dec 1999 12:27:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] AND && "
}
] |
[
{
"msg_contents": "CREATE UNIQUE INDEX \"ethernet_ip_key\" on \"ethernet\" using btree ( \"ip\" \"network_ops\" );\n\nwas generated by dumpall. \"network_ops\" apparently don't exist (not sure what\nis should be called). Changing to\nusing btree ( \"ip\" )\nwas sufficient to fix, but I don't know what it should be to fix dumpall.\n\npatrimoine=> create unique index \"ethernet_ip_key\" on \"ethernet\" using btree ( \"ip\" );\nCREATE\npatrimoine=> \\d ethernet_ip_key\nIndex \"ethernet_ip_key\"\n Attribute | Type \n-----------+------\n ip | inet\nunique btree\n\n\nCheers,\n\nPatrick\n",
"msg_date": "Wed, 15 Dec 1999 16:24:27 +0000 (GMT)",
"msg_from": "\"Patrick Welche\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "dumpall prob"
},
{
"msg_contents": "Patrick Welche wrote:\n> \n> CREATE UNIQUE INDEX \"ethernet_ip_key\" on \"ethernet\"\n> using btree ( \"ip\" \"network_ops\" );\n> was generated by dumpall.\n\nWhat version? If your example above is verbatim, then there seems to\nbe a missing comma in the arguments to the USING clause; if you are\nusing a recent/current version of pg_dump, then it is not likely\nfixed...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 15 Dec 1999 16:49:43 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dumpall prob"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> Patrick Welche wrote:\n> > \n> > CREATE UNIQUE INDEX \"ethernet_ip_key\" on \"ethernet\"\n> > using btree ( \"ip\" \"network_ops\" );\n> > was generated by dumpall.\n> \n> What version? If your example above is verbatim, then there seems to\n> be a missing comma in the arguments to the USING clause; if you are\n> using a recent/current version of pg_dump, then it is not likely\n> fixed...\n\nOops - my fault. Obviously I pg_dump'd before installing the new one, so\nit's the old one that had the \"network_ops\" problem.\nFunnily enough I thought I would pg_dump with the new one and diff it\nagainst the new, but\n\n% pg_dumpall > db.out2\nConnection to database 'List' failed.\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\n\npg_dump failed on List, exiting\n\n\n... investigating ...\n\nCheers,\n\nPatrick\n",
"msg_date": "Wed, 15 Dec 1999 17:25:10 +0000 (GMT)",
"msg_from": "\"Patrick Welche\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] dumpall prob"
},
{
"msg_contents": "\"Patrick Welche\" <[email protected]> writes:\n> CREATE UNIQUE INDEX \"ethernet_ip_key\" on \"ethernet\" using btree ( \"ip\" \"network_ops\" );\n> was generated by dumpall. \"network_ops\" apparently don't exist (not sure what\n> is should be called).\n\nCurrent sources dump this as \"inet_ops\". I think the problem in 6.5.*\nwas caused by bogus entries in the pg_opclass table (same classname for\ninet and cidr types). You could try reaching in and changing the system\ntable entries, but it might be safer just to live with manually patching\nthe dump file until the next release.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Dec 1999 12:38:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dumpall prob "
},
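For anyone hand-patching a 6.5-era dump as Lane suggests, the corrected statement can take either of these forms (a sketch built from the index quoted at the top of the thread; pick one, not both):

    -- drop the opclass and let the backend choose the default for inet
    CREATE UNIQUE INDEX "ethernet_ip_key" ON "ethernet" USING btree ("ip");
    -- or spell the opclass the way current sources dump it
    CREATE UNIQUE INDEX "ethernet_ip_key" ON "ethernet" USING btree ("ip" "inet_ops");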
{
"msg_contents": "> % pg_dumpall > db.out2\n> Connection to database 'List' failed.\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> \n> pg_dump failed on List, exiting\n> \n> \n> ... investigating ...\n\nReason:\n\nIn pg_dumpall, line 50 is:\n\npsql -l -A -q -t| tr '|' ' ' | grep -v '^template1 ' | \\\n\nwhich outputs:\n\n% psql -l -A -q -t| tr '|' ' ' | grep -v '^template1 '\nList of databases\nDatabase Owner\ndarwin prlw1\n...\n(5 rows)\n\n\nso presumably, it tries to open a connection to \"List\" as user \"of\"\nthis implies that \"-q\" isn't quite as quiet as it could be...\n\nI tried changing it to\n\npsql -l -A -q -t| tr '|' ' ' | egrep -v '(^template1 |^List of databases|^Database Owner| rows)' | \\ \n\nas a work around, but then:\n\n\\connect template1 ERROR: attribute 'prlw1' not found\ncreate database darwin;\n\\connect darwin ERROR: attribute 'prlw1' not found\n\netc\n\nand this is because line 56 wants to read\n\n where usesysid = \\'$DBUSERID\\'; \\\" | \\\nrather than\n where usesysid = $DBUSERID; \\\" | \\\n\n\nCheers,\n\nPatrick\n",
"msg_date": "Wed, 15 Dec 1999 18:20:12 +0000 (GMT)",
"msg_from": "\"Patrick Welche\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] dumpall prob"
},
{
"msg_contents": "> \"Patrick Welche\" <[email protected]> writes:\n> > CREATE UNIQUE INDEX \"ethernet_ip_key\" on \"ethernet\" using btree ( \"ip\" \"network_ops\" );\n> > was generated by dumpall. \"network_ops\" apparently don't exist (not sure what\n> > is should be called).\n> \n> Current sources dump this as \"inet_ops\". I think the problem in 6.5.*\n> was caused by bogus entries in the pg_opclass table (same classname for\n> inet and cidr types). You could try reaching in and changing the system\n> table entries, but it might be safer just to live with manually patching\n> the dump file until the next release.\n\nFixed in the current source tree.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 15 Dec 1999 20:17:44 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dumpall prob"
}
] |
[
{
"msg_contents": "Who's up for a little language-lawyering discussion?\n\nI have just noticed that our parser is probably in error in treating\nGROUP BY and ORDER BY expressions similarly. This came up while\nchecking whether we were doing the right thing in rejecting\n\nSELECT complicated-expression AS foo FROM table WHERE foo < 42;\n\nOur parser will accept AS-names in ORDER BY and GROUP BY clauses,\nbut not in WHERE or HAVING. But eyeballing the spec makes it look like\nAS-names should *only* be recognized in ORDER BY, nowhere else. The\nspec's organization of a SELECT query is\n\n <direct select statement: multiple rows> ::=\n <query expression> [ <order by clause> ]\n\n <query specification> ::=\n SELECT [ <set quantifier> ] <select list> <table expression>\n\n <table expression> ::=\n <from clause>\n [ <where clause> ]\n [ <group by clause> ]\n [ <having clause> ]\n\n(<query expression> reduces to <query specification>s combined by\nUNION/INTERSECT/EXCEPT, which are not of interest here).\n\nNow the interesting thing about this is that WHERE, GROUP BY, and HAVING\nare all defined to use column names that are defined by the <table\nexpression> they're in. As far as I can see, that means they can use\ncolumn names that come from tables in the FROM clause. There isn't any\nsuggestion that they can refer to SELECT-list items from the enclosing\n<query specification>.\n\nThe ORDER BY clause, however, is allowed to reference columns of the\n<query expression>'s result --- ie, columns from the <select list>\n--- either by name or number. So it's definitely OK to use an AS-name\nin ORDER BY.\n\nCurrently, because the parser uses the same code to interpret ORDER BY\nand GROUP BY lists, it will accept AS-names and column numbers in both\nkinds of clauses. Unless I've misread the spec, this is an error.\nCan anyone confirm or refute my reasoning?\n\nNext question is, do we want to leave the code as-is, or tighten up\nthe parser to reject AS-names and column numbers in GROUP BY?\nIt seems to me we should change it, because there are cases where the\nexisting code will do the wrong thing according to the SQL spec.\nIf \"foo\" is a column name and also an AS-name for something else,\n\"GROUP BY foo\" should group on the raw column according to the spec,\nbut right now we will pick the SELECT result value instead.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Dec 1999 12:05:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "SELECT ... AS ... names in WHERE/GROUP BY/HAVING"
},
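The ambiguity described in the last paragraph is easiest to see with a throwaway example (table and column names invented for illustration):

    CREATE TABLE t (a int4, foo int4);
    -- spec reading: foo here is the real column t.foo
    -- current parser: foo is the alias, so this groups on a + 1
    SELECT a + 1 AS foo FROM t GROUP BY foo;

The same query text resolves foo to two different things under the two readings, which is exactly the sort of silent divergence the rest of the thread worries about.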
{
"msg_contents": "At 12:05 PM 12/15/99 -0500, Tom Lane wrote:\n>Who's up for a little language-lawyering discussion?\n>\n>I have just noticed that our parser is probably in error in treating\n>GROUP BY and ORDER BY expressions similarly. This came up while\n>checking whether we were doing the right thing in rejecting\n>\n>SELECT complicated-expression AS foo FROM table WHERE foo < 42;\n\nFWIW, here's what Oracle does:\n\n\nSQL> select * from foo;\n\n I J\n---------- ----------\n 1 2\n\nSQL> select i as ii from foo where ii=3;\nselect i as ii from foo where ii=3\n *\nERROR at line 1:\nORA-00904: invalid column name\n\nThis seems in agreement with PostgreSQL's rejection of the query.\n\n>\n>Our parser will accept AS-names in ORDER BY and GROUP BY clauses,\n\nOracle, again:\n\n\nSQL> select i as ii from foo where i=1 group by ii;\nselect i as ii from foo where i=1 group by ii\n *\nERROR at line 1:\nORA-00904: invalid column name\n\nBTW, at times at least it seems that PostgreSQL REQUIRES use of the\n\"as\" name in group by, at least I've had queries I've been unable to move\nfrom Oracle to PostgreSQL unless I've done so. It's not consistent,\nthough, i.e. there's some kind of bug that pops up for some queries.\nI've not had time to attempt to figure out exactly what differentiates\nqueries that work from queries that fail if I don't use the \"as\" name.\n\nHere's Oracle's pronouncement on order by:\n\n\nSQL> select i as ii from foo where i=1 order by ii;\n\n II\n----------\n 1\n\n\n>but not in WHERE or HAVING. But eyeballing the spec makes it look like\n>AS-names should *only* be recognized in ORDER BY, nowhere else. \n\nAnd this jives with Oracle. I offer this as supporting evidence only,\nof course, I'm sure Oracle violates the standard in some ways as well\nso we can't take their implementation as being definitive in regard\nto standards issues.\n\n\n\n> <direct select statement: multiple rows> ::=\n> <query expression> [ <order by clause> ]\n>\n> <query specification> ::=\n> SELECT [ <set quantifier> ] <select list> <table expression>\n>\n> <table expression> ::=\n> <from clause>\n> [ <where clause> ]\n> [ <group by clause> ]\n> [ <having clause> ]\n>\n>(<query expression> reduces to <query specification>s combined by\n>UNION/INTERSECT/EXCEPT, which are not of interest here).\n>\n>Now the interesting thing about this is that WHERE, GROUP BY, and HAVING\n>are all defined to use column names that are defined by the <table\n>expression> they're in.\n\nWell, really it's a scoping issue, not a syntax issue. What is the scope\nof an identifier defined by an \"as\" identifier? Of course, this is simple\nenough that they might've been able to encapsulate the scope in the syntax.\n\nDo you have the syntax for the various clauses available? For instance,\ntwo kinds of identifiers might be defined, say a column_id which must\nreally be the name of a real column and a more general id which is the\nunion of real column ids and \"as\" names. \n\nI just looked at the grammar in Date's book, and it says:\n\norder-item ::= { column | integer } [ ASC | DESC ]\n\nand GROUP BY is followed by a \"column-ref-commalist\"\n\nwhich would leave me to think that the issue of where an \"as\" identifier\ncan be used is addressed semantically, not syntactically, since both\nsimply refer to \"column\" identifiers. I don't have time at the \nmoment to dig into Date's book further to see what he says, I can look\nlater if you want.\n\nKeep in mind this is the very first time I've looked at the formal\nsyntax for SQL. 
I have a background in language-lawyering, though.\n\n>Next question is, do we want to leave the code as-is, or tighten up\n>the parser to reject AS-names and column numbers in GROUP BY?\n\nMy personal feeling is that minor extensions, including accidental\nones, work against the goal of standards which is of course to make\nit easier to move stuff from one implementation to another. From my\ncurrent perspective of moving nearly 9,000 lines of Oracle SQL (just\nin the data model, with thousands more in the code that uses it) examples\nlike this where postgres implements a superset of the standard is a lot\neasier to deal with than those areasa where postgres implements a subset\n(no outer joins, for instance)!\n\nBut philosophically I'm a believer in standards and in making it\nas easy as possible to move code back and forth between various\nSQL engines.\n\nCan it silently break a query, i.e. are there examples where an\nidentifier might refer to different columns in the two cases? If not,\nI wouldn't worry about it too much though if it were up to me I'd probably\nadhere to the standard. Silent breakage (i.e. \"working\" but returning an\nincorrect result compared to the result you'd get with a standard\nimplementation)\nis more insidious as such queries can be hard to uncover when porting\nsomething.\n\n>It seems to me we should change it, because there are cases where the\n>existing code will do the wrong thing according to the SQL spec.\n>If \"foo\" is a column name and also an AS-name for something else,\n>\"GROUP BY foo\" should group on the raw column according to the spec,\n>but right now we will pick the SELECT result value instead.\n\nOops...silent breakage, all right. Bad. Yeah, it should be fixed, I\ndon't think there should be any question about it - assuming that a\ncloser reading of the standard verifies that it should work as you\nand Oracle both seem to think it should.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 15 Dec 1999 11:12:54 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT ... AS ... names in WHERE/GROUP BY/HAVING"
},
{
"msg_contents": "On 1999-12-15, Tom Lane mentioned:\n\n> Next question is, do we want to leave the code as-is, or tighten up\n> the parser to reject AS-names and column numbers in GROUP BY?\n> It seems to me we should change it, because there are cases where the\n> existing code will do the wrong thing according to the SQL spec.\n> If \"foo\" is a column name and also an AS-name for something else,\n> \"GROUP BY foo\" should group on the raw column according to the spec,\n> but right now we will pick the SELECT result value instead.\n\nThe AS-names are way too convenient to drop them. In the particular\nexample of ambiguity you could send a notice or a reject it. (What does\nORDER BY foo do in this case? Same problem.)\n\nPerhaps it's really time for the --enable-sql option. :)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Fri, 17 Dec 1999 01:31:39 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT ... AS ... names in WHERE/GROUP BY/HAVING"
}
] |
[
{
"msg_contents": "\n> Next question is, do we want to leave the code as-is, or tighten up\n> the parser to reject AS-names and column numbers in GROUP BY?\n\nThe numbers are also allowed in other DBMS's, so I would leave that as is.\n\n> It seems to me we should change it, because there are cases where the\n> existing code will do the wrong thing according to the SQL spec.\n> If \"foo\" is a column name and also an AS-name for something else,\n> \"GROUP BY foo\" should group on the raw column according to the spec,\n> but right now we will pick the SELECT result value instead.\n\nThis of course should be handeled the other way around.\nImho the feature to use the AS-names is too convenient,\nto drop it alltogether. Remember, that the lable could stand for\na complete subselect, and writing the same subselect again and\nagain is quite bad for readability.\nI would rather extend this AS-names capability to the where\nand having clause too.\n\nselect\n(select max(colname) from syscolumns c\n where c.tabid = systables.tabid) as collabel\nfrom systables\nwhere tabname='systables' and collabel matches 'n*';\n\nis imho a very nice and readable syntax .\n\nAndreas\n",
"msg_date": "Wed, 15 Dec 1999 18:37:55 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] SELECT ... AS ... names in WHERE/GROUP BY/HAVING"
}
] |
[
{
"msg_contents": "At 06:37 PM 12/15/99 +0100, Zeugswetter Andreas SB wrote:\n>\n>> Next question is, do we want to leave the code as-is, or tighten up\n>> the parser to reject AS-names and column numbers in GROUP BY?\n>\n>The numbers are also allowed in other DBMS's, so I would leave that as is.\n\n>From Oracle:\n\n\nSQL> select i from foo group by i;\n\n I\n----------\n 1\n\nSQL> select i from foo group by 1;\nselect i from foo group by 1\n *\nERROR at line 1:\nORA-00979: not a GROUP BY expression\n\n\nOracle doesn't appear to allow column numbers here, FWIW and if we\ncare.\n\nWhich dbms's allow it? Or is there an error in my query (I don't\nuse column numbering, I'm into names myself)?\n\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 15 Dec 1999 11:18:12 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: [HACKERS] SELECT ... AS ... names in WHERE/GROUP\n BY/HAVING"
}
] |
[
{
"msg_contents": "\n\n Hi,\n\nand see:\n\n$ pg_dump --help\n/usr/lib/postgresql/bin/pg_dump: invalid option -- -\n\nhmm ?\n\nPrepare anyone long options for pg_dump, pg_passwd, pg_version ? \nIf not, I make it, current state is disgraceful.\n\n\t\t\t\t\tKarel\n\nPS. If I meet with any mazy M$-Win user I always tell him: \"..in \n open-source software we have good practice, all software has \n nice face and open work with unknow software is easy...\" \n\n----------------------------------------------------------------------\nKarel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nDocs: http://docs.linux.cz (big docs archive)\t\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n-----------------------------------------------------------------------\n\n",
"msg_date": "Wed, 15 Dec 1999 21:30:11 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump --help"
},
{
"msg_contents": "> \n> \n> Hi,\n> \n> and see:\n> \n> $ pg_dump --help\n> /usr/lib/postgresql/bin/pg_dump: invalid option -- -\n> \n> hmm ?\n> \n> Prepare anyone long options for pg_dump, pg_passwd, pg_version ? \n> If not, I make it, current state is disgraceful.\n> \n\n\n#$ pg_dump -h \npg_dump: option requires an argument -- h\nusage: pg_dump [options] dbname\n -a dump out only the data, no schema\n -c clean(drop) schema prior to create\n -d dump data as proper insert strings\n -D dump data as inserts with attribute names\n -f filename script output filename\n -h hostname server host name\n -n suppress most quotes around identifiers\n -N enable most quotes around identifiers\n -o dump object id's (oids)\n -p port server port number\n -s dump out only the schema, no data\n -t table dump for this table only\n -u use password authentication\n -v verbose\n -x do not dump ACL's (grant/revoke)\n\nIf dbname is not supplied, then the DATABASE environment variable value\nis used.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 15 Dec 1999 20:25:07 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump --help"
},
{
"msg_contents": "\nOn Wed, 15 Dec 1999, Bruce Momjian wrote:\n\n> > $ pg_dump --help\n> > /usr/lib/postgresql/bin/pg_dump: invalid option -- -\n> > \n> > hmm ?\n> > \n> > Prepare anyone long options for pg_dump, pg_passwd, pg_version ? \n> > If not, I make it, current state is disgraceful.\n> > \n> \n> \n> #$ pg_dump -h \n> pg_dump: option requires an argument -- h\n> usage: pg_dump [options] dbname\n> -a dump out only the data, no schema\n> -c clean(drop) schema prior to create\n\n --cut--\n\n$ mysqldump --help\nmysqldump Ver 4.0 Distrib 3.21.31, for pc-linux-gnu (i586)\nBy Igor Romanenko & Monty & Jani. This software is in public Domain\nThis software comes with ABSOLUTELY NO WARRANTY\n\nDumping definition and data mysql database or table\nUsage: mysqldump [OPTIONS] database [tables]\n\n -#, --debug=... Output debug log. Often this is 'd:t:o,filename\n -?, --help Displays this help and exits.\n -c, --compleat-insert Use complete insert statements.\n\n.....etc.\n\nI send patch with long options next week.....\n\n\t\t\t\t\t\tKarel\n\n\n\n\n\n\n",
"msg_date": "Thu, 16 Dec 1999 10:54:25 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_dump --help"
},
{
"msg_contents": "On Wed, 15 Dec 1999, Karel Zak - Zakkr wrote:\n\n> $ pg_dump --help\n> /usr/lib/postgresql/bin/pg_dump: invalid option -- -\n> \n> hmm ?\n> \n> Prepare anyone long options for pg_dump, pg_passwd, pg_version ? \n> If not, I make it, current state is disgraceful.\n\nOnly GNU (Linux) systems support long options. What you consider\ndisgraceful is normal behavior on the majority of platforms.\n\nI put in long options into psql and into the wrapper scripts (under some\nprotests). But I agree that we should if at all only provide them, not\nadvertise them, since it's going to be a support nightmare.\n\nIf you plan on doing this, please look there and use the same options\nwhereever possible. I have been trying to establish some consistent\noptions naming across client applications. Also check out how psql copes\nwith systems where there are no long options available.\n\nMeanwhile I have been getting closer to resolving to scratch pg_dump and\nwrite a new one for 7.1, so whatever you do now might not live long. :(\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 16 Dec 1999 13:23:00 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump --help"
},
{
"msg_contents": "\nOn Thu, 16 Dec 1999, Peter Eisentraut wrote:\n\n> On Wed, 15 Dec 1999, Karel Zak - Zakkr wrote:\n> \n> > $ pg_dump --help\n> > /usr/lib/postgresql/bin/pg_dump: invalid option -- -\n> > \n> > hmm ?\n> > \n> > Prepare anyone long options for pg_dump, pg_passwd, pg_version ? \n> > If not, I make it, current state is disgraceful.\n\n Hi,\n\n> Only GNU (Linux) systems support long options. What you consider\n> disgraceful is normal behavior on the majority of platforms.\n\n Yes, I know this (it is not first program, which I hacking :-),\nIMHO is good resolution (?) add getopt_long() to str/utils and use it\nfor non-GNU (non getopt_long) platforms, or not? \n\n> I put in long options into psql and into the wrapper scripts (under some\n> protests). But I agree that we should if at all only provide them, not\n> advertise them, since it's going to be a support nightmare.\n\nI saw/use your current psql (great!). Plan you add long option to\nusage() output? If yes, will good make identical output format for \nall pg_ routines (nice is example 'mc --help' styl).\n\n \n> If you plan on doing this, please look there and use the same options\n> whereever possible. I have been trying to establish some consistent\n> options naming across client applications. Also check out how psql copes\n> with systems where there are no long options available.\n\n Yes, I agree and I use your options.\n\n> Meanwhile I have been getting closer to resolving to scratch pg_dump and\n> write a new one for 7.1, so whatever you do now might not live long. :(\n\n Hmm.. :-(, but I not worry, it is only small work for me and you can use\nthis options (and code) for your 7.1 version.\n\n \n\t\t\t\t\t\t\tKarel\n\n",
"msg_date": "Thu, 16 Dec 1999 13:45:01 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_dump --help"
},
{
"msg_contents": "> > #$ pg_dump -h \n> > pg_dump: option requires an argument -- h\n> > usage: pg_dump [options] dbname\n> > -a dump out only the data, no schema\n> > -c clean(drop) schema prior to create\n> \n> --cut--\n> \n> $ mysqldump --help\n> mysqldump Ver 4.0 Distrib 3.21.31, for pc-linux-gnu (i586)\n> By Igor Romanenko & Monty & Jani. This software is in public Domain\n> This software comes with ABSOLUTELY NO WARRANTY\n> \n> Dumping definition and data mysql database or table\n> Usage: mysqldump [OPTIONS] database [tables]\n> \n> -#, --debug=... Output debug log. Often this is 'd:t:o,filename\n> -?, --help Displays this help and exits.\n> -c, --compleat-insert Use complete insert statements.\n> \n> .....etc.\n> \n> I send patch with long options next week.....\n> \n\nIt must be portable. See src/bin/psql/startup.c for an example of a\nportable solution.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 16 Dec 1999 11:31:15 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump --help"
}
] |
[
{
"msg_contents": "Hi, Tom,\n\nIt's my understanding that WHERE and GROUP BY will only accept table or view\ncolumns, while ORDER BY and HAVING will accept SELECT columns (aliases) as\nwell. I'll double check this with Oracle (Oracle tends to be pretty SQL\ncompliant), but it makes sense to me.\n\nSo according to my view of the world ;-) HAVING is broken, because it\nrejects aliases, and GROUP BY is broken because it accepts them. Of course,\nI haven't looked at the spec, and Oracle could adhere to an older spec which\nmay have changed. At least I don't have to take any responsibility for my\nclaims ;-)\n\nOK, I've just checked it against Oracle, and what you had originally seems\nto be the way to go: no aliases for WHERE, GROUP BY, or HAVING. However,\naggregates are allowed in the HAVING clause. Also, aliases are allowed for\nORDER BY.\n\nSo, according to Oracle's view of the world, HAVING is orrect because it\nrejects aliases, but GROUP BY is broken because it accepts them.\n\n\nMikeA\n\n\n-----Original Message-----\nFrom: Tom Lane\nTo: [email protected]\nSent: 99/12/15 07:05\nSubject: [HACKERS] SELECT ... AS ... names in WHERE/GROUP BY/HAVING\n\nWho's up for a little language-lawyering discussion?\n\nI have just noticed that our parser is probably in error in treating\nGROUP BY and ORDER BY expressions similarly. This came up while\nchecking whether we were doing the right thing in rejecting\n\nSELECT complicated-expression AS foo FROM table WHERE foo < 42;\n\nOur parser will accept AS-names in ORDER BY and GROUP BY clauses,\nbut not in WHERE or HAVING. But eyeballing the spec makes it look like\nAS-names should *only* be recognized in ORDER BY, nowhere else. The\nspec's organization of a SELECT query is\n\n <direct select statement: multiple rows> ::=\n <query expression> [ <order by clause> ]\n\n <query specification> ::=\n SELECT [ <set quantifier> ] <select list> <table\nexpression>\n\n <table expression> ::=\n <from clause>\n [ <where clause> ]\n [ <group by clause> ]\n [ <having clause> ]\n\n(<query expression> reduces to <query specification>s combined by\nUNION/INTERSECT/EXCEPT, which are not of interest here).\n\nNow the interesting thing about this is that WHERE, GROUP BY, and HAVING\nare all defined to use column names that are defined by the <table\nexpression> they're in. As far as I can see, that means they can use\ncolumn names that come from tables in the FROM clause. There isn't any\nsuggestion that they can refer to SELECT-list items from the enclosing\n<query specification>.\n\nThe ORDER BY clause, however, is allowed to reference columns of the\n<query expression>'s result --- ie, columns from the <select list>\n--- either by name or number. So it's definitely OK to use an AS-name\nin ORDER BY.\n\nCurrently, because the parser uses the same code to interpret ORDER BY\nand GROUP BY lists, it will accept AS-names and column numbers in both\nkinds of clauses. Unless I've misread the spec, this is an error.\nCan anyone confirm or refute my reasoning?\n\nNext question is, do we want to leave the code as-is, or tighten up\nthe parser to reject AS-names and column numbers in GROUP BY?\nIt seems to me we should change it, because there are cases where the\nexisting code will do the wrong thing according to the SQL spec.\nIf \"foo\" is a column name and also an AS-name for something else,\n\"GROUP BY foo\" should group on the raw column according to the spec,\nbut right now we will pick the SELECT result value instead.\n\n\t\t\tregards, tom lane\n\n************\n",
"msg_date": "Wed, 15 Dec 1999 22:47:51 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] SELECT ... AS ... names in WHERE/GROUP BY/HAVING"
}
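Ansley's summary of the Oracle behavior reduces to two patterns (sketch; table t and its columns are invented for the example):

    -- alias accepted: ORDER BY may name a select-list column
    SELECT a + b AS total FROM t ORDER BY total;
    -- alias rejected in GROUP BY and HAVING: repeat the expression,
    -- and use an aggregate (not the alias) in HAVING
    SELECT a, sum(b) AS total FROM t GROUP BY a HAVING sum(b) > 10;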
] |
[
{
"msg_contents": "Greetings,\n\nSorry for all the posts, but I'm trying to put my finger on my backend crash.\n\nAny insight on any of the following would be very helpful:\n\nHow many backend processes is considered a large number? The man pages\nsays the default is 32. Does anyone set their number higher?\n\nKind of related to the question above; when does the postmaster spawn\nanother backend process? Is it for each additional connection, or will\neach backend process handle several connections/queries before another\nprocess is started?\n\nThe postmaster log file, why are the entries not datestamped? If I start\nthe postmaster with a debug level of 2 or greater do I get datestamped\nentries? Also, what is the highest debug level and how big can I expect\nthe log to grow? Can I rotate the log without stopping the postmaster?\n\nWhat is the pg_log file in the data directory?\n\nWhat are the major system resources used by postgres, i.e. semaphores, file\nhandles, mbufs, etc.? I'm trying to determine if I have my resources\nconfigured high enough for my user base.\n\nAgain, thank you! I'll try not to be such a problem in the future! :)\n\nMatthew\n",
"msg_date": "Wed, 15 Dec 1999 18:26:31 -0500",
"msg_from": "Matthew Hagerty <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postmaster options, process spawning, logging, etc."
},
{
"msg_contents": "At 06:26 PM 12/15/99 -0500, Matthew Hagerty wrote:\n\n>How many backend processes is considered a large number? The man pages\n>says the default is 32. Does anyone set their number higher?\n\nThat would depend on your situation. I use AOLserver to service\nmy web site. It maintains a pool of persistent connections and\nI throttle the number of connections to the database via the\nweb server. So, I like having a high limit on backend processes\nfor the postmaster itself, and 32 suits me fine. I like throttling\nat the webserver level because I can throttle on individual virtual\nservers, etc.\n\n>Kind of related to the question above; when does the postmaster spawn\n>another backend process? Is it for each additional connection,\n\nYes. Each connection. I assume your PHP environment includes some\nmeans to allocate a database handle either out of a persistent pool\nor otherwise. Each time that pool mechanism opens a database\nconnection the postmaster forks a new backend process. It goes\naway when you close a connection (normally, unless the backend\ncrashes etc).\n\n> or will\n>each backend process handle several connections/queries before another\n>process is started?\n\nOnce forked, the process stays alive until the connection's closed.\nYou can feed that process as many queries as you want in serial\nfashion.\n\nHowever, in practice, the API you're using to connect PHP to the\ndatabase may or may not pool persistent connections. In this case,\nit will probably be shutting down and reopening connections \nfrequently, say once per web page or the like.\n\nI'll leave your other questions to folks who know more about\npostgres specifics.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 15 Dec 1999 15:40:59 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postmaster options, process spawning, logging,\n etc."
},
{
"msg_contents": "On Wed, 15 Dec 1999, Matthew Hagerty wrote:\n\n> Greetings,\n> \n> Sorry for all the posts, but I'm trying to put my finger on my backend crash.\n> \n> Any insight on any of the following would be very helpful:\n> \n> How many backend processes is considered a large number? The man pages\n> says the default is 32. Does anyone set their number higher?\n> \n> Kind of related to the question above; when does the postmaster spawn\n> another backend process? Is it for each additional connection, or will\n> each backend process handle several connections/queries before another\n> process is started?\n\nspawns a new backend for each new connection...\n\n> The postmaster log file, why are the entries not datestamped? If I start\n> the postmaster with a debug level of 2 or greater do I get datestamped\n> entries? Also, what is the highest debug level and how big can I expect\n> the log to grow? Can I rotate the log without stopping the postmaster?\n\npg_options provides the ability to send the log to syslog, which would\ngive you both the timestamping, and the ability to 'cleanly' rotate the\nlogs...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 15 Dec 1999 19:43:34 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postmaster options, process spawning, logging, etc."
},
{
"msg_contents": "Matthew Hagerty <[email protected]> writes:\n> How many backend processes is considered a large number? The man pages\n> says the default is 32. Does anyone set their number higher?\n\nI've run test cases with 100, which is about as high as I can go on my\npersonal box without running out of swap space. I think some people\nare using several hundred.\n\n> Kind of related to the question above; when does the postmaster spawn\n> another backend process? Is it for each additional connection,\n\nPer connection. The backend quits when the client disconnects. Of\ncourse, it's up to the client how long it stays connected or how many\nqueries it asks...\n\n> The postmaster log file, why are the entries not datestamped?\n\nUncomment #define ELOG_TIMESTAMPS in include/config.h after configure\nand before make...\n\n> Also, what is the highest debug level and how big can I expect\n> the log to grow? Can I rotate the log without stopping the postmaster?\n\nNot very readily. There is someone working on using syslog logging,\nwhich'd be a lot nicer than what we have.\n\n> What is the pg_log file in the data directory?\n\nTransaction commit data. Don't touch it ;-). It shouldn't be all that\nbig, though...\n\n> What are the major system resources used by postgres, i.e. semaphores, file\n> handles, mbufs, etc.? I'm trying to determine if I have my resources\n> configured high enough for my user base.\n\nIn 6.5, the postmaster won't start up if you don't have enough\nsemaphores and shared memory. I've never heard of anyone running out of\nfile handles, but it certainly seems possible if you start enough\nbackends. Still, though, I wouldn't expect a hard coredump such as you\nare getting from running out of any of these resources. There should at\nleast be something showing up in the postmaster log if we fail to open a\nfile or something like that...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Dec 1999 18:46:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postmaster options, process spawning, logging, etc. "
},
{
"msg_contents": "1. Matthew's problem sounds an awful lot like what's being reported\nby Lucio Andres Perez in v6.4.2. Maybe some kind of bug in detecting\nand handling over-the-limit backends. Can someone cook up a Q&D backend-\nspawner? I've spend enough time inside that part of the system lately that\nI could probably track it down from a core file.\n\n2. Yup, there's a \"patch\" out on pgsql-patches that previews an advanced log\nsystem. Although the traditional elog method has seen a lot of enhancement\nlately, it tends to be more geared towards development over administration.\nPlus it was never designed to support national languages or machine parsing.\n\nSo far I've had not feedback on itm however, so I don't know if it will ever\nmake its way into a production release.\n\n regards,\n\n Tim Holloway\n\nTom Lane wrote:\n> \n> Matthew Hagerty <[email protected]> writes:\n> > How many backend processes is considered a large number? The man pages\n> > says the default is 32. Does anyone set their number higher?\n> \n> I've run test cases with 100, which is about as high as I can go on my\n> personal box without running out of swap space. I think some people\n> are using several hundred.\n> \n> > Kind of related to the question above; when does the postmaster spawn\n> > another backend process? Is it for each additional connection,\n> \n> Per connection. The backend quits when the client disconnects. Of\n> course, it's up to the client how long it stays connected or how many\n> queries it asks...\n> \n> > The postmaster log file, why are the entries not datestamped?\n> \n> Uncomment #define ELOG_TIMESTAMPS in include/config.h after configure\n> and before make...\n> \n> > Also, what is the highest debug level and how big can I expect\n> > the log to grow? Can I rotate the log without stopping the postmaster?\n> \n> Not very readily. There is someone working on using syslog logging,\n> which'd be a lot nicer than what we have.\n> \n> > What is the pg_log file in the data directory?\n> \n> Transaction commit data. Don't touch it ;-). It shouldn't be all that\n> big, though...\n> \n> > What are the major system resources used by postgres, i.e. semaphores, file\n> > handles, mbufs, etc.? I'm trying to determine if I have my resources\n> > configured high enough for my user base.\n> \n> In 6.5, the postmaster won't start up if you don't have enough\n> semaphores and shared memory. I've never heard of anyone running out of\n> file handles, but it certainly seems possible if you start enough\n> backends. Still, though, I wouldn't expect a hard coredump such as you\n> are getting from running out of any of these resources. There should at\n> least be something showing up in the postmaster log if we fail to open a\n> file or something like that...\n> \n> regards, tom lane\n> \n> ************\n",
"msg_date": "Thu, 16 Dec 1999 08:28:43 -0500",
"msg_from": "Tim Holloway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postmaster options, process spawning, logging, etc."
},
{
"msg_contents": "Tim Holloway <[email protected]> writes:\n> 1. Matthew's problem sounds an awful lot like what's being reported\n> by Lucio Andres Perez in v6.4.2. Maybe some kind of bug in detecting\n> and handling over-the-limit backends.\n\n6.4.* didn't really have any check/defense against spawning more\nbackends than it had resources for. 6.5 does check and enforce the\nmaxbackends limit. It's certainly possible that Matthew's running into\nsome kind of resource-exhaustion problem, but I doubt that it's just\nthe number of backends that's at issue, except indirectly. (He could\nbe running out of swap space or filetable slots, possibly.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Dec 1999 09:40:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postmaster options, process spawning, logging, etc. "
},
{
"msg_contents": "Ah. I hadn't noticed they were that far back. I've passed your news on\nto our distraught friends in Columbia with a suggestion to try the 6.5.3\nRPM. \n\nTom Lane wrote:\n> \n> Tim Holloway <[email protected]> writes:\n> > 1. Matthew's problem sounds an awful lot like what's being reported\n> > by Lucio Andres Perez in v6.4.2. Maybe some kind of bug in detecting\n> > and handling over-the-limit backends.\n> \n> 6.4.* didn't really have any check/defense against spawning more\n> backends than it had resources for. 6.5 does check and enforce the\n> maxbackends limit. It's certainly possible that Matthew's running into\n> some kind of resource-exhaustion problem, but I doubt that it's just\n> the number of backends that's at issue, except indirectly. (He could\n> be running out of swap space or filetable slots, possibly.)\n> \n> regards, tom lane\n> \n> ************\n",
"msg_date": "Thu, 16 Dec 1999 14:26:58 -0500",
"msg_from": "Tim Holloway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postmaster options, process spawning, logging, etc."
},
{
"msg_contents": "Tom Lane wrote:\n\n> > The postmaster log file, why are the entries not datestamped?\n>\n> Uncomment #define ELOG_TIMESTAMPS in include/config.h after configure\n> and before make...\n\nI'm still missing something...\n\nAfter running ./configure, I modifed ...src/include/config.h to uncomment\nthis...\n\n#define ELOG_TIMESTAMPS\n\n[I also came back later and tried uncommenting #define USE_SYSLOG and repeating\nthe process, but to no avail...]\n\nThen I ran make, etc, created the file $PGDATA/pg_options...\n\n% cat $PGDATA/pg_options\nverbose=2\nquery\nsyslog=2\n\nAnd restarted the server...and still no timestamps.\n\nI verified most everything syslog-wise (configured in /etc/syslog.conf) is\nbeing sent to /var/log/messages...\n\nAnyone notice what am I missing?\n\nCheers,\nEd Loehr\n\n[ps - Forgive my spewage...I mistakenly sent this out of context to\npgsql-general as well...]\n\n",
"msg_date": "Fri, 17 Dec 1999 02:06:08 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postmaster options, process spawning, logging, etc."
},
{
"msg_contents": "You need only ELOG_TIMESTAMPS defined. syslogd is another story - \nI had no luck with it under Linux. Don't forget to make clean and\nrecompile sources. It should works.\n\n\tOleg\n\nOn Fri, 17 Dec 1999, Ed Loehr wrote:\n\n> Date: Fri, 17 Dec 1999 02:06:08 -0600\n> From: Ed Loehr <[email protected]>\n> To: Tom Lane <[email protected]>\n> Cc: Matthew Hagerty <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] Postmaster options, process spawning, logging, etc.\n> \n> Tom Lane wrote:\n> \n> > > The postmaster log file, why are the entries not datestamped?\n> >\n> > Uncomment #define ELOG_TIMESTAMPS in include/config.h after configure\n> > and before make...\n> \n> I'm still missing something...\n> \n> After running ./configure, I modifed ...src/include/config.h to uncomment\n> this...\n> \n> #define ELOG_TIMESTAMPS\n> \n> [I also came back later and tried uncommenting #define USE_SYSLOG and repeating\n> the process, but to no avail...]\n> \n> Then I ran make, etc, created the file $PGDATA/pg_options...\n> \n> % cat $PGDATA/pg_options\n> verbose=2\n> query\n> syslog=2\n> \n> And restarted the server...and still no timestamps.\n> \n> I verified most everything syslog-wise (configured in /etc/syslog.conf) is\n> being sent to /var/log/messages...\n> \n> Anyone notice what am I missing?\n> \n> Cheers,\n> Ed Loehr\n> \n> [ps - Forgive my spewage...I mistakenly sent this out of context to\n> pgsql-general as well...]\n> \n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 17 Dec 1999 13:39:38 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postmaster options, process spawning, logging, etc."
}
] |
[
{
"msg_contents": "Since the new parallel regression tests I've always had a few\nlock NOTICE messages like the following.\n\nIn my latest run (from a CVS update today) I seem to have\na significantly higher level of failures and NOTICE messages.\n\nI'm not sure where to look for the problem.\n\nPlatform Is Solaris 7 SPARC.\n\nHere's just a sample from the 1st few parallel tests:-\n\nbash-2.03$ ./checkresults | more\n====== boolean ======\n0a1,2\n> NOTICE: LockRelease: locktable lookup failed, no lock\n> NOTICE: LockRelease: locktable lookup failed, no lock\n====== varchar ======\n0a1,2\n> NOTICE: LockRelease: locktable lookup failed, no lock\n> NOTICE: LockRelease: xid table corrupted\n====== int2 ======\n0a1,4\n> NOTICE: LockRelease: locktable lookup failed, no lock\n> NOTICE: LockRelease: locktable lookup failed, no lock\n> NOTICE: LockRelease: locktable lookup failed, no lock\n> NOTICE: LockRelease: locktable lookup failed, no lock\n10c14\n\n",
"msg_date": "Wed, 15 Dec 1999 23:33:17 +0000 (GMT)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "NOTICE: LockRelease: locktable lookup failed, no lock"
},
{
"msg_contents": "Keith Parks <[email protected]> writes:\n> Since the new parallel regression tests I've always had a few\n> lock NOTICE messages like the following.\n\nInteresting --- I had not run the parallel test for a while,\nbut I tried it just now and got half a dozen of these:\n\tNOTICE: LockRelease: locktable lookup failed, no lock\n(Otherwise the tests all passed.)\n\nSomething's been broken fairly recently. Does anyone have an\nidea what changed?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Dec 1999 22:36:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NOTICE: LockRelease: locktable lookup failed, no lock "
},
{
"msg_contents": "> Keith Parks <[email protected]> writes:\n> > Since the new parallel regression tests I've always had a few\n> > lock NOTICE messages like the following.\n> \n> Interesting --- I had not run the parallel test for a while,\n> but I tried it just now and got half a dozen of these:\n> \tNOTICE: LockRelease: locktable lookup failed, no lock\n> (Otherwise the tests all passed.)\n> \n> Something's been broken fairly recently. Does anyone have an\n> idea what changed?\n\nGood question. I can't imagine what it would be. We didn't do much,\nand parallel regression is not that old.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 15 Dec 1999 22:46:31 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NOTICE: LockRelease: locktable lookup failed, no lock"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> > Keith Parks <[email protected]> writes:\n> > > Since the new parallel regression tests I've always had a few\n> > > lock NOTICE messages like the following.\n> >\n> > Interesting --- I had not run the parallel test for a while,\n> > but I tried it just now and got half a dozen of these:\n> > NOTICE: LockRelease: locktable lookup failed, no lock\n> > (Otherwise the tests all passed.)\n> >\n> > Something's been broken fairly recently. Does anyone have an\n> > idea what changed?\n>\n> Good question. I can't imagine what it would be. We didn't do much,\n> and parallel regression is not that old.\n\n While building the parallel test, I've ran the tests dozens\n of times to figure out which tests can be run grouped, which\n not. Haven't seen such a message while doing so.\n\n Also, I used it after another dozen times without. Now I see\n them too. So I assume it was a recent change that introduced\n the problem.\n\n And if not, much better. Would show that running all tests\n serialized had hidden a bug for a long time.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 16 Dec 1999 13:08:32 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NOTICE: LockRelease: locktable lookup failed, no lock"
}
] |
[
{
"msg_contents": "Greetings,\n\nIf there was corrupt data in a table, how would one go about finding it?\nIs it possible that corrupt data could cause a backend crash?\n\nThank you,\nMatthew\n",
"msg_date": "Wed, 15 Dec 1999 18:40:40 -0500",
"msg_from": "Matthew Hagerty <[email protected]>",
"msg_from_op": true,
"msg_subject": "Finding corrupt data"
},
{
"msg_contents": "Matthew Hagerty <[email protected]> writes:\n> Is it possible that corrupt data could cause a backend crash?\n\nAbsolutely. The scenario I've seen most is that the length word of a\nvariable-length field value (a \"varlena\" value in pghackers-speak)\ncontains garbage. The backend comes along and tries to allocate space\nequal to the claimed field length in order to copy the value to\nsomeplace, and the usual result is that the backend process exceeds\nits allowed memory usage and is summarily killed by the kernel.\n\n> If there was corrupt data in a table, how would one go about finding it?\n\nThe brute-force way is to do a SELECT * or COPY TO and see if the\nbackend survives ;-). If not, narrowing down which record is bad\nis left as an exercise for the student...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Dec 1999 22:09:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Finding corrupt data "
},
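The "exercise for the student" can be done crudely with LIMIT/OFFSET, halving the window until the probe that kills the backend pins down the bad tuple (a sketch only; mytable is a placeholder, and any probe that touches the bad row will of course crash the backend again):

    SELECT * FROM mytable LIMIT 5000 OFFSET 0;     -- does the first half survive?
    SELECT * FROM mytable LIMIT 5000 OFFSET 5000;  -- then the second half, and so on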
{
"msg_contents": "Tom Lane wrote:\n\n> > If there was corrupt data in a table, how would one go about finding it?\n>\n> The brute-force way is to do a SELECT * or COPY TO and see if the\n> backend survives ;-). If not, narrowing down which record is bad\n> is left as an exercise for the student...\n\nOne RDBMS I used had a utility called 'dbcheck' which did some sort of\nexamination of indices, tables, etc., and issued an 'OK' or 'CORRUPT' for\neach examined object. Such a utility for pgsql might simply do some\ncombination of SELECT * or COPY TO as you suggest above.\n\nWould it be reasonable to put such a tool make its way onto the todo list, in\nthe absence of better alternatives? I'd argue it's important for pgsql's\nfuture popular prospects to be able to be _operated_ (i.e., live dbs backed\nup, diagnosed as corrupted, and restored) by folks who may know very little\nabout the internals or the design of the schema/code. Quick and correct\ndiagnosis of the problem is the key for them. Such a tool would seem to go a\nlong way toward that end.\n\nCheers,\nEd Loehr\n\n",
"msg_date": "Thu, 16 Dec 1999 02:05:18 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Finding corrupt data"
},
{
"msg_contents": "Ed Loehr <[email protected]> writes:\n> One RDBMS I used had a utility called 'dbcheck' which did some sort of\n> examination of indices, tables, etc., and issued an 'OK' or 'CORRUPT' for\n> each examined object. Such a utility for pgsql might simply do some\n> combination of SELECT * or COPY TO as you suggest above.\n\n> Would it be reasonable to put such a tool make its way onto the todo list, in\n> the absence of better alternatives?\n\nWhat'd be really nice is some kind of 'fsck' for databases. But it'd be\na lot of work to write one, and more work to keep it up to date in the\nface of continuing changes to the database representation.\n\nOne simpler thing that I'd like to see is for VACUUM to recreate indexes\nfrom scratch instead of trying to compact them. This would provide a\nvery simple recovery procedure for corrupted indexes, and it seems\npossible that it'd actually be faster than what VACUUM does now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Dec 1999 09:46:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Finding corrupt data "
},
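Until VACUUM does rebuild indexes from scratch, the manual equivalent of the recovery Lane describes is simply to drop and recreate the suspect index (sketch; the definition shown is the one from the dumpall thread above, standing in for whichever index is corrupted):

    DROP INDEX ethernet_ip_key;
    CREATE UNIQUE INDEX ethernet_ip_key ON ethernet USING btree (ip);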
{
"msg_contents": "> Tom Lane wrote:\n> \n> > > If there was corrupt data in a table, how would one go about finding it?\n> >\n> > The brute-force way is to do a SELECT * or COPY TO and see if the\n> > backend survives ;-). If not, narrowing down which record is bad\n> > is left as an exercise for the student...\n> \n> One RDBMS I used had a utility called 'dbcheck' which did some sort of\n> examination of indices, tables, etc., and issued an 'OK' or 'CORRUPT' for\n> each examined object. Such a utility for pgsql might simply do some\n> combination of SELECT * or COPY TO as you suggest above.\n\nDoes vacuum already do that?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 16 Dec 1999 11:29:05 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Finding corrupt data"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> > One RDBMS I used had a utility called 'dbcheck' which did some sort of\n> > examination of indices, tables, etc., and issued an 'OK' or 'CORRUPT' for\n> > each examined object. Such a utility for pgsql might simply do some\n> > combination of SELECT * or COPY TO as you suggest above.\n>\n> Does vacuum already do that?\n\nNot as far as I can tell. Here's the kind of output I see from vacuum:\n\nDEBUG: --Relation pg_class--\nDEBUG: Pages 10: Changed 0, Reapped 1, Empty 0, New 0; Tup 695: Vac 0, Keep/VTL\n0/0, Crash 0, UnUsed 35, MinLen 102, MaxLen 132; Re-using: Free/Avail. Space\n3828/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\nDEBUG: Index pg_class_relname_index: Pages 16; Tuples 695: Deleted 0. Elapsed\n0/0 sec.\nDEBUG: Index pg_class_oid_index: Pages 7; Tuples 695: Deleted 0. Elapsed 0/0\nsec.\n\nAm I missing something?\n\nCheers,\nEd Loehr\n\n",
"msg_date": "Thu, 16 Dec 1999 11:29:32 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Finding corrupt data"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> > > One RDBMS I used had a utility called 'dbcheck' which did some sort of\n> > > examination of indices, tables, etc., and issued an 'OK' or 'CORRUPT' for\n> > > each examined object. Such a utility for pgsql might simply do some\n> > > combination of SELECT * or COPY TO as you suggest above.\n> >\n> > Does vacuum already do that?\n> \n> Not as far as I can tell. Here's the kind of output I see from vacuum:\n> \n> DEBUG: --Relation pg_class--\n> DEBUG: Pages 10: Changed 0, Reapped 1, Empty 0, New 0; Tup 695: Vac 0, Keep/VTL\n> 0/0, Crash 0, UnUsed 35, MinLen 102, MaxLen 132; Re-using: Free/Avail. Space\n> 3828/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\n> DEBUG: Index pg_class_relname_index: Pages 16; Tuples 695: Deleted 0. Elapsed\n> 0/0 sec.\n> DEBUG: Index pg_class_oid_index: Pages 7; Tuples 695: Deleted 0. Elapsed 0/0\n> sec.\n> \n> Am I missing something?\n\nVacuum does catch some problems, not all of them.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 16 Dec 1999 12:31:10 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Finding corrupt data"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> > Bruce Momjian wrote:\n> >\n> > > > One RDBMS I used had a utility called 'dbcheck' which did some sort of\n> > > > examination of indices, tables, etc., and issued an 'OK' or 'CORRUPT' for\n> > > > each examined object. Such a utility for pgsql might simply do some\n> > > > combination of SELECT * or COPY TO as you suggest above.\n> > >\n> > > Does vacuum already do that?\n> >\n> > Not as far as I can tell. Here's the kind of output I see from vacuum:\n> >\n> > DEBUG: --Relation pg_class--\n> > DEBUG: Pages 10: Changed 0, Reapped 1, Empty 0, New 0; Tup 695: Vac 0, Keep/VTL\n> > 0/0, Crash 0, UnUsed 35, MinLen 102, MaxLen 132; Re-using: Free/Avail. Space\n> > 3828/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\n> > DEBUG: Index pg_class_relname_index: Pages 16; Tuples 695: Deleted 0. Elapsed\n> > 0/0 sec.\n> > DEBUG: Index pg_class_oid_index: Pages 7; Tuples 695: Deleted 0. Elapsed 0/0\n> > sec.\n> >\n> > Am I missing something?\n>\n> Vacuum does catch some problems, not all of them.\n\nYes, and vacuum appears to be the only known remedy to my current postgresql\nshowstopper. For that, I'm grateful. However, I think that misses the point I'm\ntrying to convey...\n\nThere are a three basic tasks critically important to an operationally viable\ndatabase, from my perspective. First, I need to be able to easily create a backup of\nthe database at any point. The pg_dump appears to serve that function.\n\nSecond, I need to be able to restore from a backup copy if something goes terribly\nwrong. Psql coupled with pg_dump output seems to support that. So far, so good.\n\nThird, and most importantly, I need to be able to tell *when* I need to restore from\na backup. A restoration from a backup copy usually involves a likely loss of data,\nand that can be a Very Big Deal. \"Is this database corrupt?\", is a critically\nimportant question. And I need to be able to answer it without knowing the details\nof postgresql C code. If I can't somehow answer that question when a problem arises,\nthe total cost of ownership of the database jumps pretty dramatically due to wasted\ntime and data loss, and the operability/viability drops in tandem.\n\nCheers,\nEd Loehr\n\n",
"msg_date": "Thu, 16 Dec 1999 11:59:30 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Finding corrupt data"
}
] |
[
{
"msg_contents": "\n> Which dbms's allow it? Or is there an error in my query (I don't\n> use column numbering, I'm into names myself)?\n\nInformix and DB/2 allow column numbering in the group by clause.\n\nAndreas\n",
"msg_date": "Thu, 16 Dec 1999 09:30:23 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: [HACKERS] SELECT ... AS ... names in WHERE/GROUP BY/HAVIN\n\tG"
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> Informix and DB/2 allow column numbering in the group by clause.\n\nWhat do they do with\n\n\tSELECT foo AS bar FROM table GROUP BY bar\n\n?\n\nWhat do they do if bar is the real name of another column in the table?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Dec 1999 09:23:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: [HACKERS] SELECT ... AS ... names in WHERE/GROUP BY/HAVIN\n\tG"
}
] |
[
{
"msg_contents": "subscribe\n\n_____________________________________________\n������--�Ƚ��й��˵����ϼ� http://www.263.net\n������� �ʼ���־ ǩ���ʼ� �ʼ����� �ʼ�����\n�������� ����վ�� ������Ϸ �������� ���ϹҺ�\n�������� ����ɱ�� �����г� �������� ��������\nŵ����ȫ������e·ƽ��\n",
"msg_date": "Thu, 16 Dec 1999 20:05:13 +0800 (CST)",
"msg_from": "\"feng hao\" <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
}
] |
[
{
"msg_contents": "\n\n\nWhat is changed on INSERT/max() in v7.0? (CVS - today).\n\nSee:\n\n--- new version:\n\ntemplate1=> select * from pg_group;\n groname | grosysid | grolist\n---------+----------+---------\n _dummy_ | 0 | {}\n(1 row)\n\ntemplate1=> INSERT INTO pg_group VALUES ('abg_root', max(grosysid)+1, '{}');\nERROR: attribute 'grosysid' not found\n\n\n--- old version (6.5):\n\ntemplate1=> select * from pg_group;\ngroname|grosysid|grolist\n-------+--------+-------\n_dummy_| 0|{}\n(1 row)\n\ntemplate1=> INSERT INTO pg_group VALUES ('abg_root', max(grosysid)+1, '{}');\nINSERT 10173952 1\n\n\n\t\t\t\t\t\t\tKarel\n\n",
"msg_date": "Thu, 16 Dec 1999 14:06:56 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "INSERT in 7.0"
},
{
"msg_contents": "Karel Zak - Zakkr <[email protected]> writes:\n> What is changed on INSERT/max() in v7.0? (CVS - today).\n\n> template1=> INSERT INTO pg_group VALUES ('abg_root', max(grosysid)+1, '{}');\n> ERROR: attribute 'grosysid' not found\n\nThat's the way it should work, AFAICS. VALUES() isn't supposed to\ncontain anything except constant expressions. Perhaps what you are\nafter would be more properly expressed as\n\nINSERT INTO pg_group SELECT 'abg_root', max(grosysid)+1, '{}' FROM pg_group;\n\n6.5 may have accepted the other, but it was an artifact of extremely\nbroken code...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Dec 1999 09:35:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INSERT in 7.0 "
}
] |
[
{
"msg_contents": "\n> Zeugswetter Andreas SB <[email protected]> writes:\n> > Informix and DB/2 allow column numbering in the group by clause.\n> \n> What do they do with\n> \n> \tSELECT foo AS bar FROM table GROUP BY bar\n> \n> What do they do if bar is the real name of another column in \n> the table?\n\nThey don't allow labels, only numbers, \n(SELECT foo AS bar FROM table GROUP BY 1)\n\nIn the special case where a label collides with a colname,\nwe need to use the colname, because that behavior is \nruled by the standard (since it doesn't allow a label).\n\nThe order by clause is the other way around.\n\nDB Vendors probably disallow this syntax,\nbecause the two different interpretations would be a bit awkward.\n\nBest would of course be if the standard allowed labels in the \ngroup by and where clause and take label before colname.\n\nAndreas\n",
"msg_date": "Thu, 16 Dec 1999 15:52:26 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SELECT ... AS ... names in WHERE/GROUP BY/HAVIN G "
}
] |
[
{
"msg_contents": "\tI�m trying to implement a security scheme based on groups, so I tried to\nwrite some little functions in C. Unfortunately, although I managed to read the \ninternal format, I couldn�t create the ArrayType to return to the backend. It shouldn�t\n be very difficult. I mean, it�s only one dimension.\n\tI tried to use a string and array_in, too; but array_in keeps saying \n\"array_in:need to specify dimension\"\n\tAre there some functions already written to implement this kind of security.\n",
"msg_date": "Thu, 16 Dec 1999 12:49:26 -0300",
"msg_from": "Nicolas Nappe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Arrays and pg_groupr"
}
] |
[
{
"msg_contents": "\tHow can I display the current privileges of a group or\na user in a particular table? I could read relacl from pg_class,\nbut I can�t process the acltype. In acl.c, acltype is an int4!!!\n\tIs there any code to reuse?\n\n\t\t\t\tNicolas Nappe\n",
"msg_date": "Thu, 16 Dec 1999 12:55:03 -0300",
"msg_from": "Nicolas Nappe <[email protected]>",
"msg_from_op": true,
"msg_subject": "access control lists ( acl )"
}
] |
[
{
"msg_contents": "\n> So, according to Oracle's view of the world, HAVING is orrect \n> because it\n> rejects aliases, but GROUP BY is broken because it accepts them.\n\nJust because it is more powerful than the standard does not mean it is\nbroken.\nThe only thing, that is broken is that the alias is taken before the\ncolname,\nand thus results in wrong output for a standard conformant query.\n\nAndreas\n",
"msg_date": "Thu, 16 Dec 1999 17:22:54 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] SELECT ... AS ... names in WHERE/GROUP BY/HAVING"
}
] |
[
{
"msg_contents": "Hi all,\n\nI have a problem using \\copy to load data into tables.\n\nI have to load data into a table that contains data type fields with\nNULL values.\nI tried using \\N but it doesn't work.\nWhat can I do to insert a null into a data field?\n\nTo arrive this conclusion I had to try many solutions because I cannot\nunderstand\nthe \\copy error messages..\n\nWhat about to have the row number and the error type instead of that...\nhygea=> \\copy movimentazioni from 4;\npqReadData() -- read() failed: errno=32\nBroken pipe\nPQendcopy: resetting connection\nCopy failed.\n\nComments!\n\nJose'\n",
"msg_date": "Thu, 16 Dec 1999 17:27:34 +0100",
"msg_from": "Jose Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "\\copy problem"
},
{
"msg_contents": "On 1999-12-16, Jose Soares mentioned:\n\n> I have a problem using \\copy to load data into tables.\n> \n> I have to load data into a table that contains data type fields with\n> NULL values.\n> I tried using \\N but it doesn't work.\n> What can I do to insert a null into a data field?\n\nWorks for me. I also just messed with that part in the devel sources the\nother day and I don't see a reason why it wouldn't. Perhaps you could run\nthe COPY command instead (and make sure the file is accessible to the\nserver process) or simply run a COPY FROM STDIN; and enter a few things by\nhand and see what you get.\n\n> the \\copy error messages..\n> \n> What about to have the row number and the error type instead of that...\n> hygea=> \\copy movimentazioni from 4;\n> pqReadData() -- read() failed: errno=32\n> Broken pipe\n> PQendcopy: resetting connection\n> Copy failed.\n\nWhen the backend sends garbage it cannot possibly send the error\nmessage. The error was that the read from the connection failed. Of course\none could argue why that is ... Hmm.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Fri, 17 Dec 1999 01:32:19 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] \\copy problem"
},
{
"msg_contents": "Sorry Peter, I don't say you any thing, I'm using psql on win95.\n\n1. I see psql for Linux requires \\N only for data fields with null\nvalues other fields (char,int,etc) \n doesn't need \\N. Why ?\n2. psql for M$Windows95 has a different behavior. For example I can't\ninsert date fields even using \\N\n\nI tried to load a file by changing every NULL value of date fields with\n\\N and it works\non Linux psql, but Win95 psql shows me the following message:\npqReadData() -- read() failed: errno=0\nNo error\nPQendcopy: resetting connection\nCopy failed.\n \nAny ideas ? \n\n\n\nPeter Eisentraut wrote:\n> \n> On 1999-12-16, Jose Soares mentioned:\n> \n> > I have a problem using \\copy to load data into tables.\n> >\n> > I have to load data into a table that contains data type fields with\n> > NULL values.\n> > I tried using \\N but it doesn't work.\n> > What can I do to insert a null into a data field?\n> \n> Works for me. I also just messed with that part in the devel sources the\n> other day and I don't see a reason why it wouldn't. Perhaps you could run\n> the COPY command instead (and make sure the file is accessible to the\n> server process) or simply run a COPY FROM STDIN; and enter a few things by\n> hand and see what you get.\n> \n> > the \\copy error messages..\n> >\n> > What about to have the row number and the error type instead of that...\n> > hygea=> \\copy movimentazioni from 4;\n> > pqReadData() -- read() failed: errno=32\n> > Broken pipe\n> > PQendcopy: resetting connection\n> > Copy failed.\n> \n> When the backend sends garbage it cannot possibly send the error\n> message. The error was that the read from the connection failed. Of course\n> one could argue why that is ... Hmm.\n> \n> --\n> Peter Eisentraut Sernanders v�g 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n> \n> ************\n",
"msg_date": "Fri, 17 Dec 1999 10:39:49 +0100",
"msg_from": "Jose Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] \\copy problem"
}
] |
[
{
"msg_contents": "[email protected] (Jan Wieck)\n>Bruce Momjian wrote:\n>\n>> > Keith Parks <[email protected]> writes:\n>> > > Since the new parallel regression tests I've always had a few\n>> > > lock NOTICE messages like the following.\n>> >\n>> > Interesting --- I had not run the parallel test for a while,\n>> > but I tried it just now and got half a dozen of these:\n>> > NOTICE: LockRelease: locktable lookup failed, no lock\n>> > (Otherwise the tests all passed.)\n>> >\n>> > Something's been broken fairly recently. Does anyone have an\n>> > idea what changed?\n>>\n>> Good question. I can't imagine what it would be. We didn't do much,\n>> and parallel regression is not that old.\n>\n>\n> Also, I used it after another dozen times without. Now I see\n> them too. So I assume it was a recent change that introduced\n> the problem.\n\nI'm not sure it's that recent, I think I've had 1 or 2 such errors\never since I've been running the \"runcheck\".\n\nWhat I will say is that the parallel running arrived at around the\nsame time as the new psql and I didn't have an old version available\nto run the tests until sometime after. (had to download and build 6.5!)\n \n\n>\n> And if not, much better. Would show that running all tests\n> serialized had hidden a bug for a long time.\n>\n\nQuite possible, although something recent has aggrevated it somewhat ;-)\n\nKeith.\n\n",
"msg_date": "Thu, 16 Dec 1999 19:24:35 +0000 (GMT)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] NOTICE: LockRelease: locktable lookup failed, no lock"
},
{
"msg_contents": "> [email protected] (Jan Wieck)\n> >Bruce Momjian wrote:\n> >\n> >> > Keith Parks <[email protected]> writes:\n> >> > > Since the new parallel regression tests I've always had a few\n> >> > > lock NOTICE messages like the following.\n> >> >\n> >> > Interesting --- I had not run the parallel test for a while,\n> >> > but I tried it just now and got half a dozen of these:\n> >> > NOTICE: LockRelease: locktable lookup failed, no lock\n> >> > (Otherwise the tests all passed.)\n> >> >\n> >> > Something's been broken fairly recently. Does anyone have an\n> >> > idea what changed?\n> >>\n> >> Good question. I can't imagine what it would be. We didn't do much,\n> >> and parallel regression is not that old.\n> >\n> >\n> > Also, I used it after another dozen times without. Now I see\n> > them too. So I assume it was a recent change that introduced\n> > the problem.\n> \n> I'm not sure it's that recent, I think I've had 1 or 2 such errors\n> ever since I've been running the \"runcheck\".\n> \n> What I will say is that the parallel running arrived at around the\n> same time as the new psql and I didn't have an old version available\n> to run the tests until sometime after. (had to download and build 6.5!)\n\nHas anyone used CVS -D date to backtrack to the date it first started?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 16 Dec 1999 14:58:40 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NOTICE: LockRelease: locktable lookup failed, no lock"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> > [email protected] (Jan Wieck)\n> > >Bruce Momjian wrote:\n> > >\n> > >> > NOTICE: LockRelease: locktable lookup failed, no lock\n> > >> > (Otherwise the tests all passed.)\n>\n> Has anyone used CVS -D date to backtrack to the date it first started?\n\n It also spit out a \"Buffer leak\" message once for me today.\n Did not appear again. But be warned.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 17 Dec 1999 17:54:16 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NOTICE: LockRelease: locktable lookup failed, no lock"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]]On Behalf Of Keith Parks\n> \n> [email protected] (Jan Wieck)\n> >Bruce Momjian wrote:\n> >\n> >> > Keith Parks <[email protected]> writes:\n> >> > > Since the new parallel regression tests I've always had a few\n> >> > > lock NOTICE messages like the following.\n> >> >\n> >> > Interesting --- I had not run the parallel test for a while,\n> >> > but I tried it just now and got half a dozen of these:\n> >> > NOTICE: LockRelease: locktable lookup failed, no lock\n> >> > (Otherwise the tests all passed.)\n> >> >\n> >> > Something's been broken fairly recently. Does anyone have an\n> >> > idea what changed?\n> >>\n> >> Good question. I can't imagine what it would be. We didn't do much,\n> >> and parallel regression is not that old.\n> >\n> >\n> > Also, I used it after another dozen times without. Now I see\n> > them too. So I assume it was a recent change that introduced\n> > the problem.\n> \n> I'm not sure it's that recent, I think I've had 1 or 2 such errors\n> ever since I've been running the \"runcheck\".\n>\n\nIt seems that conflicts of the initialization of some backends cause\nabove NOTICE messages.\nThose backends would use the same XIDTAGs for the same relations\nin case of LockAcquire()/LockRelease() because xids of those backends\nare'nt set before starting the first command. When one of the backend\ncall LockReleaseAll(),it would release all together.\n\nIf we set pid member of XIDTAG to the pid of each backend\n,we are able to distinguish XIDTAGs.\nBut there may be some reasons that the member is used only for\nuserlock.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Sat, 18 Dec 1999 08:30:54 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] NOTICE: LockRelease: locktable lookup failed, no lock"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> It seems that conflicts of the initialization of some backends cause\n> above NOTICE messages.\n> Those backends would use the same XIDTAGs for the same relations\n> in case of LockAcquire()/LockRelease() because xids of those backends\n> are'nt set before starting the first command. When one of the backend\n> call LockReleaseAll(),it would release all together.\n\nOooh, that would nicely explain Keith's observation that it seems to\nhappen at backend startup. I guess we need to select an initial XID\nearlier during startup than we now do?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Dec 1999 18:39:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NOTICE: LockRelease: locktable lookup failed, no lock "
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > It seems that conflicts of the initialization of some backends cause\n> > above NOTICE messages.\n> > Those backends would use the same XIDTAGs for the same relations\n> > in case of LockAcquire()/LockRelease() because xids of those backends\n> > are'nt set before starting the first command. When one of the backend\n> > call LockReleaseAll(),it would release all together.\n> \n> Oooh, that would nicely explain Keith's observation that it seems to\n> happen at backend startup. I guess we need to select an initial XID\n> earlier during startup than we now do?\n>\n\nI'm not sure it's possible or not.\nIf startup sequence in InitPostgres() is changed,we may hardly\nfind the place to start transaction during backend startup.\n\nSeems the unique place we could call StartTransacationCommand()\nduring backend startup is between InitCatalogCahe() and InitUserId()\nin InitPostgres() now.\nI tried the following patch and it seems work at least now.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\nIndex: tcop/postgres.c\n===================================================================\nRCS file: /home/cvs/pgcurrent/backend/tcop/postgres.c,v\nretrieving revision 1.8\ndiff -c -r1.8 postgres.c\n*** tcop/postgres.c\t1999/11/17 02:12:46\t1.8\n--- tcop/postgres.c\t1999/12/19 02:35:12\n***************\n*** 1474,1480 ****\n \n \ton_shmem_exit(remove_all_temp_relations, NULL);\n \n! \tparser_input = makeStringInfo(); /* initialize input buffer */\n \n \t/* \n \t * Send this backend's cancellation info to the frontend. \n--- 1474,1486 ----\n \n \ton_shmem_exit(remove_all_temp_relations, NULL);\n \n! \t{\n! \t\tMemoryContext\toldcontext;\n! \n! \t\toldcontext = MemoryContextSwitchTo(TopMemoryContext);\n! \t\tparser_input = makeStringInfo(); /* initialize input buffer */\n! \t\tMemoryContextSwitchTo(oldcontext);\n! \t}\n \n \t/* \n \t * Send this backend's cancellation info to the frontend. \nIndex: utils/init/postinit.c\n===================================================================\nRCS file: /home/cvs/pgcurrent/backend/utils/init/postinit.c,v\nretrieving revision 1.6\ndiff -c -r1.6 postinit.c\n*** utils/init/postinit.c\t1999/11/22 01:28:26\t1.6\n--- utils/init/postinit.c\t1999/12/19 02:50:29\n***************\n*** 546,551 ****\n--- 546,554 ----\n \t */\n \tInitCatalogCache();\n \n+ \t/* Could we start transaction here ? */\n+ \tif (!bootstrap)\n+ \t\tStartTransactionCommand();\n \t/*\n \t * Set ourselves to the proper user id and figure out our postgres\n \t * user id. If we ever add security so that we check for valid\n\n\n",
"msg_date": "Sun, 19 Dec 1999 18:25:39 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] NOTICE: LockRelease: locktable lookup failed, no lock "
}
] |
[
{
"msg_contents": "Yes, you are right, of course, it doesn't mean that it's incorrect.\nHowever, assuming that Oracle adheres strictly to the standard (which is a\ngood, but not infallible, assumption), it means that we don't. Of course,\nwe may just extend the standard, but in this particular area, I'm not sure\nthat it's a good idea, because it can be very confusing, and lead to\ninadvertent mistakes, which can be very difficult to find.\n\nMikeA\n\n-----Original Message-----\nFrom: Zeugswetter Andreas SB\nTo: 'Ansley, Michael'; '[email protected] '\nSent: 99/12/16 06:22\nSubject: AW: [HACKERS] SELECT ... AS ... names in WHERE/GROUP BY/HAVING\n\n\n> So, according to Oracle's view of the world, HAVING is orrect \n> because it\n> rejects aliases, but GROUP BY is broken because it accepts them.\n\nJust because it is more powerful than the standard does not mean it is\nbroken.\nThe only thing, that is broken is that the alias is taken before the\ncolname,\nand thus results in wrong output for a standard conformant query.\n\nAndreas\n",
"msg_date": "Thu, 16 Dec 1999 22:31:36 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] SELECT ... AS ... names in WHERE/GROUP BY/HAVING"
}
] |