[
{
"msg_contents": "Hi,\n\nI have committed the first implementation of automatic code\nconversion between UNICODE and other encodings. Currently\nISO8859-[1-5] and EUC_JP are supported. Support for other encodings is\ncoming soon. Testing of ISO8859 is welcome, since I have almost no\nknowledge of European languages and have no idea how to test with\nthem.\n\nHow to use:\n\n1. configure and install PostgreSQL with the --enable-multibyte option\n\n2. create a database with UNICODE encoding\n\n\t$ createdb -E UNICODE unicode\n\n3. create a table and fill it with UNICODE (UTF-8) data. You could\neven create a table where each column holds a different language.\n\n\tcreate table t1(latin1 text, latin2 text);\n\n4. set your terminal setting to (for example) ISO8859-2 or whatever\n\n5. start psql\n\n6. set the client encoding to ISO8859-2\n\n\t\\encoding LATIN2\n\n7. extract the ISO8859-2 data from the UNICODE-encoded table\n\n\tselect latin2 from t1;\n\nP.S. I have used bsearch() to search code spaces. Is bsearch()\nportable enough?\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 12 Oct 2000 17:11:41 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Automatic code conversion between UNICODE and other encodings"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> P.S. I have used bsearch() to search code spaces. Is bsearch()\n> portable enough?\n\nAccording to my references, bsearch() was originally a SysV localism\nbut is a required library function in ANSI C. So in theory it should\nbe portable enough ... but I notice we have implementations in\nbackend/ports for strtol() and strtoul(), which are also required by\nANSI C, so apparently some people are or were running Postgres on\nmachines that are a few bricks shy of a full ANSI library.\n\nI suggest waiting to see if anyone complains. If so, we should be\nable to write up a substitute bsearch() and add it to ports/.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Oct 2000 10:50:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automatic code conversion between UNICODE and other encodings "
}
]
[
{
"msg_contents": "It seems that I'm now getting some delayed emails from around the 27th of\nSeptember. It's probably at my end, but if you sent me anything important,\nlike patches etc, please can you resend it to me.\n\nThanks, Peter\n\n-- \nPeter T Mount [email protected] http://www.retep.org.uk\nPostgreSQL JDBC Driver http://www.retep.org.uk/postgres/\nJava PDF Generator http://www.retep.org.uk/pdf/\n\n\n",
"msg_date": "Thu, 12 Oct 2000 09:56:56 +0100 (BST)",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "Delayed mail"
}
]
[
{
"msg_contents": "\n> > being deleted, then if the system crashes part way through, \n> it should be\n> > possible to continue after the system is brought up, no?\n> \n> If it crashes in the middle, some rows have the column \n> removed, and some\n> do not.\n\nWe would need to know where this separation is, but we cannot do a rollback,\nonly a rollforward. Thus this is probably not acceptable.\n\nAndreas\n",
"msg_date": "Thu, 12 Oct 2000 13:22:01 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: ALTER TABLE DROP COLUMN"
}
]
[
{
"msg_contents": "Sorry for the late reply, but I was on vacation (my second daughter was born).\n\n> After looking at the rule rewriter some more, I realized that the only\n> way to push all permissions checks to execution time is not \n> only to keep\n> skipAcl, but to generalize it. The problem is with checks on the view\n> itself --- if you do INSERT INTO someView, which gets rewritten into\n> an insert to someRealTable, then what you want the executor \n> to check for\n> is\n> \tWrite access on someView by current user\n> \tWrite access on someRealTable by owner of rule\n> which is infeasible with the existing code because the executor only\n> checks for write access on the real target table (someRealTable).\n\nWhich is, iirc, incorrect. It should only check write access to someView.\nThe checks on someRealTable should be done as the view owner.\nViews are often used for horizontal permissions.\n\n> \n> What I have now got, and hope to commit today, is the following fields\n> in RangeTblEntry, replacing skipAcl:\n> \n> * checkForRead, checkForWrite, and checkAsUser control \n> run-time access\n> * permissions checks. A rel will be checked for read \n> or write access\n> * (or both, or neither) per checkForRead and checkForWrite. If\n> * checkAsUser is not InvalidOid, then do the \n> permissions checks using\n> * the access rights of that user, not the current \n> effective user ID.\n> * (This allows rules to act as setuid gateways.)\n> \n> bool checkForRead; /* check rel for read access */\n> bool checkForWrite; /* check rel for write access */\n> Oid checkAsUser; /* if not zero, check access \n> as this user */\n\nI don't know, but imho one field for all permissions would have been better,\nlike we discussed for the permissions system table, since there are more rights\nin SQL than read/write (e.g. write is separated into insert, update and delete).\nOr did I not understand this correctly?\n\n> NOTE: there is a subtle change here. A rule used to be taken as\n> executing setUID to the owner of the table the rule is attached to.\n> Now it is executed as if setUID to the person who created the rule,\n> who could be a different user if the table owner gave away \n> RULE rights.\n> I think this is a more correct behavior, but I'm open to argument.\n> It'd be easy to make CREATE RULE store the table owner's OID instead,\n> if anyone wants to argue that that was the right thing.\n\nNo, I think setUID to the rule creator is correct.\n\nAndreas\n",
"msg_date": "Thu, 12 Oct 2000 16:19:36 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: Reimplementing permission checks for rules "
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> I don't know, but imho one field for all permissions would have been\n> better, like we discussed for the permissions system table, since\n> there are more rights in SQL than read/write (e.g. write is separated\n> into insert, update and delete)\n\nNot really necessary in the current implementation. checkForWrite\nessentially identifies the target table for the operation, and then\nthe query's commandType is used to decide exactly which flavor of\nwrite access to check for.\n\nIIRC, the ACL code doesn't have the right set of primitive access types\nanyway to match the SQL spec's requirements, but that's a task for\nanother day.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Oct 2000 10:36:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: Reimplementing permission checks for rules "
}
]
[
{
"msg_contents": "\n> Well, that would only be part of what I'm looking for. The thing I like about \n> informix is that I can make a Level 0 backup of all the data (equal to the \n> pg_dumpall), and then leave the logical logs downloading continuously, so that \n> if in one moment the system breaks, I restore the Level 0 backup and then \n> apply the logical logs, which are the small changes that have been done to \n> the database in each transaction, administration, etc.\n> \n> Could this be added? I am willing to help with the coding.\n\nThis is what Version 7.1 WAL is all about. \nThere might be some help wanted in one of the possible backup methods:\n\t1. a pg_dumpall restore, and a subsequent restore of logs\n\t2. a restore of a \"physical backup of db files\" + subsequent restore of logs\n\nI think Vadim has the first way in the works.\nThe second way would need some work and testing, and probably some utility to \nback up the files in the correct order \n(could be something that calls tar with appropriate arguments).\nI am still pretty sure that a physical backup without synchronization with the\npostmaster is possible with a little extra work, e.g. checking index validity\nafter restore and rebuilding if bogus. The better way would probably be to not back up \nindex files at all and rebuild them after restore.\n\nA distinct suffix for different file types would definitely help in this area (.dat, .idx, .tmp ...).\nI think this would be a good idea overall.\n\nAndreas\n",
"msg_date": "Thu, 12 Oct 2000 16:21:23 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: backup and restore"
}
]
[
{
"msg_contents": "\n> I am a long time ordbms proponent (illustra, informix and\n> now postgresql). And I have heard mixed information\n> regarding Oracle's extensibility. Do they use a unified \n> type system? Are cartriges separate processes? Do the \n> separate processes (if they are) share memory with the \n> server process or is all the communication through pipish \n> things. If I don't use their standard cartridges (image, \n> text, video) how easy is it to write my own. Any GIS \n> specific pro or con info w/Oracle? Am I asking the right\n> questions?\n\nI think you need to be an Oracle partner to write\nyour own cartridge. Then you can buy the needed stuff to write \nyour own. But cartridge development is, iirc, not disclosed to the public\nand is only available to a selected group of partners.\n\nBy the way, they are now all called Options, and thus I doubt that there\nare any previously called \"cartridges\" that are not directly from Oracle.\n\nAndreas\n",
"msg_date": "Thu, 12 Oct 2000 16:22:18 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: FW: oracle ate"
}
]
[
{
"msg_contents": "\n> TODO updated:\n> \n> * Prevent truncate on table with a referential integrity \n> trigger (RESTRICT)\n\nI think this was solved in current with a better approach\n(checks if referenced table is empty).\n\nAndreas\n",
"msg_date": "Thu, 12 Oct 2000 16:23:06 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Proposal: TRUNCATE TABLE table RESTRICT"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> \n> > TODO updated:\n> > \n> > * Prevent truncate on table with a referential integrity \n> > trigger (RESTRICT)\n> \n> I think this was solved in current with a better approach\n> (checks if referenced table is empty).\n\nRemoved from TODO.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 12 Oct 2000 11:43:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Proposal: TRUNCATE TABLE table RESTRICT"
}
]
[
{
"msg_contents": "> WAL would provide the framework to do something like that, but I still\n> say it'd be a bad idea. What you're describing is\n> irrevocable-once-it-starts DROP COLUMN; there is no way to \n> roll it back.\n> We're trying to get rid of statements that act that way, not add more.\n\nYes.\n\n> I am not convinced that a 2x penalty for DROP COLUMN is such a huge\n> problem that we should give up all the normal safety features of SQL\n> in order to avoid it. Seems to me that DROP COLUMN is only a \n> big issue during DB development, when you're usually working with \n> relatively small amounts of test data anyway.\n\nHere I don't agree: the statement can also be used for an application version\nupgrade. This is seen in SAP R/3 with tables > 30 GB.\n\nMy conclusion would be that we need both:\n1. a fast system-table-only solution with physical/logical column ids\n2. a tool that does the cleanup (e.g. vacuum)\n\nAndreas\n",
"msg_date": "Thu, 12 Oct 2000 16:23:35 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> My conclusion would be that we need both:\n> 1. a fast system table only solution with physical/logical column id\n> 2. a tool that does the cleanup (e.g. vacuum) \n\nBut the peak space usage during cleanup must still be 2X.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Oct 2000 10:38:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "On Thu, 12 Oct 2000, Tom Lane wrote:\n\n> Zeugswetter Andreas SB <[email protected]> writes:\n> > My conclusion would be that we need both:\n> > 1. a fast system table only solution with physical/logical column id\n> > 2. a tool that does the cleanup (e.g. vacuum) \n> \n> But the peak space usage during cleanup must still be 2X.\n\nIs there no way of doing this such that we have N tuple types in the\ntable? So that UPDATE/INSERTs are minus the extra column, while the old\nones just have that column marked as deleted? Maybe change the stored\nvalue of the deleted field as some internal value that, when vacuum, or\nany other operation, sees it, it 'ignores' that field? maybe something\nthat when you do an 'alter table drop', it effectively does an UPDATE on\nthat field to set it to the 'drop column' value?\n\n",
"msg_date": "Thu, 12 Oct 2000 15:06:43 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Thu, 12 Oct 2000, Tom Lane wrote:\n>> Zeugswetter Andreas SB <[email protected]> writes:\n>>>> My conclusion would be that we need both:\n>>>> 1. a fast system table only solution with physical/logical column id\n>>>> 2. a tool that does the cleanup (e.g. vacuum) \n>> \n>> But the peak space usage during cleanup must still be 2X.\n\n> Is there no way of doing this such that we have N tuple types in the\n> table? So that UPDATE/INSERTs are minus the extra column, while the old\n> ones just have that column marked as deleted?\n\nIf we bite the bullet to the extent of supporting a distinction between\nphysical and logical column numbers, then ISTM there's no strong need\nto do any of this other stuff at all. I'd expect that an inserted or\nupdated tuple would have a NULL in any physical column position that\ndoesn't have an equivalent logical column, so the space cost is minimal\n(zero, in fact, if there are any other NULLs in the tuple). Over time\nthe space occupied by deleted-column data would gradually go away as\ntuples got updated.\n\nI really don't see why we're expending so much discussion on ways to\nreformat all the tuples at once. It can't be done cheaply and I see\nno real reason to do it at all, so it seems like we have many\nmore-profitable problems to work on.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Oct 2000 14:55:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "On Thu, 12 Oct 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > On Thu, 12 Oct 2000, Tom Lane wrote:\n> >> Zeugswetter Andreas SB <[email protected]> writes:\n> >>>> My conclusion would be that we need both:\n> >>>> 1. a fast system table only solution with physical/logical column id\n> >>>> 2. a tool that does the cleanup (e.g. vacuum) \n> >> \n> >> But the peak space usage during cleanup must still be 2X.\n> \n> > Is there no way of doing this such that we have N tuple types in the\n> > table? So that UPDATE/INSERTs are minus the extra column, while the old\n> > ones just have that column marked as deleted?\n> \n> If we bite the bullet to the extent of supporting a distinction between\n> physical and logical column numbers, then ISTM there's no strong need\n> to do any of this other stuff at all. \n\nwhat does/would it take to implement this?\n\n\n",
"msg_date": "Thu, 12 Oct 2000 20:06:16 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Zeugswetter Andreas SB <[email protected]> writes:\n> > My conclusion would be that we need both:\n> > 1. a fast system table only solution with physical/logical column id\n> > 2. a tool that does the cleanup (e.g. vacuum)\n> \n> But the peak space usage during cleanup must still be 2X.\n\nPerhaps he means some kind of off-line cleanup tool that has only the \nrequirement of being able to continue from where it left off (or\ncrashed)?\n\nIt could be useful in some cases (like removing a column on Saturday\nfrom a\nterabyte-sized file that is used only on weekdays :)\n\n----------\nHannu\n",
"msg_date": "Fri, 13 Oct 2000 16:53:08 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: ALTER TABLE DROP COLUMN"
}
]
[
{
"msg_contents": "At 16:21 12/10/00 +0200, Zeugswetter Andreas SB wrote:\n>> \n>> Could this be added? I am willing to help with the coding.\n>\n>This is what Version 7.1 WAL is all about. \n>There might be some help wanted in one of the possible backup methods:\n>\t1. a pg_dumpall restore, and a subsequent restore of logs\n>\t2. a restore of a \"physical backup of db files\" + subsequent restore of logs\n>\n>I think Vadim has the 1st way in his works.\n\nMy guess is that there are some issues to resolve: I presume the WAL uses\nOID to identify rows, which will mean that BLOB restoration won't work\nbecause it changes the OID of the BLOB. Also, pg_dump will restore row OIDs\nfor table data, but not the OID for such things as sequences. Guessing\nagain, I'd say that sequence updates in the WAL will also be OID based.\n\nIt's definitely true that the WAL is an essential first step to proper\nbackup, but there's probably the need to write a backup utility as well.\nUnless of course Vadim has done that as well...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 13 Oct 2000 01:31:37 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: backup and restore"
}
]
[
{
"msg_contents": "\nSparc solaris 2.7 with postgres 7.0.2\n\nIt seems to be reproducable, the server crashes on us at a rate of about\nevery few hours.\n\nAny ideas?\n\nGNU gdb 4.17\nCopyright 1998 Free Software Foundation, Inc.\nGDB is free software, covered by the GNU General Public License, and you are\nwelcome to change it and/or distribute copies of it under certain conditions.\nType \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB. Type \"show warranty\" for details.\nThis GDB was configured as \"sparc-sun-solaris2.7\"...\n\nwarning: core file may not match specified executable file.\nCore was generated by `postmaster -i -N 128 -B 512'.\nProgram terminated with signal 10, Bus Error.\nReading symbols from /usr/lib/libgen.so.1...done.\nReading symbols from /usr/lib/libcrypt_i.so.1...done.\nReading symbols from /usr/lib/libnsl.so.1...done.\nReading symbols from /usr/lib/libsocket.so.1...done.\nReading symbols from /usr/lib/libdl.so.1...done.\nReading symbols from /usr/lib/libm.so.1...done.\nReading symbols from /usr/lib/libcurses.so.1...done.\nReading symbols from /usr/lib/libc.so.1...done.\nReading symbols from /usr/lib/libmp.so.2...done.\nReading symbols from /usr/platform/SUNW,Ultra-2/lib/libc_psr.so.1...done.\nReading symbols from /usr/lib/nss_files.so.1...done.\n#0 0xff145fa0 in _morecore ()\n(gdb) bt\n#0 0xff145fa0 in _morecore ()\n#1 0xff1457c8 in _malloc_unlocked ()\n#2 0xff1455bc in malloc ()\n#3 0x1dd170 in elog (lev=0, \n fmt=0x21a9b0 \"Message from PostgreSQL backend:\\n\\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n at elog.c:263\n#4 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n#5 <signal handler called>\n#6 0xff145c04 in realfree ()\n#7 0xff14581c in _malloc_unlocked ()\n#8 0xff1455bc in malloc ()\n#9 0x1dce4c in elog (lev=0, \n fmt=0x21a9b0 \"Message from PostgreSQL 
backend:\\n\\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n at elog.c:176\n#10 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n#11 <signal handler called>\n#12 0xff19814c in _libc_write ()\n#13 0x1dd210 in elog (lev=0, \n fmt=0x21a9b0 \"Message from PostgreSQL backend:\\n\\tThe Postmaster has \nd me that some other backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n at elog.c:312\n#14 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n#15 <signal handler called>\n#16 0xff19814c in _libc_write ()\n#17 0x1dd210 in elog (lev=0, \n fmt=0x21a9b0 \"Message from PostgreSQL backend:\\n\\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n at elog.c:312\n#18 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n#19 <signal handler called>\n#20 0x1dcf7c in elog (lev=0, \n fmt=0x21a9b0 \"Message from PostgreSQL backend:\\n\\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n at elog.c:205\n#21 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n#22 <signal handler called>\n#23 0xff19814c in _libc_write ()\n#24 0x1dd210 in elog (lev=0, \nd me that some other backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n at elog.c:312\n#25 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n#26 <signal handler called>\n#27 0xff19814c in _libc_write ()\n#28 0x1dd210 in elog (lev=0, \n fmt=0x21a9b0 \"Message from PostgreSQL backend:\\n\\tThe Postmaster has informed me that some other 
backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n at elog.c:312\n#29 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n#30 <signal handler called>\n#31 0xff17e1c0 in _doprnt ()\n#32 0xff181d0c in vsnprintf ()\n#33 0x1dd100 in elog (lev=0, \n fmt=0x21a9b0 \"Message from PostgreSQL backend:\\n\\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n at elog.c:249\n#34 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n#35 <signal handler called>\n#36 0xff19814c in _libc_write ()\n#37 0x1dd210 in elog (lev=0, \n fmt=0x21a9b0 \"Message from PostgreSQL backend:\\n\\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n at elog.c:312\n#38 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n#39 <signal handler called>\n#40 0xff19814c in _libc_write ()\n#41 0x1dd210 in elog (lev=0, \n fmt=0x21a9b0 \"Message from PostgreSQL backend:\\n\\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n at elog.c:312\n#42 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n#43 <signal handler called>\n#44 0xff177d34 in dcgettext_u ()\n#45 0xff177cc4 in dgettext ()\n#46 0x1dcd84 in elog (lev=0, \n fmt=0x21a9b0 \"Message from PostgreSQL backend:\\n\\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n at elog.c:159\n#47 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n#48 <signal handler called>\n#49 0xff19814c in _libc_write ()\n#50 0x1dd210 in 
elog (lev=0, \n fmt=0x21a9b0 \"Message from PostgreSQL backend:\\n\\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n at elog.c:312\n#51 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n#52 <signal handler called>\n#53 0xff136df0 in strlen ()\n#54 0x1dcddc in elog (lev=0, \n fmt=0x21a9b0 \"Message from PostgreSQL backend:\\n\\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n at elog.c:172\n#55 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n#56 <signal handler called>\n#57 0xff19814c in _libc_write ()\n#58 0x1dd210 in elog (lev=0, \n fmt=0x21a9b0 \"Message from PostgreSQL backend:\\n\\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n at elog.c:312\n#59 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n#60 <signal handler called>\n#61 0xff19814c in _libc_write ()\n#62 0x1dd210 in elog (lev=0, \n fmt=0x21a9b0 \"Message from PostgreSQL backend:\\n\\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n at elog.c:312\n#63 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n#64 <signal handler called>\n#65 0xff19814c in _libc_write ()\n#66 0x1dd210 in elog (lev=0, \n fmt=0x21a9b0 \"Message from PostgreSQL backend:\\n\\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n at elog.c:312\n#67 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n#68 <signal handler 
called>\n#69 0xff19814c in _libc_write ()\n#70 0x1dd210 in elog (lev=0, \n fmt=0x21a9b0 \"Message from PostgreSQL backend:\\n\\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n#71 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n#72 <signal handler called>\n#73 0xff19814c in _libc_write ()\n#74 0x1dd210 in elog (lev=0, \n fmt=0x21a9b0 \"Message from PostgreSQL backend:\\n\\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n at elog.c:312\n#75 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n#76 <signal handler called>\n#77 0xff19814c in _libc_write ()\n#78 0x1dd210 in elog (lev=0, \n fmt=0x21a9b0 \"Message from PostgreSQL backend:\\n\\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n at elog.c:312\n#79 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n#80 <signal handler called>\n#81 0xff195dd4 in _poll ()\n#82 0xff14e79c in select ()\n#83 0x14df58 in s_lock_sleep (spin=18) at s_lock.c:62\n#84 0x14dfa0 in s_lock (lock=0xff270011 \"�\", file=0x2197c8 \"spin.c\", line=127)\n at s_lock.c:76\n#85 0x154620 in SpinAcquire (lockid=0) at spin.c:127\n#86 0x149100 in ReadBufferWithBufferLock (reln=0x2ce4e8, blockNum=4323, \n bufferLockHeld=1 '\\001') at bufmgr.c:297\n#87 0x14a130 in ReleaseAndReadBuffer (buffer=360, relation=0x2ce4e8, \n blockNum=4323) at bufmgr.c:900\n#88 0x4d5c4 in heapgettup (relation=0x2ce4e8, tuple=0x2d7a60, dir=1, \n buffer=0x2d7a8c, snapshot=0x2d7648, nkeys=0, key=0x0) at heapam.c:488\n#89 0x4ee00 in heap_getnext (scandesc=0x2d7a48, backw=0) at heapam.c:973\n#90 0xc5c40 in SeqNext (node=0x2d6120) at nodeSeqscan.c:101\n#91 
0xbb674 in ExecScan (node=0x2d6120, accessMtd=0xc5afc <SeqNext>)\n at execScan.c:103\n#92 0xc5ccc in ExecSeqScan (node=0x2d6120) at nodeSeqscan.c:150\n#93 0xb7a3c in ExecProcNode (node=0x2d6120, parent=0x2d6120)\n at execProcnode.c:268\n#94 0xb589c in ExecutePlan (estate=0x2d7858, plan=0x2d6120, \n operation=CMD_SELECT, offsetTuples=0, numberTuples=0, \n direction=ForwardScanDirection, destfunc=0x2d7698) at execMain.c:1052\n#95 0xb47bc in ExecutorRun (queryDesc=0x2d7a30, estate=0x2d7858, feature=3, \n limoffset=0x0, limcount=0x0) at execMain.c:291\n#96 0x165bd8 in ProcessQueryDesc (queryDesc=0x2d7a30, limoffset=0x0, \n limcount=0x0) at pquery.c:310\n#97 0x165c90 in ProcessQuery (parsetree=0x2d5470, plan=0x2d6120, dest=Remote)\n at pquery.c:353\n#98 0x163650 in pg_exec_query_dest (\n query_string=0x26b3d8 \"SELECT campid, login, pass FROM ppc_campaigns WHERE login = 'xxx' AND pass = 'xxx'\", dest=Remote, aclOverride=0 '\\000')\n at postgres.c:663\n#99 0x163404 in pg_exec_query (\n query_string=0x26b3d8 \"SELECT campid, login, pass FROM ppc_campaigns WHERE login = 'xxx' AND pass = 'xxx'\") at postgres.c:562\n#100 0x1650ac in PostgresMain (argc=6, argv=0xffbef198, real_argc=6, \n real_argv=0xffbefd9c) at postgres.c:1590\n#101 0x1319cc in DoBackend (port=0x279360) at postmaster.c:2009\n#102 0x13117c in BackendStartup (port=0x279360) at postmaster.c:1776\n#103 0x12f6c4 in ServerLoop () at postmaster.c:1037\n#104 0x12ec34 in PostmasterMain (argc=6, argv=0xffbefd9c) at postmaster.c:725\n#105 0xd8abc in main (argc=6, argv=0xffbefd9c) at main.c:93\n\n\n-- \nMan is a rational animal who always loses his temper when he is called\nupon to act in accordance with the dictates of reason.\n -- Oscar Wilde\n",
"msg_date": "Thu, 12 Oct 2000 12:49:25 -0400",
"msg_from": "Dan Moschuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Core dump"
},
{
"msg_contents": "* Dan Moschuk <[email protected]> [001012 09:47] wrote:\n> \n> Sparc solaris 2.7 with postgres 7.0.2\n> \n> It seems to be reproducable, the server crashes on us at a rate of about\n> every few hours.\n> \n> Any ideas?\n> \n> GNU gdb 4.17\n> Copyright 1998 Free Software Foundation, Inc.\n\n[snip]\n\n> #78 0x1dd210 in elog (lev=0, \n> fmt=0x21a9b0 \"Message from PostgreSQL backend:\\n\\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\\n\\tI have rolled back the current transaction and am going \"...)\n> at elog.c:312\n> #79 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713\n> #80 <signal handler called>\n> #81 0xff195dd4 in _poll ()\n> #82 0xff14e79c in select ()\n> #83 0x14df58 in s_lock_sleep (spin=18) at s_lock.c:62\n> #84 0x14dfa0 in s_lock (lock=0xff270011 \"�\", file=0x2197c8 \"spin.c\", line=127)\n> at s_lock.c:76\n> #85 0x154620 in SpinAcquire (lockid=0) at spin.c:127\n> #86 0x149100 in ReadBufferWithBufferLock (reln=0x2ce4e8, blockNum=4323, \n> bufferLockHeld=1 '\\001') at bufmgr.c:297\n\n% uname -sr\nSunOS 5.7\n\nfrom sys/signal.h:\n\n#define SIGUSR1 16 /* user defined signal 1 */\n\nAre you sure you don't have any application running amok sending\nsignals to processes it shouldn't? Getting a superfluous signal\nseems out of place; this doesn't look like a crash or anything,\nbecause USR1 isn't delivered by the kernel afaik.\n\nAnd why are you using solaris? *smack*\n\nAnd why isn't the postmaster either blocking these signals or shutting\ndown cleanly on receipt of them?\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Thu, 12 Oct 2000 10:03:09 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Core dump"
},
{
"msg_contents": "\n| % uname -sr\n| SunOS 5.7\n| \n| from sys/signal.h:\n| \n| #define SIGUSR1 16 /* user defined signal 1 */\n| \n| Are you sure you don't have any application running amok sending\n| signals to processes it shouldn't? Getting a superfolous signal\n| seems out of place, this doesn't look like a crash or anything\n| because USR1 isn't delivered by the kernel afaik.\n\nAny of the applications that are running on that server do not use\nSIGUSR1. I haven't looked through the code yet, but I figure postgres\nwas sending the SIGUSR1.\n\n| And why are you using solaris? *smack*\n\nWell, because our main database server is a sparc, and _someone_ never\ngot around to finishing his sparc port. :-)\n\n-Dan\n-- \nMan is a rational animal who always loses his temper when he is called\nupon to act in accordance with the dictates of reason.\n -- Oscar Wilde\n",
"msg_date": "Thu, 12 Oct 2000 13:56:10 -0400",
"msg_from": "Dan Moschuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Core dump"
},
{
"msg_contents": "Dan Moschuk <[email protected]> writes:\n> Sparc solaris 2.7 with postgres 7.0.2\n> It seems to be reproducable, the server crashes on us at a rate of about\n> every few hours.\n\nThat's a very bizarre backtrace. Why the multiple levels of recursive\nentry to the quickdie() signal handler? I wonder if you aren't looking\nat some kind of Solaris bug --- perhaps it's not able to cope with a\nsignal handler turning around and issuing new kernel calls.\n\nThe core file you are looking at is probably *not* from the original\nfailure, whatever that is. The sequence is probably\n\n1. Some backend crashes for unknown reason, dumping core.\n\n2. Postmaster observes messy death of a child, decides that mass suicide\n followed by restart is called for. Postmaster sends SIGUSR1 to all\n remaining backends to make them commit hara-kiri.\n\n3. One or more other backends crash trying to obey postmaster's command.\n The corefile left for you to examine comes from whichever crashed\n last.\n\nSo there are at least two problems here, but we only have evidence of\nthe second one.\n\nSince the problem is fairly reproducible, I'd suggest you temporarily\ndike out the elog(NOTICE) call in quickdie() (in\nsrc/backend/tcop/postgres.c), which will probably allow the backends\nto honor SIGUSR1 without dumping core. Then you have a shot at seeing\nthe core from the original failure.\n\nAssuming that this works (ie, you find a core that's not got anything\nto do with quickdie()), I'd suggest an inquiry to Sun about whether\ntheir signal handler logic hasn't got a problem with write() issued\nfrom inside a signal handler. Meanwhile let us know what the new\nbacktrace shows.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Oct 2000 16:10:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Core dump "
},
{
"msg_contents": "\n| > Sparc solaris 2.7 with postgres 7.0.2\n| > It seems to be reproducable, the server crashes on us at a rate of about\n| > every few hours.\n| \n| That's a very bizarre backtrace. Why the multiple levels of recursive\n| entry to the quickdie() signal handler? I wonder if you aren't looking\n| at some kind of Solaris bug --- perhaps it's not able to cope with a\n| signal handler turning around and issuing new kernel calls.\n\nI'm not sure that is the issue, see below.\n\n| The core file you are looking at is probably *not* from the original\n| failure, whatever that is. The sequence is probably\n| \n| 1. Some backend crashes for unknown reason, dumping core.\n| \n| 2. Postmaster observes messy death of a child, decides that mass suicide\n| followed by restart is called for. Postmaster sends SIGUSR1 to all\n| remaining backends to make them commit hara-kiri.\n| \n| 3. One or more other backends crash trying to obey postmaster's command.\n| The corefile left for you to examine comes from whichever crashed\n| last.\n| \n| So there are at least two problems here, but we only have evidence of\n| the second one.\n| \n| Since the problem is fairly reproducible, I'd suggest you temporarily\n| dike out the elog(NOTICE) call in quickdie() (in\n| src/backend/tcop/postgres.c), which will probably allow the backends\n| to honor SIGUSR1 without dumping core. Then you have a shot at seeing\n| the core from the original failure.\n\nI will try this, however the database is currently running under light load.\nOnly under high load does postgres start to choke, and eventually die.\n\n| Assuming that this works (ie, you find a core that's not got anything\n| to do with quickdie()), I'd suggest an inquiry to Sun about whether\n| their signal handler logic hasn't got a problem with write() issued\n| from inside a signal handler. Meanwhile let us know what the new\n| backtrace shows.\n\nI wrote a quick test program to test this theory. 
Below is the code and the\noutput.\n\n#include <sys/types.h>\n#include <stdio.h>\n#include <unistd.h>\n#include <signal.h>\n\nstatic void moo (int);\n\nint\nmain (void)\n{\n signal(SIGUSR1, moo);\n raise(SIGUSR1);\n}\n\nstatic void\nmoo (cow)\n int cow;\n{\n printf(\"Getting ready for write()\\n\");\n write(STDOUT_FILENO, \"Hello!\\n\", 7);\n printf(\"Done.\\n\");\n}\n\neclipse% ./x\nGetting ready for write()\nHello!\nDone.\neclipse% \n\nIt would appear from that very rough test program that solaris doesn't mind\nsystem calls from within a signal handler.\n\n-- \nMan is a rational animal who always loses his temper when he is called\nupon to act in accordance with the dictates of reason.\n -- Oscar Wilde\n",
"msg_date": "Thu, 12 Oct 2000 16:47:53 -0400",
"msg_from": "Dan Moschuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Core dump"
},
{
"msg_contents": "Dan Moschuk <[email protected]> writes:\n> It would appear from that very rough test program that solaris doesn't mind\n> system calls from within a signal handler.\n\nStill, it's a mighty peculiar backtrace.\n\nAfter looking at postmaster.c, I see that the postmaster will issue\nSIGUSR1 to all remaining backends *each* time it sees a child exit\nwith nonzero status. And it just so happens that quickdie() chooses\nto exit with exit(1) not exit(0). So a new theory is\n\n1. Some backend crashes.\n\n2. Postmaster issues SIGUSR1 to all remaining backends.\n\n3. As each backend gives up the ghost, postmaster gets another wait()\n response and issues another SIGUSR1 to the ones that are left.\n\n4. Last remaining backend has been SIGUSR1'd enough times to overrun\n stack memory, leading to coredump.\n\nI'm not too enamored of this theory because it doesn't explain the\nperfect repeatability shown in your backtrace. It seems unlikely that\neach recursive quickdie() call would get just as far as elog's write()\nand no farther before the postmaster is able to issue another signal.\nStill, it's a possibility.\n\nWe should probably tweak the postmaster to be less enthusiastic about\nsignaling its children repeatedly.\n\nMeanwhile, have you tried looking in the postmaster log? The postmaster\nshould have logged at least the exit status for the first backend to\nfail.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Oct 2000 18:14:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Core dump "
},
{
"msg_contents": "\n| Still, it's a mighty peculiar backtrace.\n\nIndeed.\n\n| After looking at postmaster.c, I see that the postmaster will issue\n| SIGUSR1 to all remaining backends *each* time it sees a child exit\n| with nonzero status. And it just so happens that quickdie() chooses\n| to exit with exit(1) not exit(0). So a new theory is\n| \n| 1. Some backend crashes.\n| \n| 2. Postmaster issues SIGUSR1 to all remaining backends.\n| \n| 3. As each backend gives up the ghost, postmaster gets another wait()\n| response and issues another SIGUSR1 to the ones that are left.\n| \n| 4. Last remaining backend has been SIGUSR1'd enough times to overrun\n| stack memory, leading to coredump.\n\nThis theory might make a little more sense with the explanation below.\n\n| I'm not too enamored of this theory because it doesn't explain the\n| perfect repeatability shown in your backtrace. It seems unlikely that\n| each recursive quickdie() call would get just as far as elog's write()\n| and no farther before the postmaster is able to issue another signal.\n| Still, it's a possibility.\n\nWell, when this happens the machine is _heavily_ loaded. It could be that\nthe write()s are just taking longer than they should, giving it enough time\nto be signaled by another SIGUSR1. It may also explain why the SIGUSR1s\nare being sent so much, as the heavily loaded machine tends not to clean up\nits children as fast as it is expected.\n\n| We should probably tweak the postmaster to be less enthusiastic about\n| signaling its children repeatedly.\n\nPerhaps have postgres ignore SIGUSR1 after it has already received one?\n\nRegards,\n-Dan\n-- \nMan is a rational animal who always loses his temper when he is called\nupon to act in accordance with the dictates of reason.\n -- Oscar Wilde\n",
"msg_date": "Thu, 12 Oct 2000 18:24:42 -0400",
"msg_from": "Dan Moschuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Core dump"
},
{
"msg_contents": "Dan Moschuk <[email protected]> writes:\n> | We should probably tweak the postmaster to be less enthusiastic about\n> | signaling its children repeatedly.\n\n> Perhaps have postgres ignore SIGUSR1 after it has already received one?\n\nNow that you mention it, it tries to do exactly that:\n\nvoid\nquickdie(SIGNAL_ARGS)\n{\n\tPG_SETMASK(&BlockSig);\n\telog(NOTICE, \"Message from PostgreSQL backend:\"\n\t...\n\nBlockSig includes SIGUSR1. So why is the quickdie() routine entered\nagain? I'm back to suspecting something funny in Solaris' signal\nhandling...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Oct 2000 18:31:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Core dump "
},
{
"msg_contents": "I said:\n> BlockSig includes SIGUSR1.\n\nOh, wait, I take that back. It's initialized that way, but then\npostmaster.c removes SIGUSR1 from the set.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Oct 2000 18:35:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Core dump "
},
{
"msg_contents": "\n| I said:\n| > BlockSig includes SIGUSR1.\n| \n| Oh, wait, I take that back. It's initialized that way, but then\n| postmaster.c removes SIGUSR1 from the set.\n| \n| \t\t\tregards, tom lane\n\nSo, back to my initial question, why not make each postmaster SIG_IGN \nSIGUSR1 after it receives one?\n\n-Dan\n-- \nMan is a rational animal who always loses his temper when he is called\nupon to act in accordance with the dictates of reason.\n -- Oscar Wilde\n",
"msg_date": "Thu, 12 Oct 2000 19:13:41 -0400",
"msg_from": "Dan Moschuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Core dump"
}
]
|
[
{
"msg_contents": "I'm just committing the first changes to ECPGs parser so variables are\nallowed wherever possible instead of constants. I'm pretty sure this breaks\nstuff at one point or the other. Also I have not yet fixed all problems I\nknow about. So please bear with me.\n\nI'd like everyone who has embedded SQL sources to test them with this\nrelease and send me all bugs they can find, so I can create a roadmap for\n7.1. I want to get this as bug free as possible.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Thu, 12 Oct 2000 20:16:59 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "ECPG changes"
}
]
|
[
{
"msg_contents": "> > > How far are we from seeing the version 7.1 out?\n> > \n> > beta starts ~Nov 1st, release in January ...\n> \n> Just wondering, WAL is going to be integrated when, and that gives how\n\nHopefully, next week.\n\n> much time to test it before releasing the beta?\n\nVadim\n",
"msg_date": "Thu, 12 Oct 2000 16:27:48 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Re: postgresql 7.1"
}
]
|
[
{
"msg_contents": "> > Could this be added? I am willing to help with the coding.\n> \n> This is what Version 7.1 WAL is all about. \n> There might be some help wanted in one of the possible backup methods:\n> \t1. a pg_dumpall restore, and a subsequent restore of logs\n> \t2. a restore of a \"physical backup of db files\" + \n> subsequent restore of logs\n> \n> I think Vadim has the 1st way in his works.\n\nNo, 2nd. WAL reflects layout of tuples in data files, so I don't see\nhow pg_dump output could be used.\nI'll really appreciate if someone will help with this issue ... after\nalpha testing will start next week.\n\nVadim\n",
"msg_date": "Thu, 12 Oct 2000 16:34:00 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: backup and restore"
},
{
"msg_contents": "On Thu, 12 Oct 2000, Mikheev, Vadim wrote:\n\n> > > Could this be added? I am willing to help with the coding.\n> > \n> > This is what Version 7.1 WAL is all about. \n> > There might be some help wanted in one of the possible backup methods:\n> > \t1. a pg_dumpall restore, and a subsequent restore of logs\n> > \t2. a restore of a \"physical backup of db files\" + \n> > subsequent restore of logs\n> > \n> > I think Vadim has the 1st way in his works.\n> \n> No, 2nd. WAL reflects layout of tuples in data files, so I don't see\n> how pg_dump output could be used.\n> I'll really appreciate if someone will help with this issue ... after\n> alpha testing will start next week.\n\nWhere is this code? How can we get it?\n\n\n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMart�n Marqu�s\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n\n",
"msg_date": "Fri, 13 Oct 2000 08:02:50 -0300 (ART)",
"msg_from": "\"Martin A. Marques\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: backup and restore"
},
{
"msg_contents": ">> No, 2nd. WAL reflects layout of tuples in data files, so I don't see\n>> how pg_dump output could be used.\n>> I'll really appreciate if someone will help with this issue ... after\n>> alpha testing will start next week.\n>\n> Where is this code? How can we get it?\n\nCode is in CVS.\nPhilip Warner already contacted me that he want to do WAL based\nbackup/restore and we'll proceed in a few days. If someone want to\nhelp in other areas, WAL todo follows in separate message.\n\nVadim\n\n\n\n",
"msg_date": "Sat, 14 Oct 2000 13:22:50 -0700",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: backup and restore"
}
]
|
[
{
"msg_contents": "\nThis time in the client library...\n\n(gdb) bt\n#0 0xff215dd0 in _poll ()\n#1 0xff1ce79c in select ()\n#2 0xff08b164 in select ()\n#3 0xff338ec0 in PQgetResult (conn=0xa89c8) at fe-exec.c:1126\n#4 0xff339168 in PQexec (conn=0xa89c8,\n query=0xfe107b98 \"UPDATE url SET total_os='4963.98|135037.3153|4731.63|23.2|0.0|211.2|720.10|', total_browser='125815.2813|19371.503|54.4|3.0|414.4|487.3|6.0|0.0|', total_country='0.0|0.0|0.0|0.0|0.0|0.0|0.0|0.0|0.0|2.\"...)\n at fe-exec.c:1231\n#5 0x19778 in c2_pgdb_update_r (\n query=0xfe107b98 \"UPDATE url SET total_os='4963.98|135037.3153|4731.63|23.2|0.0|211.2|720.10|', total_browser='125815.2813|19371.503|54.4|3.0|414.4|487.3|6.0|0.0|', total_country='0.0|0.0|0.0|0.0|0.0|0.0|0.0|0.0|0.0|2.\"...,\n pgfd=0xfe109c0c) at pgdb.c:161\n#6 0x145f0 in syncUrlSQL (pgfd=0xfe109c0c, rec=0xe835938) at db.c:251\n#7 0x129e8 in command_server (sockmyballs=0x4) at command_server.c:181\n\n 67 Thread 44 (LWP 33) 0xff215dd0 in _poll ()\n 66 Thread 40 (LWP 30) 0xff215dd0 in _poll ()\n 65 Thread 39 (LWP 29) 0xff215dd0 in _poll ()\n 64 Thread 34 (LWP 28) 0xff215dd0 in _poll ()\n 63 Thread 30 (LWP 27) 0xff217bb4 in __lwp_sema_wait ()\n 62 Thread 27 (LWP 24) 0xff215dd0 in _poll ()\n 61 Thread 26 (LWP 25) 0xff215dd0 in _poll ()\n 60 Thread 25 (LWP 23) 0xff215dd0 in _poll ()\n 59 Thread 24 (LWP 22) 0xff215dd0 in _poll ()\n 58 Thread 23 (LWP 21) 0xff215dd0 in _poll ()\n 57 Thread 22 (LWP 20) 0xff215dd0 in _poll ()\n 56 Thread 21 (LWP 18) 0xff215dd0 in _poll ()\n 55 Thread 20 (LWP 19) 0xff215dd0 in _poll ()\n 54 Thread 19 (LWP 17) 0xff215dd0 in _poll ()\n 53 Thread 18 (LWP 16) 0xff215dd0 in _poll ()\n 52 Thread 17 (LWP 15) 0xff215dd0 in _poll ()\n 51 Thread 16 (LWP 14) 0xff215dd0 in _poll ()\n 50 Thread 15 (LWP 13) 0xff215dd0 in _poll ()\n 49 Thread 14 (LWP 12) 0xff215dd0 in _poll ()\n 48 Thread 13 (LWP 11) 0xff215dd0 in _poll ()\n 47 Thread 12 (LWP 9) 0xff215dd0 in _poll ()\n 46 Thread 11 (LWP 10) 0xff21488c in _so_accept ()\n 45 Thread 
10 (LWP 7) 0xff21488c in _so_accept ()\n 44 Thread 9 (LWP 8) 0xff21488c in _so_accept ()\n 43 Thread 8 (LWP 4) 0xff21488c in _so_accept ()\n 42 Thread 7 (LWP 6) 0xff21488c in _so_accept ()\n 41 Thread 6 (LWP 5) 0xff21488c in _so_accept ()\n 40 Thread 5 (LWP 3) 0xff217050 in _read ()\n 39 Thread 4 (LWP 1) 0xff217050 in _read ()\n 38 Thread 3 0xff217bb4 in __lwp_sema_wait ()\n 37 Thread 2 (LWP 2) 0xff217584 in _signotifywait ()\n 36 Thread 1 0xff217bb4 in __lwp_sema_wait ()\n 35 Thread 28 0xff217bb4 in __lwp_sema_wait ()\n 34 LWP 35 0xff217bb4 in __lwp_sema_wait ()\n 33 LWP 34 0xff21515c in door_restart ()\n 32 LWP 33 0xff215dd0 in _poll ()\n 31 LWP 31 0xff217bb4 in __lwp_sema_wait ()\n 30 LWP 30 0xff215dd0 in _poll ()\n 29 LWP 29 0xff215dd0 in _poll ()\n 28 LWP 28 0xff215dd0 in _poll ()\n 27 LWP 27 0xff217bb4 in __lwp_sema_wait ()\n 26 LWP 26 0xff217bb4 in __lwp_sema_wait ()\n 25 LWP 25 0xff215dd0 in _poll ()\n 24 LWP 24 0xff215dd0 in _poll ()\n 23 LWP 23 0xff215dd0 in _poll ()\n 22 LWP 22 0xff215dd0 in _poll ()\n 21 LWP 21 0xff215dd0 in _poll ()\n 20 LWP 20 0xff215dd0 in _poll ()\n 19 LWP 19 0xff215dd0 in _poll ()\n 18 LWP 18 0xff215dd0 in _poll ()\n 17 LWP 17 0xff215dd0 in _poll ()\n 16 LWP 16 0xff215dd0 in _poll ()\n 15 LWP 15 0xff215dd0 in _poll ()\n 14 LWP 14 0xff215dd0 in _poll ()\n 13 LWP 13 0xff215dd0 in _poll ()\n 12 LWP 12 0xff215dd0 in _poll ()\n* 11 LWP 11 0xff215dd0 in _poll ()\n 10 LWP 10 0xff21488c in _so_accept ()\n 9 LWP 9 0xff215dd0 in _poll ()\n 8 LWP 8 0xff21488c in _so_accept ()\n 7 LWP 7 0xff21488c in _so_accept ()\n 6 LWP 6 0xff21488c in _so_accept ()\n 5 LWP 5 0xff21488c in _so_accept ()\n 4 LWP 4 0xff21488c in _so_accept ()\n 3 LWP 3 0xff217050 in _read ()\n 2 LWP 2 0xff217584 in _signotifywait ()\n 1 LWP 1 0xff217050 in _read ()\n\nPretty much all my threads are stuck waiting for data. :(\n\n-- \nMan is a rational animal who always loses his temper when he is called\nupon to act in accordance with the dictates of reason.\n -- Oscar Wilde\n",
"msg_date": "Thu, 12 Oct 2000 22:13:42 -0400",
"msg_from": "Dan Moschuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "More postgres woes."
}
]
|
[
{
"msg_contents": "There is an interesting article about the FreeBSD core team setup:\n\n\thttp://www.daemonnews.org/200010/dadvocate.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 12 Oct 2000 23:02:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "FreeBSD core team history"
}
]
|
[
{
"msg_contents": "I notced that COPY FROM does not invoke the length coercion function\nbefore calling heap_insert(). This leads sometimes bad things such as\nincorrectly truncated mutibyte strings. My idea is finding an\nappropreate function like currently\ncoerce_type_typmod(parser/parse_coerce.c) does, and calling it before\nheap_insert()\n\nComments?\n--\nTatsuo Ishii\n\n",
"msg_date": "Fri, 13 Oct 2000 15:10:16 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "copy and length coercion"
}
]
|
[
{
"msg_contents": "\n> > My conclusion would be that we need both:\n> > 1. a fast system table only solution with physical/logical column id\n> > 2. a tool that does the cleanup (e.g. vacuum) \n> \n> But the peak space usage during cleanup must still be 2X.\n\nThe difference for a cleanup would be, that it does not need to \nbe rolled back as a whole (like current vacuum).\nA cleanup could be partly done, and resumed later, thus a sofisticated \ncleanup could avoid 2X.\n\nAndreas\n",
"msg_date": "Fri, 13 Oct 2000 09:42:10 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: ALTER TABLE DROP COLUMN "
}
]
|
[
{
"msg_contents": "\n> > > He does ask a legitimate question though. If you are \n> going to have a\n> > > LIMIT feature (which of course is not pure SQL), there \n> seems no reason\n> > > you shouldn't be able to insert the result into a table.\n> > \n> > \n> \n> This is an interesting idea. We don't allow ORDER BY in \n> INSERT INTO ...\n> SELECT because it doesn't make any sense, but it does make sense if\n> LIMIT is used:\n\nAn \"order by\" also makes sense if you want to create a presorted table\nfor faster access. I don't see why we should disallow it.\n\nAndreas\n",
"msg_date": "Fri, 13 Oct 2000 09:53:20 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Inserting a select statement result into another ta\n\tble"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> \n> > > > He does ask a legitimate question though. If you are \n> > going to have a\n> > > > LIMIT feature (which of course is not pure SQL), there \n> > seems no reason\n> > > > you shouldn't be able to insert the result into a table.\n> > > \n> > > \n> > \n> > This is an interesting idea. We don't allow ORDER BY in \n> > INSERT INTO ...\n> > SELECT because it doesn't make any sense, but it does make sense if\n> > LIMIT is used:\n> \n> An \"order by\" also makes sense if you want to create a presorted table\n> for faster access. I don't see why we should disallow it.\n\nLike CLUSTER. I see.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 13 Oct 2000 05:07:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Inserting a select statement result into another ta\n ble"
},
{
"msg_contents": "\nWith how we do things right now, does it actually gain us anything\nto have a presorted table? Do we know not to do a seek on an index scan\nif we're already at the right location in the heap file? We can't assume\nthe table is sorted (unless it hasn't been modified), so it's not like we\ncan sequence scan and stop when the bounds are met. If we don't do the\nseek though, this could definately be good for mostly static data since\nthat might allow us to mostly not do seeks on normal conditions.\n\nOn Fri, 13 Oct 2000, Zeugswetter Andreas SB wrote:\n\n> \n> > > > He does ask a legitimate question though. If you are \n> > going to have a\n> > > > LIMIT feature (which of course is not pure SQL), there \n> > seems no reason\n> > > > you shouldn't be able to insert the result into a table.\n> > > \n> > > \n> > \n> > This is an interesting idea. We don't allow ORDER BY in \n> > INSERT INTO ...\n> > SELECT because it doesn't make any sense, but it does make sense if\n> > LIMIT is used:\n> \n> An \"order by\" also makes sense if you want to create a presorted table\n> for faster access. I don't see why we should disallow it.\n> \n> Andreas\n> \n\n",
"msg_date": "Fri, 13 Oct 2000 09:43:42 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Inserting a select statement result into another\n ta ble"
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n>> This is an interesting idea. We don't allow ORDER BY in \n>> INSERT INTO ...\n>> SELECT because it doesn't make any sense, but it does make sense if\n>> LIMIT is used:\n\n> An \"order by\" also makes sense if you want to create a presorted table\n> for faster access. I don't see why we should disallow it.\n\nIn current sources:\n\nregression=# insert into int4_tbl select * from int4_tbl order by f1;\nINSERT 0 5\nregression=# select * from int4_tbl;\n f1\n-------------\n 0\n 123456\n -123456\n 2147483647\n -2147483647\n -2147483647 <<= insertion starts here\n -123456\n 0\n 123456\n 2147483647\n(10 rows)\n\nLIMIT won't work without some further code-rejiggering, but I think\nit should be made to work eventually.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Oct 2000 00:22:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Inserting a select statement result into another ta ble "
}
]
|
[
{
"msg_contents": "> we bite the bullet to the extent of supporting a distinction between\n> physical and logical column numbers, then ISTM there's no strong need\n> to do any of this other stuff at all. I'd expect that an inserted or\n> updated tuple would have a NULL in any physical column position that\n> doesn't have an equivalent logical column, so the space cost \n> is minimal\n> (zero, in fact, if there are any other NULLs in the tuple). Over time\n> the space occupied by deleted-column data would gradually go away as\n> tuples got updated.\n\nThis said, I think Hiroshi's patch seems a perfect starting point, no ?\n\nAndreas\n",
"msg_date": "Fri, 13 Oct 2000 10:06:03 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> > we bite the bullet to the extent of supporting a distinction between\n> > physical and logical column numbers, then ISTM there's no strong need\n> > to do any of this other stuff at all. I'd expect that an inserted or\n> > updated tuple would have a NULL in any physical column position that\n> > doesn't have an equivalent logical column, so the space cost \n> > is minimal\n> > (zero, in fact, if there are any other NULLs in the tuple). Over time\n> > the space occupied by deleted-column data would gradually go away as\n> > tuples got updated.\n> \n> This said, I think Hiroshi's patch seems a perfect starting point, no ?\n\nHaving phantom columns adds additional complexity to the system overall.\nWe have to decide we really want it before making things more complex\nthan they already are.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 13 Oct 2000 05:07:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "On Fri, 13 Oct 2000, Bruce Momjian wrote:\n\n> [ Charset ISO-8859-1 unsupported, converting... ]\n> > > we bite the bullet to the extent of supporting a distinction between\n> > > physical and logical column numbers, then ISTM there's no strong need\n> > > to do any of this other stuff at all. I'd expect that an inserted or\n> > > updated tuple would have a NULL in any physical column position that\n> > > doesn't have an equivalent logical column, so the space cost \n> > > is minimal\n> > > (zero, in fact, if there are any other NULLs in the tuple). Over time\n> > > the space occupied by deleted-column data would gradually go away as\n> > > tuples got updated.\n> > \n> > This said, I think Hiroshi's patch seems a perfect starting point, no ?\n> \n> Having phantom columns adds additional complexity to the system overall.\n> We have to decide we really want it before making things more complex\n> than they already are.\n\nMy feel from Tom's email, about changing the \"structure\" of how a column\nis defined, seems to be that he thinks *that* will simplify things, not\nmake them more complex, but I may be reading things wrong.\n\nHiroshi's patch would make for a good starting point by bringing in the\nability to do the DROP COLUMN feature, as I understand, without the\nrollback capability, with the changes that Tom is proposing bringing it to\na 'rollbackable' stage ...\n\nAgain, maybe I am misunderstanding Tom's comments, but the whole column\nissue itself sounded like something he wanted to see happen anyway ...\n\n",
"msg_date": "Fri, 13 Oct 2000 08:06:05 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> This said, I think Hiroshi's patch seems a perfect starting point, no ?\n\n> Having phantom columns adds additional complexity to the system overall.\n> We have to decide we really want it before making things more complex\n> than they already are.\n\nI think we do, because it solves more than just the ALTER DROP COLUMN\nproblem: it cleans up other sore spots too. Like ALTER TABLE ADD COLUMN\nin a table with child tables.\n\nOf course, it depends on just how ugly and intrusive the code changes\nare to make physical and logical columns distinct. I'd like to think\nthat some fairly limited changes in and around heap_getattr would do\nmost of the trick. If we need something as messy as the first-cut\nDROP_COLUMN_HACK then I'll look for another way...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Oct 2000 21:15:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> >> This said, I think Hiroshi's patch seems a perfect starting point, no ?\n>\n> > Having phantom columns adds additional complexity to the system overall.\n> > We have to decide we really want it before making things more complex\n> > than they already are.\n>\n> I think we do, because it solves more than just the ALTER DROP COLUMN\n> problem: it cleans up other sore spots too. Like ALTER TABLE ADD COLUMN\n> in a table with child tables.\n>\n> Of course, it depends on just how ugly and intrusive the code changes\n> are to make physical and logical columns distinct. I'd like to think\n> that some fairly limited changes in and around heap_getattr would do\n> most of the trick. If we need something as messy as the first-cut\n> DROP_COLUMN_HACK then I'll look for another way...\n>\n\nHmm,the implementation using physical and logical attribute numbers\nwould be much more complicated than first-cut DROP_COLUMN_HACK.\nThere's no simpler way than first-cut DROP_COLUMN_HACK.\nI see no progress in 2x DROP COLUMN implementation.\n\nHow about giving up DROP COLUMN forever ?\n\nRegards.\n\nHiroshi Inoue\n\n\n",
"msg_date": "Mon, 16 Oct 2000 16:10:35 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: ALTER TABLE DROP COLUMN"
}
]
|
[
{
"msg_contents": "\n> Hiroshi's patch would make for a good starting point by bringing in the\n> ability to do the DROP COLUMN feature, as I understand, without the\n> rollback capability,\n\nNo Hiroshi's patch is rollback enabled, simply because all it does is change \nsome system tables. It only does not free space that is used by \"old\" phantom \ncolumns. This cleanup would need extra work, but for now, I guess it would be fine\nto simply say that if you want to regain the space create a new table and move the \ndata.\n\nAndreas\n",
"msg_date": "Fri, 13 Oct 2000 13:22:14 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "On Fri, 13 Oct 2000, Zeugswetter Andreas SB wrote:\n\n> \n> > Hiroshi's patch would make for a good starting point by bringing in the\n> > ability to do the DROP COLUMN feature, as I understand, without the\n> > rollback capability,\n> \n> No Hiroshi's patch is rollback enabled, simply because all it does is\n> change some system tables. It only does not free space that is used by\n> \"old\" phantom columns. This cleanup would need extra work, but for\n> now, I guess it would be fine to simply say that if you want to regain\n> the space create a new table and move the data.\n\nokay, but, again based on my impression of what Tom has stated, and\nprevious conversations on this topic, the key problem is what happens if I\ndrop a column and a later date decide add a new column of the same name,\nwhat happens?\n\nI *believe* its that condition that Tom's thought about physical vs\nlogical column names was about ... Tom?\n\n\n",
"msg_date": "Fri, 13 Oct 2000 18:37:30 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> okay, but, again based on my impression of what Tom has stated, and\n> previous conversations on this topic, the key problem is what happens if I\n> drop a column and a later date decide add a new column of the same name,\n> what happens?\n\nI'm not very worried about that --- presumably the old column is either\nmarked as dead or gone entirely from pg_attribute, so why should there\nbe any confusion?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Oct 2000 22:40:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "On Fri, 13 Oct 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > okay, but, again based on my impression of what Tom has stated, and\n> > previous conversations on this topic, the key problem is what happens if I\n> > drop a column and a later date decide add a new column of the same name,\n> > what happens?\n> \n> I'm not very worried about that --- presumably the old column is either\n> marked as dead or gone entirely from pg_attribute, so why should there\n> be any confusion?\n\nright, but, if you add a column after it was previuosly marked dead, does\nit then create a second column in pg_attribute with that same\nname? *raised eyebrow* \n\n\n",
"msg_date": "Fri, 13 Oct 2000 23:50:56 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: ALTER TABLE DROP COLUMN "
}
]
|
[
{
"msg_contents": "> > > > The project name on SourceForge is \"Python Interface to PostgreSQL\".\n> > > \n> > > How about PIPgSQL or piPgSQL?\n> > \n> > Perhaps Pi2PgSQL, or PySQL_ba\n> or PyPgSQL?\n> Do we get a reward if you choose our name? ;)\n\nNo...but if this thread doesn't die soon, Marc will have to create a new\nmailing list for it.\n\nJoel\n",
"msg_date": "Fri, 13 Oct 2000 08:33:53 -0400",
"msg_from": "\"Clark, Joel\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [INTERFACES] Announcing PgSQL - a Python DB-API 2.0 compliant\n\tinterface to PostgreSQL"
}
]
|
[
{
"msg_contents": "\nRunning postgres in -d 2 mode is a little frustrating on a server that \nprocesses as little as 10 queries a second. Can we not add getpid() or\nsomething to that code so that we might track which output belongs to\nwhich server a little easier?\n\nThanks,\n-Dan\n\n-- \nMan is a rational animal who always loses his temper when he is called\nupon to act in accordance with the dictates of reason.\n -- Oscar Wilde\n",
"msg_date": "Fri, 13 Oct 2000 11:05:02 -0400",
"msg_from": "Dan Moschuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "-d 2 frustration"
},
{
"msg_contents": "Dan Moschuk writes:\n\n> Running postgres in -d 2 mode is a little frustrating on a server that \n> processes as little as 10 queries a second. Can we not add getpid() or\n> something to that code so that we might track which output belongs to\n> which server a little easier?\n\nIf you're running 7.1-to-be, put \"log_pid = on\" in your postgresql.conf. \nFor earlier versions, look for ELOG_TIMESTAMPS in src/include/config.h and\ncheck out how to turn it on with the pg_options file. (It's in the\ndocumentation somewhere.)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 13 Oct 2000 21:14:57 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: -d 2 frustration"
}
]
|
[
{
"msg_contents": "\nI'm trying to track down a source of great slowdown on our database, and\nI seem to have become stuck.\n\nIf I issue the command...\n\nc2net=> UPDATE url SET last_hit = 971456105 WHERE memid = 1;\n\nThis is all I see of it in my -d 2 window log.\n\nquery: UPDATE url SET last_hit = 971456105 WHERE memid = 1;\nProcessQuery\n\nThe query actually never completes. Eventually getting bored, I ^C the\nclient, and receive the message..\n\nc2net=> UPDATE url SET last_hit = 971456105 WHERE memid = 1;\n^CCancel request sent\nERROR: Query cancel requested while waiting lock\n\nIt seems to be stuck, as no longer how long I wait for this to become \nunlocked, it never wants to become free. \n\nSuggestions on where to go from here?\n\nThanks,\n-Dan\n-- \nMan is a rational animal who always loses his temper when he is called\nupon to act in accordance with the dictates of reason.\n -- Oscar Wilde\n",
"msg_date": "Fri, 13 Oct 2000 13:01:29 -0400",
"msg_from": "Dan Moschuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Odd behavior on update?"
}
]
|
[
{
"msg_contents": "At 04:23 PM 10/12/00 +0200, Zeugswetter Andreas SB wrote:\n\n>My conclusion would be that we need both:\n>1. a fast system table only solution with physical/logical column id\n>2. a tool that does the cleanup (e.g. vacuum) \n\nOracle provides both styles of \"drop column\" - the \"hide the column's\ndata and make it logically disappear\" style, and the \"grind through\nand delete all the data as well as make the column disappear from\nview\". So there's evidence of a need for both styles.\n\nIf you choose the \"hide the data\" style, I don't know if you can later\nrecover that space.\n\nHowever, despite the above I think a 2x \"grind through and remove the\ndata\" DROP COLUMN would be a welcome first addition, and would meet the\nneeds of a very high percentage of the current user base. A future\noption to just hide the data would most likely be welcome, too.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 13 Oct 2000 13:14:03 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "> -----Original Message-----\n> From: Don Baccus [mailto:[email protected]]\n> \n> At 04:23 PM 10/12/00 +0200, Zeugswetter Andreas SB wrote:\n> \n> >My conclusion would be that we need both:\n> >1. a fast system table only solution with physical/logical column id\n> >2. a tool that does the cleanup (e.g. vacuum) \n> \n> Oracle provides both styles of \"drop column\" - the \"hide the column's\n> data and make it logically disappear\" style, and the \"grind through\n> and delete all the data as well as make the column disappear from\n> view\". So there's evidence of a need for both styles.\n> \n> If you choose the \"hide the data\" style, I don't know if you can later\n> recover that space.\n> \n> However, despite the above I think a 2x \"grind through and remove the\n> data\" DROP COLUMN would be a welcome first addition, and would meet the\n> needs of a very high percentage of the current user base.\n\nThis style of \"DROP COLUMN\" would change the attribute\nnumbers whose positons are after the dropped column.\nUnfortunately we have no mechanism to invalidate/remove\nobjects(or prepared plans) which uses such attribute numbers.\nAnd I've seen no proposal/discussion to solve this problem\nfor DROP COLUMN feature. We wound't be able to prevent\nPostgreSQL from doing the wrong thing silently. \n\nWhen I used Oracle,I saw neither option of DROP COLUMN\nfeature. It seems to tell us that the implementation isn't \nthat easy. It may not be a bad choise to give up DROP\nCOLUMN feature forever.\n\n> A future\n> option to just hide the data would most likely be welcome, too.\n>\n\nMy trial implementation using physical/logical attribute numbers\nisn't so clean as I expected. I'm inclined to restrict my change to\nfix the TODO\n* ALTER TABLE ADD COLUMN to inherited table put column in wrong place\nthough it would also introduce a backward compatibility.\nI could live without DROP COLUMN feature though I couldn't\nlive without ADD COLUMN feature.\n\nComments ?\n\nRegards.\n\nHiroshi Inoue\n",
"msg_date": "Sun, 15 Oct 2000 22:56:12 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: AW: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "> My trial implementation using physical/logical attribute numbers\n> isn't so clean as I expected. I'm inclined to restrict my change to\n> fix the TODO\n> * ALTER TABLE ADD COLUMN to inherited table put column in wrong place\n> though it would also introduce a backward compatibility.\n> I could live without DROP COLUMN feature though I couldn't\n> live without ADD COLUMN feature.\n\nWe have a DROP COLUMN option in the FAQ, so I don't see a rush there. \nSounds like we need your fix for add column with inheritance, but I\nsuppose the 2x fix for DROP COLUMN could be used to add columns too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Oct 2000 10:12:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "\n Inoue san,\n\n> This style of \"DROP COLUMN\" would change the attribute\n> numbers whose positons are after the dropped column.\n> Unfortunately we have no mechanism to invalidate/remove\n> objects(or prepared plans) which uses such attribute numbers.\n\n1 create table alpha( id int4, payload text );\n2 insert into alpha( id, payload ) values( 0, 'zero' );\n3 create table t( payload text );\n4 insert into t( payload ) select payload from alpha;\n5 drop table alpha;\n6 alter table t rename to alpha;\n\n Not a big deal, right? Also, drop column isn't really needed\nthat often and requires alot of manual processing, like updating\nviews/rules/procedures etc.\n On the other hand, when dropping a column (multiple columns) in a table\nwith 10+ columns, statements 3 and 4 above may become quite painfull.\nIt'd be nice if drop column were `expanded' to appropriate queries\nautomatically. Not sure about abovementioned attribute numbers in such\ncase.\n In general, however, if drop column is the only statement that is likely\nto affect attribute numbers this way (assuming that add column always adds\nand never inserts an attribute), then a fairly simple function in plpgsql,\nshipped with template1 will probably do. At least it should work to drop a\nsingle column, because full-featured function will require argument list of\nvariable length.\n\n Ed\n\n\n\n---\n Well I tried to be meek\n And I have tried to be mild\n But I spat like a woman\n And I sulked like a child\n I have lived behind the walls\n That have made me alone\n Striven for peace\n Which I never have known\n\n Dire Straits, Brothers In Arms, The Man's Too Strong (Knopfler)\n\n",
"msg_date": "Sun, 15 Oct 2000 14:29:31 +0000",
"msg_from": "KuroiNeko <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: AW: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Here's something I don't understand.... If I'm missing an obvious, please\nfeel free to kick me in the right direction.\n We're given two tables, linked with a foreign key. When insert is run on a\nmaster table, everything is OK, when trying to insert into a detail table,\na strange query appears in the log (schema and log snippet attached).\n In fact, there should be no problem, but if a user has no permissions to\nupdate the master table (eg, this is a log where he can only insert, but\nneither delete, nor update), then inserting into detail table fails on\npermission violation at the query:\n\nSELECT oid FROM \"alpha\" WHERE \"id\" = $1 FOR UPDATE OF \"alpha\"\n\n TIA\n\n Ed\n\n\n---\n Well I tried to be meek\n And I have tried to be mild\n But I spat like a woman\n And I sulked like a child\n I have lived behind the walls\n That have made me alone\n Striven for peace\n Which I never have known\n\n Dire Straits, Brothers In Arms, The Man's Too Strong (Knopfler)",
"msg_date": "Sun, 15 Oct 2000 15:10:08 +0000",
"msg_from": "KuroiNeko <[email protected]>",
"msg_from_op": false,
"msg_subject": "select oid .... for update ...."
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> My trial implementation using physical/logical attribute numbers\n> isn't so clean as I expected. I'm inclined to restrict my change to\n> fix the TODO\n> * ALTER TABLE ADD COLUMN to inherited table put column in wrong place\n> though it would also introduce a backward compatibility.\n\nI'm confused --- how will that make things any simpler or cleaner?\nYou still need physical/logical column numbering distinction in order\nto fix inherited ADD COLUMN, don't you?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Oct 2000 11:57:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "\nThis is a known problem in 7.0.x (see mailing list archives for more\ninformation). Peter E has a patch for 7.1 to remove this problem.\n\nStephan Szabo\[email protected]\n\nOn Sun, 15 Oct 2000, KuroiNeko wrote:\n\n> \n> Here's something I don't understand.... If I'm missing an obvious, please\n> feel free to kick me in the right direction.\n> We're given two tables, linked with a foreign key. When insert is run on a\n> master table, everything is OK, when trying to insert into a detail table,\n> a strange query appears in the log (schema and log snippet attached).\n> In fact, there should be no problem, but if a user has no permissions to\n> update the master table (eg, this is a log where he can only insert, but\n> neither delete, nor update), then inserting into detail table fails on\n> permission violation at the query:\n> \n> SELECT oid FROM \"alpha\" WHERE \"id\" = $1 FOR UPDATE OF \"alpha\"\n\n",
"msg_date": "Sun, 15 Oct 2000 10:16:32 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select oid .... for update ...."
},
{
"msg_contents": "> This is a known problem in 7.0.x (see mailing list archives for more\n> information). Peter E has a patch for 7.1 to remove this problem.\n\n Thanks and sorry for the hassle.\n While we're on it, if there's any work going on premissions (separating\nupdate/delete etc), I'd be glad to offer my help, if needed.\n\n Thx\n\n\n--\n\n Well I tried to be meek\n And I have tried to be mild\n But I spat like a woman\n And I sulked like a child\n I have lived behind the walls\n That have made me alone\n Striven for peace\n Which I never have known\n\n Dire Straits, Brothers In Arms, The Man's Too Strong (Knopfler)\n\n",
"msg_date": "Sun, 15 Oct 2000 19:12:43 +0000",
"msg_from": "KuroiNeko <[email protected]>",
"msg_from_op": false,
"msg_subject": "Permissions, was select oid"
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > My trial implementation using physical/logical attribute numbers\n> > isn't so clean as I expected. I'm inclined to restrict my change to\n> > fix the TODO\n> > * ALTER TABLE ADD COLUMN to inherited table put column in wrong place\n> > though it would also introduce a backward compatibility.\n> \n> I'm confused --- how will that make things any simpler or cleaner?\n> You still need physical/logical column numbering distinction in order\n> to fix inherited ADD COLUMN, don't you?\n>\n\nYes,the implementation would be almost same.\nI've been busy for some time and wasn't able to follow\nthis thread. Don't people love 2x DROP COLUMN ?\nI don't object to 2x DROP COLUMN if it could be\nimplemented properly though I don't want to implement\nit myself. However I would strongly object to 2x ADD\nCOLUMN if such implementations are proposed. \n\nRegards.\n\nHiroshi Inoue\n",
"msg_date": "Mon, 16 Oct 2000 08:21:11 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: AW: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "\n\nKuroiNeko wrote:\n\n> Inoue san,\n>\n> > This style of \"DROP COLUMN\" would change the attribute\n> > numbers whose positons are after the dropped column.\n> > Unfortunately we have no mechanism to invalidate/remove\n> > objects(or prepared plans) which uses such attribute numbers.\n>\n> 1 create table alpha( id int4, payload text );\n> 2 insert into alpha( id, payload ) values( 0, 'zero' );\n> 3 create table t( payload text );\n> 4 insert into t( payload ) select payload from alpha;\n> 5 drop table alpha;\n> 6 alter table t rename to alpha;\n>\n> Not a big deal, right?\n\nYes,there's a similar procedure in FAQ.\n\n> Also, drop column isn't really needed\n> that often and requires alot of manual processing, like updating\n> views/rules/procedures etc.\n\nThe FAQ doesn't refer to alot of manual processing at all.\nCertainly it's very difficult to cover all procederes to\naccomplish \"DROP COLUMN\". It's one of the reason why\nI've said \"DROP COLUMN\" isn't that easy.\n\n>\n> On the other hand, when dropping a column (multiple columns) in a table\n> with 10+ columns, statements 3 and 4 above may become quite painfull.\n> It'd be nice if drop column were `expanded' to appropriate queries\n> automatically. Not sure about abovementioned attribute numbers in such\n> case.\n> In general, however, if drop column is the only statement that is likely\n> to affect attribute numbers this way (assuming that add column always adds\n> and never inserts an attribute), then a fairly simple function in plpgsql,\n> shipped with template1 will probably do.\n\nplpgsql functions are executed in a transaction.\nI don't think plpgsql could execute\n \"insert(select into) -> drop -> rename\"\nproperly(at least currently).\n\nRegards.\n\nHiroshi Inoue\n\n\n\n\n> At least it should work to drop a\n> single column, because full-featured function will require argument list of\n> variable length.\n>\n> Ed\n>\n> ---\n> Well I tried to be meek\n> And I have tried to be mild\n> But I spat like a woman\n> And I sulked like a child\n> I have lived behind the walls\n> That have made me alone\n> Striven for peace\n> Which I never have known\n>\n> Dire Straits, Brothers In Arms, The Man's Too Strong (Knopfler)\n\n",
"msg_date": "Mon, 16 Oct 2000 09:41:59 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "KuroiNeko wrote:\n\n> 1 create table alpha( id int4, payload text );\n<snip>\n\n> Not a big deal, right? \n\nYes a big deal. You just lost all your oids.\n",
"msg_date": "Mon, 16 Oct 2000 18:51:10 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n\n> When I used Oracle,I saw neither option of DROP \n> COLUMN feature. It seems to tell us that the \n> implementation isn't\n> that easy. It may not be a bad choise to give up DROP\n> COLUMN feature forever.\n\nBecause it's not easy we shouldn't do it? I don't think so. The perfect\nsolution is lazy updating of tuples but it requires versioning of\nmeta-data and that requires a bit of work.\n\n> However I would strongly object to 2x\n> ADD COLUMN if such implementations are proposed. \n\nNot even 2x for ADD COLUMN DEFAULT ?\n",
"msg_date": "Mon, 16 Oct 2000 18:57:12 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "\n\nChris wrote:\n\n> Hiroshi Inoue wrote:\n>\n> > When I used Oracle,I saw neither option of DROP\n> > COLUMN feature. It seems to tell us that the\n> > implementation isn't\n> > that easy. It may not be a bad choise to give up DROP\n> > COLUMN feature forever.\n>\n> Because it's not easy we shouldn't do it? I don't think so. The perfect\n> solution is lazy updating of tuples but it requires versioning of\n> meta-data and that requires a bit of work.\n>\n> > However I would strongly object to 2x\n> > ADD COLUMN if such implementations are proposed.\n>\n> Not even 2x for ADD COLUMN DEFAULT ?\n\nCertainly it would need 2x.\nHowever is ADD COLUMN DEFAULT really needed ?\nI would do as follows.\n\n ADD COLUMN (without default)\n UPDATE .. SET new_column = new default\n ALTER TABLE ALTER COLUMN SET DEFAULT\n\nRegards.\nHiroshi Inoue\n\n\n",
"msg_date": "Mon, 16 Oct 2000 17:21:09 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n\n> Certainly it would need 2x.\n> However is ADD COLUMN DEFAULT really needed ?\n> I would do as follows.\n> \n> ADD COLUMN (without default)\n> UPDATE .. SET new_column = new default\n> ALTER TABLE ALTER COLUMN SET DEFAULT\n\nWell in current postgres that would use 2x. With WAL I presume that\nwould use a lot of log space and probably a lot more processing. But if\nyou can do the above you might as well support the right syntax.\n",
"msg_date": "Mon, 16 Oct 2000 19:34:34 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "\n\nChris wrote:\n\n> Hiroshi Inoue wrote:\n>\n> > When I used Oracle,I saw neither option of DROP\n> > COLUMN feature. It seems to tell us that the\n> > implementation isn't\n> > that easy. It may not be a bad choise to give up DROP\n> > COLUMN feature forever.\n>\n> Because it's not easy we shouldn't do it? I don't think so. The perfect\n> solution is lazy updating of tuples but it requires versioning of\n> meta-data and that requires a bit of work.\n>\n\nWe could easily break the consistency of DB due to careless\nimplementations. Is \"DROP COLUMN\" valuable to walk on a\ntightrope ? I would agree if \"ADD COLUMN\" needs to walk\non a tightrope.\n\nRegards.\n\nHiroshi Inoue\n\n",
"msg_date": "Mon, 16 Oct 2000 17:57:14 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n\n> We could easily break the consistency of DB due to \n> careless implementations.\n\nI'm sure no-one around here would do careless implementations. :-)\n",
"msg_date": "Mon, 16 Oct 2000 20:42:18 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Chris wrote:\n> \n> Hiroshi Inoue wrote:\n> \n> > When I used Oracle,I saw neither option of DROP\n> > COLUMN feature. It seems to tell us that the\n> > implementation isn't\n> > that easy. It may not be a bad choise to give up DROP\n> > COLUMN feature forever.\n> \n> Because it's not easy we shouldn't do it? I don't think so. The perfect\n> solution is lazy updating of tuples but it requires versioning of\n> meta-data and that requires a bit of work.\n\nI would prefer the logical/physical numbering + typed tuples\n(or is it the same thing ;)\n\nIt would give us the additional benefit of being able to move to SQL3-wise \ncorrect CREATE TABLE UNDER syntax with most constraints (primary/foreign key, \nunique, ...) carried on automatically if we store the (single-)inheritance \nhierarchy in one file.\n\nOthers (NOT NULL, CHECK, ...) will need additional check for tuple type.\n\nThis does not solve the problem for multiple inheritance, but then we could \ncludge most of it by inheriting all from a single root.\n\nI suspect it would still be easier than doing it the other way (by\nconstructing \nUNIONs each time, checking several indexes for uniquenass (or creating a new \nindex type for indexing several separate relations))\n\n---------------------\nHannu\n",
"msg_date": "Mon, 16 Oct 2000 13:01:58 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "At 10:56 PM 10/15/00 +0900, Hiroshi Inoue wrote:\n\n>When I used Oracle,I saw neither option of DROP COLUMN\n>feature. It seems to tell us that the implementation isn't \n>that easy. It may not be a bad choise to give up DROP\n>COLUMN feature forever.\n\nBoth options are in Oracle now, as proudly documented in their\nfreely accessible on-line documentation. It is very possible\nthey didn't implement it until version 8, i.e. until a couple of years\nago.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 16 Oct 2000 07:50:32 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: AW: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "> > Not a big deal, right?\n>\n> Yes a big deal. You just lost all your oids.\n\n After I hit the wall with oids for the first time, I don't refer to them\nanymore :) But yes, you're perfectly right, this is one more reason to have\nDDL completely `automated,' ie no manual substitutions.\n And here the fact that drop column is rarely needed is a double-bladed\nsword. With things that you don't do often, you're at risk to forget\nsomething essential and hose your data.\n\n\n--\n\n Well I tried to be meek\n And I have tried to be mild\n But I spat like a woman\n And I sulked like a child\n I have lived behind the walls\n That have made me alone\n Striven for peace\n Which I never have known\n\n Dire Straits, Brothers In Arms, The Man's Too Strong (Knopfler)\n\n",
"msg_date": "Mon, 16 Oct 2000 15:23:23 +0000",
"msg_from": "KuroiNeko <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "> Both options are in Oracle now, as proudly documented in their\n> freely accessible on-line documentation. It is very possible\n> they didn't implement it until version 8, i.e. until a couple of years\n> ago.\n\nFYI: ALTER TABLE DROP COLUMN was added as of 8 / 8i according to our\nOracle DBA.\n\n- merlin\n\n",
"msg_date": "Mon, 16 Oct 2000 13:02:17 -0400 (EDT)",
"msg_from": "merlin <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: AW: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "On Mon, Oct 16, 2000 at 06:51:10PM +1100, Chris wrote:\n> KuroiNeko wrote:\n> \n> > 1 create table alpha( id int4, payload text );\n> <snip>\n> \n> > Not a big deal, right? \n> \n> Yes a big deal. You just lost all your oids.\n\nBeen there. Done that. Learned to heed the warnings about using\noids in any kind of persistant manner.\n\n-- \nAdam Haberlach | ASCII /~\\\[email protected] | Ribbon \\ / Against\nhttp://www.newsnipple.com | Campaign X HTML\n'88 EX500 | / \\ E-mail\n",
"msg_date": "Mon, 16 Oct 2000 10:43:14 -0700",
"msg_from": "Adam Haberlach <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: ALTER TABLE DROP COLUMN"
}
]
|
[
{
"msg_contents": "Well, hopefully WAL will be ready for alpha testing in a few days.\nUnfortunately\nat the moment I have to step side from main stream to implement new file\nnaming,\nthe biggest todo for integration WAL into system.\n\nI would really appreciate any help in the following issues (testing can\nstart regardless\nof their statuses but they must be resolved anyway):\n\n1. BTREE: sometimes WAL can't guarantee right order of items on leaf pages\n after recovery - new flag BTP_REORDER introduced to mark such pages.\n Btree should be changed to handle this case in normal processing mode.\n2. HEAP: like 1., this issue is result of attempt to go without compensation\nrecords\n (ie without logging undo operations): it's possible that sometimes in\nredo\n there will be no space for new records because of in recovery we don't\n undo changes for aborted xactions immediately - function like BTREE'\n_bt_cleanup_page_\n required for HEAP as well as general inspection of all places where\nHEAP' redo ops\n try to insert records (initially I thought that in recovery we'll undo\nchanges immediately\n after reading abort record from log - this wouldn't work for BTREE:\nsplits must be\n redo-ne before undo).\n3. There are no redo/undo for HASH, RTREE & GIST yet. This would be *really\nreally\n great* if someone could implement it using BTREE' redo/undo code as\nprototype.\n These are the most complex parts of this todo.\n\nProbably, something else will follow later.\n\nRegards,\nVadim\n\n\n",
"msg_date": "Sat, 14 Oct 2000 13:19:17 -0700",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "WAL status & todo"
},
{
"msg_contents": "On Sat, 14 Oct 2000, Vadim Mikheev wrote:\n> Well, hopefully WAL will be ready for alpha testing in a few days.\n> Unfortunately\n> at the moment I have to step side from main stream to implement new file\n> naming,\n> the biggest todo for integration WAL into system.\n>\n> I would really appreciate any help in the following issues (testing can\n> start regardless\n> of their statuses but they must be resolved anyway):\n\nI have downloaded the source via CVSup. Where can I find the WAL and the \nTOAST code?\n\nThanks!!\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Sun, 15 Oct 2000 12:00:39 -0300",
"msg_from": "\"Martin A. Marques\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL status & todo"
},
{
"msg_contents": "\"Vadim Mikheev\" <[email protected]> writes:\n> 3. There are no redo/undo for HASH, RTREE & GIST yet. This would be *really\n> really\n> great* if someone could implement it using BTREE' redo/undo code as\n> prototype.\n> These are the most complex parts of this todo.\n\nI don't understand why WAL needs to log internal operations of any of\nthe index types. Seems to me that you could treat indexes as black\nboxes that are updated as side effects of WAL log items for heap tuples:\nwhen adding a heap tuple as a result of a WAL item, you just call the\nusual index insert routines, and when deleting a heap tuple as a result\nof undoing a WAL item, you mark the tuple invalid but don't physically\nremove it till VACUUM (thus no need to worry about its index entries).\n\nThis doesn't address the issue of recovering from an incomplete index\nupdate (such as a partially-completed btree page split), but I think\nthe most reliable way to do that is to add WAL records on the order of\n\"update beginning for index X\" and \"update done for index X\". If you\nsee the begin and not the done record when replaying a log, you assume\nthe index is corrupt and rebuild it from scratch, using Hiroshi's\nindex-rebuild code.\n\nThe reason I think this is a better way is that I don't believe any of\nus (unless maybe Vadim) understand rtree, hash, or especially GIST\nindexes well enough to implement a correct WAL logging scheme for them.\nCertainly just \"use the btree code as a prototype\" will not yield a\ncrash-robust WAL method for the other index types, because they will\nhave different requirements about what combinations of changes have to\nhappen together to get from one consistent state to the next.\n\nFor that matter I am far from convinced that the currently committed\ncode for btree WAL logging is correct --- where does it cope with\ncleaning up after an unfinished page split? I don't see it.\n\nSince we have very poor testing capabilities for the non-mainstream\nindex types (remember how I broke rtree completely during 6.5 devel,\nand no one noticed till quite late in beta?) I will have absolutely\nzero confidence in WAL support for these index types if it's implemented\nthis way. I think we should go with a black-box approach that's the\nsame for all index types and is implemented completely outside the\nindex-access-method-specific code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 11:15:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "WAL and indexes (Re: WAL status & todo)"
}
]
|
[
{
"msg_contents": "\t>> Well, hopefully WAL will be ready for alpha testing in a few\ndays.\n\t>> Unfortunately at the moment I have to step side from main stream\n\t>> to implement new file naming, the biggest todo for integration\nWAL into system.\n\t>>\n\t>> I would really appreciate any help in the following issues\n(testing can\n\t>> start regardless of their statuses but they must be resolved\nanyway):\n\t>\n\t> I have downloaded the source via CVSup. Where can I find the WAL\n\t> and the TOAST code?\n\n\tHEAP/BTREE related WAL code are in src/backend/acces/{heap|nbtree}/\n\t#ifdef-ed with XLOG.\n\n\tVadim\n\n",
"msg_date": "Sun, 15 Oct 2000 13:01:05 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?koi8-r?Q?=EF=D4=D7=C5=D4=3A_=5BHACKERS=5D_WAL_status_=26_tod?=\n\t=?koi8-r?Q?o?="
}
]
|
[
{
"msg_contents": "> > > * Prevent index lookups (or index entries using partial index) on most\n> > > common values; instead use sequential scan \n> > \n> > This behavior already exists for the most common value, and would\n> > exist for any additional values that we had stats for. Don't see\n> > why you think a separate TODO item is needed.\n> \n> You mean the optimizer already skips an index lookup for the most common\n> value, and instead does a sequential scan? Seems you are way ahead of\n> me.\n\nIf the answer is yes, how do you prevent a join that hits the most\ncommon value?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Oct 2000 17:55:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance on inserts"
}
]
|
[
{
"msg_contents": "Could you add to the TODO:\n\n support of binary data (eg varbinary type)\n\nI think the above is not trivial, as I think the parser choques on \\00 bytes\nat several levels...\n\nI had a check on the TODO and it seems that TOAST is not planned anymore for\n7.1. Is it true?\n\nFranck Martin\nDatabase Development Officer\nSOPAC South Pacific Applied Geoscience Commission\nFiji\nE-mail: [email protected] <mailto:[email protected]> \nWeb site: http://www.sopac.org/ <http://www.sopac.org/> \n\nThis e-mail is intended for its recipients only. Do not forward this\ne-mail without approval. The views expressed in this e-mail may not be\nneccessarily the views of SOPAC.\n\n\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]]\nSent: Monday, October 16, 2000 9:45 AM\nTo: Jules Bean\nCc: Tom Lane; Alfred Perlstein; [email protected]\nSubject: Re: [HACKERS] Performance on inserts\n\n\n\nAdded to TODO:\n\n* Prevent index lookups (or index entries using partial index) on most\n common values; instead use sequential scan \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 11:09:15 +1200",
"msg_from": "Franck Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Performance on inserts"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Could you add to the TODO:\n> \n> support of binary data (eg varbinary type)\n> \n> I think the above is not trivial, as I think the parser choques on \\00 bytes\n> at several levels...\n\nbytea type works for inserting null: 'a\\\\000b'.\n\n> \n> I had a check on the TODO and it seems that TOAST is not planned anymore for\n> 7.1. Is it true?\n\nIt is in 7.1.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Oct 2000 19:24:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
}
]
|
[
{
"msg_contents": "\nNo, the committers list is a sendmail problem, should be fixed\nmomentarily, as I'm going to disable the strict domain checking on that\nlist ... I'll look at the snapshot thing tonight too, its using Peter's\nnew script(s), which appear to work great from the command line, but has a\nproblem when run from cron ...\n\nOn Sun, 15 Oct 2000, Tom Lane wrote:\n\n> BTW, not only is pgsql-committers wedged (still) since Monday,\n> but I see at ftp://ftp.postgresql.org/pub/dev/ that the nightly\n> snapshot hasn't updated since Monday either.\n> \n> Maybe related?\n> \n> \t\t\tregards, tom lane\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 15 Oct 2000 21:38:12 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql-committers list definitely wedged "
}
]
|
[
{
"msg_contents": "\nMorning all ...\n\n\tI'm trying to get the committers mailing list to work, and the\n\"break\" is in sendmail, as far as I can tell. Basically, its taking\n'locally posted messages' and not adding a domain to the back of it, so\nthat majordomo sees them as:\n\n--== Error when connecting: Invalid address: \"Marc G. Fournier\" <scrappy>\nYou did not include a hostname as part of the address.\n at /usr/local/majordomo/bin/mj_queuerun line 470\n\nI've tried adding 'FEATURE(`always_add_domain')' to my m4 config file, but\nthat doesn't appear to be helping, but figure it probably somethign really\nobvious :(\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n",
"msg_date": "Sun, 15 Oct 2000 22:34:07 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "getting local domain to get attached through sendmail ..."
},
{
"msg_contents": "On Sun, 15 Oct 2000, The Hermit Hacker wrote:\n\n> \tI'm trying to get the committers mailing list to work, and the\n> \"break\" is in sendmail, as far as I can tell. Basically, its taking\n> 'locally posted messages' and not adding a domain to the back of it, so\n> that majordomo sees them as:\n> \n> --== Error when connecting: Invalid address: \"Marc G. Fournier\" <scrappy>\n> You did not include a hostname as part of the address.\n> at /usr/local/majordomo/bin/mj_queuerun line 470\n> \n> I've tried adding 'FEATURE(`always_add_domain')' to my m4 config file, but\n> that doesn't appear to be helping, but figure it probably somethign really\n> obvious :(\n\nHack^H^H^H^HEdit the commit script to use sendmail -f [email protected]?\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Sun, 15 Oct 2000 21:04:36 -0500 (CDT)",
"msg_from": "\"Dominic J. Eidson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: getting local domain to get attached through sendmail\n ..."
},
{
"msg_contents": "\ngoing to try that, but it doesn't fix the problem where if someone from\nthe local machine sends to the local list, it won't go through either\n... :( but, at least I can get htis fixed ...\n\n\nOn Sun, 15 Oct 2000, Dominic J. Eidson wrote:\n\n> On Sun, 15 Oct 2000, The Hermit Hacker wrote:\n> \n> > \tI'm trying to get the committers mailing list to work, and the\n> > \"break\" is in sendmail, as far as I can tell. Basically, its taking\n> > 'locally posted messages' and not adding a domain to the back of it, so\n> > that majordomo sees them as:\n> > \n> > --== Error when connecting: Invalid address: \"Marc G. Fournier\" <scrappy>\n> > You did not include a hostname as part of the address.\n> > at /usr/local/majordomo/bin/mj_queuerun line 470\n> > \n> > I've tried adding 'FEATURE(`always_add_domain')' to my m4 config file, but\n> > that doesn't appear to be helping, but figure it probably somethign really\n> > obvious :(\n> \n> Hack^H^H^H^HEdit the commit script to use sendmail -f [email protected]?\n> \n> \n> -- \n> Dominic J. Eidson\n> \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n> -------------------------------------------------------------------------------\n> http://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 15 Oct 2000 23:07:30 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: getting local domain to get attached through sendmail\n ..."
},
{
"msg_contents": "I wonder how many \"you're not allowed to post to this list\" bounces I'll\nget from answering this... Please, when you send messages to\nsendmail.org, send them *only* there, not to various mailing lists as\nwell.\n\nThe Hermit Hacker <[email protected]> wrote:\n>\tI'm trying to get the committers mailing list to work, and the\n>\"break\" is in sendmail, as far as I can tell. Basically, its taking\n>'locally posted messages' and not adding a domain to the back of it, so\n>that majordomo sees them as:\n>\n>--== Error when connecting: Invalid address: \"Marc G. Fournier\" <scrappy>\n>You did not include a hostname as part of the address.\n> at /usr/local/majordomo/bin/mj_queuerun line 470\n>\n>I've tried adding 'FEATURE(`always_add_domain')' to my m4 config file, but\n>that doesn't appear to be helping, but figure it probably somethign really\n>obvious :(\n\nDid you put it *before* the MAILER lines, as described in cf/README?\n\n--Per Hedeland\n",
"msg_date": "Tue, 17 Oct 2000 01:29:40 +0200 (CEST)",
"msg_from": "Per Hedeland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: getting local domain to get attached through sendmail ..."
}
]
|
[
{
"msg_contents": "\n... should be working again. I hard coded the path so that it finds\nbison, which appears to be what was killing it ...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 15 Oct 2000 23:07:59 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "snapshots ..."
},
{
"msg_contents": "The Hermit Hacker writes:\n\n> ... should be working again. I hard coded the path so that it finds\n> bison, which appears to be what was killing it ...\n\nThat sounds suspiciously like the respective cron user not having\n/usr/local/bin is its path.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 16 Oct 2000 17:43:38 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: snapshots ..."
}
]
|
[
{
"msg_contents": " Date: Sunday, October 15, 2000 @ 23:34:47\nAuthor: pjw\n\nUpdate of /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref\n from hub.org:/home/users/p/pjw/work/pgsql/doc/src/sgml/ref\n\nModified Files:\n\tallfiles.sgml \n\n----------------------------- Log Message -----------------------------\n\nAdded pg_restore to allfiles.sgml\n\n",
"msg_date": "Sun, 15 Oct 2000 23:34:47 -0400 (EDT)",
"msg_from": "Philip Warner - CVS <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql/doc/src/sgml/ref (allfiles.sgml)"
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> As a result do people have any objection to changing pg_restore to\n> pg_undump? Or pg_load?\n\nOut of those two names, I'd vote for pg_load ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 00:27:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup, restore & pg_dump "
},
{
"msg_contents": "\nSince we may have a workable backup/restore based on WAL available in 7.1,\nI am now wondering at the wisdom of creating 'pg_restore', which reads the\nnew pg_dump archive files. It is probably better to have pg_backup &\npg_restore as the backup/restore utilities.\n\nAs a result do people have any objection to changing pg_restore to\npg_undump? Or pg_load?\n\nAny other suggestions would also be welcome.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 16 Oct 2000 14:44:15 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Backup, restore & pg_dump"
},
{
"msg_contents": "What was the matter with the name pg_restore?\n\n> \n> Since we may have a workable backup/restore based on WAL available in 7.1,\n> I am now wondering at the wisdom of creating 'pg_restore', which reads the\n> new pg_dump archive files. It is probably better to have pg_backup &\n> pg_restore as the backup/restore utilities.\n> \n> As a result do people have any objection to changing pg_restore to\n> pg_undump? Or pg_load?\n> \n> Any other suggestions would also be welcome.\n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 00:45:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup, restore & pg_dump"
},
{
"msg_contents": "On Mon, 16 Oct 2000, Bruce Momjian wrote:\n\n> What was the matter with the name pg_restore?\n\nI didn't wanna be the one to ask, but I was kinda confused on that point\ntoo ...\n\n> > Since we may have a workable backup/restore based on WAL available in 7.1,\n> > I am now wondering at the wisdom of creating 'pg_restore', which reads the\n> > new pg_dump archive files. It is probably better to have pg_backup &\n> > pg_restore as the backup/restore utilities.\n> > \n> > As a result do people have any objection to changing pg_restore to\n> > pg_undump? Or pg_load?\n> > \n> > Any other suggestions would also be welcome.\n> > \n> > \n> > ----------------------------------------------------------------\n> > Philip Warner | __---_____\n> > Albatross Consulting Pty. Ltd. |----/ - \\\n> > (A.B.N. 75 008 659 498) | /(@) ______---_\n> > Tel: (+61) 0500 83 82 81 | _________ \\\n> > Fax: (+61) 0500 83 82 82 | ___________ |\n> > Http://www.rhyme.com.au | / \\|\n> > | --________--\n> > PGP key available upon request, | /\n> > and from pgp5.ai.mit.edu:11371 |/\n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 16 Oct 2000 01:56:46 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup, restore & pg_dump"
},
{
"msg_contents": "Don't go changing yet. When Vadim has something, we can decide. I\nthink we may have unique commands for logging control and stuff, so\nlet's see how it plays out.\n\n> At 00:45 16/10/00 -0400, Bruce Momjian wrote:\n> >What was the matter with the name pg_restore?\n> \n> The fact that we will have a 'proper' backup/restore with the WAL changes,\n> and it seems more appropriate that the new utilities should be called\n> pg_backup & pg_restore. This leaves the 'undump' part of pg_dump without a\n> name. So I will most likely call it pg_load since dump & load are commonly\n> associated as verbs.\n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 01:01:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup, restore & pg_dump"
},
{
"msg_contents": "At 00:45 16/10/00 -0400, Bruce Momjian wrote:\n>What was the matter with the name pg_restore?\n\nThe fact that we will have a 'proper' backup/restore with the WAL changes,\nand it seems more appropriate that the new utilities should be called\npg_backup & pg_restore. This leaves the 'undump' part of pg_dump without a\nname. So I will most likely call it pg_load since dump & load are commonly\nassociated as verbs.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 16 Oct 2000 15:53:29 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup, restore & pg_dump"
}
]
|
[
{
"msg_contents": "I found bytea doing a \\dT in psql, but I do not find any documentation on\nit.\n\nCould I have some source code implementation of bytea with examples ?\n\nFranck Martin\nDatabase Development Officer\nSOPAC South Pacific Applied Geoscience Commission\nFiji\nE-mail: [email protected] <mailto:[email protected]> \nWeb site: http://www.sopac.org/ <http://www.sopac.org/> \n\nThis e-mail is intended for its recipients only. Do not forward this\ne-mail without approval. The views expressed in this e-mail may not be\nneccessarily the views of SOPAC.\n\n",
"msg_date": "Mon, 16 Oct 2000 17:24:25 +1200",
"msg_from": "Franck Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "bytea type"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> I found bytea doing a \\dT in psql, but I do not find any documentation on\n> it.\n> \n> Could I have some source code implementation of bytea with examples ?\n\nYes, it is like text, but you can put in binary data as 'a\\\\000b' puts\na, null, b.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 10:30:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bytea type"
}
]
|
[
{
"msg_contents": "\n> > > As a result do people have any objection to changing pg_restore to\n> > > pg_undump? Or pg_load?\n\nAlso possible would be a name like Oracle\npg_exp and pg_imp for export and import.\n(or pg_export and pg_import)\n\nLoad and unload is often more tied to data only (no dml).\n\nI agree that the current name pg_restore for its current functionality\nis not good and misleading in the light of WAL backup. \n\nAndreas\n",
"msg_date": "Mon, 16 Oct 2000 12:23:56 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Backup, restore & pg_dump"
},
{
"msg_contents": "\nI like the pg_{import,export} names myself ... *nod*\n\n\nOn Mon, 16 Oct 2000, Zeugswetter Andreas SB wrote:\n\n> \n> > > > As a result do people have any objection to changing pg_restore to\n> > > > pg_undump? Or pg_load?\n> \n> Also possible would be a name like Oracle\n> pg_exp and pg_imp for export and import.\n> (or pg_export and pg_import)\n> \n> Load and unload is often more tied to data only (no dml).\n> \n> I agree that the current name pg_restore for its current functionality\n> is not good and misleading in the light of WAL backup. \n> \n> Andreas\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 16 Oct 2000 10:08:50 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Backup, restore & pg_dump"
},
{
"msg_contents": "Philip Warner writes:\n\n> >I like the pg_{import,export} names myself ... *nod*\n> \n> Sounds fine also; but we have compatibility issues in that we still need\n> pg_dump. Maybe just a symbolic link to pg_export.\n\nI'm not so fond of changing a long-established program name for the sake\nof ethymological correctness or consistency with other products (yeah,\nright). I got plenty of suggestions if you want to start that. I say\nstick to pg_dump[all], and name the inverse pg_undump, pg_load, or\npmud_gp.\n\nBtw., it will still be possible to restore, er, reload, with psql, right?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 17 Oct 2000 00:12:59 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Backup, restore & pg_dump"
},
{
"msg_contents": "> Philip Warner writes:\n> \n> > >I like the pg_{import,export} names myself ... *nod*\n> > \n> > Sounds fine also; but we have compatibility issues in that we still need\n> > pg_dump. Maybe just a symbolic link to pg_export.\n> \n> I'm not so fond of changing a long-established program name for the sake\n> of ethymological correctness or consistency with other products (yeah,\n\nAgreed.\n\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 18:15:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Backup, restore & pg_dump"
},
{
"msg_contents": "At 10:08 16/10/00 -0300, The Hermit Hacker wrote:\n>\n>I like the pg_{import,export} names myself ... *nod*\n>\n\nSounds fine also; but we have compatibility issues in that we still need\npg_dump. Maybe just a symbolic link to pg_export.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 17 Oct 2000 08:51:11 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Backup, restore & pg_dump"
},
{
"msg_contents": "On Tue, 17 Oct 2000, Peter Eisentraut wrote:\n\n> Philip Warner writes:\n> \n> > >I like the pg_{import,export} names myself ... *nod*\n> > \n> > Sounds fine also; but we have compatibility issues in that we still need\n> > pg_dump. Maybe just a symbolic link to pg_export.\n> \n> I'm not so fond of changing a long-established program name for the sake\n> of ethymological correctness or consistency with other products (yeah,\n> right). I got plenty of suggestions if you want to start that. I say\n> stick to pg_dump[all], and name the inverse pg_undump, pg_load, or\n> pmud_gp.\n\npmud_gp? *raised eyebrow*\n\n\n",
"msg_date": "Mon, 16 Oct 2000 19:51:48 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Backup, restore & pg_dump"
},
{
"msg_contents": "At 00:12 17/10/00 +0200, Peter Eisentraut wrote:\n>\n>Btw., it will still be possible to restore, er, reload, with psql, right?\n>\n\nCorrect.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 17 Oct 2000 11:56:04 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Backup, restore & pg_dump"
}
]
|
[
{
"msg_contents": "\n> This style of \"DROP COLUMN\" would change the attribute\n> numbers whose positons are after the dropped column.\n> Unfortunately we have no mechanism to invalidate/remove\n> objects(or prepared plans) which uses such attribute numbers.\n> And I've seen no proposal/discussion to solve this problem\n> for DROP COLUMN feature. We wound't be able to prevent\n> PostgreSQL from doing the wrong thing silently. \n\nThat issue got me confused now (There was a previous mail where \nyou suggested using logical colid most of the time). Why not use the \nphysical colid in prepared objects/plans. Since those can't currently change\nit seems such plans would not be invalidated.\n\nAndreas\n",
"msg_date": "Mon, 16 Oct 2000 14:26:13 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "\n\nZeugswetter Andreas SB wrote:\n\n> > This style of \"DROP COLUMN\" would change the attribute\n> > numbers whose positons are after the dropped column.\n> > Unfortunately we have no mechanism to invalidate/remove\n> > objects(or prepared plans) which uses such attribute numbers.\n> > And I've seen no proposal/discussion to solve this problem\n> > for DROP COLUMN feature. We wound't be able to prevent\n> > PostgreSQL from doing the wrong thing silently.\n>\n> That issue got me confused now (There was a previous mail where\n> you suggested using logical colid most of the time). Why not use the\n> physical colid in prepared objects/plans. Since those can't currently change\n> it seems such plans would not be invalidated.\n>\n\nBecause I couldn't follow this thread well,I don't understand\nwhat people expect now.\nI've maintained my trial implementation for 2 months but\nI couldn't do it any longer.\n\nIf people prefer 2x DROP COLUMN,I would give up my trial.\nI know some people(Hannu, you .. ??) prefer logical and physical\nattribute numbers but Tom seems to hate an ugly implementa\n-tation. Unfortunately the implementation is pretty ugly and\nso I may have to give up my trial also.\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Tue, 17 Oct 2000 19:18:56 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: ALTER TABLE DROP COLUMN"
}
]
|
[
{
"msg_contents": "\n> > I've been reading something about implementation of histograms, and,\n> > AFAIK, in practice histograms is just a cool name for no more than:\n> > 1. top ten with frequency for each\n> > 2. the same for top ten worse\n> > 3. average for the rest\n\nConsider, that we only need that info for choice of index, and if an average value was too\nfrequent for this index to be efficient you can safely drop the index, it would be useless.\nThus it seems to me that keeping stats on the most infrequent values (point 2) is useless.\nFor me these would also be the most volatile, thus the stats would only be\naccurate for a short period of time.\n\nI think what we need is as follows:\n1. our current histograms \n2. a list of exceptions for exceptional values that are very frequent\n \nExceptional are those values that would skew the distribution too much.\n\nVery infrequent values should not be used for min|max values of histogram buckets,\nbut that is imho all that needs to be done for infrequent values.\n\nAndreas\n",
"msg_date": "Mon, 16 Oct 2000 14:45:14 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: analyze.c"
}
]
|
[
{
"msg_contents": "On Mon, 16 Oct 2000, Tom Lane wrote:\n\n> > but I see at ftp://ftp.postgresql.org/pub/dev/ that the nightly\n> > snapshot hasn't updated since Monday either.\n> \n> As of this morning, it looks like the tar.gz files all got updated\n> at 4AM EDT last night, but to judge by timestamps, the associated\n> .md5 files didn't. What the heck? What script is being run by the\n> cron job, anyway? ~pgsql/bin/mk-snapshot does not look like it could\n> behave this way.\n\n00 04 * * * /home/projects/pgsql/bin/mk-snapshot\n\nAnd, actually, looking at the script again, it does look like it can\nbehave this way ... if you run a script from cron, its PATH gets set to a\n'secure path', which is very limited. I set it to a specific PATH so that\n/usr/local was included, but didn't realize that md5 was in /sbin ... just\nadded that to the path for tonights run and am running it manually now so\nthat its all in sync ...\n\n\n",
"msg_date": "Mon, 16 Oct 2000 10:44:14 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql-committers list definitely wedged "
}
]
|
[
{
"msg_contents": "\n> I think we do, because it solves more than just the ALTER DROP COLUMN\n> problem: it cleans up other sore spots too. Like ALTER TABLE ADD COLUMN\n> in a table with child tables.\n\nYes, could also implement \"add column xx int before someothercolumn\"\nto add a column in the middle.\n\nAndreas\n",
"msg_date": "Mon, 16 Oct 2000 16:59:01 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: ALTER TABLE DROP COLUMN "
}
]
|
[
{
"msg_contents": "What exactly do we do with 7.1 on Nov 1st? Freeze or release? I'm absolutely\nsure I won't finish ecpg until Nov 1st. Yes, I know I had similar problems\nwith 7.0, but real life tends to take away too much time.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Mon, 16 Oct 2000 17:05:24 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "7.1 and ecpg"
}
]
|
[
{
"msg_contents": "is implemented. Regression tests are passed.\nmake distclean + initdb are required.\n\nNow all file/dir names are numeric. OIDs are used\nfor databases *and* relations on creation, but\nrelation file names may be changed later if required\n(separate from oid pg_class.relfilenode field is used).\n\nALTER TABLE RENAME is rollback-able now. I think that\nit's very easy to make DROP TABLE rollback-able too\n(file should be removed after xaction committed) but\nI have no time to deal with this currently.\n\nThere is still one issue - database locations. They don't\nwork at the moment. I'm going to use soft link for them.\nThomas, is it ok?\n\nVadim\n\n",
"msg_date": "Mon, 16 Oct 2000 08:13:14 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "New file naming"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> ALTER TABLE RENAME is rollback-able now. I think that\n> it's very easy to make DROP TABLE rollback-able too\n> (file should be removed after xaction committed) but\n> I have no time to deal with this currently.\n\nI will make sure that works before release.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 11:58:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New file naming "
}
]
|
[
{
"msg_contents": "> > >Bottom line is we're not sure what to do now. Opinions from the \n> > >floor, anyone?\n\nOne thing that comes to my mind is, that you (core members) working full\ntime on PG will produce so much work, that we \"hobby PgSQL'ers\" will\nhave a hard job in keeping up to date.\n\nThus you will have to be nice to us, becoming more and more ignorant.\nYou will have to understand seemingly dumb, uninformed, outdated ... questions \nand suggestions :-)\n\nBut I trust, your real [business] heart belongs to PostgreSQL, and if there comes \nthe time of strong disagreement with Great Bridge, it will be easy for you to find \na new job.\n\nCongratulations\nAndreas\n",
"msg_date": "Mon, 16 Oct 2000 17:15:48 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] My new job"
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> One thing that comes to my mind is, that you (core members) working full\n> time on PG will produce so much work, that we \"hobby PgSQL'ers\" will\n> have a hard job in keeping up to date.\n\nWhen core was first talking to the Great Bridge folks, one of our big\nconcerns was that a group of commercial developers contributing to the\nproject would be able to control the direction of the project by sheer\nmanpower, ie, core wouldn't have time to review their submissions in\nany detail.\n\nThat risk is still with us, though it looks a little different now that\nwe ourselves are the commercial developers in question ;-).\n\nI agree with what a couple of people have already remarked: transparency\nof decision making is going to be a critical issue in the future. Each\nof us full-timers will have to be very careful to keep pghackers\ninformed about what we're doing or thinking of doing. It won't benefit\nthe project if a few core developers get to work full-time, but everyone\nelse drops out because they feel they can't keep up or are not able to\nmake a meaningful contribution.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 12:21:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] My new job "
}
]
|
[
{
"msg_contents": "I'm wondering how useful it would be if one could enable selective\nfsync. That would mean that although the database was running\nasync mode, the system tables and doing things like create index\nwould cause an fsync to enforce ordering in case of a crash.\n\nThis would prevent more serious problems like stray files in the\ndatabase while still allowing normal data to be managed quickly.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Mon, 16 Oct 2000 08:56:26 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "selective fsync?"
},
{
"msg_contents": "WAL should supersede all these concerns about fsync. At the very least,\nit changes the tradeoffs enough that there's no point in designing\nperformance improvements based on the old code...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 12:25:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: selective fsync? "
}
]
|
[
{
"msg_contents": ">I don't understand why WAL needs to log internal operations of any of\n>the index types. Seems to me that you could treat indexes as black\n>boxes that are updated as side effects of WAL log items for heap tuples:\n>when adding a heap tuple as a result of a WAL item, you just call the\n>usual index insert routines, and when deleting a heap tuple as a result\n\nOn recovery backend *can't* use any usual routines:\nsystem catalogs are not available.\n\n>of undoing a WAL item, you mark the tuple invalid but don't physically\n>remove it till VACUUM (thus no need to worry about its index entries).\n\nOne of the purposes of WAL is immediate removing tuples \ninserted by aborted xactions. I want make VACUUM\n*optional* in future - space must be available for\nreusing without VACUUM. And this is first, very small,\nstep in this direction.\n\n>This doesn't address the issue of recovering from an incomplete index\n>update (such as a partially-completed btree page split), but I think\n>the most reliable way to do that is to add WAL records on the order of\n>\"update beginning for index X\" and \"update done for index X\". If you\n>see the begin and not the done record when replaying a log, you assume\n\nYou will still have to log changes for *each* page\nupdated on behalf of index operation! 
The fact that\nyou've seen begin/end records in log doesn't mean\nthat all intermediate changes to index pages are\nwritten to index file unless you've logged all these\nchanges and see all of them in index on recovery.\n\n>the index is corrupt and rebuild it from scratch, using Hiroshi's\n>index-rebuild code.\n\nHow fast is rebuilding of index for table with\n10^7 records?\nI agree to consider rtree/hash/gist as experimental\nindex access methods BUT we have to have at least\n*one* reliable index AM with short down time/\nfast recovery.\n\n>For that matter I am far from convinced that the currently committed\n>code for btree WAL logging is correct --- where does it cope with\n>cleaning up after an unfinished page split? I don't see it.\n\nWhat do you mean? If you say about updating parent\npage (\"my bits moved ...\" etc) then as I've mentioned\npreviously we can handle uninserted parent item in\nrun time (though it's not implemented yet -:)).\nWAL allows to restore both left and right siblings\nand this is the most critical split issue.\n(BTW, note that having all btitems on leaf level\nat place we could do REINDEX veeeeery fast).\n\n>Since we have very poor testing capabilities for the non-mainstream\n>index types (remember how I broke rtree completely during 6.5 devel,\n>and no one noticed till quite late in beta?) I will have absolutely\n>zero confidence in WAL support for these index types if it's implemented\n>this way. I think we should go with a black-box approach that's the\n>same for all index types and is implemented completely outside the\n>index-access-method-specific code.\n\nI agreed with this approach for all indices except\nbtree (above + \"hey, something is already done for\nthem\" -:)). But remember that to know is index\nconsistent or not we have to log *all* changes made\nin index file anyway... so seems we have to be\nvery close to be AM specific -:)\n\nVadim\n\n",
"msg_date": "Mon, 16 Oct 2000 09:18:39 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?koi8-r?Q?=EF=D4=D7=C5=D4=3A_WAL_and_indexes_=28Re=3A_=5BHACK?=\n\t=?koi8-r?Q?ERS=5D_WAL_status_=26_todo=29?="
},
{
"msg_contents": "* Mikheev, Vadim <[email protected]> [001016 09:33] wrote:\n> >I don't understand why WAL needs to log internal operations of any of\n> >the index types. Seems to me that you could treat indexes as black\n> >boxes that are updated as side effects of WAL log items for heap tuples:\n> >when adding a heap tuple as a result of a WAL item, you just call the\n> >usual index insert routines, and when deleting a heap tuple as a result\n> \n> On recovery backend *can't* use any usual routines:\n> system catalogs are not available.\n> \n> >of undoing a WAL item, you mark the tuple invalid but don't physically\n> >remove it till VACUUM (thus no need to worry about its index entries).\n> \n> One of the purposes of WAL is immediate removing tuples \n> inserted by aborted xactions. I want make VACUUM\n> *optional* in future - space must be available for\n> reusing without VACUUM. And this is first, very small,\n> step in this direction.\n\nWhy would vacuum become optional? Would WAL offer an option to\nnot reclaim free space? We're hoping that vacuum becomes unneeded\nwhen postgresql is run with some flag indicating that we're\nuninterested in time travel.\n\nHow much longer do you estimate until you can make it work that way?\n\nthanks,\n-Alfred\n",
"msg_date": "Mon, 16 Oct 2000 09:40:17 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Otvet: WAL and indexes (Re: [HACKERS] WAL status & todo)"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n>> I don't understand why WAL needs to log internal operations of any of\n>> the index types. Seems to me that you could treat indexes as black\n>> boxes that are updated as side effects of WAL log items for heap tuples:\n>> when adding a heap tuple as a result of a WAL item, you just call the\n>> usual index insert routines, and when deleting a heap tuple as a result\n\n> On recovery backend *can't* use any usual routines:\n> system catalogs are not available.\n\nOK, good point, but that just means you can't use the catalogs to\ndiscover what indexes exist for a given table. You could still create\nlog entries that look like \"insert indextuple X into index Y\" without\nany further detail.\n\n>> the index is corrupt and rebuild it from scratch, using Hiroshi's\n>> index-rebuild code.\n\n> How fast is rebuilding of index for table with 10^7 records?\n\nIt's not fast, of course. But the point is that you should seldom\nhave to do it.\n\n> I agree to consider rtree/hash/gist as experimental\n> index access methods BUT we have to have at least\n> *one* reliable index AM with short down time/\n> fast recovery.\n\nWith all due respect, I wonder just how \"reliable\" btree WAL undo/redo\nwill prove to be ... let alone the other index types. I worry that\nthis approach is putting too much emphasis on making it fast, and not\nenough on making it right.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 12:45:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: =?koi8-r?Q?=EF=D4=D7=C5=D4=3A_WAL_and_indexes_=28Re=3A_=5BHACK?=\n\t=?koi8-r?Q?ERS=5D_WAL_status_=26_todo=29?="
},
{
"msg_contents": "* Tom Lane <[email protected]> [001016 09:47] wrote:\n> \"Mikheev, Vadim\" <[email protected]> writes:\n> >> I don't understand why WAL needs to log internal operations of any of\n> >> the index types. Seems to me that you could treat indexes as black\n> >> boxes that are updated as side effects of WAL log items for heap tuples:\n> >> when adding a heap tuple as a result of a WAL item, you just call the\n> >> usual index insert routines, and when deleting a heap tuple as a result\n> \n> > On recovery backend *can't* use any usual routines:\n> > system catalogs are not available.\n> \n> OK, good point, but that just means you can't use the catalogs to\n> discover what indexes exist for a given table. You could still create\n> log entries that look like \"insert indextuple X into index Y\" without\n> any further detail.\n\nOne thing you guys may wish to consider is selectively fsyncing on\nsystem catelogs and marking them dirty when opened for write:\n\npostgres: i need to write to a critical table...\nopens table, marks dirty\ncompletes operation and marks undirty and fsync\n\n-or-\n\npostgres: i need to write to a critical table...\nopens table, marks dirty\ncrash, burn, smoke (whatever)\n\nNow you may still have the system tables broken, however the chances\nof that may be siginifigantly reduced depending on how often writes\nmust be done to them.\n\nIt's a hack, but depending on the amount of writes done to critical\ntables it may reduce the window for these inconvient situations \nsignifigantly.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Mon, 16 Oct 2000 09:55:44 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Otvet: WAL and indexes (Re: [HACKERS] WAL status & todo)"
}
]
|
[
{
"msg_contents": "Excuse me but what is LRU-2?\nI know that in Oracle unused buffers are not in\nsimple LRU list: Oracle tries to postpone writes\nas long as possible -:)\n\nVadim\n\n> ----- О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫ О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫ -----\n> О©╫О©╫:\t\tTom Lane [SMTP:[email protected]]\n> О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫:\t\t16 ieoya?y 2000 a. 8:50\n> О©╫О©╫О©╫О©╫:\t\tBruce Momjian\n> О©╫О©╫О©╫О©╫О©╫:\t\[email protected]\n> О©╫О©╫О©╫О©╫:\t\tRe: [HACKERS] Possible performance improvement: buffer\n> replacement policy \n> \n> Bruce Momjian <[email protected]> writes:\n> >> It looks like it wouldn't take too much work to replace shared buffers\n> >> on the basis of LRU-2 instead of LRU, so I'm thinking about trying it.\n> >> \n> >> Has anyone looked into this area? Is there a better method to try?\n> \n> > Sounds like a perfect idea. Good luck. :-)\n> \n> Actually, the idea went down in flames :-(, but I neglected to report\n> back to pghackers about it. I did do some code to manage buffers as\n> LRU-2. I didn't have any good performance test cases to try it with,\n> but Richard Brosnahan was kind enough to re-run the TPC tests previously\n> published by Great Bridge with that code in place. Wasn't any faster,\n> in fact possibly a little slower, likely due to the extra CPU time spent\n> on buffer freelist management. It's possible that other scenarios might\n> show a better result, but right now I feel pretty discouraged about the\n> LRU-2 idea and am not pursuing it.\n> \n> \t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 09:21:13 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?koi8-r?Q?=EF=D4=D7=C5=D4=3A_=5BHACKERS=5D_Possible_performan?=\n\t=?koi8-r?Q?ce_improvement=3A_buffer_replacement_policy_?="
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> Excuse me but what is LRU-2?\n\nLike LRU, but using time to second most recent reference, instead of\nmost recent reference, to sort the buffers for recycling. This gives\na more robust statistic about how often the page is actually being\ntouched. (Or that's the theory anyway.)\n\n> I know that in Oracle unused buffers are not in\n> simple LRU list: Oracle tries to postpone writes\n> as long as possible -:)\n\nManage dirty buffers separately from clean ones, you mean? Hm, we could\ndo that. With WAL it might even make sense, though before we tended to\nflush dirty buffers so fast it would hardly matter.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 12:53:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: =?koi8-r?Q?=EF=D4=D7=C5=D4=3A_=5BHACKERS=5D_Possible_performan?=\n\t=?koi8-r?Q?ce_improvement=3A_buffer_replacement_policy_?="
}
]
|
[
{
"msg_contents": "Great & Thanks! And then we'll have to log all files\nto be removed as part of xaction commit record.\n\nVadim\n\n> ----- О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫ О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫ -----\n> О©╫О©╫:\t\tTom Lane [SMTP:[email protected]]\n> О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫:\t\t16 ieoya?y 2000 a. 8:59\n> О©╫О©╫О©╫О©╫:\t\tMikheev, Vadim\n> О©╫О©╫О©╫О©╫О©╫:\t\[email protected]\n> О©╫О©╫О©╫О©╫:\t\tRe: [HACKERS] New file naming \n> \n> \"Mikheev, Vadim\" <[email protected]> writes:\n> > ALTER TABLE RENAME is rollback-able now. I think that\n> > it's very easy to make DROP TABLE rollback-able too\n> > (file should be removed after xaction committed) but\n> > I have no time to deal with this currently.\n> \n> I will make sure that works before release.\n> \n> \t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 09:23:52 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?koi8-r?Q?=EF=D4=D7=C5=D4=3A_=5BHACKERS=5D_New_file_naming_?="
}
]
|
[
{
"msg_contents": ">>> I don't understand why WAL needs to log internal operations of any of\n>>> the index types. Seems to me that you could treat indexes as black\n>>> boxes that are updated as side effects of WAL log items for heap tuples:\n>>> when adding a heap tuple as a result of a WAL item, you just call the\n>>> usual index insert routines, and when deleting a heap tuple as a result\n>>\n>> On recovery backend *can't* use any usual routines:\n>> system catalogs are not available.\n>\n>OK, good point, but that just means you can't use the catalogs to\n>discover what indexes exist for a given table. You could still create\n>log entries that look like \"insert indextuple X into index Y\" without\n>any further detail.\n\nAnd how could I use such records on recovery\nbeing unable to know what data columns represent\nkeys, what functions should be used for ordering?\n\n>>> the index is corrupt and rebuild it from scratch, using Hiroshi's\n>>> index-rebuild code.\n>>\n>> How fast is rebuilding of index for table with 10^7 records?\n>\n>It's not fast, of course. But the point is that you should seldom\n>have to do it.\n\nWith WAL system writes lazy and as result\nprobability to see \"begin update\" confirmation\nwithout \"done update\" will be high, very high\n(only log records go to disk on commit, data blocks\nwill be forced to disk on checkpoints - each 3-5\nminutes - only).\n\n>> I agree to consider rtree/hash/gist as experimental\n>> index access methods BUT we have to have at least\n>> *one* reliable index AM with short down time/\n>> fast recovery.\n>\n>With all due respect, I wonder just how \"reliable\" btree WAL undo/redo\n>will prove to be ... let alone the other index types. I worry that\n>this approach is putting too much emphasis on making it fast, and not\n>enough on making it right.\n\nThis approach (logging all index changes) is *standard*\nWAL approach and is reliable (but implementation may be\nnot of course -:)). 
This is what I've seen in books,\nI didn't invent anything new and special here.\nTom, can you implement (or spend a some time for design)\nhash redo/undo with \"black box approach\" so we could\nsee how good is it? I still miss are you going to use\nbegin/done update or \"insert tuple X into index Y\"\nrecords.\n\nVadim\n\n",
"msg_date": "Mon, 16 Oct 2000 12:02:02 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?koi8-r?Q?=EF=D4=D7=C5=D4=3A_=EF=D4=D7=C5=D4=3A_WAL_and_index?=\n\t=?koi8-r?Q?es_=28Re=3A_=5BHACKERS=5D_WAL_status_=26_todo=29_?="
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> And how could I use such records on recovery\n> being unable to know what data columns represent\n> keys, what functions should be used for ordering?\n\nUm, that's not built into the index either, is it? OK, you win ...\n\nI'm still nervous about how we're going to test the WAL code adequately\nfor the lesser-used index types. Any ideas out there?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 15:19:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: =?koi8-r?Q?=EF=D4=D7=C5=D4=3A_=EF=D4=D7=C5=D4=3A_WAL_and_index?=\n\t=?koi8-r?Q?es_=28Re=3A_=5BHACKERS=5D_WAL_status_=26_todo=29_?="
},
{
"msg_contents": "> \"Mikheev, Vadim\" <[email protected]> writes:\n> > And how could I use such records on recovery\n> > being unable to know what data columns represent\n> > keys, what functions should be used for ordering?\n> \n> Um, that's not built into the index either, is it? OK, you win ...\n> \n> I'm still nervous about how we're going to test the WAL code adequately\n> for the lesser-used index types. Any ideas out there?\n\nWait for bug reports? :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 15:27:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: ?????: ?????: WAL and indexes (Re: [HACKERS] WAL status\n\t& todo)"
}
]
|
[
{
"msg_contents": ">> One of the purposes of WAL is immediate removing tuples \n>> inserted by aborted xactions. I want make VACUUM\n>> *optional* in future - space must be available for\n>> reusing without VACUUM. And this is first, very small,\n>> step in this direction.\n>\n>Why would vacuum become optional? Would WAL offer an option to\n>not reclaim free space? We're hoping that vacuum becomes unneeded\n\nReclaiming free space is issue of storage manager, as\nI said here many times. WAL is just Write A-head Log\n(first write to log then to data files, to have ability\nto recover using log data) and for matter of space it can\nonly help to delete tuples inserted by aborted transaction.\n\n>when postgresql is run with some flag indicating that we're\n>uninterested in time travel.\n\nTime travel is gone ~ 3 years ago and vacuum was needed all\nthese years and will be needed to reclaim space in 7.1\n\n>How much longer do you estimate until you can make it work that way?\n\nHopefully in 7.2\n\nVadim\n\n",
"msg_date": "Mon, 16 Oct 2000 12:15:09 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?koi8-r?Q?=EF=D4=D7=C5=D4=3A_=5BHACKERS=5D_Otvet=3A_WAL_and_i?=\n\t=?koi8-r?Q?ndexes_=28Re=3A_=5BHACKERS=5D_WAL_status_=26_todo=29?="
},
{
"msg_contents": "\n\n\"Mikheev, Vadim\" wrote:\n\n> >> One of the purposes of WAL is immediate removing tuples\n> >> inserted by aborted xactions. I want make VACUUM\n> >> *optional* in future - space must be available for\n> >> reusing without VACUUM. And this is first, very small,\n> >> step in this direction.\n> >\n> >Why would vacuum become optional? Would WAL offer an option to\n> >not reclaim free space? We're hoping that vacuum becomes unneeded\n>\n> Reclaiming free space is issue of storage manager, as\n> I said here many times. WAL is just Write A-head Log\n> (first write to log then to data files, to have ability\n> to recover using log data) and for matter of space it can\n> only help to delete tuples inserted by aborted transaction.\n>\n> >when postgresql is run with some flag indicating that we're\n> >uninterested in time travel.\n>\n> Time travel is gone ~ 3 years ago and vacuum was needed all\n> these years and will be needed to reclaim space in 7.1\n>\n> >How much longer do you estimate until you can make it work that way?\n>\n> Hopefully in 7.2\n>\n\nJust a confirmation.\nDo you plan overwrite storage manager also in 7.2 ?\n\nHiroshi Inoue\n\n",
"msg_date": "Tue, 17 Oct 2000 17:08:37 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: =?iso-2022-jp?B?GyRCflYlaSVKJWQbKEI=?=: [HACKERS] Otvet:\n\tWAL and indexes (Re: [HACKERS] WAL status & todo)"
}
]
|
[
{
"msg_contents": "> >I like the pg_{import,export} names myself ... *nod*\n> >\n> \n> Sounds fine also; but we have compatibility issues in that we \n> still need pg_dump. Maybe just a symbolic link to pg_export.\n\nYes, we still need in pg_dump, because of pg_dump is thing\nquite different from WAL based backup/restore. pg_dump\nis utility to export data in system independant format\nusing standard SQL commands (with COPY extension) and WAL\nbased backup system is to export *physical* data files\n(and logs). So, pg_dump should be preserved asis.\n\nVadim\n",
"msg_date": "Mon, 16 Oct 2000 15:07:38 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: AW: Backup, restore & pg_dump"
},
{
"msg_contents": "At 15:07 16/10/00 -0700, Mikheev, Vadim wrote:\n>\n>So, pg_dump should be preserved asis.\n>\n\nJust to clarify; I have no intention of doing anything nasty to pg_dump.\nAll I plan to do is rename the pg_restore to one of\npg_load/pg_import/pg_undump/pmud_gp, to make way for a WAL based restore\nutility, although as Bruce suggests, this may be premature.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 17 Oct 2000 11:59:08 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: AW: Backup, restore & pg_dump"
}
]
|
[
{
"msg_contents": "> > It looks like it wouldn't take too much work to replace \n> > shared buffers on the basis of LRU-2 instead of LRU, so\n> > I'm thinking about trying it.\n> > \n> > Has anyone looked into this area? Is there a better method to try?\n> \n> Sounds like a perfect idea. Good luck. :-)\n\nHmm, how much time will be required?\nI integrate WAL right now and have to do significant changes\nin bufmgr...\n\nVadim\n\n\n",
"msg_date": "Mon, 16 Oct 2000 17:14:51 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Possible performance improvement: buffer replacemen\n\tt policy"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n>> Sounds like a perfect idea. Good luck. :-)\n\n> Hmm, how much time will be required?\n> I integrate WAL right now and have to do significant changes\n> in bufmgr...\n\nDon't worry about it, I am not planning to commit that code anytime\nsoon. (I have other stuff I want to fix in bufmgr, but I can wait\nfor you to finish WAL first.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 20:39:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possible performance improvement: buffer replacemen t policy "
}
]
|
[
{
"msg_contents": "Thanks.\n\nVadim\n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Monday, October 16, 2000 5:40 PM\n> To: Mikheev, Vadim\n> Cc: 'Bruce Momjian'; [email protected]\n> Subject: Re: [HACKERS] Possible performance improvement: buffer\n> replacemen t policy \n> \n> \n> \"Mikheev, Vadim\" <[email protected]> writes:\n> >> Sounds like a perfect idea. Good luck. :-)\n> \n> > Hmm, how much time will be required?\n> > I integrate WAL right now and have to do significant changes\n> > in bufmgr...\n> \n> Don't worry about it, I am not planning to commit that code anytime\n> soon. (I have other stuff I want to fix in bufmgr, but I can wait\n> for you to finish WAL first.)\n> \n> \t\t\tregards, tom lane\n> \n",
"msg_date": "Mon, 16 Oct 2000 17:28:36 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Possible performance improvement: buffer replacemen\n\t t policy"
}
]
|
[
{
"msg_contents": "... with a blinding flash ...\n\nThe VACUUM funnies I was complaining about before may or may not be real\nbugs, but they are not what's biting Alfred. None of them can lead to\nthe observed crashes AFAICT.\n\nWhat's biting Alfred is the code that moves a tuple update chain, lines\n1541 ff in REL7_0_PATCHES. This sets up a pointer to a source tuple in\n\"tuple\". Then it gets the destination page it plans to move the tuple\nto, and applies vc_vacpage to that page if it hasn't been done already.\nBut when we're moving a tuple chain, *it is possible for the destination\npage to be the same as the source page*. Since vc_vacpage applies\nPageRepairFragmentation, all the live tuples on the page may get moved.\nAfterwards, tuple.t_data is out of date and pointing at some random\nchunk of some other tuple. The subsequent copy of the tuple copies\ngarbage, which explains Alfred's several crashes in constructing index\nentries for the copied tuple (all of which bombed out from the\nindex-build calls at lines 1634 ff, ie, for tuples being moved as part\nof a chain). Once in a while, the obsolete pointer will be pointing at\nthe real header of a different tuple --- perhaps even the place where we\nare about to put the copy. This improbable case explains the one\nobserved Assert crash in which a copied tuple's HEAP_MOVED_IN bit\nmysteriously got turned off. Reason: it was cleared through the\nold-tuple pointer just after being set via the new-tuple one.\n\nProof that this is happening can be seen in the core dumps for Alfred's\nindex-construction-crash cases: tuple.t_data does not point at the same\nplace that the tuple.ip_posid'th page line item points at. This could\nonly happen if the page was reshuffled since the tuple pointer was set\nup. 
The explanation for the Assert crash is a bit of a leap of faith,\nbut I feel confident that it's right.\n\nThe solution is to do everything we're going to do with the source\ntuple, especially copying it and updating its state, *before* we apply\nvc_vacpage to the destination page. Then we don't care if the source\ngets moved during vc_vacpage.\n\nI will prepare a patch along this line and send it to Alfred for\ntesting.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 20:36:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "The lightbulb just went on..."
},
{
"msg_contents": "\nSomething to force a v7.0.3 ... ?\n\nOn Mon, 16 Oct 2000, Tom Lane wrote:\n\n> ... with a blinding flash ...\n> \n> The VACUUM funnies I was complaining about before may or may not be real\n> bugs, but they are not what's biting Alfred. None of them can lead to\n> the observed crashes AFAICT.\n> \n> What's biting Alfred is the code that moves a tuple update chain, lines\n> 1541 ff in REL7_0_PATCHES. This sets up a pointer to a source tuple in\n> \"tuple\". Then it gets the destination page it plans to move the tuple\n> to, and applies vc_vacpage to that page if it hasn't been done already.\n> But when we're moving a tuple chain, *it is possible for the destination\n> page to be the same as the source page*. Since vc_vacpage applies\n> PageRepairFragmentation, all the live tuples on the page may get moved.\n> Afterwards, tuple.t_data is out of date and pointing at some random\n> chunk of some other tuple. The subsequent copy of the tuple copies\n> garbage, which explains Alfred's several crashes in constructing index\n> entries for the copied tuple (all of which bombed out from the\n> index-build calls at lines 1634 ff, ie, for tuples being moved as part\n> of a chain). Once in a while, the obsolete pointer will be pointing at\n> the real header of a different tuple --- perhaps even the place where we\n> are about to put the copy. This improbable case explains the one\n> observed Assert crash in which a copied tuple's HEAP_MOVED_IN bit\n> mysteriously got turned off. Reason: it was cleared through the\n> old-tuple pointer just after being set via the new-tuple one.\n> \n> Proof that this is happening can be seen in the core dumps for Alfred's\n> index-construction-crash cases: tuple.t_data does not point at the same\n> place that the tuple.ip_posid'th page line item points at. This could\n> only happen if the page was reshuffled since the tuple pointer was set\n> up. 
The explanation for the Assert crash is a bit of a leap of faith,\n> but I feel confident that it's right.\n> \n> The solution is to do everything we're going to do with the source\n> tuple, especially copying it and updating its state, *before* we apply\n> vc_vacpage to the destination page. Then we don't care if the source\n> gets moved during vc_vacpage.\n> \n> I will prepare a patch along this line and send it to Alfred for\n> testing.\n> \n> \t\t\tregards, tom lane\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 16 Oct 2000 21:54:00 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The lightbulb just went on..."
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Something to force a v7.0.3 ... ?\n\nYes. We had plenty to force a 7.0.3 already, actually, but I was\nholding off recommending a release in hopes of finding Alfred's\nproblem.\n\nI will get this patch made up tonight for REL7_0; if Alfred doesn't\nsee more failures after running it for a few days, then let's move\nforward on a 7.0.3 release.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 20:59:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The lightbulb just went on... "
},
{
"msg_contents": "On Mon, 16 Oct 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Something to force a v7.0.3 ... ?\n> \n> Yes. We had plenty to force a 7.0.3 already, actually, but I was\n> holding off recommending a release in hopes of finding Alfred's\n> problem.\n\nI thought so, about having plenty, but when I asked before SF, it sort of\nfell on deaf ears, so figured you weren't ready yet :)\n\n> I will get this patch made up tonight for REL7_0; if Alfred doesn't\n> see more failures after running it for a few days, then let's move\n> forward on a 7.0.3 release.\n\nthat works for me ... I'm in Montreal for the weekend, so if we can get it\nout before Thursday, great, else we'll do it on Monday, 'k? \n\n\n",
"msg_date": "Mon, 16 Oct 2000 22:06:06 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The lightbulb just went on... "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> I will get this patch made up tonight for REL7_0; if Alfred doesn't\n>> see more failures after running it for a few days, then let's move\n>> forward on a 7.0.3 release.\n\n> that works for me ... I'm in Montreal for the weekend, so if we can get it\n> out before Thursday, great, else we'll do it on Monday, 'k? \n\nI think he was seeing MTBF of several days anyway, so we won't have any\nconfidence that the problem is gone before next week.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Oct 2000 00:00:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The lightbulb just went on... "
},
{
"msg_contents": "Tom:\n\nI think I may have been seeing this problem as well. We were getting\ncrashes very often with 7.0.2 during VACUUM's if activity was going\non to our database during the vacuum (even though the activity was \nlight). Our solution in the meantime was to simply disable the\naplications during a vacuum to avoid any activity during hte vacuum,\nand we have not had a crash on vacuum since that happened. If this\nsounds consistent with the problem you think Alfred is having, then\nI would be willing to test your patch on our system as well.\n\nIf you think it would help, feel free to send me the patch and I will\ndo some testing on it for you.\n\nThanks.\nMike\n\nOn Mon, 16 Oct 2000, Tom Lane wrote:\n\n> ... with a blinding flash ...\n> \n> The VACUUM funnies I was complaining about before may or may not be real\n> bugs, but they are not what's biting Alfred. None of them can lead to\n> the observed crashes AFAICT.\n...\n\n",
"msg_date": "Tue, 17 Oct 2000 10:44:23 -0500 (CDT)",
"msg_from": "Michael J Schout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The lightbulb just went on..."
},
{
"msg_contents": "Michael J Schout <[email protected]> writes:\n> I think I may have been seeing this problem as well. We were getting\n> crashes very often with 7.0.2 during VACUUM's if activity was going\n> on to our database during the vacuum (even though the activity was \n> light). Our solution in the meantime was to simply disable the\n> aplications during a vacuum to avoid any activity during hte vacuum,\n> and we have not had a crash on vacuum since that happened. If this\n> sounds consistent with the problem you think Alfred is having,\n\nYes, it sure does.\n\nThe patch I have applies atop a previous change in the REL7_0_PATCHES\nbranch, so what I would recommend is that you pull the current state of\nthe REL7_0_PATCHES branch from our CVS server, and then you can test\nwhat will shortly become 7.0.3. There are several other critical bug\nfixes in there since 7.0.2.\n\nDunno if you know how to use cvs, but the critical steps are explained\nat http://www.postgresql.org/docs/postgres/x28786.htm. Note that the\ngiven recipe will pull current development tip, which is NOT what you\nwant. In step 3, instead of doing\n\t... co -P pgsql\ndo\n\t... co -P -r REL7_0_PATCHES pgsql\n\nThen configure and build as usual.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Oct 2000 12:19:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The lightbulb just went on... "
},
{
"msg_contents": "* Michael J Schout <[email protected]> [001017 08:50] wrote:\n> Tom:\n> \n> I think I may have been seeing this problem as well. We were getting\n> crashes very often with 7.0.2 during VACUUM's if activity was going\n> on to our database during the vacuum (even though the activity was \n> light). Our solution in the meantime was to simply disable the\n> aplications during a vacuum to avoid any activity during hte vacuum,\n> and we have not had a crash on vacuum since that happened. If this\n> sounds consistent with the problem you think Alfred is having, then\n> I would be willing to test your patch on our system as well.\n> \n> If you think it would help, feel free to send me the patch and I will\n> do some testing on it for you.\n\nI'm not sure if you've been subscribed to this list for long but\nIt would have been nice if you had spoken up when I initially\nreported the problems so that the developers realized this wasn't\na completely isolated incident.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Tue, 17 Oct 2000 09:41:42 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The lightbulb just went on..."
},
{
"msg_contents": "\nOn Tue, 17 Oct 2000, Tom Lane wrote:\n\n> > and we have not had a crash on vacuum since that happened. If this\n> > sounds consistent with the problem you think Alfred is having,\n> \n> Yes, it sure does.\n> \n> The patch I have applies atop a previous change in the REL7_0_PATCHES\n> branch, so what I would recommend is that you pull the current state of\n> the REL7_0_PATCHES branch from our CVS server, and then you can test\n> what will shortly become 7.0.3. There are several other critical bug\n> fixes in there since 7.0.2.\n\nHi Tom.\n\nI have built from the REL7_0_PATCHES tree yesturday and did some testing on the\ndatabase. So far no crashes during vacuum like I had been seeing with 7.0.2\n:).\n\nI am seeing a different problem (and I have seen this with 7.0.2 as well). If\nI run vacuum, sometimes this error pops up in the client appliction during the\nvacuum:\n\nERROR: RelationClearRelation: relation 1668325 modified while in use\n\nrelation 1668325 is a view named \"sessions\".\n\nwhat happens to sessions is that it does:\n\nSELECT session_data, id \nFROM sessions\nWHERE id = ?\nFOR UPDATE\n\n.... client does some processing ...\n\nUPDATE sesssions set session_data = ? WHERE id = ?;\n\n(this is where the error happens)\n\nI think part of my problem might be that sessions is a view and not a table,\nbut it is probably a bug that needs to be noted nonetheless. I am going to try\nconverting \"sessions\" to a view and see if I can reproduce it that way.\n\nMike\n\n",
"msg_date": "Wed, 18 Oct 2000 10:42:38 -0500 (CDT)",
"msg_from": "Michael J Schout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The lightbulb just went on... "
},
{
"msg_contents": "Michael J Schout <[email protected]> writes:\n> ERROR: RelationClearRelation: relation 1668325 modified while in use\n> relation 1668325 is a view named \"sessions\".\n\nHm. This message is coming out of the relation cache code when it sees\nan invalidate-your-cache-for-this-relation message from another backend\nand the relation in question has already been locked during the current\ntransaction. Probably, what is happening is that the vacuum process is\nvacuuming the view (not too much to do there ;-) but it does it anyway)\nand sending out the cache inval message for it after the other client\nprocess has already started parsing of a query using the view.\n\nThis is a fairly subtle problem that I don't think we will be able to\nfix as a backpatch for 7.0.*. It's on the to-fix list for 7.1 though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Oct 2000 11:52:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The lightbulb just went on... "
}
]
|
[
{
"msg_contents": "It seems the length coerce for bpchar is broken since 7.0.\nIn 6.5 when a string is inserted, bpchar() is called to properly clip\nthe string. However in 7.0 (and probably current) bpchar() is not\ncalled anymore. \n\ncoerce_type_typmod() calls exprTypmod(). exprTypmod() returns VARSIZE\nof the bpchar data only if the data type is bpchar (if the data type\nis varchar, exprTypmod just returns -1 and the parser add a function\nnode to call varchar(). so there is no problem for varchar). If\nVARSIZE returned from exprTypmod() and atttypmod passed to\ncoerce_type_typmod() is equal, the function node to call bpchar()\nwould not be added.\n\nI'm not sure if this was an intended efect of the change. Anyway we\nhave to do the length coerce for bpchar somewhere and I'm thinking now\nis doing in bpcharin(). This would also solve the problem in copy in a\nmail I have posted.\n\nComments?\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 17 Oct 2000 11:30:17 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "length coerce for bpchar is broken since 7.0"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> If VARSIZE returned from exprTypmod() and atttypmod passed to\n> coerce_type_typmod() is equal, the function node to call bpchar()\n> would not be added.\n\nUm, what's wrong with that? Seems to me that parse_coerce is doing\nexactly what it's supposed to, ie, adding only length coercions\nthat are needed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 23:19:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: length coerce for bpchar is broken since 7.0 "
},
{
"msg_contents": "> > If VARSIZE returned from exprTypmod() and atttypmod passed to\n> > coerce_type_typmod() is equal, the function node to call bpchar()\n> > would not be added.\n> \n> Um, what's wrong with that? Seems to me that parse_coerce is doing\n> exactly what it's supposed to, ie, adding only length coercions\n> that are needed.\n\nSimply clipping multibyte strings by atttypmode might produce\nincorrect multibyte strings. Consider a case inserting 3 multibyte\nletters (each consisting of 2 bytes) into a char(5) column.\n\nOr this kind of consideration should be in bpcharin() as I said in the\nearilier mail?\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 17 Oct 2000 13:17:53 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: length coerce for bpchar is broken since 7.0 "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>>>> If VARSIZE returned from exprTypmod() and atttypmod passed to\n>>>> coerce_type_typmod() is equal, the function node to call bpchar()\n>>>> would not be added.\n>> \n>> Um, what's wrong with that? Seems to me that parse_coerce is doing\n>> exactly what it's supposed to, ie, adding only length coercions\n>> that are needed.\n\n> Simply clipping multibyte strings by atttypmode might produce\n> incorrect multibyte strings. Consider a case inserting 3 multibyte\n> letters (each consisting of 2 bytes) into a char(5) column.\n\nIt seems to me that this means that atttypmod or exprTypmod() isn't\ncorrectly defined for MULTIBYTE char(n) values. We should define\ntypmod in such a way that they agree iff the string is correctly\nclipped. This might be easier said than done, perhaps, but I don't\nlike the idea of having to apply length-coercion functions all the\ntime because we can't figure out whether they're needed or not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Oct 2000 00:27:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: length coerce for bpchar is broken since 7.0 "
},
{
"msg_contents": "> > Simply clipping multibyte strings by atttypmode might produce\n> > incorrect multibyte strings. Consider a case inserting 3 multibyte\n> > letters (each consisting of 2 bytes) into a char(5) column.\n> \n> It seems to me that this means that atttypmod or exprTypmod() isn't\n> correctly defined for MULTIBYTE char(n) values. We should define\n> typmod in such a way that they agree iff the string is correctly\n> clipped. This might be easier said than done, perhaps, but I don't\n> like the idea of having to apply length-coercion functions all the\n> time because we can't figure out whether they're needed or not.\n\nBefore going further, may I ask you a question. Why in exprTypmod() is\nbpchar() treated differently from other data types such as varchar?\n\n\t\t\t\tswitch (con->consttype)\n\t\t\t\t{\n\t\t\t\t\tcase BPCHAROID:\n\t\t\t\t\t\tif (!con->constisnull)\n\t\t\t\t\t\t\treturn VARSIZE(DatumGetPointer(con->constvalue));\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tdefault:\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 17 Oct 2000 13:38:25 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: length coerce for bpchar is broken since 7.0 "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> Before going further, may I ask you a question. Why in exprTypmod() is\n> bpchar() treated differently from other data types such as varchar?\n\nIt's just hardwired knowledge about that particular datatype. In the\nlight of your comments, it seems clear that the code here is wrong\nfor the MULTIBYTE case: instead of plain VARSIZE(), it should be\nreturning the number of multibyte characters + 4 (or whatever\natttypmod is defined to mean for MULTIBYTE bpchar). I think I wrote\nthis code to start with, so you can blame me for the fact that it\nneglects the MULTIBYTE case :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Oct 2000 00:43:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: length coerce for bpchar is broken since 7.0 "
},
{
"msg_contents": "> It's just hardwired knowledge about that particular datatype. In the\n> light of your comments, it seems clear that the code here is wrong\n> for the MULTIBYTE case: instead of plain VARSIZE(), it should be\n> returning the number of multibyte characters + 4 (or whatever\n> atttypmod is defined to mean for MULTIBYTE bpchar). I think I wrote\n> this code to start with, so you can blame me for the fact that it\n> neglects the MULTIBYTE case :-(\n\nI'm going to fix the problem by changing bpcharin() rather than\nchanging exprTypmod(). Surely we could fix the problem by changing\nexprTypmod() for INSERT, however, we could not fix the similar problem\nfor COPY FROM in the same way. Changing bpcharin() would solve\nproblems of both INSERT and COPY FROM. So bpcharin() seems more\nappropreate place to fix both problems.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 17 Oct 2000 22:33:48 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: length coerce for bpchar is broken since 7.0 "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> I'm going to fix the problem by changing bpcharin() rather than\n> changing exprTypmod(). Surely we could fix the problem by changing\n> exprTypmod() for INSERT, however, we could not fix the similar problem\n> for COPY FROM in the same way. Changing bpcharin() would solve\n> problems of both INSERT and COPY FROM. So bpcharin() seems more\n> appropreate place to fix both problems.\n\nbpcharin() will most definitely NOT fix the problem, because it often\nwill not know the target column's typmod, if indeed there is an\nidentifiable target column at all. I agree that it's a good solution\nfor COPY FROM, but you need to fix exprTypmod() too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Oct 2000 11:05:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: length coerce for bpchar is broken since 7.0 "
},
{
"msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> > I'm going to fix the problem by changing bpcharin() rather than\n> > changing exprTypmod(). Surely we could fix the problem by changing\n> > exprTypmod() for INSERT, however, we could not fix the similar problem\n> > for COPY FROM in the same way. Changing bpcharin() would solve\n> > problems of both INSERT and COPY FROM. So bpcharin() seems more\n> > appropreate place to fix both problems.\n> \n> bpcharin() will most definitely NOT fix the problem, because it often\n> will not know the target column's typmod, if indeed there is an\n> identifiable target column at all. \n\nCan you give me any example for this case?\n\n> I agree that it's a good solution\n> for COPY FROM, but you need to fix exprTypmod() too.\n\nAnyway, I'm gotoing to fix exprTypmod() also.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 18 Oct 2000 07:57:26 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: length coerce for bpchar is broken since 7.0 "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> bpcharin() will most definitely NOT fix the problem, because it often\n>> will not know the target column's typmod, if indeed there is an\n>> identifiable target column at all. \n\n> Can you give me any example for this case?\n\nUPDATE foo SET bpcharcol = 'a'::char || 'b'::char;\n\nUPDATE foo SET bpcharcol = upper('abc');\n\nIn the first case bpcharin() will be invoked, but not in the context\nof direct assignment to a table column, so it won't receive a valid\ntypmod. In the second case bpcharin() will never be invoked at all,\nbecause upper takes and returns text --- so 'abc' is not a bpchar\nconstant but a text constant. You have to be sure that the parser\nhandles type length coercion correctly, and I think the cleanest way to\ndo that is to fix exprTypmod so that it knows how typmod is defined in\nthe MULTIBYTE case.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Oct 2000 19:36:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: length coerce for bpchar is broken since 7.0 "
},
{
"msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> >> bpcharin() will most definitely NOT fix the problem, because it often\n> >> will not know the target column's typmod, if indeed there is an\n> >> identifiable target column at all. \n> \n> > Can you give me any example for this case?\n> \n> UPDATE foo SET bpcharcol = 'a'::char || 'b'::char;\n> \n> UPDATE foo SET bpcharcol = upper('abc');\n> \n> In the first case bpcharin() will be invoked, but not in the context\n> of direct assignment to a table column, so it won't receive a valid\n> typmod. In the second case bpcharin() will never be invoked at all,\n> because upper takes and returns text --- so 'abc' is not a bpchar\n> constant but a text constant. You have to be sure that the parser\n> handles type length coercion correctly, and I think the cleanest way to\n> do that is to fix exprTypmod so that it knows how typmod is defined in\n> the MULTIBYTE case.\n\nIn those cases above bpchar() will be called anyway, so I don't see\nMULTIBYTE length coerce problems there.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 25 Oct 2000 10:15:41 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: length coerce for bpchar is broken since 7.0 "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>>>> Can you give me any example for this case?\n>> \n>> UPDATE foo SET bpcharcol = 'a'::char || 'b'::char;\n>> \n>> UPDATE foo SET bpcharcol = upper('abc');\n\n> In those cases above bpchar() will be called anyway, so I don't see\n> MULTIBYTE length coerce problems there.\n\nSo it will, but *only* because the parser realizes that it needs to\nadd a call to bpchar(). If exprTypmod returns incorrect values then\nit's possible that the parser would wrongly decide it didn't need to\ncall bpchar().\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Oct 2000 12:49:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: length coerce for bpchar is broken since 7.0 "
},
{
"msg_contents": "\nCan someone comment on the status of this?\n\n> It seems the length coerce for bpchar is broken since 7.0.\n> In 6.5 when a string is inserted, bpchar() is called to properly clip\n> the string. However in 7.0 (and probably current) bpchar() is not\n> called anymore. \n> \n> coerce_type_typmod() calls exprTypmod(). exprTypmod() returns VARSIZE\n> of the bpchar data only if the data type is bpchar (if the data type\n> is varchar, exprTypmod just returns -1 and the parser add a function\n> node to call varchar(). so there is no problem for varchar). If\n> VARSIZE returned from exprTypmod() and atttypmod passed to\n> coerce_type_typmod() is equal, the function node to call bpchar()\n> would not be added.\n> \n> I'm not sure if this was an intended efect of the change. Anyway we\n> have to do the length coerce for bpchar somewhere and I'm thinking now\n> is doing in bpcharin(). This would also solve the problem in copy in a\n> mail I have posted.\n> \n> Comments?\n> --\n> Tatsuo Ishii\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 19 Jan 2001 21:41:29 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: length coerce for bpchar is broken since 7.0"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Can someone comment on the status of this?\n\nregression=# create table foo (f1 char(7));\nCREATE\nregression=# insert into foo values ('123456789');\nINSERT 145180 1\nregression=# select * from foo;\n f1\n---------\n 1234567\n(1 row)\n\nWhere's the problem?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Jan 2001 22:45:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: length coerce for bpchar is broken since 7.0 "
},
{
"msg_contents": "I believe this has been fixed.\n\n>Subject: [COMMITTERS] pgsql/src/backend/utils/adt (varchar.c)\n>From: [email protected]\n>To: [email protected]\n>Date: Sun, 26 Nov 2000 06:35:23 -0500 (EST)\n\n> Can someone comment on the status of this?\n> \n> > It seems the length coerce for bpchar is broken since 7.0.\n> > In 6.5 when a string is inserted, bpchar() is called to properly clip\n> > the string. However in 7.0 (and probably current) bpchar() is not\n> > called anymore. \n> > \n> > coerce_type_typmod() calls exprTypmod(). exprTypmod() returns VARSIZE\n> > of the bpchar data only if the data type is bpchar (if the data type\n> > is varchar, exprTypmod just returns -1 and the parser add a function\n> > node to call varchar(). so there is no problem for varchar). If\n> > VARSIZE returned from exprTypmod() and atttypmod passed to\n> > coerce_type_typmod() is equal, the function node to call bpchar()\n> > would not be added.\n> > \n> > I'm not sure if this was an intended efect of the change. Anyway we\n> > have to do the length coerce for bpchar somewhere and I'm thinking now\n> > is doing in bpcharin(). This would also solve the problem in copy in a\n> > mail I have posted.\n> > \n> > Comments?\n> > --\n> > Tatsuo Ishii\n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 20 Jan 2001 13:09:32 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: length coerce for bpchar is broken since 7.0"
}
]
|
[
{
"msg_contents": " Date: Monday, October 16, 2000 @ 23:29:31\nAuthor: momjian\n\nUpdate of /home/projects/pgsql/cvsroot/pgsql/doc\n from hub.org:/home/projects/pgsql/tmp/cvs-serv10656/pgsql/doc\n\nAdded Files:\n\tFAQ_MSWIN \n\nRemoved Files:\n\tINSTALL_MSWIN \n\n----------------------------- Log Message -----------------------------\n\nFAQ_MSWIN is better than INSTALL_MSWIN\n\n",
"msg_date": "Mon, 16 Oct 2000 23:29:31 -0400 (EDT)",
"msg_from": "Bruce Momjian - CVS <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql/doc (FAQ_MSWIN INSTALL_MSWIN)"
},
{
"msg_contents": "Bruce Momjian - CVS writes:\n\n> FAQ_MSWIN is better than INSTALL_MSWIN\n\nNo it's not... The file in question doesn't contain any questions or\nanswers, it contains installation instructions for Windows. Oh, and yes,\nMicrosoft does own the Windows name.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 17 Oct 2000 17:10:42 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/doc (FAQ_MSWIN INSTALL_MSWIN)"
},
{
"msg_contents": "> Bruce Momjian - CVS writes:\n> \n> > FAQ_MSWIN is better than INSTALL_MSWIN\n> \n> No it's not... The file in question doesn't contain any questions or\n> answers, it contains installation instructions for Windows. Oh, and yes,\n> Microsoft does own the Windows name.\n\nI don't like to make the MS stuff look any more obvious than it already\nis so I threw it into FAQ. And I run X-Windows, thank you. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 17 Oct 2000 12:04:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/doc (FAQ_MSWIN INSTALL_MSWIN)"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> > Bruce Momjian - CVS writes:\n> > \n> > > FAQ_MSWIN is better than INSTALL_MSWIN\n> > \n> > No it's not... The file in question doesn't contain any questions or\n> > answers, it contains installation instructions for Windows. Oh, and yes,\n> > Microsoft does own the Windows name.\n> \n> I don't like to make the MS stuff look any more obvious than it already\n> is so I threw it into FAQ.\n\nFirst, since when are we in the business of hiding away documentation for\na supported platform, and second, how does putting installation\ninstructions into a file named \"FAQ\" make it less \"obvious\"?\n\n> And I run X-Windows, thank you. :-)\n\n: The X Consortium requests that the following names be used\n: when referring to this software:\n:\n: X\n: X Window System\n: X Version 11\n: X Window System, Version 11\n: X11\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 17 Oct 2000 18:39:39 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/doc (FAQ_MSWIN INSTALL_MSWIN)"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > > Bruce Momjian - CVS writes:\n> > > \n> > > > FAQ_MSWIN is better than INSTALL_MSWIN\n> > > \n> > > No it's not... The file in question doesn't contain any questions or\n> > > answers, it contains installation instructions for Windows. Oh, and yes,\n> > > Microsoft does own the Windows name.\n> > \n> > I don't like to make the MS stuff look any more obvious than it already\n> > is so I threw it into FAQ.\n> \n> First, since when are we in the business of hiding away documentation for\n> a supported platform, and second, how does putting installation\n> instructions into a file named \"FAQ\" make it less \"obvious\"?\n\nHelp, I am losing here. Does anyone want to help me... :-)\n\n\n> > And I run X-Windows, thank you. :-)\n> \n> : The X Consortium requests that the following names be used\n> : when referring to this software:\n> :\n> : X\n> : X Window System\n> : X Version 11\n> : X Window System, Version 11\n> : X11\n\nNo one calls it the X Window System, they call it X Windows, and if MS\nowns the word Windows, we are in trouble. It means they own half our\nname.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 17 Oct 2000 12:41:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/doc (FAQ_MSWIN INSTALL_MSWIN)"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> First, since when are we in the business of hiding away documentation for\n>> a supported platform, and second, how does putting installation\n>> instructions into a file named \"FAQ\" make it less \"obvious\"?\n\n> Help, I am losing here. Does anyone want to help me... :-)\n\nChange the contents of the file to follow a FAQ structure ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Oct 2000 12:58:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/doc (FAQ_MSWIN INSTALL_MSWIN) "
},
{
"msg_contents": "On Tue, 17 Oct 2000, Bruce Momjian wrote:\n\n> > Bruce Momjian writes:\n> > \n> > > > Bruce Momjian - CVS writes:\n> > > > \n> > > > > FAQ_MSWIN is better than INSTALL_MSWIN\n> > > > \n> > > > No it's not... The file in question doesn't contain any questions or\n> > > > answers, it contains installation instructions for Windows. Oh, and yes,\n> > > > Microsoft does own the Windows name.\n> > > \n> > > I don't like to make the MS stuff look any more obvious than it already\n> > > is so I threw it into FAQ.\n> > \n> > First, since when are we in the business of hiding away documentation for\n> > a supported platform, and second, how does putting installation\n> > instructions into a file named \"FAQ\" make it less \"obvious\"?\n> \n> Help, I am losing here. Does anyone want to help me... :-)\n\nI'm having a hard time following the quoting.. Are you arguing with\nyourself? :)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 17 Oct 2000 13:03:41 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/doc (FAQ_MSWIN INSTALL_MSWIN)"
},
{
"msg_contents": "> > > First, since when are we in the business of hiding away documentation for\n> > > a supported platform, and second, how does putting installation\n> > > instructions into a file named \"FAQ\" make it less \"obvious\"?\n> > \n> > Help, I am losing here. Does anyone want to help me... :-)\n> \n> I'm having a hard time following the quoting.. Are you arguing with\n> yourself? :)\n\nYes, Peter is beating up my argument pretty good.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 17 Oct 2000 13:14:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/doc (FAQ_MSWIN INSTALL_MSWIN)"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> > > > First, since when are we in the business of hiding away documentation for\n> > > > a supported platform, and second, how does putting installation\n> > > > instructions into a file named \"FAQ\" make it less \"obvious\"?\n> > > \n> > > Help, I am losing here. Does anyone want to help me... :-)\n> > \n> > I'm having a hard time following the quoting.. Are you arguing with\n> > yourself? :)\n> \n> Yes, Peter is beating up my argument pretty good.\n\nLet's settle on INSTALL_MSWIN.\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 18 Oct 2000 21:44:46 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/doc (FAQ_MSWIN INSTALL_MSWIN)"
},
{
"msg_contents": "No. I will turn it into an FAQ, and the item will be \"How do I install\nPostgreSQL on MS Windows\". How's that?\n\n\n> Bruce Momjian writes:\n> \n> > > > > First, since when are we in the business of hiding away documentation for\n> > > > > a supported platform, and second, how does putting installation\n> > > > > instructions into a file named \"FAQ\" make it less \"obvious\"?\n> > > > \n> > > > Help, I am losing here. Does anyone want to help me... :-)\n> > > \n> > > I'm having a hard time following the quoting.. Are you arguing with\n> > > yourself? :)\n> > \n> > Yes, Peter is beating up my argument pretty good.\n> \n> Let's settle on INSTALL_MSWIN.\n> \n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Oct 2000 17:13:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/doc (FAQ_MSWIN INSTALL_MSWIN)"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> No. I will turn it into an FAQ, and the item will be \"How do I install\n> PostgreSQL on MS Windows\". How's that?\n\nI don't see how that would be better. Why this artificiality? \nInstallation instructions belong into INSTALL files. FAQs are questions\nasked by people because the documentation is incomplete or the software is\nbadly designed. I mean, you could make everything a FAQ, but FAQs should\nbe a backup thing, not a first order documentation resource.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 19 Oct 2000 16:36:06 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/doc (FAQ_MSWIN INSTALL_MSWIN)"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > No. I will turn it into an FAQ, and the item will be \"How do I install\n> > PostgreSQL on MS Windows\". How's that?\n> \n> I don't see how that would be better. Why this artificiality? \n> Installation instructions belong into INSTALL files. FAQs are questions\n> asked by people because the documentation is incomplete or the software is\n> badly designed. I mean, you could make everything a FAQ, but FAQs should\n> be a backup thing, not a first order documentation resource.\n\nCurrently all our platform-specific files are FAQ's. I can imagine\nsomeone realizing that, looking for an MS one, and giving up and never\nseeing INSTALL_MSWIN. Also, the platform-specific stuff is on the web\npages under FAQ. I don't want to make a separate section just for the\nMS OS.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 19 Oct 2000 11:49:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/doc (FAQ_MSWIN INSTALL_MSWIN)"
},
{
"msg_contents": "On Thu, 19 Oct 2000, Bruce Momjian wrote:\n\n> > Bruce Momjian writes:\n> > \n> > > No. I will turn it into an FAQ, and the item will be \"How do I install\n> > > PostgreSQL on MS Windows\". How's that?\n> > \n> > I don't see how that would be better. Why this artificiality? \n> > Installation instructions belong into INSTALL files. FAQs are questions\n> > asked by people because the documentation is incomplete or the software is\n> > badly designed. I mean, you could make everything a FAQ, but FAQs should\n> > be a backup thing, not a first order documentation resource.\n> \n> Currently all our platform-specific files are FAQ's. I can imagine\n> someone realizing that, looking for an MS one, and giving up and never\n> seeing INSTALL_MSWIN. Also, the platform-specific stuff is on the web\n> pages under FAQ. I don't want to make a separate section just for the\n> MS OS.\n\nMS_FAQ:\n\n1. How do I install?\na. Read the INSTALL_MSWIN file\n\n\n",
"msg_date": "Thu, 19 Oct 2000 12:59:13 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pgsql/doc (FAQ_MSWIN INSTALL_MSWIN)"
},
{
"msg_contents": "Fixed. Thanks.\n\n\n> Bruce Momjian writes:\n> \n> > Currently all our platform-specific files are FAQ's. I can imagine\n> > someone realizing that, looking for an MS one, and giving up and never\n> > seeing INSTALL_MSWIN. Also, the platform-specific stuff is on the web\n> > pages under FAQ. I don't want to make a separate section just for the\n> > MS OS.\n> \n> Okay, that's somewhat reasonable. Only that the INSTALL file directs\n> people to read INSTALL_WIN, so you need to change that, too. (Actually,\n> it's in installation.sgml.)\n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 19 Oct 2000 13:47:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/doc (FAQ_MSWIN INSTALL_MSWIN)"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > Currently all our platform-specific files are FAQ's. I can imagine\n> > someone realizing that, looking for an MS one, and giving up and never\n> > seeing INSTALL_MSWIN. Also, the platform-specific stuff is on the web\n> > pages under FAQ. I don't want to make a separate section just for the\n> > MS OS.\n> \n> Okay, that's somewhat reasonable. Only that the INSTALL file directs\n> people to read INSTALL_WIN, so you need to change that, too. (Actually,\n> it's in installation.sgml.)\n\nThe bottom line is that I am slightly embarrassed to be supporting a\nMicrosoft OS, and want to give it as little special treatment as\npossible.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 19 Oct 2000 13:49:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/doc (FAQ_MSWIN INSTALL_MSWIN)"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Currently all our platform-specific files are FAQ's. I can imagine\n> someone realizing that, looking for an MS one, and giving up and never\n> seeing INSTALL_MSWIN. Also, the platform-specific stuff is on the web\n> pages under FAQ. I don't want to make a separate section just for the\n> MS OS.\n\nOkay, that's somewhat reasonable. Only that the INSTALL file directs\npeople to read INSTALL_WIN, so you need to change that, too. (Actually,\nit's in installation.sgml.)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 19 Oct 2000 19:50:07 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/doc (FAQ_MSWIN INSTALL_MSWIN)"
},
{
"msg_contents": "At 01:49 PM 10/19/00 -0400, Bruce Momjian wrote:\n\n>The bottom line is that I am slightly embarrassed to be supporting a\n>Microsoft OS, and want to give it as little special treatment as\n>possible.\n\nYou know, if folks have a good experience running an Open Source RDBMS\nunder a Microsoft OS, they may become more open to exploring other \nOpen Source software, such as Open Office (lately Star Office) as it\nmatures, and perhaps even one of our fave Open Source operating systems\neventually...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 19 Oct 2000 14:40:09 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pgsql/doc (FAQ_MSWIN\n INSTALL_MSWIN)"
},
{
"msg_contents": "[ I know I should ignore this thread, but ... ]\n\nDon Baccus <[email protected]> writes:\n> You know, if folks have a good experience running an Open Source RDBMS\n> under a Microsoft OS,\n\nYeah, but the $64 question is whether they *will* have a good experience\nrunning Postgres under Windows. I wouldn't regard any Microsoft OS as\nstable enough to run a database server on, and I'm not eager to take the\nblame for their shortcomings if someone gets burnt trying.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2000 17:55:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pgsql/doc (FAQ_MSWIN INSTALL_MSWIN) "
},
{
"msg_contents": "> > > Currently all our platform-specific files are FAQ's. I can imagine\n> > > someone realizing that, looking for an MS one, and giving up and never\n> > > seeing INSTALL_MSWIN. Also, the platform-specific stuff is on the web\n> > > pages under FAQ. I don't want to make a separate section just for the\n> > > MS OS.\n> > Okay, that's somewhat reasonable. Only that the INSTALL file directs\n> > people to read INSTALL_WIN, so you need to change that, too. (Actually,\n> > it's in installation.sgml.)\n> The bottom line is that I am slightly embarrassed to be supporting a\n> Microsoft OS, and want to give it as little special treatment as\n> possible.\n\nHmm. It is a fact that M$Windows is not quite the same as *any* of the\nother boxes we support. Lots of us prefer running on a real system, but\nthere is no need to actively obfuscate our support for the unpleasant\nalternative.\n\nWe currently have a section in the printed/html docs covering installing\non M$ for client-side libraries, and we should move the installation\ninstructions for the server into these docs also. If others are willing\nto do the work to support it, we can make sure it has a place in the\ndistro imho.\n\nIt doesn't mean that we can't have a bit of individual joy when M$ moves\nto our \"unsupported, because we haven't heard that anyone is running it\nanymore\" list ;)\n\n - Thomas\n",
"msg_date": "Fri, 20 Oct 2000 02:43:52 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/doc (FAQ_MSWIN INSTALL_MSWIN)"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> Hmm. It is a fact that M$Windows is not quite the same as *any* of the\n> other boxes we support.\n\n> We currently have a section in the printed/html docs covering installing\n> on M$ for client-side libraries, and we should move the installation\n> instructions for the server into these docs also.\n\nNote that there's a difference here: Support for a plain-old Windows 95\nbox with MS Visual C++ compiler is certainly very different. There are\neven separate makefiles (project files?) for that.\n\nOTOH, support for Windows NT with Cygwin is not \"exceptional\" compared to\nany other supported system. You run configure; make; make install, with\nGNU make and GCC. The only thing this INSTALL_WIN file does is to tell\npeople where to download Cygwin and cgyipc, which is no more different\nthan FAQ_Solaris telling people to download GNU packages from\nwww.sunfreeware.com or FAQ_HPUX advising to download various vendor\npatches.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 20 Oct 2000 19:32:59 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/doc (FAQ_MSWIN INSTALL_MSWIN)"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> OTOH, support for Windows NT with Cygwin is not \"exceptional\" compared to\n> any other supported system. You run configure; make; make install, with\n> GNU make and GCC. The only thing this INSTALL_WIN file does is to tell\n> people where to download Cygwin and cgyipc, which is no more different\n> than FAQ_Solaris telling people to download GNU packages from\n> www.sunfreeware.com or FAQ_HPUX advising to download various vendor\n> patches.\n\nOK. I *should* have looked on my development box before spouting off\nmail, eh? ;)\n\nSorry for the noise...\n\n - Thomas\n",
"msg_date": "Sat, 21 Oct 2000 02:38:46 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/doc (FAQ_MSWIN INSTALL_MSWIN)"
}
]
|
[
{
"msg_contents": "I have finished reading 5k e-mail messages I accumulated in the last\nweeks of my book. Hopefully I have responded to everyone, and applied\nthe proper patches.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 17 Oct 2000 01:11:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "I am done going through my mailbox"
}
]
|
[
{
"msg_contents": "I'm not sure about this one, but could someone update the PG documentation\nand book to describe bytea type.\n\nCould someone explain also if there is an issue to store binary data in\nbytea and retreive the data if there is a BIGENDINA/SMALLENDIAN setup.\n\nAlso can I create a user type which send back some binary data ?\n\nFranck Martin\nDatabase Development Officer\nSOPAC South Pacific Applied Geoscience Commission\nFiji\nE-mail: [email protected] <mailto:[email protected]> \nWeb site: http://www.sopac.org/ <http://www.sopac.org/> \n\nThis e-mail is intended for its recipients only. Do not forward this\ne-mail without approval. The views expressed in this e-mail may not be\nneccessarily the views of SOPAC.\n\n\n\n-----Original Message-----\nFrom: Alfred Perlstein [mailto:[email protected]]\nSent: Tuesday, October 17, 2000 3:57 PM\nTo: PostgreSQL General\nSubject: Re: [GENERAL] storing binary data\n\n\n* Neil Conway <[email protected]> [001016 20:41] wrote:\n> On Mon, Oct 16, 2000 at 11:22:40PM -0400, Tom Lane wrote:\n> > Neil Conway <[email protected]> writes:\n> > > I want to store some binary data in Postgres. The data is an\n> > > MD5 checksum of the user's password, in binary. It will be\n> > > exactly 16 bytes (since it is a one-way hash).\n> > \n> > > Can I store this safely in a CHAR column?\n> > \n> > No. CHAR and friends assume there are no null (zero) bytes.\n> > In MULTIBYTE setups there are probably additional constraints.\n> > \n> > You could use bytea, but I would recommend converting the checksum\n> > to a hex digit string and then storing that in a char-type field.\n> > Hex is the usual textual representation for MD5 values, no?\n> \n> It is, but (IMHO) it's a big waste of space. The actual MD5 digest is\n> 128 bits. If stored in binary form, it's 16 bytes.
If stored in hex\n> form (as ASCII), it's 32 characters @ 1 byte per character = 32 bytes.\n> In Unicode, that's 64 bytes (correct me if I'm wrong).\n>\n> It's not a huge deal, but it would be nice to store this efficiently.\n> Is this possible?\n\nWhy not use base64? It's pretty gross but might work for you.\n\n-Alfred\n",
"msg_date": "Tue, 17 Oct 2000 18:30:59 +1200",
"msg_from": "Franck Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: storing binary data - PGSQL book/documentation"
}
]
|
[
{
"msg_contents": "\n> > > > Currently a view may be dropped with either 'DROP VIEW'\n> > > > or 'DROP TABLE'. Should this be changed?\n> > > \n> > > I say let them drop it with either one. \n> > \n> > I kinda like the 'drop index with drop index', 'drop table with drop\n> > table' and 'drop view with drop view' groupings ... at least you are\n> > pretty sure you haven't 'oopsed' in the process :)\n> > \n> \n> So the vote is now tied. Any other opinions\n\nSame here: allow \"drop view\" only\n\nAndreas\n",
"msg_date": "Tue, 17 Oct 2000 10:29:37 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Re: New relkind for views"
}
]
|
[
{
"msg_contents": "\n> >So, pg_dump should be preserved asis.\n> >\n> \n> Just to clarify; I have no intention of doing anything nasty to pg_dump.\n> All I plan to do is rename the pg_restore to one of\n> pg_load/pg_import/pg_undump/pmud_gp, to make way for a WAL based restore\n> utility, although as Bruce suggests, this may be premature.\n\nIt is not premature. We will need a WAL based restore for 7.1\nor we imho don't need to enable WAL for 7.1 at all.\n\nAndreas\n",
"msg_date": "Tue, 17 Oct 2000 10:47:59 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: Backup, restore & pg_dump"
}
]
|
[
{
"msg_contents": ">> Just to clarify; I have no intention of doing anything nasty to pg_dump.\n\nOh, ok, it wasn't clear, sorry -:)\n\n>>All I plan to do is rename the pg_restore to one of\n>>pg_load/pg_import/pg_undump/pmud_gp, to make way for a WAL based\n>>restore utility, although as Bruce suggests, this may be premature.\n>\n>It is not premature. We will need a WAL based restore for 7.1\n>or we imho don't need to enable WAL for 7.1 at all.\n\nI missed your point here - why ?!\nNew backup/restore is not only result of WAL.\nWhat about recovery & performance?\nHm, WAL is required for distributed transactions\nand we are not going to have them in 7.1 - does it\nalso mean that we don't need to enable WAL in 7.1?\n\nThere is WAL - general mechanism for transaction\nrecovery & performance, alternative (with regard to\nnon-overwriting storage manager) approach to transaction\nsystems. And there are WAL based features. Sooner\nwe'll get base sooner we'll have features.\n\nVadim\n\n",
"msg_date": "Tue, 17 Oct 2000 02:36:14 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: Backup, restore & pg_dump"
}
]
|
[
{
"msg_contents": "> Just a confirmation.\n> Do you plan overwrite storage manager also in 7.2 ?\n\nYes if I'll get enough time.\n\nVadim\n\n",
"msg_date": "Tue, 17 Oct 2000 02:48:05 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?koi8-r?Q?=EF=D4=D7=C5=D4=3A_=5BHACKERS=5D_=3F=3F=3F=3F=3A_=5B?=\n\t=?koi8-r?Q?HACKERS=5D_Otvet=3A__WAL_and_indexes_=28Re=3A_=5BHACKERS=5D?=\n\t=?koi8-r?Q?_WAL_status_=26_todo=29?="
}
]
|
[
{
"msg_contents": "\n> >It is not premature. We will need a WAL based restore for 7.1\n> >or we imho don't need to enable WAL for 7.1 at all.\n> \n> I missed your point here - why ?!\n> New backup/restore is not only result of WAL.\n> What about recovery & performance?\n\nOk, recovery is only improved for indexes, no ?\nPerformance must imho be worse in your first round\n(at least compared to -F mode).\nThere is room for improvement that was not there\nbefore WAL (like avoiding write calls, non-overwrite ...) \nbut those are not implemented yet.\nPlease correct me if I am wrong here, but imho we accept that \nslowdown, because we gain so much.\n\n> Hm, WAL is required for distributed transactions\n> and we are not going to have them in 7.1 - does it\n> also mean that we don't need to enable WAL in 7.1?\n\nNo, but rollforward is currently the main feature, no ?\nDoes it make sense to ship WAL without using it's currently \nmain feature ?\n\n> \n> There is WAL - general mechanism for transaction\n> recovery & performance, alternative (with regard to\n> non-overwriting storage manager) approach to transaction\n> systems. And there are WAL based features. Sooner\n> we'll get base sooner we'll have features.\n\nOk, you have implemented startup rollforward anyway.\nI think that the logic for a rollforward that starts with a restored tar\nbackup (if done correctly) will be exactly or at least nearly the same.\n\nThat is:\nWalk the log, see if the entry is already done, do it if not, else \ngo to next entry in log.\n\nIt could be the responsibility of the dba to decide with which log to begin\nrollforward after restore.\n\nAndreas\n",
"msg_date": "Tue, 17 Oct 2000 12:48:19 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: Backup, restore & pg_dump"
}
]
|
[
{
"msg_contents": "It seems some incompatible changes have been made between 7.0 and\ncurrent. In 7.0, if a parameter is NULL OR a null string (\"\"), then\nthe value from an environment variable is applied. However in current\nONLY NULL is considered. Is there any reason for this?\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 17 Oct 2000 22:34:18 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "incompatible changes of PQsetdbLogin()"
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> It seems some incompatible changes have been made between 7.0 and\n> current. In 7.0, if a parameter is NULL OR a null string (\"\"), then\n> the value from an environment variable is applied. However in current\n> ONLY NULL is considered. Is there any reason for this?\n\nYes, there are several reasons. First, to be consistent with\nPQconnectdb(). Second, to be able to override a set environment variable\nwith an empty value. Third, because the existing behaviour was deemed to\nbe quite useless. See Oct 2 thread \"libpq PGHOST\". Is there a problem?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 17 Oct 2000 17:26:43 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: incompatible changes of PQsetdbLogin()"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> It seems some incompatible changes have been made between 7.0 and\n> current. In 7.0, if a parameter is NULL OR a null string (\"\"), then\n> the value from an environment variable is applied. However in current\n> ONLY NULL is considered. Is there any reason for this?\n\nPeter E. did that recently, after discussion that concluded it was a\ngood idea --- otherwise there is no way to override an environment\nvariable with an empty string. Do you have an example where it's\na bad idea?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Oct 2000 11:51:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: incompatible changes of PQsetdbLogin() "
},
{
"msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> > It seems some incompatible changes have been made between 7.0 and\n> > current. In 7.0, if a parameter is NULL OR a null string (\"\"), then\n> > the value from an environment variable is applied. However in current\n> > ONLY NULL is considered. Is there any reason for this?\n> \n> Peter E. did that recently, after discussion that concluded it was a\n> good idea --- otherwise there is no way to override an environment\n> variable with an empty string. Do you have an example where it's\n> a bad idea?\n\nFor PGHOST Peter E.'s changes seem reasonable. But what about PGPORT?\nIn 7.0.x, if pgport is an empty string and PGPORT environment variable\nis not set, then the default port no. (5432) is used. However, in\ncurrent, if pgport is an empty string, then the empty string is\nassumed as a port no. that causes a failure on connection even if\nPGPORT variable is set.\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 22 Oct 2000 18:12:12 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: incompatible changes of PQsetdbLogin() "
}
]
|
[
{
"msg_contents": "I don't know if this has been fixed or not, but alter table will not\nadjust RI/FK triggers on the table. \n\nI.E:\n\ncreate table foo (a int4 primary key)\ncreate table bar (b int4 references foo)\nalter table foo rename to foo2\n\nnow, updates to foo will either crash or hang postgres.\n\n\nWhat needs to be done: on alter table, update tgargs in pg_trigger table\n\n-alex\n\n",
"msg_date": "Tue, 17 Oct 2000 11:02:59 -0400 (EDT)",
"msg_from": "Alex Pilosov <[email protected]>",
"msg_from_op": true,
"msg_subject": "bug: alter table/FK"
},
{
"msg_contents": "\nUnder current sources, it no longer crashes, it just elogs now \nafter Tom's changes. When I get more time I'd like to put in a\nmore complete solution (possibly moving to oids rather than names\nin the argument list in the process). There are alot of other related\nproblems (rename column, check constraint dumping after column/table\nrenames, dropping objects that refer to other objects that have\nbeen removed, drop ... restrict vs cascade)\n\nOn Tue, 17 Oct 2000, Alex Pilosov wrote:\n\n> I don't know if this has been fixed or not, but alter table will not\n> adjust RI/FK triggers on the table. \n> \n> I.E:\n> \n> create table foo (a int4 primary key)\n> create table bar (b int4 references foo)\n> alter table foo rename to foo2\n> \n> now, updates to foo will either crash or hang postgres.\n> \n> \n> What needs to be done: on alter table, update tgargs in pg_trigger table\n\n",
"msg_date": "Tue, 17 Oct 2000 08:58:14 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug: alter table/FK"
}
]
|
[
{
"msg_contents": "i've started playing with the 7.1 snapshots to try out the toast\nsupport. now it looks like i'm going to have to change a bunch of C\nfunctions that i have that call internal postgres functions, which\ndidn't seem to be a problem (other than some extra typing) but my\nquestion is whether i should change the function to use the new fmgr\ntype of definition or if it's only for internal functions. i'm not\nreally clear on what the change does to how the user defined functions\nare handled.\n\n-- \n\nJeff Hoffmann\nPropertyKey.com\n",
"msg_date": "Tue, 17 Oct 2000 11:20:04 -0500",
"msg_from": "Jeff Hoffmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "question about new fmgr in 7.1 snapshots"
},
{
"msg_contents": "Jeff Hoffmann <[email protected]> writes:\n> my question is whether i should change the function to use the new fmgr\n> type of definition or if it's only for internal functions.\n\nUp to you. If you need any of the new features (like clean handling\nof NULLs) then convert. If you were happy with the old way, no need.\n\nA new-style dynamically loaded function must be defined as using\nlanguage \"newC\" not \"C\"; this cues fmgr which way to call it.\n\nGotta start updating the documentation soon ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Oct 2000 12:52:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: question about new fmgr in 7.1 snapshots "
},
{
"msg_contents": "Tom Lane wrote:\n> Jeff Hoffmann <[email protected]> writes:\n> > my question is whether i should change the function to use the new fmgr\n> > type of definition or if it's only for internal functions.\n>\n> Up to you. If you need any of the new features (like clean handling\n> of NULLs) then convert. If you were happy with the old way, no need.\n>\n> A new-style dynamically loaded function must be defined as using\n> language \"newC\" not \"C\"; this cues fmgr which way to call it.\n>\n\n Are you sure on that? Doesn't TOAST mean that any user\n defined function recieving variable size attributes must\n expect them now to be compressed or stored external and\n change it's access to them going through the untoasting? Or\n do you do that for old style 'C' functions all the time in\n the fmgr now?\n\n> Gotta start updating the documentation soon ;-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Mon, 23 Oct 2000 09:42:43 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: question about new fmgr in 7.1 snapshots"
},
{
"msg_contents": "Jan Wieck <[email protected]> writes:\n> Tom Lane wrote:\n>> Jeff Hoffmann <[email protected]> writes:\n>>>> my question is whether i should change the function to use the new fmgr\n>>>> type of definition or if it's only for internal functions.\n>> \n>> Up to you. If you need any of the new features (like clean handling\n>> of NULLs) then convert. If you were happy with the old way, no need.\n\n> Are you sure on that? Doesn't TOAST mean that any user\n> defined function recieving variable size attributes must\n> expect them now to be compressed or stored external and\n> change it's access to them going through the untoasting?\n\nIf you have a user-defined function that takes a potentially-toasted\nargument, you'll have to fix it to detoast its argument. I don't\nthink it's appropriate to saddle fmgr with that responsibility.\n\nAt least in theory, you could detoast the argument without also buying\ninto the new fmgr notation, but I agree that converting is easier ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 11:30:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: question about new fmgr in 7.1 snapshots "
}
]
|
[
{
"msg_contents": "> > >It is not premature. We will need a WAL based restore for 7.1\n> > >or we imho don't need to enable WAL for 7.1 at all.\n> > \n> > I missed your point here - why ?!\n> > New backup/restore is not only result of WAL.\n> > What about recovery & performance?\n> \n> Ok, recovery is only improved for indexes, no ?\n> Performance must imho be worse in your first round\n> (at least compared to -F mode).\n ^^^^^^^^\n only\n\n> There is room for improvement that was not there\n> before WAL (like avoiding write calls, non-overwrite ...) \n> but those are not implemented yet.\n\nAnd what? If there will be no WAL with base functionality\nnow then there will be no additional WAL benefits (eg savepoints)\nlater. WAL based backup/reatore is one of additional WAL benefit.\n*Hopefully* we'll be able to implement it now.\n\nBTW, avoiding writes is base WAL feature, ie - it'll be\nimplemented in 7.1.\n\n> Please correct me if I am wrong here, but imho we accept that \n> slowdown, because we gain so much.\n> \n> > Hm, WAL is required for distributed transactions\n> > and we are not going to have them in 7.1 - does it\n> > also mean that we don't need to enable WAL in 7.1?\n> \n> No, but rollforward is currently the main feature, no ?\n\nI'm going to rollback changes on abort in 7.1. Seems I've\nmentioned both redo and UNDO (without compensation records)\nAM methods many times.\n\n> Does it make sense to ship WAL without using it's currently \n> main feature ?\n\nSorry, but it's not always possible to have all at once.\nBut again, hopefully we'll have backup/restore.\nThanks to Philip.\n\n(BTW, replication server prototype announced by Pgsql, Inc\ncould be used for incremental backup)\n\nVadim\n",
"msg_date": "Tue, 17 Oct 2000 11:00:08 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: AW: Backup, restore & pg_dump"
}
]
|
[
{
"msg_contents": "> I'm still nervous about how we're going to test the WAL code \n> adequately for the lesser-used index types. Any ideas out there?\n\nFirst, seems we'll have to follow to what you've proposed for\ntheir redo/undo: log each *fact* of changing a page to know\nwas update op done entirely or not (rebuild index if so).\n+ log information about where to find tuple pointing to heap\n(for undo).\n\nThis is much easy to do than logging suitable for recovery.\n\nVadim\n",
"msg_date": "Tue, 17 Oct 2000 11:48:35 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Re: ?????: ?????: WAL and indexes (Re: [HACKERS] WA\n\tL status & todo) "
}
]
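[Editorial note: the redo behaviour Vadim describes — logging the fact that a page changed so recovery can tell whether an update was applied entirely — rests on the page-LSN rule. The sketch below is a generic, simplified illustration of that rule; `WalRecord`/`Page` and the field names are invented for the example and are not PostgreSQL's actual XLogRecord machinery.]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified log record -- not PostgreSQL's real XLogRecord. */
typedef struct
{
    uint64_t lsn;      /* position of this record in the log */
    uint32_t page_no;  /* page the change applies to */
} WalRecord;

typedef struct
{
    uint64_t page_lsn; /* LSN of the last change already applied to the page */
} Page;

/*
 * Redo rule: replay a change only if the page has not yet seen it.
 * If page_lsn >= the record's lsn, the update was done entirely
 * before the crash and the redo is skipped.
 */
int
wal_redo_needed(const Page *page, const WalRecord *rec)
{
    return page->page_lsn < rec->lsn;
}

void
wal_redo(Page *page, const WalRecord *rec)
{
    if (wal_redo_needed(page, rec))
        page->page_lsn = rec->lsn;  /* re-apply the change, stamp the page */
}
```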
|
[
{
"msg_contents": "Gert (Pache) reports a bug with a severity of 3\nThe lower the number the more severe it is.\n\nShort Description\nUPPER and LOWER dosen't work correctly on special caracters (umlauts)\n\nLong Description\nThe Upper- and the lower function don't convert the german umlauts (ä.ö.ü.) but leave them in their original condition\n\nSample Code\n\n\nNo file was uploaded with this report\n\n",
"msg_date": "Tue, 17 Oct 2000 15:58:27 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "UPPER and LOWER dosen't work correctly on special caracters (umlauts)"
},
{
"msg_contents": "On Tue, 17 Oct 2000 [email protected] wrote:\n\n> Gert (Pache) reports a bug with a severity of 3\n> The lower the number the more severe it is.\n> \n> Short Description\n> UPPER and LOWER dosen't work correctly on special caracters (umlauts)\n> \n> Long Description\n> The Upper- and the lower function don't convert the german umlauts (ä.ö.ü.) \n> but leave them in their original condition\n> \n> Sample Code\n> \n> \n> No file was uploaded with this report\n> \n\nYou can make this work with some help from your OS. Have your locale set\ncorrectly to recognize non-ascii characters when postmaster starts. I did\nit by setting LC_CTYPE to iso_8859_1 in the postgres superuser's shell\ninitialization file. Note that this won't work unless the locale is\nactually available on your machine; you can't just type the same thing I\ndid and hope for the best.\n\n\nIf it helps, this is from the postgres superuser's shell at our site:\n[1:42pm] /data/postgres> uname -a\nSunOS ptolemy.tlg.uci.edu 5.6 Generic_105181-08 sun4u sparc SUNW,Ultra-5_10\n\n[1:43pm] /data/postgres> locale\nLANG=\nLC_CTYPE=iso_8859_1\nLC_NUMERIC=\"C\"\nLC_TIME=\"C\"\nLC_COLLATE=\"C\"\nLC_MONETARY=\"C\"\nLC_MESSAGES=\"C\"\nLC_ALL=\n\nThis is not a complete solution (collation is still incorrect) but it\nshould get you started. Hope it helps.\n\nNishad\n-- \n\"Underneath the concrete, the dream is still alive\" -- Talking Heads\n\n\n",
"msg_date": "Tue, 17 Oct 2000 13:51:09 -0700 (PDT)",
"msg_from": "Nishad Prakash <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPPER and LOWER dosen't work correctly on special caracters\n\t(umlauts)"
},
{
"msg_contents": "> The Upper- and the lower function don't convert the german umlauts (ä.ö.ü.) but leave them in their original condition\n\nGert (or anyone): what should the result be? I'm German-impaired, so\nyou'll need to be more specific. Did you compile with locale turned on?\nMulti-byte character sets?? Which byte codes correspond to the lower-\nand upper-case umlaut characters???\n\n - Thomas\n",
"msg_date": "Wed, 18 Oct 2000 05:18:08 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPPER and LOWER dosen't work correctly on special caracters\n\t(umlauts)"
},
{
"msg_contents": "You need to enable locale support (see documentation).\nBest regards\nRony\n\n> -----Ursprüngliche Nachricht-----\n> Von: [email protected] [mailto:[email protected]]Im\n> Auftrag von [email protected]\n> Gesendet: 17 października 2000 21:58\n> An: [email protected]\n> Betreff: [BUGS] UPPER and LOWER dosen't work correctly on special\n> caracters (umlauts)\n>\n>\n> Gert (Pache) reports a bug with a severity of 3\n> The lower the number the more severe it is.\n>\n> Short Description\n> UPPER and LOWER dosen't work correctly on special caracters (umlauts)\n>\n> Long Description\n> The Upper- and the lower function don't convert the german\n> umlauts (ä.ö.ü.) but leave them in their original condition\n>\n> Sample Code\n>\n>\n> No file was uploaded with this report\n>\n",
"msg_date": "Wed, 18 Oct 2000 08:34:25 +0200",
"msg_from": "\"Ronald Kuczek\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "AW: UPPER and LOWER dosen't work correctly on special caracters\n\t(umlauts)"
}
]
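[Editorial note: the behaviour reported in this thread follows from the fact that `upper()`/`lower()` ultimately call the C library's `toupper()`/`tolower()`, which in the default "C" locale only handle ASCII. A minimal demonstration — the `upper_byte` helper is illustrative, not the backend's actual code, and 0xE4 is the ISO 8859-1 byte for a lower-case a-umlaut:]

```c
#include <assert.h>
#include <ctype.h>
#include <locale.h>

/*
 * In the default "C" locale, toupper() only knows ASCII: the Latin-1
 * byte for a lower-case a-umlaut (0xE4) comes back unchanged, which is
 * exactly what the bug report describes. Selecting a suitable locale
 * (setlocale(LC_CTYPE, ...) -- for the backend, a locale-enabled build
 * plus the right LC_CTYPE when postmaster starts) is what makes
 * toupper() map 0xE4 to 0xC4.
 */
int
upper_byte(unsigned char c)
{
    return toupper(c);
}
```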
|
[
{
"msg_contents": "In tcop/ulitity.c we have the following code fragment:\n\ncase VIEW:\n{\n\tchar\t *viewName = stmt->name;\n\tchar\t *ruleName;\n\n\truleName = MakeRetrieveViewRuleName(viewName);\n\trelationName = RewriteGetRuleEventRel(ruleName);\n\nThis looks like an expensive no-op to me.\nif viewname == \"myview\"\nthen ruleName == \"_RETmyview\" (+/- multibyte aware truncation)\nthen relationName == \"myview\"\n\nIs this code doing something that I'm missing?\n\nAlso\n\n\"DROP TABLE x, y, z\" is allowed, but\n\n\"DROP VIEW x, y, z\" is not.\n\nAny reason other than historical?\n\t\n-- \nMark Hollomon\n",
"msg_date": "Tue, 17 Oct 2000 16:16:00 -0400",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": true,
"msg_subject": "DROP VIEW code question"
},
{
"msg_contents": "Mark Hollomon writes:\n\n> Also\n> \n> \"DROP TABLE x, y, z\" is allowed, but\n> \n> \"DROP VIEW x, y, z\" is not.\n> \n> Any reason other than historical?\n\nI don't know how it looks now, but the \"DROP TABLE x, y, z\" was pretty\nbroken a while ago. For example, if there was some sort of dependency\nbetween the tables (foreign keys?) it would abort and leave an\ninconsistent state. I'm not very fond of this extension, but keep the\nissue in mind.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 17 Oct 2000 22:33:07 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DROP VIEW code question"
},
{
"msg_contents": "Mark Hollomon <[email protected]> writes:\n> In tcop/ulitity.c we have the following code fragment:\n> case VIEW:\n> {\n> \tchar\t *viewName = stmt->name;\n> \tchar\t *ruleName;\n\n> \truleName = MakeRetrieveViewRuleName(viewName);\n> \trelationName = RewriteGetRuleEventRel(ruleName);\n\n> This looks like an expensive no-op to me.\n> if viewname == \"myview\"\n> then ruleName == \"_RETmyview\" (+/- multibyte aware truncation)\n> then relationName == \"myview\"\n\n> Is this code doing something that I'm missing?\n\nIt's probably done that way for symmetry with the DROP RULE case.\nI don't see any big need to change it --- DROP VIEW is hardly a\nperformance-critical path. And it *does* help ensure that what\nyou are dropping is a view not a plain table.\n\n> Also\n> \"DROP TABLE x, y, z\" is allowed, but\n> \"DROP VIEW x, y, z\" is not.\n> Any reason other than historical?\n\nNo, not that I can think of. If you want to fix that, go for it.\nYou might consider merging DropStmt and RemoveStmt into one parsenode\ntype that has both a list and an object-type field. I see no real\ngood reason why they're separate ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Oct 2000 16:33:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DROP VIEW code question "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I don't know how it looks now, but the \"DROP TABLE x, y, z\" was pretty\n> broken a while ago. For example, if there was some sort of dependency\n> between the tables (foreign keys?) it would abort and leave an\n> inconsistent state. I'm not very fond of this extension, but keep the\n> issue in mind.\n\nThis is just a special case of the generic problem that you can't\nroll back a DROP TABLE. That'll be fixed by 7.1, so I see no reason\nnot to allow the more convenient syntax.\n\nBTW, Mark, the reason utility.c implements T_DropStmt with two loops\nis presumably to try to avoid the rollback-drop-table problem; but it's\ninherently bogus because not all error conditions can be checked there.\nYou could fold the two loops into one loop, and/or remove any checks\nthat are redundant with RemoveRelation itself.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Oct 2000 16:43:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DROP VIEW code question "
},
{
"msg_contents": "On Tuesday 17 October 2000 16:33, Tom Lane wrote:\n> Mark Hollomon <[email protected]> writes:\n> > In tcop/ulitity.c we have the following code fragment:\n> > case VIEW:\n> > {\n> > \tchar\t *viewName = stmt->name;\n> > \tchar\t *ruleName;\n> >\n> > \truleName = MakeRetrieveViewRuleName(viewName);\n> > \trelationName = RewriteGetRuleEventRel(ruleName);\n> >\n> > This looks like an expensive no-op to me.\n> > if viewname == \"myview\"\n> > then ruleName == \"_RETmyview\" (+/- multibyte aware truncation)\n> > then relationName == \"myview\"\n> >\n> > Is this code doing something that I'm missing?\n>\n> It's probably done that way for symmetry with the DROP RULE case.\n> I don't see any big need to change it --- DROP VIEW is hardly a\n> performance-critical path. And it *does* help ensure that what\n> you are dropping is a view not a plain table.\n\nYes, prior to the separate relkind for views, it was necessary for that.\n\nI just didn't see a need now.\n\n>\n> > Also\n> > \"DROP TABLE x, y, z\" is allowed, but\n> > \"DROP VIEW x, y, z\" is not.\n> > Any reason other than historical?\n>\n> No, not that I can think of. If you want to fix that, go for it.\n> You might consider merging DropStmt and RemoveStmt into one parsenode\n> type that has both a list and an object-type field. I see no real\n> good reason why they're separate ...\n\nOk.\n\n-- \nMark Hollomon\n",
"msg_date": "Tue, 17 Oct 2000 21:02:25 -0400",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DROP VIEW code question"
}
]
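[Editorial note: the "expensive no-op" discussed above — `MakeRetrieveViewRuleName` prepends `_RET`, then `RewriteGetRuleEventRel` recovers the relation name from the rule — can be sketched as below. The function names and prefix handling here are illustrative stand-ins: the real `MakeRetrieveViewRuleName` also does multibyte-aware truncation to NAMEDATALEN, and the real `RewriteGetRuleEventRel` performs a catalog lookup (which is what makes the round trip fail, usefully, when the named relation is not a view).]

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define VIEW_RULE_PREFIX "_RET"

/* Sketch of MakeRetrieveViewRuleName: prepend the view-rule prefix. */
void
make_view_rule_name(char *dst, size_t dstlen, const char *view)
{
    snprintf(dst, dstlen, "%s%s", VIEW_RULE_PREFIX, view);
}

/*
 * Sketch of RewriteGetRuleEventRel: recover the event relation.  For a
 * short view name the round trip is the identity, which is why the
 * utility.c fragment looks like a no-op.
 */
const char *
rule_event_rel(const char *rule)
{
    return rule + strlen(VIEW_RULE_PREFIX);
}
```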
|
[
{
"msg_contents": "> I think we have found a bug in postgresql 7.0.2. If I'm right then a fix\n> for this should probably be added to 7.0.3 also. Anyway without further\n> adieu:\n> \n> I have attached detail of my session at the end of this email, but the\n> summary is as follows.\n> \n> If I run the following commands my pg_group file will be corrupted and up\n> to 2Gig in size. (FYI - I am starting this after a fresh initdb, and the\n> pg_group file\n> starts at 0 bytes)\n> \n> create user foo1;\n> create user foo2;\n> create user foo3;\n> create group test1;\n> alter group test1 add user foo1,foo2,foo3;\n> \t(pg_group is 8192 bytes)\n> drop user foo1,foo2;\n> \t(now my pg_group file is 24567 bytes)\n> drop user foo3;\n> \n> (now my psql is hanging, control-c does nothing to kill the query. Also\n> my pg_group file is now growing rapidly till it fills the disk or I kill\n> -9 postmaster, which ever comes first. Not good!)\n> \n> I think the problem is with the drop user command (obviously). After the\n> alter group command all three user ids are in the pg_group table. After I\n> drop two of the users, both users are gone from the pg_user table, but the\n> second user is still in pg_group. So we now have a reference to a user\n> that does not exist. At this point I think the system is in trouble, and\n> the last drop user command just seals the coffin. If I do the same script\n> but drop the users one at a time, everything is fine. So apparently while\n> the drop user command correctly drops all users from pg_user, it only\n> checks pg_group for the first user in the list.\n> \n> We found this problem on one of our soon to be production database\n> servers, fortunately prior to install. The only way to get the server\n> back\n> in to a fully functional state was to perform an initdb (I stopped\n> postgre, then deleted the data directory then restarted using\n> /etc/init.d/postgresql start which performs initdb)\n> \n> Anyway, any comments? Can anyone else repeat this? I hope this is easy to\n> fix. I guess the quick fix is to disallow multiple users to be specified \n> in the drop user command.\n> \n> Thanks much,\n> \n> Matt O'Connor\n> \n> \n> p.s.. as promised here is some detail from my session.\n> I have tested this on both RH7 with pg7.0.2 as supplied by RH, and with\n> RH6.2 with 7.0.2 rpms from Lamar.\n> \n> \n> Red Hat Linux release 7.0 (Guinness)\n> select version();\n> version\n> -------------------------------------------------------------\n> PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc 2.96\n> (1 row)\n> \n> ls -l ./data/pg_group\n> -rw------- 1 postgres postgres 0 Oct 17 20:26 ./data/pg_group\n> create user foo1;\n> CREATE USER\n> create user foo2;\n> CREATE USER\n> create user foo3;\n> CREATE USER\n> select * from pg_user;\n> usename | usesysid | usecreatedb | usetrace | usesuper | usecatupd |\n> passwd | valuntil\n> ----------+----------+-------------+----------+----------+-----------+----\n> ------+----------\n> postgres | 26 | t | t | t | t |\n> ******** |\n> foo1 | 27 | f | f | f | f |\n> ******** |\n> foo2 | 28 | f | f | f | f |\n> ******** |\n> foo3 | 29 | f | f | f | f |\n> ******** |\n> (4 rows)\n> \n> create group test1;\n> CREATE GROUP\n> select * from pg_group;\n> groname | grosysid | grolist\n> ---------+----------+---------\n> test1 | 1 |\n> (1 row)\n> \n> alter group test1 add user foo1,foo2,foo3;\n> ALTER GROUP\n> select * from pg_user;\n> usename | usesysid | usecreatedb | usetrace | usesuper | usecatupd |\n> passwd | valuntil\n> ----------+----------+-------------+----------+----------+-----------+----\n> ------+----------\n> postgres | 26 | t | t | t | t |\n> ******** |\n> foo1 | 27 | f | f | f | f |\n> ******** |\n> foo2 | 28 | f | f | f | f |\n> ******** |\n> foo3 | 29 | f | f | f | f |\n> ******** |\n> (4 rows)\n> \n> select * from pg_group;\n> groname | grosysid | grolist\n> ---------+----------+------------\n> test1 | 1 | {27,28,29}\n> (1 row)\n> \n> ls -l ./data/pg_group\n> -rw------- 1 postgres postgres 8192 Oct 17 20:27 ./data/pg_group\n> drop user foo1,foo2;\n> DROP USER\n> ls -l ./data/pg_group\n> -rw------- 1 postgres postgres 24576 Oct 17 20:27 ./data/pg_group\n> select * from pg_user;\n> usename | usesysid | usecreatedb | usetrace | usesuper | usecatupd |\n> passwd | valuntil\n> ----------+----------+-------------+----------+----------+-----------+----\n> ------+----------\n> postgres | 26 | t | t | t | t |\n> ******** |\n> foo3 | 29 | f | f | f | f |\n> ******** |\n> (2 rows)\n> \n> select * from pg_group;\n> groname | grosysid | grolist\n> ---------+----------+---------\n> test1 | 1 | {29,27}\n> (1 row)\n> \n> drop user foo3;\n> Cancel request sent\n> Terminated\n> bash-2.04$ ls -l ./data/pg_group\n-rw------- 1 postgres postgres 438386688 Oct 17 20:27 ./data/pg_group\nbash-2.04$\n\n(as you can see, psql was not responding after the last drop user, I hit\ncontrol-c and is said Cancel request sent, but psql never returned to a\nprompt, so I had to kill -9 the postmaster.\n",
"msg_date": "Tue, 17 Oct 2000 20:32:43 -0500",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgre7.0.2 drop user bug"
},
{
"msg_contents": "Matthew <[email protected]> writes:\n>> I think we have found a bug in postgresql 7.0.2.\n\nBug confirmed --- on a compilation with Asserts enabled, this sequence\ncauses an assert failure during update of pg_group_name_index in the\nDROP USER command, in both 7.0.* and current sources. I ran out of\nsteam before finding the cause, but something's pretty busted here...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Oct 2000 00:52:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgre7.0.2 drop user bug "
},
{
"msg_contents": "Matthew <[email protected]> writes:\n>> Anyway, any comments? Can anyone else repeat this? I hope this is easy to\n>> fix. I guess the quick fix is to disallow multiple users to be specified \n>> in the drop user command.\n\nThe correct fix is CommandCounterIncrement() in the DROP USER loop,\nso that later iterations can see the changes made by prior iterations.\nWithout, death and destruction ensue if any of the users are in the\nsame groups, because the later AlterGroup calls fail.\n\nFixed in current and back-patched for 7.0.3.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2000 00:02:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgre7.0.2 drop user bug "
}
]
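[Editorial note: the fix Tom describes — adding `CommandCounterIncrement()` to the DROP USER loop so each iteration sees the catalog changes made by the previous one — can be modelled with a toy command-visibility counter. Everything below (the `GroupEntry` struct, field names, helper functions) is an invented analogy, not the backend's actual snapshot machinery:]

```c
#include <assert.h>

/*
 * Toy model of command visibility inside one transaction: a change made
 * by command N only becomes visible to lookups once the command counter
 * has advanced past N. Without CommandCounterIncrement() between the
 * per-user iterations of DROP USER, iteration 2 still "sees" group
 * membership that iteration 1 already removed -- the stale state that
 * corrupted pg_group above.
 */
static int command_counter = 0;

typedef struct
{
    int usesysid;    /* user id listed in a group */
    int removed_by;  /* command id that removed it, -1 = still present */
} GroupEntry;

/* Does the entry still look present to the current command? */
int
is_member_visible(const GroupEntry *e)
{
    /* A removal is visible only if made by an *earlier* command. */
    return e->removed_by < 0 || e->removed_by >= command_counter;
}

void
drop_from_group(GroupEntry *e)
{
    e->removed_by = command_counter;
}

void
command_counter_increment(void)
{
    command_counter++;
}
```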
|
[
{
"msg_contents": "I just ran into a strangest thing: within transaction, select now() will\nalways return time when transaction started. Same happens with select\n'now'::timestamp.\n\nThis is with 7.0. I have not tested it with CVS.\n\nI am not sure what causes this. I assume that result of now() is cached by\nfmgr. Is there a way to declare functions 'not-cacheable-ever'? If there\nis, such should be applied to now().\n\n-alex\n\n",
"msg_date": "Tue, 17 Oct 2000 22:20:10 -0400 (EDT)",
"msg_from": "Alex Pilosov <[email protected]>",
"msg_from_op": true,
"msg_subject": "time stops within transaction"
},
{
"msg_contents": "Wow, that is strange.\n\n\n> I just ran into a strangest thing: within transaction, select now() will\n> always return time when transaction started. Same happens with select\n> 'now'::timestamp.\n> \n> This is with 7.0. I have not tested it with CVS.\n> \n> I am not sure what causes this. I assume that result of now() is cached by\n> fmgr. Is there a way to declare functions 'not-cacheable-ever'? If there\n> is, such should be applied to now().\n> \n> -alex\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 17 Oct 2000 23:21:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction"
},
{
"msg_contents": "Alex Pilosov <[email protected]> writes:\n> I just ran into a strangest thing: within transaction, select now() will\n> always return time when transaction started.\n\nThat is what now() is defined to return: transaction start time.\nPerhaps the documentation needs improvement...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Oct 2000 23:33:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction "
},
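[Editorial note: the behaviour Tom states — now() returns the transaction start time, frozen for the life of the transaction — comes down to capturing the clock once at transaction start and handing back the cached value thereafter. The sketch below is an illustrative model under that assumption; `start_transaction`, `sql_now`, and `commit_transaction` are invented names, not the backend's actual functions:]

```c
#include <assert.h>
#include <stddef.h>
#include <time.h>

/*
 * Illustrative model of why "time stops" within a transaction: the
 * timestamp is captured once when the transaction starts, and every
 * call to now() inside that transaction returns the cached value.
 */
static time_t xact_start_time;
static int in_transaction = 0;

void
start_transaction(void)
{
    xact_start_time = time(NULL);
    in_transaction = 1;
}

time_t
sql_now(void)
{
    /*
     * Outside an explicit transaction block each statement is its own
     * transaction, so callers see a fresh value; inside a block the
     * cached start time is returned every time.
     */
    return in_transaction ? xact_start_time : time(NULL);
}

void
commit_transaction(void)
{
    in_transaction = 0;
}
```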
{
"msg_contents": "\n----- Original Message -----\nFrom: Bruce Momjian <[email protected]>\nTo: Alex Pilosov <[email protected]>\nCc: <[email protected]>\nSent: Wednesday, 18 October 2000 16:21\nSubject: Re: [HACKERS] time stops within transaction\n\n\n> Wow, that is strange.\n>\n>\n> > I just ran into a strangest thing: within transaction, select now() will\n> > always return time when transaction started. Same happens with select\n> > 'now'::timestamp.\n> >\n\n\nActually, thats useful since you can put now() into multiple fields in one\ntransaction.\n\nThe alternative is that CURRENT_TIMESTAMP (??? is that the one) which isn't a\nfunction\nand stuffs up when trying to use it as a field default or as part of an\nexpression in a view.\n\n(Comment true for 6.5.3 at least)\n\n Documentation on time constants and how to misuse them is weak...\n\nRegards\n\n",
"msg_date": "Wed, 18 Oct 2000 16:48:43 +1300",
"msg_from": "\"John Huttley\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction"
},
{
"msg_contents": "\"John Huttley\" <[email protected]> writes:\n> Documentation on time constants and how to misuse them is weak...\n\nYou can say that again! Who's up for submitting documentation patches?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Oct 2000 00:01:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction "
},
{
"msg_contents": "On Tue, 17 Oct 2000, Alex Pilosov wrote:\n\n> I just ran into a strangest thing: within transaction, select now() will\n> always return time when transaction started. Same happens with select\n> 'now'::timestamp.\n\n It's feature, not bug. IMHO good feature, an example I use it for rows \nidentification during table filling :-)\n\n\t\t\t\t\tKarel\n\n",
"msg_date": "Wed, 18 Oct 2000 10:38:01 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction"
},
{
"msg_contents": "Tom Lane writes:\n\n> That is what now() is defined to return: transaction start time.\n> Perhaps the documentation needs improvement...\n\nThen CURRENT_TIMESTAMP is in violation of SQL.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 18 Oct 2000 17:38:22 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> That is what now() is defined to return: transaction start time.\n\n> Then CURRENT_TIMESTAMP is in violation of SQL.\n\nAu contraire, if it did not behave that way it would violate the spec.\nSee SQL92 6.8 general rule 3:\n\n 3) If an SQL-statement generally contains more than one reference\n to one or more <datetime value function>s, then all such ref-\n erences are effectively evaluated simultaneously. The time of\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n evaluation of the <datetime value function> during the execution\n of the SQL-statement is implementation-dependent.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Oct 2000 11:41:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction "
},
{
"msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <[email protected]> writes:\n> > Tom Lane writes:\n> >> That is what now() is defined to return: transaction start time.\n> \n> > Then CURRENT_TIMESTAMP is in violation of SQL.\n> \n> Au contraire, if it did not behave that way it would violate the spec.\n> See SQL92 6.8 general rule 3:\n> \n> 3) If an SQL-statement generally contains more than one reference\n> to one or more <datetime value function>s, then all such ref-\n> erences are effectively evaluated simultaneously. The time of\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> evaluation of the <datetime value function> during the execution\n> of the SQL-statement is implementation-dependent.\n\nstatement != transaction\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 18 Oct 2000 18:10:04 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> Au contraire, if it did not behave that way it would violate the spec.\n>> See SQL92 6.8 general rule 3:\n>> \n>> 3) If an SQL-statement generally contains more than one reference\n>> to one or more <datetime value function>s, then all such ref-\n>> erences are effectively evaluated simultaneously. The time of\n>> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n>> evaluation of the <datetime value function> during the execution\n>> of the SQL-statement is implementation-dependent.\n\n> statement != transaction\n\nSo? It also says that the choice of exactly when to evaluate now()\nis implementation-dependent. Doing so at start of transaction is\nan allowed behavior AFAICS. Actually calling time(2) at each use\nof now(), which is what the original poster seemed to want, is\nclearly *not* an allowed behavior.\n\nI think what you are advocating is recomputing now() at each statement\nboundary within a transaction, but that's not as simple as it looks\neither. Consider statement boundaries in an SQL function --- the\nfunction is probably being called from some outer statement, so\nadvancing now() within the function would violate the spec constraint\nwith respect to the outer statement.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Oct 2000 12:39:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction "
},
{
"msg_contents": "Tom Lane writes:\n\n> So? It also says that the choice of exactly when to evaluate now()\n> is implementation-dependent. Doing so at start of transaction is\n> an allowed behavior AFAICS.\n\nBut it's only talking about statements. You can't reuse things that you\ncalculated for previous statements unless it says so. (Of course\nimplementation-dependent means that you can do anything you want to, but\nlet's not go there. :-)\n\n> Actually calling time(2) at each use\n> of now(), which is what the original poster seemed to want, is\n> clearly *not* an allowed behavior.\n\nWhat this covers is doing things like\n\nSELECT CURRENT_TIMESTAMP as \"Today\", CURRENT_TIMESTAMP + 1 DAY AS \"Tomorrow\";\n\nBut keep in mind that other/correct SQL implementations don't have\nautocommit, so if you're in some interactive SQL shell and you keep\nentering\n\nselect current_timestamp;\n\nthen it won't ever advance unless you do commits in between. This doesn't\nmake much sense to me, as CURRENT_TIMESTAMP is defined to return the\n\"current time\" . The point of a transaction is all data or no data, not\nall the same data.\n\n> I think what you are advocating is recomputing now() at each statement\n> boundary within a transaction, but that's not as simple as it looks\n> either. Consider statement boundaries in an SQL function --- the\n> function is probably being called from some outer statement, so\n> advancing now() within the function would violate the spec constraint\n> with respect to the outer statement.\n\nGood point. There are probably special rules for SQL functions.\n\nI'm not saying that this thing is a priority to me, but it's something to\nconsider.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 18 Oct 2000 23:22:27 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction "
},
{
"msg_contents": "At 12:39 PM 10/18/00 -0400, Tom Lane wrote:\n>Peter Eisentraut <[email protected]> writes:\n>> Tom Lane writes:\n>>> Au contraire, if it did not behave that way it would violate the spec.\n>>> See SQL92 6.8 general rule 3:\n>>> \n>>> 3) If an SQL-statement generally contains more than one reference\n>>> to one or more <datetime value function>s, then all such ref-\n>>> erences are effectively evaluated simultaneously. The time of\n>>> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n>>> evaluation of the <datetime value function> during the execution\n>>> of the SQL-statement is implementation-dependent.\n>\n>> statement != transaction\n>\n>So? It also says that the choice of exactly when to evaluate now()\n>is implementation-dependent. \n\nNote the phrase \"during the execution of the SQL-STATEMENT\" above. It\nsays that exactly when it will be evaluated within the statement is\nimplementation-defined, BUT THAT IT IS EVALUATED WITHIN THE STATEMENT,\nnot beforehand.\n\nAt least, that's how I read it :)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 18 Oct 2000 14:36:18 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction "
},
{
"msg_contents": "On Wed, 18 Oct 2000, Tom Lane wrote:\n\n> I think what you are advocating is recomputing now() at each statement\n> boundary within a transaction, but that's not as simple as it looks\n> either. Consider statement boundaries in an SQL function --- the\n> function is probably being called from some outer statement, so\n> advancing now() within the function would violate the spec constraint\n> with respect to the outer statement.\nPostgres doesn't have an idea of what a 'top-level' statement is? I.E.\nstatement as submitted by a client (libpq)?\n\n-alex\n\n",
"msg_date": "Wed, 18 Oct 2000 22:24:04 -0400 (EDT)",
"msg_from": "Alex Pilosov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: time stops within transaction "
},
{
"msg_contents": "Alex Pilosov <[email protected]> writes:\n>> Consider statement boundaries in an SQL function --- the\n>> function is probably being called from some outer statement, so\n>> advancing now() within the function would violate the spec constraint\n>> with respect to the outer statement.\n> Postgres doesn't have an idea of what a 'top-level' statement is? I.E.\n> statement as submitted by a client (libpq)?\n\nThere's never been any reason to make such a distinction.\nNor am I entirely convinced that that's the right definition...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Oct 2000 23:02:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n> Alex Pilosov <[email protected]> writes:\n> >> Consider statement boundaries in an SQL function --- the\n> >> function is probably being called from some outer statement, so\n> >> advancing now() within the function would violate the spec constraint\n> >> with respect to the outer statement.\n> > Postgres doesn't have an idea of what a 'top-level' statement is? I.E.\n> > statement as submitted by a client (libpq)?\n>\n> There's never been any reason to make such a distinction.\n\nThere's already a distinction.\nSnapshot is made per top-level statement and functions/subqueries\nuse the same snapshot as that of top-level statement.\n\nRegards.\n\nHiroshi Inoue\n\n\n",
"msg_date": "Thu, 19 Oct 2000 12:48:09 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction"
},
{
"msg_contents": "Hiroshi Inoue <[email protected]> writes:\n>>>> Postgres doesn't have an idea of what a 'top-level' statement is? I.E.\n>>>> statement as submitted by a client (libpq)?\n>> \n>> There's never been any reason to make such a distinction.\n\n> There's already a distinction.\n> Snapshot is made per top-level statement and functions/subqueries\n> use the same snapshot as that of top-level statement.\n\nNot so. SetQuerySnapshot is executed per querytree, not per top-level\nstatement --- for example, if a rule generates multiple queries from\na user statement, SetQuerySnapshot is called again for each query.\n\nWith the current structure of pg_exec_query_string(), an operation\nexecuted in the outer loop, rather than the inner, would more or less\ncorrespond to one \"top level\" query --- if you want to assume that\npg_exec_query_string() is only called from PostgresMain. That's\ntrue today but hasn't always been true --- I believe it used to be\nused to parse SPI commands, and someday it may be again.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2000 00:22:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n> Hiroshi Inoue <[email protected]> writes:\n> >>>> Postgres doesn't have an idea of what a 'top-level' statement is? I.E.\n> >>>> statement as submitted by a client (libpq)?\n> >>\n> >> There's never been any reason to make such a distinction.\n>\n> > There's already a distinction.\n> > Snapshot is made per top-level statement and functions/subqueries\n> > use the same snapshot as that of top-level statement.\n>\n> Not so. SetQuerySnapshot is executed per querytree, not per top-level\n> statement --- for example, if a rule generates multiple queries from\n> a user statement, SetQuerySnapshot is called again for each query.\n>\n> With the current structure of pg_exec_query_string(), an operation\n> executed in the outer loop, rather than the inner, would more or less\n> correspond to one \"top level\" query --- if you want to assume that\n> pg_exec_query_string() is only called from PostgresMain. That's\n> true today but hasn't always been true --- I believe it used to be\n> used to parse SPI commands, and someday it may be again.\n>\n\nIf there's no concept of top-level statement,there's no\nconcept of read consistency and MVCC isn't needed.\n\nRegards.\n\nHiroshi Inoue.\n\n\n",
"msg_date": "Thu, 19 Oct 2000 13:42:51 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction"
},
{
"msg_contents": "> > > Snapshot is made per top-level statement and functions/subqueries\n> > > use the same snapshot as that of top-level statement.\n> >\n> > Not so. SetQuerySnapshot is executed per querytree, not per top-level\n> > statement --- for example, if a rule generates multiple queries from\n> > a user statement, SetQuerySnapshot is called again for each query.\n\nThis is true. I just make it to work as it was in pre-6.5 times - each\nquery of *top level* query list uses own snapshot (in read committed mode\nonly) as if they were submitted by user one by one.\n\nBut functions/subqueries called while executing query uses same snapshot\nas query itself.\n\n> > With the current structure of pg_exec_query_string(), an operation\n> > executed in the outer loop, rather than the inner, would more or less\n> > correspond to one \"top level\" query --- if you want to assume that\n> > pg_exec_query_string() is only called from PostgresMain. That's\n> > true today but hasn't always been true --- I believe it used to be\n> > used to parse SPI commands, and someday it may be again.\n\nIt was never used in SPI. Just look at _SPI_execute. Same parent query\nsnapshot is used in SPI functions. *But* SPI' queries *see* changes\nmade by parent query - I never was sure about this and think I've asked\nother opinions. No opinions - no changes -:)\n\n> If there's no concept of top-level statement,there's no\n> concept of read consistency and MVCC isn't needed.\n\nExcept of the fact that SPI' queries see changes made by parent same\nsnapshot\nis used all time while executing top-level query (single query, not query\nlist).\n\nVadim\n\n\n",
"msg_date": "Wed, 18 Oct 2000 22:51:22 -0700",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction"
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n> Hiroshi Inoue <[email protected]> writes:\n> >>>> Postgres doesn't have an idea of what a 'top-level' statement is? I.E.\n> >>>> statement as submitted by a client (libpq)?\n> >>\n> >> There's never been any reason to make such a distinction.\n>\n> > There's already a distinction.\n> > Snapshot is made per top-level statement and functions/subqueries\n> > use the same snapshot as that of top-level statement.\n>\n> Not so. SetQuerySnapshot is executed per querytree, not per top-level\n> statement --- for example, if a rule generates multiple queries from\n> a user statement, SetQuerySnapshot is called again for each query.\n>\n\nIs it possible that a rule generates multiple queries from\na read(select)-only statement ? If so,the queries must\nbe executed under the same snapshot in order to guaran\ntee read consistency from user's POV.\nAs for non-select queries I'm not sure because read\nconsistency doesn't have much meaning for them.\n\nI just remembered a report from Forest Wilkinson\nabout a month ago [SQL] SQL functions not locking\nproperly?\n\nDon't we have to distiguish simple procedure calls\n(select func();) and function calls as a part of a query ?\nAs I mentioned once before,it seems a problem that\narbitrary functions could be called from queries.\n\nAs for procedures,it seems preferable that each\nstatement of them is treated as a top-level query.\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Thu, 19 Oct 2000 18:24:56 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction"
},
{
"msg_contents": "Hiroshi Inoue <[email protected]> writes:\n> Is it possible that a rule generates multiple queries from\n> a read(select)-only statement ? If so,the queries must\n> be executed under the same snapshot in order to guaran\n> tee read consistency from user's POV.\n> As for non-select queries I'm not sure because read\n> consistency doesn't have much meaning for them.\n\nIn SERIALIZABLE mode everything is done with the first snapshot obtained\n*in the transaction*, which seems correct to me. In READ COMMITTED mode\na new snapshot is taken at every SetQuerySnapshot, which means later\ncommands in an xact can see data committed later than transaction start.\nThe issue here seems to be just how often we want to do\nSetQuerySnapshot.\n\nOne thing that bothers me about the current setup is that\npg_exec_query_string delays calling SetQuerySnapshot until the last\npossible moment before executing a query. In particular, parsing and\nplanning of the first query in a transaction will be done with no\nsnapshot at all! Is this good, and if so why?\n\nI am inclined to think that we should do SetQuerySnapshot in the outer\nloop of pg_exec_query_string, just before calling\npg_analyze_and_rewrite. This would ensure that parse/plan accesses to\nthe database have a snapshot, and would eliminate the question I raised\nyesterday about whether ProcessUtility is missing SetQuerySnapshot\ncalls.\n\nIf we did that, then SetQuerySnapshot would be called once per user-\nwritten command (defining a command as whatever the grammar produces\na single parsetree for, which is probably OK) so long as SPI functions\ndon't try to use pg_exec_query_string...\n\nThen this'd also be an appropriate place to advance now(), if people\nfeel that's more appropriate behavior for now() than the existing one.\n\n> I just remembered a report from Forest Wilkinson\n> about a month ago [SQL] SQL functions not locking\n> properly?\n\nYes, that was on my to-look-at list too. Not sure if it's related.\n\n> Don't we have to distiguish\n> (select func();) and function calls as a part of a query ?\n\n\"select func()\" looks like a query to me. I don't see how you are going\nto make such a distinction in a useful way. If we had a CALL statement\ndistinct from function invocation in expressions, then maybe it'd make\nsense for that context to act differently.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2000 10:29:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction "
},
{
"msg_contents": "> I am inclined to think that we should do SetQuerySnapshot in the outer\n> loop of pg_exec_query_string, just before calling\n> pg_analyze_and_rewrite. This would ensure that parse/plan accesses to\n ^^^^^^^^^^^^^^\nActually not - snapshot is passed as parameter to heap_beginscan...\nAnd currently SnapshotNow is used everywhere.\n\n> If we did that, then SetQuerySnapshot would be called once per user-\n> written command (defining a command as whatever the grammar produces\n> a single parsetree for, which is probably OK) so long as SPI functions\n> don't try to use pg_exec_query_string...\n\nSPI doesn't try this from its birthday in ~6.2\n\nVadim\n\n\n",
"msg_date": "Thu, 19 Oct 2000 11:55:18 -0700",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \n> > I just remembered a report from Forest Wilkinson\n> > about a month ago [SQL] SQL functions not locking\n> > properly?\n> \n> Yes, that was on my to-look-at list too. Not sure if it's related.\n> \n\nAs I replied to his posting,the cause is obvious.\nBecause the queries in a function are executed under\nthe same snapshot,SELECT statements never\nsee the changes made by other backends.\nOTOH SELECT .. FOR UPDATE has a different visibility\nfrom simple SELECT. Yes,SELECT .. FOR UPDATE\ndoesn't guarantee read consistency because it has to\nacquire a lock on the latest tuples.\nI recommended to use SELECT .. FOR UPDATE then\nbut it's far from being reasonable.\n\n> > Don't we have to distiguish simple procedure calls\n> > (select func();) and function calls as a part of a query ?\n> \n> \"select func()\" looks like a query to me. I don't see how you are going\n> to make such a distinction in a useful way. If we had a CALL statement\n> distinct from function invocation in expressions, then maybe it'd make\n> sense for that context to act differently.\n>\n\nAs I mentioned before,calling functions which have strong side effect e.g.\n select strong_effect(column1), column2 from table1 where ...;\nis a problem. IMHO the use of functions should be restricted.\nOf course,we have to call(execute)procedures which change\nthe database. Unfortunately we don't have a command to call\n(execute) functions as procedures currently.\n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Fri, 20 Oct 2000 06:13:16 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: time stops within transaction "
},
{
"msg_contents": "\n\nVadim Mikheev wrote:\n\n> > I am inclined to think that we should do SetQuerySnapshot in the outer\n> > loop of pg_exec_query_string, just before calling\n> > pg_analyze_and_rewrite. This would ensure that parse/plan accesses to\n> ^^^^^^^^^^^^^^\n> Actually not - snapshot is passed as parameter to heap_beginscan...\n> And currently SnapshotNow is used everywhere.\n>\n\nI sometimes mentioned anxieties about the use of SnapshotNow,\nthough I've had no reasonable solution for it.\nSnapshotNow isn't a real snapshot and so it wouldn't be able to\ngive us a complete consistency e.g. in the case \"DDL statements\nin transaction block\".\n\nHowever I couldn't think of any reasonable way how to\nhandle the following cases.\n\nWe would have PREPARE statements in the near future.\nHow does PREPARE use the same snapshot as the execution ?\nWe would never be able to have shared catalog cache.\nWe couldn't delete dropped table files immediately after commit.\n...\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Fri, 20 Oct 2000 08:52:14 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time stops within transaction"
}
]
|
[
{
"msg_contents": "Strangely, the same thing does not happen when I do timenow() instead of\ntime(). This is very counter-intuitive, if this is the way it is supposed\nto work, at least docs should be saying that.\n\nAlso, I checked, and its probably not the fmgr cache, since now() is set\nto be noncacheable...\n\n-alex\n\n",
"msg_date": "Tue, 17 Oct 2000 22:40:33 -0400 (EDT)",
"msg_from": "Alex Pilosov <[email protected]>",
"msg_from_op": true,
"msg_subject": "time stops/workaround"
}
]
|
[
{
"msg_contents": "> BTW, avoiding writes is base WAL feature, ie - it'll be\n> implemented in 7.1.\n\nWow, great, I thought first step was only to avoid sync :-)\n\n> > No, but rollforward is currently the main feature, no ?\n> \n> I'm going to rollback changes on abort in 7.1. Seems I've\n> mentioned both redo and UNDO (without compensation records)\n> AM methods many times.\n\nI don't think that I misunderstood anything here. If the commit \nrecord is in the tx log this tx will have to be rolled forward, and\nnot aborted. Of course open tx's on abort will be rolled back.\nBut this roll forward for committed tx could be a starting point, no?\n\n> > Does it make sense to ship WAL without using it's currently \n> > main feature ?\n> \n> Sorry, but it's not always possible to have all at once.\n\nSorry, my main point was not to argument against WAL in 7.1,\nbut to state, that backup/restore would be very important.\n\n> (BTW, replication server prototype announced by Pgsql, Inc\n> could be used for incremental backup)\n\nYes, that could be a good starting point for rollforward if it is \nbased on WAL.\n\nWe should not call this tx log business \"Incremental backup\"\nan incremental backup scans all pages, and backs\nthem up if they changed in respect to the last higher level backup.\n(full backup, level 1 backup, level 2 backup ....)\nOracle only uses this jargon, since they don't have such a \nbackup, and want to fool their customer's managers. \nAll other DB companies make correct use of the wording \n\"incremental backup\" in the above sense.\n\nAndreas\n",
"msg_date": "Wed, 18 Oct 2000 10:38:09 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: Backup, restore & pg_dump"
}
]
|
[
{
"msg_contents": "\n> > The Upper- and the lower function don't convert the german \n> umlauts (ä.ö.ü.) but leave them in their original condition\n> \n> Gert (or anyone): what should the result be? I'm German-impaired, so\n> you'll need to be more specific. Did you compile with locale \n> turned on?\n> Multi-byte character sets?? \n\nWe usually use ISO8859-1 which is single byte.\n\n> Which byte codes correspond to the lower-\n> and upper-case umlaut characters???\n\nlower --> upper\nä\tÄ\nö\tÖ\nü\tÜ\n\nCurrent CVS does this correctly when compiled with --enable-locale,\nso imho we don't have a bug here, since we can't do this sort of thing without locale.\n\nAndreas\n\nPS: Thomas, can you please forward to the report originator.\n",
"msg_date": "Wed, 18 Oct 2000 11:16:26 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Re: [BUGS] UPPER and LOWER dosen't work correctly o\n\tn special caracters (umlauts)"
},
{
"msg_contents": "> Current CVS does this correctly when compiled with --enable-locale,\n> so imho we don't have a bug here, since we can't do this sort of thing without locale.\n\nGreat.\n\n> PS: Thomas, can you please forward to the report originator.\n\nafaict, no. He entered a name only, not a valid email address :(\n\nVince, how do I go about clearing or updating the online bug reports?\nI'm sure you've told me before, but I need a reminder...\n\n - Thomas\n",
"msg_date": "Wed, 18 Oct 2000 13:44:36 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: [BUGS] UPPER and LOWER dosen't work correctly on\n\tspecial caracters (umlauts)"
}
]
|
[
{
"msg_contents": "Hi all\n\nI see heap_tuple_toast_attrs() calls in heapam.c.\nWhen they are called from heap_update/delete()\n,a buffer is locked.\nOTOH heap_tuple_toast_attrs() calls heap_open(..,\n.., RowExclusiveLock). Hmm,try to acquire a\nRowExclusiveLock while locking a buffer.\nIs there no problem ?\n\nRegards.\n\nHiroshi Inoue\n\n",
"msg_date": "Wed, 18 Oct 2000 18:39:53 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": true,
"msg_subject": "toast operations while locking a buffer"
},
{
"msg_contents": "Hiroshi Inoue <[email protected]> writes:\n> I see heap_tuple_toast_attrs() calls in heapam.c.\n> When they are called from heap_update/delete()\n> ,a buffer is locked.\n\nHm. Seems like it would be better to do the tuple toasting before\nwe lock the target buffer...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Oct 2000 12:07:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: toast operations while locking a buffer "
}
]
|
[
{
"msg_contents": "\n> >We should not call this tx log business \"Incremental backup\"\n> >an incremental backup scans all pages, and backs\n> >them up if they changed in respect to the last higher level backup.\n> >(full backup, level 1 backup, level 2 backup ....)\n> \n> You may be tying implementation too closely to function; so long as\n> succesive incremental backups are (a) incremental since the last higher\n> level backup, and (b) sufficient to restore the database when cobined with\n> other incremental backups, then ISTM that the method of deriving the backup\n> (from logs, or reading data pages) is irrelevant.\n\nYes, definitely, but they actually refer to their redo logs directly,\nnot something extracted from the logs like you describe.\nThey call tx log backup incremental backup which we should avoid.\n\n> I too am used to incremental backups that actually read data pages, but if\n> the WAL has enough information to determine the pages, the why not use\n> it...but maybe I'm missing something.\n\nSomething to think about later, but imho reading the data pages is The Way.\n\nDo we have the info when a page was last modified (in respect to a WAL position,\nnot wallclock time) on each page ? This is probably an info we will need.\n\nAndreas\n",
"msg_date": "Wed, 18 Oct 2000 12:03:59 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: Backup, restore & pg_dump"
}
]
|
[
{
"msg_contents": "At 10:38 18/10/00 +0200, Zeugswetter Andreas SB wrote:\n>\n>We should not call this tx log business \"Incremental backup\"\n>an incremental backup scans all pages, and backs\n>them up if they changed in respect to the last higher level backup.\n>(full backup, level 1 backup, level 2 backup ....)\n\nYou may be tying implementation too closely to function; so long as\nsuccesive incremental backups are (a) incremental since the last higher\nlevel backup, and (b) sufficient to restore the database when cobined with\nother incremental backups, then ISTM that the method of deriving the backup\n(from logs, or reading data pages) is irrelevant.\n\nI too am used to incremental backups that actually read data pages, but if\nthe WAL has enough information to determine the pages, the why not use\nit...but maybe I'm missing something.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 18 Oct 2000 20:39:59 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: AW: Backup, restore & pg_dump"
}
]
|
[
{
"msg_contents": "The following patch was sent to the patches list:\n\nThis patch forces the use of 'DROP VIEW' to destroy views.\n\nIt also changes the syntax of DROP VIEW to\nDROP VIEW v1, v2, ...\nto match the syntax of DROP TABLE.\n\nSome error messages were changed so this patch also includes changes to the\nappropriate expected/*.out files.\n\nDoc changes for 'DROP TABLE\" and 'DROP VIEW' are included.\n\n\n-- \nMark Hollomon\n",
"msg_date": "Wed, 18 Oct 2000 11:12:42 -0400",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": true,
"msg_subject": "DRP TABLE/VIEW patch"
}
]
|
[
{
"msg_contents": "I've just successfully completed an out of the box VPATH build of\nPostgreSQL (i.e., putting the object files in a different directory\nstructure than the source files). It should be ready to go within the\nnext few days.\n\nThis is an opportune time to sort out the use of the make variables\nCPPFLAGS and CFLAGS, which are used interchangeably in some places. \nUnfortunately, this would mean having to fix each of the targets\n\ndep depend:\n\t$(CC) -MM $(CFLAGS) *.c >depend\n\n(because the preprocessor options like -I and -D would be in CPPFLAGS). \nI can install a hook to make this work specially without need to fix each\nfile, but that would require GNU make 3.76 for those using `make depend'. \nI think this should not bother anyone too much, but I'm just letting you\nknow. (Of course, `make depend' is obsolescent anyway.)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 18 Oct 2000 19:20:38 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Coming attractions: VPATH build; make variables issue"
},
{
"msg_contents": "> (because the preprocessor options like -I and -D would be in CPPFLAGS). \n> I can install a hook to make this work specially without need to fix each\n> file, but that would require GNU make 3.76 for those using `make depend'. \n> I think this should not bother anyone too much, but I'm just letting you\n> know. (Of course, `make depend' is obsolescent anyway.)\n\nLooks like I have gmake 3.75 on BSD/OS 4.01:\n\t\n\t#$ gmake -v\n\tGNU Make version 3.75, by Richard Stallman and Roland McGrath.\n\tCopyright (C) 1988, 89, 90, 91, 92, 93, 94, 95, 96\n\t Free Software Foundation, Inc.\n\tThis is free software; see the source for copying conditions.\n\tThere is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A\n\tPARTICULAR PURPOSE.\n\t\n\tReport bugs to <[email protected]>.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Oct 2000 13:21:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Coming attractions: VPATH build; make variables issue"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> This is an opportune time to sort out the use of the make variables\n> CPPFLAGS and CFLAGS, which are used interchangeably in some places. \n> Unfortunately, this would mean having to fix each of the targets\n\n> dep depend:\n> \t$(CC) -MM $(CFLAGS) *.c >depend\n\nWhy? Shouldn't CFLAGS include CPPFLAGS? These targets seem correct\nto me as they stand ... other than assuming CC is gcc, but nevermind\nthat...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Oct 2000 13:24:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Coming attractions: VPATH build; make variables issue "
},
{
"msg_contents": "Tom Lane writes:\n\n> > dep depend:\n> > \t$(CC) -MM $(CFLAGS) *.c >depend\n> \n> Why? Shouldn't CFLAGS include CPPFLAGS?\n\nNope. That's what it does now, but the implicit rule is\n\n%.o: %.c\n\t$(CC) -c $(CPPFLAGS) $(CFLAGS)\n\nso if you set CFLAGS to include CPPFLAGS then you get all of it\ndouble. So I have to fix all the rules to say\n\ndep depend:\n\t$(CC) -MM $(CPPFLAGS) $(CFLAGS) *.c >depend\n\nI just notice that the workaround I had in mind won't work so well, so I\nguess I'll have to fix it the hard way.\n\n\n> These targets seem correct to me as they stand ... other than assuming\n> CC is gcc, but nevermind that...\n\nI'd be glad to add support for other compilers into the --enable-depend\nmechanism, if someone supplies the details.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 18 Oct 2000 21:43:05 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Coming attractions: VPATH build; make variables issue"
},
{
"msg_contents": "Tom Lane writes:\n\n> > dep depend:\n> > \t$(CC) -MM $(CFLAGS) *.c >depend\n> \n> Why? Shouldn't CFLAGS include CPPFLAGS? These targets seem correct\n> to me as they stand ... other than assuming CC is gcc, but nevermind\n> that...\n\nJust a sanity check: Does anyone use `make depend'? Does everyone know\nabout the better way to track dependencies? Does every-/anyone know why\n`make depend' is worse? I just don't want to bother fixing something\nthat's dead anyway...\n\n(helpful reading: http://www.paulandlesley.org/gmake/autodep.html)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 19 Oct 2000 17:58:40 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "make depend (Re: Coming attractions: VPATH build; make\n\tvariables issue)"
},
{
"msg_contents": "On Thu, 19 Oct 2000, Peter Eisentraut wrote:\n\n> Tom Lane writes:\n> \n> > > dep depend:\n> > > \t$(CC) -MM $(CFLAGS) *.c >depend\n> > \n> > Why? Shouldn't CFLAGS include CPPFLAGS? These targets seem correct\n> > to me as they stand ... other than assuming CC is gcc, but nevermind\n> > that...\n> \n> Just a sanity check: Does anyone use `make depend'? Does everyone know\n> about the better way to track dependencies? Does every-/anyone know why\n> `make depend' is worse? I just don't want to bother fixing something\n> that's dead anyway...\n\nUmmmm ... I don't *hangs head* The only place I've ever really seen it\nused extensively is the FreeBSD OS/kernel builds ...\n\n\n",
"msg_date": "Thu, 19 Oct 2000 13:00:28 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make depend (Re: Coming attractions: VPATH build; make\n\tvariables issue)"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Just a sanity check: Does anyone use `make depend'? Does everyone know\n> about the better way to track dependencies? Does every-/anyone know why\n> `make depend' is worse? I just don't want to bother fixing something\n> that's dead anyway...\n> (helpful reading: http://www.paulandlesley.org/gmake/autodep.html)\n\nWell, you'll still have to touch every makefile :-( --- but I see no\ngood reason not to remove \"make depend\" if we have support for a better\nsolution. Comments anyone?\n\nOne thought here: \"make depend\" has the advantage of being\nnon-intrusive, in the sense that you're not forced to use it and if\nyou don't use it it doesn't cost you anything. In particular,\nnon-developer types probably just want to build from scratch when they\nget a new distribution --- they don't want to expend cycles on making\nuseless (for them) dependency files, and they most certainly don't want\nto be forced to use gcc, nor to install a makedepend tool. I trust what\nyou have in mind doesn't make life worse for people who don't need\ndependency tracking.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2000 13:17:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make depend (Re: Coming attractions: VPATH build;\n\tmake variables issue)"
},
{
"msg_contents": "Tom Lane writes:\n\n> One thought here: \"make depend\" has the advantage of being\n> non-intrusive, in the sense that you're not forced to use it and if\n> you don't use it it doesn't cost you anything. In particular,\n> non-developer types probably just want to build from scratch when they\n> get a new distribution --- they don't want to expend cycles on making\n> useless (for them) dependency files, and they most certainly don't want\n> to be forced to use gcc, nor to install a makedepend tool.\n\nAll of this is true for the \"advanced\" method as well.\n\nThe only advantage of `make depend' is that you can run it after you have\nalready built. But then you have to remember to re-run it all the time,\notherwise the point of having accurate dependencies is gone. So this\nmight be useful for someone installing a patch into a header file and not\nwanting to rebuild from scratch, but that's about it.\n\nWhat we could do is ship the dependencies (.deps/*.P) in the tarball. \nThat would require running an entire build before making a tarball, but it\nwould be a nice service to users.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 19 Oct 2000 22:30:17 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: make depend (Re: Coming attractions: VPATH build; make\n\tvariables issue)"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> What we could do is ship the dependencies (.deps/*.P) in the tarball. \n> That would require running an entire build before making a tarball, but it\n> would be a nice service to users.\n\nHm. It might be handy for people not using gcc, since they'd have no\neasy way to build dependencies for themselves. Do you have an idea\nhow much it'd bloat the tarball to do that?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2000 17:14:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make depend (Re: Coming attractions: VPATH build;\n\tmake variables issue)"
},
{
"msg_contents": " Peter Eisentraut <[email protected]> writes:\n > What we could do is ship the dependencies (.deps/*.P) in the tarball. \n > That would require running an entire build before making a tarball, but it\n > would be a nice service to users.\n\n Hm. It might be handy for people not using gcc, since they'd have no\n easy way to build dependencies for themselves. Do you have an idea\n how much it'd bloat the tarball to do that?\n\nIsn't the basic idea to write Makefile targets to remake dependency\nfiles when they are out of date with code? Won't those targets\ninvolve implicit rules for going, for example, from *.c -> *.d (or\nwhatever convention you use for dependency files)? Don't these\nMakefiles also have a list of srcs to be built, e.g., a make variable\nthat defines a list of *.c filename?\n\nIf so, can't you just implement a depend: target as\n\n ${DEPEND_FILES}+=${SRCS:%c=%d}\n depend: ${DEPEND_FILES}\n .SUFFIXES: .c .d\n .c.d:\n\tgcc -M ...\n .include \"${DEPEND_FILES}\"\n\nFor gmake users, all the magic happens automatically. Prior to\ndistribution, just do make depend to get all the *.d files to include\nin the tarball. For non-gmake users, all the *.d files already exist\nin the source. If they make changes, they can run make depend\nmanually.\n\nSorry if this is what you had in mind already, but the discussion\nseemed to imply that you can't have it both ways.\n\nCheers,\nBrook\n",
"msg_date": "Fri, 20 Oct 2000 08:18:32 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make depend (Re: Coming attractions: VPATH build;\n\tmake variables issue)"
},
{
"msg_contents": "Tom Lane writes:\n\n> Do you have an idea how much it'd bloat the tarball to do that?\n\n\t\tcurrent\t\tdeps\t\tincrease\ngzipped\t\t7.1 MB\t\t35591 bytes\t0.5%\nunpacked\t29MB\t\t1309669 bytes\t4%\n\nShould be okay...\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 20 Oct 2000 20:59:34 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: make depend (Re: Coming attractions: VPATH build; make\n\tvariables issue)"
}
]
|
[
{
"msg_contents": "\n\nI'm trying to use the php/postgresql interface via my apache server.\nWhen I try and load a page containing:\n\n<?php $db = pg_connect( \"database=mydb owner=me\" )\n or die ( \"could not connect\" ) ?>\n\n(both the database and owner are valid and tested via psql)\n\napache complains:\n/usr/libexec/ld.so: Undefined symbol \"_PQconnectdb\" called from httpd:/usr/lib/apache/modules/libphp4.so at 0x4030a394\n\nI have verified through ldconfig that libpq.so.2.0 is being loaded into hints,\nwhat am I missing?\n\nThanks,\n edge\n\n\n",
"msg_date": "18 Oct 2000 17:35:13 GMT",
"msg_from": "Brian Edginton <[email protected]>",
"msg_from_op": true,
"msg_subject": "[HACKERS] pg_connect error"
},
{
"msg_contents": "\nWhen you compiles php, did you ./configure with --with-pgsql? If you did\nnot compile php explicitly telling it to includ pgsql support, it probably\ndidn't.\n\nTravis\n\nBrian Edginton ([email protected]) wrote:\n\n> \n> \n> I'm trying to use the php/postgresql interface via my apache server.\n> When I try and load a page containing:\n> \n> <?php $db = pg_connect( \"database=mydb owner=me\" )\n> or die ( \"could not connect\" ) ?>\n> \n> (both the database and owner are valid and tested via psql)\n> \n> apache complains:\n> /usr/libexec/ld.so: Undefined symbol \"_PQconnectdb\" called from httpd:/usr/lib/apache/modules/libphp4.so at 0x4030a394\n> \n> I have verified through ldconfig that libpq.so.2.0 is being loaded into hints,\n> what am I missing?\n> \n> Thanks,\n> edge\n> \n\n",
"msg_date": "Wed, 18 Oct 2000 13:28:37 -0500",
"msg_from": "Travis Bauer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_connect error"
},
{
"msg_contents": "I don't think that is it. It has been a while, but I thought if you didn't\ncompile php with psql, it returned an \"undefined function\" error.\n\nBy the way... I just found out a little bit ago that the pgsql-php list is\nstill alive. :)\n\nAdam Lang\nSystems Engineer\nRutgers Casualty Insurance Company\n----- Original Message -----\nFrom: \"Travis Bauer\" <[email protected]>\nTo: \"Brian Edginton\" <[email protected]>\nCc: <[email protected]>\nSent: Wednesday, October 18, 2000 2:28 PM\nSubject: Re: [GENERAL] [HACKERS] pg_connect error\n\n\n>\n> When you compiles php, did you ./configure with --with-pgsql? If you did\n> not compile php explicitly telling it to includ pgsql support, it probably\n> didn't.\n>\n> Travis\n>\n> Brian Edginton ([email protected]) wrote:\n>\n> >\n> >\n> > I'm trying to use the php/postgresql interface via my apache server.\n> > When I try and load a page containing:\n> >\n> > <?php $db = pg_connect( \"database=mydb owner=me\" )\n> > or die ( \"could not connect\" ) ?>\n> >\n> > (both the database and owner are valid and tested via psql)\n> >\n> > apache complains:\n> > /usr/libexec/ld.so: Undefined symbol \"_PQconnectdb\" called from\nhttpd:/usr/lib/apache/modules/libphp4.so at 0x4030a394\n> >\n> > I have verified through ldconfig that libpq.so.2.0 is being loaded into\nhints,\n> > what am I missing?\n> >\n> > Thanks,\n> > edge\n> >\n\n",
"msg_date": "Wed, 18 Oct 2000 16:05:50 -0400",
"msg_from": "\"Adam Lang\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_connect error"
},
{
"msg_contents": "And, is the postmaster started with -i?\n* Travis Bauer <[email protected]> [001018 13:49]:\n> \n> When you compiles php, did you ./configure with --with-pgsql? If you did\n> not compile php explicitly telling it to includ pgsql support, it probably\n> didn't.\n> \n> Travis\n> \n> Brian Edginton ([email protected]) wrote:\n> \n> > \n> > \n> > I'm trying to use the php/postgresql interface via my apache server.\n> > When I try and load a page containing:\n> > \n> > <?php $db = pg_connect( \"database=mydb owner=me\" )\n> > or die ( \"could not connect\" ) ?>\n> > \n> > (both the database and owner are valid and tested via psql)\n> > \n> > apache complains:\n> > /usr/libexec/ld.so: Undefined symbol \"_PQconnectdb\" called from httpd:/usr/lib/apache/modules/libphp4.so at 0x4030a394\n> > \n> > I have verified through ldconfig that libpq.so.2.0 is being loaded into hints,\n> > what am I missing?\n> > \n> > Thanks,\n> > edge\n> > \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Wed, 18 Oct 2000 19:07:03 -0500",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_connect error"
},
{
"msg_contents": "Travis Bauer <[email protected]> wrote:\n\n> When you compiles php, did you ./configure with --with-pgsql? If you did\n> not compile php explicitly telling it to includ pgsql support, it probably\n> didn't.\n\nYes I did, and postgresql is installed in the default location. Notice that\nthe pg_connect from the pgsql module (ext/pgsql) is being executed, it's\njust not finding the PQconnectdb function from the libpq.so library.\n\n> Travis\n\n> Brian Edginton ([email protected]) wrote:\n\n>> \n>> \n>> I'm trying to use the php/postgresql interface via my apache server.\n>> When I try and load a page containing:\n>> \n>> <?php $db = pg_connect( \"database=mydb owner=me\" )\n>> or die ( \"could not connect\" ) ?>\n>> \n>> (both the database and owner are valid and tested via psql)\n>> \n>> apache complains:\n>> /usr/libexec/ld.so: Undefined symbol \"_PQconnectdb\" called from httpd:/usr/lib/apache/modules/libphp4.so at 0x4030a394\n>> \n>> I have verified through ldconfig that libpq.so.2.0 is being loaded into hints,\n>> what am I missing?\n>> \n>> Thanks,\n>> edge\n>> \n\n",
"msg_date": "19 Oct 2000 02:29:24 GMT",
"msg_from": "Brian Edginton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_connect error"
},
{
"msg_contents": "Larry Rosenman <[email protected]> wrote:\n> And, is the postmaster started with -i?\n\nYup.\n\n",
"msg_date": "19 Oct 2000 02:29:48 GMT",
"msg_from": "Brian Edginton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_connect error"
},
{
"msg_contents": "Larry Rosenman <[email protected]> wrote:\n> And, is the postmaster started with -i?\n\nYup\n",
"msg_date": "19 Oct 2000 02:30:18 GMT",
"msg_from": "Brian Edginton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_connect error"
},
{
"msg_contents": "I noticed that when I was compiling my copy that if I did not specify \nthe pgsql installation properly, that the ./configure script gave one\nerror in the middle of the ./configure output that was easy to \nmiss. At the end, then it gave a warning that did not mention pgsql at all.\nAre you sure that you correctly specified the install directory of the pgsql\nserver? \n\n-- \n----------------------------------------------------------------\nTravis Bauer | CS Grad Student | IU |www.cs.indiana.edu/~trbauer\n----------------------------------------------------------------\n\nBrian Edginton ([email protected]) wrote:\n\n> Travis Bauer <[email protected]> wrote:\n> \n> > When you compiles php, did you ./configure with --with-pgsql? If you did\n> > not compile php explicitly telling it to includ pgsql support, it probably\n> > didn't.\n> \n> Yes I did, and postgresql is installed in the default location. Notice that\n> the pg_connect from the pgsql module (ext/pgsql) is being executed, it's\n> just not finding the PQconnectdb function from the libpq.so library.\n> \n> > Travis\n> \n> > Brian Edginton ([email protected]\n\n\n\n",
"msg_date": "Wed, 18 Oct 2000 22:45:52 -0500",
"msg_from": "Travis Bauer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_connect error"
}
]
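The undefined-symbol error in this thread — `_PQconnectdb` referenced by `libphp4.so` but never resolved — can be reproduced in miniature with Python's `ctypes`, which performs the same `dlsym` lookup the runtime loader does. This is an illustrative sketch only, not a diagnosis of that particular Apache setup; the symbol names are the ones from the thread.

```python
import ctypes

# dlopen(NULL): the running process's global symbol table, i.e. every
# symbol the dynamic loader has resolved so far.
proc = ctypes.CDLL(None)

# A symbol the loader can resolve (libc is linked into the interpreter):
print(hasattr(proc, "printf"))        # True on POSIX systems

# A symbol nothing has linked in -- the analogue of the Apache error,
# where libphp4.so references PQconnectdb but libpq was never loaded:
print(hasattr(proc, "PQconnectdb"))   # False, unless libpq happens to be loaded
```

A practical check along the lines suggested in the thread would be `nm -D libphp4.so | grep PQconnectdb` (is the symbol undefined in the module?) and `ldd libphp4.so` (does the module record a dependency on libpq at all?).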
|
[
{
"msg_contents": "> > BTW, avoiding writes is base WAL feature, ie - it'll be\n> > implemented in 7.1.\n> \n> Wow, great, I thought first step was only to avoid sync :-)\n\n? If syncs are not required then why to do write call?\n\n> > > No, but rollforward is currently the main feature, no ?\n> > \n> > I'm going to rollback changes on abort in 7.1. Seems I've\n> > mentioned both redo and UNDO (without compensation records)\n> > AM methods many times.\n> \n> I don't think that I misunderstood anything here. If the commit \n> record is in the tx log this tx will have to be rolled forward, and\n> not aborted. Of course open tx's on abort will be rolled back.\n> But this roll forward for committed tx could be a starting point, no?\n\nCurrently records inserted by aborted transactions remain in db\nuntill vacuum. I try to rollback changes - ie *delete* inserted\ntuples on abort (though could do not do this), - isn't there\nsome difference now?\n\n> > > Does it make sense to ship WAL without using it's currently \n> > > main feature ?\n> > \n> > Sorry, but it's not always possible to have all at once.\n> \n> Sorry, my main point was not to argument against WAL in 7.1,\n> but to state, that backup/restore would be very important.\n\nYes but I'm not able to do this work. And note that with WAL in\n7.1 we could add backup/restore in any version > 7.1 (eg 7.1.1).\n\n> > (BTW, replication server prototype announced by Pgsql, Inc\n> > could be used for incremental backup)\n> \n> Yes, that could be a good starting point for rollforward if it is \n> based on WAL.\n\nIt's not.\n\n> We should not call this tx log business \"Incremental backup\"\n> an incremental backup scans all pages, and backs\n> them up if they changed in respect to the last higher level backup.\n> (full backup, level 1 backup, level 2 backup ....)\n> Oracle only uses this chargon, since they don't have such a \n> backup, and want to fool their customer's managers. 
\n> All other DB companies make correct use of the wording \n> \"incremental backup\" in the above sense.\n\nScanning *all* pages?! Not the best approach imho.\nOr did you meant scanning log to get last *committed\"\nchanges to all pages - ie some kind of log compression?\n\nVadim\n",
"msg_date": "Wed, 18 Oct 2000 14:40:31 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: AW: Backup, restore & pg_dump"
}
]
|
[
{
"msg_contents": "> Do we have the info when a page was last modified (in respect \n> to a WAL position, not wallclock time) on each page ? This is\n> probably an info we will need.\n\nHow else one could know was a change applied to page or not?\n\nVadim\n",
"msg_date": "Wed, 18 Oct 2000 14:42:57 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: AW: AW: Backup, restore & pg_dump"
}
]
|
[
{
"msg_contents": "I have removed the new --export-dynamic item from the Solaris FAQ. \nLooks like 7.1 has it fixed already.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Oct 2000 23:19:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Solaris FAQ"
},
{
"msg_contents": "Does anyone know if there is an operator class that supports the money data\ntype?\nWhen I issue a command like :\ncreate index \"table1_price_idx\" on \"table\" using btree (\"price\"\n\"money_ops\");\nor\ncreate index \"table1_price_idx\" on \"table\" using btree (\"price\"\n\"float8_ops\");\nI get errors like :\nERROR: DefineIndex: opclass \"money_ops\" not found\nand\nERROR: DefineIndex: opclass \"float8_ops\" does not accept datatype \"money\"\nrespectively.\nI looked everywhere for documentation about this, and asked this question\nelsewhere, but nobody knew anything.\nThanks in advance.\nRich Ryan\n\n",
"msg_date": "Wed, 18 Oct 2000 21:54:17 -0700",
"msg_from": "\"Rich Ryan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Index Ops supporting money type"
},
{
"msg_contents": "\"Rich Ryan\" <[email protected]> writes:\n> Does anyone know if there is an operator class that supports the money data\n> type?\n\nThere is not.\n\nType money is on its way out anyway, at least in its current\nincarnation, because it's (a) nonstandard, (b) doesn't support\na reasonable number of digits, and (c) isn't internationalizable.\nWhile (a) is not a fatal objection, (b) and (c) mean the type is\npretty badly crippled.\n\nI doubt that anyone will want to do any work on money in its present\nform. Sooner or later it ought to be reimplemented as an I/O skin\non type \"numeric\" ... hopefully with decent locale support. In the\nmeantime, efforts like adding index support seem like throwing good\nwork after bad.\n\nI suggest using type numeric for now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2000 01:40:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index Ops supporting money type "
},
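A minimal illustration of why the advice above points at `numeric` for currency values (a Python sketch of the arithmetic, not PostgreSQL code): binary floating point cannot represent most decimal cent amounts exactly, while arbitrary-precision decimal arithmetic — the behaviour `numeric` provides — can.

```python
from decimal import Decimal

# Binary floats accumulate representation error on decimal cents:
print(0.1 + 0.2 == 0.3)   # False: 0.1 + 0.2 is 0.30000000000000004

# Exact decimal arithmetic, analogous to what NUMERIC stores:
print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))   # True
```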
{
"msg_contents": "Bruce Momjian writes:\n\n> I have removed the new --export-dynamic item from the Solaris FAQ. \n> Looks like 7.1 has it fixed already.\n\nNot that I could tell.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 19 Oct 2000 16:48:35 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Solaris FAQ"
},
{
"msg_contents": "I see in makefiles/Makefile.solaris:\n\n%.so: %.o\n\t$(LD) -G -Bdynamic -o $@ $<\n ^^^^^^^^^\n\nIs that OK?\n\n\n Bruce Momjian writes:\n> \n> > I have removed the new --export-dynamic item from the Solaris FAQ. \n> > Looks like 7.1 has it fixed already.\n> \n> Not that I could tell.\n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 19 Oct 2000 11:56:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Solaris FAQ"
},
{
"msg_contents": "Oh, OK.\n\n> Bruce Momjian writes:\n> \n> > I see in makefiles/Makefile.solaris:\n> > \n> > %.so: %.o\n> > \t$(LD) -G -Bdynamic -o $@ $<\n> > ^^^^^^^^^\n> > \n> > Is that OK?\n> \n> This is unrelated. The issue at hand is that the postgres/postmaster\n> executable must export its symbols so that the dynamically loaded modules\n> (e.g., PL handlers) can refer back to them. Normally this is done using\n> -Wl,-E when linking the postmaster executable. However, the Solaris\n> linker does not have this option, I believe it does it by default. So we\n> have to first find out what linker is being used and then write something\n> like\n> \n> if {GNU ld}\n> export_dynamic := -Wl,-E\n> endif\n> \n> > \n> > \n> > Bruce Momjian writes:\n> > > \n> > > > I have removed the new --export-dynamic item from the Solaris FAQ. \n> > > > Looks like 7.1 has it fixed already.\n> > > \n> > > Not that I could tell.\n> > > \n> > > -- \n> > > Peter Eisentraut [email protected] http://yi.org/peter-e/\n> > > \n> > > \n> > \n> > \n> > \n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 19 Oct 2000 12:12:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Solaris FAQ"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> I see in makefiles/Makefile.solaris:\n> \n> %.so: %.o\n> \t$(LD) -G -Bdynamic -o $@ $<\n> ^^^^^^^^^\n> \n> Is that OK?\n\nThis is unrelated. The issue at hand is that the postgres/postmaster\nexecutable must export its symbols so that the dynamically loaded modules\n(e.g., PL handlers) can refer back to them. Normally this is done using\n-Wl,-E when linking the postmaster executable. However, the Solaris\nlinker does not have this option, I believe it does it by default. So we\nhave to first find out what linker is being used and then write something\nlike\n\nif {GNU ld}\nexport_dynamic := -Wl,-E\nendif\n\n> \n> \n> Bruce Momjian writes:\n> > \n> > > I have removed the new --export-dynamic item from the Solaris FAQ. \n> > > Looks like 7.1 has it fixed already.\n> > \n> > Not that I could tell.\n> > \n> > -- \n> > Peter Eisentraut [email protected] http://yi.org/peter-e/\n> > \n> > \n> \n> \n> \n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 19 Oct 2000 18:14:26 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Solaris FAQ"
}
]
|
[
{
"msg_contents": "I notice that ProcessUtility() calls SetQuerySnapshot() for FETCH\nand COPY TO statements, and nothing else.\n\nSeems to me this is very broken. Isn't a query snapshot needed for\nany utility command that might do database accesses?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2000 00:43:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "SetQuerySnapshot() for utility statements"
},
{
"msg_contents": "> I notice that ProcessUtility() calls SetQuerySnapshot() for FETCH\n> and COPY TO statements, and nothing else.\n>\n> Seems to me this is very broken. Isn't a query snapshot needed for\n> any utility command that might do database accesses?\n\nNot needed. We don't support multi-versioning for schema operations.\nMore of that, sometimes it would be better to read *dirty* data from\nsystem tables - so, no snapshot required.\n\nWhat is really, hm, not good is that first SetQuerySnapshot defines\nserializable snapshot for *all* transactions, even for ones with read\ncommitted\nisolevel: in the times of 6.5 I thought about ability to switch between\nisolevels\ninside single xaction - this is not required by standard and *bad* for\nsystem:\njust remember that vacuum doesn't clean up deleted tuples if there is some\ntransaction *potentially* interested in them. For read committed xactions\nmust be no serializable snapshot defined and MyProc->xmin must be\nupdated when *each* top-level query begins.\n\nVadim\n\n\n",
"msg_date": "Wed, 18 Oct 2000 22:13:18 -0700",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SetQuerySnapshot() for utility statements"
}
]
|
[
{
"msg_contents": "\n> what happens to sessions is that it does:\n> \n> SELECT session_data, id \n> FROM sessions\n> WHERE id = ?\n> FOR UPDATE\n> \n> .... client does some processing ...\n> \n> UPDATE sesssions set session_data = ? WHERE id = ?;\n> \n> (this is where the error happens)\n> \n> I think part of my problem might be that sessions is a view \n> and not a table,\n\nDid you create an on update do instead rule ?\n\nThis is currently not done automatically for views,\nthus views without additional \"create rule\"s are select only.\n\nBut, I am wondering whether the \"for update\" places the correct lock ?\n\nAndreas\n",
"msg_date": "Thu, 19 Oct 2000 10:21:04 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: The lightbulb just went on... "
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n>> SELECT session_data, id \n>> FROM sessions\n>> WHERE id = ?\n>> FOR UPDATE\n>>\n>> I think part of my problem might be that sessions is a view \n>> and not a table,\n\n> Did you create an on update do instead rule ?\n> This is currently not done automatically for views,\n> thus views without additional \"create rule\"s are select only.\n> But, I am wondering whether the \"for update\" places the correct lock ?\n\nHmm, good point! I'm not sure what \"select for update\" on a view ought\nto do, but I am pretty sure that the code will not do anything useful\nor sensible for this case...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2000 10:11:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: The lightbulb just went on... "
},
{
"msg_contents": "\n\nOn Thu, 19 Oct 2000, Tom Lane wrote:\n\n> Zeugswetter Andreas SB <[email protected]> writes:\n> >> SELECT session_data, id \n> >> FROM sessions\n> >> WHERE id = ?\n> >> FOR UPDATE\n> >>\n> >> I think part of my problem might be that sessions is a view \n> >> and not a table,\n> \n> > Did you create an on update do instead rule ?\n\nYes actually :).\n\nBut Ive since elimintated the rule and figured out I could get\nthe equivalent functionality I was getting the the RULE/VIEW by just \nusing a simple PL/pgSQL trigger.\n\nSince doing that, the \"relation XXXXX modified while in use\" errors\nhave gone away, but I'm still not sure I trust VACUUM ANALYZE enough\nto run it on a non-idle production database :). I want to do more\ntesting before I get that brave :).\n\nMike\n\n",
"msg_date": "Sun, 22 Oct 2000 23:46:35 -0500 (CDT)",
"msg_from": "Michael J Schout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: The lightbulb just went on... "
}
]
|
[
{
"msg_contents": "\n> > > BTW, avoiding writes is base WAL feature, ie - it'll be\n> > > implemented in 7.1.\n> > \n> > Wow, great, I thought first step was only to avoid sync :-)\n> \n> ? If syncs are not required then why to do write call?\n\nYes, writes are only necessary when \"too many dirty pages\"\nare in the buffer pool. Those writes can be done by a page flusher\non demand or during checkpoint (don't know if we need checkpoint,\nbut you referred to doing checkpoints).\n\n> Currently records inserted by aborted transactions remain in db\n> untill vacuum. I try to rollback changes - ie *delete* inserted\n> tuples on abort (though could do not do this), - isn't there\n> some difference now?\n\nthis has two sides: less space wastage, but slower rollback.\nWhat you state is a separate issue, and is not part of the\nstartup rollforward for committed tx'ns. It is part of the rollback\nof aborted tx'ns.\n\n> > We should not call this tx log business \"Incremental backup\"\n> > an incremental backup scans all pages, and backs\n> > them up if they changed in respect to the last higher level backup.\n> > (full backup, level 1 backup, level 2 backup ....)\n> > Oracle only uses this chargon, since they don't have such a \n> > backup, and want to fool their customer's managers. \n> > All other DB companies make correct use of the wording \n> > \"incremental backup\" in the above sense.\n> \n> Scanning *all* pages?! Not the best approach imho.\n> Or did you meant scanning log to get last *committed\"\n> changes to all pages - ie some kind of log compression?\n\nYou could call an incremental backup some kind of log compression, yes,\nbut it is usually done by reading base data pages (an optimization would be to \nskip page ranges that are known to not have changed, but how do you know) \n\nIn the backup I have in mind there are 3 things:\n\n1. full backup\n2. tx log backup\n3. 
incremental backup with multiple levels\n\nPoint 3 is something to keep in mind, but do later.\nOn restore you do\n\nrestore full backup (level 0)\nrestore incremental backup level 1 (pages that changed after level 0)\nrestore incremental backup level 2 (pages that changed after level 1)\nrestore tx logs that were written after level 2 \n\nThe incremental backups are especially useful in an overwrite smgr,\nwhere you can get a lot of tx log that only changes a few pages.\n\nAndreas\n",
"msg_date": "Thu, 19 Oct 2000 11:25:05 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: Backup, restore & pg_dump"
},
{
"msg_contents": "> Yes, writes are only necessary when \"too many dirty pages\"\n> are in the buffer pool. Those writes can be done by a page flusher\n> on demand or during checkpoint (don't know if we need checkpoint,\n> but you referred to doing checkpoints).\n\nHow else to know from where in log to start redo and how far go back\nfor undo ?\n\n> > Currently records inserted by aborted transactions remain in db\n> > untill vacuum. I try to rollback changes - ie *delete* inserted\n> > tuples on abort (though could do not do this), - isn't there\n> > some difference now?\n> \n> this has two sides: less space wastage, but slower rollback.\n> What you state is a separate issue, and is not part of the\n> startup rollforward for committed tx'ns. It is part of the rollback\n> of aborted tx'ns.\n\nAnd I've stated this as separate feature.\n\nVadim\n\n\n",
"msg_date": "Thu, 19 Oct 2000 13:02:45 -0700",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Backup, restore & pg_dump"
}
]
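The restore ordering Andreas outlines — full backup first, then each incremental level ascending, then the transaction logs written after the last incremental — can be sketched as a small ordering function. This is an illustration of the scheme with made-up labels; none of these names correspond to actual PostgreSQL tools.

```python
def restore_sequence(level_backups, tx_logs):
    """Order the restore steps: level 0 (the full backup) first, then
    each incremental level ascending, then the tx-log segments taken
    after the highest level."""
    steps = [level_backups[lv] for lv in sorted(level_backups)]
    return steps + list(tx_logs)

seq = restore_sequence(
    {2: "incr-level-2", 0: "full", 1: "incr-level-1"},  # input order irrelevant
    ["txlog-0017", "txlog-0018"],
)
print(seq)
# ['full', 'incr-level-1', 'incr-level-2', 'txlog-0017', 'txlog-0018']
```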
|
[
{
"msg_contents": "> I notice that ProcessUtility() calls SetQuerySnapshot() for FETCH\n> and COPY TO statements, and nothing else.\n>\n> Seems to me this is very broken. Isn't a query snapshot needed for\n> any utility command that might do database accesses?\n\nNot needed. We don't support multi-versioning for schema operations.\nMore of that, sometimes it would be better to read *dirty* data from\nsystem tables - so, no snapshot required.\n\nWhat is really, hm, not good is that first SetQuerySnapshot defines\nserializable snapshot for *all* transactions, even for ones with read\ncommitted isolevel: in the times of 6.5 I thought about ability to switch\nbetween isolevels inside single xaction - this is not required by standard\nand *bad* for system: just remember that vacuum doesn't clean up deleted\ntuples if there is some transaction *potentially* interested in them. For\nread\ncommitted xactions must be no serializable snapshot defined and\nMyProc->xmin must be updated when *each* top-level query begins.\n\nVadim\n\n",
"msg_date": "Thu, 19 Oct 2000 04:20:11 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SetQuerySnapshot() for utility statements"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n>> Seems to me this is very broken. Isn't a query snapshot needed for\n>> any utility command that might do database accesses?\n\n> Not needed. We don't support multi-versioning for schema operations.\n\nNo? Seems to me we're almost there. Look for instance at that DROP\nUSER bug I just fixed: it was failing because it wasn't careful to make\nsure that during \"DROP USER foo,bar\", the loop iteration to delete user\nbar would see the changes the first loop iteration had made. So even\nthough we use a lot of table-level locking rather than true MVCC\nbehavior for schema changes, ISTM that we still have to play by all the\nrules when it comes to tuple visibility. In particular I suspect we\nought to be using standard query snapshot behavior...\n\n> More of that, sometimes it would be better to read *dirty* data from\n> system tables - so, no snapshot required.\n\nThere may be a small number of places like that, but for generic utility\noperations like CREATE/DROP USER, I don't see that this is a good idea.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2000 11:04:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SetQuerySnapshot() for utility statements "
},
{
"msg_contents": "> >> Seems to me this is very broken. Isn't a query snapshot needed for\n> >> any utility command that might do database accesses?\n> \n> > Not needed. We don't support multi-versioning for schema operations.\n> \n> No? Seems to me we're almost there. Look for instance at that DROP\n> USER bug I just fixed: it was failing because it wasn't careful to make\n> sure that during \"DROP USER foo,bar\", the loop iteration to delete user\n> bar would see the changes the first loop iteration had made. So even\n ^^^^^^^^^^^^^^^^^^^\nSnapshot defines visibility of changes made by other transactions.\nSeems that you talk here about self-visibility, defined by CommandId.\n\n> though we use a lot of table-level locking rather than true MVCC\n> behavior for schema changes, ISTM that we still have to play by all the\n> rules when it comes to tuple visibility. In particular I suspect we\n> ought to be using standard query snapshot behavior...\n\nWhat would it buy for us? MVCC lies to user - it returns view of data\nas they were some time ago. What would we get by seeing old\nview of catalog?\n\n> > More of that, sometimes it would be better to read *dirty* data from\n> > system tables - so, no snapshot required.\n> \n> There may be a small number of places like that, but for generic utility\n> operations like CREATE/DROP USER, I don't see that this is a good idea.\n\nBut your fix for DROP USER didn't change anything about snapshot used by\nscans so it's not good example.\n\nVadim\n\n\n",
"msg_date": "Thu, 19 Oct 2000 11:24:08 -0700",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SetQuerySnapshot() for utility statements "
},
{
"msg_contents": "\"Vadim Mikheev\" <[email protected]> writes:\n>> bar would see the changes the first loop iteration had made. So even\n> ^^^^^^^^^^^^^^^^^^^\n> Snapshot defines visibility of changes made by other transactions.\n> Seems that you talk here about self-visibility, defined by CommandId.\n\nSure. The example was just to point out that we do have tuple\nvisibility rules, even in utility statements.\n\n>> though we use a lot of table-level locking rather than true MVCC\n>> behavior for schema changes, ISTM that we still have to play by all the\n>> rules when it comes to tuple visibility. In particular I suspect we\n>> ought to be using standard query snapshot behavior...\n\n> What would it buy for us? MVCC lies to user - it returns view of data\n> as they were some time ago. What would we get by seeing old\n> view of catalog?\n\nConsistency. For example: pg_dump wants a consistent view of the\ndatabase, so it runs in a serializable transaction. To the extent that\nit uses utility statements rather than standard SELECTs to look at the\nstate of the system catalogs, it will get the wrong answer if the\nutility statements believe that they can ignore the transaction\nisolation mode setting.\n\nI'm not sure that there are any utility statements that would be useful\nfor pg_dump, but certainly there could be such a thing, no?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2000 14:46:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SetQuerySnapshot() for utility statements "
},
{
"msg_contents": "> >> though we use a lot of table-level locking rather than true MVCC\n> >> behavior for schema changes, ISTM that we still have to play by all the\n> >> rules when it comes to tuple visibility. In particular I suspect we\n> >> ought to be using standard query snapshot behavior...\n>\n> > What would it buy for us? MVCC lies to user - it returns view of data\n> > as they were some time ago. What would we get by seeing old\n> > view of catalog?\n>\n> Consistency. For example: pg_dump wants a consistent view of the\n> database, so it runs in a serializable transaction. To the extent that\n> it uses utility statements rather than standard SELECTs to look at the\n> state of the system catalogs, it will get the wrong answer if the\n> utility statements believe that they can ignore the transaction\n> isolation mode setting.\n>\n> I'm not sure that there are any utility statements that would be useful\n> for pg_dump, but certainly there could be such a thing, no?\n\nI'm not sure are there any utilities just to look into catalog, seems all of\nthem to change them. So, is this reason to change heap_beginscan\nparameters in dozens places?\n(Note! If you just need to call SetQuerySnapshot once per statement\nplease do it - it doesn't matter for subject of this thread).\n\nNow examples why old catalog view is bad: what if serializable xaction S\ntries to use index I to read from user table, and meanwhile another xaction\ndeleted this index. Any reason to abort S? But we'll do this.\nAnd what if index I over int column was changed to index with name \"I\"\nbut over text column? Or function F now needs in 3 args instead of 2\nargs as it was when S started?\n\nLet me repeat - we don't support multi-versioning of metadata (schema).\nSo, changing snapshot parameter of heap_beginscan used for internal ops\nwe'll not make us more happy than we are not.\n\nVadim\n\n\n",
"msg_date": "Thu, 19 Oct 2000 12:57:34 -0700",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SetQuerySnapshot() for utility statements "
}
]
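The distinction Vadim draws — a serializable transaction keeps the snapshot taken at its first query, while a read-committed one should take a fresh snapshot for each top-level query — can be modelled in a few lines. This is a toy model of the visibility rule only, not backend code; the class and method names are invented for the sketch.

```python
committed = {100}   # ids of transactions that have committed so far

class Xact:
    def __init__(self, serializable):
        self.serializable = serializable
        self.snapshot = None

    def run_query(self):
        # SetQuerySnapshot(): serializable keeps its first snapshot,
        # read committed re-snapshots at every top-level query.
        if self.snapshot is None or not self.serializable:
            self.snapshot = frozenset(committed)
        return self.snapshot

ser, rc = Xact(serializable=True), Xact(serializable=False)
ser.run_query(); rc.run_query()   # both start out seeing {100}

committed.add(101)                # some other transaction commits

print(101 in ser.run_query())     # False: still the original snapshot
print(101 in rc.run_query())      # True: fresh snapshot per query
```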
|
[
{
"msg_contents": "> > > Snapshot is made per top-level statement and functions/subqueries\n> > > use the same snapshot as that of top-level statement.\n> >\n> > Not so. SetQuerySnapshot is executed per querytree, not per top-level\n> > statement --- for example, if a rule generates multiple queries from\n> > a user statement, SetQuerySnapshot is called again for each query.\n\nThis is true. I just made it to work as it was in pre-6.5 times - each\nquery of *top level* query list uses own snapshot (in read committed mode\nonly) as if they were submitted by user one by one.\n\nBut functions/subqueries called while executing query uses same snapshot\nas query itself.\n\n> > With the current structure of pg_exec_query_string(), an operation\n> > executed in the outer loop, rather than the inner, would more or less\n> > correspond to one \"top level\" query --- if you want to assume that\n> > pg_exec_query_string() is only called from PostgresMain. That's\n> > true today but hasn't always been true --- I believe it used to be\n> > used to parse SPI commands, and someday it may be again.\n\nIt was never used in SPI. Just look at _SPI_execute. Same parent query\nsnapshot is used in SPI functions. *But* SPI' queries *see* changes\nmade by parent query - I never was sure about this and think I've asked\nother opinions. No opinions - no changes -:)\n\n> If there's no concept of top-level statement,there's no\n> concept of read consistency and MVCC isn't needed.\n\nExcept of the fact that SPI' queries see changes made by parent same\nsnapshot is used all time while executing top-level query (single query,\nnot query list).\n\nVadim\n\n",
"msg_date": "Thu, 19 Oct 2000 04:23:50 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: time stops within transaction"
}
]
|
[
{
"msg_contents": ">From: Tom Lane\n>Fixed in current and back-patched for 7.0.3.\n>\n>\t\t\tregards, tom lane\n\nShould I check out the current pre 7.0.3 CVS and test? If so I think you\ngave the CVS information in a few previous emails on the hackers list so I\nwill look there for it.\n\nThanks for the quick response.\n\nMatt O'Connor\n",
"msg_date": "Thu, 19 Oct 2000 07:15:14 -0500",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Postgre7.0.2 drop user bug "
},
{
"msg_contents": "Matthew <[email protected]> writes:\n> Should I check out the current pre 7.0.3 CVS and test?\n\nSure, the more the merrier ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2000 10:31:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgre7.0.2 drop user bug "
}
]
|
[
{
"msg_contents": ">-----Original Message-----\n>From: Tom Lane\n\n>The correct fix is CommandCounterIncrement() in the DROP USER loop,\n>so that later iterations can see the changes made by prior iterations.\n>\n>\t\t\tregards, tom lane\n\nSince Postgres now supports referential integrity and cascading deletes,\nwouldn't it make more sense to use that code to manage the relationship\nbetween pg_user and pg_group (and probably a wealth of other system tables),\nrather than having to write specific code to manage every relationship\nbetween system tables, or are those types of constraints just not applicable\nto system tables?\n",
"msg_date": "Thu, 19 Oct 2000 07:25:08 -0500",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Postgre7.0.2 drop user bug "
},
{
"msg_contents": "Matthew <[email protected]> writes:\n>> The correct fix is CommandCounterIncrement() in the DROP USER loop,\n>> so that later iterations can see the changes made by prior iterations.\n\n> Since postgre now suppport referential integrity and cascading deletes,\n> wouldn't it make more sense to use that code to manage the relationship\n> between pg_user and pg_group (and probably a wealth of other system tables),\n\nDunno if it's worth the trouble to change. Certainly this particular\nbug would still be a bug, whether the cascaded deletion is done \"by hand\"\nor not. \n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2000 10:33:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgre7.0.2 drop user bug "
}
]
|
[
{
"msg_contents": "It is already built in. Use pg_dump or pg_dumpall when online. Only problem\nis OIDs and large objects. pg_dump -o to keep OIDs not sure about large\nobjects.\n\nHope this is what you mean by live/hot backup.\n\nBen\n\n> -----Original Message-----\n> From: Matthew H. North [mailto:[email protected]]\n> Sent: 19 October 2000 16:41\n> To: [email protected]\n> Subject: [ADMIN] Automation/scheduling of Backup stratetgy\n> \n> \n> \n> Any word on when (if?) live/hot backup will be available?\n> \n> Matthew H. North\n> Software Engineer\n> CTSnet Internet Services\n> t (858) 637-3600\n> f (858) 637-3630\n> mailto:[email protected]\n> \n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Michel Decima\n> Sent: Thursday, October 19, 2000 6:04 AM\n> To: [email protected]; [email protected]\n> Subject: Re: [ADMIN] Automation/scheduling of Backup stratetgy\n> \n> \n> >Is it posible to schedule / automate a backup task and \n> functions to execute\n> >at a pre-defined time at a pre-defined recurrence rate?\n> \n> yes, using the cron daemon and the commands pg_dump or pg_dumpall.\n> \n> The following entry in the postgres user crontab will backup the\n> database every day at 03:01 AM in /dev/null (very usefull)\n> \n> 1 3 * * * /usr/local/pgsql/bin/pg_dumpall > /dev/null\n> \n> just look at\n> man cron\n> man crontab\n> man pg_dumpall\n> \n> MD.\n> \n> \n",
"msg_date": "Thu, 19 Oct 2000 17:31:52 +0100",
"msg_from": "\"Trewern, Ben\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Automation/scheduling of Backup stratetgy"
},
{
"msg_contents": "\nLet me clarify:\n\nBy 'hot backup' I mean some kind of method we can use to keep a live mirror\nof a PostgreSQL database as it's being updated. As it is, the pg_dump\nmethod has quite a large granularity - we currently have a cron job that\nruns pg_dumpall once per day, in the early morning. This means that if we\nhad to revert to one of these daily backups, we have the potential of a\nday's worth of lost data.... Not good.\n\nWe would like to be able to backup the database AS it's being worked on so\nthat if the primary db server drops dead, we have a backup that's pretty\nmuch up-to-date as of the time of the crash.\n\nAgain, any word from the developers on when we might be able to expect this?\n\nMatthew H. North\nSoftware Engineer\nCTSnet Internet Services\nt (858) 637-3600\nf (858) 637-3630\nmailto:[email protected]\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]On Behalf\nOf Trewern, Ben\nSent: Thursday, October 19, 2000 9:32 AM\nTo: [email protected]\nSubject: RE: [ADMIN] Automation/scheduling of Backup stratetgy\n\n\nIt is already built in. Use pg_dump or pg_dumpall when online. Only problem\nis OIDs and large objects. pg_dump -o to keep OIDs not sure about large\nobjects.\nHope this is what you mean by live/hot backup.\nBen\n> -----Original Message-----\n> From: Matthew H. North [mailto:[email protected]]\n> Sent: 19 October 2000 16:41\n> To: [email protected]\n> Subject: [ADMIN] Automation/scheduling of Backup stratetgy\n>\n>\n>\n> Any word on when (if?) live/hot backup will be available?\n>\n> Matthew H. 
North\n> Software Engineer\n> CTSnet Internet Services\n> t (858) 637-3600\n> f (858) 637-3630\n> mailto:[email protected]\n>\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Michel Decima\n> Sent: Thursday, October 19, 2000 6:04 AM\n> To: [email protected]; [email protected]\n> Subject: Re: [ADMIN] Automation/scheduling of Backup stratetgy\n>\n>\n> >Is it posible to schedule / automate a backup task and\n> functions to execute\n> >at a pre-defined time at a pre-defined recurrence rate?\n>\n> yes, using the cron daemon and the commands pg_dump or pg_dumpall.\n>\n> The following entry in the postgres user crontab will backup the\n> database every day at 03:01 AM in /dev/null (very usefull)\n>\n> 1 3 * * * /usr/local/pgsql/bin/pg_dumpall > /dev/null\n>\n> just look at\n> man cron\n> man crontab\n> man pg_dumpall\n>\n> MD.\n>\n>\n\n",
"msg_date": "Thu, 19 Oct 2000 11:40:23 -0700",
"msg_from": "\"Matthew H. North\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Automation/scheduling of Backup stratetgy"
}
]
|
[
{
"msg_contents": "\n\n http://www.l-t.ee/marko/pgsql/pgcrypto-0.1.tar.gz (11k)\n\nHere is a implementation of crypto hashes for PostgreSQL.\nIt exports 2 functions to SQL level:\n\n digest(data::text, hash_name::text)\n which returns hexadecimal coded hash over data by\n specified algorithm. eg\n\n > select digest('blah', 'sha1');\n 5bf1fd927dfb8679496a2e6cf00cbe50c1c87145\n\n (I see no point returning binary hash.. ??)\n\n digest_exists(hash_name::text)::bool\n which reports if particular hash type exists.\n\nIt can be linked with various libraries:\n\n standalone:\n MD5, SHA1\n\n (the code is from KAME project. Actually I hate code\n duplication, but I also want to quarantee that MD5 and\n SHA1 exist)\n\n mhash (0.8.1):\n MD5, SHA1, CRC32, CRC32B, GOST, TIGER, RIPEMD160,\n HAVAL(256,224,192,160,128)\n\n openssl:\n MD5, SHA1, RIPEMD160, MD2\n\n kerberos5 (heimdal):\n MD5, SHA1\n\nAs you can see I am thinking making MD5 and SHA1 standard.\n\nAnd yes, it could be made 'standalone only' but in case of\nOpenSSL and mhash the code is imported very cleanly (my\ncode has no hard-wired hashes) ???\n\nThe code should also be 8-bit clean and TOAST-enabled - but\nI am not 100% sure. And if is declared as digest(text,text)\ndoes it work on bytea?\n\nIt is packaged at the moment as stand-alone package, because\nI am trying to write general autoconf macros for use with outside\npackages. At the moment any package author must generate those\nhimself. Also the contrib stuff should be possible to make use\nof this instead of include ../../Makefile.* so they can enjoy\nthe same (dis)advantages as outside packages.\n\nStatus:\n\n C code - stable\n autoconf - beta\n make install - does not work\n\nIf there is interest I can package it as a contrib or even\nmainstream diff against CVS ???\n\n\n-- \nmarko\n\n",
"msg_date": "Thu, 19 Oct 2000 19:22:20 +0200",
"msg_from": "Marko Kreen <[email protected]>",
"msg_from_op": true,
"msg_subject": "[ANNC][RFC] crypto hashes for PostgreSQL 7.0, 7.1"
},
{
"msg_contents": "\nOn Thu, 19 Oct 2000, Marko Kreen wrote:\n\n> \n> \n> http://www.l-t.ee/marko/pgsql/pgcrypto-0.1.tar.gz (11k)\n> \n> Here is a implementation of crypto hashes for PostgreSQL.\n\n> It can be linked with various libraries:\n> \n> standalone:\n> MD5, SHA1\n> \n> (the code is from KAME project. Actually I hate code\n> duplication, but I also want to quarantee that MD5 and\n> SHA1 exist)\n> \n> mhash (0.8.1):\n> MD5, SHA1, CRC32, CRC32B, GOST, TIGER, RIPEMD160,\n> HAVAL(256,224,192,160,128)\n> \n> openssl:\n> MD5, SHA1, RIPEMD160, MD2\n> \n> kerberos5 (heimdal):\n> MD5, SHA1\n> \n\n If it's really allows use all this hash methods, it's firts complex\nmodule for PG. And if this module *not-contains* some for law problematic \ncrypto code, but call only some standard libs (like SSL) it is cool! But \nyour license? \n\n I vote for include this into PG's contrib... :-) \n\n\t\t\t\t\t\tKarel\n\n\n",
"msg_date": "Thu, 19 Oct 2000 19:31:28 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ANNC][RFC] crypto hashes for PostgreSQL 7.0, 7.1"
},
{
"msg_contents": "Marko Kreen <[email protected]> writes:\n> If there is interest I can package it as a contrib or even\n> mainstream diff against CVS ???\n\nSure, I think people would be interested. It might be best to make it\ncontrib for now, until you are sure you have dealt with portability and\ninstallation issues --- mainstream code has to build everywhere, whereas\nwe are more lenient about contrib...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2000 13:33:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ANNC][RFC] crypto hashes for PostgreSQL 7.0, 7.1 "
},
{
"msg_contents": "On Thu, Oct 19, 2000 at 07:31:28PM +0200, Karel Zak wrote:\n> On Thu, 19 Oct 2000, Marko Kreen wrote:\n> > \n> > http://www.l-t.ee/marko/pgsql/pgcrypto-0.1.tar.gz (11k)\n\nAt the last second I thought that I should generate the 'configure'\nso it's 20k actually...\n\n> > \n> If it's really allows use all this hash methods, it's firts complex\n\nWell, imho it's fairly simple :) look at the code...\n\n> module for PG. And if this module *not-contains* some for law problematic \n> crypto code, but call only some standard libs (like SSL) it is cool! But \n> your license? \n\nUh, PostgreSQL default license / BSD...\n\nIt contains SHA1 and MD5 for the standalone case, but that's not\nmandatory...\n\n> \n> I vote for include this into PG's contrib... :-) \n> \nThanks.\n\n-- \nmarko\n\n",
"msg_date": "Thu, 19 Oct 2000 20:01:16 +0200",
"msg_from": "Marko Kreen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [ANNC][RFC] crypto hashes for PostgreSQL 7.0, 7.1"
},
{
"msg_contents": "On Thu, Oct 19, 2000 at 01:33:35PM -0400, Tom Lane wrote:\n> Marko Kreen <[email protected]> writes:\n> > If there is interest I can package it as a contrib or even\n> > mainstream diff against CVS ???\n> \n> Sure, I think people would be interested. It might be best to make it\n> contrib for now, until you are sure you have dealt with portability and\n> installation issues --- mainstream code has to build everywhere, whereas\n> we are more lenient about contrib...\n> \n\nOk, I send it to -patches shortly.\n\n-- \nmarko\n\n",
"msg_date": "Thu, 19 Oct 2000 20:03:49 +0200",
"msg_from": "Marko Kreen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [ANNC][RFC] crypto hashes for PostgreSQL 7.0, 7.1"
},
{
"msg_contents": "\nOn Thu, 19 Oct 2000, Marko Kreen wrote:\n\n> On Thu, Oct 19, 2000 at 01:33:35PM -0400, Tom Lane wrote:\n> > Marko Kreen <[email protected]> writes:\n> > > If there is interest I can package it as a contrib or even\n> > > mainstream diff against CVS ???\n> > \n> > Sure, I think people would be interested. It might be best to make it\n> > contrib for now, until you are sure you have dealt with portability and\n> > installation issues --- mainstream code has to build everywhere, whereas\n> > we are more lenient about contrib...\n> > \n> \n> Ok, I send it to -patches shortly.\n\n But, please try check and set style of your source and files like \ncurrent CVS contrib tree. I a little clean up contrib tree for 7.1\nand will bad if anyone add again some mazy files. \n\n\t\t\t\t\tKarel\n\n",
"msg_date": "Thu, 19 Oct 2000 20:27:00 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ANNC][RFC] crypto hashes for PostgreSQL 7.0, 7.1"
},
{
"msg_contents": "Marko Kreen writes:\n\n> Here is a implementation of crypto hashes for PostgreSQL.\n\n> It is packaged at the moment as stand-alone package, because\n> I am trying to write general autoconf macros for use with outside\n> packages. At the moment any package author must generate those\n> himself. Also the contrib stuff should be possible to make use\n> of this instead of include ../../Makefile.* so they can enjoy\n> the same (dis)advantages as outside packages.\n\nA coupla comments:\n\n* Your code seems to be quite optimistic about being on a Linux system.\n\n* Use AC_DEFUN, not `define', to create Autoconf macros.\n\n* From PostgreSQL 7.1-to-be on you can detect the location of the include\nand library files with `pg_config --includedir` and `pg_config --libdir`\nrespectively.\n\n(I've been thinking about writing a general-purpose PostgreSQL detection\nmacro for use by third-party products to be included in the PostgreSQL\nsource.)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 19 Oct 2000 20:55:55 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ANNC][RFC] crypto hashes for PostgreSQL 7.0, 7.1"
},
{
"msg_contents": "On Thu, Oct 19, 2000 at 08:55:55PM +0200, Peter Eisentraut wrote:\n> Marko Kreen writes:\n> > Here is a implementation of crypto hashes for PostgreSQL.\n> \n> > It is packaged at the moment as stand-alone package, because\n> > I am trying to write general autoconf macros for use with outside\n> > packages. At the moment any package author must generate those\n> > himself. Also the contrib stuff should be possible to make use\n> > of this instead of include ../../Makefile.* so they can enjoy\n> > the same (dis)advantages as outside packages.\n> \n> A coupla comments:\n> \n> * Your code seems to be quite optimistic about being on a Linux system.\n\nWell, it has a reason :) Anyway the autoconf stuff is alpha and\ncontrib patch I will send will use current makefile system,\nno autoconf.\n\n> * Use AC_DEFUN, not `define', to create Autoconf macros.\n\nOh. Ofcourse. I have played too much with pure m4... (I use it\nfor generating SQL)\n\n> * From PostgreSQL 7.1-to-be on you can detect the location of the include\n> and library files with `pg_config --includedir` and `pg_config --libdir`\n> respectively.\n\nNice. I didn't know.\n\n> (I've been thinking about writing a general-purpose PostgreSQL detection\n> macro for use by third-party products to be included in the PostgreSQL\n> source.)\n\nWell, that happens to be my idea also :) Maybe we can, like, join\nthe effort?\n\n-- \nmarko\n\n",
"msg_date": "Thu, 19 Oct 2000 21:41:33 +0200",
"msg_from": "Marko Kreen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [ANNC][RFC] crypto hashes for PostgreSQL 7.0, 7.1"
},
{
"msg_contents": "On Thu, Oct 19, 2000 at 08:27:00PM +0200, Karel Zak wrote:\n> On Thu, 19 Oct 2000, Marko Kreen wrote:\n> > On Thu, Oct 19, 2000 at 01:33:35PM -0400, Tom Lane wrote:\n> > > Marko Kreen <[email protected]> writes:\n> > > > If there is interest I can package it as a contrib or even\n> > > > mainstream diff against CVS ???\n> > > \n> > > Sure, I think people would be interested. It might be best to make it\n> > > contrib for now, until you are sure you have dealt with portability and\n> > > installation issues --- mainstream code has to build everywhere, whereas\n> > > we are more lenient about contrib...\n> > > \n> > \n> > Ok, I send it to -patches shortly.\n> \n> But, please try check and set style of your source and files like \n> current CVS contrib tree. I a little clean up contrib tree for 7.1\n> and will bad if anyone add again some mazy files. \n\nWhat especially should I note? Could you look current stuff?\n\n. I drop autoconf stuff / use current contrib Makefile stuff\n. add license boilerblates\n\nWhat else needs doing?\n\n-- \nmarko\n\n",
"msg_date": "Thu, 19 Oct 2000 21:46:08 +0200",
"msg_from": "Marko Kreen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [ANNC][RFC] crypto hashes for PostgreSQL 7.0, 7.1"
},
{
"msg_contents": "> > > http://www.l-t.ee/marko/pgsql/pgcrypto-0.1.tar.gz (11k)\n\nFirst of all, thanks for this contribution. I had implemented a similar thing for my own purposes. A problem I still have is using the digest for \"checksumming\" rows in my tables - which is a 'MUST' for medico-legal reasons in my case. I use the following trigger and function (just a proof of concept implementation)\n\nDROP TRIGGER trig_crc ON crclog;\nDROP FUNCTION trigfunc_crc();\n\nCREATE FUNCTION trigfunc_crc()\nRETURNS OPAQUE as '\n # create a string by concatenating all field contents\n set cstr \"\";\n set len [llength $TG_relatts];\n for {set i 1} {$i < $len} {incr i} {\n set istr [lindex $TG_relatts $i]\n # skip the crc field!\n if {[string compare \"crc\" $istr] == 0} continue; \n # beware of NULL fields\n if [catch {set cstr $cstr$NEW($istr)}] continue; \n } \n # calculate the strong hash\n spi_exec \"select pg_crc32(''$cstr'') as crcs\";\n # update the new record\n set NEW(crc) $crcs;\n #spi_exec \"insert into logger(crc) values (''$crcs'')\";\nreturn [array get NEW] \n' LANGUAGE 'pltcl'; \n\nCREATE TRIGGER trig_crc\nBEFORE INSERT OR UPDATE ON crclog\nFOR EACH ROW\nEXECUTE PROCEDURE trigfunc_crc(); \n\n----------------------------------------------------------------------\n\nAs you can see, the trigfunc_crc is fairly generic and will work with any table containing the attribute \"crc\".\n\nHave you found a way of \n- making the trigger generic as well (I hate to rebuild all triggers for 300+ tables whenever I modify trigfunc_crc)\n- any better performing way to implement trigfunc_crc ?\n\nHorst\n\n",
"msg_date": "Sat, 21 Oct 2000 23:27:54 +1000",
"msg_from": "\"Horst Herb\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ANNC][RFC] crypto hashes for PostgreSQL 7.0, 7.1"
},
{
"msg_contents": "On Sat, Oct 21, 2000 at 11:27:54PM +1000, Horst Herb wrote:\n> > > > http://www.l-t.ee/marko/pgsql/pgcrypto-0.1.tar.gz (11k)\n> \n> First of all, thanks for this contribution. I had implemented a\n> similar thing for my own purposes. A problem I still have is using\n> the digest for \"checksumming\" rows in my tables - which is a 'MUST'\n> for medico-legal reasons in my case. I use the following trigger and\n> function (just a proof of concept implementation)\n> \n\n[ pltcl trigger ]\n\n> As you can see, the trigfunc_crc is fairly generic and will work\n> with any table containing the attribute \"crc\".\n> \n> Have you found a way of - making the trigger generic as well (I hate\n> to rebuild all triggers for 300+ tables whenever I modify\n> trigfunc_crc)\n\nYou could do a trigfunc_crc_real which is called from trigfunc_crc? I guess\nyou could then drop/create as you please? Sorry, I do not speak Tcl\nso I can't show how to do it exactly. It will be a bit slower though.\n\n> - any better performing way to implement trigfunc_crc ?\n\nHmm, probably you should at some point drop to C level. It will\nbe a pain, so if the need is not too bad then you should avoid it.\n\n\nBtw, the concept of checksumming rows is kinda new to me.\nI needed this to store passwords on a table, so sorry if I\ncan't be of more help. But I am a little bit curious, why is it\nneeded? Simple checksumming (crc32/md5) does not help against malicious\nchanging of data, only hardware failures, but today's hardware\nhas checksumming built in itself... It would probably make\nmore sense to do some pgp/gpg style signing so you would\nget some authenticity too, but this is hard to implement right.\n\n\n\n-- \nmarko\n\n",
"msg_date": "Sat, 21 Oct 2000 15:39:56 +0200",
"msg_from": "Marko Kreen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [ANNC][RFC] crypto hashes for PostgreSQL 7.0, 7.1"
},
{
"msg_contents": "> Btw, the concept of checksumming rows is kinda new to me.\n> I needed this to store passwords on a table, so sorry if I\n> can't be of more help. But I am a little bit curious, why is it\n> needed? Simple checksumming (crc32/md5) does not help against malicious\n> changing of data, only hardware failures, but today's hardware\n> has checksumming built in itself... It would probably make\n> more sense to do some pgp/gpg style signing so you would\n> get some authenticity too, but this is hard to implement right.\n\n1.) Checksumming a row will alert you when glitches have changed data. Happened twice in 3 years to me with my previous system (with top end hardware!). This is probably due to file system or hardware failures. There is no other way to find out whether such a glitch has happened other than regularly checking the checksums. Despite all progress in hardware, these errors still happen and I have these happenings well documented. Most people will never notice, as they do not use such checking.\n\n2.) We had problems before with tunneled IP connections and corrupted data. These errors are very rare, but again, they can happen - the more complex your network setup is, the more likely you might get a glitch or two per year. I never found out what to blame: the protocol implementation, the server, the client ...\nWith large packet sizes, the checksumming the network protocols use is not as collision proof as one might wish. The same crc works more reliably with small amounts of data than with larger amounts.\n\n3.) This checksumming helps to check whether a complex database setup with lots of triggers and interdependencies really stores the data the way it is supposed to, as you can do the same calculation on the client and compare after commitment. Helps a lot while testing such a setup.\n\nHorst\n\n",
"msg_date": "Sun, 22 Oct 2000 09:47:34 +1000",
"msg_from": "\"Horst Herb\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ANNC][RFC] crypto hashes for PostgreSQL 7.0, 7.1"
}
]
|
[
{
"msg_contents": "Hi.\n\nThis whole message might be a giant brain fart but I had an \ninteresting idea today.\n\nI was confronted by an obscene query plan. I have a table of logins \nthat shows when webmail accounts were created. So a spammer went and \nset up 20 or so spam accounts. So I got a list by his IP and the time \nwhen he set them up. Now to batch cancel them I hacked up a quick \nquery:\nupdate users set enabled='f',disablereason='We do not allow our \nsystem to be used for SPAM.' where id in (select id from users where \nloginid in (select distinct loginid from logins where \nip='123.123.12.12'));\n\nThis is a horrible way to do it and the query plan is even worse:\nNOTICE: QUERY PLAN:\n\nSeq Scan on users (cost=0.00..612996782699.54 rows=18180 width=172)\n SubPlan\n -> Materialize (cost=33718194.83..33718194.83 rows=18180 width=4)\n -> Seq Scan on users (cost=0.00..33718194.83 rows=18180 width=4)\n SubPlan\n -> Materialize (cost=1854.65..1854.65 rows=48 width=12)\n -> Unique (cost=1853.44..1854.65 rows=48 width=12)\n -> Sort (cost=1853.44..1853.44 rows=482 width=12)\n -> Index Scan using logins_ip_idx on logins \n(cost=0.00..1831.97 rows=482 width=12)\n\nGiven that the first and second subplan actually return only 25 rows, \nthere are 2 possibly distillations of this plan:\n\nupdate users set enabled='f',disablereason='We do not allow our \nsystem to be used for SPAM.' 
where id in \n(27082,27083,27084,27085,27086,27087,27088,27089,27090,27091,27092,270\n97,27098,27099,27101,27102,27103,27104,27094,27096,27095,27106,27100,2\n7105,27093);\n\nWhich comes up with a plan:\nNOTICE: QUERY PLAN:\n\nIndex Scan using users_pkey, users_pkey, users_pkey, users_pkey, \nusers_pkey, users_pkey, users_pkey, users_pkey, users_pkey, \nusers_pkey, users_pkey, users_pkey, users_pkey, users_pkey, \nusers_pkey, users_pkey, users_pkey, users_pkey, users_pkey, \nusers_pkey, users_pkey, users_pkey, users_pkey, users_pkey, \nusers_pkey on users (cost=0.00..57.04 rows=2 width=172)\n\nBasically it's going through each of the 25 as though they were \nseparate updates.\n\nThe second and probably less optimal plan would be to create a hash \nof these 25 answers and do a sequential scan on users updating rows \nwhere id is found in that hash.\n\n\nFor these 2 query plans, 1 would be optimal in the event there is a \nsmall list to update, and the other would be ideal in the event there \nis a large list to update. \n\nWhy attempt to formulate a complete query plan at the outset. Could \nyou not break the query into smaller parts and re-optimize after \nevery subplan completes? This way you would have an exact number of \nrows provided from the subplans so more accurate choices could be \nmade farther down the line? This becomes especially relevant on large \njoins and other complex queries.\n\nMaybe I just gave away an idea I could have sold to Oracle for \nmillions, and maybe everyone is already doing this. 
Anyway, it's just \nthoughts and if anyone makes it this far it might be worthwhile for a \nlittle discussion.\n\n-Michael\n_________________________________________________________________\n http://fastmail.ca/ - Fast Free Web Email for Canadians\n>From [email protected] Thu Oct 19 19:12:20 2000\nDate: Thu, 19 Oct 2000 16:12:00 -0700\nFrom: \"Kevin O'Gorman\" <[email protected]>\nTo: PGSQL Hackers List <[email protected]>\nSubject: Rule system goes weird with SELECT queries\n\nI must admit I'm trying to (ab)use the rule system into\nbeing a stored-procedure system, so maybe I'm just getting\nwhat I deserve. However, the results I'm getting are\njust plain weird.\n\nIf I define two rules for the same action, each with \na single select command, I wind up with two selects as\nexpected, but they are both cross-product selects on the\ntwo tables. This is unexpected.\n\nIf I change the grammar rules so that I can have a\ncompound action with two selects, I get two selects,\neach effectively the four-times cross-product of\nthe selects. Talk about exponential growth!!\n\nNow I can see why compound SELECTs were disallowed.\nAnd I can guess why my two separate rules behaved this\nway, sort of. 
But if I'm right, the rules are being\nprocessed by the planner once on creation and again when\nbeing invoked, and something is not quite right about\nit.\n\nBut: does anyone else see a need for a stored-procedure\nfacility, different from function definition? I'm\nprobably going to do it anyway, but if there's support\nfor the idea, I will try to make it conform to the\nstandards of the community. In return for a little\nguidance on that subject.\n\nHere are the details (all tables initially empty):\n\nForm 1: two separate rules gives two cross-products.\n\ncreate rule rule4a as on insert to dummy do instead select * from d2;\ncreate rule rule4b as on insert to dummy do instead select * from d3;\nexplain insert into dummy values(1);\n\npsql:rule4.sql:14: NOTICE: QUERY PLAN:\n \nNested Loop (cost=0.00..30020.00 rows=1000000 width=8)\n -> Seq Scan on d3 (cost=0.00..20.00 rows=1000 width=4)\n -> Seq Scan on d2 (cost=0.00..20.00 rows=1000 width=4)\n \npsql:rule4.sql:14: NOTICE: QUERY PLAN:\n \nNested Loop (cost=0.00..30020.00 rows=1000000 width=8)\n -> Seq Scan on d2 (cost=0.00..20.00 rows=1000 width=4)\n -> Seq Scan on d3 (cost=0.00..20.00 rows=1000 width=4)\n\nEXPLAIN\n \nForm 2: single rule with two SELECT commands gives something\nquite weird apparently a quadruple cross-product, performed\ntwice:\n\ncreate rule rule3 as on insert to dummy do instead (select * from d2;\nselect * from d3;);\nexplain insert into dummy values(1);\n\npsql:rule3.sql:13: NOTICE: QUERY PLAN:\n \nNested Loop (cost=0.00..30030030020.00 rows=1000000000000 width=16)\n -> Nested Loop (cost=0.00..30030020.00 rows=1000000000 width=12)\n -> Nested Loop (cost=0.00..30020.00 rows=1000000 width=8)\n -> Seq Scan on d2 (cost=0.00..20.00 rows=1000 width=4)\n -> Seq Scan on d3 (cost=0.00..20.00 rows=1000 width=4)\n -> Seq Scan on d3 (cost=0.00..20.00 rows=1000 width=4)\n -> Seq Scan on d2 (cost=0.00..20.00 rows=1000 width=4)\n \npsql:rule3.sql:13: NOTICE: QUERY PLAN:\n \nNested Loop 
(cost=0.00..30030030020.00 rows=1000000000000 width=16)\n -> Nested Loop (cost=0.00..30030020.00 rows=1000000000 width=12)\n -> Nested Loop (cost=0.00..30020.00 rows=1000000 width=8)\n -> Seq Scan on d2 (cost=0.00..20.00 rows=1000 width=4)\n -> Seq Scan on d3 (cost=0.00..20.00 rows=1000 width=4)\n -> Seq Scan on d3 (cost=0.00..20.00 rows=1000 width=4)\n -> Seq Scan on d2 (cost=0.00..20.00 rows=1000\nwidth=4) \n\nEXPLAIN\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:[email protected]\nPermanent e-mail forwarder: mailto:Kevin.O'[email protected]\nAt school: mailto:[email protected]\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead\n",
"msg_date": "Thu, 19 Oct 2000 17:01:50 -0400 (EDT)",
"msg_from": "\"Michael Richards\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Conditional query plans."
},
{
"msg_contents": "\"Michael Richards\" <[email protected]> writes:\n> The second and probably less optimal plan would be to create a hash \n> of these 25 answers and do a sequential scan on users updating rows \n> where id is found in that hash.\n\nGiven the presence of the \"materialize\" nodes, I don't think this query\nplan is quite as nonoptimal as you think, especially for ~25 rows out of\nthe subplan. It's a linear search over a 25-entry table for each outer\nrow, but so what? With hundreds or thousands of rows out of the\nsubquery, it'd be nice to have a smarter table lookup method, agreed,\nbut here it hardly matters.\n\nSomething that's been on the todo list for a long time is to try to\nconvert WHERE foo IN (SELECT ...) queries into some kind of join,\ninstead of a subselect. With that approach we'd be able to use merge\nor hash strategies to match up inner and outer rows, which'd work a lot\nbetter when there are large numbers of rows involved. It might actually\nhappen for 7.2...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2000 19:49:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Conditional query plans. "
},
{
"msg_contents": "How does PostgreSQL handles a \"too big\" transaction?\n\nBy that I mean a transaction which, after a certain point, there will be no\nway to roll back. On PgSQL, maybe that only happens when the disk fills. Is\nthere a configurable \"size\" limit for a single transaction?\n\nIn addition, what happens if the disk fills up? Postgres is able to roll\nback, right?\n\nI'm assuming you can prevent the disk from actually filling up (and crashing\nthe whole server) by turning on quotas for the postgres super user, so that\nonly pgsql would complain. Please correct me if I'm wrong.\n\n",
"msg_date": "Thu, 19 Oct 2000 22:59:54 -0200",
"msg_from": "\"Edmar Wiggers\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "\"too big\" transactions"
},
{
"msg_contents": "> update users set enabled='f',disablereason='We do not allow our\n> system to be used for SPAM.' where id in (select id from users where\n> loginid in (select distinct loginid from logins where\n> ip='123.123.12.12'));\n\nWould it run better as:\n\nupdate users set enabled='f',disablereason='We do not allow our\nsystem to be used for SPAM.' where id in (select distinct loginid from\nlogins where\nip='123.123.12.12');\n\nOr perhaps even:\n\nupdate users set enabled='f',disablereason='We do not allow our\nsystem to be used for SPAM.' where id in (select unique id from users,logins\nwhere\nusers.loginid=logins.loginid where ip='123.123.12.12');\n\nI don't know if that helps the query plan, but it looks prettier :)\n\n\n",
"msg_date": "Fri, 20 Oct 2000 09:16:55 -0300",
"msg_from": "\"Continuing Technical Education\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Conditional query plans."
}
]
|
[
{
"msg_contents": "\"Kevin O'Gorman\" <[email protected]> writes:\n> If I define two rules for the same action, each with \n> a single select command, I wind up with two selects as\n> expected, but they are both cross-product selects on the\n> two tables. This is unexpected.\n\nRangetable leakage, sounds like --- the two queries are sharing the same\nlist of rangetable entries, and that's what the planner joins over. Not\nsure if it's *good* that they share the same rtable list, or if that's a\nbug. In any event, it's old news because current sources don't use the\nrtable list to control joining; now there is a separate join tree, which\nis definitely not shared. I get\n\nregression=# create rule rule4a as on insert to dummy do instead select * from\nd2;\nCREATE\nregression=# create rule rule4b as on insert to dummy do instead select * from\nd3;\nCREATE\nregression=# explain insert into dummy values(1);\nNOTICE: QUERY PLAN:\n\nSeq Scan on d2 (cost=0.00..20.00 rows=1000 width=4)\n\nNOTICE: QUERY PLAN:\n\nSeq Scan on d3 (cost=0.00..20.00 rows=1000 width=4)\n\nEXPLAIN\n\nwhich looks fine. Can't check your other example, since the grammar\nfile hasn't been changed to allow it...\n\nI'm not sure whether to recommend that you work from current CVS sources\nor not. A couple weeks ago that's what I would have said, but Vadim is\nhalfway through integrating WAL changes and I'm not sure how stable the\ntip really is. You could try the tip, and if it blows up fall back to\na dated retrieval from about 7-Oct. Or you could investigate the way\nthat the 7.0.* rewriter handles the rtable list for multiple queries,\nbut that's probably not a real profitable use of your time.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2000 19:36:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Rule system goes weird with SELECT queries "
},
{
"msg_contents": "Thanks for the reply. I'll look into setting up CVS -- I've\njust\nbeen using the distributed 7.0.2 actually.\n\nMoreover, the situation is even a bit more confused for me. \nWhen\nI actually *execute* the 'insert into dummy', I get the\noutput\nof only one select: the second one listed. Is there\nsomething\nabout executing a list I don't know about, or is this also\nold news??\n\n++ kevin\n\n\n\nTom Lane wrote:\n> \n> \"Kevin O'Gorman\" <[email protected]> writes:\n> > If I define two rules for the same action, each with\n> > a single select command, I wind up with two selects as\n> > expected, but they are both cross-product selects on the\n> > two tables. This is unexpected.\n> \n> Rangetable leakage, sounds like --- the two queries are sharing the same\n> list of rangetable entries, and that's what the planner joins over. Not\n> sure if it's *good* that they share the same rtable list, or if that's a\n> bug. In any event, it's old news because current sources don't use the\n> rtable list to control joining; now there is a separate join tree, which\n> is definitely not shared. I get\n> \n> regression=# create rule rule4a as on insert to dummy do instead select * from\n> d2;\n> CREATE\n> regression=# create rule rule4b as on insert to dummy do instead select * from\n> d3;\n> CREATE\n> regression=# explain insert into dummy values(1);\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on d2 (cost=0.00..20.00 rows=1000 width=4)\n> \n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on d3 (cost=0.00..20.00 rows=1000 width=4)\n> \n> EXPLAIN\n> \n> which looks fine. Can't check your other example, since the grammar\n> file hasn't been changed to allow it...\n> \n> I'm not sure whether to recommend that you work from current CVS sources\n> or not. A couple weeks ago that's what I would have said, but Vadim is\n> halfway through integrating WAL changes and I'm not sure how stable the\n> tip really is. 
You could try the tip, and if it blows up fall back to\n> a dated retrieval from about 7-Oct. Or you could investigate the way\n> that the 7.0.* rewriter handles the rtable list for multiple queries,\n> but that's probably not a real profitable use of your time.\n> \n> regards, tom lane\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:[email protected]\nPermanent e-mail forwarder: \nmailto:Kevin.O'[email protected]\nAt school: mailto:[email protected]\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead\n",
"msg_date": "Thu, 19 Oct 2000 16:50:51 -0700",
"msg_from": "\"Kevin O'Gorman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rule system goes weird with SELECT queries"
},
{
"msg_contents": "\"Kevin O'Gorman\" <[email protected]> writes:\n> When I actually *execute* the 'insert into dummy', I get the output of\n> only one select: the second one listed. Is there something about\n> executing a list I don't know about, or is this also old news??\n\nIf you're using psql then that doesn't surprise me, because psql submits\nqueries via PQexec, and PQexec is not capable of dealing with multiple\nresult sets per submitted query string --- its API provides no way to\nhandle that, so it just bit-buckets all but the last command result.\n\nThis would work OK in an app that uses PQsendQuery followed by a\nPQgetResult loop, however. See\nhttp://www.postgresql.org/devel-corner/docs/postgres/libpq-async.htm\n\nI've been kind of wanting to update psql to use these lower-level\nroutines, but that item has been languishing in the to-do queue for\na couple years now...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2000 20:10:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Rule system goes weird with SELECT queries "
},
{
"msg_contents": "Tom Lane wrote:\n> I'm not sure whether to recommend that you work from current CVS sources\n> or not. A couple weeks ago that's what I would have said, but Vadim is\n> halfway through integrating WAL changes and I'm not sure how stable the\n> tip really is. You could try the tip, and if it blows up fall back to\n> a dated retrieval from about 7-Oct. Or you could investigate the way\n> that the 7.0.* rewriter handles the rtable list for multiple queries,\n> but that's probably not a real profitable use of your time.\n> \n> regards, tom lane\n\nWell, I tried the tip of the tree today, and initdb fails to\ncomplete,\nso I tried going back to '7 Oct 2000 10:00:00 PST' and it's\nbetter,\nbut regression tests fail on the rule system. It makes the\nserver die.\nSince rules are what I want, this won't do.\n\nI'm not familiar enough with CVS or your changelog system\nwell enough\nto know a good way to find a time-point that might be stable\nenough\nfor me. How would I find out where I need to be??\n\n++ kevin\n\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:[email protected]\nPermanent e-mail forwarder: \nmailto:Kevin.O'[email protected]\nAt school: mailto:[email protected]\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead\n",
"msg_date": "Fri, 20 Oct 2000 13:54:19 -0700",
"msg_from": "\"Kevin O'Gorman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Navigating time-warps in the CVS tree (was re the rule system)"
},
{
"msg_contents": "\"Kevin O'Gorman\" <[email protected]> writes:\n> so I tried going back to '7 Oct 2000 10:00:00 PST' and it's better,\n> but regression tests fail on the rule system. It makes the server\n> die. Since rules are what I want, this won't do.\n\nDetails? AFAIK, the system was operational on 7-Oct; I did not pick\nthat date out of the air. There was a broken version of the\nexpected/rules.out file in place right around then --- see\nhttp://www.postgresql.org/cgi/cvsweb.cgi/pgsql/src/test/regress/expected/rules.out\nBut that'd just have caused a bogus comparison failure, not a server\ncrash. (What was *in the expected file* was a report of a server\ncrash :-(, so if you didn't look carefully at the diff you might've\ngotten confused...)\n\nIf you want a more exact timestamp, try 7-Oct-2000 00:00 PDT which\npredates the BEOS patch breakage, or 8-Oct-2000 00:00 PDT which follows\ncleanup. If either of those fail on your system it'd be useful to know\nabout.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Oct 2000 20:13:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Navigating time-warps in the CVS tree (was re the rule system) "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Kevin O'Gorman\" <[email protected]> writes:\n> > so I tried going back to '7 Oct 2000 10:00:00 PST' and it's better,\n> > but regression tests fail on the rule system. It makes the server\n> > die. Since rules are what I want, this won't do.\n> \n> Details? AFAIK, the system was operational on 7-Oct; I did not pick\n> that date out of the air. There was a broken version of the\n> expected/rules.out file in place right around then --- see\n> http://www.postgresql.org/cgi/cvsweb.cgi/pgsql/src/test/regress/expected/rules.out\n> But that'd just have caused a bogus comparison failure, not a server\n> crash. (What was *in the expected file* was a report of a server\n> crash :-(, so if you didn't look carefully at the diff you might've\n> gotten confused...)\n> \n> If you want a more exact timestamp, try 7-Oct-2000 00:00 PDT which\n> predates the BEOS patch breakage, or 8-Oct-2000 00:00 PDT which follows\n> cleanup. If either of those fail on your system it'd be useful to know\n> about.\n> \n> regards, tom lane\n\nIt's odd. I had already tried \"8 Oct 2000 10:00:00 PDT\" on\none system \n(RedHat Linux 6.1), and it had worked. Today I'm building\non a Caldera\n2.3 system, and both the 00:00 and 10:00 builds fail.\n\nI've attached the output of the make. Could I have a bad\ncopy of this\nsource file? How could I tell (not knowing much about CVS,\nI'm\ndisinclined to perform random experiments).\n\n++ kevin\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:[email protected]\nPermanent e-mail forwarder: \nmailto:Kevin.O'[email protected]\nAt school: mailto:[email protected]\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead",
"msg_date": "Sun, 22 Oct 2000 21:37:36 -0700",
"msg_from": "\"Kevin O'Gorman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Navigating time-warps in the CVS tree (was re the rule system)"
},
{
"msg_contents": "\"Kevin O'Gorman\" <[email protected]> writes:\n> It's odd. I had already tried \"8 Oct 2000 10:00:00 PDT\" on one system\n> (RedHat Linux 6.1), and it had worked. Today I'm building on a\n> Caldera 2.3 system, and both the 00:00 and 10:00 builds fail.\n\nHm. Portability bug maybe? But I can't tell with no info.\n\n> I've attached the output of the make.\n\nUh, it looked more like an amazon.com search from here...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 00:42:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Navigating time-warps in the CVS tree (was re the rule system) "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Kevin O'Gorman\" <[email protected]> writes:\n> > It's odd. I had already tried \"8 Oct 2000 10:00:00 PDT\" on one system\n> > (RedHat Linux 6.1), and it had worked. Today I'm building on a\n> > Caldera 2.3 system, and both the 00:00 and 10:00 builds fail.\n> \n> Hm. Portability bug maybe? But I can't tell with no info.\n> \n> > I've attached the output of the make.\n> \n> Uh, it looked more like an amazon.com search from here...\n> \n> regards, tom lane\n\nUh, so it does. How embarassing. I've been having MORE trouble\nwith Netscape... Anyway, here it is again.\n\nIn the meantime, I did a diff with the version on a system that \nmade okay, and there are no source differences in the pg_backup_custom.c\nfile.\n\nIf we get browser junk again, here is the tail of the file\nvia cut-and-paste; there are about 100 lines of error output total:\n\npg_backup_custom.c: In function `_DoDeflate':\npg_backup_custom.c:846: `z_streamp' undeclared (first use in this function)\npg_backup_custom.c:846: parse error before `zp'\npg_backup_custom.c:849: `ctx' undeclared (first use in this function)\npg_backup_custom.c:852: `AH' undeclared (first use in this function)\npg_backup_custom.c:854: `zp' undeclared (first use in this function)\npg_backup_custom.c:854: `flush' undeclared (first use in this function)\npg_backup_custom.c: In function `_EndDataCompressor':\npg_backup_custom.c:912: `ctx' undeclared (first use in this function)\npg_backup_custom.c:912: parse error before `)'\npg_backup_custom.c:913: `z_streamp' undeclared (first use in this function)\npg_backup_custom.c:918: `zp' undeclared (first use in this function)\nmake[3]: *** [pg_backup_custom.o] Error 1\nmake[3]: Leaving directory `/usr/local/src/pgsql/src/bin/pg_dump'\nmake[2]: *** [all] Error 2\nmake[2]: Leaving directory `/usr/local/src/pgsql/src/bin'\nmake[1]: *** [all] Error 2\nmake[1]: Leaving directory `/usr/local/src/pgsql/src'\nmake: *** [all] Error 2\n[kevin@trixie pgsql]$ 
exit\nScript done, file is typescript\n\n++ kevin\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:[email protected]\nPermanent e-mail forwarder: mailto:Kevin.O'[email protected]\nAt school: mailto:[email protected]\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead",
"msg_date": "Mon, 23 Oct 2000 11:09:16 -0700",
"msg_from": "\"Kevin O'Gorman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Navigating time-warps in the CVS tree (was re the rule system)"
},
{
"msg_contents": "\"Kevin O'Gorman\" <[email protected]> writes:\n> pg_backup_custom.c: In function `_DoDeflate':\n> pg_backup_custom.c:846: `z_streamp' undeclared (first use in this function)\n> pg_backup_custom.c:846: parse error before `zp'\n> pg_backup_custom.c:849: `ctx' undeclared (first use in this function)\n> pg_backup_custom.c:852: `AH' undeclared (first use in this function)\n> pg_backup_custom.c:854: `zp' undeclared (first use in this function)\n> pg_backup_custom.c:854: `flush' undeclared (first use in this function)\n> pg_backup_custom.c: In function `_EndDataCompressor':\n> pg_backup_custom.c:912: `ctx' undeclared (first use in this function)\n> pg_backup_custom.c:912: parse error before `)'\n> pg_backup_custom.c:913: `z_streamp' undeclared (first use in this function)\n> pg_backup_custom.c:918: `zp' undeclared (first use in this function)\n\nHmm. Looks like Philip neglected to see to it that pg_dump will compile\nwhen libz is not present --- the \"#include <zlib.h>\" is properly ifdef'd\nout, but the code isn't. Over to you, Philip ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 17:22:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Navigating time-warps in the CVS tree (was re the rule system) "
}
]
|
[
{
"msg_contents": "I suggest that the CREATE RULE syntax be relaxed so that\nit is legal to have a list of SELECT commands in a rule.\n\nI'll argue that:\n 1) The change is simple (2 lines in gram.y). Diff is\nattached.\n 2) It breaks nothing (more things become legal)\n 3) It makes the parser agree with the published syntax,\n which currently makes no distinction about SELECT\ncommands.\n 4) It makes the language more \"regular\" in that SELECT\n commands are no longer special.:\n\nDiff is an attachment because of line-wrapping. I'm new\nhere\nso I don't know if this works. But as I said it's only 2\nlines.\n\n++ kevin \n\n\n\n--- \nKevin O'Gorman (805) 650-6274 mailto:[email protected]\nPermanent e-mail forwarder: \nmailto:Kevin.O'[email protected]\nAt school: mailto:[email protected]\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead",
"msg_date": "Thu, 19 Oct 2000 17:52:30 -0700",
"msg_from": "\"Kevin O'Gorman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Proposed relaxation of CREATE RULE syntax"
},
{
"msg_contents": "\"Kevin O'Gorman\" <[email protected]> writes:\n> I suggest that the CREATE RULE syntax be relaxed so that\n> it is legal to have a list of SELECT commands in a rule.\n\nI don't have any strong objection to this myself, but it would probably\nbe a good idea to wait and see what Jan thinks of it before we change\nit. (Kevin, Jan Wieck is the main developer of our rules stuff, and\nis more likely to know about any gotchas in the idea than the rest of\nus.) I think Jan is still in Poland someplace, but hopefully he'll\nbe back in touch before long.\n\n> 2) It breaks nothing (more things become legal)\n\nGiven our earlier discussions, it might only be that it exposes things\nthat don't work :-(.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2000 20:59:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposed relaxation of CREATE RULE syntax "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Kevin O'Gorman\" <[email protected]> writes:\n> > I suggest that the CREATE RULE syntax be relaxed so that\n> > it is legal to have a list of SELECT commands in a rule.\n> \n> I don't have any strong objection to this myself, but it would probably\n> be a good idea to wait and see what Jan thinks of it before we change\n> it. (Kevin, Jan Wieck is the main developer of our rules stuff, and\n> is more likely to know about any gotchas in the idea than the rest of\n> us.) I think Jan is still in Poland someplace, but hopefully he'll\n> be back in touch before long.\n> \n> > 2) It breaks nothing (more things become legal)\n> \n> Given our earlier discussions, it might only be that it exposes things\n> that don't work :-(.\n> \n> regards, tom lane\n\nOf course, wait for the guy in charge of this area. It's\nnot urgent for\nme anyway, because I'm going to be working with a variant\npretty soon\nanyhow, to support my research. (I'm gonna have to do things\nin the\nbackend that you would not want distributed).\n\n++ kevin\n\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:[email protected]\nPermanent e-mail forwarder: \nmailto:Kevin.O'[email protected]\nAt school: mailto:[email protected]\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead\n",
"msg_date": "Thu, 19 Oct 2000 19:45:34 -0700",
"msg_from": "\"Kevin O'Gorman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Proposed relaxation of CREATE RULE syntax"
}
]
|
[
{
"msg_contents": "FYI, it is 376k lines of C code, not bytes.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> What is amazing, is that you can make such complete system on Linux with\n> only 376k of code... \n> \n> I think bloated software is not part of your dictionnary, and that's good...\n> \n> Franck Martin\n> Database Development Officer\n> SOPAC South Pacific Applied Geoscience Commission\n> Fiji\n> E-mail: [email protected] <mailto:[email protected]> \n> Web site: http://www.sopac.org/ <http://www.sopac.org/> \n> \n> This e-mail is intended for its recipients only. Do not forward this\n> e-mail without approval. The views expressed in this e-mail may not be\n> neccessarily the views of SOPAC.\n> \n> \n> \n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Friday, October 20, 2000 1:03 PM\n> To: Ross J. Reedstrom\n> Cc: Tom Lane; PostgreSQL-development\n> Subject: Re: [HACKERS] Now 376175 lines of code\n> \n> \n> Never mind. I see I ran it already on 7.0 and got 376k. You used my\n> idential script to get these numbers. I will use your nice numbers for\n> a presentation at the show in two weeks. Thanks a lot.\n> \n> \n> \n> > On Thu, May 11, 2000 at 01:45:31AM -0400, Tom Lane wrote:\n> > > Bruce Momjian <[email protected]> writes:\n> > > > I did a distclean on 7.0, and ran 'wc' on all the *.[chly] files, and\n> > > > got a much larger number than what we got from Berkeley.\n> > > > \t376175\n> > > > Seems someone has been busy. :-)\n> > > \n> > > Forgive a newbie --- what was the count for the original Berkeley code?\n> > > Do you have the same numbers for other milestones?\n> > \n> > Not that I'm a big believer in kloc as a measure of productivity (oh,\n> > Bruce just said busy, didn't he? 
That's a different story...), I happen\n> > to have a couple historical trees laying around, starting with the last\n> > one I found at Berkeley:\n> > \n> > postgres-v4r2 244581\n> > postgres95-1.09 178976\n> > postgresql-6.1.1 200709\n> > postgresql-6.3.2 260809\n> > postgresql-6.4.0 297479\n> > postgresql-6.4.2 297918\n> > postgresql-6.5.3 331278 \n> > \n> > Well, more than a couple trees, I guess (actually I unpacked tarballs\n> > for most of these)\n> > \n> > HTH,\n> > Ross\n> > -- \n> > Ross J. Reedstrom, Ph.D., <[email protected]> \n> > NSBRI Research Scientist/Programmer\n> > Computer and Information Technology Institute\n> > Rice University, 6100 S. Main St., Houston, TX 77005\n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 19 Oct 2000 21:29:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Now 376175 lines of code"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Bruce Momjian writes:\n> \n> > FYI, it is 376k lines of C code, not bytes.\n> \n> How did you calculate that? I get this using c_count over all .c and .h\n> files:\n> \n> 20903 lines had comments 25.4 %\n> 6603 comments are inline -8.0 %\n> 11911 lines were blank 14.5 %\n> 7287 lines for preprocessor 8.9 %\n> 48716 lines containing code 59.3 %\n> 82214 total lines 100.0 %\n> \n> Surely we don't have 294000 lines of Java, C++, Shell, and Perl???\n\ndoing the following in version 6.5.3 in src/backend\n\n[hannu@hu backend]$ cat */*.[ch] */*/*.[ch] */*/*/*.[ch]| wc\n\ngives\n\n 208284 658632 5249304\n\nSo you (or c_count ;) must be missing some files\n\nin src/ ther result was\n[hannu@hu src]$ cat */*.[ch] */*/*.[ch] */*/*/*.[ch] */*/*/*/*.[ch]| wc\n 311469 1069935 8440682\n\n\n-------------\nHannu\n",
"msg_date": "Fri, 20 Oct 2000 19:20:59 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Now 376175 lines of code"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > FYI, it is 376k lines of C code, not bytes.\n> \n> How did you calculate that? I get this using c_count over all .c and .h\n> files:\n> \n> 20903 lines had comments 25.4 %\n> 6603 comments are inline -8.0 %\n> 11911 lines were blank 14.5 %\n> 7287 lines for preprocessor 8.9 %\n> 48716 lines containing code 59.3 %\n> 82214 total lines 100.0 %\n> \n> Surely we don't have 294000 lines of Java, C++, Shell, and Perl???\n\nI just counted lines, not line content. Not sure which is more\nmeaningful. Our comments are as important as the code, sometimes,\nthough they do not add functionality to the application. I am not\ninclined to inflate numbers, but I am not sure the 59% number is\naccurate either.\n\nOpinions?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 20 Oct 2000 13:11:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Now 376175 lines of code"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> FYI, it is 376k lines of C code, not bytes.\n\nHow did you calculate that? I get this using c_count over all .c and .h\nfiles:\n\n 20903 lines had comments 25.4 %\n 6603 comments are inline -8.0 %\n 11911 lines were blank 14.5 %\n 7287 lines for preprocessor 8.9 %\n 48716 lines containing code 59.3 %\n 82214 total lines 100.0 %\n\nSurely we don't have 294000 lines of Java, C++, Shell, and Perl???\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 20 Oct 2000 19:12:42 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Now 376175 lines of code"
},
{
"msg_contents": "\nOn Fri, 20 Oct 2000, Hannu Krosing wrote:\n\n> Peter Eisentraut wrote:\n> > \n> > Bruce Momjian writes:\n> > \n> > > FYI, it is 376k lines of C code, not bytes.\n> > \n> > How did you calculate that? I get this using c_count over all .c and .h\n> > files:\n> > \n> > 20903 lines had comments 25.4 %\n> > 6603 comments are inline -8.0 %\n> > 11911 lines were blank 14.5 %\n> > 7287 lines for preprocessor 8.9 %\n> > 48716 lines containing code 59.3 %\n> > 82214 total lines 100.0 %\n> > \n> > Surely we don't have 294000 lines of Java, C++, Shell, and Perl???\n> \n> doing the following in version 6.5.3 in src/backend\n> \n> [hannu@hu backend]$ cat */*.[ch] */*/*.[ch] */*/*/*.[ch]| wc\n> \n> gives\n> \n> 208284 658632 5249304\n> \n> So you (or c_count ;) must be missing some files\n> \n> in src/ ther result was\n> [hannu@hu src]$ cat */*.[ch] */*/*.[ch] */*/*/*.[ch] */*/*/*/*.[ch]| wc\n> 311469 1069935 8440682\n\n Just now downloaded from ftp.postgresql.org:\n\n$ tar -zxvf postgresql-6.5.3.tar.gz\n\n\t$ cd postgresql-6.5.3\n\t\t$ wc `find -name \"*.[ch]\"`\n\t\t 318131 1089740 8585092 total\n\t\t$ wc `find -name \"*\"`\n\t\t 756810 3037982 25583644 total\n\n\t$ cd src\n\t\t$ wc `find -name \"*.[ch]\"`\n\t\t 311469 1069935 8440682 total\n\t\t$ wc `find -name \"*\"`\t\t\n\t\t 519318 2024262 16656475 total\n\n$ tar -zxvf postgresql-7.0.2.tar.gz\n\n\t$ cd postgresql-7.0.2\n\t\t$ wc `find -name \"*.[ch]\"`\n\t\t 368502 1263333 9910813 total\n\t\t$ wc `find -name \"*\"`\n\t\t 756810 3037982 25583644 total\n\n\t$ cd src\n\t\t$ wc `find -name \"*.[ch]\"`\n\t\t 361297 1240788 9751161 total\n\t\t$ wc `find -name \"*\"`\t\t\n\t\t 596772 2360555 18574015 total\n\n\t\n\t\t\t\t\tKarel\n\t\t\t\t\t\n\n",
"msg_date": "Fri, 20 Oct 2000 19:48:41 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Now 376175 lines of code"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> I just counted lines, not line content. Not sure which is more\n> meaningful. Our comments are as important as the code, sometimes,\n> though they do not add functionality to the application. I am not\n> inclined to inflate numbers, but I am not sure the 59% number is\n> accurate either.\n> \n\nCounting the number of lines is only meaningful as a relative measurement\nof complexity and spent effort - IMHO. And I think lines of code\nmeasurements usually ignore blank lines and lines with\ncomments. However, Preprocessor directives is code - and sometimes it would\nbe fair to add some extra lines for the increased complexity caused by cool\nCPP macros ;-)\n\nRegards, \n\tGunnar\n\tGunnar\n",
"msg_date": "21 Oct 2000 11:03:07 +0100",
"msg_from": "Gunnar R|nning <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Now 376175 lines of code"
}
]
|
[
{
"msg_contents": "I've been unable to follow the directions\nin the Programmer's Guide\nfor getting to the anonymous CVS server.\n\nI'm running RedHat 6.1, and CVS 1.10 which\ncomes with it. I get as far as entering\nthe 'postgresql' password, but it gets\nrejected every time.\n\nAny hints?\n\n++ kevin\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:[email protected]\nPermanent e-mail forwarder: \nmailto:Kevin.O'[email protected]\nAt school: mailto:[email protected]\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead\n",
"msg_date": "Thu, 19 Oct 2000 19:50:11 -0700",
"msg_from": "\"Kevin O'Gorman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unable to access CVS server"
},
{
"msg_contents": "Sorry to bother you all. I got more recent docs and\nfound I needed a different command to do this, and\nit works.\n\n++ kevin\n\n\n\nKevin O'Gorman wrote:\n> \n> I've been unable to follow the directions\n> in the Programmer's Guide\n> for getting to the anonymous CVS server.\n> \n> I'm running RedHat 6.1, and CVS 1.10 which\n> comes with it. I get as far as entering\n> the 'postgresql' password, but it gets\n> rejected every time.\n> \n> Any hints?\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:[email protected]\nPermanent e-mail forwarder: \nmailto:Kevin.O'[email protected]\nAt school: mailto:[email protected]\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead\n",
"msg_date": "Thu, 19 Oct 2000 21:12:12 -0700",
"msg_from": "\"Kevin O'Gorman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Solved: Re: Unable to access CVS server"
},
{
"msg_contents": "Kevin O'Gorman wrote:\n> \n> I've been unable to follow the directions\n> in the Programmer's Guide\n> for getting to the anonymous CVS server.\n> \n> I'm running RedHat 6.1, and CVS 1.10 which\n> comes with it. I get as far as entering\n> the 'postgresql' password, but it gets\n> rejected every time.\n> \n> Any hints?\n\nI ran into the same problem a while ago.\n\nBoth username and password in docs are WRONG\n\nunfortunately I'm away from my regular computer \nand so I can't look up the right ones ..\n\n-------------\nHannu\n",
"msg_date": "Fri, 20 Oct 2000 18:58:26 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unable to access CVS server"
},
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n> Kevin O'Gorman wrote:\n>> I've been unable to follow the directions\n>> in the Programmer's Guide\n>> for getting to the anonymous CVS server.\n\n> I ran into the same problem a while ago.\n> Both username and password in docs are WRONG\n\nIIRC, the username/password are OK, but the repository path at hub.org\nchanged since 7.0 release.\n\nUp-to-date info can be found in the current-tree documentation,\nhttp://www.postgresql.org/devel-corner/docs/postgres/anoncvs.htm\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Oct 2000 21:08:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unable to access CVS server "
}
]
|
[
{
"msg_contents": "Pete Forman wrote:\n> The basic problem is that <netinet/tcp.h> is a BSD header. The\n> correct header for TCP internals such as TCP_NODELAY on a UNIX system\n> is <xti.h>. By UNIX I mean UNIX95 (aka XPG4v2 or SUSv1) or later.\n> The 2 files which conditionally include <netinet/tcp.h> need also to\n> conditionally include <xti.h>.\n\nThis patch is causing compilation warnings on HPUX 10.20:\n\ngcc -c -I../../../src/include -O1 -Wall -Wmissing-prototypes -Wmissing-declarations -g -o pqcomm.o pqcomm.c\nIn file included from pqcomm.c:76:\n/usr/include/netinet/tcp.h:71: warning: `TCP_NODELAY' redefined\n/usr/include/sys/xti.h:469: warning: this is the location of the previous definition\n/usr/include/netinet/tcp.h:72: warning: `TCP_MAXSEG' redefined\n/usr/include/sys/xti.h:470: warning: this is the location of the previous definition\n\nI have never heard of <xti.h> before and am rather dubious that it\nshould be considered more standard than <tcp.h>. However, if we are\ngoing to include it then it evidently must be *mutually exclusive*\nwith including <tcp.h>. The $64 question is, which one ought to be\nincluded when both are available? I'd tend to go for <tcp.h> on the\ngrounds of \"don't fix what wasn't broken\".\n\nActually, given your description of the problem, I'm half inclined to\nrevert the whole patch and instead make configure's test for\navailability of <netinet/tcp.h> first include <netinet/in.h>, so that\nthat configure test will succeed on IRIX etc. Do you know any platforms\nwhere <tcp.h> doesn't exist at all?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2000 23:09:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add support for <xti.h>"
},
{
"msg_contents": "> Pete Forman wrote:\n> > The basic problem is that <netinet/tcp.h> is a BSD header. The\n> > correct header for TCP internals such as TCP_NODELAY on a UNIX system\n> > is <xti.h>. By UNIX I mean UNIX95 (aka XPG4v2 or SUSv1) or later.\n> > The 2 files which conditionally include <netinet/tcp.h> need also to\n> > conditionally include <xti.h>.\n> \n> This patch is causing compilation warnings on HPUX 10.20:\n> \n> gcc -c -I../../../src/include -O1 -Wall -Wmissing-prototypes -Wmissing-declarations -g -o pqcomm.o pqcomm.c\n> In file included from pqcomm.c:76:\n> /usr/include/netinet/tcp.h:71: warning: `TCP_NODELAY' redefined\n> /usr/include/sys/xti.h:469: warning: this is the location of the previous definition\n> /usr/include/netinet/tcp.h:72: warning: `TCP_MAXSEG' redefined\n> /usr/include/sys/xti.h:470: warning: this is the location of the previous definition\n> \n> I have never heard of <xti.h> before and am rather dubious that it\n\nYes, I never heard of xti.h either.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 19 Oct 2000 23:33:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Add support for <xti.h>"
},
{
"msg_contents": "On my UnixWare Box, both xti.h and netinet/... are present.\n\n(Arguably the ONE TRUE UNIX, decendant from the ATT sources, and all\nthat rot, and current highest SysVrX release, at SysV R5). \n\nLER\n\n* Bruce Momjian <[email protected]> [001019 22:34]:\n> > Pete Forman wrote:\n> > > The basic problem is that <netinet/tcp.h> is a BSD header. The\n> > > correct header for TCP internals such as TCP_NODELAY on a UNIX system\n> > > is <xti.h>. By UNIX I mean UNIX95 (aka XPG4v2 or SUSv1) or later.\n> > > The 2 files which conditionally include <netinet/tcp.h> need also to\n> > > conditionally include <xti.h>.\n> > \n> > This patch is causing compilation warnings on HPUX 10.20:\n> > \n> > gcc -c -I../../../src/include -O1 -Wall -Wmissing-prototypes -Wmissing-declarations -g -o pqcomm.o pqcomm.c\n> > In file included from pqcomm.c:76:\n> > /usr/include/netinet/tcp.h:71: warning: `TCP_NODELAY' redefined\n> > /usr/include/sys/xti.h:469: warning: this is the location of the previous definition\n> > /usr/include/netinet/tcp.h:72: warning: `TCP_MAXSEG' redefined\n> > /usr/include/sys/xti.h:470: warning: this is the location of the previous definition\n> > \n> > I have never heard of <xti.h> before and am rather dubious that it\n> \n> Yes, I never heard of xti.h either.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749",
"msg_date": "Thu, 19 Oct 2000 23:00:55 -0500",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Add support for <xti.h>"
},
{
"msg_contents": "Tom Lane writes:\n > Pete Forman wrote:\n > > The basic problem is that <netinet/tcp.h> is a BSD header. The\n > > correct header for TCP internals such as TCP_NODELAY on a UNIX\n > > system is <xti.h>. By UNIX I mean UNIX95 (aka XPG4v2 or SUSv1)\n > > or later. The 2 files which conditionally include\n > > <netinet/tcp.h> need also to conditionally include <xti.h>.\n\nI've done bit more research. <xti.h> was the correct place to find\nTCP_NODELAY in UNIX98/SUSv2. However in the Austin Group draft of the\nnext version of POSIX and UNIX0x/SUSv3, XTI has been dropped and\n<netinet/tcp.h> officially included.\n\n > I have never heard of <xti.h> before and am rather dubious that it\n > should be considered more standard than <tcp.h>. However, if we\n > are going to include it then it evidently must be *mutually\n > exclusive* with including <tcp.h>. The $64 question is, which one\n > ought to be included when both are available? I'd tend to go for\n > <tcp.h> on the grounds of \"don't fix what wasn't broken\".\n > \n > Actually, given your description of the problem, I'm half inclined\n > to revert the whole patch and instead make configure's test for\n > availability of <netinet/tcp.h> first include <netinet/in.h>, so\n > that that configure test will succeed on IRIX etc. Do you know any\n > platforms where <tcp.h> doesn't exist at all?\n\nI agree with this. Back out the patch and update configure.in. I\nmight have done that myself but I do not have enough experience with\nautoconf.\n\nThe only platform I know of without <netinet/tcp.h> is Cygwin B20.1.\nThere is a workaround in place for that. The current Cygwin 1.1 does\nhave the header.\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWestern Geophysical -./\\.- by myself and does not represent\[email protected] -./\\.- the opinion of Baker Hughes or\nhttp://www.crosswinds.net/~petef -./\\.- its divisions.\n",
"msg_date": "Fri, 20 Oct 2000 14:04:16 +0100 (BST)",
"msg_from": "Pete Forman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for <xti.h>"
},
{
"msg_contents": "{retry of message sent Fri, 20 Oct 2000 14:04:16 +0100 (BST)]\n\nTom Lane writes:\n > Pete Forman wrote:\n > > The basic problem is that <netinet/tcp.h> is a BSD header. The\n > > correct header for TCP internals such as TCP_NODELAY on a UNIX\n > > system is <xti.h>. By UNIX I mean UNIX95 (aka XPG4v2 or SUSv1)\n > > or later. The 2 files which conditionally include\n > > <netinet/tcp.h> need also to conditionally include <xti.h>.\n\nI've done bit more research. <xti.h> was the correct place to find\nTCP_NODELAY in UNIX98/SUSv2. However in the Austin Group draft of the\nnext version of POSIX and UNIX0x/SUSv3, XTI has been dropped and\n<netinet/tcp.h> officially included.\n\n > I have never heard of <xti.h> before and am rather dubious that it\n > should be considered more standard than <tcp.h>. However, if we\n > are going to include it then it evidently must be *mutually\n > exclusive* with including <tcp.h>. The $64 question is, which one\n > ought to be included when both are available? I'd tend to go for\n > <tcp.h> on the grounds of \"don't fix what wasn't broken\".\n > \n > Actually, given your description of the problem, I'm half inclined\n > to revert the whole patch and instead make configure's test for\n > availability of <netinet/tcp.h> first include <netinet/in.h>, so\n > that that configure test will succeed on IRIX etc. Do you know any\n > platforms where <tcp.h> doesn't exist at all?\n\nI agree with this. Back out the patch and update configure.in. I\nmight have done that myself but I do not have enough experience with\nautoconf.\n\nThe only platform I know of without <netinet/tcp.h> is Cygwin B20.1.\nThere is a workaround in place for that. The current Cygwin 1.1 does\nhave the header.\n\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWestern Geophysical -./\\.- by myself and does not represent\[email protected] -./\\.- the opinion of Baker Hughes or\nhttp://www.crosswinds.net/~petef -./\\.- its divisions.\n",
"msg_date": "Mon, 23 Oct 2000 08:50:37 +0100 (BST)",
"msg_from": "Pete Forman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for <xti.h>"
},
{
"msg_contents": "Pete Forman <[email protected]> writes:\n> I've done bit more research. <xti.h> was the correct place to find\n> TCP_NODELAY in UNIX98/SUSv2. However in the Austin Group draft of the\n> next version of POSIX and UNIX0x/SUSv3, XTI has been dropped and\n> <netinet/tcp.h> officially included.\n\nOK, thanks for following up on that.\n\n>> to revert the whole patch and instead make configure's test for\n>> availability of <netinet/tcp.h> first include <netinet/in.h>, so\n>> that that configure test will succeed on IRIX etc.\n\n> I agree with this. Back out the patch and update configure.in.\n\nWill do.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 09:35:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add support for <xti.h> "
},
{
"msg_contents": "xti.h portion of patch has been backed out.\n\n\n> Pete Forman <[email protected]> writes:\n> > I've done bit more research. <xti.h> was the correct place to find\n> > TCP_NODELAY in UNIX98/SUSv2. However in the Austin Group draft of the\n> > next version of POSIX and UNIX0x/SUSv3, XTI has been dropped and\n> > <netinet/tcp.h> officially included.\n> \n> OK, thanks for following up on that.\n> \n> >> to revert the whole patch and instead make configure's test for\n> >> availability of <netinet/tcp.h> first include <netinet/in.h>, so\n> >> that that configure test will succeed on IRIX etc.\n> \n> > I agree with this. Back out the patch and update configure.in.\n> \n> Will do.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 23 Oct 2000 10:45:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for <xti.h>"
},
{
"msg_contents": ">> Actually, given your description of the problem, I'm half inclined\n>> to revert the whole patch and instead make configure's test for\n>> availability of <netinet/tcp.h> first include <netinet/in.h>, so\n>> that that configure test will succeed on IRIX etc.\n\nPete,\n After looking at this I'm confused again. The configure test\nconsists of seeing whether cpp will process\n\n\t#include <netinet/tcp.h>\n\nwithout complaint. I can well believe that the full C compilation\nprocess will generate errors if <netinet/tcp.h> is included without\nalso including <netinet/in.h>, but it's a little harder to believe\nthat cpp alone will complain. Could you double-check this?\n\nIt would be useful to look at the config.log file generated by the\nconfigure run that's reporting tcp.h isn't found. It should contain\nthe error messages generated by failed tests.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 12:19:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add support for <xti.h> "
},
{
"msg_contents": "Tom Lane writes:\n > >> Actually, given your description of the problem, I'm half\n > >> inclined to revert the whole patch and instead make configure's\n > >> test for availability of <netinet/tcp.h> first include\n > >> <netinet/in.h>, so that that configure test will succeed on IRIX\n > >> etc.\n > \n > Pete,\n > After looking at this I'm confused again. The configure test\n > consists of seeing whether cpp will process\n > \n > \t#include <netinet/tcp.h>\n > \n > without complaint. I can well believe that the full C compilation\n > process will generate errors if <netinet/tcp.h> is included without\n > also including <netinet/in.h>, but it's a little harder to believe\n > that cpp alone will complain. Could you double-check this?\n > \n > It would be useful to look at the config.log file generated by the\n > configure run that's reporting tcp.h isn't found. It should\n > contain the error messages generated by failed tests.\n\nOn IRIX 6.5.5m I get the following error. The header <standards.h> is\nincluded by (nearly!) all of the standard headers. It is the IRIX\nequivalent of config.h if you will.\n\nIn order to preprocess this test on IRIX a system header such as\n<stdio.h> must precede <netinet/tcp.h>. 
The logical choice of header\nto use is <netinet/in.h> as tcp.h is supplying values for levels\ndefined in in.h.\n\nThis is an IRIX bug but I think that we need to work around it.\n\n\nconfigure:4349: checking for netinet/tcp.h\nconfigure:4359: cc -E conftest.c >/dev/null 2>conftest.out\ncc-1035 cc: WARNING File = /usr/include/sys/endian.h, Line = 32\n #error directive: \"<standards.h> must be included before <sys/endian.h>.\"\n\n #error \"<standards.h> must be included before <sys/endian.h>.\"\n ^\nconfigure: failed program was:\n#line 4354 \"configure\"\n#include \"confdefs.h\"\n#include <netinet/tcp.h>\n\n\n\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWestern Geophysical -./\\.- by myself and does not represent\[email protected] -./\\.- the opinion of Baker Hughes or\nhttp://www.crosswinds.net/~petef -./\\.- its divisions.\n",
"msg_date": "Tue, 24 Oct 2000 10:40:34 +0100 (BST)",
"msg_from": "Pete Forman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Add support for <xti.h> "
},
{
"msg_contents": "Pete Forman <[email protected]> writes:\n> On IRIX 6.5.5m I get the following error. The header <standards.h> is\n> included by (nearly!) all of the standard headers. It is the IRIX\n> equivalent of config.h if you will.\n\n> configure:4349: checking for netinet/tcp.h\n> configure:4359: cc -E conftest.c >/dev/null 2>conftest.out\n> cc-1035 cc: WARNING File = /usr/include/sys/endian.h, Line = 32\n> #error directive: \"<standards.h> must be included before <sys/endian.h>.\"\n\n> #error \"<standards.h> must be included before <sys/endian.h>.\"\n> ^\n> configure: failed program was:\n> #line 4354 \"configure\"\n> #include \"confdefs.h\"\n> #include <netinet/tcp.h>\n\nHow bizarre. One would think it'd make more sense to just include the\ndesired file, instead of going belly-up like that.\n\n> In order to preprocess this test on IRIX a system header such as\n> <stdio.h> must precede <netinet/tcp.h>. The logical choice of header\n> to use is <netinet/in.h> as tcp.h is supplying values for levels\n> defined in in.h.\n\n> This is an IRIX bug but I think that we need to work around it.\n\nRoger, will do.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Oct 2000 10:37:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: Add support for <xti.h> "
},
{
"msg_contents": ">> This is an IRIX bug but I think that we need to work around it.\n\n> Roger, will do.\n\nI have changed configure in the CVS repository to test for netinet/tcp.h\nper your recommendation. At your convenience, please verify that it\nreally does do the right thing on IRIX.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Oct 2000 11:07:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: Add support for <xti.h> "
},
{
"msg_contents": "Tom Lane writes:\n\n> After looking at this I'm confused again. The configure test\n> consists of seeing whether cpp will process\n> \n> \t#include <netinet/tcp.h>\n> \n> without complaint. I can well believe that the full C compilation\n> process will generate errors if <netinet/tcp.h> is included without\n> also including <netinet/in.h>, but it's a little harder to believe\n> that cpp alone will complain. Could you double-check this?\n\nI'm not quite sure whether it explains it, but note that preprocessor\nchecks also \"fail\" when warnings are generated.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 24 Oct 2000 18:09:14 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Add support for <xti.h> "
},
{
"msg_contents": "Tom Lane writes:\n > >> This is an IRIX bug but I think that we need to work around it.\n > \n > > Roger, will do.\n > \n > I have changed configure in the CVS repository to test for\n > netinet/tcp.h per your recommendation. At your convenience, please\n > verify that it really does do the right thing on IRIX.\n\nYes, that works.\n\n\nThere is a separate problem running the configure script on AIX. It\nhangs while testing for flex. The two processes that I killed to\nallow configure to continue were\n\n /usr/ccs/bin/lex --version\n /usr/bin/lex --version\n\nThe problem is that lex is waiting for input from stdin. This patch\nshould fix it. I've only tested modification of the configure file\ndirectly.\n\n*** config/programs.m4.orig Mon Aug 28 12:53:13 2000\n--- config/programs.m4 Wed Oct 25 10:20:31 2000\n***************\n*** 22,28 ****\n for pgac_prog in flex lex; do\n pgac_candidate=\"$pgac_dir/$pgac_prog\"\n if test -f \"$pgac_candidate\" \\\n! && $pgac_candidate --version >/dev/null 2>&1\n then\n echo '%%' > conftest.l\n if $pgac_candidate -t conftest.l 2>/dev/null | grep FLEX_SCANNER >/dev/null 2>&1; then\n--- 22,28 ----\n for pgac_prog in flex lex; do\n pgac_candidate=\"$pgac_dir/$pgac_prog\"\n if test -f \"$pgac_candidate\" \\\n! && $pgac_candidate --version </dev/null >/dev/null 2>&1\n then\n echo '%%' > conftest.l\n if $pgac_candidate -t conftest.l 2>/dev/null | grep FLEX_SCANNER >/dev/null 2>&1; then\n\n\n\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWestern Geophysical -./\\.- by myself and does not represent\[email protected] -./\\.- the opinion of Baker Hughes or\nhttp://www.crosswinds.net/~petef -./\\.- its divisions.\n",
"msg_date": "Wed, 25 Oct 2000 10:37:31 +0100 (BST)",
"msg_from": "Pete Forman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Add support for <xti.h> "
},
{
"msg_contents": "Pete Forman writes:\n\n> There is a separate problem running the configure script on AIX. It\n> hangs while testing for flex. The two processes that I killed to\n> allow configure to continue were\n> \n> /usr/ccs/bin/lex --version\n> /usr/bin/lex --version\n> \n> The problem is that lex is waiting for input from stdin. This patch\n> should fix it. I've only tested modification of the configure file\n> directly.\n\nDone.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 26 Oct 2000 18:34:10 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Add support for <xti.h> "
}
]
|
[
{
"msg_contents": "Hello,\n\nI try to port my large objects patch to current CVS. All is fine, except it \ndoes not work... :-)))\n\nWhen I try to restore data from archive I get:\n\n---\nCreating table for BLOBS xrefs\n - Restoring BLOB oid 4440844\n - Restoring BLOB oid 4440846\n - Restoring BLOB oid 4440848\n - Restoring BLOB oid 4440850\n - Restoring BLOB oid 4440852\n - Restoring BLOB oid 4440854\n - Restoring BLOB oid 4440856\n - Restoring BLOB oid 4440858\n - Restoring BLOB oid 4440860\n - Restoring BLOB oid 4440862\n - Restoring BLOB oid 4440864\n - Restoring BLOB oid 4440866\n - Restoring BLOB oid 4440868\n - Restoring BLOB oid 4440870\n - Restoring BLOB oid 4440872\n - Restoring BLOB oid 4440874\n - Restoring BLOB oid 4440876\n - Restoring BLOB oid 4440878\nArchiver(db): can not commit database transaction. No result from backend.\n---\n\nWhat's this? Is this a temporary problem or what? If it is just temporaral, I \nwill wait until it becames stable. If it my bug, how can I catch it?\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n",
"msg_date": "Fri, 20 Oct 2000 10:26:15 +0700",
"msg_from": "Denis Perchine <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problems with the latest CVS."
},
{
"msg_contents": "At 10:26 20/10/00 +0700, Denis Perchine wrote:\n>\n>When I try to restore data from archive I get:\n>\n>---\n>Creating table for BLOBS xrefs\n> - Restoring BLOB oid 4440844\n...\n> - Restoring BLOB oid 4440878\n>Archiver(db): can not commit database transaction. No result from backend.\n>\n>What's this? Is this a temporary problem or what? If it is just\ntemporaral, I \n>will wait until it becames stable. If it my bug, how can I catch it?\n>\n\nI sincerely hope it's a temporary problem; I'm looking at it now.\n\nAny chance you could send me the header of the archive file you are using?\n(Don't bother sending it to list).\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 20 Oct 2000 15:07:08 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with the latest CVS."
}
]
|
[
{
"msg_contents": "\n> > Yes, writes are only necessary when \"too many dirty pages\"\n> > are in the buffer pool. Those writes can be done by a page flusher\n> > on demand or during checkpoint (don't know if we need checkpoint,\n> > but you referred to doing checkpoints).\n> \n> How else to know from where in log to start redo and how far go back\n> for undo ?\n\nI don't know, but if your checkpoint algorithm does not need to block \nother activity, that would be great. \nThe usual way would involve: \n\twriting all dirty pages to disk during checkpoint\n\tblock all modifying activity\n\nOne other thing I would like to ask, is O_SYNC not available on all platforms ?\nThen you could avoid the (or some) fsync calls in xlog.c ?\n\nAnd is there a possibility to add -F mode without fsyncs to xlog.c ?\n\nAndreas\n",
"msg_date": "Fri, 20 Oct 2000 10:52:22 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: Backup, restore & pg_dump"
}
]
|
[
{
"msg_contents": "In 7.0.2 \n\n select to_char(sum(n),'999') from t1;\n\ncauses backend dump a core if n is a float/numeric ...data type AND if\nsum(n) returns NULL. This seems due to a bad null pointer handling for\naruguments of pass-by-reference data types. I think just a simple\nnull pointer checking at very top of each function (for example\nfloat4_to_char()) would solve the problem. Comments?\n\ntest=# create table t1(f float);\nCREATE\ntest=# select to_char(sum(f),'999') from t1;\npqReadData() -- backend closed the channel unexpectedly.\n\tThis probably means the backend terminated abnormally\n\tbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n",
"msg_date": "Fri, 20 Oct 2000 23:22:19 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "to_char() dumps core"
},
{
"msg_contents": "\nOn Fri, 20 Oct 2000, Tatsuo Ishii wrote:\n\n> In 7.0.2 \n> \n> select to_char(sum(n),'999') from t1;\n> \n> causes backend dump a core if n is a float/numeric ...data type AND if\n> sum(n) returns NULL. This seems due to a bad null pointer handling for\n> aruguments of pass-by-reference data types. I think just a simple\n> null pointer checking at very top of each function (for example\n> float4_to_char()) would solve the problem. Comments?\n\n In the 7.1devel it's correct, but here it's bug, IMHO it bear on changes\nin the 7.1's fmgr, because code is same in both versions for this. On Monday, \nI try fix it for 7.0.3 \n\n\t\t\t\t\t\tKarel\n \n> test=# create table t1(f float);\n> CREATE\n> test=# select to_char(sum(f),'999') from t1;\n> pqReadData() -- backend closed the channel unexpectedly.\n> \tThis probably means the backend terminated abnormally\n> \tbefore or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> \n\n",
"msg_date": "Fri, 20 Oct 2000 18:35:24 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: to_char() dumps core"
},
{
"msg_contents": "On Fri, 20 Oct 2000, Karel Zak wrote:\n\n> \n> On Fri, 20 Oct 2000, Tatsuo Ishii wrote:\n> \n> > In 7.0.2 \n> > \n> > select to_char(sum(n),'999') from t1;\n> > \n> > causes backend dump a core if n is a float/numeric ...data type AND if\n> > sum(n) returns NULL. This seems due to a bad null pointer handling for\n> > aruguments of pass-by-reference data types. I think just a simple\n> > null pointer checking at very top of each function (for example\n> > float4_to_char()) would solve the problem. Comments?\n> \n> In the 7.1devel it's correct, but here it's bug, IMHO it bear on changes\n> in the 7.1's fmgr, because code is same in both versions for this. On Monday, \n> I try fix it for 7.0.3 \n\n Not, monday .. just now :-)\n\n The patch is attached... Bruce, it's again to 7.0.3!\n\n Thanks for bug report\n\n\t\t\tKarel \n\n\ntest=# create table t1 (f4 float4, f8 float8, n numeric, i4 int4, i8 int8);\nCREATE\ntest=# select to_char(sum(f4), '9'), to_char(sum(f8), '9'), to_char(sum(n),\n'9'), to_char(sum(i4), '9'), to_char(sum(i8), '9') from t1;\n to_char | to_char | to_char | to_char | to_char\n---------+---------+---------+---------+---------\n | | | |\n(1 row)",
"msg_date": "Fri, 20 Oct 2000 19:18:11 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] to_char() dumps core"
},
{
"msg_contents": "> > > causes backend dump a core if n is a float/numeric ...data type AND if\n> > > sum(n) returns NULL. This seems due to a bad null pointer handling for\n> > > aruguments of pass-by-reference data types. I think just a simple\n> > > null pointer checking at very top of each function (for example\n> > > float4_to_char()) would solve the problem. Comments?\n> > \n> > In the 7.1devel it's correct, but here it's bug, IMHO it bear on changes\n> > in the 7.1's fmgr, because code is same in both versions for this. On Monday, \n> > I try fix it for 7.0.3 \n> \n> Not, monday .. just now :-)\n> \n> The patch is attached... Bruce, it's again to 7.0.3!\n\nGot it. You don't have to hit me over head all the time (just most of\nthe time). :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 20 Oct 2000 13:33:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] to_char() dumps core"
},
{
"msg_contents": " */\n\nOn Fri, 20 Oct 2000, Bruce Momjian wrote:\n> > \n> > The patch is attached... Bruce, it's again to 7.0.3!\n> \n> Got it. You don't have to hit me over head all the time (just most of\n> the time). :-)\n\n Oh no, I want pull up your head from 7.1 cycle only :-)\n\n Thanks \n \tKarel\n\n",
"msg_date": "Fri, 20 Oct 2000 19:53:41 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] to_char() dumps core"
},
{
"msg_contents": "> > In the 7.1devel it's correct, but here it's bug, IMHO it bear on changes\n> > in the 7.1's fmgr, because code is same in both versions for this. On Monday, \n> > I try fix it for 7.0.3 \n> \n> Not, monday .. just now :-)\n> \n> The patch is attached... Bruce, it's again to 7.0.3!\n> \n> Thanks for bug report\n> \n> \t\t\tKarel \n\nThank for your qucik fix!\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 21 Oct 2000 22:11:51 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: [HACKERS] to_char() dumps core"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> In 7.0.2 \n> select to_char(sum(n),'999') from t1;\n\n> causes backend dump a core if n is a float/numeric ...data type AND if\n> sum(n) returns NULL. This seems due to a bad null pointer handling for\n> aruguments of pass-by-reference data types. I think just a simple\n> null pointer checking at very top of each function (for example\n> float4_to_char()) would solve the problem. Comments?\n\nJust a note to remind everyone, since I haven't yet updated the\ndocumentation for the new-fmgr changes: under the 7.1 fmgr it is *no\nlonger necessary* to check for NULL pointer in function execution\nroutines, assuming that your function is marked \"strict\" in pg_proc\n(as nearly all built-in functions are). The fmgr will not call such\na function in the first place, if any of its inputs are NULLs.\n\nSo, while adding the NULL-pointer checks is an OK patch for 7.0.*,\ndon't stick such checks into current sources.\n\n(Also, if you do want to check for a NULL input in current sources,\nlooking for a NULL pointer is the wrong way to code it anyway;\nPG_ARGISNULL(n) is the right way.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Oct 2000 21:04:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: to_char() dumps core "
},
{
"msg_contents": "Karel Zak <[email protected]> writes:\n>> In the 7.1devel it's correct, but here it's bug, IMHO it bear on changes\n>> in the 7.1's fmgr, because code is same in both versions for this. On Monday, \n>> I try fix it for 7.0.3 \n\nApplied to REL7_0_PATCHES branch (only). Thanks.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Oct 2000 15:21:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] to_char() dumps core "
}
]
|
[
{
"msg_contents": "I'm unable to find the complementary function to \n\nALTER TABLE t ADD FOREIGN KEY(id) REFERENCES pkt(pk);\n\nI would try DROP TRIGGER, but I've also been unable to \nfind a way to name the constraint ;(\n\n\nthe built-in help is both inadequate and inconsistent\n----8<---------8<---------8<-------------8<-----8<---------8<-----\namphora2=# \\h alter table\nCommand: ALTER TABLE\nDescription: Modifies table properties\nSyntax:\nALTER TABLE table [ * ]\n ADD [ COLUMN ] column type\nALTER TABLE table [ * ]\n ALTER [ COLUMN ] column { SET DEFAULT value | DROP DEFAULT }\nALTER TABLE table [ * ]\n RENAME [ COLUMN ] column TO newcolumn\nALTER TABLE table\n RENAME TO newtable\nALTER TABLE table\n ADD table constraint definition\n\namphora2=# \\h create table\nCommand: CREATE TABLE\nDescription: Creates a new table\nSyntax:\nCREATE [ TEMPORARY | TEMP ] TABLE table (\n column type\n [ NULL | NOT NULL ] [ UNIQUE ] [ DEFAULT value ]\n [column_constraint_clause | PRIMARY KEY } [ ... ] ]\n [, ... ]\n [, PRIMARY KEY ( column [, ...] ) ]\n [, CHECK ( condition ) ]\n [, table_constraint_clause ]\n ) [ INHERITS ( inherited_table [, ...] ) ]\n\namphora2=# \\h column_constraint_clause\nNo help available for 'column_constraint_clause'.\nTry \\h with no arguments to see available help.\namphora2=# \\h column constraint definition\nNo help available for 'column constraint definition'.\nTry \\h with no arguments to see available help.\n----8<---------8<---------8<-------------8<-----8<---------8<-----\n",
"msg_date": "Fri, 20 Oct 2000 17:42:36 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "is there a way to DROP foreign key constraint ?"
},
{
"msg_contents": "\nOn Fri, 20 Oct 2000, Hannu Krosing wrote:\n\n> I'm unable to find the complementary function to \n> \n> ALTER TABLE t ADD FOREIGN KEY(id) REFERENCES pkt(pk);\n\nCurrently, ALTER TABLE ... DROP table constraint definition \ndoesn't exist.\n\n> I would try DROP TRIGGER, but I've also been unable to \n> find a way to name the constraint ;(\n\nUmm, put the constraint name in the table constraint definition?\nFrom the SQL spec:\n <table constraint definition> ::=\n [ <constraint name definition> ]\n <table constraint> [ <constraint attributes> ]\n <constraint name definition> ::= CONSTRAINT <constraint name>\n\nCurrently, you need to drop the three triggers from pg_trigger\nthat are associated with the constraint. Easiest way to find\nthem if you don't have a constraint name is to look at the tg_args.\n\n",
"msg_date": "Fri, 20 Oct 2000 10:59:55 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: is there a way to DROP foreign key constraint ?"
},
{
"msg_contents": "Stephan Szabo wrote:\n> \n> On Fri, 20 Oct 2000, Hannu Krosing wrote:\n> \n> > I'm unable to find the complementary function to\n> >\n> > ALTER TABLE t ADD FOREIGN KEY(id) REFERENCES pkt(pk);\n> \n> Currently, ALTER TABLE ... DROP table constraint definition\n> doesn't exist.\n> \n> > I would try DROP TRIGGER, but I've also been unable to\n> > find a way to name the constraint ;(\n> \n> Umm, put the constraint name in the table constraint definition?\n> From the SQL spec:\n> <table constraint definition> ::=\n> [ <constraint name definition> ]\n> <table constraint> [ <constraint attributes> ]\n> <constraint name definition> ::= CONSTRAINT <constraint name>\n\nThanks! I was missing the CONSTRAINT word in <constraint name\ndefinition>\n\n------------\nHannu\n",
"msg_date": "Mon, 23 Oct 2000 01:01:17 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: is there a way to DROP foreign key constraint ?"
}
]
|